# Decomposition space theory
Boldizsár Kalmár
## 1. Introduction
In these notes we give a brief introduction to decomposition theory and we
summarize some classical and well-known results. The main question is the following: if a partition of a topological space (in other words a _decomposition_) is given, what is the topology of the quotient space? The main result is that an _upper semi-continuous_ decomposition yields a decomposition space homeomorphic to the original space if the decomposition is _shrinkable_ (i.e. there exist self-homeomorphisms of the space which shrink the decomposition elements into arbitrarily small sets in a controllable way). This is called the _Bing shrinkability criterion_ and it was introduced in [Bi52, Bi57]. It is applied in major
$4$-dimensional results: in the disk embedding theorem and in the proof of the
$4$-dimensional topological Poincaré conjecture [Fr82, FQ90, BKKPR]. It is
extensively applied in constructing approximations of manifold embeddings in
dimension $\geq 5$, see [AC79] and Edwards’s cell-like approximation theorem
[Ed78]. If a decomposition is shrinkable, then each decomposition element has to be _cell-like_ and _cellular_, and the quotient map is approximable by homeomorphisms. A cell-like map is a map whose point preimages are similar to points, while a cellular map is a map whose point preimages can be approximated by balls. There is an essential difference between the two notions: cellular sets are always cell-like, but for a cell-like set $C$ in a smooth manifold to be cellular its complement has to be simply connected in a nbhd of $C$. Finding conditions under which a decomposition is shrinkable is one of the main goals of the theory. For example, cell-like decompositions are shrinkable if the non-singleton decomposition elements have codimension $\geq 3$, that is, maps of disks can be made disjoint from them [Ed16]. In many constructions Cantor sets (sets of uncountably many points whose removal from the real line leaves a manifold) arise as limits of sequences of sets defining the decomposition. Interestingly, such a limit Cantor set can be non-standard: it can have properties very different from the usual middle-third Cantor set in $[0,1]$. An example of such a non-standard Cantor set is given by Antoine’s necklace, and many other
explicit constructions are studied in the subsequent sections. The present
notes will cover the following: upper semi-continuous decompositions, defining
sequences, cellular and cell-like sets, examples such as the Whitehead continuum, Antoine’s necklace and the Bing decomposition, the shrinkability criterion and near-homeomorphisms, approximation by homeomorphisms, and the shrinking of countable upper semi-continuous decompositions. We prove, for example, that every cell-like subset of a $2$-dimensional manifold is cellular, that Antoine’s necklace is a wild Cantor set, that in a complete metric space a usc decomposition is shrinkable if and only if the decomposition map is a near-homeomorphism, and that every manifold has a collared boundary.
## 2. Decompositions
A neighborhood (nbhd for short) of a subset $A$ of a topological space $X$ is
an open subset of $X$ which contains $A$.
###### Definition 2.1.
Let $X$ be a topological space. A set $\mathcal{D}\subset\mathcal{P}(X)$ is a
_decomposition_ of $X$ if the elements of $\mathcal{D}$ are pairwise disjoint
and $\bigcup\mathcal{D}=X$. An element of $\mathcal{D}$ which consists of one
single point is called a _singleton_. A non-singleton decomposition element is
called _non-degenerate_. The elements of $\mathcal{D}$ are the _decomposition
elements_. The set of non-degenerate elements is denoted by
$\mathcal{H}_{\mathcal{D}}$.
If $f\colon X\to Y$ is an arbitrary (not necessarily continuous) map between
the topological spaces $X$ and $Y$, then the set
$\\{f^{-1}(y):y\in Y\\}$
is a decomposition of $X$. A decomposition defines an equivalence relation on
$X$ as usual, i.e. $a,b\in X$ are equivalent iff $a$ and $b$ are in the same
element of $\mathcal{D}$.
###### Definition 2.2.
If $\mathcal{D}$ is a decomposition of $X$, then the _decomposition space_
$X_{\mathcal{D}}$ is the space $\mathcal{D}$ with the following topology: the
subset $U\subset\mathcal{D}$ is open exactly if $\pi^{-1}(U)$ is open. Here
$\pi\colon X\to\mathcal{D}$ is the _decomposition map_ which maps each $x\in
X$ into its equivalence class.
In other words $X_{\mathcal{D}}$ is the quotient space with the quotient
topology and
$\pi\colon X\to X_{\mathcal{D}}$
is just the quotient map. Recall that by well-known statements
$X_{\mathcal{D}}$ is compact, connected and path-connected if $X$ is compact,
connected and path-connected, respectively. Obviously $\pi$ is continuous.
###### Proposition 2.3.
The decomposition space is a $T_{1}$ space if the decomposition elements are
closed.
###### Proof.
We have to show that the points in the space $X_{\mathcal{D}}$ are closed. If
$U$ is the complement of a point in $X_{\mathcal{D}}$, then $\pi^{-1}(U)$ is the complement of a decomposition element, which is open since the element is closed, so $U$ is also open. ∎
We would like to construct and study decompositions which have especially nice properties concerning the behavior of sequences of decomposition elements.
###### Definition 2.4.
Let $f\colon X\to\mathbb{R}$ be a function. It is _upper semi-continuous_
(resp. _lower semi-continuous_) if for every $x\in X$ and $\varepsilon>0$
there is a nbhd $V_{x}$ such that $f(V_{x})\subset(-\infty,f(x)+\varepsilon)$
(resp. $f(V_{x})\subset(f(x)-\varepsilon,\infty))$.
For us, upper semi-continuous functions will be important. These are functions where, as $x_{n}\rightarrow x$, the values $f(x_{n})$ can exceed $f(x)$ only by some $\varepsilon_{n}\geq 0$ with $\varepsilon_{n}\rightarrow 0$. Let
$f\colon\mathbb{R}\to\mathbb{R}$ be an upper semi-continuous, positive
function and consider the following decomposition of $\mathbb{R}^{2}$. Take
the vertical segments of the form
(2.1) $A_{x}=\\{(x,y):y\in[0,f(x)]\\}$
for each $x\in\mathbb{R}$. Together with the points of $\mathbb{R}^{2}$ which are not in these segments (these points form the singletons) this gives a decomposition of $\mathbb{R}^{2}$. It has an interesting property: let $y\in[0,f(x)]$ for some $x\in\mathbb{R}$ and let $(x_{n})$ be a sequence in $\mathbb{R}$ (which is not necessarily convergent). If every nbhd of the point $(x,y)$ intersects all but finitely many segments $A_{x_{n}}$, then the points $(u,v)\in\mathbb{R}^{2}$ each of whose nbhds intersects all but finitely many $A_{x_{n}}$ are in $A_{x}$ as well, see Figure 1. The set of these points $(u,v)$ is called the _lower limit_ of the sequence $A_{x_{n}}$. In other words, if an $A_{x}$ intersects the lower limit of a sequence $A_{x_{n}}$, then the whole lower limit is a subset of $A_{x}$. More generally we have the following.
Figure 1. The graph of an upper semi-continuous function $f$ and some segments
$A_{x}$. If the segments $A_{x_{n}}$ “converge” to a segment $A_{x}$, then
$f(x_{n})$ converges to a number $\leq f(x)$.
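As a concrete illustration, take the upper semi-continuous function $f$ with $f(x)=1$ for $x\leq 0$ and $f(x)=1/2$ for $x>0$, and the segments $A_{x}$ of (2.1). For the sequence $x_{n}=1/n$ the lower limit of the segments $A_{x_{n}}$ is $\\{0\\}\times[0,1/2]$, which is contained in $A_{0}=\\{0\\}\times[0,1]$. If instead $f(x)$ were equal to $1/2$ for $x\leq 0$ and to $1$ for $x>0$ (a function which is not upper semi-continuous at $0$), the lower limit would be $\\{0\\}\times[0,1]$, which is not contained in $A_{0}=\\{0\\}\times[0,1/2]$.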
###### Definition 2.5.
Let $A_{n}$ be a sequence of subsets of the space $X$. The _lower limit_ of
$A_{n}$ is the set of the points $p\in X$ each of whose nbhds intersects all
but finitely many $A_{n}$. It is denoted by $\liminf A_{n}$. The _upper limit_
of $A_{n}$ is the set of the points $p\in X$ each of whose nbhds intersects
infinitely many $A_{n}$s. It is denoted by $\limsup A_{n}$.
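For example, if $X=\mathbb{R}$ and $A_{n}=\\{(-1)^{n}\\}$, then a small nbhd of $1$ intersects $A_{n}$ exactly for the even indices $n$, so $1$ (and similarly $-1$) belongs to $\limsup A_{n}$ but not to $\liminf A_{n}$; hence $\liminf A_{n}=\emptyset$ and $\limsup A_{n}=\\{-1,1\\}$. If instead $A_{n}=\\{(-1)^{n}/n\\}$, then both limits are equal to $\\{0\\}$.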
Note that $\liminf A_{n}\subset\limsup A_{n}$ is always true. In the previous
example the sets $A_{x_{n}}$ could approach the set $A_{x}$ only in a manner
determined by the function $f$. This leads to the following general
definition.
###### Definition 2.6.
Let $\mathcal{D}$ be a decomposition of a space $X$ such that all elements of
$\mathcal{D}$ are closed and compact and they can converge to each other only
in the following way: if $A\in\mathcal{D}$, then for every nbhd $U$ of $A$
there is a nbhd $V$ of $A$ with the property $V\subset U$ such that if some
element $B\in\mathcal{D}$ intersects $V$, then $B\subset U$, i.e. the set $B$
is completely inside the nbhd $U$. Then $\mathcal{D}$ is an _upper semi-
continuous decomposition_ (_usc_ decomposition for short). If all the
decomposition elements are closed but not necessarily compact, then we say it
is a _closed upper semi-continuous decomposition_.
For example, the decomposition defined in (2.1) is usc.
###### Lemma 2.7.
Let $\mathcal{D}$ be a decomposition of the space $X$ such that each
decomposition element is closed. The following are equivalent:
1. (1)
$\mathcal{D}$ is a closed usc decomposition,
2. (2)
for every $D\in\mathcal{D}$ and every nbhd $U$ of $D$ there is a saturated
nbhd $W\subset U$ of $D$, that is an open set $W$ which is a union of
decomposition elements,
3. (3)
for each open subset $U\subset X$, the set $\cup\\{D\in\mathcal{D}:D\subset
U\\}$ is open,
4. (4)
for each closed subset $F\subset X$, the set $\cup\\{D\in\mathcal{D}:D\cap
F\neq\emptyset\\}$ is closed,
5. (5)
the decomposition map $\pi\colon X\to X_{\mathcal{D}}$ is a closed map.
###### Proof.
Suppose $\mathcal{D}$ is usc and $U$ is a nbhd of $D$. Let $W$ be the union of all decomposition elements which are subsets of $U$. Then obviously $D\subset W$. The set $W$ is open: if $x\in W$, then $x\in D^{\prime}$ for some decomposition element $D^{\prime}\subset W$ with $D^{\prime}\subset U$, so by definition $D^{\prime}\subset V\subset U$ for some nbhd $V$ of $D^{\prime}$ such that every decomposition element intersecting $V$ is contained in $U$; hence these elements are in $W$ as well and so the nbhd $V$ of $x$ is in $W$. This shows that (1) implies (2).
Suppose (2) holds. If $U$ is an open set, then for each decomposition element
$D\subset U$ a saturated nbhd $W$ of $D$ is also in $U$ and also in
$\cup\\{D\in\mathcal{D}:D\subset U\\}$. This means that the set
$\cup\\{D\in\mathcal{D}:D\subset U\\}$ is a union of open sets, which proves
(3). We have that (3) and (4) are equivalent because we can take the
complement of a given closed set $F$ or an open set $U$. We have that (4) and
(5) are equivalent: from (4) we can show (5) by taking an arbitrary closed set
$F\subset X$, then $\cup\\{D\in\mathcal{D}:D\cap F\neq\emptyset\\}$ is closed,
its complement is a saturated open set whose $\pi$-image is open so $\pi(F)$
is closed. If we suppose (5), then for a closed set $F\subset X$ the set
$\pi^{-1}(\pi(F))=\cup\\{D\in\mathcal{D}:D\cap F\neq\emptyset\\}$
is closed so we get (4). Finally (3) implies (1): if $D\in\mathcal{D}$ and $U$
is a nbhd of $D$, then let $V$ be the open set
$\cup\\{D\in\mathcal{D}:D\subset U\\}$, this is a nbhd of $D$, it is in $U$
and if a $D^{\prime}\in\mathcal{D}$ intersects $V$, then it is in $V$ and
hence also in $U$. ∎
There is also the notion of _lower semi-continuous decomposition_ : a
decomposition $\mathcal{D}$ of a metric space is lower semi-continuous if for
every element $A\in\mathcal{D}$ and for every $\varepsilon>0$ there is a nbhd
$V$ of $A$ such that if some decomposition element $B$ intersects $V$, then
$A$ is in the $\varepsilon$-nbhd of $B$. A decomposition of a metric space is
_continuous_ if it is upper and lower semi-continuous, see Figure 2. We will
not study decompositions which are only lower semi-continuous.
Figure 2. A lower semi-continuous, an upper semi-continuous and a continuous
decomposition. In each of the cases the non-degenerate decomposition elements
are line segments, which converge to other line segments. The dots indicate
convergence. Only the non-singleton decomposition elements are sketched. The
lower semi-continuous decomposition consists of decomposing the area under the
graph of a lower semi-continuous function into vertical line segments, there
are no singletons among the decomposition elements and the decomposed space
itself is not closed. The upper semi-continuous and continuous decompositions
are decompositions of the rectangle. Only the upper semi-continuous
decomposition has singletons.
###### Theorem 2.8.
Let $X$ be a $T_{3}$ space and let $\mathcal{D}$ be a closed usc decomposition of $X$. If
$A_{n}\in\mathcal{D}$ is a sequence of decomposition elements and
$A\in\mathcal{D}$ are such that $A\cap\liminf A_{n}\neq\emptyset$, then
$\limsup A_{n}\subset A$.
###### Proof.
Suppose there is a point $x\in A$ such that $x\in\liminf A_{n}$ as well. By
contradiction suppose that $\limsup A_{n}\nsubseteq A$, this means that a
point $y\in\limsup A_{n}$ is such that $y\notin A$. Since $y\in D$ for a
decomposition element, we get $D\neq A$ so $D$ is disjoint from the
decomposition element $A$. The space $X$ is $T_{3}$, the sets $D$ and
$\\{x\\}$ are closed so there is a nbhd $U$ of $D$ and a nbhd $V$ of $x$ which
are disjoint from each other. We also have a nbhd $W\subset U$ of $D$ which is
a union of decomposition elements by Lemma 2.7. Since $x\in\liminf A_{n}$, we
have that for an integer $k$ the sets $A_{k},A_{k+1},\ldots$ intersect $V$.
The nbhd $W$ is saturated, which implies that no decomposition element intersects both $W$ and $V$. So $A_{k},A_{k+1},\ldots$ are disjoint from $W$. This contradicts the fact that $W$ is a nbhd of $y$ and so infinitely many $A_{n}$ have to intersect $W$ because $y\in\limsup A_{n}$. ∎
Another example of a usc decomposition is given by the equivalence relation on $S^{n}$ defined by $x\sim-x$. Here the decomposition elements are not connected and the decomposition space is the projective space ${\mathbb{R}}{P}^{n}$. A further example is the closed usc decomposition of
$\mathbb{R}^{2}$, where the two non-singleton decomposition elements are the
two arcs of the graph of the function $x\mapsto 1/x$, all the other
decomposition elements are singletons. The decomposition space is homeomorphic
to
$A\cup_{\varphi}B\cup_{\psi}A^{\prime},$
where $A$ and $A^{\prime}$ are open disks, each of them with one additional
point in its frontier denoted by $a$ and $a^{\prime}$ respectively. The space
$B$ is an open disk with two additional points $b,b^{\prime}$ in its frontier
and the gluing homeomorphisms are $\varphi\colon\\{a\\}\to\\{b\\}$ and
$\psi\colon\\{a^{\prime}\\}\to\\{b^{\prime}\\}$. If a decomposition is given,
then we would like to understand the decomposition space as well.
###### Proposition 2.9.
The decomposition space of a closed usc decomposition of a normal space is
$T_{4}$.
###### Proof.
We have to show that if $\mathcal{D}$ is a usc decomposition of a normal space
$X$, then any two disjoint closed sets in the space $X_{\mathcal{D}}$ can be
separated by open sets. Let $A,B$ be disjoint closed sets in
$X_{\mathcal{D}}$. Then $\pi^{-1}(A)$ and $\pi^{-1}(B)$ are disjoint closed
sets, and since $X$ is normal, by Lemma 2.7 they have disjoint saturated
nbhds $U_{1}$ and $U_{2}$. Taking $\pi(U_{1})$ and $\pi(U_{2})$ we get
disjoint nbhds of $A$ and $B$. The decomposition elements are closed so
$X_{\mathcal{D}}$ is $T_{1}$, which finally implies that $X_{\mathcal{D}}$ is
$T_{4}$. ∎
If a space $X$ is not normal, then it is easy to define a closed usc decomposition whose decomposition space is not even $T_{2}$. Take two disjoint closed sets $A,B$ in $X$ which cannot be separated by open sets. For example, the direct product of the Sorgenfrey line with itself is not normal; choosing the points of the antidiagonal with rational coordinates and those with irrational coordinates, we obtain two such closed sets $A$ and $B$. These two sets are the two non-singleton elements of the decomposition $\mathcal{D}$, the other elements are singletons. Then $\mathcal{D}$ is closed usc but $X_{\mathcal{D}}$ is not $T_{2}$ because $\pi(A)$ and $\pi(B)$ cannot be separated by open sets.
###### Definition 2.10.
Let $\mathcal{D}$ be a decomposition of the space $X$. A decomposition is
_finite_ if it has only finitely many non-degenerate elements and _countable_
if it has countably many non-degenerate elements. A decomposition is
_monotone_ if every decomposition element is connected. If $X$ is a metric
space, then a decomposition is _null_ if the decomposition elements are
bounded and for every $\varepsilon>0$ there is only a finite number of
elements whose diameter is greater than $\varepsilon$.
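For example, the decomposition of $\mathbb{R}^{2}$ whose non-degenerate elements are the segments $\\{n\\}\times[0,1]$ for $n\in\mathbb{N}$ is countable and monotone but not null, while the decomposition whose non-degenerate elements are the segments $\\{n\\}\times[0,1/n]$ is also null: for every $\varepsilon>0$ only the finitely many segments with $1/n>\varepsilon$ have diameter greater than $\varepsilon$.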
###### Proposition 2.11.
Let $\mathcal{D}$ be a decomposition and suppose that all elements are closed.
If $\mathcal{D}$ is finite, then it is a closed usc decomposition.
###### Proof.
Let $C\subset X$ be a closed subset, then $\pi^{-1}(\pi(C))$ is closed because
it is the finite union of the closed set $C$ and the non-degenerate elements
which intersect $C$. Then by Lemma 2.7 (4) the statement follows. ∎
###### Proposition 2.12.
If $\mathcal{D}$ is a closed and null decomposition of a metric space, then it
is usc.
###### Proof.
Denote the metric by $d$. All the decomposition elements are compact because
they are bounded. Let $U$ be a nbhd of a $D\in\mathcal{D}$, then there is an
$\varepsilon>0$ such that the $\varepsilon$-nbhd of $D$ is in $U$. Since
$\mathcal{D}$ is null, there are only finitely many decomposition elements
$D_{1},\ldots,D_{n}$ whose diameter is greater than $\varepsilon/4$ and
$D_{i}\neq D$. Let $\delta$ be the minimum of $\varepsilon/4$ and the
distances between $D$ and the $D_{i}$s. If $D^{\prime}\in\mathcal{D}$ is such
that the distance between $D^{\prime}$ and $D$ is less than $\delta$, then
$D^{\prime}$ is in the $\varepsilon$-nbhd of $D$: there are $x\in D$ and $y\in
D^{\prime}$ such that $d(x,y)<\delta$ so for every $a\in D^{\prime}$
$\inf\\{d(a,b):b\in D\\}\leq d(a,y)+d(y,x)+\inf\\{d(x,b):b\in D\\}=d(a,y)+d(y,x)\leq{\mathrm{diam}}\thinspace D^{\prime}+\delta\leq\varepsilon/2,$
which means that $D^{\prime}$ is in the $\varepsilon$-nbhd of $D$ so
$D^{\prime}\subset U$. ∎
###### Proposition 2.13.
Let $\mathcal{D}$ be a usc decomposition of a space $X$.
1. (1)
If $X$ is $T_{2}$, then $X_{\mathcal{D}}$ is $T_{2}$ as well.
2. (2)
If $X$ is regular, then $X_{\mathcal{D}}$ is $T_{3}$.
###### Proof.
The decomposition elements are compact so every $\pi^{-1}(a)$ and
$\pi^{-1}(b)$ for different $a,b\in X_{\mathcal{D}}$ can be separated by open
sets. The statement follows easily. ∎
###### Proposition 2.14.
Let $\mathcal{D}$ be a usc decomposition of a $T_{2}$ space $X$. The
decomposition $\mathcal{D}^{\prime}$ whose elements are the connected
components of the elements of $\mathcal{D}$ is a monotone usc decomposition.
###### Proof.
Take an element $D^{\prime}\in\mathcal{D}^{\prime}$ and denote by $D$ the
decomposition element in $\mathcal{D}$ which contains $D^{\prime}$. Suppose
$D\neq D^{\prime}$. Then $D-D^{\prime}$ is closed in $D$ so it is closed in
$X$. Let $U$ be a nbhd of $D^{\prime}$. Then there exists a nbhd
$U^{\prime}\subset U$ of $D^{\prime}$ which is disjoint from a nbhd
$U^{\prime\prime}$ of the closed set $D-D^{\prime}$. By the usc property we
can find a nbhd $V$ of $D$ such that $V\subset U^{\prime}\cup
U^{\prime\prime}$ and if a $C\in\mathcal{D}$ intersects $V$, then $C\subset
U^{\prime}\cup U^{\prime\prime}$. If $C^{\prime}\in\mathcal{D}^{\prime}$
intersects $V\cap U^{\prime}$, then the element $C\in\mathcal{D}$ which
contains $C^{\prime}$ as a connected component intersects $V$ hence $C\subset
U^{\prime}\cup U^{\prime\prime}$. Since $U^{\prime}$ and $U^{\prime\prime}$
are disjoint, the component $C^{\prime}$ of $C$ is in $U^{\prime}$ because it
intersects $U^{\prime}$. We obtain that $C^{\prime}\subset U$. ∎
For example, it follows that the decomposition of a compact $T_{2}$ space $X$
whose elements are the connected components of the space is a usc
decomposition. To see this, first take the decomposition $\mathcal{D}$,
where $\mathcal{H}_{\mathcal{D}}=\\{X\\}$ and hence the decomposition has no
singletons. This is usc so we can apply the previous proposition.
###### Proposition 2.15.
If $X$ is a metric space and $\mathcal{D}$ is its usc decomposition, then
$X_{\mathcal{D}}$ is metrizable. If $X$ is separable, then $X_{\mathcal{D}}$
is also separable.
###### Proof.
By [St56] if there is a continuous closed map $f$ of a metric space onto a
space $Y$ such that for every $y\in Y$ the closed set
$f^{-1}(y)-{\mathrm{int}}\thinspace f^{-1}(y)$ is compact, then $Y$ is
metrizable. But for every $y\in X_{\mathcal{D}}$ the set $\pi^{-1}(y)$ and so
its closed subset $\pi^{-1}(y)-{\mathrm{int}}\thinspace\pi^{-1}(y)$ are
compact hence $X_{\mathcal{D}}$ is metrizable. Moreover if $X$ is separable,
then there is a countable subset $S\subset X$ intersecting every open set,
which gives the countable set $\pi(S)$ intersecting every open set in
$X_{\mathcal{D}}$. ∎
## 3. Examples and properties of decompositions
Usually, we are interested in the topology of the decomposition space if a
decomposition of $X$ is given. Especially interesting are those situations where the decomposition space turns out to be homeomorphic to $X$.
Let $X=\mathbb{R}$ and let $\mathcal{D}$ be a decomposition such that
$\mathcal{H}_{\mathcal{D}}$ consists of countably many disjoint compact
intervals. Then this is a usc decomposition: any open interval
$U\subset\mathbb{R}$ contains at most countably many compact intervals of
$\mathcal{H}_{\mathcal{D}}$ and the infimum of the left endpoints of these
intervals could be in $U$ or it could be the left boundary point of $U$.
Similarly, we have this for the right endpoints. In all cases the union of the decomposition elements contained in $U$ is open. For an arbitrary open set $U\subset\mathbb{R}$ we argue in the same way on each of its components, so by Lemma 2.7 (3) we have a usc decomposition.
Later we will see that the decomposition space $X_{\mathcal{D}}$ is
homeomorphic to $\mathbb{R}$. Moreover the decomposition map $\pi\colon X\to
X_{\mathcal{D}}$ is approximable by homeomorphisms, which means there are
homeomorphisms from $\mathbb{R}$ to $\mathbb{R}$ arbitrarily close to $\pi$ in
the sense of the uniform metric. For example, let $X=\mathbb{R}$ and consider the
infinite Cantor set-like construction by taking iteratively the middle third
compact intervals in the interval $[0,1]$. These are countably many intervals
and define the decomposition ${\mathcal{D}}$ so that the non-degenerate
elements are these intervals. We can obtain this decomposition ${\mathcal{D}}$
by taking the connected components of $[0,1]-\mbox{Cantor set}$ and then
taking their closures. This is usc and we will see that the decomposition
space is $\mathbb{R}$.
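As an illustration, here is a minimal Python sketch (the language and the helper name `middle_third_intervals` are our own illustrative choices) which generates the first few of these non-degenerate elements, i.e. the closed middle-third intervals removed during the Cantor set construction.

```python
from fractions import Fraction

def middle_third_intervals(depth):
    """Closed middle-third intervals removed from [0,1] up to the given depth;
    these are finitely many of the non-degenerate decomposition elements."""
    removed = []                             # collected non-degenerate elements
    level = [(Fraction(0), Fraction(1))]     # closed intervals still to be subdivided
    for _ in range(depth):
        next_level = []
        for a, b in level:
            third = (b - a) / 3
            removed.append((a + third, b - third))  # the closed middle third
            next_level.append((a, a + third))       # left remaining third
            next_level.append((b - third, b))       # right remaining third
        level = next_level
    return removed

# depth 2 gives [1/3, 2/3], [1/9, 2/9], [7/9, 8/9]
print(middle_third_intervals(2))
```

Every interval produced at depth $n$ has length $3^{-n}$, so for each $\varepsilon>0$ only finitely many of them have diameter greater than $\varepsilon$; hence the decomposition is null and, by Proposition 2.12, usc.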
If $X=\mathbb{R}^{2}$, then an analogous decomposition is one where $\mathcal{H}_{\mathcal{D}}$ consists of countably many compact line segments. More generally, let $\mathcal{H}_{\mathcal{D}}$ consist of countably many _flat_ arcs, that is, subsets $A$ of $\mathbb{R}^{2}$ for which there exist self-homeomorphisms $h_{A}$ of $\mathbb{R}^{2}$ mapping $A$ onto the standard compact interval $\\{(x,0)\in\mathbb{R}^{2}:0\leq x\leq 1\\}$. Such a
decomposition is not necessarily usc, for example take the function
$f\colon[0,1)\to\mathbb{R}$, $f(x)=1+x$, and the sequence $x_{n}=1-1/n$.
Define the decomposition by
$\mathcal{H}_{\mathcal{D}}=\\{(x_{n},y):y\in[0,f(x_{n})],n\in\mathbb{N}\\}$
and the singletons are all the other points of $\mathbb{R}^{2}$. Then
$\mathcal{H}_{\mathcal{D}}$ consists of countably many straight line segments
but this decomposition is not usc: consider the point $(1,3/2)\in\mathbb{R}^{2}$
and its $\varepsilon$-nbhds for small $\varepsilon>0$. These intersect
infinitely many non-degenerate decomposition elements but none of the elements
is a subset of any of these $\varepsilon$-nbhds. The decomposition space is
not $T_{2}$: the points $\pi((1,y))$, where $0\leq y\leq 2$, cannot be
separated by disjoint nbhds because the sequence $\pi((x_{n},0))$ converges to
all of them.
However, if $\mathcal{D}$ is a decomposition of $\mathbb{R}^{2}$ such that $\mathcal{H}_{\mathcal{D}}$ consists of countably many flat arcs and we further suppose that $\mathcal{D}$ is usc, then the decomposition space $X_{\mathcal{D}}$ is homeomorphic to $\mathbb{R}^{2}$ and again $\pi$ can be approximated by homeomorphisms; we will see this later.
We get another interesting example by taking a smooth function with finitely
many critical values on a closed manifold $M$. Then the decomposition elements
are defined to be the connected components of the point preimages of the
function. This is a monotone decomposition ${\mathcal{D}}$ and it is usc
because the decomposition map $\pi\colon M\to M_{\mathcal{D}}$ is a closed
map: in $M$ a closed set is compact, its $\pi$-image is compact as well and
$M_{\mathcal{D}}$ is $T_{2}$ because it is a graph [Iz88, Re46, Sa20] so this
$\pi$-image is also closed.
If $X$ is $3$-dimensional, then the possibilities increase tremendously. This
is illustrated by the following surprising statement.
###### Proposition 3.1.
For every compact metric space $Y$ there exists a monotone usc decomposition
of the compact ball $D^{3}$ such that $Y$ can be embedded into the
decomposition space.
###### Proof.
Recall that by the Alexandroff-Hausdorff theorem the Cantor set in the $[0,1]$
interval can be mapped surjectively and continuously onto every compact metric
space. Let $T$ be a tetrahedron in $D^{3}$, denote two of its non-intersecting
edges by $e$ and $f$. Identify these edges linearly with $[0,1]$ and let
$C_{1}$ and $C_{2}$ be the Cantor sets in $e$ and $f$, respectively. For
$i=1,2$ let $\psi_{i}\colon C_{i}\to Y$ denote such a surjective map. For every $x\in Y$ take the union of all the line
segments in $T$ connecting all the points of $\psi_{1}^{-1}(x)$ to all the
points of $\psi_{2}^{-1}(x)$. Denote this subset of $T$ by $D_{x}$, see Figure
3. They are compact and connected for all $x\in Y$ and they are pairwise
disjoint because all the lines in $T$ connecting points of $e$ and $f$ are
pairwise disjoint. So we have a monotone usc decomposition with
$\mathcal{H}_{\mathcal{D}}=\\{D_{x}:x\in Y\\}$. Define the embedding $i$ of
$Y$ into $D^{3}_{\mathcal{D}}$ by $i(x)=\pi(\psi_{1}^{-1}(x))$. This map is
injective, closed because $\pi$ is closed and continuous because $\psi_{1}$ is
closed. ∎
Figure 3. The tetrahedron $T$, the edges $e$ and $f$ and a set $D_{x}$
pictured in blue.
To see further examples in $\mathbb{R}^{3}$ let us introduce some notions.
###### Definition 3.2 (Defining sequence).
Let $X$ be a connected $n$-dimensional manifold. A _defining sequence_ for a
decomposition of $X$ is a sequence
$C_{1},C_{2},\ldots,C_{n},\ldots$
of compact $n$-dimensional submanifolds-with-boundary in $X$ such that
$C_{n+1}\subset\mathrm{int}\thinspace C_{n}$. The decomposition elements of
the defined decomposition are the connected components of
$\cap_{n=1}^{\infty}C_{n}$ and the other points of $X$ are singletons.
Obviously a decomposition defined in this way is monotone. The set
$\cap_{n=1}^{\infty}C_{n}$ is closed and compact so its connected components
are closed and compact as well. Also the space $\cap_{n=1}^{\infty}C_{n}$ is $T_{2}$ hence its decomposition into its connected components is usc. Then adding all the points of $X-\cap_{n=1}^{\infty}C_{n}$ to this decomposition as singletons yields our decomposition. This is usc: the only thing which is not completely obvious is whether the conditions of being usc are satisfied in a nbhd of an added singleton. But $\cap_{n=1}^{\infty}C_{n}$ is closed, its complement is open, so every such singleton has a nbhd disjoint from $\cap_{n=1}^{\infty}C_{n}$.
###### Proposition 3.3.
If every $C_{n}$ in a defining sequence is connected, then
$\cap_{n=1}^{\infty}C_{n}$ is connected.
###### Proof.
Let $C$ denote the non-empty set $\cap_{n=1}^{\infty}C_{n}$. Suppose $C$ is
not connected, this means there are disjoint closed non-empty subsets
$A,B\subset C$ such that $A\cup B=C$. These $A$ and $B$ are closed in the
ambient manifold $X$ as well, so there exist disjoint nbhds $U$ of $A$ and $V$
of $B$ in $X$. It is enough to show that for some $n\in\mathbb{N}$ we have
$C_{n}\subset U\cup V$, because then $C_{n}\cap U\neq\emptyset$, $C_{n}\cap
V\neq\emptyset$ imply that $C_{n}$ is not connected, which is a contradiction.
If we suppose that for every $n\in\mathbb{N}$ we have $C_{n}\cap(X-(U\cup
V))\neq\emptyset$, then for every $n$ we have
$C_{n}\cap(X-U)\cap(X-V)\neq\emptyset$, i.e. the closed set $F=(X-U)\cap(X-V)$
and each element of the nested sequence $C_{1},C_{2},\ldots$ satisfy
$C_{n}\cap F\neq\emptyset.$
Of course
$C_{n+1}\cap F\subset C_{n}\cap F$
which implies that
$F\cap C=F\cap(\cap_{n=1}^{\infty}C_{n})=\cap_{n=1}^{\infty}(C_{n}\cap
F)\neq\emptyset$
because the sets $C_{n}\cap F$ form a nested sequence of non-empty closed subsets of the compact space $C_{1}$. But $F\cap C\neq\emptyset$ contradicts $C\subset U\cup V$. ∎
The $\pi$-image of the union of non-degenerate elements of a decomposition
associated to a defining sequence is closed and also totally disconnected
because if $\cap_{n=1}^{\infty}C_{n}$ is not connected, then any two of the non-degenerate decomposition elements have disjoint saturated nbhds, which yield disjoint nbhds of their $\pi$-images.
### 3.1. The Whitehead continuum
One of the most famous such decompositions is related to the so-called Whitehead continuum. Its defining sequence consists of solid tori embedded
into each other in such a way that $C_{i+1}$ is a thickened Whitehead double
of the center circle of $C_{i}$, see Figure 4. The intersection
$\cap_{i=1}^{\infty}C_{i}$ is a compact subset of $\mathbb{R}^{3}$, this is
the Whitehead continuum, which we denote by $\mathcal{W}$. The decomposition
consists of the connected components of $\mathcal{W}$ and the singletons in
their complement. If the diameters $d_{i}$ of the meridians of the tori $C_{i}$ converge to $0$ as $i$ goes to $\infty$, then $\mathcal{W}$
intersects the vertical sheet $S$ in Figure 4 in a Cantor set: $C_{i}\cap S$
is equal to $2^{i-1}$ copies of disks of diameter $d_{i}$ nested into each
other. The intersection $S\cap(\cap_{i=1}^{\infty}C_{i})$ is then a Cantor
set. The Whitehead continuum $\mathcal{W}$ is connected because the $C_{i}$
tori are connected but it is not path-connected. We will see later that the
decomposition space $\mathbb{R}^{3}_{\mathcal{W}}$ is not homeomorphic to
$\mathbb{R}^{3}$ but taking its direct product with $\mathbb{R}$ we get
$\mathbb{R}^{4}$. An important property is that $S^{3}-\mathcal{W}$ is a contractible open $3$-manifold which is not homeomorphic to $\mathbb{R}^{3}$.
For understanding further properties of this decomposition, we are going to
define some notions.
Figure 4. A sketch of the defining sequence of the Whitehead decomposition.
The first figure shows the solid torus $C_{1}$ and the Whitehead double of its
center circle. The second figure shows the Whitehead double of the center
circle of $C_{2}$. The torus $C_{2}$ is not shown but we get it by thickening
the Whitehead double in $C_{1}$. Then thicken the knot in the second figure
(so we get the solid torus $C_{3}$) and take its center circle. Take the
Whitehead double of this circle and so we get the knot embedded in $C_{3}$ in
the third figure. In the third figure we can see the intersection of $C_{3}$
with a vertical sheet $S$, which is four small disks. This vertical sheet $S$
intersects the Whitehead continuum in a Cantor set.
###### Definition 3.4 (Cellular set, cell-like set).
Let $X$ be an $n$-dimensional manifold and $C\subset X$ be a subset of $X$.
The set $C$ is _cellular_ if there is a sequence
$B_{1},B_{2},\ldots,B_{n},\ldots$ of closed $n$-dimensional balls in $X$ such
that $B_{n+1}\subset\mathrm{int}\thinspace B_{n}$ and
$C=\cap_{n=1}^{\infty}B_{n}$. A compact subset $C$ of a topological space $X$
is _cell-like_ if for every nbhd $U$ of $C$ there is a nbhd $V$ of $C$ in $U$
such that the inclusion map $V\to U$ is homotopic in $U$ to a constant map.
Similarly, a decomposition is called cellular or cell-like if each of its
decomposition elements is cellular or cell-like, respectively.
For example the “topologist’s sine curve” in $\mathbb{R}^{2}$ is cellular. A
cellular set is compact and also connected but not necessarily path-connected.
It is also easy to see that every compact contractible subset of a manifold is
cell-like. Also a compact and contractible metric space is cell-like in
itself. A cell-like set $C$ is connected because if there were two open
subsets $U_{1}$ and $U_{2}$ in $X$ separating some connected components of
$C$, then it would not be possible to contract any nbhd $V\subset U_{1}\cup
U_{2}$ of $C$ to one single point.
###### Proposition 3.5.
The set $\mathcal{W}$ is cell-like but not cellular.
###### Proof.
Let $U$ be a nbhd of $\mathcal{W}$. Then there is an $n$ such that
$C_{i}\subset U$ for all $i\geq n$. Let $V$ be a small tubular nbhd of $C_{n+1}$ which lies inside $C_{n}$. Then since the Whitehead double of the center circle of $C_{n}$ is null-homotopic in the solid torus $C_{n}$, the thickened Whitehead double $C_{n+1}$ and its nbhd $V$ can also be contracted to a point in $C_{n}$, hence the inclusion $V\to U$ is homotopic in $U$ to a constant map.
###### Lemma 3.6.
The $3$-manifold $S^{3}-\mathcal{W}$ is not simply connected at infinity.
###### Proof.
We have to show that there is a compact subset $C\subset S^{3}-\mathcal{W}$
such that for every compact set $D\subset S^{3}-\mathcal{W}$ containing $C$
the induced homomorphism
$\varphi\colon\pi_{1}(S^{3}-\mathcal{W}-D)\to\pi_{1}(S^{3}-\mathcal{W}-C)$
is not the zero homomorphism. Let $C$ be the closure of $S^{3}-C_{1}$. If $D$
is a compact set in $S^{3}-\mathcal{W}$ containing $C$, then $S^{3}-D$ is a
nbhd of $\mathcal{W}$ in $C_{1}$. Then there is an $n$ such that $C_{i}\subset
S^{3}-D$ for all $i\geq n$. Consider the commutative diagram
$\begin{CD}\pi_{1}(S^{3}-C_{n}-D)@>{}>{}>\pi_{1}(S^{3}-C_{n}-C)\\\
@V{}V{}V@V{}V{\alpha}V\\\
\pi_{1}(S^{3}-\mathcal{W}-D)@>{\varphi}>{}>\pi_{1}(S^{3}-\mathcal{W}-C).\end{CD}$
By [NW37] the generator of the group $\pi_{1}(S^{3}-C_{n}-C)$ represented by
the meridian of the torus $C_{n}$ is mapped by $\alpha$ into a generator of
$\pi_{1}(S^{3}-\mathcal{W}-C)$. Since this meridian also represents an element
of $\pi_{1}(S^{3}-C_{n}-D)$, we get that $\varphi$ is not the zero
homomorphism. ∎
Let us continue the proof of Proposition 3.5. If $\mathcal{W}$ were cellular, then there would be closed $3$-dimensional balls $B_{1},B_{2},\ldots,B_{n},\ldots$ in $S^{3}$ such that $B_{n+1}\subset\mathrm{int}\thinspace B_{n}$ and $\mathcal{W}=\cap_{n=1}^{\infty}B_{n}$. This would imply that $S^{3}-\mathcal{W}$ is simply connected at infinity: if $C\subset S^{3}-\mathcal{W}$ is a compact set, then take a $B_{n}\subset S^{3}-C$ and a loop in $\mathrm{int}B_{n}-\mathcal{W}$; there is a $B_{m}\subset\mathrm{int}B_{n}$ disjoint from this loop and the loop is null-homotopic in $\mathrm{int}B_{n}-B_{m}$ because $\pi_{1}(\mathrm{int}B_{n}-B_{m})=0$. This contradicts Lemma 3.6, hence $\mathcal{W}$ is not cellular. ∎
With more effort we could show that $S^{3}-\mathcal{W}$ is contractible so it
is homotopy equivalent to $\mathbb{R}^{3}$ but by the previous statement it is
not homeomorphic to $\mathbb{R}^{3}$. It is known that the set
$\mathcal{W}\times\\{0\\}$ is cellular in $\mathbb{R}^{3}\times\mathbb{R}$ and
the decomposition space of the decomposition of
$\mathbb{R}^{3}\times\mathbb{R}$ whose only non-degenerate element is
$\mathcal{W}\times\\{0\\}$ is homeomorphic to $\mathbb{R}^{4}$. This fact is
the starting point of the proof of the $4$-dimensional Poincaré conjecture.
Being cell-like often does not depend on the ambient space. To understand
this, we have to introduce a new notion.
###### Definition 3.7 (Absolute nbhd retract).
A metric space $Y$ is an _absolute nbhd retract_ (or _ANR_ for short) if for
an arbitrary metric space $X$ and its closed subset $A$ every map $f$ from $A$
to $Y$ extends to a nbhd of $A$. In other words, the nbhd $U$ and the dashed
arrow exist in the following diagram and make the diagram commutative.
(Diagram: the inclusions $A\subseteq U\subseteq X$, the map $f\colon A\to Y$, and the dashed extension $U\to Y$.)
This is equivalent to saying that for every metric space $Z$ and embedding
$i\colon Y\to Z$ such that $i(Y)$ is closed there is a nbhd $U$ of $i(Y)$ in
$Z$ which retracts onto $i(Y)$, that is $r|_{i(Y)}=\mathrm{id}_{i(Y)}$ for
some map $r\colon U\to i(Y)$. It is a fact that every manifold is an ANR.
The property of cell-likeness is independent of the ambient space as long as that space is an ANR, as the following statement shows.
###### Proposition 3.8.
If $C\subset X$ is a compact cell-like set in a metric space $X$, then the
embedded image of $C$ in an arbitrary ANR is also cell-like.
###### Proof.
Suppose $e\colon C\to Y$ is an embedding into an ANR $Y$. We have to show that
$e(C)$ is cell-like. Let $U$ be a nbhd of $e(C)$. Since $Y$ is an ANR, there is a
nbhd $\tilde{V}$ of $C$ in $X$ such that $e$ extends to an
$\tilde{e}\colon\tilde{V}\to Y$. Let $V\subset X$ be the open set
$\tilde{V}\cap\tilde{e}^{-1}(U)$, it is a nbhd of $C$. There is a nbhd $W$ of
$C$ such that $C\subset W\subset V$ and there is a homotopy of the inclusion
$W\subset V$ to the constant in $V$ since $C$ is cell-like, denote this
homotopy by $\varphi\colon W\times[0,1]\to V$. Then $\varphi|_{C\times[0,1]}$
is a homotopy of the inclusion $C\subset V$ to the constant. Take
$\tilde{e}\circ\varphi|_{C\times[0,1]}\circ(e^{-1}|_{e(C)}\times\mathrm{id}_{[0,1]}),$
this is a homotopy of the inclusion $e(C)\subset U$ to the constant in $U$.
The space $e(C)\times[0,1]$ is compact in $Y\times[0,1]$ and the homotopy maps
it into $Y$, which is an ANR. This implies that there is a nbhd $\tilde{U}\subset
U$ of $e(C)$ such that the inclusion $\tilde{U}\subset U$ is homotopic to
constant in $U$. ∎
For example, this shows that a compact and contractible metric space is cell-
like if we embed it into any ANR. In practice, we do not consider cell-like
sets as subsets in some ambient space but rather as compact metric spaces
which are cell-like if we embed them into an arbitrary ANR.
It is clear that every cellular set $C$ is cell-like because every nbhd $U$ of $C$ contains a ball $B_{n}$ of the defining sequence, and the inclusion of the nbhd $\mathrm{int}\thinspace B_{n}$ of $C$ into $U$ is null-homotopic. Also, we have seen that the Whitehead
continuum is cell-like but not cellular. In order to compare cell-like and
cellular sets we introduce the notion of cellularity criterion.
###### Definition 3.9 (Cellularity criterion).
A subset $Y\subset X$ satisfies the _cellularity criterion_ if for every nbhd
$U$ of $Y$ there is a nbhd $V$ of $Y$ such that $V\subset U$ and every loop in
$V-Y$ is null-homotopic in $U-Y$.
The cellularity criterion and being cellular measure how wildly a subset is
embedded into a space. The next theorem compares cell-like and cellular sets
in a PL manifold. We omit its difficult proof here.
###### Theorem 3.10.
Let $C$ be a cell-like subset of a PL $n$-dimensional manifold, where $n\geq
4$. Then $C$ is cellular if and only if $C$ satisfies the cellularity
criterion.
In dimension $2$ we have a simpler statement:
###### Theorem 3.11.
Every cell-like subset in a $2$-dimensional manifold $X$ is cellular.
###### Proof.
First suppose $X=\mathbb{R}^{2}$ and $C\subset\mathbb{R}^{2}$ is a cell-like set. Let $U$ be a bounded nbhd of $C$ and let $V\subset U$ be a nbhd of $C$
such that the inclusion $V\to U$ is homotopic to constant. Choose another nbhd
$W\subset V$ of $C$ as well such that $\mathrm{cl}\thinspace W\subset V$. Take
a compact smooth $2$-dimensional manifold $H\subset V$ such that
$C\subset\mathrm{int}H$, $\partial H\subset V-\mathrm{cl}\thinspace W$ and
$\mathrm{int}H$ is connected. Such an $H$ can be obtained by taking a Morse
function $f\colon V\to[0,1]$ which is equal to $0$ on the nbhd $W$ of $C$ and equal to $1$ on the points of $V$ close to $\mathbb{R}^{2}-V$. Then the preimage of a regular
value $r$ close to $1/2$ is a smooth $1$-dimensional submanifold of
$\mathbb{R}^{2}$ and the preimage of $(-\infty,r]$ is a compact subset
containing $W$ and $C$, denote this $f^{-1}((-\infty,r])$ by $H$. Then $H$ is
a compact smooth $2$-dimensional submanifold of $\mathbb{R}^{2}$, see Figure
5. Take its connected component (this is also a path-connected component
because $H$ is a manifold) which contains $C$ and denote this by $H$ as well.
Figure 5. The compact manifold $H=H_{1}\cup H_{2}$. Its component $H_{2}$
contains $C$. Since $H_{2}-C$ is path-connected, there is a path (dashed in
the figure) in $H_{2}$ connecting two different components of the boundary of
$H_{2}$.
We show that $H-C$ is connected. For this consider the commutative diagram
$\begin{CD}H_{1}(H;\mathbb{Z}_{2})@>{}>{}>H_{1}(H,H-C;\mathbb{Z}_{2})@>{}>{}>H_{0}(H-C;\mathbb{Z}_{2})@>{}>{}>H_{0}(H)@>{}>{}>0\\\
@V{}V{}V@V{}V{}V@V{}V{}V@V{}V{}V\\\
H_{1}(\mathbb{R}^{2};\mathbb{Z}_{2})@>{}>{}>H_{1}(\mathbb{R}^{2},\mathbb{R}^{2}-C;\mathbb{Z}_{2})@>{}>{}>H_{0}(\mathbb{R}^{2}-C;\mathbb{Z}_{2})@>{i_{*}}>{}>H_{0}(\mathbb{R}^{2})@>{}>{}>0\end{CD}$
coming from the long exact sequences and the inclusion
$(H,H-C)\subset(\mathbb{R}^{2},\mathbb{R}^{2}-C)$. This is just the diagram
$\begin{CD}H_{1}(H;\mathbb{Z}_{2})@>{}>{}>H_{1}(H,H-C;\mathbb{Z}_{2})@>{}>{}>H_{0}(H-C;\mathbb{Z}_{2})@>{}>{}>\mathbb{Z}_{2}@>{}>{}>0\\\
@V{}V{}V@V{}V{\cong}V@V{}V{}V@V{}V{\cong}V\\\
0@>{}>{}>H_{1}(\mathbb{R}^{2},\mathbb{R}^{2}-C;\mathbb{Z}_{2})@>{}>{}>H_{0}(\mathbb{R}^{2}-C;\mathbb{Z}_{2})@>{i_{*}}>{}>\mathbb{Z}_{2}@>{}>{}>0\end{CD}$
If the group $H_{0}(\mathbb{R}^{2}-C;\mathbb{Z}_{2})$ is $\mathbb{Z}_{2}$,
i.e. the manifold $\mathbb{R}^{2}-C$ is connected, then exactness implies that
$H_{0}(H-C;\mathbb{Z}_{2})\cong\mathbb{Z}_{2}$ so $H-C$ is connected. To show
that $\mathbb{R}^{2}-C$ is connected, we apply [HW41, Theorem VI.5, page 86],
which implies that if $C$ is a closed subset of a space $D$ and $f,g$ are
homotopic maps of $C$ into $S^{1}$ such that $f$ extends to $D$, then $g$
extends to $D$ and the extensions are homotopic. Suppose the open set
$\mathbb{R}^{2}-C$ is not connected, then it is the disjoint union of two open
sets $A$ and $B$. At least one of these is bounded because for large enough
$s$ the set $\mathbb{R}^{2}-[-s,s]^{2}$ is disjoint from $C$ and it is
connected hence it is in $A$ or $B$ but then $[-s,s]^{2}$ contains $B$ or $A$,
respectively. Suppose $A$ is bounded, $p\in A$ and $q\in B$. For a subset
$S\subset\mathbb{R}^{2}$ and point $x\in\mathbb{R}^{2}$ denote by
$\pi_{S,x}\colon S-\\{x\\}\to S^{1}$ the radial projection of $S-\\{x\\}$ to
the circle $S^{1}$ of radius $1$ centered at $x$. Then $\pi_{C,q}$ extends to
$\mathbb{R}^{2}-\\{q\\}$ so also to $A\cup C$ but $\pi_{C,p}$ does not extend
to $A\cup C$ because such an extension would extend to a much larger disk $P$
centered at $p$ as well by radial projection and then a retraction of $P$ onto
its boundary (if we identify it with the target circle of $\pi_{C,p}$) would
exist. Consequently $\pi_{C,q}$ and $\pi_{C,p}$ are not homotopic and so at
least one of them is not homotopic to constant. This means if
$\mathbb{R}^{2}-C$ is not connected, then there is a map $C\to S^{1}$ which is
not homotopic to constant. But since the inclusion $V\subset U$ and then also
$C\subset U$ are homotopic to constant, we get that $\mathbb{R}^{2}-C$ is
connected.
Finally, we get that $H-C$ is a path-connected smooth $2$-dimensional manifold
with boundary. Hence if the number of components of $\partial H$ is larger
than one, then there exists a smooth curve transversal to $\partial H$,
disjoint from $C$ and connecting different components of $\partial H$. We can
cut $H$ along this curve and by repeating this process we end up with
$\partial H$ being a single circle. By the Jordan curve theorem $H$ is a
compact $2$-dimensional disk. In this way we get
$C\subset W\subset\mathrm{int}H\subset H\subset V\subset U.$
Since in $\mathbb{R}^{2}$ every compact set is a countable intersection of open sets which form a decreasing sequence, we have $C=\cap_{n=1}^{\infty}U_{n}$, where the sets $U_{n}$ are open and $U_{1}\supset U_{2}\supset\cdots$. We can also assume that for each $n$ we have $\mathrm{cl}\thinspace U_{n+1}\subset U_{n}$.
We obtain countably many compact $2$-dimensional disks $H_{1},H_{2},\ldots$ by
the previous construction, which satisfy
$C\subset U_{n+1}\subset\mathrm{int}H_{n}\subset H_{n}\subset V\subset U_{n}.$
Hence $C=\cap_{n=1}^{\infty}H_{n}$ so $C$ is cellular.
In the case when $X$ is an arbitrary $2$-dimensional manifold, since $C$ is cell-like, there exists a nbhd of $C$ which can be contracted to a point in a larger nbhd, so $C$ is contained in a simply connected $2$-dimensional manifold nbhd, which is homeomorphic to $\mathbb{R}^{2}$. Hence a similar argument gives that $C$ is cellular. ∎
###### Proposition 3.12.
If $C$ is cell-like in a smooth $n$-dimensional manifold $X$, where $n\geq 3$,
then $C\times\\{0\\}$ is cellular in $X\times\mathbb{R}^{3}$.
###### Proof.
It is enough to show that $C\times\\{0\\}$ satisfies the cellularity
criterion. It is easy to see that $C\times\\{0\\}$ is cell-like in
$X\times\mathbb{R}^{3}$. Let $U$ be a nbhd of $C\times\\{0\\}$ in
$X\times\mathbb{R}^{3}$. It is obvious that there is a nbhd $V\subset U$ of
$C\times\\{0\\}$ such that every loop $\gamma\colon[0,1]\to V$ is null-
homotopic in $U$. Let $\gamma$ be an arbitrary loop in $V-C\times\\{0\\}$, it
is homotopic to a smooth loop $\tilde{\gamma}$ in $V-C\times\\{0\\}$ by a
homotopy $H$. A homotopy of $\tilde{\gamma}$ to constant can be approximated
by a smooth map $\tilde{H}\colon D^{2}\to U$, where $\tilde{H}|_{\partial
D^{2}}=\tilde{\gamma}$. In the subspace $X\times\\{0\\}$ of $X\times\mathbb{R}^{3}$ let $W$ be a nbhd of $C\times\\{0\\}$ which is disjoint from the loop $\tilde{\gamma}$. Perturb $\tilde{H}$ keeping $\tilde{H}|_{\partial D^{2}}$ fixed to get a map transversal to the $n$-dimensional manifold $W$ in $U$; since $2+n<n+3$, transversality means that the image of the perturbed map is disjoint from $W$, hence $\gamma$ is null-homotopic in $U-C\times\\{0\\}$. So the cellularity criterion holds for $C\times\\{0\\}$. ∎
### 3.2. Antoine’s necklace
Take the defining sequence where
* •
$C_{1}$ is a solid torus,
* •
$C_{2}$ is a finite number of solid tori embedded in $C_{1}$ in such a way
that each torus is unknotted and linked to its neighbour as in a usual chain,
* •
$C_{3}$ is again a finite number of similarly linked solid tori,
…, etc., see Figure 6.
Figure 6. A sketch of the defining sequence of Antoine’s necklace. We can see
the solid torus $C_{1}$, the linked tori $C_{2}$ and some linked tori from the
collection $C_{3}$, etc. The number of components of $C_{n+1}$ in $C_{n}$ is
large enough to make the diameters of the tori converge to $0$.
We always consider at least three tori in each $C_{n}$. We require that the
maximal diameter of tori in $C_{n}$ converges to $0$. The set
$\cap_{n=1}^{\infty}C_{n}$ is called Antoine’s necklace and denoted by
$\mathcal{A}$. It is easy to see that each of its components is cell-like.
Unlike the Whitehead continuum, the components of $\mathcal{A}$ are cellular
because every component of $C_{n+1}$ is inside a ball in $C_{n}$.
Recall that the Cantor set is the topological space
$D_{1}\times D_{2}\times\cdots\times D_{n}\times\cdots,$
where every space $D_{n}$ is a finite discrete metric space with $|D_{n}|\geq
2$.
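For instance, the usual middle-third Cantor set in $[0,1]$ is of this form with every $D_{n}=\\{0,2\\}$: the map $(a_{1},a_{2},\ldots)\mapsto\sum_{n=1}^{\infty}a_{n}/3^{n}$ is a homeomorphism of the product onto the middle-third Cantor set.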
###### Proposition 3.13.
The space $\cap_{n=1}^{\infty}C_{n}$ is homeomorphic to the Cantor set.
###### Proof.
Denote the number of tori embedded in $C_{1}$ by $m_{1}$, these tori are
$C_{2,1},\ldots,C_{2,m_{1}}$
whose disjoint union is $C_{2}$. For $1\leq i_{1}\leq m_{1}$ take the
$i_{1}$-th torus $C_{2,i_{1}}$ and denote the number of tori embedded into it
by $m_{2,i_{1}}$, these tori are
$C_{3,i_{1},1},\ldots,C_{3,i_{1},m_{2,i_{1}}}$
whose disjoint union is $C_{3}$. Again for $1\leq i_{2}\leq m_{2,i_{1}}$ take
the $i_{2}$-th torus $C_{3,i_{1},i_{2}}$ and denote the number of tori
embedded into it by $m_{3,i_{1},i_{2}}$, these tori are
$C_{4,i_{1},i_{2},1},\ldots,C_{4,i_{1},i_{2},m_{3,i_{1},i_{2}}}$
whose disjoint union is $C_{4}$. In general in the $n$-th step for $1\leq
i_{n}\leq m_{n,i_{1},\ldots,i_{n-1}}$ take the $i_{n}$-th torus
$C_{n+1,i_{1},\ldots,i_{n}}$ and denote the number of tori embedded into it by
$m_{n+1,i_{1},\ldots,i_{n}}$, these tori are
$C_{n+2,i_{1},\ldots,i_{n},1},\ldots,C_{n+2,i_{1},\ldots,i_{n},m_{n+1,i_{1},\ldots,i_{n}}}$
whose disjoint union is $C_{n+2}$.
Now we construct a Cantor set $\mathcal{C}$ in the interval $[0,1]$. Divide
$[0,1]$ into $2m_{1}-1$ closed intervals
$I_{2,1},\ldots,I_{2,2m_{1}-1}\subset[0,1]$
of equal length and disjoint interiors. Then divide the $i_{1}$-th interval
$I_{2,i_{1}}$, where $i_{1}$ is odd, into $2m_{2,i_{1}}-1$ closed intervals
$I_{3,i_{1},1},\ldots,I_{3,i_{1},2m_{2,i_{1}}-1}$
of equal length. Then divide the $i_{2}$-th interval $I_{3,i_{1},i_{2}}$,
where $i_{2}$ is odd, into $2m_{3,i_{1},i_{2}}-1$ closed intervals
$I_{4,i_{1},i_{2},1},\ldots,I_{4,i_{1},i_{2},2m_{3,i_{1},i_{2}}-1}$
of equal length. In the $n$-th step divide the $i_{n}$-th interval
$I_{n+1,i_{1},\ldots,i_{n}}$, where $i_{n}$ is odd, into the closed intervals
$I_{n+2,i_{1},\ldots,i_{n},1},\ldots,I_{n+2,i_{1},\ldots,i_{n},2m_{n+1,i_{1},\ldots,i_{n}}-1}$
of equal length and so on. So all the intervals $I_{n+1,i_{1},\ldots,i_{n}}$
have length
$\frac{1}{(2m_{1}-1)\cdots(2m_{n,i_{1},\ldots,i_{n-1}}-1)}.$
Then let
$\mathcal{C}=\bigcap_{n=1}^{\infty}\bigcup_{{\begin{smallmatrix}1\leq
i_{1}\leq m_{1}\\\ 1\leq i_{2}\leq m_{2,i_{1}}\\\ \cdots\\\ 1\leq i_{n}\leq
m_{n,i_{1},\ldots,i_{n-1}}\end{smallmatrix}}}I_{n+1,2i_{1}-1,\ldots,2i_{n}-1}.$
Assign to a point $x\in\cap_{n=1}^{\infty}C_{n}$ the point
$\bigcap_{n=1}^{\infty}I_{n+1,2i_{1}(x)-1,\ldots,2i_{n}(x)-1},$
which is the intersection of the closed intervals corresponding to the tori containing $x$, i.e. $i_{1}(x),\ldots,i_{n}(x)$ are the indices for which $x\in C_{n+1,i_{1}(x),\ldots,i_{n}(x)}$. This defines a map
$f\colon\cap_{n=1}^{\infty}C_{n}\to\mathcal{C},$
which is clearly surjective. It is injective as well because if $x\neq x^{\prime}$, then for large $n$ they are in different components of $C_{n}$, so they are mapped into different intervals as well. The map $f$ is continuous because if $x$ and $x^{\prime}$ are in the same component of $C_{n}$ for every $n$ up to some large enough index, then they are mapped to the same intervals up to a large index, so $f(x)$ and $f(x^{\prime})$ are close. Then $f$ is a homeomorphism since it is a continuous bijection of a compact space onto a $T_{2}$ space. ∎
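To make the interval construction of the proof concrete, here is a minimal Python sketch (the language, the helper `subdivide` and the branching numbers are our own illustrative choices): each closed interval is divided into $2m-1$ equal closed pieces and the odd-numbered ones are kept, one for each torus embedded at that stage.

```python
from fractions import Fraction

def subdivide(interval, m):
    """Divide a closed interval into 2m-1 equal closed pieces and keep the
    odd-numbered ones, mirroring the intervals I_{n+2, i_1, ..., i_n, 2i-1}."""
    a, b = interval
    step = (b - a) / (2 * m - 1)
    return [(a + 2 * k * step, a + (2 * k + 1) * step) for k in range(m)]

# Start with [0,1] for C_1 and refine with hypothetical branching numbers
# m_1 = 3 and then m_{2,i} = 2 for every second-stage torus.
level = [(Fraction(0), Fraction(1))]
for m in (3, 2):
    level = [piece for interval in level for piece in subdivide(interval, m)]
print(level)  # kept closed intervals; their nested intersection is the Cantor set
```

With the notation of the proof, the intervals printed at the end correspond to the $I_{3,2i_{1}-1,2i_{2}-1}$.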
Of course the components of $\mathcal{A}$ are points so the decomposition
space is obviously $\mathbb{R}^{3}$. An important property of $\mathcal{A}$ is
that it is _wild_ , i.e. there is no self-homeomorphism of $\mathbb{R}^{3}$
mapping $\mathcal{A}$ onto the Cantor set in a line segment. To prove this, we
study the local behaviour of the complement of $\mathcal{A}$.
###### Definition 3.14.
Let $k\geq 0$. A closed subset $A$ of a space $X$ is locally $k$-co-connected
($k$-LCC for short) if for every point $a\in A$ and for every nbhd $U$ of $a$
in $X$ there is a nbhd $V\subset U$ of $a$ in $X$ such that if
$\varphi\colon\partial D^{k+1}\to V-A$ is a map of the $k$-sphere, then
$\varphi$ extends to a map of $D^{k+1}$ into $U-A$.
###### Proposition 3.15.
The set $\mathcal{A}$ in $\mathbb{R}^{3}$ is not $1$-LCC.
###### Sketch of the proof.
First we show that if $\alpha\colon S^{1}\to C_{1}$ is the meridian of the
torus $C_{1}$, then every smooth embedding $\tilde{\alpha}\colon
D^{2}\to\mathbb{R}^{3}$ extending $\alpha$ is such that
$\tilde{\alpha}(D^{2})$ intersects $\mathcal{A}$. If this were not true, then
$\tilde{\alpha}(D^{2})$ would intersect at most finitely many tori
$C_{1},\ldots,C_{n}$ and it would be possible to perturb $\tilde{\alpha}$ to
get a smooth embedding transversal to each $\partial C_{n}$. Then it is
possible to show that there is a disk $D_{1}\subset D^{2}$ such that
$\tilde{\alpha}(\partial D_{1})$ intersects some torus $\partial C_{2,i_{1}}$
in a meridian. Inductively, $\tilde{\alpha}(D^{2})$ has to intersect some
torus $\partial C_{m,i_{1},\ldots,i_{m-1}}$ for arbitrarily large $m>n$, which
is a contradiction. Suppose that $\mathcal{A}$ is $1$-LCC. Let $\beta\colon
D^{2}\to\mathbb{R}^{3}$ be a smooth embedding such that $\beta(\partial
D^{2})$ is a meridian of $C_{1}$. Cover $\beta(D^{2})\cap\mathcal{A}$ by open
sets $\\{U_{\gamma}\\}_{\gamma\in\Gamma}$ around each of its points, then
there is a covering $\\{V_{\gamma}\\}_{\gamma\in\Gamma}$ such that for all
$\gamma\in\Gamma$ we have $V_{\gamma}\subset U_{\gamma}$ and each map
$\partial D^{2}\to(\mathbb{R}^{3}-\mathcal{A})\cap V_{\gamma}$ can be extended
to a map $D^{2}\to(\mathbb{R}^{3}-\mathcal{A})\cap U_{\gamma}$. We can also
suppose that $\cup_{\gamma}U_{\gamma}$ is disjoint from $\beta(\partial
D^{2})$. By Lebesgue lemma there is a refinement of $D^{2}$ into finitely many
small disks with disjoint interiors such that each of their boundary circles
is mapped by $\beta$ into some $V_{\gamma}$. After a small perturbation we can
suppose that each of the $\beta$-images of these boundary circles is disjoint
from $C_{n}$ for some common large $n$ but it is still in some $V_{\gamma}$.
Now change $\beta$ on each of the small disks to get a map into
$(\mathbb{R}^{3}-\mathcal{A})\cap U_{\gamma}$. By Dehn’s lemma the small disks can also be embedded into $(\mathbb{R}^{3}-\mathcal{A})\cap U_{\gamma}$. In this way we get an embedding of the original disk $D^{2}$ which is disjoint from $\mathcal{A}$. This contradicts the fact that every
embedded disk $D^{2}\subset\mathbb{R}^{3}$ with boundary circle being a
meridian of $C_{1}$ intersects $\mathcal{A}$. ∎
The standard Cantor set
$C\subset\mathbb{R}\times\\{0\\}\times\\{0\\}\subset\mathbb{R}^{3}$ is
$1$-LCC, because having a small loop in its complement $\mathbb{R}^{3}-C$
yields by approximation a small smooth loop in $\mathbb{R}^{3}-C$ transversal
to and disjoint from $\mathbb{R}\times\\{0\\}\times\\{0\\}$. Then deform this
loop by compressing it in a direction parallel to
$\mathbb{R}\times\\{0\\}\times\\{0\\}$ until the loop sits in the plane
$\\{x\\}\times\mathbb{R}^{2}$ for some number $x\in\mathbb{R}$ with $(x,0,0)\notin C$. After this the loop can easily be contracted inside this plane to a point in $\mathbb{R}^{3}-C$. Since the $1$-LCC property is preserved by self-homeomorphisms of $\mathbb{R}^{3}$, this implies that Antoine’s necklace is a wild Cantor set in $\mathbb{R}^{3}$.
### 3.3. Bing decomposition
If in the construction of Antoine’s necklace there are always two torus components of $C_{n+1}$ in each component of $C_{n}$, then we call the arising decomposition the Bing decomposition. A priori there could be many different Bing decompositions depending on how the solid tori are embedded into each other.
It is not obvious that we can arrange the components of $C_{n}$ to be embedded in such a way that $\cap_{n=1}^{\infty}C_{n}$ is a Cantor set, which would follow
if the maximal diameter of the tori in $C_{n}$ converges to $0$. A random
defining sequence can be seen in Figure 7.
Figure 7. A sketch of a defining sequence of the Bing decomposition. We can
see the torus $C_{1}$, the two torus components of $C_{2}$ and the four torus
components of $C_{3}$. The maximal diameter of tori in $C_{n}$ does not
converge to $0$ necessarily.
Now we construct a defining sequence, where the maximal diameter of the tori
in $C_{n}$ converges to $0$. For this, consider the following way to define a
finite sequence of finite sequences of embeddings:
$D_{0},$ $D_{0}\supset D_{2,1},$ $D_{0}\supset D_{3,1}\supset D_{3,2},$
$D_{0}\supset D_{4,1}\supset\cdots\supset D_{4,3},$
Figure 8. A sketch of constructing the tori $D_{n+1,n}$. Instead of the solid
tori we just draw their center circles. We always take the previously obtained
linked tori $D_{n,n-1}$, squeeze them to become “flat” as the figure shows,
then curve them a little and link them with another copy at the two “endings”.
Hence we get $D_{n+1,n}$. The sequence of embeddings $D_{0}\supset
D_{n+1,1}\supset\cdots\supset D_{n+1,n}$ can be kept track of by following all the smaller linkings.
…, etc., where $D_{n,0}=D_{0}$ is a solid torus, $D_{n,k}$ is a disjoint union
of $2^{k}$ copies of solid tori and the components of $D_{n,k}$ are embedded in pairs in the components of $D_{n,k-1}$; moreover these pairs are linked just like in the defining sequence of the Bing decomposition, for further subtleties see Figure 8.
Arriving at the tori $D_{n+1,n}$ and assuming that their meridional size is
small enough, we obtain a regular $2n$-gon-like arrangement of $2^{n}$ copies
of solid tori as Figure 8 shows. Two conditions are satisfied: the meridional
size of all the tori is small and an “edge” of this $2n$-gon is also small.
This means that if this $D_{n+1,n}$ is embedded into a torus (as the figure
suggests) whose meridional size is small, then the maximal diameter of the
torus components of $D_{n+1,n}$ is small if $n$ is large.
###### Proposition 3.16.
There is a defining sequence $C_{1},\ldots,C_{n},\ldots$ of the Bing
decomposition, where the maximal diameter of tori in $C_{n}$ converges to $0$.
Hence $\cap_{n=1}^{\infty}C_{n}$ is homeomorphic to the Cantor set.
###### Proof.
Let $\varepsilon_{n}>0$ be a sequence whose limit is $0$. Let $n_{1}$ be so
large that in $C_{n_{1}}$ in a defining sequence the meridional size of tori
is smaller than $\varepsilon_{1}$. Let $m_{1}$ be so large that we can embed
$D_{m_{1}+1,m_{1}}$ into the torus components of $C_{n_{1}}$ so that the
maximal diameter of tori in the obtained $C_{n_{1}+m_{1}}$ is smaller than
$\varepsilon_{1}$. Then let $n_{2}>n_{1}+m_{1}$ be so large that in a
continuation of the defining sequence in $C_{n_{2}}$ the meridional size of
tori is smaller than $\varepsilon_{2}$. Let $m_{2}$ be so large that we can
embed $D_{m_{2}+1,m_{2}}$ into the torus components of $C_{n_{2}}$ so that the
maximal diameter of tori in the obtained $C_{n_{2}+m_{2}}$ is smaller than
$\varepsilon_{2}$. And so on. It is easy to see that the maximal diameter of
tori converges to $0$. ∎
This implies that the decomposition space of this decomposition is
$\mathbb{R}^{3}$. For an arbitrary defining sequence the space
$\cap_{n=1}^{\infty}C_{n}$ may not be the Cantor set; however, the decomposition space could still be homeomorphic to the ambient space
$\mathbb{R}^{3}$. It is a very important observation that the embedding of the
tori in $D_{n+1,n}$ can be obtained by an isotopy of
$C_{1}\subset\cdots\subset C_{n+1}$ in any defining sequence, see [Bi52]. By
such an isotopy, for a given defining sequence we can achieve something similar to the previous statement: if $n$ is large enough, then the meridional size of the torus components in $C_{n}$ is smaller than a given $\varepsilon>0$. Then apply the required isotopy to $C_{n+1},\ldots,C_{n+k}$ for some large $k$ to make the maximal diameter of the torus components of $C_{n+k}$ smaller than $\varepsilon$. Note that since $n$ is large enough and all the isotopy happens inside $C_{n}$, all the isotopy happens inside an arbitrarily small nbhd of $\cap_{n=1}^{\infty}C_{n}$. This means that for every $\varepsilon>0$ there is a self-homeomorphism $h$ of $\mathbb{R}^{3}$ with support in $C_{1}$ such that $\mathrm{diam}\thinspace h(D)<\varepsilon$ for every decomposition element $D\subset\cap_{n=1}^{\infty}C_{n}$ and also $\pi\circ h(D)$ stays in the $\varepsilon$-nbhd of $\pi(D)$ for some metric on the decomposition space. This condition is called the shrinkability
criterion and it implies that the decomposition space is homeomorphic to the
ambient space $\mathbb{R}^{3}$ as we will see in the next section.
## 4\. Shrinking
Let $X$ be a topological space and $\mathcal{D}$ a decomposition of $X$. An
open cover $\mathcal{U}$ of $X$ is called $\mathcal{D}$-saturated if every
$U\in\mathcal{U}$ is a union of decomposition elements.
###### Definition 4.1 (Bing shrinkability criterion).
Let $\mathcal{D}$ be a usc decomposition of the space $X$. We say
$\mathcal{D}$ is _shrinkable_ if for every open cover $\mathcal{V}$ and
$\mathcal{D}$-saturated open cover $\mathcal{U}$ there is a self-homeomorphism
$h$ of $X$ such that for every $D\in\mathcal{D}$ the set $h(D)$ is in some
$V\in\mathcal{V}$ and for every $x\in X$ there is a $U\in\mathcal{U}$ such
that $x,h(x)\in U$. In other words, $h$ shrinks the elements of $\mathcal{D}$
to arbitrarily small sets and $h$ is $\mathcal{U}$-close to the identity. We
say $\mathcal{D}$ is _strongly shrinkable_ if for every open set $W$
containing all the non-degenerate elements of $\mathcal{D}$ the decomposition
$\mathcal{D}$ is shrinkable so that the support of $h$ is in $W$.
In other words $\mathcal{D}$ is shrinkable if its elements can be made small
enough simultaneously so that this shrinking process does not move the points
of $X$ too far in the sense of measuring the distance in the decomposition
space. If $X$ has a shrinkable decomposition, then we expect that the local
structure of $X$ is similar to the structure of the nbhds of the decomposition
elements.
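As a toy example, let $\mathcal{D}$ be the decomposition of $\mathbb{R}$ whose only non-degenerate element is $D=[0,1]$. Given a $\mathcal{D}$-saturated open cover $\mathcal{U}$ and an arbitrary open cover $\mathcal{V}$, choose $U\in\mathcal{U}$ containing $D$, hence containing $(-\delta,1+\delta)$ for some $\delta>0$, and choose $V\in\mathcal{V}$ containing an interval $(\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon)$. One possible shrinking homeomorphism $h\colon\mathbb{R}\to\mathbb{R}$ is the piecewise linear map which is the identity outside $(-\frac{\delta}{2},1+\frac{\delta}{2})$, sends $0$ to $\frac{1}{2}-\frac{\varepsilon}{2}$ and $1$ to $\frac{1}{2}+\frac{\varepsilon}{2}$, and is linear between these points. Then $h(D)\subset V$, and for every $x\in\mathbb{R}$ the points $x$ and $h(x)$ lie in a common element of $\mathcal{U}$ (either both are in $U$ or $h(x)=x$), so $\mathcal{D}$ is shrinkable, in fact strongly shrinkable. This toy example only illustrates the definition.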
###### Proposition 4.2.
Let $X$ be a regular space and let $\mathcal{D}$ be a shrinkable usc
decomposition of $X$. If every $x\in X$ has arbitrarily small nbhds satisfying
a fixed topological property, then every $D\in\mathcal{D}$ has arbitrarily
small nbhds satisfying the same property.
###### Proof.
Let $W$ be an arbitrary nbhd of an element $D\in\mathcal{D}$. Then there is a
saturated nbhd $\tilde{U}_{1}$ of $D$ such that $\tilde{U}_{1}\subset W$. Let
$U_{1}$ denote $\pi(\tilde{U}_{1})$. Since $X_{\mathcal{D}}$ is regular, there
are open sets $U_{2}$ and $U_{3}$ such that
$\pi(D)\subset U_{3}\subset\mathrm{cl}\thinspace U_{3}\subset
U_{2}\subset\mathrm{cl}\thinspace U_{2}\subset U_{1}.$
Then take the sets
$\pi^{-1}(U_{3}),\mbox{\ }\pi^{-1}(U_{2})-D,\mbox{\
}\pi^{-1}(U_{1}-\mathrm{cl}\thinspace U_{3}),\mbox{\ and\
}X-\pi^{-1}(\mathrm{cl}\thinspace U_{2}),$
see Figure 9. These yield a $\mathcal{D}$-saturated open cover $\mathcal{U}$
of $X$. Let $\mathcal{V}$ be an open cover of $X$ which refines $\mathcal{U}$
and consists of open sets with our fixed property. Since $\mathcal{D}$ is
shrinkable, we have a homeomorphism $h\colon X\to X$ such that $h(D)\subset V$
for some $V\in\mathcal{V}$ and $h$ is $\mathcal{U}$-close to the identity.
Then $D\subset h^{-1}(V)$ so it is enough to show that
$h^{-1}(V)\subset W.$
Suppose there exists some $x\in h^{-1}(V)-W$, then $x\in
h^{-1}(V)-\pi^{-1}(U_{1})$ since $\pi^{-1}(U_{1})\subset W$. Hence among the
sets in $\mathcal{U}$ only $X-\pi^{-1}(\mathrm{cl}\thinspace U_{2})$ contains
$x$ so $h(x)\in V$ has to be in $X-\pi^{-1}(\mathrm{cl}\thinspace U_{2})$ as
well. This implies that $V\subset X-\pi^{-1}(\mathrm{cl}\thinspace U_{3})$ because $V$ lies in some element of $\mathcal{U}$, and the only elements of $\mathcal{U}$ meeting $X-\pi^{-1}(\mathrm{cl}\thinspace U_{2})$ are contained in $X-\pi^{-1}(\mathrm{cl}\thinspace U_{3})$. Also we know that
$h(D)\subset\pi^{-1}(U_{3})$ since $D\subset\pi^{-1}(U_{3})$. This means that
$h(D)\subset V\subset X-\pi^{-1}(\mathrm{cl}\thinspace U_{3})$ cannot hold so
$h^{-1}(V)\subset W$. ∎
Figure 9. The set $D$ and the $\pi$-preimages of the sets $U_{3}\subset
U_{2}\subset U_{1}$ in $X$. In the figure the sets $\pi^{-1}(U_{3})$ and
$\pi^{-1}(U_{2})-D$ are shaded.
For example every decomposition element of a shrinkable decomposition of a
manifold is cellular.
It is often not too difficult to check whether a decomposition of a space $X$
is shrinkable. A corollary of shrinkability is that the decomposition space is
homeomorphic to $X$. This is often applied when we want to construct embedded
manifolds and the construction uses mismatched pieces, which we eliminate by
taking them as the decomposition elements and then looking at the
decomposition space.
###### Definition 4.3 (Near-homeomorphism, approximating by homeomorphism).
Let $X$ and $Y$ be topological spaces. A surjective map $f\colon X\to Y$ is a
_near-homeomorphism_ if for every open covering $\mathcal{W}$ of $Y$ there is
a homeomorphism $h\colon X\to Y$ such that for every $x\in X$ the points
$f(x)$ and $h(x)$ are in some $W\in\mathcal{W}$, in other words $h$ is
_$\mathcal{W}$ -close_ to $f$.
If $(Y,\varrho)$ is a metric space, then $f$ being a near-homeomorphism
implies that $f$ can be approximated by homeomorphisms in the possibly
infinite-valued metric $d(f,g)=\sup\\{\varrho(f(x),g(x))\\}$. Notice that if
$f\colon X\to Y$ is a near-homeomorphism, then $X$ and $Y$ are actually
homeomorphic.
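For instance, the map $\pi\colon[0,1]\to[0,1]$ collapsing the middle third to a point, given by $\pi(x)=\frac{3}{2}x$ for $x\in[0,\frac{1}{3}]$, $\pi(x)=\frac{1}{2}$ for $x\in[\frac{1}{3},\frac{2}{3}]$ and $\pi(x)=\frac{3}{2}x-\frac{1}{2}$ for $x\in[\frac{2}{3},1]$, is a near-homeomorphism: after identifying the quotient with $[0,1]$, it is the decomposition map of the decomposition whose only non-degenerate element is $[\frac{1}{3},\frac{2}{3}]$, and for every $\varepsilon>0$ the piecewise linear homeomorphism $h_{\varepsilon}$ determined by $h_{\varepsilon}(0)=0$, $h_{\varepsilon}(\frac{1}{3})=\frac{1}{2}-\varepsilon$, $h_{\varepsilon}(\frac{2}{3})=\frac{1}{2}+\varepsilon$ and $h_{\varepsilon}(1)=1$ satisfies $\varrho(\pi(x),h_{\varepsilon}(x))\leq\varepsilon$ for every $x\in[0,1]$.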
The main result is that a usc decomposition yields a homeomorphic
decomposition space if the decomposition is shrinkable. This is applied in
major $4$-dimensional results: in the disk embedding theorem and in the proof
of the $4$-dimensional topological Poincaré conjecture [Fr82, BKKPR]. It is
extensively applied in constructing approximations of manifold embeddings in
dimension $\geq 5$, see [AC79] and Edwards’s cell-like approximation theorem.
For an open cover $\mathcal{W}$ of a space $X$ and a subset $A\subset X$ let
$St(A,\mathcal{W})$ denote the subset
$\bigcup\\{W\in\mathcal{W}:W\cap A\neq\emptyset\\}.$
This is called the _star_ of $A$ and it is a nbhd of $A$. Of course if
$A\subset B$, then $St(A,\mathcal{W})\subset St(B,\mathcal{W})$. If
$\mathcal{W}^{\prime}$ is an open cover which is a refinement of the open
cover $\mathcal{W}$, then obviously $St(A,\mathcal{W}^{\prime})\subset
St(A,\mathcal{W})$. We will often use that if the covering $\mathcal{W}^{\prime}$ is a _star-refinement_ of the covering $\mathcal{W}$, that is, the collection
$\left\\{St(W_{\alpha},\mathcal{W}^{\prime}):W_{\alpha}\in\mathcal{W}^{\prime}\right\\}$
of stars of elements of $\mathcal{W}^{\prime}$ is a refinement of $\mathcal{W}$, then for every point $x\in X$ we have $St(\\{x\\},\mathcal{W}^{\prime})\subset W$ for some $W\in\mathcal{W}$: indeed, if $x\in W_{\alpha}$ for some $W_{\alpha}\in\mathcal{W}^{\prime}$, then $St(\\{x\\},\mathcal{W}^{\prime})\subset St(W_{\alpha},\mathcal{W}^{\prime})$, which lies in some element of $\mathcal{W}$.
The following theorem requires a complete metric on the space $X$; for example, the statement holds for an arbitrary manifold.
###### Theorem 4.4.
Let $\mathcal{D}$ be a usc decomposition of a space $X$ admitting a complete
metric. Then the following are equivalent:
1. (1)
the decomposition map $\pi\colon X\to X_{\mathcal{D}}$ is a near-
homeomorphism,
2. (2)
$\mathcal{D}$ is shrinkable.
If additionally $X$ is also locally compact and separable, then shrinkability
is equivalent to
1. (3)
if $C\subset X$ is an arbitrary compact set, $\varepsilon>0$ and $\mathcal{U}$
is a $\mathcal{D}$-saturated open cover of $X$, then there is a homeomorphism
$h\colon X\to X$ such that ${\mathrm{diam}}\thinspace h(D)<\varepsilon$ for
every $D\subset C$, $D\in\mathcal{D}$ and $h$ is $\mathcal{U}$-close to the
identity.
###### Proof.
Near-homeomorphism (1) implies shrinking (2) and (3). Of course (2) implies
(3) so we are going to prove only that (1) implies (2). At first, suppose that
the decomposition map $\pi\colon X\to X_{\mathcal{D}}$ is a near-
homeomorphism. We have to show that $\mathcal{D}$ is shrinkable by finding an
appropriate homeomorphism $h$. We know that since $X$ is metric, the
decomposition space $X_{\mathcal{D}}$ is metrizable hence it is paracompact.
(To show that $\mathcal{D}$ is shrinkable, we will use only that the space $X$
is paracompact and $T_{4}$.) Let $\mathcal{V}$ be an open cover and let
$\mathcal{U}$ be a $\mathcal{D}$-saturated open cover of $X$. Take the open
covering $\\{\pi(U):U\in\mathcal{U}\\}$ of $X_{\mathcal{D}}$. Since
$X_{\mathcal{D}}$ is paracompact, this covering has a star-refinement
$\mathcal{W}_{0}$, i.e. $\mathcal{W}_{0}$ is a covering and the collection of
stars of elements of $\mathcal{W}_{0}$, that is the collection
$\left\\{St(W_{\alpha},\mathcal{W}_{0}):W_{\alpha}\in\mathcal{W}_{0}\right\\}$
is a refinement of $\\{\pi(U):U\in\mathcal{U}\\}$, see [Du66, Section 8.3].
Similarly $\mathcal{W}_{0}$ has a star-refinement covering $\mathcal{W}_{1}$.
Then there is a homeomorphism
$h_{1}\colon X\to X_{\mathcal{D}}$
which is $\mathcal{W}_{1}$-close to $\pi$ because $\pi$ is a near-
homeomorphism. Take the open cover
$\mathcal{W}_{1}\bigcap h_{1}(\mathcal{V})=\\{W\cap
h_{1}(V):W\in\mathcal{W}_{1},V\in\mathcal{V}\\}$
and a star-refinement $\mathcal{W}_{2}$ of it. Of course $\mathcal{W}_{2}$ is
a star-refinement of $\mathcal{W}_{1}$ and $h_{1}(\mathcal{V})$ as well. There
is a homeomorphism
$h_{2}\colon X\to X_{\mathcal{D}}$
which is $\mathcal{W}_{2}$-close to $\pi$. Let $h\colon X\to X$ be the
composition
$h_{1}^{-1}\circ h_{2}.$
At first we show that $h$ shrinks every decomposition element
$D\in\mathcal{D}$ into some $V\in\mathcal{V}$. Let $D\in\mathcal{D}$. It is
enough to show that $h_{2}(D)\subset h_{1}(V)$ for some $V\in\mathcal{V}$. We
have that for every $x\in D$ the points $\pi(D)$ and $h_{2}(x)$ are in the
same $W_{x}\in\mathcal{W}_{2}$ so
$h_{2}(D)\subset St(\\{\pi(D)\\},\mathcal{W}_{2})\subset h_{1}(V)$
for some $V\in\mathcal{V}$ because $\mathcal{W}_{2}$ is a star-refinement of
$h_{1}(\mathcal{V})$. Now we show that $h$ is $\mathcal{U}$-close to the
identity. We have that for every $x\in D$ the points $\pi(D)$ and $h_{1}(x)$
are in the same $W_{x}\in\mathcal{W}_{1}$ because $h_{1}$ is
$\mathcal{W}_{1}$-close to $\pi$ so
$h_{1}(D)\subset St(\\{\pi(D)\\},\mathcal{W}_{1}).$
Since $\mathcal{W}_{2}$ is a refinement of $\mathcal{W}_{1}$, we have
$h_{2}(D)\subset St(\\{\pi(D)\\},\mathcal{W}_{2})\subset
St(\\{\pi(D)\\},\mathcal{W}_{1}).$
These imply that
$h_{1}(D)\cup h_{2}(D)\subset St(\\{\pi(D)\\},\mathcal{W}_{1})\subset W_{0}$
for some $W_{0}\in\mathcal{W}_{0}$ because $\mathcal{W}_{1}$ is a star-
refinement of $\mathcal{W}_{0}$. Hence for every $D\in\mathcal{D}$ we have
$D\cup h(D)=h_{1}^{-1}\circ h_{1}(D\cup h(D))=h_{1}^{-1}(h_{1}(D)\cup
h_{2}(D))\subset h_{1}^{-1}(W_{0})$
so if we show that
$h_{1}^{-1}(W_{0})\subset U$
for some $U\in\mathcal{U}$, then we prove the statement. Since $h_{1}$ and
$\pi$ are $\mathcal{W}_{1}$-close, they are $\mathcal{W}_{0}$-close as well.
This means that for every $x\in X$ the points $\pi(x)$ and $h_{1}(x)$ are in
the same $W_{x}\in\mathcal{W}_{0}$. So if $x\in h^{-1}_{1}(W_{0})$, then
$\pi(x)\in St(W_{0},\mathcal{W}_{0}),$
which gives that
$\pi(h^{-1}_{1}(W_{0}))\subset St(W_{0},\mathcal{W}_{0})\subset\pi(U)$
for some $U\in\mathcal{U}$ because $\mathcal{W}_{0}$ is a star-refinement of $\\{\pi(U):U\in\mathcal{U}\\}$. Then the statement follows because
$h^{-1}_{1}(W_{0})\subset\pi^{-1}\circ\pi(h^{-1}_{1}(W_{0}))\subset\pi^{-1}\circ\pi(U)=U.$
Shrinking (2) or (3) implies near-homeomorphism (1). At first observe that in
the case of (3) if $X$ is locally compact and separable, then $X$ is
$\sigma$-compact so $X$ is the union $\cup_{n=1}^{\infty}C_{n}$ of countably
many compact sets
$C_{1}\subset C_{2}\subset\cdots\subset C_{n}\subset\cdots.$
We also suppose that every $C_{n}$ is $\mathcal{D}$-saturated and has non-
empty interior. Let $\mathcal{W}$ be an arbitrary open cover of
$X_{\mathcal{D}}$. We have to construct a homeomorphism $h\colon X\to
X_{\mathcal{D}}$ which is $\mathcal{W}$-close to $\pi$. At first, we construct
a sequence
$\mathcal{U}_{0},\mathcal{U}_{1},\ldots,\mathcal{U}_{n},\ldots$
of $\mathcal{D}$-saturated open covers of $X$ and a sequence
$h_{0},h_{1},\ldots,h_{n},\ldots$
of self-homeomorphisms of $X$ with some useful properties. Let
$\mathcal{U}_{0}$ be a $\mathcal{D}$-saturated open cover of $X$ such that the
collection of the closures of the elements of $\mathcal{U}_{0}$ refines the
open cover $\pi^{-1}(\mathcal{W})$. This obviously exists because
$X_{\mathcal{D}}$ is regular so around every point of $X_{\mathcal{D}}$ there
is a small closed nbhd contained in some element of $\mathcal{W}$. Let $h_{0}$
be the identity homeomorphism. Let $\varepsilon_{n}>0$ be a decreasing
sequence converging to $0$. Define $\varepsilon_{0}$ to be $\infty$. Denote
the metric on $X$ by $d$. Suppose inductively that we constructed already the
covers $\mathcal{U}_{0},\ldots,\mathcal{U}_{n}$ and the homeomorphisms
$h_{0},\ldots,h_{n}$ with the following properties:
1. (1)
1. (a)
$\mathcal{U}_{i+1}$ is a $\mathcal{D}$-saturated open cover, which refines
$\mathcal{U}_{i}$ for $0\leq i\leq n-1$,
2. (b)
for all $0\leq i\leq n$ the set $\mathcal{U}_{i}$ refines the collection of
$\varepsilon_{i}$-nbhds of the elements of $\mathcal{D}$ and also refines the
collection $\\{\pi^{-1}(B_{\varepsilon_{i}}(y)):y\in X_{\mathcal{D}}\\}$,
where $B_{\varepsilon_{i}}(y)$ is the open ball of radius $\varepsilon_{i}$
around $y$,
2. (2)
1. (a)
for every $0\leq i\leq n-1$ every $D\in\mathcal{D}$ has a nbhd
$U\in\mathcal{U}_{i}$ such that for every $U^{\prime}\in\mathcal{U}_{i+1}$
which contains $D$ we have
$h_{i}(U^{\prime})\cup h_{i+1}(U^{\prime})\subset h_{i}(U),$
2. (b)
for every $0\leq i\leq n$ the diameter of each $h_{i}(U)$,
$U\in\mathcal{U}_{i}$, is smaller than $\varepsilon_{i}$,
3. (b′)
in the case where $X$ is $\sigma$-compact we require only that for every $0\leq
i\leq n$ and for every nbhd $U\in\mathcal{U}_{i}$ such that $U\cap
C_{i}\neq\emptyset$ the diameter of each $h_{i}(U)$ is smaller than
$\varepsilon_{i}$.
There will be some important corollaries of these constructions. Part (a) of
(2) implies that every $D\in\mathcal{D}$ has a nbhd $U\in\mathcal{U}_{i}$ such
that for every $k\geq 1$ and $U^{\prime}\in\mathcal{U}_{i+k}$ which contains
$D$ we have
(4.1) $h_{i+k}(U^{\prime})\subset h_{i}(U).$
For $k=1$ this is immediate from (2)(a) and for $k\geq 2$ this follows by a
simple induction. This means that once we have $U_{n}$ and $h_{n}$ for every
$n\in\mathbb{N}$ satisfying (1) and (2), the sequence $h_{n}$ is a Cauchy
sequence in the sense of local uniform convergence in the space of maps of $X$
into $X$. Indeed, if $x\in X$, then for some $D\in\mathcal{D}$ we have $x\in
D$ and then $D$ has a nbhd $U\in\mathcal{U}_{n}$ for every $n$ such that by
applying (4.1) for all $k\in\mathbb{N}$
(4.2) $h_{n+k}(D)\subset h_{n}(U),$
which means that $d(h_{n}(x),h_{n+k}(x))<\varepsilon_{n}$ for all $x\in X$ by
(2)(b). In the case where $X$ is $\sigma$-compact we have that for some
$m\in\mathbb{N}$ the intersection $D\cap C_{n}\neq\emptyset$ for $n\geq m$
hence by (2)(b′) we have $\mathrm{diam}\thinspace h_{n}(U)<\varepsilon_{n}$
for every $n\geq m$ and nbhd $U\in\mathcal{U}_{n}$ of $D$. This implies that
for all $n\geq m$ we get $d(h_{n}(x),h_{n+k}(x))<\varepsilon_{n}$ for all $k$
and $x\in D$, where $D\subset C_{n}$. Since $(X,d)$ is complete, the sequence
$h_{n}$ converges locally uniformly to a continuous map
$\chi\colon X\to X,$
which will be a good candidate for obtaining our desired near-homeomorphism.
Defining $\mathcal{U}_{n+1}$ and $h_{n+1}$. So let us return to the definition
of the covers $\mathcal{U}_{n}$ and homeomorphisms $h_{n}$. Suppose
inductively that we constructed already the covers
$\mathcal{U}_{0},\ldots,\mathcal{U}_{n}$ and the homeomorphisms
$h_{0},\ldots,h_{n}$ with the properties (1) and (2). We are going to define
$\mathcal{U}_{n+1}$ and $h_{n+1}$. The metrizable space $X_{\mathcal{D}}$ is
paracompact so the open cover $\pi(\mathcal{U}_{n})$ has a star-refinement
whose $\pi$-preimage $\mathcal{U}_{n}^{\prime}$ is a $\mathcal{D}$-saturated
open cover of $X$, which star-refines $\mathcal{U}_{n}$. Let $\mathcal{V}$ be
an open cover of $X$ such that the diameter of each of its elements is smaller
than $\varepsilon_{n+1}$. Then we have two possibilities.
* •
If $\mathcal{D}$ is shrinkable, then there is a self-homeomorphism $H$ of $X$,
which is $h_{n}(\mathcal{U}_{n}^{\prime})$-close to the identity and shrinks
the elements of $h_{n}(\mathcal{D})$ into the sets of $\mathcal{V}$. Let
$h_{n+1}=H\circ h_{n}.$
Clearly the diameter of each $h_{n+1}(D)$, where $D\in\mathcal{D}$, is smaller
than $\varepsilon_{n+1}$.
* •
If only those elements of $\mathcal{D}$ are shrinkable which are in a chosen
compact set as we suppose in (3) of the statement of Theorem 4.4, then there
is a homeomorphism $H_{0}\colon X\to X$ such that the elements of
$\mathcal{D}$ in the compact set $C_{n+1}$ are mapped by $H_{0}$ into some
element of $h_{n}^{-1}(\mathcal{V})$ and $H_{0}$ is
$\mathcal{U}_{n}^{\prime}$-close to the identity. This implies that
$h_{n}\circ H_{0}\circ h_{n}^{-1}$ is such a self-homeomorphism of $X$ that
maps the elements of $h_{n}(\mathcal{D})$ which are in $h_{n}(C_{n+1})$ into
the sets of $\mathcal{V}$ and it is $h_{n}(\mathcal{U}_{n}^{\prime})$-close to
the identity. Denote $h_{n}\circ H_{0}\circ h_{n}^{-1}$ by $H$. Then let
$h_{n+1}=H\circ h_{n}.$
So $h_{n+1}$ maps every $D\in\mathcal{D}$, $D\subset C_{n+1}$ into a set of
diameter smaller than $\varepsilon_{n+1}$.
The definition of $\mathcal{U}_{n+1}$ is a little more complicated. For every
$U_{n}^{\prime}\in\mathcal{U}_{n}^{\prime}$
$h_{n+1}(U_{n}^{\prime})\subset
h_{n}(St(U_{n}^{\prime},\mathcal{U}_{n}^{\prime}))$
because
$h_{n+1}(U_{n}^{\prime})=H\circ h_{n}(U_{n}^{\prime})\subset
St(h_{n}(U_{n}^{\prime}),h_{n}(\mathcal{U}_{n}^{\prime}))$
since $H$ is $h_{n}(\mathcal{U}_{n}^{\prime})$-close to the identity and also
$St(h_{n}(U_{n}^{\prime}),h_{n}(\mathcal{U}_{n}^{\prime}))=h_{n}(St(U_{n}^{\prime},\mathcal{U}_{n}^{\prime})).$
The covering $\mathcal{U}_{n}^{\prime}$ star-refines $\mathcal{U}_{n}$ so for
every $U_{n}^{\prime}\in\mathcal{U}_{n}^{\prime}$ there is an
$U_{n}\in\mathcal{U}_{n}$ such that
$h_{n}(St(U_{n}^{\prime},\mathcal{U}_{n}^{\prime}))\subset h_{n}(U_{n}),$
which obviously implies that for every
$U_{n}^{\prime}\in\mathcal{U}_{n}^{\prime}$ there is an
$U_{n}\in\mathcal{U}_{n}$ such that
$h_{n}(U_{n}^{\prime})\cup h_{n+1}(U_{n}^{\prime})\subset
h_{n}(St(U_{n}^{\prime},\mathcal{U}_{n}^{\prime}))\subset h_{n}(U_{n}).$
Let $\mathcal{S}$ be a $\mathcal{D}$-saturated open cover of $X$ with the
following properties:
1. (i)
the elements of $\mathcal{S}$ are nbhds of the elements of $\mathcal{D}$ such
that the diameter of each $h_{n+1}(S)$, where $S\in\mathcal{S}$, is smaller
than $\varepsilon_{n+1}$ (in the case of $S\cap C_{n+1}\neq\emptyset$ if $X$
is $\sigma$-compact),
2. (ii)
$\mathcal{S}$ refines the collection of $\varepsilon_{n+1}$-nbhds of the
elements of $\mathcal{D}$,
3. (iii)
$\mathcal{S}$ also refines the $\mathcal{D}$-saturated coverings
1. (a)
$\mathcal{U}_{n}^{\prime}$ and
2. (b)
the collection $\\{\pi^{-1}(B_{\varepsilon_{n+1}}(y)):y\in
X_{\mathcal{D}}\\}$,
4. (iv)
for every $S\in\mathcal{S}$ there is a $U_{n}\in\mathcal{U}_{n}$ such that
$h_{n}(S)\cup h_{n+1}(S)\subset h_{n}(U_{n}).$
Let $\mathcal{U}_{n+1}$ be the $\pi$-preimage of an open cover of
$X_{\mathcal{D}}$ which star-refines the open cover $\pi(\mathcal{S})$. It
follows that $\mathcal{U}_{n+1}$ star-refines $\mathcal{S}$. After we defined
$\mathcal{U}_{n+1}$ and $h_{n+1}$ let us check if
$\mathcal{U}_{0},\ldots,\mathcal{U}_{n+1}$ and $h_{0},\ldots,h_{n+1}$ satisfy
the conditions (1) and (2) above. The cover $\mathcal{U}_{n+1}$ refines
the cover $\mathcal{U}_{n}$ because $\mathcal{U}_{n}^{\prime}$ refines
$\mathcal{U}_{n}$, $\mathcal{S}$ refines $\mathcal{U}_{n}^{\prime}$ by
(iii)(a) and $\mathcal{U}_{n+1}$ refines $\mathcal{S}$. So (1)(a) holds. Also
(1)(b) holds because of (ii) and (iii)(b). To prove (2)(a) observe that for
every $D\in\mathcal{D}$ the set $St(D,\mathcal{U}_{n+1})$ is a subset of
$St(U,\mathcal{U}_{n+1})$ for some $U\in\mathcal{U}_{n+1}$. Then
$St(D,\mathcal{U}_{n+1})\subset S$ for some $S\in\mathcal{S}$ since
$\mathcal{U}_{n+1}$ star-refines $\mathcal{S}$. By (iv) there exists a
$U\in\mathcal{U}_{n}$ such that
$h_{n}(S)\cup h_{n+1}(S)\subset h_{n}(U)$
so
$h_{n}(St(D,\mathcal{U}_{n+1}))\cup h_{n+1}(St(D,\mathcal{U}_{n+1}))\subset
h_{n}(U).$
But every $U^{\prime}\in\mathcal{U}_{n+1}$ which contains $D$ is in
$St(D,\mathcal{U}_{n+1})$ so we have
$h_{n}(U^{\prime})\cup h_{n+1}(U^{\prime})\subset h_{n}(U),$
which proves (2)(a). Finally, the diameter of each $h_{n+1}(U)$,
$U\in\mathcal{U}_{n+1}$, is smaller than $\varepsilon_{n+1}$ (if $U\cap
C_{n+1}\neq\emptyset$ in the case of $\sigma$-compact $X$), because
$\mathcal{U}_{n+1}$ refines $\mathcal{S}$ and we can apply (i).
Constructing the near-homeomorphism. After having these infinitely many
$\mathcal{D}$-saturated open coverings
$\mathcal{U}_{0},\mathcal{U}_{1},\ldots$
and homeomorphisms
$h_{0},h_{1},\ldots$
take the map
$\chi\colon X\to X$
that we obtained applying (4.2) and defined to be the pointwise limit of the
sequence $h_{n}$. At first, we show that $\chi$ is surjective. Let $x\in X$
and $x_{n}=h_{n+1}^{-1}(x)$. Let $D_{n}\in\mathcal{D}$ be such that $x_{n}\in
D_{n}$, then by (2)(a) we get a nbhd $U_{n}\in\mathcal{U}_{n}$ of $D_{n}$ such
that for every $U^{\prime}\in\mathcal{U}_{n+1}$ containing $D_{n}$ we have
$h_{n}(U^{\prime})\cup h_{n+1}(U^{\prime})\subset h_{n}(U_{n}).$
In this way we get a decreasing sequence
$U_{1}\supset U_{2}\supset\cdots\supset U_{n}\supset\cdots$
because of the following. It is enough to show that $D_{n}\subset U_{n+1}$ as
well, then by (2)(a) we obtain $h_{n}(U_{n+1})\subset h_{n}(U_{n})$ so
$U_{n+1}\subset U_{n}$. But $D_{n}\subset U_{n+1}$ because
$h_{n+2}(V)\subset h_{n+1}(U_{n+1})$
by (2)(a) for every $V\in\mathcal{U}_{n+2}$ containing $D_{n+1}$, so we also
have
$h_{n+2}(x_{n+1})\in h_{n+2}(D_{n+1})\subset h_{n+2}(V),$
which implies
$x\in h_{n+1}(U_{n+1})$
hence
$h_{n+1}(x_{n})\in h_{n+1}(U_{n+1})\mbox{\ \ \ \ and so\ \ \ \ }x_{n}\in
U_{n+1}$
but $U_{n+1}$ is $\mathcal{D}$-saturated hence also $D_{n}\subset U_{n+1}$.
The sequence $(x_{n})$ has a Cauchy (hence convergent) subsequence: since
$x_{n}\in U_{n}$, for all $k\geq 0$
$x_{n+k}\in U_{n}$
and for every $\varepsilon>0$ there is $\varepsilon_{n}<\varepsilon$ such that
$U_{n}$ is in the $\varepsilon$-nbhd of some $D\in\mathcal{D}$ by (1)(b).
Since the metric space $X$ is complete, there is an $x_{0}\in X$ such that a
subsequence $(x_{n_{k}})$ of $(x_{n})$ converges to $x_{0}$.
All of these imply that because of the definition of $\chi$ and the locally
uniform convergence of $h_{n}$ we have
$\chi(x_{0})=\lim_{k\to\infty}h_{n_{k}+1}(x_{n_{k}})=\lim_{k\to\infty}x=x.$
This means $\chi$ is surjective.
It will turn out that $\chi$ is not injective so it is not a homeomorphism.
However, the composition $\pi\circ\chi^{-1}$ of the relation $\chi^{-1}$ and
the decomposition map $\pi$ is a homeomorphism. To see this, we show that the
sets $\chi^{-1}(x)$, where $x\in X$, are exactly the decomposition elements of
$\mathcal{D}$. By (4.2) for every $n\in\mathbb{N}$ and $D\in\mathcal{D}$ there
is a nbhd $U\in\mathcal{U}_{n}$ of $D$ such that for every $k\geq 0$
$h_{n+k}(D)\subset h_{n}(U)$
hence
$\chi(D)=\lim_{k\to\infty}h_{n+k}(D)\subset\mathrm{cl}\thinspace h_{n}(U).$
It is a fact that ${\mathrm{diam}}\thinspace\mathrm{cl}\thinspace
A={\mathrm{diam}}\thinspace A$ for an arbitrary subset $A$ of a metric space
so by (2)(b) we obtain $\mathrm{diam}\thinspace\chi(D)<\varepsilon_{n}$ for each $n$ and by (2)(b′) for some $C_{m}\supset D$ we obtain $\mathrm{diam}\thinspace\chi(D)<\varepsilon_{n}$ for each $n\geq m$, which implies that $\chi(D)$ is a point. To show that the $\chi$-preimage
of a point is not bigger than a decomposition element, observe that for
different elements $D_{1}$ and $D_{2}$ and for large enough $n$ by (1)(b)
there are $U,V\in\mathcal{U}_{n}$ which lie in the small
$\varepsilon_{n}$-nbhds of $D_{1}$ and $D_{2}$, respectively, hence $U$ and
$V$ are disjoint. Then similarly to above,
$\chi(D_{1})\subset\mathrm{cl}\thinspace h_{n}(U)\mbox{\ \ \ \ and\ \ \ \
}\chi(D_{2})\subset\mathrm{cl}\thinspace h_{n}(V),$
which implies that $\chi(D_{1})$ and $\chi(D_{2})$ are different so the sets
$\chi^{-1}(x)$, where $x\in X$, are exactly the decomposition elements of
$\mathcal{D}$.
This means that $\pi\circ\chi^{-1}$ is a bijection. Its inverse is continuous
because $\chi$ is continuous and $\pi$ is a closed map since the decomposition
is usc. To prove that $\pi\circ\chi^{-1}$ is continuous it is enough to show
that $\chi$ is a closed map. Let $A\subset X$ be a closed set and observe that
a point $y\in X$ is in $X-\chi(A)$ if and only if
$\chi^{-1}(y)\cap\chi^{-1}(\chi(A))=\emptyset$, which holds exactly if
$\chi^{-1}(y)\cap\pi^{-1}(\pi(A))=\emptyset$. This means that in order to show
that $\chi(A)$ is closed it is enough to prove that for any decomposition
element $D$ such that $D\cap\pi^{-1}(\pi(A))=\emptyset$ the point $\chi(D)$ is
an inner point of $X-\chi(A)$. If $\varepsilon_{n}$ is small enough, then
since $D\cap\pi^{-1}(\pi(A))=\emptyset$, by (1)(b) for every
$U_{n}\in\mathcal{U}_{n}$ containing $D$ we have
$\mathrm{cl}\thinspace U_{n}\cap\mathrm{cl}\thinspace
St(A,\mathcal{U}_{n})=\emptyset.$
By (4.2) we have $\chi(D)\in h_{n}(\mathrm{cl}\thinspace U_{n})$ and obviously
$\chi(A)\subset h_{n}(\mathrm{cl}\thinspace
St(A,\mathcal{U}_{n}))=\mathrm{cl}\thinspace h_{n}(St(A,\mathcal{U}_{n}))$
so finally we get
$\chi(D)\in X-\mathrm{cl}\thinspace h_{n}(St(A,\mathcal{U}_{n}))\subset
X-\chi(A)$
implying that $\chi(D)$ is an inner point of $X-\chi(A)$. As a consequence the
map $\pi\circ\chi^{-1}$ is a homeomorphism. We have to prove that it is
$\mathcal{W}$-close to $\pi$. By (4.1) for every $D$ and for all $n$
there exist $U_{n}\in\mathcal{U}_{n}$ nbhds of $D$ such that
$U_{0}=h_{0}(U_{0})\supset h_{1}(U_{1})\supset\cdots\supset
h_{n}(U_{n})\supset\cdots.$
So $h_{n}(D)\subset U_{0}$ for every $n$ and then $\chi(D)\in\mathrm{cl}\thinspace U_{0}$. Since the collection of the closures of the elements in $\mathcal{U}_{0}$ refines the cover $\pi^{-1}(\mathcal{W})$, both $D$ and $\chi(D)$ are in the same $\pi^{-1}(W)$ for some $W\in\mathcal{W}$. This implies that if we
denote $\chi(D)$ by $x$, then both of $\chi^{-1}(x)$ and $x$ are in
$\pi^{-1}(W)$. As a result $\chi(\chi^{-1}(x))=x$ and $\chi(x)$ are in
$\chi(\pi^{-1}(W))$ so by applying the map $\pi\circ\chi^{-1}$ we get that
$\pi\circ\chi^{-1}(x)\mbox{\ \ \ \ and\ \ \ \
}\pi\circ\chi^{-1}(\chi(x))=\pi(x)$
are in $W$. This shows that $\pi\circ\chi^{-1}$ is $\mathcal{W}$-close to
$\pi$. ∎
The goal of most of the applications of shrinking is to obtain some kind of
embedding of a manifold by the process of approximating a given map. Let
$\mathbb{R}^{d}_{+}$ denote the closed halfspace in $\mathbb{R}^{d}$.
###### Definition 4.5 (Flat subspace and locally flat embedding).
Let $A\subset X$ be a chosen subspace of a topological space $X$. We say that
the subspace $B\subset X$ homeomorphic to $A$ is _flat_ if there is a
homeomorphism $h\colon X\to X$ such that $h(B)=A$. Let $X$ be an
$n$-dimensional manifold. An embedding $e\colon B\to X$ of a $d$-dimensional
manifold $B$ is _locally flat_ if every point $e(b)$ has a nbhd $U$ in $X$
such that the pair
$(U,e(B)\cap U)\mbox{\ is homeomorphic to\
}\left\\{\begin{array}[]{ccc}(\mathbb{R}^{n},\mathbb{R}^{d})&\mbox{if $b$ is
an inner point of $B$}\\\ (\mathbb{R}^{n},\mathbb{R}^{d}_{+})&\mbox{if $b$ is
a boundary point of $B$}.\end{array}\right.$
###### Definition 4.6 (Collared and bicollared subspaces).
The subspace $A\subset X$ is _collared_ if there is an embedding $f\colon
A\times[0,1)\to X$ onto an open subspace of $X$ such that $f(a,0)=a$. The
subspace $A\subset X$ is _bicollared_ if there is an embedding $f\colon
A\times(-1,1)\to X$ such that $f(a,0)=a$. The subspace $A\subset X$ is
_locally collared_ (or _locally bicollared_) if every $a\in A$ has a nbhd $U$
in $X$ such that $A\cap U$ is collared (resp. bicollared).
A typical application of shrinking is the following.
###### Theorem 4.7.
Let $X$ be an $n$-dimensional manifold with boundary $\partial X$. Then
$\partial X$ is collared in $X$.
###### Proof.
Attach the manifold $\partial X\times[0,1]$ to $X$ along $\partial X\subset X$
by the identification
$\varphi\colon\partial X\times\\{0\\}\to X,$ $\varphi(x,0)=x.$
In this way we get a manifold $\tilde{X}$, which contains the attached
$\partial X\times[0,1]$ as a subset. The boundary of $\tilde{X}$ is $\partial
X\times\\{1\\}$ and so the boundary $\partial\tilde{X}$ is obviously collared.
Let $\mathcal{D}$ be the decomposition of $\tilde{X}$ into the intervals
$\\{\\{x\\}\times[0,1]:x\in\partial X\\}$ and the singletons in
$\tilde{X}-\partial X\times[0,1]$. Then $X$ and the quotient space
$\tilde{X}_{\mathcal{D}}$ are homeomorphic by the map
$\alpha\colon X\to\tilde{X}_{\mathcal{D}},$ $\alpha(x)=[x],$
where $[x]$ denotes the equivalence class of $x$. Indeed, $\alpha$ is a
bijection mapping $X-\partial X$ to the classes consisting of single points
and mapping the boundary points $x\in\partial X$ to the class $[x]$. It is
easy to see that $\alpha$ and also $\alpha^{-1}$ are continuous so $\alpha$ is
a homeomorphism. If we prove that $\tilde{X}$ is also homeomorphic to the
decomposition space $\tilde{X}_{\mathcal{D}}$ by a map $\beta$ as the diagram
(a commutative triangle with vertices $X$, $\tilde{X}$ and $\tilde{X}_{\mathcal{D}}$, with $\alpha\colon X\to\tilde{X}_{\mathcal{D}}$ and $\beta\colon\tilde{X}\to\tilde{X}_{\mathcal{D}}$)
shows, then we obtain that $X$ and $\tilde{X}$ are homeomorphic through the
map $\beta^{-1}\circ\alpha$, which finishes the proof. A homeomorphism $\beta$
exists if we prove that ${\mathcal{D}}$ is shrinkable because then
$\pi\colon\tilde{X}\to\tilde{X}_{\mathcal{D}}$ is a near-homeomorphism. Let
$\mathcal{V}$ be an arbitrary open cover of $\tilde{X}$ and let $\mathcal{U}$
be a $\mathcal{D}$-saturated open cover of $\tilde{X}$. Let $\mathcal{W}$ be a
refinement of $\mathcal{V}$ such that $\mathcal{W}$ contains all the small
nbhds of the form $U_{x}\times(1-\varepsilon_{x},1]$ for all ${(x,1)}\in\partial\tilde{X}$ and for some appropriate $\varepsilon_{x}>0$ and relative nbhd $U_{x}\subset\partial X$. We also suppose that every element of $\mathcal{W}$ either is of this form or does not intersect $\partial\tilde{X}$. We will apply Theorem 4.4. Let
$C\subset\tilde{X}$ be a compact set and let $E\subset\tilde{X}$ be a compact
set containing the attached $\\{x\\}\times[0,1]$ for all
$(x,1)\in\partial\tilde{X}$ such that $\\{x\\}\times[0,1]$ intersects $C$.
Since $E$ is compact, there are finitely many nbhds in $\mathcal{W}$ and also
in $\mathcal{U}$ which cover $E$. Let us restrict ourselves to these finitely
many nbhds. Let $\varepsilon>0$ be such that $\varepsilon<\varepsilon_{x}$ for
all these finitely many points $(x,1)\in\partial\tilde{X}$. Let $U$ be the
union of the chosen finitely many nbhds in $\mathcal{U}$ and let $\delta>0$ be
such that for a metric on $\tilde{X}$ the $\delta$-nbhd of
$\bigcup_{(x,1)\in\partial\tilde{X}\cap E}\\{x\\}\times[0,1]$
is inside $U$. Then define a homeomorphism $h\colon\tilde{X}\to\tilde{X}$
which maps
$\bigcup_{(x,1)\in\partial\tilde{X}\cap E}\\{x\\}\times[0,1]$
into
$\bigcup_{(x,1)\in\partial\tilde{X}\cap E}\\{x\\}\times(1-\varepsilon,1]$
by mapping each arc $\\{x\\}\times[0,1]$, where $(x,1)\in\partial\tilde{X}$,
into itself. We suppose that the support of $h$ is inside the $\delta/2$-nbhd
of $\bigcup_{(x,1)\in\partial\tilde{X}\cap E}\\{x\\}\times[0,1]$. This $h$
satisfies (3) of Theorem 4.4 so $\pi$ is a near-homeomorphism which yields the
claimed homeomorphism $\beta$. ∎
## 5\. Shrinkable decompositions
The following notions are often used to describe types of decompositions which
turn out to be shrinkable.
###### Definition 5.1.
Let $\mathcal{D}$ be a usc decomposition of $\mathbb{R}^{n}$.
* •
$\mathcal{D}$ is _cell-like_ if every decomposition element is cell-like,
* •
$\mathcal{D}$ is _cellular_ if every decomposition element is cellular,
* •
the decomposition elements are _flat arcs_ if for every $D\in\mathcal{D}$
there is a homeomorphism $h\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ such that
$h(D)$ is a straight line segment,
* •
$\mathcal{D}$ is _starlike_ if every decomposition element $D$ is a starlike
set, that is, $D$ is a union of compact straight line segments with a common
endpoint $x_{0}\in\mathbb{R}^{n}$,
* •
$\mathcal{D}$ is _starlike-equivalent_ if for every $D\in\mathcal{D}$ there is
a homeomorphism $h\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ such that $h(D)$ is
starlike,
* •
$\mathcal{D}$ is _thin_ if for every $D\in\mathcal{D}$ and every nbhd $U$ of
$D$ there is an $n$-dimensional ball $B\subset\mathbb{R}^{n}$ such that
$D\subset B\subset U$ and $\partial B$ is disjoint from the non-degenerate
elements of $\mathcal{D}$,
* •
$\mathcal{D}$ is _locally shrinkable_ if for each $D\in\mathcal{D}$ we have
that for every nbhd $U$ of $D$ and open cover $\mathcal{V}$ of
$\mathbb{R}^{n}$ there is a homeomorphism
$h\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ with support $U$ such that
$h(D)\subset V$ for some $V\in\mathcal{V}$,
* •
$\mathcal{D}$ _inessentially spans_ the disjoint closed subsets
$A,B\subset\mathbb{R}^{n}$ if for every $\mathcal{D}$-saturated open cover
$\mathcal{U}$ of $\mathbb{R}^{n}$ there is a homeomorphism
$h\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ which is $\mathcal{U}$-close to the
identity and no element of $\mathcal{D}$ meets both of $h(A)$ and $h(B)$,
* •
the decomposition element $D$ has _embedding dimension_ $k$ if for every
$(n-k-1)$-dimensional smooth submanifold $M$ of $\mathbb{R}^{n}$ and open
cover $\mathcal{V}$ of $\mathbb{R}^{n}$ there is a homeomorphism
$h\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ which is $\mathcal{V}$-close to the
identity, $h(M)\cap D=\emptyset$ and this is not true for $(n-k)$-dimensional
submanifolds.
Most of these notions have corresponding versions in arbitrary manifolds or spaces. A condition that is obviously satisfied by Euclidean spaces of dimension at least $5$ (two maps of $2$-dimensional disks into $\mathbb{R}^{n}$ can be perturbed to have disjoint images by general position once $2+2<n$) is the following.
###### Definition 5.2 (Disjoint disks property).
The metric space $X$ has the _disjoint disks property_ if for arbitrary maps
$f_{1}$ and $f_{2}$ from $D^{2}$ to $X$ and for every $\varepsilon>0$ there
are approximating maps $g_{i}$ from $D^{2}$ to $X$ $\varepsilon$-close to
$f_{i}$, $i=1,2$, such that $g_{1}(D^{2})$ and $g_{2}(D^{2})$ are disjoint.
The next theorem [Ed78] is one of the fundamental results of decomposition
theory, we omit its proof here.
###### Theorem 5.3.
Let $X$ be an at least $5$-dimensional manifold and let $\mathcal{D}$ be a
cell-like decomposition of $X$. Then $\mathcal{D}$ is shrinkable if and only
if $X_{\mathcal{D}}$ is finite dimensional and has the disjoint disks
property.
Recall that a separable metric space is finite dimensional if every point has
arbitrarily small nbhds having one less dimensional frontiers and dimension
$-1$ is by definition the dimension of the empty set. For example, a manifold
is finite dimensional.
In the following statement we enumerate several conditions which imply that a
(usc) decomposition is shrinkable.
###### Theorem 5.4.
The following decompositions are strongly shrinkable:
1. (1)
cell-like usc decompositions of a $2$-dimensional manifold,
2. (2)
countable usc decompositions of $\mathbb{R}^{n}$ if the decomposition elements
are flat arcs,
3. (3)
countable and starlike usc decompositions of $\mathbb{R}^{n}$,
4. (4)
countable and starlike-equivalent usc decompositions of $\mathbb{R}^{3}$,
5. (5)
null and starlike-equivalent usc decompositions of $\mathbb{R}^{n}$,
6. (6)
thin usc decompositions of $3$-manifolds,
7. (7)
countable and thin usc decompositions of $n$-dimensional manifolds,
8. (8)
countable and locally shrinkable usc decompositions of a complete metric space
if $\cup\mathcal{H}_{\mathcal{D}}$ is $G_{\delta}$,
9. (9)
monotone usc decompositions of $n$-dimensional manifolds if $\mathcal{D}$
inessentially spans every pair of disjoint, bicollared $(n-1)$-dimensional
spheres,
10. (10)
null and cell-like decompositions of smooth $n$-dimensional manifolds if the
embedding dimension of every $D\in\mathcal{D}$ is $\leq n-3$.
Before proving Theorem 5.4 let us make some observations and preparations. At
first, note that there are usc decompositions of $\mathbb{R}^{3}$ into
straight line segments which are not shrinkable: in the proof of Proposition
3.1 for any given compact metric space $Y$ we constructed a decomposition of
$\mathbb{R}^{3}$ into straight line segments and singletons such that $Y$ is a
subspace of the decomposition space. Since $\mathbb{R}^{3}$ is a complete
metric space and the decomposition is usc, it is shrinkable if and only if
$\pi$ is approximable by homeomorphisms. This means that if $Y$ cannot be
embedded into $\mathbb{R}^{3}$, then the decomposition space cannot be
homeomorphic to $\mathbb{R}^{3}$ and then this decomposition is not
shrinkable.
If the decomposition is countable, then we can shrink successively the
decomposition elements if there is a guarantee of not expanding an already
shrunken element while shrinking another one. The next proposition is a
technical tool for this process.
###### Proposition 5.5.
Let $\mathcal{D}$ be a countable usc decomposition of a locally compact metric
space $X$. Suppose for every $D\in\mathcal{D}$, for every $\varepsilon>0$ and
for every homeomorphism $f\colon X\to X$ there exists a homeomorphism $h\colon
X\to X$ such that
1. (1)
outside of the $\varepsilon$-nbhd of $D$ the homeomorphism $h$ is the same as
$f$,
2. (2)
$\mathrm{diam}\thinspace h(D)<\varepsilon$ and
3. (3)
for every $D^{\prime}\in\mathcal{D}$ we have $\mathrm{diam}\thinspace
h(D^{\prime})<\varepsilon+\mathrm{diam}\thinspace f(D^{\prime})$.
Then $\mathcal{D}$ is strongly shrinkable.
###### Sketch of the proof.
Let $\varepsilon>0$ and let $\mathcal{U}$ be a $\mathcal{D}$-saturated open
cover of $X$. We enumerate the non-degenerate elements of $\mathcal{D}$ which
have diameter at least $\varepsilon/2$ as $D_{1},D_{2},\ldots$. We can find
$\mathcal{D}$-saturated open sets $U_{1},U_{2},\ldots$ such that for all $n$
we have $D_{n}\subset U_{n}$ and all sets $U_{n}$ are pairwise disjoint or
coincide. These $U_{n}$ are subsets of sets in $\mathcal{U}$ and they will
ensure $\mathcal{U}$-closeness. We produce a sequence
${\mathrm{id}}=h_{0},h_{1},\ldots$ of self-homeomorphisms of $X$ and a
sequence $C_{1},C_{2},\ldots$ of $\mathcal{D}$-saturated closed nbhds of
$D_{1},D_{2},\ldots$, respectively, such that the following conditions are
satisfied for every $n\geq 1$:
1. (a)
$h_{n}|_{X-U_{n}}=h_{n-1}|_{X-U_{n}}$,
2. (b)
${\mathrm{diam}}\thinspace h_{n}(D_{n})<\varepsilon$,
3. (c)
for every $D\in\mathcal{D}$ we have ${\mathrm{diam}}\thinspace
h_{n}(D)<(1-\frac{1}{2^{n}})\frac{\varepsilon}{2}+{\mathrm{diam}}\thinspace
D$,
4. (d)
$h_{n+1}|_{C_{1}\cup\cdots\cup C_{n}}=h_{n}|_{C_{1}\cup\cdots\cup C_{n}}$,
5. (e)
if some $D\in\mathcal{D}$ is in $C_{n}$, then ${\mathrm{diam}}\thinspace
h_{n}(D)<\varepsilon$ and
6. (f)
$h_{n}=h_{n-1}$ if ${\mathrm{diam}}\thinspace h_{n-1}(D_{n})<\varepsilon$.
The sets $C_{n}$ serve as protective buffers in which no further motion will
occur. For $n=1$ by the conditions (1), (2) and (3) in the statement of
Proposition 5.5 with the choice $f={\mathrm{id}}$ we can find a homeomorphism
$h_{1}\colon X\to X$ satisfying (a), (b) and (c) and also an appropriate
$C_{1}$ such that (d) and (e) are satisfied as well. If $h_{k}$ and $C_{k}$
are defined already for $1\leq k\leq n$, then we find $h_{n+1}$ and $C_{n+1}$
as follows. If ${\mathrm{diam}}\thinspace h_{n}(D_{n+1})<\varepsilon$, then
let $h_{n+1}=h_{n}$. If the diameter of $h_{n}(D_{n+1})$ is at least
$\varepsilon$, then by the conditions (1), (2) and (3) with the choice
$f=h_{n}$ we can find a homeomorphism $h_{n+1}\colon X\to X$ satisfying
1. (i)
$h_{n+1}|_{X-U_{n+1}}=h_{n}|_{X-U_{n+1}}$
2. (ii)
${\mathrm{diam}}\thinspace h_{n+1}(D_{n+1})<\varepsilon/2^{n+2}$,
3. (iii)
for every $D\in\mathcal{D}$ we have ${\mathrm{diam}}\thinspace
h_{n+1}(D)<\varepsilon/{2^{n+2}}+{\mathrm{diam}}\thinspace h_{n}(D)$
furthermore (iii) and (c) imply that for every $D\in\mathcal{D}$ we have
${\mathrm{diam}}\thinspace
h_{n+1}(D)<\varepsilon/{2^{n+2}}+\left(1-\frac{1}{2^{n}}\right)\frac{\varepsilon}{2}+{\mathrm{diam}}\thinspace
D=\left(1-\frac{1}{2^{n+1}}\right)\frac{\varepsilon}{2}+{\mathrm{diam}}\thinspace
D$
so (a), (b) and (c) are satisfied. It is not too difficult to get (d) and (e)
with some $C_{n+1}$ as well. After having all $h_{1},h_{2},\ldots$ and
$C_{1},C_{2},\ldots$ with properties (a)-(f) it is easy to see by (d), (e) and
(f) that every $D\in\mathcal{D}$ which is in $C_{1}\cup\cdots\cup C_{n}$ is
shrunk by $h_{n}$ to size smaller than $\varepsilon$ and the later homeomorphisms $h_{n+i}$ do not modify this. If some $D\in\mathcal{D}$ had diameter smaller than
$\varepsilon/2$ originally, then (c) implies that its diameter is smaller than
$\varepsilon$ during all the process. These imply that the sequence
$h_{1},h_{2},\ldots$ is locally stationary and it converges to a shrinking
homeomorphism $h$. ∎
We are going to give a sketch of the proof of Theorem 5.4. For the detailed
proof of (1) see [Mo25], for the proofs of (2) and (3) see [Bi57], for the
proof of (4) see [DS83], for (5) see [Be67], for (6) see [Wo77] and for (7),
(8), (9) and (10) see [Pr66], [Bi57], [Ca78] and [Ca79, Ed16], respectively.
###### Sketch of the proof of Theorem 5.4.
(1) follows from the fact that in a $2$-dimensional manifold $X$ a cell-like
decomposition is thin. The reason for this is that an arbitrarily small
$2$-dimensional disk nbhd $B$ with the property $\partial
B\cap(\cup\mathcal{H}_{\mathcal{D}})=\emptyset$ can be obtained by finding the
circle $\partial B$ in $X$ as a limit of a sequence of maps $f_{n}\colon
S^{1}\to X$ avoiding smaller and smaller decomposition elements. A thin usc
decomposition of a $2$-dimensional manifold is shrinkable if the points of
$\pi(\bigcup\mathcal{H}_{\mathcal{D}})$ do not converge to each other in a too
complicated way. Since the quotient space $X_{\mathcal{D}}$ can be filtered in
a way which implies this, the decomposition map $\pi$ can be successively
approximated by maps which are homeomorphisms on the induced filtration in
$X$.
(2)-(5) follow from Proposition 5.5: the flat arcs, starlike sets and
starlike-equivalent sets can be shrunk successively because of geometric
reasons.
To prove (6) and (7) we also use Proposition 5.5. Let $D\in\mathcal{D}$ be a
non-degenerate decomposition element, $U$ a nbhd of $D$ and let $B$ be a ball
such that $D\subset B\subset U$ and $\partial B$ is disjoint from the non-
degenerate elements of $\mathcal{D}$. After applying a self-homeomorphism of
$X$, we can suppose that $B$ is the unit ball. Let $k$ be some large enough
integer and let $1>\delta_{0}>\delta_{1}>\cdots>\delta_{k-1}>0$ be such that
if $D^{\prime}\in\mathcal{D}$ intersects the $\delta_{n+1}$-nbhd of $\partial
B$, then $D^{\prime}$ is inside the $\delta_{n}$-nbhd of $\partial B$. Define
a homeomorphism $f\colon B\to B$ which is the identity on $\partial B$, keeps
the center of $B$ fixed and on each radius $R$ the point at distance
$\delta_{n}$ from $\partial B$, where $1\leq n\leq k-1$, is mapped to the
point at distance $n/k$ from the center. We require that the homeomorphism $f$
is linear between these points. After applying this homeomorphism, every
$D^{\prime}\in\mathcal{D}$ in $B$ is shrunk to size small enough.
In the proof of (8) we enumerate the non-degenerate decomposition elements and
we construct a sequence of homeomorphisms of the ambient space which shrink
the decomposition elements successively using the locally shrinkable property.
To prove (9) for a given $\varepsilon>0$ we cover the manifold by two
collections $\\{B_{\alpha}\\}_{\alpha\in A}$ and
$\\{B_{\alpha}^{\prime}\\}_{\alpha\in A}$ of $n$-dimensional balls such that
$B_{\alpha}\subset\mathrm{int}\thinspace B_{\alpha}^{\prime}$ and
$\mathrm{diam}\thinspace B_{\alpha}^{\prime}<\varepsilon$. Then the closed
sets $\pi(\partial B_{\alpha})$ and $\pi(\partial B_{\alpha}^{\prime})$ are
made disjoint by applying homeomorphisms $h_{\alpha}$ successively. This
implies that the homeomorphism $h$ obtained by composing all the
homeomorphisms $h_{\alpha}$ is such that for every $D\in\mathcal{D}$ the set
$h(D)$ is fully contained in some ball $B_{\alpha}^{\prime}$ so its diameter
is smaller than $\varepsilon$.
In the proof of (10) at first we obtain that every decomposition element $D$
is cellular because of the following. By assumption $D$ is cell-like and it
behaves like an at most $(n-3)$-dimensional submanifold so the $2$-skeleton of
the ambient manifold is disjoint from $D$. This means that $D$ satisfies the
cellularity criterion since the $2$-skeleton carries the fundamental group.
Hence $D$ is cellular, which implies that it is contained in an $n$-dimensional
ball and also in a starlike-equivalent set $C$ of embedding dimension $\leq
n-2$. Now it is possible to use an argument similar to the proof of
Proposition 5.5: we can shrink $C$ to become smaller than an $\varepsilon>0$
by successively compressing $C$ and in each iteration carefully controlling
and avoiding other decomposition elements close to $C$ which would become too
large during the compression procedure. ∎
## References
* [Al27] P. S. Alexandroff, Über stetige Abbildungen kompakter Räume, Math. Ann. 96 (1927), 555–571.
* [AC79] F. D. Ancel and J. W. Cannon, The locally flat approximation of cell-like embedding relations, Ann. of Math. 109 (1979), 61–86.
* [Be67] R. J. Bean, Decompositions of $E^{3}$ with a null sequence of starlike equivalent nondegenerate elements are $E^{3}$, Illinois J. Math. 11 (1967), 21–23.
* [BKKPR] S. Behrens, B. Kalmár, M. H. Kim, M. Powell, and A. Ray, editors, The disc embedding theorem, Oxford University Press, to appear.
* [Bi52] R. H. Bing, A Homeomorphism Between the $3$-Sphere and the Sum of Two Solid Horned Spheres, Ann. of Math. 56 (1952), 354–362.
* [Bi55] by same author, Partially continuous decompositions, Proceedings of the Amer. Math. Soc. (1955), 124–133.
* [Bi57] by same author, Upper semicontinuous decompositions of $E^{3}$, Ann. of Math. 65 (1957), 363–374.
* [Bi83] by same author, The geometric topology of $3$-manifolds, Amer. Math. Soc., 1983.
* [Ca78] J. W. Cannon, $(E^{3}/X)\times E^{1}\approx E^{4}$ ($X$, a cell-like set): an alternative proof, Trans. Amer. Math. Soc. 240 (1978), 277–285.
* [Ca79] by same author, Shrinking cell-like decompositions of manifolds. Codimension three, Ann. of Math. 110 (1979), 83–112.
* [Da86] R. J. Daverman, Decompositions of manifolds, Academic Press, 1986.
* [DV09] R. J. Daverman and G. A. Venema, Embeddings in manifolds, Amer. Math. Soc., 2009.
* [DS83] R. Denman and M. Starbird, Shrinking countable decomposition of $S^{3}$, Trans. Amer. Math. Soc. 276 (1983), 743–756.
* [Du66] J. Dugundji, Topology, Allyn and Bacon, 1966.
* [Ed78] R. D. Edwards, The topology of manifolds and cell-like maps, Proc. Inter. Cong. Math., Helsinki 1978, 111–127.
* [Ed16] by same author, Approximating certain cell-like maps by homeomorphisms, arXiv:1607.08270.
* [Fr82] M. H. Freedman, The topology of four-dimensional manifolds, J. Diff. Geom. 17 (1982), 357–453.
* [FQ90] M. H. Freedman and F. Quinn, Topology of $4$-manifolds, Princeton Univ. Press, Princeton, N.J., 1990.
* [Ha27] F. Hausdorff, Mengenlehre, zweite, neubearbeitete Auflage, Verlag Walter de Gruyter & Co., Berlin, 1927.
* [HW41] W. Hurewicz and H. Wallman, Dimension theory, Princeton University Press, 1941.
* [Iz88] S. A. Izar, Funcoes de Morse e topologia das superficies I: O grafo de Reeb de $f\colon M\to\mathbb{R}$, Metrica no. 31, Estudo e Pesquisas em Matematica, IBILCE, Brazil, 1988.
* [Mo25] R. L. Moore, Concerning upper semicontinuous collections of compacta, Trans. Amer. Math. Soc. 27 (1925), 416–428.
* [NW37] M. H. A. Newman and J. H. C. Whitehead, On the group of a certain linkage, The Quarterly Journal of Mathematics, 8 (1937), 14–21.
* [Pr66] T. M. Price, A necessary condition that a cellular upper semicontinuous decomposition of $E^{n}$ yield $E^{n}$, Trans. Amer. Math. Soc. 122 (1966), 427–435.
* [Re46] G. Reeb, Sur les points singuliers d’une forme de Pfaff completement integrable ou d’une fonction numerique, Comptes Rendus Hebdomadaires des Seances de l’Academie des Sciences 222 (1946), 847–849.
* [Sa20] O. Saeki, Reeb spaces of smooth functions on manifolds, arxiv:2006.01689
* [St56] A. H. Stone, Metrizability of Decomposition Spaces, Proc. of the Amer. Math. Soc. 7 (1956), 690–700.
* [Wh35] J. H. C. Whitehead, A certain open manifold whose group is unity, The Quarterly Journal of Mathematics, 6 (1935), 268–279.
* [Wo77] E. P. Woodruff, Decomposition spaces having arbitrarily small neighborhoods with $2$-sphere boundaries, Trans. Amer. Math. Soc. 232 (1977), 195–204.
* [Yo18] S. Yokura, On decomposition spaces, Alexandroff spaces and related topics, RIMS (2018), 5–26.
# Defeating Proactive Jammers Using Deep Reinforcement Learning for Resource-
Constrained IoT Networks
Abubakar S. Ali1, Shimaa Naser1, and Sami Muhaidat*1,2
1Department of Electrical Engineering and Computer Science, Khalifa
University, Abu Dhabi 127788, UAE
2Department of Systems and Computer Engineering, Carleton University, Ottawa,
ON K1S 5B6, Canada
###### Abstract
Traditional anti-jamming techniques like spread spectrum, adaptive power/rate
control, and cognitive radio, have demonstrated effectiveness in mitigating
jamming attacks. However, their robustness against the growing complexity of
internet-of-things (IoT) networks and diverse jamming attacks is still limited.
To address these challenges, machine learning (ML)-based techniques have
emerged as promising solutions. By offering adaptive and intelligent anti-
jamming capabilities, ML-based approaches can effectively adapt to dynamic
attack scenarios and overcome the limitations of traditional methods. In this
paper, we propose a deep reinforcement learning (DRL)-based approach that
utilizes state input from realistic wireless network interface cards. We train
five different variants of deep Q-network (DQN) agents to mitigate the effects
of jamming with the aim of identifying the most sample-efficient, lightweight,
robust, and least complex agent that is tailored for power-constrained
devices. The simulation results demonstrate the effectiveness of the proposed
DRL-based anti-jamming approach against proactive jammers, regardless of their
jamming strategy which eliminates the need for a pattern recognition or
jamming strategy detection step. Our findings present a promising solution for
securing IoT networks against jamming attacks and highlight substantial
opportunities for continued investigation and advancement within this field.
###### Index Terms:
Jamming, anti-jamming, cognitive radio, deep reinforcement learning
## I Introduction
Cognitive radio networks (CRNs) have emerged as a revolutionary paradigm in
wireless communication, offering intelligent means to optimize the available spectrum resources through dynamic channel identification [1]. Nevertheless, the open nature of wireless communication channels exposes CRNs to potential security breaches, particularly jamming attacks, which can degrade network performance and significantly reduce the throughput [2]. Traditional jamming
countermeasures, such as frequency hopping or direct sequence spread spectrum
(DSSS), have inherent limitations, especially when confronted with advanced
jammers that are capable of detecting and disrupting these techniques [3].
Although game-theoretical strategies have been explored to address this
issue, such techniques assume impractical preconditions like a priori
knowledge of the perturbation pattern and can falter when faced with rapidly
changing jamming strategies [4, 5, 6].
Deep reinforcement learning (DRL), a blend of reinforcement learning and deep
learning, has been spotlighted due to its adaptability to dynamic environments
and ability to learn from raw data, without the need for pre-existing
knowledge. In the context of anti-jamming systems, DRL has been employed in
various ways in multiple works. For instance, the authors of [7] proposed a
deep anti-jamming reinforcement learning algorithm (DARLA) that used raw
spectrum data as the environmental state, addressing the anti-jamming problem
in a dynamic environment. Similarly, the work in [8] proposed a sequential
deep reinforcement learning algorithm (SDRLA) to improve anti-jamming
performance. Other research has introduced wideband autonomous cognitive
radios [9], transformer encoder-like Q-networks [10], and unmanned aerial
vehicle (UAV) jammers modeled as partially observable Markov decision
processes [11]. Some studies have also used the signal-to-interference-plus-
noise ratio (SINR) to enhance anti-jamming techniques [12, 13]. However, the
aforementioned studies relied on supplementary equipment or data such as raw
spectrum data or SINR which can be energy-inefficient and difficult to
acquire, rendering them unsuitable for resource-constrained internet-of-things
(IoT) networks.
In our prior study [14], we introduced a novel approach that uses a single
vector of clear channel assessment (CCA) information as the state input. This
simplifies the environmental state representation, hence, reducing the
computational complexity of the neural network. Our previous work also was a
departure from the approach presented in [8] as it involved a generic DRL
agent capable of effectively operating within dynamic jamming pattern
environments without requiring a preliminary pattern recognition process.
However, despite these capabilities, the CCA-based method faces some
challenges, particularly related to the information extraction from WLAN
network interface cards (NICs) and its efficacy against random channel hopping
jamming. In this paper, we strive to overcome these challenges by proposing an
improved anti-jamming scheme. Specifically, we exploit a novel radio frequency
(RF)-jamming detection testbed [15], utilize the spectrum sensing capabilities
of WLAN NICs, and apply ML algorithms to detect and avoid jamming attacks.
Additionally, we conduct a comprehensive investigation of different agent
alternatives to optimize the anti-jamming performance in dynamic pattern
jamming scenarios.
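To make the class of agents under discussion concrete, the following is a minimal, generic deep Q-network sketch in Python/PyTorch for per-slot channel selection. The state definition (one sensing value per channel), network size, replay buffer, and all hyper-parameters are illustrative assumptions only and do not describe the five DQN variants evaluated in this paper.

```python
# Minimal, generic DQN channel-selection sketch (PyTorch). All architecture
# and hyper-parameter choices here are assumptions for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_CHANNELS = 8           # assumed number of selectable channels
STATE_DIM = N_CHANNELS   # assumed state: one sensing value per channel


class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim=STATE_DIM, n_actions=N_CHANNELS, gamma=0.95,
                 lr=1e-3, eps=1.0, eps_min=0.05, eps_decay=0.995):
        self.q = QNet(state_dim, n_actions)
        self.target = QNet(state_dim, n_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.memory = deque(maxlen=10_000)
        self.gamma, self.eps = gamma, eps
        self.eps_min, self.eps_decay = eps_min, eps_decay
        self.n_actions = n_actions

    def act(self, state):
        # Epsilon-greedy choice of the transmission channel for this slot.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

    def remember(self, s, a, r, s_next, done):
        self.memory.append((s, a, r, s_next, float(done)))

    def train_step(self, batch_size=64):
        if len(self.memory) < batch_size:
            return
        s, a, r, s2, d = zip(*random.sample(self.memory, batch_size))
        s = torch.as_tensor(s, dtype=torch.float32)
        s2 = torch.as_tensor(s2, dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64)
        r = torch.as_tensor(r, dtype=torch.float32)
        d = torch.as_tensor(d, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target(s2).max(dim=1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        self.eps = max(self.eps_min, self.eps * self.eps_decay)

    def sync_target(self):
        # Copy the online network into the target network (call periodically).
        self.target.load_state_dict(self.q.state_dict())
```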
Figure 1: The system topology is composed of the transmitter, receiver, and
jammer. The transmitter tries to communicate with the receiver in the presence
of a jamming attack.
## II System Model and Formulation
In this section, we describe the system, jammer, and signal models under
jamming attack as illustrated in Fig. 1. We consider the UNII-1 band of the
5GHz radio spectrum and assume that the radio environment consists of one user
(a transmitter-receiver pair) against one jammer. A novel aspect of our model
is the presence of an agent at the transmission end, which formulates real-
time anti-jamming strategies. These strategies are then shared with the
receiver through a reliable control link. We also assume that the transmitter
possesses broad-band spectrum sensing capabilities [14]. For ease of analysis,
we segment the continuous time into discrete time slots, assuming that both
the user and the jammer operate within the same time slot. In each time slot
$t$, the user selects a frequency $f_{T,t}$ from the range
$\left[f_{L},f_{U}\right]$ for data transmission to the receiver, using power
$P_{T,t}$. Concurrently, the jammer attempts to interrupt this transmission by
selecting a frequency $f_{J,t}$ and power $P_{J,t}$ according to a predefined
jamming pattern.
### II-A Jammer Model
To investigate proactive jamming attack mitigation, we adopt a range of
jamming strategies to effectively counter such threats. Specifically, we
employ four distinct approaches: constant, sweeping, random, and dynamic
jamming techniques. In this model, we assume that the jammer jams a single
frequency $f_{J,t}$ with a varying distance $d_{JT}$ between the jammer and
transmitter and varying jamming powers $P_{J,t}$. Given the proactive nature
of the jammer, it is assumed to be unaware of the current state of the
channel. In the case of the constant jamming strategy, at the beginning of a
transmission session, the jammer picks one of the available channels of the RF
spectrum to jam consistently. Operating in a manner similar to the constant
jammer, the combined jammer possesses the ability to disrupt multiple
channels. However, it should be noted that not all channels can be jammed
simultaneously by this particular type of jammer. On the other hand, in the
sweeping jamming strategy, the jammer starts jamming the RF spectrum at
$f_{L}$ (i.e. $f_{J,t}=f_{L}$) and gradually increases its jamming frequency
until it reaches $f_{U}$ (i.e. $f_{J,t}=f_{U}$) in a sweeping fashion. The
change from one frequency to the adjacent one occurs at the beginning of each
time slot. In contrast, in the random jamming strategy, the
jammer randomly selects a frequency $f_{J,t}$ from the set of the available
frequencies $\left\\{f_{L},\cdots,f_{U}\right\\}$ and jams at the beginning of
every time slot. Finally, in the dynamic pattern jamming strategy, the jammer
has the capability of selecting one of the three aforementioned jamming
strategies (i.e. constant, sweeping, or random) at the beginning of each
transmission session.
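To make the jammer behaviours above concrete, the following minimal Python sketch simulates the per-slot frequency choice of the constant, sweeping, and random strategies, with the dynamic pattern jammer drawing one of them at the start of each session. The channel grid, session length, and function names are illustrative assumptions and not part of the testbed in [15].
```python
import random

CHANNELS = list(range(8))  # assumed N_c = 8 channels indexed 0..7 (f_L .. f_U)

def jammer_frequency(strategy, t, state):
    """Return the channel index jammed at time slot t for a given strategy.

    `state` carries the per-session choices (e.g. the fixed channel of a
    constant jammer); it is a plain dict used only for this illustration.
    """
    if strategy == "constant":
        return state["fixed_channel"]
    if strategy == "sweeping":
        # start at f_L and move one channel up per slot, wrapping around at f_U
        return CHANNELS[t % len(CHANNELS)]
    if strategy == "random":
        return random.choice(CHANNELS)
    raise ValueError(f"unknown strategy: {strategy}")

def new_session(dynamic=False):
    """At the start of a transmission session, fix the session-level choices."""
    strategy = random.choice(["constant", "sweeping", "random"]) if dynamic else "constant"
    return strategy, {"fixed_channel": random.choice(CHANNELS)}

# Example: a dynamic pattern jammer over one short session.
strategy, state = new_session(dynamic=True)
print(strategy, [jammer_frequency(strategy, t, state) for t in range(10)])
```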
### II-B Signal Model
The received discrete baseband signal $r[n]$ at the receiver after matched
filtering and sampling at the symbol intervals can be expressed as follows
$r\left[n\right]=\sqrt{P^{rx}_{T}}\;x\left[n\right]+\sqrt{P^{rx}_{J}}\;j\left[n\right]+w\left[n\right],$
(1)
where $x[n]$ and $j[n]$ represent the discrete-time baseband signals
transmitted by the transmitter and the jammer, respectively. Furthermore,
$w[n]$ denotes the zero-mean additive white Gaussian noise (AWGN) with
variance $\sigma^{2}$. Finally, $P^{rx}_{T}$ and $P^{rx}_{J}$ represent the
average received power from the transmit and the jamming signals,
respectively, which can be written as follows
$P^{rx}_{J}=\phi^{JR}P_{J,t},$ (2)
and
$P^{rx}_{T}=\phi^{TR}P_{T,t},$ (3)
where $\phi^{JR}=\gamma_{0}d_{JR}^{-\epsilon}$ and
$\phi^{TR}=\gamma_{0}d_{TR}^{-\epsilon}$ are the channel power gains of the
jammer-receiver and transmitter-receiver links, respectively. Also,
$\gamma_{0}$ represents the channel power gain at a reference distance of 1m.
$d_{JR}$ and $d_{TR}$ are the distances of the jammer-receiver and
transmitter-receiver links, respectively. Finally, $\epsilon\geq 2$ denotes
the path loss exponent.
### II-C Problem Formulation
The received SINR can therefore be expressed as follows
$\Theta=\frac{P^{rx}_{T}}{P^{rx}_{J}+\sigma^{2}},$ (4)
where $P^{rx}_{T}$, defined in (3), is the power received from the transmitted
signal at the receiver.
Consider $\Theta_{th}$ as the SINR threshold required for successful
transmission. The objective at time slot $t$ is to maximize the normalized
throughput, defined as $\mathcal{U}(f_{T,t})=\delta(\Theta\geq\Theta_{th})$,
where $\delta(x)$ is a function that equals 1 if $x$ is true, and 0 otherwise.
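As a rough numerical illustration of (2)-(4) and the normalized throughput indicator, the sketch below evaluates the utility for a slot in which the user collides with the jammer and for a slot in which it does not. The reference gain, path loss exponent, noise variance, distances, and powers are placeholder values chosen only for the example.
```python
def channel_gain(gamma_0, d, eps):
    """Channel power gain phi = gamma_0 * d^(-eps), as in (2)-(3)."""
    return gamma_0 * d ** (-eps)

def sinr(p_T, p_J, d_TR, d_JR, gamma_0=1e-3, eps=2.5, noise_var=1e-9):
    """Received SINR of (4); the caller passes p_J = 0 when the jammer
    occupies a different channel than the user."""
    p_rx_T = channel_gain(gamma_0, d_TR, eps) * p_T
    p_rx_J = channel_gain(gamma_0, d_JR, eps) * p_J
    return p_rx_T / (p_rx_J + noise_var)

def utility(f_T, f_J, p_T, p_J, d_TR, d_JR, theta_th=10.0):
    """Normalized throughput U(f_T) = 1 if SINR >= threshold, else 0."""
    p_J_eff = p_J if f_T == f_J else 0.0
    return float(sinr(p_T, p_J_eff, d_TR, d_JR) >= theta_th)

print(utility(f_T=3, f_J=3, p_T=0.1, p_J=0.01, d_TR=1.0, d_JR=0.2))  # collision -> 0.0
print(utility(f_T=3, f_J=5, p_T=0.1, p_J=0.01, d_TR=1.0, d_JR=0.2))  # evasion  -> 1.0
```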
## III Proposed DRL-Based Approach
In this section, we introduce a DRL-based anti-jamming scheme that obtains its
state information by scanning the entire spectrum.
### III-A MDP Formulation
We utilize the received power feature from the generated dataset to represent
the state vector $\mathbf{P_{t}}$. Specifically, the state vector is
represented as $\mathbf{P_{t}}=[p_{t,1},p_{t,2},\cdots,p_{t,N_{c}}]$, where
$p_{t,i}$ is the received power at time $t$ for frequency $i$. The size of the
state space is $\left|\mathcal{S}\right|=N_{c}$. In our formulation, the
action $a_{t}\in\\{f_{1},f_{2},\cdots,f_{N_{c}}\\}$ represents the selection
of frequency $i$ at time slot $t$. Similarly, the action space size is
$\left|\mathcal{A}\right|=N_{c}$. The transmitter-receiver pair aims to
achieve successful transmission with a low channel switching cost $\Gamma$.
Therefore, the reward at time slot $t$ can be expressed as
$r_{t}=\begin{cases}\mathcal{U}(f_{T,t})-\Gamma\delta(a_{t}\neq
a_{t-1})&\text{ if }f_{T,t}\neq f_{J,t}\\\ 0&\text{ if
}f_{T,t}=f_{J,t}.\end{cases}$ (5)
The reward function presented in (5) takes into account the throughput factor
and ignores the energy consumption factor. This is due to the fact that in the
current anti-jamming strategy, the transmit power is fixed. Furthermore, the
normalization of the reward values to 1 and 0 is valid since the considered
jammer is proactive. Based on this, upon obtaining the reward $r_{t}$, the
environment transitions to the next state $s_{t+1}$ based on a transition
probability $p(s_{t+1}|s_{t},a_{t})$. This probability represents the
likelihood of transitioning from state $s_{t}$ to state $s_{t+1}$ given the
action $a_{t}$. The initial state is denoted by $s_{0}$ and the terminal state
is the state at which the agent ceases decision-making, which is denoted by
$s_{T}$. The goal of the agent is to find the optimal policy,
$\pi{(s)}=\arg\max_{a}Q(s,a)$, that maps the state to the best action. The
optimal policy is found by learning the optimal action-value function,
$Q^{*}(s,a)$, using an RL algorithm such as DRL.
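The reward in (5) can then be written as a small helper on top of such a utility function; the function below is a minimal sketch with hypothetical argument names, not the exact implementation used in our simulator.
```python
def reward(f_T, f_J, a_t, a_prev, utility_value, gamma_cost):
    """Reward of (5): utility minus the channel-switching penalty when the
    chosen frequency escapes the jammer, and zero when it collides with it."""
    if f_T == f_J:
        return 0.0
    switch_penalty = gamma_cost if a_t != a_prev else 0.0
    return utility_value - switch_penalty

# A successful slot after a channel switch with Gamma = 0.1 yields 0.9.
print(reward(f_T=5, f_J=3, a_t=5, a_prev=3, utility_value=1.0, gamma_cost=0.1))
```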
Figure 2: Architecture of the proposed DDQN Q-network. Figure 3: Overview of
the training and deployment phases of the proposed DRL-based anti-jamming
approach.
### III-B Agent Design
We train five different agents to determine the most suitable strategy for
power-constrained devices. These agents include DQN, DQN with fixed targets,
DDQN, Dueling DQN, and DDQN with prioritized replay. Each agent has a unique
combination of neural network architecture, experience replay mechanism, and
target network update frequency. By training and evaluating the performance of
these agents, we aim to identify the most appropriate approach for power-
constrained devices in effectively countering proactive jamming attacks.
#### III-B1 DQN
The DQN algorithm is a model-free, online, off-policy RL method in which a
value-based RL agent is employed to train a Q-network that estimates and
returns future rewards [16, 17]. The selection of this type of agent is
motivated by the fact that our observation space is continuous, and our action
space is discrete. Our DQN algorithm implementation is presented in Algorithm
1.
The implemented DQN agent uses a function approximator in the form of a neural
network, whose weights $\theta_{Q}$ are updated with every iteration. The
Q-network is used to determine the Q-value of the action. The Q-network
comprises two hidden layers, as illustrated in Fig. 2, and a ReLU activation
function $f(x)=\textup{max}(0,x)$ is chosen [18]. The experience replay buffer
$\mathcal{D}$ stores the agent’s experience, which is the transition pair at
time-step $t$ and is defined as $(s_{t},a_{t},r_{t},s_{t+1})$.
The stochastic gradient descent (SGD) algorithm [19] is used during training
to update the weights $\theta_{t}$ at every time-step $t$.
$\mathbf{Initialize}$ $\theta_{Q},\epsilon_{t}=1,\delta,i=j=0,\textup{ and }K$;
while _j $<\left|\mathcal{E}\right|$_ do
set $s_{t}=s_{t_{0}}$;
while _t $<\left|\mathcal{T}\right|$_ do
$X_{t}\sim U\left(0,1\right)$;
if _$X_{t} <\epsilon_{t}$_ then
$a_{t}=\textup{random}(1,\cdots,N_{c})$;
else
$a_{t}=\underset{a_{t}}{\textup{arg max}}\;Q(s_{t},a_{t}\mid\theta_{Q})$;
end if
$a_{t}\mapsto\mathbb{T}$;
Obtain $r_{t}$ and $s_{t+1}$;
Store the experience $[s_{t},a_{t},r_{t},s_{t+1}]$ in $\mathcal{D}$;
Sample a random mini-batch of $K$ experiences from $\mathcal{D}$;
if _$s_{t}==s_{t_{f}}$_ then
$y_{t}=r_{t}$;
else
$y_{t}=\mathbb{E}_{s_{t},a_{t}}[r_{s_{t},s_{t+1},a_{t}}+\gamma
Q_{\pi}(s_{t+1},a_{t_{\textup{max}}}\mid\theta_{Q})\mid s_{t},a_{t}]$;
end if
Update Q-network parameters $\theta_{Q}=\theta_{t}-\eta\nabla
L_{t}(\theta_{t})$;
where
$L_{t}(\theta_{t})=\mathbb{E}_{s_{t},a_{t}}[(y_{t}-Q_{\pi}(s_{t},a_{t};\theta_{t}))^{2}]$;
Update the exploration rate $\epsilon_{t+1}=\epsilon_{t}-\delta$;
Set $s_{t}=s_{t+1}$;
$t=t+1$;
end while
$j=j+1$;
end while
$\mathbf{Output}$ optimal policy $\pi^{*}$;
Algorithm 1 DQN Algorithm for Anti-Jamming.
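As a complement to the pseudocode in Algorithm 1, the sketch below shows one epsilon-greedy action selection and one mini-batch SGD update of a small two-hidden-layer Q-network in PyTorch. The layer widths, learning rate, and replay handling are illustrative assumptions rather than the exact configuration used in our experiments.
```python
import random
from collections import deque

import torch
import torch.nn as nn

N_CHANNELS = 8  # assumed number of channels, i.e. the size of state and action spaces

# Two-hidden-layer Q-network with ReLU activations, as in Fig. 2 (widths are illustrative).
q_net = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_CHANNELS),
)
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores (state, action, reward, next_state) tuples

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the N_c channels."""
    if random.random() < epsilon:
        return random.randrange(N_CHANNELS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32, gamma=0.95):
    """One SGD update on a random mini-batch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s_next = zip(*batch)
    s = torch.as_tensor(s, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(r, dtype=torch.float32)
    s_next = torch.as_tensor(s_next, dtype=torch.float32)
    with torch.no_grad():  # bootstrap target from the same online network (plain DQN)
        target = r + gamma * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```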
#### III-B2 DQN with Fixed Targets
This variant of DQN maintains a separate target network whose weights are
updated less frequently than those of the online network, reducing the risk of
oscillations and instability during learning. The algorithm is otherwise
similar to DQN; the slower target updates are achieved by increasing the value
of $C$ (the number of steps between target network updates). The neural network
architecture and other components remain unchanged from the DQN architecture.
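In code, the only change with respect to the previous sketch is that the bootstrap target is computed by a periodically synchronised copy of the online Q-network; the synchronisation period below is a placeholder value.
```python
import copy

def make_target_updater(online_net, sync_every=500):
    """Return a target network plus a hook that copies the online weights
    into it every `sync_every` steps (500 is an illustrative placeholder)."""
    target_net = copy.deepcopy(online_net)

    def maybe_sync(step):
        if step % sync_every == 0:
            target_net.load_state_dict(online_net.state_dict())

    return target_net, maybe_sync

# With a target network, the bootstrap target in the previous sketch becomes
#     target = r + gamma * target_net(s_next).max(dim=1).values
```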
#### III-B3 DDQN
The Double Deep Q-Network (DDQN) is an improvement over DQN that reduces the
overestimation of Q-values by using two separate networks to estimate the
current and target Q-values. The neural network architecture and other
components remain unchanged from the DQN architecture.
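A minimal sketch of the DDQN bootstrap target, assuming the same illustrative online and target networks as in the previous sketches:
```python
import torch

def ddqn_target(online_net, target_net, r, s_next, gamma=0.95):
    """Double DQN bootstrap target: the online network selects the greedy
    action, while the target network evaluates it, which reduces the
    overestimation bias of plain DQN."""
    with torch.no_grad():
        best_action = online_net(s_next).argmax(dim=1, keepdim=True)
        q_eval = target_net(s_next).gather(1, best_action).squeeze(1)
    return r + gamma * q_eval

# In the train_step sketch above, the target line would become:
#     target = ddqn_target(q_net, target_net, r, s_next, gamma)
```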
#### III-B4 Dueling DQN
This algorithm is similar to the DQN, but with a different neural network
architecture that decouples the estimation of state values and action
advantages, potentially leading to better performance and stability. To
implement this, the architecture of the Q-network in Fig. 2 is modified to
include two separate streams for state values and action advantages, and then
these streams are combined to obtain the final Q-values. The other components
remain unchanged from the DQN architecture.
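A minimal sketch of the dueling head, assuming a 64-dimensional feature layer as in the illustrative Q-network above; subtracting the mean advantage is the standard way of making the value/advantage decomposition identifiable.
```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling architecture: separate value V(s) and advantage A(s, a) streams,
    combined as Q = V + A - mean(A)."""

    def __init__(self, in_features, n_actions):
        super().__init__()
        self.value = nn.Linear(in_features, 1)
        self.advantage = nn.Linear(in_features, n_actions)

    def forward(self, features):
        v = self.value(features)                    # shape (batch, 1)
        a = self.advantage(features)                # shape (batch, n_actions)
        return v + a - a.mean(dim=1, keepdim=True)  # shape (batch, n_actions)

# Example: replace the final linear layer of the Q-network in Fig. 2 with this head.
head = DuelingHead(in_features=64, n_actions=8)
print(head(torch.randn(4, 64)).shape)  # torch.Size([4, 8])
```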
#### III-B5 DDQN with Prioritized Replay
This approach combines DDQN with prioritized experience replay, which samples
more important experiences more frequently during learning, potentially
improving learning efficiency. To implement this, the uniform sampling of
experiences from the replay buffer $\mathcal{D}$ is replaced with prioritized
sampling based on the absolute TD-error of each experience. Additionally, the
loss function $L_{t}(\theta_{t})$ is updated to include importance-sampling
weights to correct for the bias introduced by the prioritized sampling. The
neural network architecture remains unchanged from the DQN architecture.
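The following sketch illustrates proportional prioritised sampling with importance-sampling weights; the alpha, beta, and capacity values are placeholders, and a plain list (rather than a sum-tree) is used for readability rather than efficiency.
```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional prioritised replay: sampling probability is
    proportional to |TD error|^alpha, with importance-sampling weights
    (N * p_i)^(-beta) correcting the bias introduced by the sampling."""

    def __init__(self, capacity=10_000, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-self.beta)
        weights /= weights.max()  # normalise so the largest weight is 1
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```
The per-sample squared TD errors in the loss $L_{t}(\theta_{t})$ are then multiplied by the returned importance-sampling weights before averaging.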
### III-C Training and Deployment of the Agent
In this section, we detail the training and deployment of our proposed DRL-
based anti-jamming approach, which aims to mitigate jamming attacks in power-
constrained devices. Fig. 3 presents an overview of the training and
deployment phases of the proposed DRL-based anti-jamming approach. The
training phase involves the setup of the system, loading the corresponding
data from the spectral scan dataset, obtaining the received power (dBm)
feature of each channel, and training the agents based on the reward value
obtained from the selected channel. At the beginning, a system setup is made
to specify the type of jammer (i.e., sweeping, random, constant, or dynamic
pattern jammers), the jamming power, and the distance. Based on this setup,
the corresponding data is loaded from the spectral scan dataset. Depending on
the type of jammer, the received power (dBm) feature of each channel is
obtained. For instance, if the jammer is constant, and the jamming frequency
is 5180 MHz at 20 cm with a jamming power of 10 dBm, then the dataset with the
corresponding filename will be loaded. This ensures that the 5180 MHz
frequency will have the highest received power compared to the other
frequencies. Based on this state information, the agent will select a channel
and receive a reward value based on the selected channel, as defined in (5).
Using this reward value, the agent’s network is trained and then the
environment transitions to the next state. It is worth noting that this
process repeats until convergence or a terminal state is reached.
During the deployment phase, the trained agent is implemented within the
environment it was originally trained on. However, in this phase, the agent
does not undergo further training as it exploits the knowledge gained from the
training phase. Given a system setup and the current channel $f_{T,t}$, the
agent takes in the state vector, which describes the whole spectrum, as input
and selects the best channel $f_{T,t+1}$ to switch to. If the selected channel
$f_{T,t+1}$ is the same as the current channel $f_{T,t}$, then transmission
continues on $f_{T,t}$. If $f_{T,t+1}\neq f_{T,t}$, a channel switch
announcement (CSA) is carried out, and the subsequent transmission switches to
$f_{T,t+1}$. This process keeps repeating until all data is transmitted or the
terminal state is reached.
## IV Results and Discussions
To evaluate the proposed DRL-based anti-jamming solution, we aim to
investigate its performance under dynamic pattern jamming, where the jammer
randomly selects one of the three jamming patterns namely, sweep, random, and
combined at the beginning of each transmission session. This evaluation is
important as our primary objective is to develop a generic anti-jamming agent
capable of mitigating various jamming patterns. We perform the simulations
using a custom simulator built on the dataset collected in
[15]. Also, unless otherwise stated, the simulation parameters used in our
study are presented in Table I. Furthermore, we tune the hyper-parameters of
the proposed DRL-based anti-jamming scheme during training to achieve a good
policy for the agent, as shown in Table II. Finally, we investigate the
effects of the $\Gamma$ parameter on the total throughput of the proposed
framework, and we compare the results obtained by using different values of
$\Gamma$.
TABLE I: Simulation Parameters Parameter | Value
---|---
RF spectrum band | 5 GHz UNII-1
Bandwidth of communication signal | 20 MHz
Bandwidth of jamming signal | 20 MHz
Number of channels $N_{c}$ | 8
Initial channel center frequency $f_{T,0}$ | 5.180 GHz
Distance between channel frequencies | 20 MHz
Distance between jammer and transmitter $d_{JT}$ | 20 cm
Jamming power $P_{J,t}$ | 10 dBm
TABLE II: DRL Hyper-parameters. Parameter | Value
---|---
Number of training episodes $\left|\mathcal{E}\right|$ | 100
Number of testing episodes $\left|\mathcal{E}\right|$ | 100
Number of time-steps $\left|\mathcal{T}\right|$ | 100
Discount factor $\gamma$ | 0.95
Initial exploration rate $\zeta$ | 1
Exploration decay $\delta$ | 0.005
Minimum exploration rate $\zeta_{\textup{min}}$ | 0.01
Experience buffer size $\mathcal{D}$ | 10000
Minimum batch size $K$ | 32
Averaging window size | 10
Early termination criterion | Average reward = 90
Channel switching cost $\Gamma$ | [0, 0.05, 0.1, 0.15]
Figure 4: Learning performance of the investigated DRL-based anti-jamming
agents under dynamic pattern jamming with (a) $\Gamma=0.00$, (b) $\Gamma=0.05$,
(c) $\Gamma=0.10$, and (d) $\Gamma=0.15$.
Fig. 4 depicts the learning performance of the DRL-based anti-jamming agents
under dynamic pattern jamming, with different values of $\Gamma$. We observe
that DQN with fixed Q-targets, DDQN, and DDQN with prioritized replay achieve
a mean reward of approximately 100, while Dueling DQN achieves a mean reward
of around 95. However, the DQN agent only manages to obtain a mean reward of
approximately 86, and this shortfall persists for all values of $\Gamma$.
Unlike in our prior work [14], in this work all the agents were able to learn
the dynamics of the system and evade the jammer. Importantly, we note that all the
trained DRL agents, except for DQN, can learn a policy to escape the dynamic
pattern jamming. Moreover, we observe that for all types of jammers, the DRL
agents can make intelligent channel selection decisions to evade jamming.
Interestingly, the DDQN with prioritized replay achieved the most stable
learning convergence across all values of $\Gamma$.
In Fig. 5, we present the normalized mean throughput of the legitimate user
under various jamming patterns. We observe that, for all values of $\Gamma$,
all the evaluated agents, except DQN, have the ability to completely evade
dynamic pattern jamming. Moreover, for all agents, we observe a reduction in
throughput as the value of $\Gamma$ increases, with a greater reduction for
higher values of $\Gamma$. As seen in the case of the learning performance,
the DDQN with prioritized replay achieved a consistently high throughput over
all values of $\Gamma$.
The impact of $\Gamma$ on the channel switching behavior of the agents is
demonstrated in Fig. 6. It is observed that the agents switch channels 100% of
the time, regardless of the values of $\Gamma$. This indicates that in order
to evade dynamic pattern jamming, the agents develop a policy that maps the
states to the optimal action and ignores the jamming pattern. This leads to
continuous channel switching even under values of $\Gamma>0$. In other words,
the agents choose to be penalized by the channel switching cost and experience
a reduction in overall throughput instead of remaining on a single channel and
losing 1/8 of their total throughput.
Figure 5: Normalized throughput performance of the DRL-based anti-jamming
agent under dynamic pattern jamming. Figure 6: Impact of channel switching
cost ($\Gamma$) on the DRL-based anti-jamming agent under dynamic pattern
jamming.
Finally, we study the convergence times and inference speeds of the five DRL
agents as shown in Table III. During training, the DQN agent demonstrated the
fastest convergence speed among all the agents, with an average convergence
time of 388.28 seconds. The speed of convergence and inference in DRL agents
is determined by the complexity of the learning algorithm and the efficiency
of the exploration strategy. DQN, with its simpler learning algorithm and
efficient exploration, converges faster. On the other hand, DDQN with
prioritized replay memory involves more complex computations and a more
sophisticated memory management system, which slows down both the convergence
and the inference speed.
TABLE III: Comparison of the convergence and inference times for the five
agents. The results are presented as mean ($\pm$ std.) over 10 folds.
Agent | Convergence Time (sec) | Inference Speed (KHz)
---|---|---
DQN | 388.28 ($\pm$ 3.62) | 507.23 ($\pm$ 4.30)
DQN with Fixed Targets | 405.37 ($\pm$ 1.74) | 472.43 ($\pm$ 2.15)
DDQN | 457.42 ($\pm$ 3.26) | 437.78 ($\pm$ 3.43)
Dueling DQN | 405.79 ($\pm$ 6.54) | 464.58 ($\pm$ 2.87)
DDQN with Prioritized Replay | 532.85 ($\pm$ 3.91) | 382.31 ($\pm$ 5.25)
Overall, all the algorithms investigated showed good performance in jamming
detection and avoidance. The inference speed of the algorithms varied, with
DQN being the fastest during training. Among all DRL-based approaches, DDQN
with prioritized replay memory offers the best trade-off between throughput
and speed.
## V Conclusions
This paper investigates the intelligent anti-jamming problem within a dynamic
jamming environment. In our endeavor to construct a more practical scheme, we
incorporated a jamming detection testbed and jamming data acquired from actual
WLAN network interface cards. Utilizing this dataset, we developed a custom
simulation and introduced a DRL agent with a fully connected neural network
architecture to navigate the intricate decision-making problem inherent to
anti-jamming. With our proposed scheme, the agent is capable of learning the
most effective anti-jamming strategy through a continuous process of trial and
error, testing various actions, and observing their environmental impact. We
used simulation results from a variety of environmental settings to
corroborate the effectiveness of the proposed DRL-based anti-jamming scheme.
It is important to note, however, that a high-power wideband jammer leaves no
room for evasion. Consequently, future research will involve creating an anti-
jamming technique focused on confronting the jammer at the same frequency, as
opposed to evasion or concealment.
## References
* [1] S. Haykin, “Cognitive radio: brain-empowered wireless communications,” _IEEE J. Sel. Areas Commun._ , vol. 23, no. 2, pp. 201–220, 2005.
* [2] A. G. Fragkiadakis, E. Z. Tragos, and I. G. Askoxylakis, “A survey on security threats and detection techniques in cognitive radio networks,” _IEEE Commun. Surv. Tutor._ , vol. 15, no. 1, pp. 428–445, 2012.
* [3] D. Torrieri, _Principles of spread-spectrum communication systems_. Springer, 2005, vol. 1.
* [4] Y. Xu, G. Ren, J. Chen, Y. Luo, L. Jia, X. Liu, Y. Yang, and Y. Xu, “A one-leader multi-follower bayesian-stackelberg game for anti-jamming transmission in UAV communication networks,” _IEEE Access_ , vol. 6, pp. 21 697–21 709, 2018.
* [5] H. Noori and S. Sadeghi Vilni, “Jamming and anti-jamming in interference channels: A stochastic game approach,” _IET Commun._ , vol. 14, no. 4, pp. 682–692, 2020.
* [6] I. K. Ahmed and A. O. Fapojuwo, “Stackelberg equilibria of an anti-jamming game in cooperative cognitive radio networks,” _IEEE Trans. Cogn. Commun._ , vol. 4, no. 1, pp. 121–134, 2017.
* [7] X. Liu, Y. Xu, L. Jia, Q. Wu, and A. Anpalagan, “Anti-jamming communications using spectrum waterfall: A deep reinforcement learning approach,” _IEEE Commun. Lett._ , vol. 22, no. 5, pp. 998–1001, 2018.
* [8] S. Liu, Y. Xu, X. Chen, X. Wang, M. Wang, W. Li, Y. Li, and Y. Xu, “Pattern-aware intelligent anti-jamming communication: A sequential deep reinforcement learning approach,” _IEEE Access_ , vol. 7, pp. 169 204–169 216, 2019.
* [9] S. Machuzak and S. K. Jayaweera, “Reinforcement learning based anti-jamming with wideband autonomous cognitive radios,” in _Proc. IEEE Int. Conf. Commun. China (ICCC)_. IEEE, 2016, pp. 1–5.
* [10] J. Xu, H. Lou, W. Zhang, and G. Sang, “An intelligent anti-jamming scheme for cognitive radio based on deep reinforcement learning,” _IEEE Access_ , vol. 8, pp. 202 563–202 572, 2020.
* [11] N. Gao, Z. Qin, X. Jing, Q. Ni, and S. Jin, “Anti-intelligent UAV jamming strategy via deep Q-networks,” _IEEE Trans. Commun._ , vol. 68, no. 1, pp. 569–581, 2019.
* [12] L. Xiao, D. Jiang, D. Xu, H. Zhu, Y. Zhang, and H. V. Poor, “Two-dimensional antijamming mobile communication based on reinforcement learning,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 10, pp. 9499–9512, 2018.
* [13] Y. Bi, Y. Wu, and C. Hua, “Deep reinforcement learning based multi-user anti-jamming strategy,” in _Proc. IEEE Int. Conf. Commun. (ICC)_. IEEE, 2019, pp. 1–6.
* [14] A. S. Ali, W. T. Lunardi, L. Bariah, M. Baddeley, M. A. Lopez, J.-P. Giacalone, and S. Muhaidat, “Deep reinforcement learning based anti-jamming using clear channel assessment information in a cognitive radio environment,” in _Proc. 5th Int. Conf. Adv. Commun. Tech. Netw. (CommNet)_ , 2022, pp. 1–6.
* [15] A. S. Ali, G. Singh, W. T. Lunardi, L. Bariah, M. Baddeley, M. Andreoni _et al._ , “RF jamming dataset: A wireless spectral scan approach for malicious interference detection,” _TechRxiv. Preprint_ , 2022.
* [16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski _et al._ , “Human-level control through deep reinforcement learning,” _Nature_ , vol. 518, no. 7540, pp. 529–533, 2015.
* [17] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” _arXiv_ , vol. abs/1312.5602, 2013.
* [18] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in _Proc. Int. Conf. Mach. Learn. (ICML)_ , Jul. 2010, pp. 2094–2100.
* [19] L. Bottou and O. Bousquet, “The tradeoffs of large scale learning,” in _Proc. Int. Conf. Adv. Neural. Inf. Process Syst._ , 2008, pp. 161–168.
Mihaela Claudia Rosca
On discretisation drift and smoothness regularisation in neural network training
Department of Computer Science
The deep learning recipe of casting real-world problems as mathematical optimisation and tackling the optimisation by training deep neural networks using gradient-based optimisation has undoubtedly proven to be a fruitful one. The understanding behind why deep learning works, however, has lagged behind its practical significance. We aim to make steps towards an improved understanding of deep learning with a focus on optimisation and model regularisation.
We start by investigating gradient descent (GD), a discrete-time algorithm at the basis of most popular deep learning optimisation algorithms. Understanding the dynamics of GD has been hindered by the presence of discretisation drift, the numerical integration error between GD and its often studied continuous-time counterpart, the negative gradient flow (NGF).
To add to the toolkit available to study GD, we derive novel continuous-time flows that account for discretisation drift. Unlike the NGF, these new flows can be used to describe learning rate specific behaviours of GD, such as training instabilities observed in supervised learning and two-player games.
We then translate insights from continuous time into mitigation strategies for unstable GD dynamics, by constructing novel learning rate schedules and regularisers that do not require additional hyperparameters.
Like optimisation, smoothness regularisation is another pillar of deep learning's success with wide use in supervised learning and generative modelling.
Despite their individual significance, the interactions between smoothness regularisation and optimisation have yet to be explored.
We find that smoothness regularisation affects optimisation across multiple deep learning domains, and that incorporating smoothness regularisation in reinforcement learning leads to a performance boost that can be recovered using adaptations to optimisation methods.
We end by showing
that optimisation can also affect smoothness, as discretisation drift can act as an implicit smoothness regulariser in neural network training.
Our work focuses on understanding and improving components of deep learning systems. As such, it provides two avenues for impact: first, by providing insights that can be useful to other researchers in the field and second, by helping downstream applications that use such systems.
Inside deep learning academic research, this work provides a set of novel tools and insights concerning two main building blocks of deep learning systems, namely optimisation and models. Our insights include novel continuous-time flows to analyse optimisation dynamics, as well as a rethinking of the effect of smoothness constraints imposed on deep learning models. We have accompanied theoretical results with experimental frameworks for analysis and validation, including approaches that can stabilise training and reduce hyperparameter sensitivity and thus computational costs.
Future theoretical research directions based on our line of work include specialising our theoretical results to specific classes of neural networks, further expanding the family of available continuous-time flows capturing optimisation dynamics, and finding new beneficial or detrimental implicit regularisation forces in deep learning optimisation. Promising avenues for practical impact include new methods for training deep learning models, and finding new approaches for model selection. Throughout this work, we provide insights across multiple deep learning domains, including supervised learning, two-player games, and reinforcement learning. The work included here has been presented at multiple conferences and published in journals.
While we do not tackle specific applications directly, this work can have impact outside the research domain by improving the stability of deep learning models used in applications across multiple domains; examples include image classification and generation, as well as game play using reinforcement learning.
One of the biggest opportunities I have been provided with throughout my PhD studies was to have not one, but two incredible supervisors. My primary supervisor, Marc Deisenroth, has been a wonderful source of guidance and strength. Marc, you have raised my bar for rigour in thinking and writing, and have shown me that technical excellence does not come in spite of kindness, but due to kindness; for that, I will be eternally grateful. My second supervisor, Shakir Mohamed, has been my champion and supporter for many years. Shakir, working with you has been the privilege of a lifetime; your breadth of technical knowledge is outstanding, and your careful and deliberate application of it through a meticulous choice of impactful research directions should serve as a guide to us all.
I have also been lucky to have an amazing set of collaborators throughout my PhD: Andriy Mnih, Arthur Gretton, Benoit Dherin, Chongli Qin, Claudia Clopath, David G.T. Barrett, Florin Gogianu, Lucian Busoniu, Michael Figurnov, Michael Munn, Razvan Pascanu, Theophane Weber, Tudor Berariu, and Yan Wu. Special thanks to Benoit Dherin, for many evening chats on everything to do with deep learning optimisation and regularisation; to Yan Wu, for a long term fruitful collaboration; to Theophane Weber for many chats on reinforcement learning; to Tudor Berariu and Florin Gogianu for being so much fun while doing research; and to Arthur Gretton for guidance early in my PhD.
This work was done as part of the DeepMind-UCL PhD program, and I am grateful to those organising it and championing this program. Thanks also to Frederic Besse, and to Sarah Hodkinson and Claudia Pope for their support.
During my PhD I was lucky to write a book chapter on Implicit Generative models with Balaji Lakshminarayanan and Shakir Mohamed, for Kevin P. Murphy's second edition of `Machine Learning: a Probabilistic Perspective'. Beyond allowing me to share my enthusiasm for this area of machine learning, a great clarity of thinking followed the writing process, which helped fledge some of the new ideas presented in this work. I thus must thank Kevin, Balaji, and Shakir for this opportunity.
Thanks also go to my examination committee, Ferenc Huszár and Patrick Rebeschini, for their time and an incredible and thoughtful discussion on my work.
I also have many reasons to be grateful to those who have supported me personally during this time and beyond. I am here because my parents have prioritised my education and encouraged me throughout the way. I also had the most loving of grandmothers, and I am sure she would have loved to see me pursue this degree.
I have been fortunate to have a great set of friends, especially in the tough times of coronavirus lockdowns. Niklas, you have been there for me since the times of the Imperial labs, and have always had faith in me; there are no words that can express how grateful I am. Fabio, you are a consistent, reliable, and wise friend; you are also raising my favourite humans, who are a great source of happiness for me. Many thanks also go to Ada, Dhruva, David, George, James, Ira, Sascha, Siqi, Slemi, Victor, and many other friends, for the time spent together and the great memories.
CHAPTER: INTRODUCTION
Why has deep learning been so successful in recent decades at solving a myriad of problems, such as scientific discovery and applications [Jumper et al., 2021, Ravuri et al., 2021, Fawzi et al., 2022], language and image generation [Ramesh et al., 2022, Brown et al., 2020, Hoffmann et al., 2022, Sauer et al., 2023], and superhuman-level game play in intricate games [Silver et al., 2017, Silver et al., 2017, Berner et al., 2019]?
A common answer to the question of why deep learning has been so successful is that it scales well with increases in amount of data and computational resources, both of which have drastically increased in recent years.
But this answer raises further questions, since many of the reasons why deep learning scales well have yet to be fully uncovered. Particularly, we still do not know why the recipe behind deep learning—using simple gradient-based algorithms to optimise deep neural networks—is so effective on large datasets.
One valid response to this observation is to state that it might not matter why deep learning works, but that it does and to continue exploring its benefits on a wide range of domains, as well as continue the trend of scaling up data, architectures, and compute.
This approach works, can lead to progress, and has been widely used in other sciences. This is not the approach we take here, however.
Throughout this thesis, we take the point of view that understanding why a system works is not only desirable, but necessary in order to ensure targeted research progress, by exploring the areas most likely to lead to an acceleration of new methods, increased resource efficiency, and safe deployment.
In taking this point of view, we follow the footsteps of a growing body of work aiming to uncover, examine, and understand deep learning systems [Arora et al., 2018, Barrett and Dherin, 2021, Smith et al., 2021, Arora et al., 2018, Mescheder et al., 2017, Nagarajan and Kolter, 2019, Nagarajan and Kolter, 2017, Keskar et al., 2017, Dinh et al., 2017, Jiang et al., 2019, Smith et al., 2018].
There are many avenues worthy of study inside the deep learning domain, ranging from investigating why certain model architectures work well [Li et al., 2018, Bjorck et al., 2018, Brock et al., 2021, Ding et al., 2022], to studying why certain types of generative models or learning principles outperform others [Sriperumbudur et al., 2009, Arjovsky et al., 2017, Fedus et al., 2018].
Here, we choose to focus on the study of optimisation and model regularisation in the form of smoothness regularisation. We are motivated by their generality and applicability across deep learning domains from supervised learning, probabilistic modelling, and reinforcement learning; their significant impact on training performance and test set and out of distribution generalisation [Hoffman et al., 2019, Miyato et al., 2018, Radford et al., 2015]; and the opportunity for analysis due to the vast gap between their aforementioned practical performance and the understanding of their mechanisms. Unanswered questions include: What causes instability in deep learning? How can instability be mitigated? Why is there not more instability? What are the implicit regularisation effects of popular optimisers? What are the effects of smoothness regularisation? What are the interactions between model regularisation and optimisation in deep learning? Which optimisation and regularisation effects are domain specific and which transfer across deep learning domains? These are some of the questions we will tackle in this thesis.
A motivating example. Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] are a type of generative model that has led to many image generation breakthroughs [Brock et al., 2018, Sauer et al., 2023, Karras et al., 2019]. Many GAN variants have been proposed, some based on minimising different distributional divergences or distances [Nowozin et al., 2016, Arjovsky et al., 2017, Gulrajani et al., 2017, Bińkowski et al., 2018, Mao et al., 2017] while others use convergence analysis around a Nash equilibrium [Mescheder et al., 2017, Nagarajan and Kolter, 2017]; each of these variants is theoretically appealing and has different properties worth studying. Yet, the biggest successes in GANs have consistently come from changes to optimisation and regularisation: the choice of optimiser [Radford et al., 2015], large batch sizes [Brock et al., 2018], and incorporating smoothness regularisation [Miyato et al., 2018] are the key ingredients of all successful GANs, while the loss functions used are often those closest to the original formulation [Brock et al., 2018, Miyato et al., 2018, Zhang et al., 2019, Karras et al., 2021]. Similar optimisation and regularisation techniques have also allowed the expansion of GANs from continuous input domains, such as images, to discrete input domains, such as text [de Masson d'Autume et al., 2019]. The GAN example showcases what opportunities are available by understanding and improving optimisation and smoothness regularisation, opportunities we aim to explore here.
§ OPTIMISATION IN DEEP LEARNING
Machine learning is the science of casting real world problems such as prediction, generation, and action into optimisation problems.
Thus, many machine learning problems can be formulated as
\begin{align}
\min_{\vtheta} E(\vtheta),
\end{align}
where $E$ is a loss function suitably chosen for the underlying task and $\vtheta \in \mathbb{R}^D$. For many machine learning tasks, $E$ is an expected value of an unknown data distribution $p^*(\vx)$; $E$ gets estimated given a training dataset $\mathcal{D}$ of unbiased samples from $p^*(\vx)$: ${E(\vtheta) = \mathbb{E}_{p^*(\vx)} E(\vtheta; \vx) \approx \frac{1}{|\mathcal{D}|} \sum_{\vx_i \in \mathcal{D}} E(\vtheta; \vx_i)}$.
A distinction is made between the model and the objective function components of $E(\vtheta; \vx)$, with the model $f(\vtheta; \vx)$ depending on the parameters $\vtheta$, and the objective function depending only on the model's output.
In deep learning, $f$ is a neural network, $\vtheta$ are its parameters, and $E$ is non-convex. Due to the high dimensionality of $\vtheta$ or large dataset size, many optimisation algorithms are too costly or inefficient to be used in deep learning, as they scale unfavourably with parameter or dataset size. Luckily, however, we can use the compositional structure of neural networks to compute the gradient $\nabla_{\vtheta} E \in \mathbb{R}^{D}$ using the backpropagation algorithm [Werbos, 1982, Rumelhart et al., 1986, LeCun et al., 1989]. From there, we can use first-order gradient-based iterative algorithms, such as gradient descent and its variants [45, Robbins and Monro, 1951, Kingma and Ba, 2015, Tieleman and Hinton, 2012].
Studying the behaviour of optimisation algorithms can be challenging, even for algorithms as seemingly simple as gradient descent. Typical questions of study include: Will the algorithm converge to a local minima? Does the algorithm get stuck in saddle points? When does the algorithm fail to converge and when it does converge, how quickly does it converge? The complexity of these issues gets compounded when studying the optimisation of deep neural networks with millions to billions of parameters that do not satisfy commonly used assumptions such as convexity; this often creates a discrepancy between what theory analyses and what is used in practice.
Fundamentally, the deep learning optimisation community is interested in answering two questions: why do first-order optimisers work so well in training neural networks, and how can we improve optimisation in deep learning?
Progress in this area has been made recently both through theoretical and empirical means of studying neural network optimisation dynamics [Du et al., 2019, Zou et al., 2020, Du et al., 2017, Cohen et al., 2021, Keskar et al., 2017, Lewkowycz et al., 2020].
Novel insights challenge commonly held assumptions, such as the belief that local minima and saddle points are a major challenge for neural network optimisation due to high parameter dimensionality [Du et al., 2019, Zou et al., 2020, Du et al., 2017, Elkabetz and Cohen, 2021].
There has also been a growing body of work showing the importance of the Hessian $\nabla_{\vtheta}^2 E$ in supervised learning optimisation, and specifically on the connection between the learning rate and the largest Hessian eigenvalue and observed instabilities in training [Cohen et al., 2021, Keskar et al., 2017, Lewkowycz et al., 2020]; we will examine the importance of the Hessian in a new light in Chapter <ref>.
Many analyses of discrete-time methods, such as gradient descent, take a continuous-time approach [Glendinning, 1994, Saxe et al., 2014, Nagarajan and Kolter, 2017, Lampinen and Ganguli, 2019, Arora et al., 2018, Advani et al., 2020, Elkabetz and Cohen, 2021, Vardi and Shamir, 2021, França et al., 2020, Balduzzi et al., 2018] as continuous-time proofs tend to be easier to construct [May, 1976, Elkabetz and Cohen, 2021].
The downside of taking a continuous-time and not discrete-time approach to optimisation analysis is that one does not directly analyse the update of interest; the discrepancy between the discrete-time trajectory and its continuous counterpart, which we will call discretisation drift, can lead to conclusions that do not transfer from continuous-time analysis to discrete-time algorithms [Yaida, 2018, Liu et al., 2021].
Discretisation drift is still not greatly understood and has only recently come into attention in deep learning [Kunin et al., 2021, Barrett and Dherin, 2021, Smith et al., 2021].
The incorporation of techniques from the numerical integration community that construct continuous-time flows accounting for discretisation drift
has shed light on the implicit regularisation effects of gradient descent and highlighted the importance of the learning rate in generalisation [Barrett and Dherin, 2021, Smith et al., 2021]. We will use the same numerical integration techniques in Chapter <ref>, where we find a new continuous-time flow that, unlike existing flows, can capture instabilities induced by gradient descent.
We then use insights from our continuous-time analysis to devise an automatic learning rate schedule that can trade-off training stability and generalisation performance.
[Figure: Overview of optimisation in deep learning. Panels: single objective; zero-sum games.] Many optimisation problems in deep learning can be seen as either single-objective problems, where all parameters are updated to minimise the same loss, or multi-objective problems, where different sets of parameters are updated to minimise different objectives; when the number of objectives is two, these are called two-player games. A special case of two-player games are adversarial games such as zero-sum games, where the objective of one player is to minimise a function $E$, while the objective of the other player is to maximise the same function. This adversarial structure can lead to challenges in optimisation, with approaches that converge to local minima in single-objective optimisation (fig:single_objective_intro) resulting in cyclic trajectories in zero-sum games (fig:two_player_games_intro).
While many studies focus on optimisation dynamics in the single-objective case, such as supervised learning, strides have also been made towards understanding optimisation in two-player games [Mescheder et al., 2017, Balduzzi et al., 2018, Schäfer and Anandkumar, 2019, Nagarajan and Kolter, 2017, Qin et al., 2020]. Unlike in the single-objective setting, in two-player games not all parameters minimise the same objective, as each player has its own set of parameters, here denoted by $\vphi$ and $\vtheta$, and its own objective:
\begin{align}
&\min_{\vphi} E_{\vphi}(\vphi, \vtheta) \\
&\min_{\vtheta} E_{\vtheta}(\vphi, \vtheta).
\end{align}
Two-player games can be adversarial, where the two players have opposing objectives, and even zero-sum, where one player's gain is another's loss: ${E_{\vphi}(\vphi, \vtheta) = - E_{\vtheta}(\vphi, \vtheta)}$.
We visualise a simple two-player zero-sum game in Figure <ref>, showing that adversarial dynamics can make it challenging to reach a local equilibrium compared to a single-objective counterpart, shown in Figure <ref>.
Interest in the analysis of two-player game optimisation has been fuelled by interest in GANs [Goodfellow et al., 2014], a generative model trained via an adversarial two player game.
While GANs have been a successful generative model, they have a reputation of being notoriously hard to train, and this has served as a motivation for finding approaches to stabilise and improve the dynamics of adversarial two-player games.
Analysing the behaviour and convergence of two-player games has led to progress in understanding the sources of instability and divergence in games and has led to many a regulariser that can stabilise training [Qin et al., 2020, Mescheder et al., 2018, Nagarajan and Kolter, 2017, Balduzzi et al., 2018, Wang et al., 2019, Mazumdar et al., 2019]. We add to this body of work by further investigating sources of instability in two-player games, coming from discretisation drift rather than the original continuous-time system, in Chapter <ref>. By modelling discretisation drift, we devise explicit regularisers that stabilise training without requiring any additional hyperparameter sweep.
§ SMOOTHNESS W.R.T. INPUTS
Optimisation provides an approach to approximate an unknown decision surface required for prediction, generation, or action from an available dataset. That is, given an optimal predictor $f^*$, optimisation provides a recipe to find a parametrised function $f(\cdot;\vtheta)$ close to $f^*$ on the training dataset. But there are often many such optimisation solutions, each with different properties outside the training data.
How useful this approximation is in the underlying application depends on how well it performs on new data.
This is an essential desideratum of machine learning algorithms, and it is relevant for generalisation, continual learning [Parisi et al., 2019], and transfer learning [Zhuang et al., 2020]. Since we know from the no-free-lunch theorem [Shalev-Shwartz and Ben-David, 2014] that no machine learning algorithm will perform better than all other algorithms for any data distribution and prediction problem, to provide guarantees beyond the training data requires making assumptions about the data distribution, the structure of $f^*$, or both.
One approach to encode beliefs about $f^*$ into $f(\cdot;{\vtheta})$ is through what is often referred to in machine learning as an inductive bias [Mitchell, 1980]. (When we talk about smoothness with respect to inputs, we often write the predictor as a function of data, given parameters; when discussing optimisation and the focus is on parameters, we write the reverse for clarity.)
In accordance with Occam's razor [MacKay, 1991], we often want to encode a simplicity inductive bias: we are searching for the least complex functions that can explain the data. One approach to formalising simplicity is by measuring the effect changes in function input have on the function's output. If small changes in function input do not result in large changes in function output, we call that function smooth. A smoothness inductive bias encodes a preference for learning smooth functions: between two functions that fit the data equally well, the smoothest one should be preferred. We show an intuitive example highlighting the importance of smoothness in Figure <ref>. Figure <ref> exemplifies how too much smoothness can hurt training performance by reducing model capacity, while Figure <ref> shows how the lack of smoothness can lead to overfitting.
[Figure: The importance of smoothness with respect to inputs on the model fit. Panels: (a) very smooth fit, hurts training data fit; (b) desired level of smoothness; (c) insufficiently smooth fit, hurts generalisation.] Too much smoothness can hurt capacity (fig:intro_too_smooth), while not enough smoothness can hurt generalisation (fig:intro_too_unsmooth).
The formal definition of smoothness is often taken to be Lipschitz smoothness or related measures.
A function $f(\cdot; \vtheta): \mathcal{X} \rightarrow \mathcal{Y}$ is $K$-Lipschitz if
\begin{align}
\norm{f(\vx_1; \vtheta) - f(\vx_2; \vtheta)}_{\mathcal{Y}} \le K \norm{\vx_1-\vx_2}_{\mathcal{X}} \hspace{3em} \forall \vx_1, \vx_2 \in \mathcal{X},
\end{align}
where $K$ is known as the Lipschitz constant of function $f(\cdot; \vtheta)$. Throughout this manuscript, when we refer to the smoothness of a neural network, we refer to the smoothness w.r.t. data $\vx$, and not w.r.t. parameters $\vtheta$.
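As a concrete illustration of the definition, the snippet below estimates a lower bound on the Lipschitz constant of a simple function by taking the largest observed ratio between output and input distances over sampled pairs; the test function and sampling distribution are arbitrary choices made only for the example.
```python
import numpy as np

def lipschitz_lower_bound(f, dim, n_pairs=10_000, scale=1.0, seed=0):
    """Empirical lower bound on the Lipschitz constant of f: the largest
    observed ratio ||f(x1) - f(x2)|| / ||x1 - x2|| over sampled input pairs."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(scale=scale, size=(n_pairs, dim))
    x2 = rng.normal(scale=scale, size=(n_pairs, dim))
    num = np.abs(f(x1) - f(x2))
    den = np.linalg.norm(x1 - x2, axis=1)
    return np.max(num / den)

# sin(w . x) is Lipschitz with constant ||w||; the empirical bound approaches it from below.
w = np.array([0.5, -2.0, 1.0])
print(lipschitz_lower_bound(lambda x: np.sin(x @ w), dim=3), np.linalg.norm(w))
```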
For many a machine learning task smoothness with respect to inputs constitutes a reasonable prior; this can be easily illustrated in the image domain, where the prediction or action that ought to be taken does not change when a few pixels in the inputs change: an image of a dog still encodes a dog if a few pixels are modified. Thus, it does not come as a surprise that despite having vastly different motivations and implementations, many existing deep learning regularisation methods target a smoothness inductive bias, including $L_2$ regularisation, dropout [Srivastava et al., 2014], and early stopping.
We will discuss in detail how these regularisation approaches are connected to smoothness in Chapter <ref>, formalise these connections in Chapter <ref>, as well as link smoothness to important phenomena in deep learning, such as double descent [Belkin, 2021, Belkin et al., 2019, Nakkiran et al., 2019].
As the importance of smoothness is beginning to be recognised by the deep learning community, methods that directly target smoothness regularisation have been developed [Miyato et al., 2018, Yoshida and Miyato, 2017, Hoffman et al., 2019]. These regularisation methods have been incorporated in supervised learning to help generalisation [Hoffman et al., 2019, Bartlett et al., 2017] and adversarial robustness [Sokolić et al., 2017, Novak et al., 2018, Cisse et al., 2017], as well as into generative models leading to improvements in performance and stability [Miyato et al., 2018, Vahdat and Kautz, 2020, Zhang et al., 2019]. We investigate the importance of smoothness regularisation and its effects in Chapter <ref> as well as incorporate it into a new problem domain, reinforcement learning, in Chapter <ref>.
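One widely used handle on this quantity is the product of the spectral norms of the weight matrices, which upper-bounds the Lipschitz constant of a network composed of linear layers and 1-Lipschitz activations such as ReLU; spectral normalisation [Miyato et al., 2018] controls this bound by rescaling each weight matrix by its largest singular value. A minimal numpy sketch, assuming a plain fully connected network:
```python
import numpy as np

def spectral_norm(w, n_iters=50):
    """Largest singular value of w, estimated via power iteration."""
    u = np.random.default_rng(0).normal(size=w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    return float(u @ w @ v)

def lipschitz_upper_bound(weights):
    """For a network of linear layers and 1-Lipschitz activations (e.g. ReLU),
    the product of layer spectral norms upper-bounds the Lipschitz constant."""
    return float(np.prod([spectral_norm(w) for w in weights]))

def spectrally_normalise(weights):
    """Rescale each layer so its spectral norm is 1, giving a 1-Lipschitz bound."""
    return [w / spectral_norm(w) for w in weights]

rng = np.random.default_rng(1)
weights = [rng.normal(size=(64, 10)), rng.normal(size=(1, 64))]
print(lipschitz_upper_bound(weights), lipschitz_upper_bound(spectrally_normalise(weights)))
```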
§ INTERACTIONS BETWEEN OPTIMISATION AND SMOOTHNESS REGULARISATION
We have thus far discussed optimisation and smoothness regularisation as separate pillars of a deep learning system. Optimisation—concerned with changes in parameters—affects training performance, speed, and stability.
Smoothness—concerned with changes in inputs—can be a powerful inductive bias avoiding learning overly complex models.
What is often overlooked is the implicit bias optimisation has on the model class being learned.
While neural architectures and model regularisers define the class of functions we can represent with our models, optimisation methods determine which functions in the function class we can learn [Barrett and Dherin, 2021]. Likewise, different model architectures are easier to learn than others, with ResNets [He et al., 2016] providing a prime example of an architecture that aids optimisation [Li et al., 2018], followed by model normalisation techniques such as batch normalisation [Ioffe and Szegedy, 2015], which have been shown to primarily benefit optimisation [Bjorck et al., 2018].
While not previously studied, we show that smoothness regularisation and optimisation heavily interact in deep learning.
We start by showing examples of this interaction in Chapter <ref>, where we observe that smoothness regularisation methods can interact with hyperparameter choices such as learning rates.
We then highlight how in some problem domains smoothness regularisation leads to increased performance by improving optimisation dynamics in Chapter <ref>. In Chapter <ref>, we show how the implicit regularisation induced by the discretisation drift of an optimisation method can lead to a smoothness inductive bias, and that the strength of this inductive bias is dependent on optimisation hyperparameters such as learning rate and batch size.
Throughout this thesis, we will see examples of how optimisation can affect the learned model class and how the model smoothness can affect optimisation. While it is tempting to think of each component of a deep learning system individually, it can hinder progress and misguide our intuitions. We will use these observations to argue for a cohesive view between optimisation and regularisation in deep learning.
CHAPTER: OPTIMISATION IN DEEP LEARNING
In this chapter we describe the optimisation tools and methods we will require in the rest of this thesis.
When describing the commonly used algorithms in deep learning we often take a continuous-time approach.
Compared to a discrete-time approach, a continuous-time approach tends to be more amenable to proof construction [Elkabetz and Cohen, 2021], having many tools available, including stability analysis, and can be used to construct conserved quantities [Kunin et al., 2021]. Indeed, the ease of analysis of continuous-time approaches has led to a general uptake of continuous-time methods in deep learning recently, with use in generative models, architectures, and optimisation [Chen et al., 2018, Chen et al., 2018, Kidger, 2022, Grathwohl et al., 2019, Song et al., 2021, Qin et al., 2020].
§ THE DIRECTION OF STEEPEST DESCENT
Once we have formulated our machine learning problem into an optimisation problem
\begin{align}
\min_{\vtheta \in \mathbb{R}^D} E(\vtheta),
\label{eq:opt_problem_2}
\end{align}
we have to choose an optimisation algorithm with which to solve the above problem.
Solving the problem in Eq (<ref>) in closed form is intractable for the problems we are interested in. Thus, the optimisation algorithms used are iterative, and aim to answer the question: given a current set of parameters $\vtheta$, what is a direction of descent of function $E$?
The simplest choice of the direction is given by the negative gradient $- \nabla_{\vtheta} E(\vtheta)$, known as the direction of steepest descent.
To see why, consider the continuous-time flow, often referred to as the gradient flow or negative gradient flow (NGF)
\begin{equation}
\dot{\vtheta} = - \nabla_{\vtheta} E.
\label{eq:ngf_first}
\end{equation}
If parameters $\vtheta$ follow the NGF in continuous-time, $E(\vtheta)$ will decrease until a stationary point $\nabla_{\vtheta} E = \mathbf{0}$ is reached, since
\begin{equation}
\frac{d E}{d t} = \frac{d \vtheta}{d t}^T \nabla_{\vtheta} E = - \left(\nabla_{\vtheta} E\right)^T \nabla_{\vtheta} E = - || \nabla_{\vtheta} E||^2 \le 0.
\label{eq:e_min_ngf}
\end{equation}
If $E$ is convex, following the NGF will find the global minimum.
If $E$ is not convex, following the NGF in continuous-time will reach a local minimum (a proof will be provided in Corollary <ref>). A point $\vtheta^* \in \mathbb{R}^D$ is a local minimum if there is a neighbourhood $\mathcal{V}$
around $\vtheta^*$ such that $\forall \vtheta' \in \mathcal{V} \hspace{1em} E(\vtheta^*) \le E(\vtheta')$; a more amenable characterisation is that local minima are stationary points $\nabla_{\vtheta} E(\vtheta^*) = \mathbf{0}$ with a positive semi-definite Hessian, i.e. $\nabla_{\vtheta}^2 E(\vtheta^*)$ has only non-negative eigenvalues. It is often convenient to also define strict local minima, i.e. local minima where the Hessian eigenvalues are strictly positive.
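As a tiny worked example of this characterisation, the snippet below classifies the stationary point at the origin of two quadratic functions by the sign of the Hessian eigenvalues; the finite-difference Hessian is used only for illustration.
```python
import numpy as np

def hessian(f, x, h=1e-4):
    """Finite-difference Hessian of a scalar function f at x (illustrative only)."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h**2)
    return H

bowl   = lambda x: x[0]**2 + x[1]**2   # strict local minimum at the origin
saddle = lambda x: x[0]**2 - x[1]**2   # saddle point at the origin

for f in (bowl, saddle):
    eigs = np.linalg.eigvalsh(hessian(f, np.zeros(2)))
    print(eigs, "strict local minimum" if np.all(eigs > 0) else "saddle or maximum")
```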
We have already found two insights that would follow us throughout the rest of this thesis: the ease of analysis in continuous-time, and the importance of the negative gradient. Before exploring the optimisation algorithms using these insights, we briefly turn to the question of computing the gradient.
§.§ Computing the gradient
To compute the gradient $\nabla_{\vtheta} E$ we have to consider the specification of the loss function $E$.
In machine learning, $E$ often takes the form of an expectation or sum of expectations; if the expectation is under the data distribution $p^*(\vx)$ and we can write $E(\vtheta) = \mathbb{E}_{p^*(\vx)} l(\vtheta; \vx)$, we have
\begin{align}
\nabla_{\vtheta} E(\vtheta) = \nabla_{\vtheta} \mathbb{E}_{p^*(\vx)} l(\vtheta; \vx) = \mathbb{E}_{p^*(\vx)} \nabla_{\vtheta} l(\vtheta; \vx).
\label{eq:swap}
\end{align}
We can now compute an unbiased estimate of the gradient via Monte Carlo estimation:
\begin{align}
\nabla_{\vtheta} E(\vtheta) = \mathbb{E}_{p^*(\vx)}\nabla_{\vtheta} l(\vtheta; \vx) \approx \frac{1}{N}\sum_{i=1}^{N}\nabla_{\vtheta} l( \vtheta; \vx_i), \hspace{1em} \vx_i \sim p^*(\vx).
\label{eq:mc}
\end{align}
In deep learning, the gradient $\nabla_{\vtheta} l( \vtheta; \vx_i)$ is often computed via backpropagation, i.e. using the compositional structure of neural networks to compute gradients in a recursive fashion via the chain rule.
If the entire dataset is used to approximate the gradient in Eq (<ref>), this corresponds to full-batch training; if an unbiased sample from the dataset is used, the training procedure is referred to as mini-batch training.
Additional challenges arise for objectives of the form $E(\vtheta) = \mathbb{E}_{p(\vx; \vtheta)} l(\vx)$, as is the case for some reinforcement learning and generative modelling objectives. In such situations, the interchange of expectation and gradient in Eq (<ref>) is no longer possible.
For certain families of distributions $p(\vx; \vtheta)$, however, gradient estimators are available.
In later chapters we will use the pathwise estimator, where the random variable $X$ with distribution $p(\vx; \vtheta)$ can be written as $X = g(Z; \vtheta)$, with $Z$ distributed according to a fixed distribution $p(\vz)$ that does not depend on $\vtheta$; in this case we can use the change of variable formula for probability distributions:
\begin{align}
\mathbb{E}_{p(\vx; \vtheta)} l(\vx) = \mathbb{E}_{p(\vz)} l(g(Z; \vtheta)).
\end{align}
From here we can write
\begin{align}
\nabla_{\vtheta}\mathbb{E}_{p(\vx; \vtheta)} l(\vx) = \nabla_{\vtheta}\mathbb{E}_{p(\vz)} l(g(Z; \vtheta)) = \mathbb{E}_{p(\vz)} \nabla_{\vtheta} l(g(Z; \vtheta)),
\end{align}
which is again amenable to Monte Carlo estimation.
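As a minimal sketch of the pathwise estimator, consider a Gaussian $X = g(Z; \vtheta) = \mu + \sigma Z$ with $Z \sim \mathcal{N}(0, 1)$ and $\vtheta = (\mu, \sigma)$; the loss $l(x) = x^2$ is an assumption chosen because $\mathbb{E}[l(X)] = \mu^2 + \sigma^2$ has a closed form against which the estimates can be checked.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Pathwise (reparameterisation) estimator for a Gaussian:
# X = g(Z; theta) = mu + sigma * Z with Z ~ N(0, 1).
mu, sigma = 0.5, 2.0

def grad_l(x):
    return 2.0 * x           # dl/dx for the illustrative loss l(x) = x^2

z = rng.normal(size=10_000)
x = mu + sigma * z           # samples of X obtained through the path g

# Chain rule through the path: dl/dmu = dl/dx * dx/dmu, dl/dsigma = dl/dx * dx/dsigma.
grad_mu = np.mean(grad_l(x) * 1.0)
grad_sigma = np.mean(grad_l(x) * z)

print("pathwise estimates:", grad_mu, grad_sigma)
print("closed form       :", 2 * mu, 2 * sigma)   # gradients of mu^2 + sigma^2
\end{verbatim}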
We refer to our general overview for more details on the pathwise estimator as well as other estimators [Mohamed et al., 2020].
§ ALGORITHMS: FROM GRADIENT DESCENT TO ADAM
We have seen how following the negative gradient in continuous-time leads to convergence to a local minimum.
Implementing infinitely small updates in continuous time on our discrete computers is not feasible, however, and thus we need a discrete-time algorithm for optimisation. Discretising continuous-time flows is a heavily studied topic with its own area of research in applied mathematics, numerical integration [Hairer et al., 2006].
Numerical integrators provide approaches to approximate the solution of the flow at time $h$ and initial conditions $\vtheta(0)$, which we denote as $\vtheta(h; \vtheta(0))$.
A simple discretisation method is Euler discretisation, which when applied to the NGF leads to
\begin{align}
\vtheta(h; \vtheta(0)) &= \vtheta(0) + \int_{0}^h \dot{\vtheta}(t) d t = \vtheta(0) - \int_{0}^h \nabla_{\vtheta} E(\vtheta(t)) d t \\
&\approx \vtheta(0) - \int_{0}^h \nabla_{\vtheta} E(\vtheta(0)) d t = \vtheta(0) - h \nabla_{\vtheta} E(\vtheta(0)). \label{eq:euler_int}
\end{align}
Setting $\vtheta(0) = \vtheta_{t-1}$ in the above equation leads to the familiar gradient descent update:
\begin{equation}
\vtheta_t = \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1}).
\end{equation}
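A minimal sketch of this update on an illustrative quadratic $E(\vtheta) = \frac{1}{2}\vtheta^T \vA \vtheta$ (the matrix $\vA$ and the learning rate are assumptions chosen for illustration):
\begin{verbatim}
import numpy as np

# Illustrative objective E(theta) = 0.5 * theta^T A theta with A positive definite,
# so grad E = A theta and the unique minimum is theta = 0.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad_E(theta):
    return A @ theta

h = 0.1                          # learning rate (Euler step size)
theta = np.array([2.0, -1.5])    # initial parameters

for t in range(100):
    theta = theta - h * grad_E(theta)    # gradient descent update

print(theta)   # close to the minimum at the origin for this h; h > 2/3 diverges here
\end{verbatim}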
By assuming that the vector field of the flow—the negative gradient $- \nabla_{\vtheta} E$—does not change over the time interval $h$, we obtained a fast discrete-time algorithm.
However, due to the approximation in Eq (<ref>), following gradient descent is not the same as following the NGF: the numerical integration error leads to a difference between the gradient descent and NGF trajectories. Throughout this thesis we will refer to this difference in trajectories as discretisation drift (often also known as discretisation error).
Due to discretisation drift, there are no guarantees that gradient descent will reach a local minimum. Unlike the NGF,
gradient descent can diverge or converge to saddle points (stationary points that have at least one negative eigenvalue of the Hessian).
We visualise the effects of discretisation drift in Figure <ref>. Discretisation drift can speed up training (Figure <ref>), destabilise it (Figure <ref>) or lead to divergence (Figure <ref>).
Importantly, when using gradient descent (the discrete counterpart of the NGF), the function $E$ can increase for large $h$, leading to training instabilities.
Understanding when gradient descent leads to instabilities can be challenging, due to intricate dependencies between the learning rate and the shape of $E$; this gets compounded in the case of deep learning due to the very high dimensional nature of the parameter space. We will tackle this question later in this thesis.
Recent work has made great strides in understanding the behaviour of gradient descent in deep learning. Questions of study include whether gradient descent converges to local or global minima [Du et al., 2019, Jacot et al., 2018, Du et al., 2018, Bartlett et al., 2018], the prevalence and effect of saddle points [Dauphin et al., 2014, Du et al., 2017],
the interaction between neural architectures and optimisation [Li et al., 2018, Bjorck et al., 2018, Li et al., 2021], the connection between learning rates and Hessian eigenvalues and instabilities [Cohen et al., 2021],
the effect of the learning rate on generalisation [Barrett and Dherin, 2021, Li et al., 2019].
[The importance of the learning rate in gradient descent.]The importance of the learning rate in gradient descent: small learning rates lead to convergence to local minima but converge slowly (fig:dd_intro_1), while larger learning rates can lead to instability (fig:dd_intro_2) and divergence (fig:dd_intro_3).
Rprop and RMSProp. We have derived gradient descent as a discretisation of the NGF; we used the NGF since it decreases the value of the objective function $E$. We note, however, that only the sign of the gradient for each parameter is relevant in order to achieve a descent direction locally. To see why, consider constants $c_i >0$ and the system:
\begin{equation}
\dot{\vtheta_i} = - c_i \nabla_{\vtheta_i} E.
\end{equation}
This flow, like the NGF, also decreases $E$: since $c_i > 0$, we have
\begin{equation}
\frac{d E}{d t} = \sum_i \frac{d E}{d\vtheta_i} \frac{d \vtheta_i}{d t} = \sum_i \frac{d E}{d\vtheta_i} (- c_i \nabla_{\vtheta_i} E) = - \sum_i c_i (\nabla_{\vtheta_i} E)^2\le 0.
\label{eq:c_i_ode}
\end{equation}
This observation is the motivation for Rprop [Riedmiller and Braun, 1993], which instead of using the gradient as the parameter update, uses only its sign: $sign(\nabla_{\vtheta_i} E) = \frac{\nabla_{\vtheta_i} E}{\sqrt{\left(\nabla_{\vtheta_i} E\right)^2}}$. We can think of Rprop as the discretisation of the following flow at each iteration:
\begin{equation}
\dot{\vtheta_i} = - \frac{1}{|\nabla_{\vtheta_i} E(\vtheta_0)|} \nabla_{\vtheta_i} E .
\end{equation}
While this can solve issues with exploding gradients and an imbalance between the magnitude of the updates of different parameters, it can be crude: Rprop cannot distinguish between areas of space where gradients are positive but very small and areas of space with very large gradients; the update in both of these cases would be the learning rate $h$. The solution proposed to this issue was RMSprop [Tieleman and Hinton, 2012], which instead of normalising the gradient of each parameter by $\sqrt{\left(\nabla_{\vtheta_i} E\right)^2}$, normalises the gradient by the square root of a moving average of $\left(\nabla_{\vtheta_i} E\right)^2$ obtained from previous iterations. By using element-wise operations, we can write the RMSprop update as
\begin{align}
\vv_t &= \beta \vv_{t-1} + (1-\beta) \nabla_{\vtheta} E(\vtheta_{t-1})^2; \hspace{5em}
\vtheta_t = \vtheta_{t-1} - h \frac{\nabla_{\vtheta} E(\vtheta_{t-1})}{\sqrt{\vv_t}},
\end{align}
where $\beta$ is a hyperparameter often set to values in $[0.9, 0.999]$.
Since the denominator is positive, the RMSprop update has the same sign as the gradient, but a different magnitude. The use of moving averages can dampen the effect of a large gradient, which would lead to instabilities in gradient descent, but still accounts for the magnitude of the gradient, not only its sign, unlike Rprop.
RMSprop and its variants are often used in reinforcement learning [Mnih et al., 2015, Kearney et al., 2018, Hessel et al., 2018].
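A minimal sketch of the RMSprop update above, reusing the illustrative quadratic from the earlier gradient descent sketch; the hyperparameters and the small constant added to the denominator for numerical stability are assumptions:
\begin{verbatim}
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad_E(theta):
    return A @ theta

h, beta, eps = 0.01, 0.9, 1e-8
theta = np.array([2.0, -1.5])
v = np.zeros_like(theta)        # moving average of the element-wise squared gradient

for t in range(500):
    g = grad_E(theta)
    v = beta * v + (1 - beta) * g ** 2
    theta = theta - h * g / (np.sqrt(v) + eps)   # same sign as g, rescaled magnitude

print(theta)   # ends near the minimum, oscillating within roughly h of it
\end{verbatim}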
Momentum. Moving averages are an important staple of optimisation algorithms, beyond adjusting the magnitude of the local gradient, as we have seen with RMSprop. The idea behind momentum algorithms is to speed up or stabilise training by using previous iteration gradients.
To provide intuition regarding momentum, we note that along a trajectory towards a local minimum the sign of the gradient will be the same throughout that trajectory.
Let $\vtheta^*$ be a local minimum of $E(\vtheta)$ and $\vtheta_{t+1}$ and $\vtheta_t$ iterates along a trajectory towards $\vtheta^*$, and consider a parameter index $i$ such that $sign(\vtheta_{t+1,i} - \vtheta^*_{i}) = sign(\vtheta_{t, i} - \vtheta^*_{i})$; this encodes the assumption that the trajectory moves towards $\vtheta^*$ without overshooting it in dimension $i$. But $sign(\vtheta_{t+1, i} - \vtheta^*_i) = sign(\nabla_{\vtheta} E (\vtheta_t)_i)$, since along dimension $i$ the gradient points away from the minimum. Thus, $sign(\nabla_{\vtheta} E (\vtheta_t)_i)=sign( \nabla_{\vtheta} E(\vtheta_{t+1})_i)$. This tells us that as long as a trajectory is not jumping over a local minimum in a direction, the sign of the gradient in that direction does not change.
Hence, one can speed up training by taking bigger steps in that direction by accounting for the previous updates; alternatively this is often framed as an approach to speed up training in areas of low curvature (where by definition the gradient does not change substantially) [Sutskever et al., 2013].
This intuition suggests the following algorithm [Polyak, 1964]:
\begin{align}
\vtheta_t = \vtheta_{t-1} - h \sum_{i=1}^t \beta^{i-1} \nabla_{\vtheta} E(\vtheta_{t-i}),
\end{align}
where the update is given by a geometrically weighted sum of the gradients at previous iterations, and $\beta$ is the decay rate, a hyperparameter with $\beta < 1$.
We can rewrite the momentum algorithm in its commonly known form:
\begin{align}
\vv_t &= \beta \vv_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1}); \hspace{10em}
\vtheta_t = \vtheta_{t-1} + \vv_t .
\end{align}
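A minimal sketch of the momentum update in its commonly known form above, again on the illustrative quadratic used earlier; the learning rate and decay rate are assumptions:
\begin{verbatim}
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad_E(theta):
    return A @ theta

h, beta = 0.1, 0.9
theta = np.array([2.0, -1.5])
v = np.zeros_like(theta)    # velocity: geometrically weighted sum of past gradients

for t in range(300):
    v = beta * v - h * grad_E(theta)
    theta = theta + v

print(theta)   # converges for this choice of h and beta, though with oscillations
\end{verbatim}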
Traditionally momentum has been seen as a discretisation of the second order flow
\begin{align}
{\ddot{\vtheta}} + c_v \dot{\vtheta} = - \nabla_{\vtheta} E (\vtheta),
\end{align}
from where the vector $\vv$ can be introduced:
\begin{align}
{\dot{\vv}} = - c_v \vv - \nabla_{\vtheta} E (\vtheta); \hspace{10em}
{\dot{\vtheta}} = \vv
\end{align}
which, when setting $c_v = \frac{1-\beta}{h}$ and Euler discretised with learning rates $h$ and $1$, leads to the discrete updates above. Momentum can also be seen as a discretisation of the first-order continuous-time flow, by combining implicit and explicit Euler discretisation [Kunin et al., 2021]:
\begin{equation}
\dot{\vtheta} = -\frac{1}{1 - \beta} \nabla_{\vtheta} E .
\label{eq:momentum_ode}
\end{equation}
Since $\beta < 1 \implies \frac{1}{1 - \beta} > 1$, this is in line with our previous intuition of speeding up training. As with gradient descent, however, this intuition does not account for the effect of large learning rates leading to discretisation errors; while following the flow in Eq (<ref>) always decreases $E$, following the trajectory of the momentum optimiser instead loses that guarantee.
[$\beta=0.9$ and small $h$.]
[$\beta=0.9$ and large $h$.]
[$\beta=0.5$ and large $h$.]
[The importance of the learning rate and decay rate in gradient descent with momentum.]The importance of the learning rate $h$ and decay rate $\beta$ in gradient descent with momentum. fig:mom1: momentum can speed up training but can also introduce instabilities when the gradient changes direction; the learning rate used is the same as in Figure <ref>, where gradient descent converged to the minimum without instabilities. fig:mom2 and fig:mom3: for learning rates where gradient descent diverges, as we observed in Figure <ref>, momentum can stabilise training by dampening the effect of local large gradient magnitudes and large learning rates; from this simple example one can already observe the intricacies of the interactions of decay rates and learning rates when momentum is used.
We show the effect of $\beta$ and $h$ on the optimisation trajectory in Figure <ref>.
While momentum can speed up training and reach a neighbourhood of a local minimum quicker than vanilla gradient descent, for large decay rates $\beta$ it can lead to large updates that move further away from the equilibrium. We show this effect
in Figure <ref>, where we observe that despite using a small learning rate, with a high decay rate $\beta$ momentum can oscillate around the minimum significantly before reaching it, while gradient descent with the same learning rate does not oscillate around the minimum (Figure <ref>). This effect can be mitigated by using a smaller decay rate, as exemplified in Figure <ref>.
Dampening the strength of the current gradient update through momentum can also stabilise training, as we show in Figure <ref>. While for the same learning rate gradient descent diverges (Figure <ref>), momentum does not, since the large local gradients moving away from the equilibrium are dampened by the moving averages obtained from past gradients. For quadratic functions, this difference between gradient descent and momentum can be seen in the smallest learning rate which leads to divergence: while gradient descent diverges if $h > 2/\lambda_0$ where $\lambda_0$ is the largest eigenvalue of the Hessian $\nabla_{\vtheta}^2 E$, for momentum divergence occurs when $h > 2/\lambda_0 + 2\beta/\lambda_0\ge 2/\lambda_0$ since $\beta \ge 0$ [Cohen et al., 2021]. We will come back to the importance of the learning rate relative to the Hessian eigenvalues in later chapters.
Gradient descent with momentum is still very much used for deep learning optimisation and has been credited with increased generalisation performance compared to other optimisers [Gupta et al., 2021, Zhou et al., 2020], though recent work has shown that other methods can outperform momentum if well tuned [Kingma and Ba, 2015].
The Adam optimiser [Kingma and Ba, 2015], perhaps the most popular optimisation algorithm in deep learning, combines some of the previously discussed insights: it uses momentum and an RMSprop-like approach of dividing by a moving average of the square gradient (all operations below are element-wise):
\begin{align}
\vm_t &= \beta_1 \vm_{t-1} + (1 - \beta_1) \nabla_{\vtheta}E(\vtheta_{t-1})\\
\vv_t &= \beta_2 \vv_{t-1} + (1 - \beta_2) \nabla_{\vtheta}E(\vtheta_{t-1})^2\\
\vtheta_t &= \vtheta_{t-1} - h \frac{\frac{1}{1 - \beta_1^t} \vm_t}{\sqrt{\frac{1}{1 - \beta_2^t}\vv_t} + \epsilon},
\label{eq:last_adam}
\end{align}
where $\beta_1, \beta_2$ and $\epsilon$ are hyperparameters usually set to $\beta_2 = 0.999$ and $\beta_1 \in [0, 0.9]$ and $\epsilon \in [10^{-8}, 10^{-1}]$ with the default at $10^{-8}$.
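A minimal sketch of the Adam update in the equations above; the illustrative quadratic and the hyperparameter values (chosen from the commonly used ranges just mentioned) are assumptions:
\begin{verbatim}
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

def grad_E(theta):
    return A @ theta

h, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8
theta = np.array([2.0, -1.5])
m = np.zeros_like(theta)    # moving average of the gradient (first moment)
v = np.zeros_like(theta)    # moving average of the squared gradient (second moment)

for t in range(1, 501):
    g = grad_E(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)        # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - h * m_hat / (np.sqrt(v_hat) + eps)

print(theta)
\end{verbatim}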
The division in Eq (<ref>) is obtained using a bias correction argument for the moving averages. To see this, we write:
\begin{align}
\vm_t &= (1 - \beta_1) \sum_{i=1}^t \beta_1^{i-1} \nabla_{\vtheta}E(\vtheta_{t-i}).
\label{eq:adam_m_expanded}
\end{align}
The authors make the argument that if the gradients at iteration $i$, $\nabla_{\vtheta}E(\vtheta_{i})$, are drawn from the distribution $p_{g,i}$, then
\begin{align}
\mathbb{E}_{p_m} [\vm_t] &= (1 - \beta_1) \sum_{i=1}^t \beta_1^{i-1} \mathbb{E}_{p_{g, i}} \left[\nabla_{\vtheta}E(\vtheta_{t-i})\right]\\
& = (1 - \beta_1^{t}) \mathbb{E}_{p_g} \left[\nabla_{\vtheta}E(\vtheta_{t-1})\right] + \vxi,
\end{align}
where $\vxi$ is a bias term accounting for the changes in the expected value of the gradients at different iterations; it will be $\mathbf{0}$ if the gradient mean does not change between iterations, and otherwise it is expected to be small due to the decaying contribution of older gradients.
Thus, to ensure the expected value of the moving average stays the same as the gradient and does not decrease steadily as training progresses, the division in Eq (<ref>) is used; a similar argument can be made for the second moment. This ensures that, in expectation, the update (if a small $\epsilon$ is used) is approximately of magnitude $h$ for each parameter. We note that while the assumption that the bias $\vxi$ is small does not hold for certain areas of training, such as when the gradient changes direction due to instability around an equilibrium, the above argument holds in areas of low curvature (i.e. areas of space where the gradient does not change substantially); this is something we will use later.
By investigating the commonly used Adam hyperparameters, $\beta_1 \in [0, 0.9]$ and $\beta_2 = 0.999$, we get intuition about its behaviour. Since $\beta_1$ is smaller than $\beta_2$, Adam is more sensitive to the sign of the gradient (controlled by $\beta_1$) than its magnitude (controlled by $\beta_2$). Figures <ref> and <ref> show that the exponential moving averages in the denominator of the Adam update can dampen large updates compared to momentum (the corresponding momentum figures are shown in Figure <ref> and <ref>), but oscillations around the minimum still occur.
When a low decay rate for the numerator is used, such as $\beta_1 = 0.5$, in conjunction with a large $h$, instabilities can occur if small gradients are followed by a large gradient, since $\vm_t$ will be large but $\vv_t$ will remain small due to $\beta_2 = 0.999$. We see this in Figure <ref>, which we can contrast with momentum for $\beta=\beta_1$ and the same learning rate $h$ in Figure <ref>: momentum exhibits stable training, and does not exit the area around the equilibrium. Relatedly, the use of moving averages in the denominator $\vv_t$ has been linked to the lack of convergence in Adam for simple online convex optimisation problems [Reddi et al., 2019].
[$\beta_1=0.9$, small $h$, $\epsilon=10^{-8}$.]
[$\beta_1=0.9$, large $h$, $\epsilon=10^{-8}$.]
[$\beta_1=0.5$, large $h$, $\epsilon=10^{-8}$.]
[$\beta_1=0.9$, small $h$, $\epsilon=10^{-2}$.]
[$\beta_1=0.9$, large $h$, $\epsilon=10^{-2}$.]
[$\beta_1=0.5$, large $h$, $\epsilon=10^{-2}$.]
[The effect of hyperparameters in Adam.]The importance of the Adam hyperparameters: learning rate $h$ and decay rate $\beta_1$ and $\epsilon$. We use the default $\beta_2 = 0.999$. Top row: the update normalisation in Adam can stabilise training but can also lead to instability compared to momentum; this row uses $\epsilon=10^{-8}$, the default value and the other hyperparameters are the same as in Figure <ref> which shows the corresponding momentum trajectories. Bottom row: while keeping other hyperparameters fixed, changing $\epsilon$ to $10^{-2}$ leads to vastly different behaviours and can stabilise training by avoiding dividing by small values.
The hyperparameter $\epsilon$ does not change the sign of the update, but it can change its magnitude. Specifically, a high $\epsilon$ decreases the magnitude of the parameter update. We show the effect of $\epsilon$ in Figure <ref>, where we show that using $\epsilon =10^{-2}$ (bottom row) can lead to more stable trajectories compared to $\epsilon =10^{-8}$ (top row).
Adam is a very popular optimiser in deep learning, and has been credited with progress in various fields including GANs [Radford et al., 2015], and training modern models such as transformers [Liu et al., 2020, Choi et al., 2019]. The $\epsilon$ hyperparameter in Adam has been noted to be important for certain machine learning applications [Choi et al., 2019], including reinforcement learning, where the $\epsilon$ used is large compared to supervised learning [Bellemare et al., 2017, Hessel et al., 2018].
§ CONTINUOUS-TIME METHODS
We now describe the continuous-time approaches we will use in the rest of this thesis.
§.§ A brief overview of stability analysis
Stability analysis is a tool for continuous-time systems that helps decide whether $\vtheta^* \in \mathbb{R}^D$ is a stable fixed point for a continuous-time flow
\begin{align}
\dot{\vtheta} = f(\vtheta),
\end{align}
where $f: \mathbb{R}^D \rightarrow \mathbb{R}^D$ is the vector field of the flow.
The most used definitions of stability are Lyapunov stability and asymptotic stability. Lyapunov stability requires
\begin{align}
\forall \epsilon > 0, \exists \delta > 0 \hspace{3em} \norm{\vtheta(0) - \vtheta^*} < \delta \implies \ \forall t \ge 0 \ \norm{\vtheta(t) - \vtheta^*} < \epsilon.
\end{align}
Asymptotic stability requires Lyapunov stability and additionally
\begin{align}
\exists \delta > 0 \hspace{3em} \norm{\vtheta(0) - \vtheta^*} < \delta \implies \lim_{t \rightarrow \infty} \norm{\vtheta(t) - \vtheta^*} = 0.
\end{align}
The rate of convergence of asymptotic stability is exponential if
\begin{align}
\exists \delta, C, \alpha > 0 \hspace{1em} \norm{\vtheta(0) - \vtheta^*} < \delta \implies \forall t \ge 0 \ \norm{\vtheta(t) - \vtheta^*} \le C \norm{\vtheta(0) - \vtheta^*} e^{- \alpha t}.
\end{align}
Exponential asymptotic stability can be established using the following criterion:
(From [Hartman, 1960]) A fixed point $\vtheta^*$ with $ f(\vtheta^*) = \mathbf{0}$ where the Jacobian of the vector field $\jacthetaf(\vtheta^*)$ has eigenvalues with strictly negative real part is a stable fixed point of the flow $\dot{\vtheta} = f(\vtheta)$, with asymptotic exponential convergence. In contrast, if a fixed point has a Jacobian with an eigenvalue with strictly positive real part, then it is an unstable equilibrium and the flow will not converge to it.
Remark <ref> provides us with a useful test to establish convergence based on a fixed point's Jacobian eigenvalues.
The test is inconclusive for equilibria whose Jacobian eigenvalues do not all have strictly negative real part but none have strictly positive real part, i.e. Jacobians where a subset of eigenvalues have strictly negative real part while the others have $0$ real part.
Throughout this thesis, when performing stability analysis of a continuous-time system we will be testing exponential asymptotic stability using the above criterion.
Under the NGF $ \dot{\vtheta} = - \nabla_{\vtheta} E$, the only locally attractive stationary points are local minima of $E$, and all strict local minima are exponentially attractive.
This follows from the application of Remark <ref>. If a stationary point ${\nabla_{\vtheta} E(\vtheta^*) = \mathbf{0}}$ is attractive under the NGF, the eigenvalues of the Jacobian $\jacparam{\vtheta}{(-\nabla_{\vtheta}E)}\left(\vtheta^*\right)$ have strictly negative or 0 real part. Since $\jacparam{\vtheta}{(-\nabla_{\vtheta}E)}\left(\vtheta^*\right) = - \nabla_{\vtheta}^2 E$ is a symmetric matrix, this condition is equivalent to the eigenvalues of $\nabla_{\vtheta}^2 E$ being non-negative, the local minimum condition. Conversely, since strict local minima have strictly positive eigenvalues, the Jacobian has strictly negative eigenvalues at strict local minima, leading to exponential local stability under the NGF.
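As a minimal numerical sketch of this test, the snippet below builds the Jacobian of the NGF at the stationary point of an illustrative quadratic (an assumption made for illustration) and checks the sign of the real parts of its eigenvalues:
\begin{verbatim}
import numpy as np

# Illustrative E(theta) = 0.5 * theta^T A theta: the NGF vector field is
# f(theta) = -A theta, so its Jacobian at the stationary point theta* = 0 is -A.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])                 # symmetric and positive definite

jacobian = -A                              # Jacobian of the NGF vector field at theta*
eigenvalues = np.linalg.eigvals(jacobian)

print(eigenvalues)                         # all have strictly negative real part
print(np.all(eigenvalues.real < 0))        # True: exponentially attractive fixed point
\end{verbatim}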
§.§ Backward error analysis
Backward error analysis (BEA) is a tool in numerical analysis developed to understand the discretisation error of numerical integrators. We now present an overview of how to use BEA in the context of the gradient descent update $\vtheta_t = \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1})$ with $\vtheta \in \mathbb{R}^D$; for a general overview see Hairer et al., 2006.
BEA provides a modified vector field
\begin{equation}
\tilde f_n(\vtheta) = - \nabla_{\vtheta} E + h f_1(\vtheta) + \cdots + h^n f_n(\vtheta)
\end{equation}
by finding functions $f_1:\mathbb{R}^D \rightarrow \mathbb{R}^D$, ..., $f_n: \mathbb{R}^D \rightarrow \mathbb{R}^D$, such that the solution of the modified flow at order $n$, that is,
\begin{align}
\bm{{\dot{\vtheta}}} = - \nabla_{\vtheta} E + h f_1(\vtheta) + \cdots + h^n f_n(\vtheta)
\label{eq:general_modified_vector_field}
\end{align}
follows the discrete dynamics of the gradient descent update with an error $\| \vtheta_t - \vtheta(h)\|$ of order $\mathcal O(h^{n+2})$, where $\vtheta(h)$ is the solution of the modified equation truncated at order $n$ at time $h$ and $\vtheta(0) = \vtheta_{t-1}$.
The full modified vector field with all orders ($n \rightarrow \infty$)
\begin{equation}
\tilde f(\vtheta) = - \nabla_{\vtheta} E + h f_1(\vtheta) + \cdots + h^n f_n(\vtheta) + \cdots
\label{eq:bea_series}
\end{equation}
is usually divergent and only forms an asymptotic expansion. What BEA provides is the Taylor expansion in $h$ of an unknown $h$-dependent vector field $f_h(\vtheta)$ developed at $h=0$:
\begin{equation}
\tilde f(\vtheta) = \textrm{Taylor}_{h=0} f_h(\vtheta).
\end{equation}
Thus, a strategy for finding $f_h$ is to find a series of the form in Eq (<ref>) via BEA and then find the function $f_h$ such that its Taylor expansion in $h$ at 0 results in the found series.
Using this approach we can find the flow $ \dot{\vtheta} = f_h(\vtheta)$ which describes the gradient descent step $\vtheta_t = \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1})$ exactly.
While flows obtained using BEA are constructed to approximate one gradient descent step, the same flows can be used over multiple gradient descent steps as shown in Section <ref> in the Appendix.
Barrett and Dherin, 2021 used this technique to find the $\mathcal{O}(h^2)$ correction term of gradient descent in the single-objective setting. They showed that for a model with parameters $\vtheta$ and loss $E(\vtheta)$, optimised with gradient descent $\vtheta_t = \vtheta_{t-1} - h\nabla_{\vtheta} E(\vtheta_{t-1})$, the first-order modified equation is
\begin{align}
\dot{\vtheta} = -\nabla_{\vtheta}E -\frac{h}{2} \nabla_{\vtheta}^2 E \nabla_{\vtheta} E,
\label{eq:first_igr}
\end{align}
which can be written as
\begin{align}
\dot{\vtheta} =-\nabla_{\vtheta} \tilde E(\vtheta) = -\nabla_{\vtheta}\left(E(\vtheta) + \frac h4 \norm{\nabla_{\vtheta} E(\vtheta)}^2\right).
\end{align}
Thus, gradient descent can be seen as implicitly minimising the modified loss
\begin{align}
\tilde E(\vtheta) = E(\vtheta) + \frac h4 \norm{\nabla_{\vtheta} E(\vtheta)}^2.
\label{eq:sup_learning_igr}
\end{align}
This shows that when training models with gradient descent, there is an implicit regularisation effect, dependent on learning rate $h$, which biases learning towards paths with low gradient norms. The authors refer to this phenomenon as `implicit gradient regularisation'; we will thus refer to the flow in Eq (<ref>) as the IGR flow.
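As a minimal check of this correspondence between the modified vector field and the modified loss, the sketch below compares $-\nabla_{\vtheta}E -\frac{h}{2} \nabla_{\vtheta}^2 E \nabla_{\vtheta} E$ against a finite-difference gradient of $\tilde E$ on an illustrative quadratic (the quadratic and the finite-difference check are assumptions made for illustration):
\begin{verbatim}
import numpy as np

# Illustrative quadratic E(theta) = 0.5 * theta^T A theta, so grad E = A theta
# and the Hessian is the constant matrix A.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
h = 0.1
theta = np.array([1.0, -2.0])

grad_E = A @ theta
grad_modified = grad_E + 0.5 * h * A @ grad_E   # gradient of E + (h/4) ||grad E||^2

def E_tilde(th):
    g = A @ th
    return 0.5 * th @ A @ th + 0.25 * h * g @ g

# Central finite differences of the modified loss.
eps = 1e-6
fd = np.array([(E_tilde(theta + eps * e) - E_tilde(theta - eps * e)) / (2 * eps)
               for e in np.eye(2)])

print(grad_modified)
print(fd)              # agrees with grad_modified up to finite-difference error
\end{verbatim}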
§.§ BEA proofs
The general structure of BEA proofs is as follows: start with a Taylor expansion in $h$ of the modified flow in Eq (<ref>); write each term in the Taylor expansion as a function of $\nabla_{\vtheta} E$ and the desired $f_i$ using the chain rule; group together terms of the same order in $h$ in the expansion; and identify $f_i$ such that all terms of $\mathcal{O}(h^p)$ are 0 for $p \ge 2$, as is the case in the gradient descent update. A formal overview of BEA proofs can be found in Section <ref> in the Appendix.
We now exemplify how to use BEA to find the IGR flow we showed in Eq (<ref>) [Barrett and Dherin, 2021]. Since we are only looking for the first correction term, we only need to find $f_1$.
We perform a Taylor expansion to find the value of $\vtheta(h)$ up to order $\mathcal O(h^{3})$ and then identify $f_1$ from that expression such that the error $\| \vtheta_t - \vtheta(h)\|$ is of order $\mathcal O(h^{3})$.
We have
\begin{align}
\vtheta(h) = \vtheta_{t-1} + h \vtheta^{(1)}(\vtheta_{t-1}) + \frac{h^2}{2} \vtheta^{(2)}(\vtheta_{t-1}) + \mathcal{O}(h^3).
\end{align}
We know by the definition of the modified vector field in Eq (<ref>) that
\begin{align}
\vtheta^{(1)} = - \nabla_{\vtheta} E + h f_1({\vtheta}).
\end{align}
We can then use the chain rule to obtain
\begin{align}
\vtheta^{(2)} = \frac{d \left(- \nabla_{\vtheta} E + h f_1({\vtheta})\right)}{dt} = - \frac{d \nabla_{\vtheta} E}{dt} + \mathcal{O}(h) = \nabla_{\vtheta}^2 E \nabla_{\vtheta} E + \mathcal{O}(h).
\end{align}
Substituting these into the Taylor expansion of $\vtheta(h)$ gives
\begin{align}
\vtheta(h) = \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1}) + h^2 f_1(\vtheta_{t-1}) + \frac{h^2}{2} \nabla_{\vtheta}^2 E (\vtheta_{t-1})\nabla_{\vtheta} E (\vtheta_{t-1})+ \mathcal{O}(h^3).
\end{align}
We can then write
\begin{align}
\vtheta_t - \vtheta(h) &= \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1}) - \vtheta(h) \\
&= - h^2 f_1(\vtheta_{t-1}) - \frac{h^2}{2} \nabla_{\vtheta}^2 E (\vtheta_{t-1})\nabla_{\vtheta} E (\vtheta_{t-1})+ \mathcal{O}(h^3).
\end{align}
For the error to be of order $\mathcal{O}(h^3)$ the terms of order $\mathcal{O}(h^2)$ have to be $\mathbf{0}$.
This entails
\begin{align}
f_1(\vtheta_{t-1}) = -\frac{1}{2} \nabla_{\vtheta}^2 E(\vtheta_{t-1}) \nabla_{\vtheta} E(\vtheta_{t-1}),
\label{eq:igr_proof_value_iterate}
\end{align}
from which we conclude that $f_1 = -\frac{1}{2} \nabla_{\vtheta}^2 E \nabla_{\vtheta} E$, leading to Eq (<ref>).
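The order of the approximation can also be checked numerically. The sketch below (the 1-D loss $E(x) = x^4/4$ and the fine Euler integration used as a stand-in for the exact flow solution are assumptions made for illustration) shows that one gradient descent step stays within $\mathcal{O}(h^3)$ of the IGR flow, while it is only within $\mathcal{O}(h^2)$ of the NGF:
\begin{verbatim}
import numpy as np

# Illustrative 1-D loss E(x) = x^4 / 4, so E'(x) = x^3 and E''(x) = 3 x^2.
def grad_E(x):
    return x ** 3

def hess_E(x):
    return 3 * x ** 2

def integrate(vector_field, x0, T, n_steps=100_000):
    # Fine Euler integration, used as a stand-in for the exact solution of the flow.
    x, dt = x0, T / n_steps
    for _ in range(n_steps):
        x = x + dt * vector_field(x)
    return x

x0 = 1.0
for h in [0.1, 0.05, 0.025]:
    gd_step = x0 - h * grad_E(x0)                   # one gradient descent step
    ngf = integrate(lambda x: -grad_E(x), x0, h)    # NGF solution at time h
    igr = integrate(lambda x: -grad_E(x) - 0.5 * h * hess_E(x) * grad_E(x), x0, h)
    print(h, abs(gd_step - ngf), abs(gd_step - igr))
# Halving h divides the first error by roughly 4 (order h^2) and the second error
# by roughly 8 (order h^3).
\end{verbatim}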
§ MULTIPLE OBJECTIVE OPTIMISATION
We have thus far discussed optimisers for single-objective problems, which encompass many machine learning use cases, including supervised learning and certain formulations of reinforcement learning and generative modelling.
There are, however, use cases in machine learning beyond single-objective problems, including GANs and types of reinforcement learning, such as actor-critic frameworks.
These settings are often formulated via nested minimisation problems:
\begin{align}
\min_{\vtheta} E_{\vtheta}(\arg \min_{\vphi} E_{\vphi}(\vphi, \vtheta), \vtheta).
\label{eq:nested_op}
\end{align}
Solving nested optimisation problems is computationally expensive, as it would require solving the inner optimisation problem for each parameter update of the outer optimisation problem. Thus, in deep learning practice nested optimisation problems are often translated into problems of the form
\begin{align}
&\min_{\vphi} E_{\vphi}(\vphi, \vtheta) \label{eq:prob_games_1} \\
&\min_{\vtheta} E_{\vtheta}(\vphi, \vtheta) \label{eq:prob_games_2}.
\end{align}
Multi-objective optimisation problems of this kind are often cast as a game. Each player has a set of actions to take, in the form of an update for their parameters, $\vphi$ for the first player, and $\vtheta$ for the second. The loss functions are the negatives of the players' respective payoffs. As is common in the literature, we will often adopt the game theory nomenclature in this setting;
this connection allows for a bridge between the machine learning and the game theory communities.
When translating problems of the form in Eq (<ref>) into optimisation procedures that preserve the nested structure of a game, it is common to use one of the single-objective algorithms discussed in Section <ref> to update the first player's parameters, followed by the use of the same algorithm to update the second player's parameters. This approach is often referred to as alternating updates; we summarise it in Algorithm <ref>. Optimisation approaches that further account for the nested game structure do exist [Metz et al., 2017], but they tend to be computationally expensive and thus less used in practice.
In some settings, there is no nested structure or the nested structure is further discarded and both players are updated simultaneously, as we show in Algorithm <ref>.
Alternating updates
  Require: $k \ge 0$, the number of first player updates for each second player update
  Require: a choice of optimiser_routine($E$, current_value); e.g. $gd(E, \vtheta_t) = \vtheta_t - h \nabla_{\vtheta} E(\vtheta_t)$
  Initialise $\vphi_0, \vtheta_0$; $t \gets 0$
  repeat
    $\vphi \gets \vphi_t$
    for $i$ in $\{1, ..., k\}$
      $\vphi \gets$ optimiser_routine($E_{\vphi}(\cdot, \vtheta_t)$, $\vphi$)
    $\vphi_{t+1} \gets \vphi$
    $\vtheta_{t+1} \gets$ optimiser_routine($E_{\vtheta}(\vphi_{t+1}, \cdot)$, $\vtheta_t$)
    $t \gets t+1$
Simultaneous updates
  Require: a choice of optimiser_routine($E$, current_value); e.g. $gd(E, \vtheta_t) = \vtheta_t - h \nabla_{\vtheta} E(\vtheta_t)$
  Initialise $\vphi_0, \vtheta_0$; $t \gets 0$
  repeat
    $\vphi_{t+1} \gets$ optimiser_routine($E_{\vphi}(\cdot, \vtheta_t)$, $\vphi_t$)
    $\vtheta_{t+1} \gets$ optimiser_routine($E_{\vtheta}(\vphi_t, \cdot)$, $\vtheta_t$)
    $t \gets t+1$
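As a minimal sketch of the two schemes with gradient descent as the optimiser_routine, consider the illustrative bilinear zero-sum game $E(\vphi, \vtheta) = \vphi \vtheta$ with scalar players, where the first player minimises $E$ and the second maximises it; the game and the learning rate are assumptions made for illustration.
\begin{verbatim}
import numpy as np

h = 0.1    # learning rate used by both players

def grad_phi(phi, theta):      # dE/dphi for E(phi, theta) = phi * theta
    return theta

def grad_theta(phi, theta):    # dE/dtheta
    return phi

def simultaneous(phi, theta, steps):
    for _ in range(steps):
        phi, theta = phi - h * grad_phi(phi, theta), theta + h * grad_theta(phi, theta)
    return phi, theta

def alternating(phi, theta, steps):
    for _ in range(steps):
        phi = phi - h * grad_phi(phi, theta)         # first player update
        theta = theta + h * grad_theta(phi, theta)   # second player sees the new phi
    return phi, theta

for steps in [100, 500]:
    print(steps,
          np.hypot(*simultaneous(1.0, 1.0, steps)),
          np.hypot(*alternating(1.0, 1.0, steps)))
# For this bilinear example the simultaneous iterates spiral away from the
# equilibrium (0, 0), while the alternating iterates remain bounded.
\end{verbatim}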
Simultaneous gradient descent updates derived for solving the problem in Eqs (<ref>) and (<ref>)—using gradient descent as the optimiser in Algorithm <ref>—can be seen as the Euler discretisation of the continuous-time flow
\begin{align}
&\dot{\vphi} = -\nabla_{\vphi} E_{\vphi} \label{eq:gen_games_ode1} \\
&\dot{\vtheta} = -\nabla_{\vtheta} E_{\vtheta} \label{eq:gen_games_ode2}.
\end{align}
Analysis of these game dynamics is more challenging than in the single objective case (shown in Eq (<ref>)), as we can no longer ascertain that following such flows minimises the corresponding loss functions since
\begin{align}
\frac{dE_{\vphi}}{d t} = \frac{d \vphi}{d t}^T \nabla_{\vphi} E_{\vphi} + \frac{d \vtheta}{d t}^T \nabla_{\vtheta} E_{\vphi}
= - \norm{\nabla_{\vphi} E_{\vphi}}^2 - \nabla_{\vtheta} E_{\vtheta}^T \nabla_{\vtheta} E_{\vphi},
\end{align}
which need not be negative. Furthermore, it is unclear how to transfer the alternating structure that preserves player order from Algorithm <ref> into a continuous-time flow; we will address this in a later chapter. Nonetheless, the flows in Eqs (<ref>) and (<ref>) are often studied to understand the behaviour of gradient descent in nested optimisation and games [Nagarajan and Kolter, 2017, Balduzzi et al., 2018, Jin et al., 2020, Qin et al., 2020].
The challenges pertaining to analysing optimisation in games get further compounded when considering the notions of equilibria. The natural translation of local minimum from single-objective optimisation to games is a local Nash equilibrium. A local Nash equilibrium is a point in parameter space $(\vphi^*, \vtheta^*)$ such that no player has an incentive to deviate from $(\vphi^*, \vtheta^*)$ to another point in its local neighbourhood; that is $\vphi^*$ is a local minimum for $E_{\vphi}(\cdot, \vtheta^*)$ and $\vtheta^*$ is a local minimum for $E_{\vtheta}(\vphi^*, \cdot)$. Equivalently, $\nabla_{\vphi} E_{\vphi}(\vphi^*, \vtheta^*) = \mathbf{0}$ and $\nabla_{\vtheta} E_{\vtheta}(\vphi^*, \vtheta^*) = \mathbf{0}$ and $\nabla_{\vphi}^2
E_{\vphi}(\vphi^*, \vtheta^*)$ and $\nabla_{\vtheta}^2 E_{\vtheta}(\vphi^*, \vtheta^*)$ are positive semi-definite, i.e. do not have negative eigenvalues.
While in the single-objective case, local minima are the only exponentially attractive equilibria of the NGF (see Corollary <ref>),
this correspondence between desired equilibrium and attractive dynamics does not generally occur in games. By applying stability analysis (Remark <ref>), we know that the system in Eqs (<ref>)
and (<ref>) is attracted to equilibria for which the Jacobian
\begin{align}
\vJ =
\begin{bmatrix}
-\jactwoparam{\vphi}{\vphi}{E_{\vphi}} & - \jactwoparam{\vtheta}{\vphi}{E_{\vphi}} \\
- \jactwoparam{\vphi}{\vtheta}{E_{\vtheta}} & -\jactwoparam{\vtheta}{\vtheta}{E_{\vtheta}}
\end{bmatrix}
\end{align}
evaluated at $(\vphi^*, \vtheta^*)$ has only eigenvalues with strictly negative real part.
In contrast, a Nash equilibrium requires that the block diagonals of $\vJ$, namely $-\nabla_{\vphi}^2 E_{\vphi}(\vphi^*, \vtheta^*)$ and $-\nabla_{\vtheta}^2 E_{\vtheta}(\vphi^*, \vtheta^*)$, have non-positive eigenvalues (strictly negative eigenvalues for a strict Nash equilibrium).
Since for an unconstrained matrix there is no relationship between its eigenvalues and the eigenvalues of its block diagonals, there is no inclusion relationship between Nash equilibria and locally attractive equilibria of the underlying continuous game dynamics that applies to all two-player games.
For zero-sum games, i.e. games where $E_{\vphi} = - E_{\vtheta} = E$, a strict Nash equilibrium will be locally attractive when following the flow in Eqs (<ref>)
and (<ref>). To see why, we have to examine the eigenvalues of the flow's Jacobian
\begin{align}
\vJ =
\begin{bmatrix}
- \jactwoparam{\vphi}{\vphi}{E} & - \jactwoparam{\vtheta}{\vphi}{E} \\
\jactwoparam{\vphi}{\vtheta}{E} & \jactwoparam{\vtheta}{\vtheta}{E}
\end{bmatrix}
\end{align}
evaluated at a strict Nash equilibrium of such game.
Due to the zero-sum formulation, the off-diagonal blocks of $\vJ$ satisfy $- \jactwoparam{\vtheta}{\vphi}{E} = - (\jactwoparam{\vphi}{\vtheta}{E})^T$, and we have
\begin{align}
\vJ + \vJ^T = \begin{bmatrix}
- 2 \nabla_{\vphi}^2 E & \mathbf{0} \\
\mathbf{0} & 2 \nabla_{\vtheta}^2 E
\end{bmatrix}.
\end{align}
Thus, $\vJ + \vJ^T$ at the Nash equilibrium $(\vphi^*, \vtheta^*)$ is a symmetric block diagonal matrix with negative definite blocks, $-2\nabla_{\vphi}^2 E(\vphi^*, \vtheta^*)$ and $2\nabla_{\vtheta}^2 E(\vphi^*, \vtheta^*)$, and thus has negative eigenvalues. From here it follows that $\vJ$ has eigenvalues with strictly negative real part: for any eigenvalue $\lambda$ of $\vJ$ with unit complex eigenvector $\vv$, we have $2\Re(\lambda) = \vv^* (\vJ + \vJ^T) \vv < 0$. Thus $\vJ$ satisfies the condition of Remark <ref>.
We note that the reverse does not hold, as being locally attractive in this case is a weaker condition than that of a Nash equilibrium.
This observation, together with the realisation that in deep learning certain games might not have Nash equilibria [Farnia and Ozdaglar, 2020], has led to the search for other equilibrium notions, including local stability and Stackelberg equilibria [Fiez et al., 2020, Berard et al., 2020, Wang et al., 2019].
§ CONCLUSION
In this section we have introduced the main optimisation algorithms used in deep learning, together with their main challenges and the analytical tools we will employ in the rest of this thesis.
CHAPTER: A NEW CONTINUOUS-TIME MODEL OF GRADIENT DESCENT
CHAPTER: CONTINUOUS TIME MODELS OF OPTIMISATION IN TWO-PLAYER GAMES
§ A COMMENT ON DIFFERENT LEARNING RATES
When considering different learning rates for the two players, we have thus far kept the consistency between physical time and learning rates. That is, we assumed the gradient descent updates are obtained by Euler discretisation with learning rates $\lrp h$ and $\lrt h$ of the flow
\begin{align}
\dot{\vphi} &= f( \vphi, \vtheta) \label{eq:old_two_lr_ode1} \\
\dot{\vtheta} &= g( \vphi, \vtheta) \label{eq:old_two_lr_ode2}.
\end{align}
We then used BEA to find modified flows such that after time $\lrp h$ and $\lrt h$ the error between the modified flows and the discrete updates is $\mathcal{O}(h^3)$.
Alternatively, we can consider the gradient descent updates as the result of Euler discretisation with learning rate $h$ of the flow
\begin{align}
\dot{\vphi} &= \lrp f( \vphi, \vtheta) \label{eq:new_two_lr_ode1} \\
\dot{\vtheta} &= \lrt g( \vphi, \vtheta) .
\label{eq:new_two_lr_ode2}
\end{align}
While in this approach the learning rate loses the connection with physical time for each player, it keeps the consistency of time between two players: after one iteration the same amount of time, $h$, has passed for both players. With this in mind, we will call this `the same-physical time between players' approach.
Considering gradient descent as a discretisation of Eqs (<ref>) and (<ref>) instead of Eqs (<ref>) and (<ref>) has implications for the analysis of the behaviour of gradient descent using continuous-time tools such as stability analysis, as the two systems can lead to different results based on the two learning rates. Using Eqs (<ref>) and (<ref>) can also lead to a different perspective from the loss minimisation view. For the game with $f = - \nabla_{\vphi} E$ and $g = \nabla_{\vtheta} E$, the second system leads to the interpretation that the first player is minimising function $\lrp E(\vphi, \vtheta)$ while the second player is maximising function $\lrt E(\vphi, \vtheta)$. This is different than the zero-sum formulation of the first player minimising $E(\vphi, \vtheta)$ while the second player is maximising it, resulting from our previous interpretation.
The same-physical time between players approach also presents the following challenge:
Given learning rates $\lrp h$ and $\lrt h$ for the two-players, $h$ cannot be identified exactly and has to be chosen by the user. The choice of $h$ leads to different flows describing the dynamics of the discrete updates. This is in contrast with the approach taken previously, which only depends on $\lrp$ and $\lrt$ through the learning rates $\lrp h$ and $\lrt h$.
Following the same-physical time between players approach also has implications for the use of BEA to find flows with error $\mathcal{O}(h^3)$ after one Euler update. Luckily, to find the modified flows using BEA under the same-physical time approach, we simply have to apply our results from Section <ref> for the system in Eqs (<ref>) and (<ref>) by using their corresponding vector fields and learning rate $h$. We do so, and obtain:
The discrete simultaneous Euler updates
in Eqs (<ref>) and (<ref>) follow the continuous system
\begin{align}
\dot{\vphi} &= \lrp f - \frac{h}{2} \left(\lrp^2 \jacphif f + \lrp \lrt \jacthetaf g \right) \\
\dot{\vtheta} &= \lrt g - \frac{h}{2} \left(\lrp \lrt \jacphig f + \lrt^2 \jacthetag g \right)
\end{align}
with an error of size $\mathcal O(h^3)$ after one update step.
The discrete alternating Euler updates in (<ref>) and (<ref>) follow the continuous system
\begin{align}
\dot{\vphi} &= \lrp f - \frac {h}{2} \left(\frac{\lrp^2}{m}\jacphif f + \lrp \lrt \jacthetaf g \right) \\
\dot{\vtheta} &= \lrt g - \frac{h}{2} \left(- \lrt \lrp \jacphig f + \frac{\lrt^2}{k} \jacthetag g \right)
\end{align}
with an error of size $\mathcal O(h^3)$ after one update step.
We observe that in both modified flows each player's vector field depends on the learning rate of the other player, unlike in our previous results, where for simultaneous updates the vector field of each player depended only on its own learning rate.
When interpreting these results to obtain implicit regularisation effects as we have done previously in zero-sum and common-payoff games, the resulting coefficients of the implicit regularisers are different than those obtained in Section <ref>. For zero-sum games, we obtain the following corollaries:
In a zero-sum two-player differentiable game, simultaneous gradient descent updates—as described in Eqs (<ref>) and (<ref>)—follows a gradient flow given by the modified losses
\begin{align}
\tilde E_{\vphi}&= - \lrp E + \frac{\lrp^2 h}{4} \norm{\nabla_{\vphi} E}^2 - \frac{\lrp \lrt h}{4} \norm{\nabla_{\vtheta} E}^2
\label{eq:min_min_simultaneous_zero_sum1-diff-lr} \\
\tilde E_{\vtheta}&= \lrt E - \frac{\lrp \lrt h}{4} \norm{\nabla_{\vphi} E}^2 + \frac{\lrt^2 h}{4} \norm{\nabla_{\vtheta} E}^2
\label{eq:min_min_simultaneous_zero_sum2-diff-lr}
\end{align}
with an error of size $\mathcal O(h^3)$ after one update step.
In a zero-sum two-player differentiable game, alternating gradient descent—as described in Eqs (<ref>) and (<ref>)—follows a gradient flow given by the modified losses
\begin{align}
\tilde E_{\vphi}&= -\lrp E + \frac{\lrp^2 h}{4m} \norm{\nabla_{\vphi} E}^2 - \frac{\lrp \lrt h}{4} \norm{\nabla_{\vtheta} E}^2
\label{eq:min_min_alt_zero_sum1_diff_lr}
\\
\tilde E_{\vtheta}&= \lrt E + \frac{\lrp \lrt h}{4} \norm{\nabla_{\vphi} E}^2 + \frac{\lrt^2 h}{4k}\norm{\nabla_{\vtheta} E}^2
\label{eq:min_min_alt_zero_sum2_diff_lr}
\end{align}
with an error of size $\mathcal O(h^3)$ after one update step.
From Corollary <ref> one concludes for alternating updates that the second player's interaction always minimises the gradient norm of the first player, which again helps explain the empirically observed stability of alternating updates over simultaneous updates; this is in contrast with our previous interpretation which suggested that this occurs only for certain learning rate ratios (Corollary <ref>).
Simultaneous updates | First player ($\vphi$) | Second player ($\vtheta$)
Corollary <ref> | $\frac{\lrp^2 h^2}{4} \nabla_{\vphi} \norm{\nabla_{\vtheta} E}^2$ | $\frac{\lrt^2 h^2}{4} \nabla_{\vtheta} \norm{\nabla_{\vphi} E}^2$
Corollary <ref> | $\frac{\lrp \lrt h^2}{4} \nabla_{\vphi} \norm{\nabla_{\vtheta} E}^2$ | $\frac{\lrp \lrt h^2}{4} \nabla_{\vtheta} \norm{\nabla_{\vphi} E}^2$
Alternating updates | First player ($\vphi$) | Second player ($\vtheta$)
Corollary <ref> | $\frac{\lrp^2 h^2}{4} \nabla_{\vphi} \norm{\nabla_{\vtheta} E}^2$ | $\left(\frac{\lrt^2 h}{4} - \frac{2 \lrp \lrt h^2}{4}\right) \nabla_{\vtheta} \norm{\nabla_{\vphi} E}^2$
Corollary <ref> | $\frac{\lrp \lrt h^2}{4} \nabla_{\vphi} \norm{\nabla_{\vtheta} E}^2$ | $-\frac{\lrp \lrt h^2}{4} \nabla_{\vtheta} \norm{\nabla_{\vphi} E}^2$
[Contrasting the implicit regularisation effects of the continuous-time flows proposed in zero-sum games.]The strength of the interaction terms in DD under the different modified continuous-time flows we find for simultaneous and alternating gradient descent in zero-sum games. We obtained the different flows via different approaches of handling different learning rates for the two players, namely $\lrp h \ne \lrt h$. We do not write the self terms here as they are the same for both approaches. We write the implicit regularisation coefficients as observed in the continuous-time flow displacement for one gradient descent update, rather than the modified loss, in order to account for the different learning rates under the two interpretations (which are $\lrp h$ and $h$ for the first player, respectively). For simultaneous updates, the insight that the interaction terms for each player leads to a maximisation of the other player's gradient norm remains true regardless of interpretation, though the strength of the regularisation changes in the two interpretations. For alternating updates, the different interpretations suggest not only different regularisation strengths but also signs, with the result from Corollary <ref> showing that for alternating updates, the interaction term of the second player always minimises the gradient norm of the first player, as opposed to the result in Corollary <ref>, for which this effect depends on the learning rate ratio $\lrp/\lrt$. Both results are consistent in explaining why alternating updates are more stable than simultaneous updates, since in both cases for alternating updates the second player's interaction term has a smaller coefficient when maximising the first player's gradient norm compared to that of simultaneous updates, or is minimising the first player's gradient norm.
If we would like to implement explicit regularisation methods to cancel the implicit regularisers obtained above using the same-physical time between players approach, we have to discretise the modified losses with learning rate $h$; this is opposed to using learning rates $\lrp h$ and $\lrt h$ to discretise the modified flows obtained in Section <ref>. For the self terms, no difference is obtained compared to the previous interpretation; under both approaches of tackling different learning rates, the strength of the self terms in the parameter updates is proportional to $\lrp^2 h^2$ and $\lrt^2 h^2$, respectively. For interaction terms, however, the coefficients required in the parameter update are changed. We highlighted these changes in Table <ref>; these coefficients also show that while in order to find the modified flow in the same-physical time between players approach we need to choose $\lrp$, $\lrt$ and $h$, for explicit regularisation we only need the learning rates of the two players.
We can thus investigate the effect of explicit regularisation to cancel the interaction terms under the same-physical time approach in zero-sum games. As before, we do so by using the generator saturating loss in GAN training. We show results in Figures <ref> and <ref> and compare with our previous results from Section <ref>. While for simultaneous updates using the same-physical time approach leads to more hyperparameter settings with increased performance, no significant difference is obtained for alternating updates.
[An alternative interpretation to different learning rates leads to a new set of flows for simultaneous gradient descent updates.]Simultaneous updates in zero-sum games: cancelling the interaction terms under the different interpretations of tackling different learning rates (Corollaries <ref> and <ref>) does not result in a significant empirical difference. We denote the approach of Corollary <ref> as `same physical time', as per its motivation of using the same physical time between the two players.
[An alternative interpretation to different learning rates leads to a new set of flows for alternating gradient descent updates.]Alternating updates in zero-sum games: cancelling the interaction terms under the different interpretations of tackling different learning rates (Corollaries <ref> and <ref>) does not result in a significant empirical difference. We denote the approach of Corollary <ref> as `same physical time', as per its motivation of using the same physical time between the two players.
We conclude by noting that when studying two-player games trained with different learning rates, one can choose between two approaches of constructing corresponding continuous dynamics and modified losses. Both are valid, in that both lead to modified flows with the same error in learning rate, and can be used to understand implicit regularisation effects and the discrepancies between simultaneous and alternating Euler updates. Additional examination is needed to understand how these approaches explain continuous-time behaviour over a larger number of steps; we leave that as future work.
§ EXTENDING THE PF FOR TWO-PLAYER GAMES
In Chapter <ref>, we introduced the principal flow (PF) as a model of single-objective gradient descent. We now briefly show how the same approach can be generalised to simultaneous Euler updates in two-player games, which will allow us to capture discretisation drift effects beyond the $\mathcal{O}(h^3)$ effects we explored in the rest of this chapter. For simplicity, we assume equal learning rates for the two players, though one can use the `same-physical time approach' from the previous section to adapt the results to different learning rates.
The approach we took in the single-objective setting had two steps: first, we developed the principal series with BEA (Theorem <ref>) and second, we used the principal series given by BEA to find the PF based on the eigen-decomposition of the Hessian (Corollary <ref> and Theorem <ref>). The first step can be readily translated to games as it applies to any vector field, using Theorem <ref> in the Appendix, which Theorem <ref> is a corollary of. For completeness, we reproduce Theorem <ref> here:
The modified flow given by BEA with an error of order $\mathcal{O}(h^{p+1})$ to the Euler update ${\vpsi_t = \vpsi_{t-1} + h \gamesvf(\vpsi_{t-1})}$
has the form:
\begin{align}
\dot{\vpsi} = \sum_{n=0}^{p} \frac{(-1)^{n}}{n+1} h^n {(\jacparam{\vpsi}{\gamesvf})}^n \gamesvf + \csecondorderfamv
\end{align}
where $\csecondorderfamv$ denotes a class of functions defined as a sum of individual terms each containing higher than first order derivatives applied to $\gamesvf$.
To use the above result in games, consider the two-player game with the original dynamics in Eqs (<ref>)-(<ref>):
\begin{align}
\dot{\left[ \begin{array}{c}
\vphi \\
\vtheta
\end{array}\right]}
&= \left[ \begin{array}{c}
f \\
g
\end{array}\right].
\end{align}
If we define
\begin{align}
\vpsi
= \left[ \begin{array}{c}
\vphi \\
\vtheta
\end{array}\right]
\hspace{3em}
\gamesvf(\vpsi)
= \left[ \begin{array}{c}
f(\vphi, \vtheta) \\
g(\vphi, \vtheta)
\end{array}\right]
\end{align}
and the negative of the Jacobian as
\begin{align}
\vH(\vphi, \vtheta) = -
\begin{bmatrix}
\jacparam{\vphi}{f} & \jacparam{\vtheta}{f} \\
\jacparam{\vphi}{g} & \jacparam{\vtheta}{g}
\end{bmatrix},
\end{align}
we can substitute this choice of vector field and $\vH$ into Theorem <ref>, and obtain:
The full BEA series constructed from the simultaneous Euler updates (Eqs <ref> and <ref>) with learning rate $h$, which it describes exactly, is of the form
\begin{align}
\dot{\left[ \begin{array}{c}
\vphi \\
\vtheta
\end{array}\right]}
&= \sum_{p=0}^{\infty} \frac{1}{p+1} h^p {\vH(\vphi, \vtheta)}^p \left[ \begin{array}{c}
f \\
g
\end{array}\right] + \csecondorderfam.
\end{align}
We can no longer use the eigen-decomposition of $\vH$, as unlike in the single-objective case ($f = - \nabla_{\vphi} E$ and $g = - \nabla_{\vtheta} E$) we examined in Chapter <ref>, $\vH(\vphi, \vtheta)$ need not be symmetric and thus we can no longer conclude Corollary <ref>. We instead consider the Jordan normal form of $\vH$
\begin{align}
\vH(\vphi, \vtheta) = \vP^{-1} \vJ \vP,
\end{align}
where $\vJ$ is a block diagonal matrix. We then have
\begin{align}
{\vH(\vphi, \vtheta)}^p = \vP^{-1} \vJ^p \vP,
\end{align}
leading to
\begin{align}
\dot{\left[ \begin{array}{c}
\vphi \\
\vtheta
\end{array}\right]}
&= \sum_{p=0}^{\infty} \frac{-1}{p+1} \vP^{-1} (h\vJ)^p \vP \left[ \begin{array}{c}
-f \\
-g
\end{array}\right] + \csecondorderfam,
\end{align}
where we use the negative of the vector field in order to obtain consistency with the PF results in Chapter <ref>, and recover single-objective results if $f = - \nabla_{\vphi} E$ and $g = - \nabla_{\vtheta} E$.
We are interested in finding a form for the series
\begin{align}
\sum_{p=0}^{\infty} \frac{-1}{p+1} (h \vJ)^p ,
\end{align}
which, if it converges, can be written as another block diagonal matrix,
\begin{align}
\frac{1}{h} \log(\vI - h \vJ) \vJ^{-1}.
\end{align}
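This closed form can be checked numerically, assuming SciPy is available; the matrix $\vJ$ below is an illustrative Jordan block chosen so that the spectral radius of $h\vJ$ is below $1$ and the series converges.
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

J = np.array([[0.5, 1.0],
              [0.0, 0.5]])     # a single 2x2 Jordan block with eigenvalue 0.5
h = 0.5

series = sum(-1.0 / (p + 1) * np.linalg.matrix_power(h * J, p) for p in range(200))
closed_form = (1.0 / h) * logm(np.eye(2) - h * J) @ np.linalg.inv(J)

print(np.max(np.abs(series - closed_form)))    # close to 0
\end{verbatim}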
This result leads to the game equivalent of the PF:
When expanding the PF to simultaneous Euler updates (Eqs <ref> and <ref>) with learning rate $h$ in two-player games, we obtain:
\begin{align}
\dot{\left[ \begin{array}{c}
\vphi \\
\vtheta
\end{array}\right]}
&= \frac{1}{h} \vP^{-1} \log(\vI - h\vJ) \vJ^{-1} \vP \left[ \begin{array}{c}
-f \\
-g
\end{array}\right] + \csecondorderfam,
\label{eq:gen_pf_games}
\end{align}
where $\vH(\vphi, \vtheta) = \vP^{-1} \vJ \vP$ is the Jordan normal form of $\vH$.
While the Jordan normal form of $\vH$ is not unique due to permutations of the order of the blocks in $\vJ$, $\vP^{-1} \log(\vI - h\vJ) \vJ^{-1} \vP$ will be the same regardless of the block order.
§.§ Stability analysis
We now perform stability analysis on the modified flow obtained in Eq (<ref>), in the hope that it sheds light on the behaviour of Euler updates—and thus gradient descent—in games. We start by computing the eigenvalues of the flow's Jacobian:
\begin{align}
\frac{1}{h} \vP^{-1} \log(\vI - h \vJ) \vP.
\end{align}
As $\vP$ is invertible, the Jacobian's eigenvalues are the same as the eigenvalues of
\begin{align}
\frac{1}{h}\log(\vI - h \vJ).
\end{align}
Since $\frac{1}{h}\log(\vI - h \vJ)$ is block diagonal, its eigenvalues are the eigenvalues of its blocks $\frac{1}{h}\log(\vI_{n_k} - h \vJ_k)$, where $ \vI_{n_k}$ is the identity matrix of dimension $n_k$ and $\vJ_k \in \mathbb{C}^{n_k, n_k}$.
Each block $\vJ_k$ corresponds to an eigenvalue $\lambda_k$ of $\vH$ with multiplicity $n_k$. Given
\begin{align}
\vZ_{n_k} = \begin{bmatrix}
0 & 1& \hdots & 0 & 0 \\
\vdots & \vdots& \ddots & \ddots & \vdots\\
0 & 0 & 0 & \hdots & 1 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix},
\end{align}
a matrix with all $0$s apart from $1$s on the first superdiagonal, $\vJ_{k}$ can be written as
\begin{align}
\vJ_{k} = \lambda_k \vI_{n_k} + \vZ_{n_k}.
\end{align}
From here
\begin{align}
\log (\vI - h \vJ_{k}) = \log(\vI - h\lambda_k \vI_{n_k} - h \vZ_{n_k}),
\end{align}
and since $\vI - h \vJ_{k}$ is upper triangular, its eigenvalues are its diagonal entries, which are all equal to $1 - h \lambda_k$; the eigenvalues of $\log (\vI - h \vJ_{k})$ are therefore $\log(1 - h \lambda_k)$. Thus, to perform stability analysis we have to assess whether $\Re{[\frac{1}{h}\log(1 - h \lambda_k)]} < 0$ for all $\lambda_k$. Since $h>0$, this is equivalent to $\Re{[\log(1 - h \lambda_k)]} < 0$ for all $\lambda_k$. Unlike in the single-objective case, $\lambda_k$ might be complex.
If we write $\lambda_k = x_k + i y_k$ we have:
\begin{align}
\Re[\log(1 - h (x_k + i y_k))] = \log \sqrt{(1 - h x_k)^2 + h^2 y_k^2},
\end{align}
which is negative if $(1 - h x_k)^2 + (h y_k)^2 < 1$.
We now contrast this condition to the exponential stability convergence condition of the original continuous-time game dynamics. Under the original continuous-time dynamics, for a fixed point to be attractive, the real parts of $\vH$'s eigenvalues would need to be strictly positive, since $\vH$ is the negative of the flow's Jacobian, i.e. $x_k > 0$. When accounting for DD, however, the importance of the imaginary part $y_k$ becomes apparent.
Moreover, if we rewrite the condition $(1 - h x_k)^2 + (h y_k)^2 < 1$ as a condition on the maximum learning rate to be convergent, we obtain
\begin{align}
(1 - h x_k)^2 + (h y_k)^2 < 1 \implies h < \frac{1}{x_k} \frac{2}{1 + \left(\frac{y_k}{x_k}\right)^2},
\end{align}
which is the condition Mescheder et al., 2017 (their Lemma 4) obtained from a discrete-time perspective. Thus in the game setting too, we recover existing fundamental results using a continuous-time perspective. We note that using this intuition Mescheder et al., 2017 derive the algorithm Consensus Optimisation (CO), which we empirically assessed in Section <ref>.
If $f = - \nabla_{\vphi} E$ and $g = - \nabla_{\vtheta} E$ we have a single objective, with $y_k = 0$ $\forall k$. In this case we recover the convergence conditions we are familiar with from the PF, which we derived in Section <ref>.
In the zero-sum case, where $f = - \nabla_{\vphi} E$ and $g = \nabla_{\vtheta} E$, we have seen in Section <ref> that strict local Nash equilibria are local stable fixed points under the original game dynamics. That is, at a strict local Nash equilibrium we have that $x_k > 0, \forall k$. Under the continuous-time dynamics we derived in this section, which account for DD, this is no longer true, since $x_k$ being positive does not lead to $(1 - h x_k)^2 + (h y_k)^2 < 1$. We thus obtain an analogous result in zero-sum games to that of the single-objective case in Chapter <ref>, where we saw that unlike the NGF, whether or not the PF is attracted to strict local minima depends on the learning rate $h$.
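A minimal numerical sketch of this condition, on an illustrative linear game with vector field $v(\vpsi) = -\vH\vpsi$ (the matrix $\vH$ and the learning rates are assumptions made for illustration); here one Euler step is $\vpsi \leftarrow (\vI - h\vH)\vpsi$, and the condition $(1 - h x_k)^2 + (h y_k)^2 < 1$ decides convergence even though the real parts $x_k$ are positive.
\begin{verbatim}
import numpy as np

H = np.array([[0.5, -1.0],
              [1.0,  0.5]])        # eigenvalues 0.5 +/- 1.0j, so x_k = 0.5, y_k = 1.0
eigs = np.linalg.eigvals(H)

for h in [0.2, 1.0]:
    condition = np.all((1 - h * eigs.real) ** 2 + (h * eigs.imag) ** 2 < 1)
    psi = np.array([1.0, 1.0])
    for _ in range(2000):
        psi = psi - h * (H @ psi)   # Euler / gradient descent step on the linear game
    print(h, condition, np.linalg.norm(psi))
# h = 0.2 satisfies the condition and the iterates converge to the equilibrium;
# h = 1.0 violates it and the iterates diverge, despite Re(lambda_k) > 0, i.e. the
# original continuous-time flow being attracted to the equilibrium.
\end{verbatim}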
§.§ An empirical example and different learning rates
[Same learning rate.]
[Different learning rates.]
[A visualisation of a simple example of the extension of the PF to two-player games.]A visualisation of a simple example of the extension of the PF to two-player games. Here we observe that the PF better tracks the gradient descent update compared to the flow obtained from using the $\mathcal{O}(h^3)$ error, and the NGF.
The game PF describes the Euler discretisation of the system in Eq (<ref>) exactly (see Section <ref> for a full proof). We visualise this example in Figure <ref>, where we observe that the PF tracks the behaviour of the discrete updates better than the modified third-order correction previously derived.
We note that the Jordan normal form (here implemented using `sympy' [Meurer et al., 2017]) is not always numerically stable and the observed (small) difference between the discrete updates and the PF is due to numerical issues.
If different learning rates are used for the two players, we can use the approach outlined in Section <ref> to construct vector fields that account for each player's learning rate, and then apply the same analysis as above to the corresponding Jacobian to construct a flow and stability analysis conditions. We use this approach to visualise the same example from Eq (<ref>) in Figure <ref>, and observe that here too the PF tracks the Euler updates closely.
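As a purely numerical alternative to the sympy-based Jordan decomposition mentioned above, the relevant matrix can also be obtained with a matrix logarithm; this is a sketch under our own naming, assuming `H` is the matrix $\vH$ (whose logarithm has the same eigenvalues as that of its Jordan form $\vJ$) and $h$ is small enough for the logarithm to be defined:

```python
import numpy as np
from scipy.linalg import logm

def game_pf_linearisation(H, h):
    """Return (1/h) log(I - h H), whose eigenvalues (1/h) log(1 - h lambda_k)
    determine the stability of the game PF around the fixed point."""
    return logm(np.eye(H.shape[0]) - h * H) / h
```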
CHAPTER: FINDING NEW IMPLICIT REGULARISERS BY REVISITING BACKWARD ERROR ANALYSIS
In previous chapters, we used Backward Error Analysis (BEA) to construct flows that approximate a discrete optimiser update and shed light on optimisation dynamics, both for single-objectives and two-player games trained with gradient descent.
The current usage of BEA is not without limitations, however, since not all the vector fields of continuous-time flows obtained using BEA can be written as a gradient, hindering the construction of modified losses revealing implicit regularisers. Implicit regularisers uncover quantities minimised by following discrete updates and thus can be used to improve performance and stability across problem domains, from supervised learning to two-player games such as Generative Adversarial Networks [Barrett and Dherin, 2021, Smith et al., 2021, Geiping et al., 2022, Schäfer et al., 2020]; we have seen an example in common-payoff and zero-sum games in the previous chapter. In this chapter, we provide a novel approach to use BEA, and show how our approach can be used to construct continuous-time flows with vector fields that can be written as gradients. We then use this to find previously unknown implicit regularisation effects, such as those induced by multiple stochastic gradient descent steps while accounting for the exact data batches used in the updates, and in generally differentiable two-player games.
§ REVISITING BEA
In Section <ref>, we described BEA and showed how Barrett and Dherin, 2021 use BEA to find the IGR flow
\begin{align}
\dot{\vtheta} = -\nabla_{\vtheta}E -\frac{h}{2} \nabla_{\vtheta}^2 E \nabla_{\vtheta} E = -\nabla_{\vtheta}\left(E(\vtheta) + \frac h4 \norm{\nabla_{\vtheta} E(\vtheta)}^2\right),
\end{align}
which has an error of $\mathcal{O}(h^3)$ after one gradient descent update ${\vtheta_t = \vtheta_{t-1} - h\nabla_{\vtheta} E(\vtheta)}$.
Gradient descent can thus be seen as implicitly minimising the modified loss $E(\vtheta) + \frac h4 \norm{\nabla_{\vtheta} E(\vtheta)}^2$. This showcases an implicit regularisation effect induced by the discretisation drift of gradient descent, dependent on learning rate $h$, which biases learning towards paths with low gradient norms.
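As a concrete illustration (a minimal sketch, not code from the original work), the modified loss can be formed directly with automatic differentiation; `loss_fn` is assumed to be a scalar-valued loss of a flat parameter vector:

```python
import jax
import jax.numpy as jnp

def igr_modified_loss(loss_fn, h):
    """E(theta) + (h/4) ||grad E(theta)||^2: the loss implicitly minimised by a
    gradient descent step of learning rate h."""
    grad_fn = jax.grad(loss_fn)
    def modified(theta):
        g = grad_fn(theta)
        return loss_fn(theta) + (h / 4.0) * jnp.sum(g ** 2)
    return modified

# Usage sketch: following the IGR flow direction amounts to descending the
# modified loss, e.g. grad_mod = jax.grad(igr_modified_loss(loss_fn, h)).
```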
We would like to go beyond full-batch gradient descent and model the implicit regularisation induced by the discretisation of one stochastic gradient descent (SGD) update. Given the SGD update $\vtheta_t = \vtheta_{t -1} - h \nabla_{\vtheta} E(\vtheta_{t -1}; \vX_{t})$—where we denoted $E(\vtheta_{t-1}; \vX) = \frac{1}{B}\sum_{i=1}^B E(\vtheta_{t-1}; \vx_i)$ as the average loss over batch $\vX$ and $B$ is the batch size—
we can use the IGR flow induced at this time step:
\begin{align}
\dot{\vtheta} = - \nabla_{\vtheta} E(\vtheta; \vX_{t}) - \frac h 4 \nabla_{\vtheta} \norm{\nabla_{\vtheta} E(\vtheta; \vX_{t})}^2.
\label{eq:igr_stochastic}
\end{align}
Since the vector field in Eq (<ref>) is the negative gradient of the modified loss ${E(\vtheta; \vX_{t}) + \frac h 4 \norm{\nabla_{\vtheta} E(\vtheta; \vX_{t})}^2}$, this reveals a local implicit regularisation which minimises the gradient norm $ \norm{\nabla_{\vtheta} E(\vtheta; \vX_{t})}^2$. It is not immediately clear, however, how to combine the IGR flows obtained for each SGD update in order to model the combined effects of multiple SGD updates, each using a different batch. What, if any, are the implicit regularisation effects induced by two SGD steps
\begin{align}
&\vtheta_{t} = \vtheta_{t-1} - h \nabla_{\vtheta} E(\vtheta_{t-1}; \vX_{t}) \label{eq:sgd1} \\
&\vtheta_{t+1} = \vtheta_t - h \nabla_{\vtheta} E(\vtheta_t; \vX_{t+1})?
\label{eq:sgd2}
\end{align}
Smith et al., 2021 find a modified flow in expectation over the shuffling of batches in an epoch, and use it to find implicit regularisation effects specific to SGD and study the effect of batch sizes in SGD. Since their approach works in expectation over an epoch, however, it does not account for the implicit regularisation effects of a smaller number of SGD steps, or account for the exact data batches used in the updates; we return to their results in the next section.
We take a different approach, and introduce a novel way to find implicit regularisers in SGD by revisiting the BEA proof structure and the assumptions made thus far when using BEA.
Our approach can be summarised as follows:
Given the discrete update $\vtheta_t = \vtheta_{t-1} - h \nabla_{\vtheta}E (\vtheta_{t-1})$,
BEA constructs $\dot{\vtheta} = -\nabla_{\vtheta} E + h f_1(\vtheta) + \cdots + h^p f_p(\vtheta)$, such that $\norm{\vtheta(h; \vtheta_{t-1}) - \vtheta_t} \in \mathcal{O}(h^{p+2})$ for a choice of $p \in \mathbb{N}, p \ge 1$. This translates into a constraint on the value of $\vtheta(h;\vtheta_{t-1})$.
Thus, BEA only constrains the values that the correction terms $f_i$ in the vector field of the modified flow take at $\vtheta_{t-1}$.
Given the constraints on $f_i(\vtheta_{t-1})$, we can choose $f_i: \mathbb{R}^D \rightarrow \mathbb{R}^D$ in the vector field of the modified flow $\dot{\vtheta}$ to depend on the initial condition $\vtheta_{t-1}$.
For example, in the proof by construction of the IGR flow (Section <ref>), we obtained that if $\vtheta_t = \vtheta_{t-1} - h \nabla_{\vtheta}E (\vtheta_{t-1})$ and we want to find $\dot{\vtheta} = -\nabla_{\vtheta} E(\vtheta) + h f_1(\vtheta)$ such that $\norm{\vtheta(h; \vtheta_{t-1}) - \vtheta_t}$ is of order $\mathcal{O}(h^3)$, then $f_1({\color{red}{\vtheta_{t-1}}}) = - \frac{1}{2} \nabla_{\vtheta}^2 E({\color{red}{\vtheta_{t-1}}})\nabla_{\vtheta} E({\color{red}{\vtheta_{t-1}}})$ (Eq (<ref>)). From there, following Barrett and Dherin, 2021, we concluded that $f_1(\vtheta) = - \frac{1}{2} \nabla_{\vtheta}^2 E(\vtheta)\nabla_{\vtheta} E(\vtheta)$. But notice how BEA only sets a constraint on the value of the vector field at the initial point $\vtheta_{t-1}$. If we allow the modified vector field to depend on the initial condition, an equally valid choice is $f_1(\vtheta) = - \frac{1}{2} \nabla_{\vtheta}^2 E(\vtheta)\nabla_{\vtheta} E({\color{red}{\vtheta_{t-1}}})$ or $f_1(\vtheta) = -\frac{1}{2} \nabla_{\vtheta}^2 E({\color{red}{\vtheta_{t-1}}})\nabla_{\vtheta} E(\vtheta)$. By construction, the above flows will also have an error of $\mathcal{O}(h^3)$ after one gradient descent step of learning rate $h$ with initial parameters $\vtheta_{t-1}$. The latter vector fields only describe the gradient descent update with initial parameters $\vtheta_{t-1}$ and thus they only apply to this specific gradient descent step, though as previously noted, that is also the case with the IGR flow due to the dependence on the data batch—see Eq (<ref>).
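To make this observation concrete, here is a minimal sketch (the helper names are ours) of three corrections $f_1$ that agree at $\vtheta_{t-1}$ and therefore all yield an $\mathcal{O}(h^3)$ error after one step; the last two depend on the initial parameters:

```python
import jax

def bea_first_order_corrections(loss_fn, theta_init):
    """Three valid choices of the O(h) BEA correction f_1 that take the same
    value at theta_init (the initial parameters theta_{t-1})."""
    grad_fn = jax.grad(loss_fn)
    hvp = lambda theta, v: jax.jvp(grad_fn, (theta,), (v,))[1]  # Hessian-vector product
    grad0 = grad_fn(theta_init)                                  # gradient frozen at theta_{t-1}

    f1_igr = lambda theta: -0.5 * hvp(theta, grad_fn(theta))               # the IGR choice
    f1_frozen_grad = lambda theta: -0.5 * hvp(theta, grad0)                # gradient at theta_{t-1}
    f1_frozen_hess = lambda theta: -0.5 * hvp(theta_init, grad_fn(theta))  # Hessian at theta_{t-1}
    return f1_igr, f1_frozen_grad, f1_frozen_hess
```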
This observation about BEA leads us to the following remarks:
There are multiple flows that lead to the same order in learning rate error after one discrete update. Many of these flows depend on the initial conditions of the system, i.e. the initial parameters of the discrete update. We visualise this approach in Figure <ref> and contrast it with the existing BEA interpretation.
[Standard interpretation.] [Figure panel: discrete updates $\vtheta_{t-1} \rightarrow \vtheta_{t} \rightarrow \vtheta_{t+1}$, the original flow $\dot{\vtheta}$ with $\mathcal{O}(h^2)$ error per step, and a single modified flow $\dot{\tilde{\vtheta}}$ with $\mathcal{O}(h^3)$ error per step.]
[Novel interpretation.] [Figure panel: the same discrete updates and original flow, with per-iteration modified flows $\dot{\tilde{\vtheta}}_{\color{red}{t}}$ and $\dot{\tilde{\vtheta}}_{{\color{red}{t+1}}}$, each with $\mathcal{O}(h^3)$ error per step.]
[Visualising the standard approach to backward error analysis alongside an approach which constructs a different flow per gradient descent iteration.]Visualising the standard approach to BEA alongside an approach which constructs a different flow per gradient descent iteration. In our previous use of BEA (first panel), we constructed modified flows $\dot{\tilde{\vtheta}}$ to capture the discretisation error of gradient descent; these flows did not depend on the initial iteration parameters. In this chapter, we take the second approach (second panel), allowing us to construct additional flows $\dot{\tilde{\vtheta}}_{\color{red}{t}}$, which depend on initial parameters, and showcase additional implicit regularisation effects.
The choice of the IGR flow [Barrett and Dherin, 2021], as well as of the flows we have introduced thus far in this work using BEA, namely the PF (Chapter <ref>) and the modified flows of two-player games (Chapter <ref>), makes an implicit assumption: that we are looking for modified flows that hold for every training iteration and do not depend on the initial parameters. This is however challenged already in the case of stochastic gradient descent, where each instantiation of the flows requires dependence on data batches, as shown in Eq (<ref>).
§ IMPLICIT REGULARISATION IN MULTIPLE STOCHASTIC GRADIENT DESCENT STEPS
We now use the above observations to build modified flows that capture multiple gradient descent updates with an error of $\mathcal{O}(h^3)$ after $n$ SGD steps. We are interested in modified flows
that can be used to construct modified losses by writing the vector field of the flow as the negative gradient of a function; this enables us to capture the implicit regularisation effects of taking multiple SGD steps.
We analyse $n$ SGD steps, starting at iteration $t$
\begin{align}
\vtheta_{t+\mu} = \vtheta_{t+\mu-1} - h \nabla_{\vtheta} E(\vtheta_{t+\mu-1}; \vX^{t+\mu}), \hspace{3em} \mu \in \{0, \dots, n-1\}
\label{eq:sgd_multiple_updates_bea}
\end{align}
where for simplicity we denoted $E(\vtheta_{t-1}; \vX) = \frac{1}{B}\sum_{i=1}^B E(\vtheta_{t-1}; \vx_i)$, with elements $\vx_i$ forming the batch $\vX$.
We start with the following remark:
Denote $E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \}) = \frac{1}{n} \sum_{\mu=0}^{n-1} E(\vtheta; \vX^{t+\mu})$, i.e. the average loss obtained from the $n$ data batches. Then the trajectory obtained by taking $n$ steps of stochastic gradient descent follows the trajectory of minimising the loss in continuous-time with $\mathcal{O}(h^2)$ error:
\begin{align}
\tilde{E}(\vtheta) &= E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \}).
\end{align}
Perhaps surprisingly, the above shows that effects from mini-batches appear only at higher-order terms in learning rate $h$. Thus, to capture implicit regularisation effects that account for the mini-batches used in the SGD updates, we use BEA. We find a modified flow that describes the $n$ SGD updates with an error of $\mathcal{O}(h^3)$, with a vector field that can be written as a negative gradient. We provide proofs and the flows that construct the regularisers in Appendix <ref>.
We use the same steps used for our other BEA proofs: 1) we expand the $n$ discrete updates in Eq (<ref>) to write a relation between the parameters at time $t-1$ and $t+n-1$ up to second order in learning rate; 2) expand the changes in continuous-time up to $\mathcal{O}(h^2)$; and 3) match the $\mathcal{O}(h^2)$ terms to find the terms which quantify the drift.
The significant difference with our previous approaches is that we allow the vector field to depend on the initial parameters, and choose a vector field that can be written as a gradient in order to construct a modified loss.
Denote $E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \}) = \frac{1}{n} \sum_{\mu=0}^{n-1} E(\vtheta; \vX^{t+\mu})$, i.e. the average loss obtained from the $n$ data batches. Then the trajectory obtained by taking $n$ steps of stochastic gradient descent follows the trajectory of minimising the loss in continuous-time with $\mathcal{O}(h^3)$ error
\begin{align}
\tilde{E}(\vtheta) &= E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \})
+ \frac{n h}{4} \underbrace{\norm{ \nabla_{\vtheta}E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \})}^2}_{\text{full batch norm regularisation}} \label{eq:sgd_norm_min}\\
& \quad - \frac{h}{n}\sum_{\mu =1 }^{n-1} \underbrace{\left[ \nabla_{\vtheta} E(\vtheta; \vX^{t+\mu})^T \left(\sum_{\tau = 0}^{\mu-1} \nabla_{\vtheta}E({\color{red}{\vtheta_{t -1}}}; \vX^{t+\tau})\right)\right]}_{\text{mini-batch gradient alignment}}.
\label{eq:modified_sup_learning}
\end{align}
We thus find the implicit regularisation effects induced by $n$ steps of SGD, capturing the importance of the exact batches used and their order in Eq (<ref>). We note that without making use of the observations regarding BEA in the previous section, and thus without the parameters $\vtheta_{t -1}$, a modified loss could not have been constructed outside of the full-batch case, where one can recover the IGR flow.
The following remark immediately follows by setting $n=2$ in Eq (<ref>):
When taking a second stochastic gradient descent step, there is an implicit regularisation term maximising the dot product between the gradient at the current step and the gradient at the previous step: $\nabla_{\vtheta} E^T \nabla_{\vtheta} E(\vtheta_{t-1})$. This can be achieved by aligning the direction of the gradients between the two iterations or by increasing the gradient norm, but we note that increasing the gradient norm is counter to the other implicit regularisation effect in Eq (<ref>).
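The modified loss above can be written down directly with automatic differentiation; the following is a minimal sketch under our own naming, assuming `loss_fn(theta, X)` returns the average loss on batch `X` for a flat parameter vector `theta`, and `theta_init` plays the role of $\vtheta_{t-1}$:

```python
import jax
import jax.numpy as jnp

def sgd_modified_loss(loss_fn, batches, theta_init, h):
    """Full-batch loss + (n h / 4) full-batch gradient-norm regulariser
    - (h / n) mini-batch gradient alignment terms, with the inner gradients
    frozen at theta_init (the parameters theta_{t-1})."""
    n = len(batches)
    grad_fn = jax.grad(loss_fn)                          # gradient w.r.t. theta
    full_batch_loss = lambda theta: sum(loss_fn(theta, X) for X in batches) / n
    frozen = [grad_fn(theta_init, X) for X in batches]   # gradients at the initial parameters

    def modified(theta):
        g_full = jax.grad(full_batch_loss)(theta)
        norm_reg = (n * h / 4.0) * jnp.sum(g_full ** 2)
        align = 0.0
        for mu in range(1, n):
            align += jnp.dot(grad_fn(theta, batches[mu]), sum(frozen[:mu]))
        return full_batch_loss(theta) + norm_reg - (h / n) * align
    return modified
```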
We now compare this novel modified loss with the modified loss we obtained by ignoring stochasticity and assuming all $n$ updates have been done with a full-batch; this entails using the IGR loss (proof for multiple steps in Section <ref>)
\begin{align}
\tilde{E} = E(\vtheta;\{\vX^{t}, \dots, \vX^{t+n-1} \}) + \frac{h}{4} \|\nabla_{\vtheta} E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \}) \|^2 .
\label{eq:igr_flow_mul}
\end{align}
The above modified losses show that both full-batch gradient descent and SGD have a pressure to minimise the gradient norm $\|\nabla_{\vtheta} E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \})\|$. SGD leads to an additional regularisation effect capturing the importance of the order in which mini-batches are presented in training: maximising the dot product between gradients computed at the current parameters given a batch and the gradients computed at the initial parameters for all batches presented before the given batch. While this can be achieved either by increasing the norm of the gradients or by aligning gradients with those at the initial iteration, we note that increasing the gradient norm is counter to the other regulariser induced by SGD, the gradient norm minimisation effect shown in Eq (<ref>).
The important role of learning rates in BEA appears here too: for our approximations to hold $nh$ has to be sufficiently small. If we adjust the learning rate by the number of updates, i.e. if we set the learning rate for SGD equal to $h/n$, where $h$ is the learning rate used by full-batch gradient descent, we obtain the same implicit regularisation coefficient for the gradient norm $\|\nabla_{\vtheta} E(\vtheta;\{\vX^{t}, \dots, \vX^{t+n-1} \}) \|$ minimisation as the IGR flow in Eq <ref>, and the main difference between the two modified losses is given by the mini-batch gradient alignment terms present in Eq (<ref>).
The number of updates, $n$, also plays an important role. While the mini-batch gradient regularisation term in Eq (<ref>) has a coefficient of $\frac{h}{n}$, there are $\frac{n(n-1)}{2}$ terms composing it. Thus the magnitude of the mini-batch gradient regularisation term can grow with $n$, but its effects strongly depend on the distribution of the gradients computed at different batches. For example, if the gradients $\nabla_{\vtheta} E(\vtheta_{t-1};\vX^{t +i})$ are normally distributed with the mean at the full-batch gradient, i.e. $\nabla_{\vtheta} E(\vtheta_{t-1};\vX^{t +i}) \sim \mathcal{N}(\nabla_{\vtheta} E(\vtheta_{t-1}), \sigma^2)$, as the number of updates grows the regularisation effect in Eq (<ref>) will result in a pressure to align mini-batch gradients with the full-batch gradient at the initial parameters, $\nabla_{\vtheta} E(\vtheta_{t-1})$. Since our results hold for multiple values of $n$, empirical assessments need to be made to understand the interplay between the number of updates and the strength of mini-batch gradient regularisation on training.
The approach provided here is complementary to that of Smith et al., 2021, who obtain a relationship similar to Eq (<ref>) but in expectation—they describe the expected value of the modified loss $\mathbb{E}_{\sigma} \left[E_{sgd}(\vtheta; \{\vX^{\sigma(t)}, \dots, \vX^{\sigma(t+n-1)} \}) \right]$, where the expectation is taken over all possible data batch shufflings $\sigma$, but not the elements in the batch.
As before we denote $E(\vtheta; \vX^{k})$ as the loss given by mini-batch $k$ (the equivalent to $\hat{C}_k$ in their notation, see their Eqs (1) and (2)), and write the modified loss they obtain in expectation as
\begin{align}
\mathbb{E}_{\sigma} \left[E_{sgd}(\vtheta) \right] &= E(\vtheta;\{\vX^{0}, \dots, \vX^{n-1} \}) + \frac{h}{4n} \sum_{k=0}^{n-1} \norm{\nabla_{\vtheta} E(\vtheta;\vX^k)}^2 \\
&= E(\vtheta; \{\vX^{0}, \dots, \vX^{n-1} \}) + \frac{h}{4}\norm{\nabla_{\vtheta} E(\vtheta; \{\vX^{0}, \dots, \vX^{n-1} \})}^2 \\
&\hspace{1em} + \frac{h}{4n} \sum_{k=0}^{n-1} \norm{\nabla_{\vtheta} E(\vtheta;\vX^k) - \nabla_{\vtheta} E(\vtheta; \{\vX^{0}, \dots, \vX^{n-1} \})}^2.
\end{align}
Our proposed approach does not require working in expectations and accounts for the exact batches sampled from the dataset; thus it directly describes SGD as used in practice.
If we make the same assumptions as Smith et al., 2021 and take expectation over all possible batch shufflings in an epoch in Eq (<ref>) we obtain (proof in Section <ref>):
\begin{align}
\hspace{-0.7em}\mathbb{E}_{\sigma} \left[E_{sgd}(\vtheta) \right]
&= E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \}) \\&\hspace{1em} + \frac{nh}{4} \norm{\nabla_{\vtheta} E(\vtheta;\{\vX^{t}, \dots, \vX^{t+n-1} \})}^2
\\&\hspace{1em}- \frac{h}{2n} \nabla_{\vtheta} E(\vtheta; \{\vX^{t}, \dots, \vX^{t+n-1} \})^T \nabla_{\vtheta} E({\color{red}\vtheta_{t-1}}; \{\vX^{t}, \dots, \vX^{t+n-1} \})
\\&\hspace{1em} - \frac{h}{2n}\left[\sum_{k=0}^{n-1} \nabla_{\vtheta} E(\vtheta;\vX^{t+k}) ^T \nabla_{\vtheta} E({\color{red}\vtheta_{t-1}}; \vX^{t+k})\right].
\end{align}
As expected, we obtain a different result than that of Smith et al., 2021: while the gradient norm minimisation is still present in our results, the per-batch regularisation in their formulation gets translated into a dot product term, where both the full-batch and per-batch gradients are regularised to be aligned with those at the beginning of the epoch. Both approaches share the limitation that $nh$ needs to be suitably small for the approximations to be relevant; we note that since we do not require $n$ to be the number of updates in an epoch—and we obtain interesting regularisation effects already for $n=2$, see Remark <ref>—this is less of an issue for our approach. Their approach has the advantage of finding an implicit regularisation effect that does not depend on the initial parameters. By depending on initial parameters, however, our approach does not require working in expectations and accounts for the exact batches used in the SGD updates. We hope this can be used to stabilise SGD over multiple steps, as we have done with a single GD step in Section <ref>, and that it can be used in continual and transfer learning [Iman et al., 2022, Zhuang et al., 2020, Parisi et al., 2019], as well as for understanding the effects of the order of examples on model optimisation in online learning.
§ IMPLICIT REGULARISATION IN GENERALLY DIFFERENTIABLE TWO-PLAYER GAMES
In Chapter <ref>, we presented a framework for the quantification of discretisation drift in two-player games. We found distinct modified flows that describe simultaneous Euler updates and alternating Euler updates.
In both cases, we found corrections $f_1$ and $g_1$ to the original system
\begin{align}
\dot{\vphi} &= f( \vphi, \vtheta) \\
\dot{\vtheta} &= g( \vphi, \vtheta),
\end{align}
such that the modified continuous system
\begin{align}
\dot{\vphi} &= f( \vphi, \vtheta) + h f_1( \vphi, \vtheta) \label{eq:general_bae_formulation1}\\
\dot{\vtheta} &= g( \vphi, \vtheta) + h g_1( \vphi, \vtheta), \label{eq:general_bae_formulation2}
\end{align}
follows the discrete steps of the method with a local error of order $\mathcal O(h^3)$.
More precisely, if $(\vphi_{t}, \vtheta_{t})$ denotes the discrete step of the method at time $t$ and $( \vphi(h), \vtheta(h))$ corresponds to the continuous solution of the modified system above starting at $(\vphi_{t-1}, \vtheta_{t-1})$, then $\| \vphi_{t} - \vphi( h) \| \textrm{ and } \| \vtheta_{t} - \vtheta( h) \|$
are of order $\mathcal O(h^3)$. In this section, we assume for simplicity that both players use the same learning rate $h$ and simultaneous updates, but the same arguments can be made when they use different learning rates or alternating updates.
Using this framework, we constructed modified loss functions in the case of zero-sum (Section <ref>) and common-payoff games (Section <ref>). However, using the aforementioned modified flows, we cannot always write the vector fields of the modified flows as a gradient for differentiable two-player games and thus we cannot construct modified losses. To see why, consider the case where we have two loss functions for the two players respectively $E_{\vphi}(\vphi, \vtheta): \mathbb{R}^m\times \mathbb{R}^n \rightarrow \mathbb{R}$ and $E_{\vtheta}(\vphi, \vtheta): \mathbb{R}^m\times \mathbb{R}^n \rightarrow \mathbb{R}$.
This leads to the update functions $f = -\nabla_{\vphi} E_{\vphi}$ and $g = -\nabla_{\vtheta} E_{\vtheta}$.
By using $f = -\nabla_{\vphi} E_{\vphi}$ and $g = -\nabla_{\vtheta} E_{\vtheta}$ in Theorem <ref>, we have that for simultaneous gradient descent
\begin{align}
f_1 &= - \frac{1}{2} \jacphif f- \frac{1}{2} \jacthetaf g\\
&= \underbrace{- \frac{1}{4} \nabla_{\vphi} \norm{ \nabla_{\vphi} E_{\vphi}}^2}_{\text{self term}} \underbrace{- \frac{1}{2} \jacparam{\vtheta}{\nabla_{\vphi} E_{\vphi}} \nabla_{\vtheta} E_{\vtheta}}_{\text{interaction term}}
\end{align}
and similarly
\begin{align}
g_1 &= \underbrace{- \frac{1}{4} \nabla_{\vtheta} \norm{ \nabla_{\vtheta} E_{\vtheta}}^2}_{\text{self term}} \underbrace{- \frac{1}{2}
\jacparam{\vphi}{\nabla_{\vtheta} E_{\vtheta}} \nabla_{\vphi} E_{\vphi}}_{\text{interaction term}}.
\end{align}
Thus, it is not always possible to write $f_1$ and $g_1$ as gradient functions, since the interaction terms—see Definition <ref> for the definition of self and interaction terms—cannot always be written as a gradient. Consequently, the modified flows cannot be used to construct modified losses leading to implicit regularisers in generally differentiable two-player games, as we have done for zero-sum games in Chapter <ref>.
We now use the interpretation of BEA provided in this chapter to choose two other functions $f_1$ and $g_1$, which we use to construct another set of modified flows. These flows still satisfy the value constraints required by the BEA proofs, and by construction, after one gradient descent update, the error between the modified flows and the discrete updates is $\mathcal{O}(h^3)$. Indeed, what the BEA proofs provide for simultaneous gradient descent is (see Eq (<ref>) in the Appendix)
\begin{align}
f_1(\vphi_{t-1}, \vtheta_{t-1}) &= - \frac{1}{2} \jacphif(\vphi_{t-1}, \vtheta_{t-1}) f(\vphi_{t-1}, \vtheta_{t-1}) - \frac{1}{2} \jacthetaf(\vphi_{t-1}, \vtheta_{t-1}) g(\vphi_{t-1}, \vtheta_{t-1}) \\
g_1(\vphi_{t-1}, \vtheta_{t-1}) &= - \frac{1}{2} \jacphig(\vphi_{t-1}, \vtheta_{t-1}) f(\vphi_{t-1}, \vtheta_{t-1}) - \frac{1}{2} \jacthetag(\vphi_{t-1}, \vtheta_{t-1}) g(\vphi_{t-1}, \vtheta_{t-1}).
\end{align}
From here, we can choose $f_1$ and $g_1$, now depending on the iteration number $t$:
\begin{align}
f_{1, t}(\vphi, \vtheta) &= - \frac{1}{2} \jacphif(\vphi, \vtheta) f(\vphi, \vtheta) - \frac{1}{2} \jacthetaf(\vphi, \vtheta) g({\color{red}\vphi_{t-1}, \vtheta_{t-1}}) \\
g_{1, t}(\vphi, \vtheta) &= - \frac{1}{2} \jacphig(\vphi, \vtheta) f({\color{red}\vphi_{t-1}, \vtheta_{t-1}}) - \frac{1}{2} \jacthetag(\vphi, \vtheta) g(\vphi, \vtheta).
\end{align}
Here, we treat $g(\vphi_{t-1}, \vtheta_{t-1})$ and $f(\vphi_{t-1}, \vtheta_{t-1})$ as constants. We replace $f = - \nabla_{\vphi} E_{\vphi}$ and $g = - \nabla_{\vtheta} E_{\vtheta}$ in $f_{1, t}$ and $g_{1, t}$ and write the drift terms as gradient functions:
\begin{align}
f_{1, t}(\vphi, \vtheta) &= - \frac{1}{2} \jacparam{\vphi}{\nabla_{\vphi} E_{\vphi}} \nabla_{\vphi} E_{\vphi} - \frac{1}{2} \jacparam{\vtheta}{\nabla_{\vphi} E_{\vphi}} \nabla_{\vtheta} E_{\vtheta}(\vphi_{t-1}, \vtheta_{t-1}) \\
&= - \frac{1}{4} \nabla_{\vphi} \norm{\nabla_{\vphi}E_{\vphi}}^2 - \frac{1}{2} \nabla_{\vphi}\left(\nabla_{\vtheta} E_{\vphi} ^T \nabla_{\vtheta} E_{\vtheta}(\vphi_{t-1}, \vtheta_{t-1}) \right) \\
&= - \nabla_{\vphi} \left(\frac{1}{4} \norm{\nabla_{\vphi}E_{\vphi}}^2 + \frac{1}{2} \nabla_{\vtheta} E_{\vphi}^T \nabla_{\vtheta} E_{\vtheta}(\vphi_{t-1}, \vtheta_{t-1}) \right) \\
g_{1, t}(\vphi, \vtheta) &= - \nabla_{\vtheta} \left(\frac{1}{4} \norm{\nabla_{\vtheta}E_{\vtheta}}^2 + \frac{1}{2} \nabla_{\vphi} E_{\vtheta}^T \nabla_{\vphi} E_{\vphi}(\vphi_{t-1}, \vtheta_{t-1}) \right).
\end{align}
Replacing the above in the modified flows in Eqs (<ref>) and (<ref>), we obtain
\begin{align}
\dot{\vphi} &= f( \vphi, \vtheta) + h f_{1, t}( \vphi, \vtheta) \\
&= - \nabla_{\vphi} \left(E_{\vphi} + h \left(\frac{1}{4} \norm{\nabla_{\vphi}E_{\vphi}}^2 + \frac{1}{2} \nabla_{\vtheta} E_{\vphi}^T \nabla_{\vtheta} E_{\vtheta}(\vphi_{t-1}, \vtheta_{t-1}) \right) \right) \label{eq:disc_int_app} \\
\dot{\vtheta} &= g( \vphi, \vtheta) + h g_{1, t}( \vphi, \vtheta) \\
&= - \nabla_{\vtheta} \left(E_{\vtheta} + h \left(\frac{1}{4} \norm{\nabla_{\vtheta}E_{\vtheta}}^2 + \frac{1}{2} \nabla_{\vphi} E_{\vtheta} ^T \nabla_{\vphi} E_{\vphi}(\vphi_{t-1}, \vtheta_{t-1}) \right)\right).
\end{align}
We can now write modified losses for each gradient descent iteration of a general two-player differentiable game (which depend on the iteration $t$), which describe the local trajectory of simultaneous gradient descent up to $\mathcal{O}(h^3)$:
\begin{align}
\tilde{E}_{\vphi, t} = E_{\vphi} + h \left(\underbrace{\frac{1}{4} \norm{\nabla_{\vphi}E_{\vphi}}^2}_{\text{self term}} + \underbrace{\frac{1}{2} \nabla_{\vtheta} E_{\vphi}^T \nabla_{\vtheta} E_{\vtheta}({\color{red}\vphi_{t-1}, \vtheta_{t-1}}) }_{\text{interaction term}} \right) \label{eq:int_term_d_first} \\
\tilde{E}_{\vtheta, t} = E_{\vtheta} + h \left(\underbrace{\frac{1}{4} \norm{\nabla_{\vtheta}E_{\vtheta}}^2}_{\text{self term}} + \underbrace{\frac{1}{2} \nabla_{\vphi} E_{\vtheta}^T \nabla_{\vphi} E_{\vphi}({\color{red}\vphi_{t-1}, \vtheta_{t-1}})}_{\text{interaction term}}\right).
\end{align}
The self terms result in the implicit gradient regularisation found in supervised learning [Barrett and Dherin, 2021] and zero-sum games (Chapter <ref>): each player has an incentive to minimise its own gradient norm. The interaction term for each player encourages minimising the dot product between the gradients of its loss with respect to the other player's parameters and the previous gradient update of the other player.
Consider the first player's interaction term, equal to $(-\nabla_{\vtheta} E_{\vphi}(\vphi_t, \cdot))^T (- \nabla_{\vtheta}E_{\vtheta}({\color{red}\vphi_{t-1}, \vtheta_{t-1}}))$, where $-\nabla_{\vtheta} E_{\vtheta}({\color{red}\vphi_{t-1}, \vtheta_{t-1}})$ is the previous update direction of $\vtheta$ aimed at minimising $E_{\vtheta}$.
Its implicit regularisation effect depends on the functional form of $E_{\vphi}$ and $E_{\vtheta}$: if $\nabla_{\vtheta} E_{\vtheta}$ and $\nabla_{\vtheta} E_{\vphi}$ have aligned directions by construction, then the implicit regularisation effect nudges the first player's update towards a point in space where the second player's update changes direction; the opposite is true if $\nabla_{\vtheta} E_{\vtheta}$ and $\nabla_{\vtheta} E_{\vphi}$ are misaligned by construction.
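The per-iteration modified losses above can be instantiated directly with automatic differentiation; this is a minimal sketch under our own naming, assuming `E_phi(phi, theta)` and `E_theta(phi, theta)` are scalar losses of flat parameter vectors, with the self-term coefficient $h/4$ and interaction coefficient $h/2$ matching the equations above:

```python
import jax
import jax.numpy as jnp

def per_iteration_modified_losses(E_phi, E_theta, phi_prev, theta_prev, h):
    """Modified losses for one simultaneous gradient descent iteration; the
    gradients evaluated at (phi_prev, theta_prev) are constants, so each
    modified loss depends on the previous iterate."""
    g_theta_prev = jax.grad(E_theta, argnums=1)(phi_prev, theta_prev)  # nabla_theta E_theta at t-1
    g_phi_prev = jax.grad(E_phi, argnums=0)(phi_prev, theta_prev)      # nabla_phi  E_phi   at t-1

    def E_phi_mod(phi, theta):
        self_term = 0.25 * jnp.sum(jax.grad(E_phi, argnums=0)(phi, theta) ** 2)
        interaction = 0.5 * jnp.dot(jax.grad(E_phi, argnums=1)(phi, theta), g_theta_prev)
        return E_phi(phi, theta) + h * (self_term + interaction)

    def E_theta_mod(phi, theta):
        self_term = 0.25 * jnp.sum(jax.grad(E_theta, argnums=1)(phi, theta) ** 2)
        interaction = 0.5 * jnp.dot(jax.grad(E_theta, argnums=0)(phi, theta), g_phi_prev)
        return E_theta(phi, theta) + h * (self_term + interaction)

    return E_phi_mod, E_theta_mod
```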
§.§ The effect of the interaction terms: a GAN example
We work through the regularisation effect of interaction terms for a pair of commonly used GAN losses that do not form a zero-sum game (using the generator non-saturating loss [Goodfellow et al., 2014]) and contrast it with the zero-sum case (the saturating loss); we have previously investigated these losses in Chapter <ref>. As before, we denote the first player, the discriminator, as $D$, parametrised by $\vphi$, and the generator as $G$, parametrised by $\vtheta$. We denote the data distribution as $p^*(\vx)$ and the latent distribution as $p(\vz)$.
Given the non-saturating GAN losses [Goodfellow et al., 2014]:
\begin{align}
E_{\vphi}(\vphi, \vtheta) &= \mathbb{E}_{p^*(\vx)} \log D(\vx; \vphi) + \mathbb{E}_{p(\vz)} \log (1 - D(G(\vz; \vtheta); \vphi)) \\
E_{\vtheta}(\vphi, \vtheta) &= \mathbb{E}_{p(\vz)} - \log D(G(\vz; \vtheta); \vphi),
\end{align}
we can then write the interaction term $\nabla_{\vtheta} E_{\vphi}^T \nabla_{\vtheta} E_{\vtheta}({\color{red}\vphi_{t-1}, \vtheta_{t-1}})$ in Eq (<ref>) at iteration $t$ as (derivation in Appendix <ref>)
\begin{align}
\frac{1}{B^2}\sum_{i,j=1}^B &c^{non-sat}_{i,j} \nabla_{\vtheta} D(G(\vz^i_t; \vtheta); \vphi)^T \nabla_{\vtheta} D(G(\vz^j_{t-1}; {\color{red}\vtheta_{t-1}}); {\color{red}\vphi_{t-1}}), \hspace{2em} \text{with} \\
&c^{non-sat}_{i,j} = \frac{1}{1 - D(G(\vz^i_t; \vtheta); \vphi)} \frac{1}{D(G(\vz^j_{t-1}; {\color{red}\vtheta_{t-1}}); {\color{red}\vphi_{t-1}})} \label{eq:c_i_j_non_sat},
\end{align}
where $\vz^i_t$ is the latent variable with index $i$ in the batch at time $t$ with batch size $B$.
Thus, the strength of the regularisation—$c^{non-sat}_{i,j}$—depends on how confident the discriminator is. In particular, this implicit regularisation encourages the discriminator update into a new set of parameters where the gradient $\nabla_{\vtheta} D(G(\vz^i_t; \vtheta); \vphi)$ points away from the direction of $\nabla_{\vtheta} D(G(\vz^j_{t-1}; \vtheta_{t-1}); \vphi_{t-1})$ when $c^{non-sat}_{i,j}$ is large. This occurs for $\vz^i_t$ where the discriminator is fooled by the generator—i.e. $1 - D(G(\vz^i_t; \vtheta); \vphi)$ is close to 0 and $\frac{1}{1 - D(G(\vz^i_t; \vtheta); \vphi)}$ is large—and for samples $\vz^j_{t-1}$ where the discriminator was correct at the previous iteration—$D(G(\vz^j_{t-1}; \vtheta_{t-1}); \vphi_{t-1})$ is low and thus $\frac{1}{D(G(\vz^j_{t-1}; \vtheta_{t-1}); \vphi_{t-1})}$ is large. This can be seen as beneficial regularisation for the generator, by ensuring the update direction $- {\nabla_{\vtheta} E_{\vtheta} = \mathbb{E}_{p(\vz)} \frac{1}{D(G(\vz; \vtheta); \vphi)} \nabla_{\vtheta} D(G(\vz; \vtheta); \vphi)}$ is adjusted according to the discriminator's output. We note, however, that this regularisation might not have a strong effect, as the gradients $\nabla_{\vtheta} D(G(\vz; \vtheta); \vphi)$ that have a high weight in the generator's update are those where the discriminator is correct and classifies generated data as fake—i.e. $\frac{1}{D(G(\vz; \vtheta); \vphi)}$ is large—but there the factor $\frac{1}{1 - D(G(\vz; \vtheta); \vphi)}$ in the interaction term coefficient in Eq (<ref>) will be low. This is in line with our empirical results in Section <ref>, where we saw that for the non-saturating loss discretisation drift does not have a strong effect on performance.
We can contrast this with the regularisation we obtain when using the saturating loss [Goodfellow et al., 2014], where $E_{\vtheta}(\vphi, \vtheta) = - \mathbb{E}_{p(\vz)} \log (1 - D(G(\vz; \vtheta); \vphi))$. We used the saturating loss extensively in experiments in Section <ref>, and we have shown it performs poorly in comparison to the non-saturating loss, in line with the literature [Goodfellow et al., 2014]. If we perform the same analysis as above for the saturating loss, we obtain the following implicit regulariser
\begin{align}
\frac{1}{B^2} \sum_{i,j=1}^B &c^{sat}_{i,j} \nabla_{\vtheta} D(G(\vz^i_t; \vtheta); \vphi)^T \nabla_{\vtheta} D(G(\vz^j_{t-1}; {\color{red}\vtheta_{t-1}}); {\color{red}\vphi_{t-1}}), \hspace{3em} \text{with}\\
&c^{sat}_{i,j} = \frac{1}{1 - D(G(\vz^i_t; \vtheta); \vphi)} \frac{1}{1-D(G(\vz^j_{t-1}; {\color{red}\vtheta_{t-1}}); {\color{red}\vphi_{t-1}})} \label{eq:c_i_j_sat}.
\end{align}
# SensEmo: Enabling Affective Learning through Real-time Emotion Recognition
with Smartwatches
Kushan Choksi1, Hongkai Chen2, Karan Joshi1, Sukrutha Jade1, Shahriar Nirjon3, and Shan Lin1
1Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY, USA
2Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
3Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA
###### Abstract
Recent research has demonstrated the capability of physiological signals to
infer both user emotional and attention responses. This presents an
opportunity for leveraging widely available physiological sensors in
smartwatches, to detect real-time emotional cues in users, such as stress and
excitement. In this paper, we introduce SensEmo, a smartwatch-based system
designed for affective learning. SensEmo utilizes multiple physiological
sensor data, including heart rate and galvanic skin response, to recognize a
student’s motivation and concentration levels during class. This recognition
is facilitated by a personalized emotion recognition model that predicts
emotional states based on degrees of valence and arousal. With real-time
emotion and attention feedback from students, we design a Markov decision
process-based algorithm to enhance student learning effectiveness and
experience by offering suggestions to the teacher regarding teaching content
and pacing. We evaluate SensEmo with 22 participants in real-world classroom
environments. Evaluation results show that SensEmo recognizes student emotion
with an average of 88.9% accuracy. More importantly, SensEmo assists students
to achieve better online learning outcomes, e.g., an average of 40.0% higher
grades in quizzes, over the traditional learning without student emotional
feedback.
## I Introduction
Emotion impacts learning, with curiosity and motivation being crucial for
effective learning and academic success [1]. However, students often struggle
to maintain motivation and concentration during class, especially when the
difficulty level and teaching pace do not match their needs. Experienced
teachers use strategies like interactive discussions to boost engagement, but
there is a lack of effective methods to monitor students’ emotions. This
problem is more significant in online classrooms, where interactions are
limited, making it difficult to get emotional feedback from students. Thus, an
affective learning system that accurately identifies a learner’s emotional
state is needed.
Affective learning involves emotional engagement in acquiring knowledge,
skills, and attitudes, recognizing emotions’ role in cognitive processes,
memory, and decision-making. Research has used methods like eye tracking [2],
facial recognition [3], speech recognition [4], text recognition [5], and
gesture movement [6] to detect emotions. However, these methods are not
suitable for classroom settings. For instance, camera-based methods raise
privacy concerns and struggle to capture all students at once. Additionally,
in online learning, factors like lighting, network conditions, and privacy
further limit these methods.
To address these challenges, we use off-the-shelf smartwatches with
physiological sensing capabilities for emotion recognition. Studies have
shown strong links between physiological signals and emotional states [7]. We
develop a personalized emotion model by analyzing physiological signals from
different users, identifying changes related to motivation and concentration.
Leveraging this model, we develop SensEmo, which helps teachers monitor
students’ emotions and adjust teaching methods. For example, SensEmo suggests
providing concrete examples to clarify key concepts when it detects confusion
among multiple students. Similarly, it advises increasing the pace when most
students appear bored. It is important to note that various factors can
influence a student’s motivation level in real-world scenarios, but this paper
focuses on classroom learning situations.
To assess our emotion recognition solution, we implemented SensEmo on
commercial smartwatches and tested it with 22 volunteers using the International
Affective Picture System (IAPS) emotion image dataset [8]. Physiological
signal features collected by smartwatches were mapped to the valence-arousal
space and classified into four states: curious, bored, confused, and
satisfied. Additionally, we conducted simulated and real-world experiments to
evaluate the system’s impact on affective learning. Results indicate that
SensEmo helps maintain students’ emotional states and enhances learning
outcomes.
In summary, our main contributions are the following.
* •
We developed SensEmo, the first affective learning system that utilizes real-
time emotion sensing and recognition with a smartwatch to provide students’
feedback.
* •
We proposed a personalized emotion model that classifies physiological signals
into specific levels of motivation and concentration tailored to the learning
environment.
* •
We introduced an affective learning model using a reinforcement learning-based
controller, which adapts teaching content and pace according to real-time
emotion and individual learning preferences of students. This model is
applicable to both online and classroom learning systems.
* •
In real learning scenarios with 22 student volunteers, SensEmo achieved 88.9%
accuracy in emotion recognition. Our preliminary experiments in online and
classroom settings demonstrated SensEmo’s promise to boost learning outcomes
and enhance student emotions.
## II Related Work
Emotion recognition has been explored in many domains. A KNN algorithm
classifies emotions using physiological and subjective components [9].
StressSense uses smartphone-detected voice for stress recognition [10]. Text
analysis studies emotions via semantic labels, attributes [11], and typing
rhythms [12]. Wearable devices detect emotions using respiratory frequency,
heart rate variability, skin conductance, and accelerometer data [13].
However, limited research focuses on emotion recognition in learning
environments. Studies explore wearable biosensors to enhance learning by
monitoring physiological signals [14], assessing cognitive load during
problem-solving [15], and predicting depression in students [16]. However, few
have examined emotion recognition’s role in improving learning performance.
Adaptive systems have been proposed to adjust game difficulty based on users’
emotions to maintain engagement [17], and e-learning systems have used neuro-
fuzzy networks to estimate user behavior and adapt content presentation [18].
However, they usually lack real-world implementation and validation.
In contrast, SensEmo utilizes physiological signals from smartwatches to guide
teaching content and pace in real-world settings.
## III System Design
### III-A Application Scenarios
SensEmo can be implemented in both in-person and online remote classroom
settings, where students wear a smartwatch, and a mobile app collects real-
time physiological data. This data is then sent to a central server for
emotion recognition, adapting the learning process for students. In online
remote learning, teaching content and pace can be automatically adjusted for
each individual student. In an in-person classroom setting, the instructor
focuses on maintaining positive emotional responses among students. This paper
explores SensEmo’s use in affective learning, with potential applications in
driving monitoring [19], health monitoring [20], and interviewing/consultation
assistance [21].
### III-B System Overview
Figure 1: SensEmo uses physiological signals to personalize the emotion model
and adapt the learning content and pace to the student’s emotional state.
SensEmo collects real-time emotional feedback from students’ physiological
signals during engagement with teaching content. This feedback is used to
adapt the course automatically. The components of SensEmo are shown in Figure
1.
Sensor system. SensEmo’s sensor system uses a smartwatch to capture real-time
physiological signals, including heart rate, skin resistance, and skin
temperature.
Physiological data collection. A smartphone application communicates with
smartwatches to retrieve sensor data, which is sent to the cloud for
processing and feature extraction.
Cloud computing. A cloud platform is used due to storage and computational
requirements, facilitating feature mapping, emotion recognition, and
controller computations.
Affective learning controller. The learning controller uses real-time feedback
on emotions and concentration to adjust teaching content and pace. It employs
a Markov Decision Process (MDP) trained via reinforcement learning, creating a
loop that informs the instructor of students’ emotions and preferences. The
design, shown in Figure 2, includes emotion recognition and reinforcement
learning-based control.
Figure 2: Overview of the affective learning feedback controller, including
emotion recognition and a reinforcement learning-based controller.
### III-C Emotion Recognition
During the students’ engagement in learning, SensEmo uses smartwatches to
capture physiological data. To interpret this data, it is crucial to correlate
physiological signals with emotional states. In this section, we describe our
approach to emotion recognition.
Physiological feature selection. Emotional responses trigger autonomic nervous
system reactions, captured through physiological signals. SensEmo uses the
following features to deduce emotions. (1) Electrodermal activity: Continuous
changes in skin’s electrical characteristics, influenced by moisture. Skin
conductance response, increasing with stress and perspiration, and skin
conductance level can be obtained from electrodermal activity. (2) Blood
volume pulse: Captures changes in blood volume, indicating blood flow. The
inter-beat interval, which represents the time duration between two
heartbeats, is derived from the blood volume pulse signal. This interval is
utilized to determine features such as heart rate and heart rate variability.
(3) Skin temperature: Linked to the autonomic nervous system. Excitement can
raise skin temperature. We measure both skin temperature response and level.
Personalized emotion model calibration. Emotion recognition using
physiological signals varies among individuals [22]. For instance, one
person’s heart rate may rise to 100 bps when excited, another’s to 120 bps. To
address the user-specific differences, we developed a personalized calibration
technique to normalize emotion-reflecting physiological changes. Our local
min-max normalization maps physiological signals to the range $[0, 1]$,
calculated as $f^{\prime}=(f-{f_{min}})/({f_{max}}-{f_{min}})$, where $f$
is the feature and ${f_{max}}$ and ${f_{min}}$ are its local maximum and minimum. This
normalization has been used to analyze different physiological features
consistently, regardless of individual differences in emotional responses [23].
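A minimal sketch of this calibration step (our own illustration; the sliding
window of recent samples is an assumption here, matching the 50-sample window
used in the implementation section) might look as follows:

```python
import numpy as np

def local_minmax_normalize(x, window=50):
    """Per-user local min-max normalization of one physiological feature
    stream x (1-D array): each sample is rescaled by the minimum and maximum
    of the most recent `window` samples."""
    out = np.zeros(len(x), dtype=float)
    for t in range(len(x)):
        w = x[max(0, t - window + 1): t + 1]
        lo, hi = float(np.min(w)), float(np.max(w))
        out[t] = 0.0 if hi == lo else (x[t] - lo) / (hi - lo)
    return out
```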
Valence/arousal scales from physiological features. SensEmo predicts students’
valence and arousal scales from calibrated features. We use the IAPS image
dataset, which elicits emotional responses. Images are categorized as
pleasant, neutral, or unpleasant, and rated on valence (pleasant to
unpleasant) and arousal (calm to excited) scales. These scales are target
values for mapping physiological features into the valence-arousal space,
widely used in psychological research [24]. During our study, users’
physiological signals were collected while viewing these images. This
collected dataset is used to train a model to translate physiological signals
into valence and arousal scales.
Learning emotion recognition from valence-arousal space. The valence-arousal
space categorizes emotions in a two-dimensional space [25]. Existing theories
label emotions within this space [26]. SensEmo maps valence and arousal scales
to learning-related emotions like boredom, confusion, curiosity, and
satisfaction [27]. Figure 3 shows the valence-arousal space highlighting
learning-relevant emotions. This approach, allowing continuous emotion
modeling, has been validated by clinical data [28].
Figure 3: The 2-dimensional valence-arousal space and mapping of emotions of
bored, satisfied, curious, and confused.
### III-D Reinforcement Learning-based Control
After processing student emotion data, the system identifies the current
emotional state, $S_{t}$, and preferences for content difficulty and pace.
These inputs are fed to a reinforcement learning-based controller with two
main components: a decision-making policy lookup and an MDP controller. The
lookup retrieves information about optimal and sub-optimal decisions made by
the MDP controller in the current and previous steps. Sub-optimal decisions
are fallback options if the optimal policy isn’t feasible. For instance,
lecturers may not be able to change content or adjust pace as suggested by the MDP
controller. The MDP controller optimizes decision sequences based on the
current emotional state $S_{t}$ and student preferences, aiming to keep the
user in a state of curiosity, characterized by relaxed alertness [29]. The MDP
structure is shown in Figure 4, comprising states (bored, satisfied, curious,
confused) and actions (increase/decrease pace, simplify/no change in content).
Transition probabilities determine the likelihood of moving from one state to
another, influenced by the current state and the student’s preferences.
However, determining the optimal decision that maintains the user in the
desired emotional state of curiosity relies on value iteration of the MDP
model.
Figure 4: The discrete MDP representation of SensEmo.
SensEmo faces uncertainties in both the students’ emotional states and
learning performance, which are influenced by the instructor’s decisions. The
system uses an MDP represented by a 4-tuple: states $S$, actions $A$,
transition probabilities $P$, and rewards $R$. The goal is to find an optimal
policy $\pi^{*}$ that maximizes cumulative rewards over time, guiding
decisions to induce curiosity. The MDP is defined as follows.
* •
States: $S=\\{s_{0},s_{1},s_{2},s_{3}\\}$. Each represents emotion state
bored, satisfied, curious, and confused, respectively. Additionally,
$s^{\prime}$ denotes the state that follows after $s$.
* •
Actions: $A=\\{a_{0},a_{1},a_{2},a_{3}\\}$. Each represents action increase
pace, decrease pace, simplify content, and no change in content. Note that $a$
is the action taken during each iteration, as indicated in equations (1) and
(2).
* •
Transition Probabilities: Determined by a Markov chain asymptotic analysis
[30] with our experimental data.
* •
Reward: $R_{a}(s,s^{\prime})$. It represents the incentives for state
transition. The initial rewards were defined based on the observed differences
in asymptotic state transition probabilities. These values were obtained
empirically, considering the variations in state transition probabilities.
The optimization problem for the MDP aims at maximizing the accumulated
rewards based on current emotional states of the students, in order to
determine the optimal policy $\pi^{*}$. The optimal policy $\pi^{*}(s)$ is
determined asymptotically using the value iteration optimization function,
which is defined by the following equations.
$\pi(s):=\operatorname*{arg\,max}_{a}\{\sum_{s^{\prime}}P_{a}(s^{\prime}|s)(R_{a}(s,s^{\prime})+\gamma V(s^{\prime}))\}$ (1)
$V_{i+1}(s):=\max_{a}\{\sum_{s^{\prime}}P_{a}(s^{\prime}|s)(R_{a}(s,s^{\prime})+\gamma V_{i}(s^{\prime}))\}$ (2)
The policy $\pi(s)$ maps states $s$ to actions $a$ and the value function
$V_{i+1}(s)$ at iteration $i+1$ represents the maximum expected return from
state $s$, subject to the optimal action and the initialization of the value
function. Initial values for states are assigned randomly. The discount factor
$\gamma\in[0,1]$ weighs the importance of future rewards, empirically set to
guide actions towards curious or satisfied states. In our system, the discount
factors for $s_{0},\ldots,s_{3}$ are set to $0.1$, $0.45$, $0.35$, and $0.1$,
respectively. Furthermore, the decision-making process takes into account the
student’s preferences, such as desirable learning pace or preference between
illustrations and descriptions. This preference data is collected through a
pre-lecture questionnaire that can be conveniently stored online.
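A minimal value-iteration sketch for this four-state, four-action MDP (our own
illustration; the transition and reward tensors are assumed to come from the
asymptotic analysis described above, and a single discount factor stands in
for the per-state factors used by SensEmo):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=1000):
    """P[a, s, s1] are transition probabilities, R[a, s, s1] rewards.
    Returns the greedy policy (Eq (1)) and value function (Eq (2))."""
    n_actions, n_states, _ = P.shape
    V = np.random.rand(n_states)                 # random initial values, as in the paper
    for _ in range(iters):
        # Q[a, s] = sum_{s1} P[a, s, s1] * (R[a, s, s1] + gamma * V[s1])
        Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        V = Q.max(axis=0)
    policy = Q.argmax(axis=0)
    return policy, V
```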
## IV System Implementation
In this section, we describe the implementation and realization of both the
hardware and algorithms of SensEmo.
### IV-A Sensor System
Data Acquisition. We implemented SensEmo on Microsoft Band 2 smartwatches,
which can detect electrodermal activity, blood volume pulse, and skin
temperature. Each sensor has a default sampling rate of $1$ Hz. Proper sensor
placement is crucial to avoid motion artifacts that can affect measurement
accuracy. An Android app was installed on users’ smartphones to receive sensor
data from the smartwatches and facilitate data collection.
Feature extraction. SensEmo extracts six features from the sensor data: skin
conductance response (SCR), skin conductance level (SCL), heart rate (HR),
heart rate variability (HRV), skin temperature response (STR), and skin
temperature level (STL). It computes moving averages of HR, STR, and SCL, and
running deviations of HRV, SCR, and STR using a 50-sample moving window. HRV
is extracted from the RR-interval data stream using the Root-mean-square of
successive differences (RMSSD) method.
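For illustration, the two feature-extraction operations described above could be sketched as follows (our own minimal implementation, assuming 1 Hz sample streams):

```python
import numpy as np

def rmssd(rr_intervals):
    """Heart rate variability as the root-mean-square of successive
    differences of the RR-interval stream."""
    diffs = np.diff(np.asarray(rr_intervals, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def moving_average(x, window=50):
    """50-sample moving average, as used for the HR, STR, and SCL features."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode='valid')
```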
### IV-B Emotion Recognition
We describe two major components of emotion recognition: personalized model
calibration and emotion recognition.
Personalized model calibration. First, SensEmo calculates the maximum and
minimum values of the six physiological sensor features (i.e., SCR, SCL, HR,
HRV, STR, and STL) within a 50-sample window for each user. Then, it computes
the normalized features for all users.
Emotion recognition. We use the IAPS image dataset to map normalized features
to valence and arousal scales. During data collection, users’ physiological
signals were recorded while viewing images, with the valence and arousal
scales of those images considered as the ground truth. A Fine Gaussian Support
Vector Machine (SVM) was chosen for predicting valence and arousal scales due
to its memory efficiency. The dataset was split into a 70:15:15 ratio for
training, validation, and testing. Finally, a fuzzy emotion modeling approach
[28] mapped the valence and arousal scales onto the valence-arousal space,
identifying specific emotions with nuanced representation by considering fuzzy
boundaries between emotional states.
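As a rough sketch of this stage (not the authors' code; the RBF-kernel regressor and hyperparameters below are our stand-in for the "Fine Gaussian SVM", and the split mirrors the 70:15:15 ratio):

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def train_valence_arousal_models(features, valence, arousal):
    """features: normalized physiological features (n_samples x 6);
    valence/arousal: IAPS ratings used as ground truth."""
    models = {}
    for name, target in (('valence', valence), ('arousal', arousal)):
        X_tr, X_tmp, y_tr, y_tmp = train_test_split(features, target, test_size=0.3)
        X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5)
        model = SVR(kernel='rbf', gamma=4.0, C=1.0).fit(X_tr, y_tr)
        print(name, 'validation R^2:', model.score(X_val, y_val),
              'test R^2:', model.score(X_te, y_te))
        models[name] = model
    return models
```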
### IV-C MDP Control with Collective Emotion
The input to the MDP is the collective emotion state from individual student
emotions, which is defined as the weighted average of the individual emotions.
It is computed as
$s^{*}\approx(\sum_{i=1}^{N}W_{i}s_{i})/(\sum_{i=1}^{N}n_{i})$, where $W_{i}$
is the weight assigned to emotional state $s_{i}$, $n_{i}$ is the number of
students in emotional state $s_{i}$, and $N$ is the total number of emotion states.
Figure 5: Convergent asymptotic behavior of the MDP.
To ensure continuous operation, we analyzed the MDP structure’s stability,
irreducibility, aperiodicity, ergodicity, and control process consistency,
involving asymptotic behavior and eigenvalue analysis. Figure 5 shows a
convergence sample of the Markov chain’s asymptotic behavior. By analyzing
asymptotics with our data, we determined state transition values and computed
the stationary state distribution from a uniform distribution.
Table I shows an optimal and a sub-optimal policy for SensEmo, helping it
adjust decisions based on value iteration results. Note that Table I
represents one optimal policy, which can vary with data, iterations, reward
schemes, and student preferences. SensEmo stores sub-optimal solutions with
lower rewards than the optimal policy for alternative suggestions. These are
used when optimal suggestions cannot be implemented. For instance, if
confusion is the collective emotion, SensEmo may suggest content
simplification. If infeasible, it proposes sub-optimal adjustments like
reducing the teaching pace. We observe that the sub-optimal policy can change
depending on the prevailing conditions, while the optimal policy remains
stable.
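To make the control step concrete, a minimal value-iteration sketch over the four states of Table I is given below. The transition probabilities, rewards, and discount factor are placeholders; the actual values were estimated from our data.

```python
import numpy as np

states = ["bored", "satisfied", "confused", "curious"]
actions = ["enrich content", "simplify content", "decrease pace", "no change"]

# P[a][s, s'] : probability of moving from state s to s' under action a.
# Illustrative placeholders, not the transition values estimated from our data.
rng = np.random.default_rng(0)
P = {a: rng.dirichlet(np.ones(len(states)), size=len(states)) for a in actions}

# Reward per state (curious/satisfied preferred over confused/bored), assumed.
R = np.array([-1.0, 1.0, -0.5, 2.0])
gamma = 0.9  # discount factor (assumed)

V = np.zeros(len(states))
for _ in range(200):  # value iteration until (approximately) converged
    Q = np.array([[R[s] + gamma * P[a][s] @ V for a in actions]
                  for s in range(len(states))])
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# Optimal policy: best action per state; sub-optimal fallback: second best.
order = np.argsort(-Q, axis=1)
for s, name in enumerate(states):
    print(name, "->", actions[order[s, 0]], "| fallback:", actions[order[s, 1]])
```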
### IV-D Discussion of System Generalization
In SensEmo, personalized calibration enables our approach to be applied to
diverse groups of individuals with varying baselines, ensuring its
adaptability. In addition, the MDP-based decision-making framework can be
easily extended to accommodate different numbers of states and objectives,
allowing for flexible customization of the state transition probabilities in a
generalized system function. For example, SensEmo can be utilized for
monitoring mental states in medical systems [31]. The methodology employed for
emotion recognition in our approach is also capable of incorporating a broader
range of emotion states within the valence-arousal space. While we utilized a
simple four-quadrant classification of emotional space in our case, the number
of emotions can be expanded, allowing for further classification of emotional
subspaces.
TABLE I: Optimal and sub-optimal policies based on asymptotic value iterations for MDP

| Current State $s$ | Optimal Policy $\pi(s)$ | Sub-optimal Policy $\pi^{*}(s)$ |
|---|---|---|
| bored | Enriching content | Simplifying content |
| satisfied | Making no change | Decreasing pace |
| confused | Simplifying content | Decreasing pace |
| curious | Decreasing pace | Enriching content |
## V System Evaluation
In this section, we evaluate the performance of SensEmo in emotion recognition
and its impact on improving learning outcomes. To conduct the evaluation, we
recruited 22 graduate students between the ages of 22 and 27. Additionally,
all experiments described in this paper received approval/waiver from the IRB.
### V-A Emotion Recognition Evaluation
Experimental Setup. To collect data for different emotion states, it is
crucial to use stimuli that are consistent and reliable throughout the
experiment. In order to establish a neutral emotional state for volunteers, we
utilize the IAPS image dataset. We present the user with a series of five
images that have medium valence and low arousal, lasting for 15 minutes.
Following this, the desired images are displayed for data collection purposes.
Between subsequent data collection intervals, we again show the user images
with medium valence and low arousal. It is important to note that images with
low valence and high arousal, which may contain negative content such as
mutilation, can have a long-lasting emotional impact and consequently
influence subsequent data collection. For this reason, data collection
involving such images is performed towards the end of the experiments.
Figure 6: Emotion recognition confusion matrices of SensEmo with (left) and
without (right) personalized calibration.
System Accuracy. To create the dataset, we carefully select IAPS images that
span a wide range of valence and arousal levels, encompassing both very low
and very high values. From the dataset, we gather data for a total of 200
images. Among these, 100 images are shared among the volunteers, while the
remaining 100 images are distinct, covering various ranges of valence and
arousal. This compilation forms our dataset, which includes recorded
physiological sensor data along with rated arousal and valence scales.
Following personalized calibration, we train a Fine Gaussian SVM with
supervised learning. To evaluate the classification accuracy, we conduct a
10-fold cross-validation. The resulting confusion matrix is shown in Figure
6 (left). The experimental findings indicate that the Fine Gaussian SVM
achieves an overall accuracy of 88.9%. In comparison, the k-nearest neighbors
algorithm (KNN) yields an accuracy of 75.2%.
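A sketch of this comparison, assuming the calibrated features and emotion labels have been assembled into arrays X and y; the data below are synthetic, so the accuracies it prints are not the 88.9% and 75.2% figures reported above.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(200, 6)             # placeholder for calibrated features
y = np.random.randint(0, 4, size=200)  # placeholder emotion labels

svm = SVC(kernel="rbf", gamma="scale")
knn = KNeighborsClassifier(n_neighbors=5)

svm_acc = cross_val_score(svm, X, y, cv=10).mean()
knn_acc = cross_val_score(knn, X, y, cv=10).mean()
print(f"SVM 10-fold accuracy: {svm_acc:.3f}, KNN: {knn_acc:.3f}")
```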
System Reliability. In this evaluation, we examine the reliability of SensEmo
when the users are presented with instantaneous stimuli, i.e., video and music
[32]. Specifically, we gather physiological data while users are exposed to
video stimuli in the form of music videos that incorporate elements of strong
emotional stimuli. It is important to note that these videos are compilations
of multiple graphics sourced from the IAPS dataset, which elicit various
emotions at different times. The same set of videos is utilized for all users
in order to evaluate if similar patterns of physiological signals emerge
across different individuals in response to the same stimuli. Our findings
reveal that the response patterns of GSR and HRV exhibit similarities among
different users when experiencing the same emotion. These results demonstrate
that not only can we rely on patterns of physiological signals to infer
emotions, but SensEmo also reliably captures and records such data.
Furthermore, through personalized calibration, we mitigate differences in the
physiological signal responses of different users and enhance the accuracy of
emotion recognition.
Ablation study on personalized calibration. Figure 6 shows the confusion
matrices obtained with and without personalized calibration, respectively.
Across all emotions, there is a significant and noticeable increase in the
true positive rate. Additionally, the overall recognition accuracy improves
from 27.4% to 88.9% with the implementation of personalized calibration.
Throughout our experiments, we observed instances of incorrect and unstable
classification of user emotional states when personalized calibration was not
applied, even when the user’s emotions remained consistent. These findings
suggest that personalized calibration improves the robustness of SensEmo. It is
worth noting that similar outcomes were observed for all the selected
features. These results underscore the importance of personalized calibration
in standardizing feature extraction and enhancing emotional recognition.
Figure 7: Comparison of various aspects in online and in-person classroom
settings. (Left) Students using SensEmo exhibit a longer duration of curiosity
emotion in online remote learning setting. (Middle) Emotional responses from
all users in two sessions for the classroom evaluation. (Right) Quiz scores of
two sessions in in-person classroom setting.
### V-B Reinforcement Learning-based Controller Evaluation
The role of the reinforcement learning-based controller is to adjust the
system according to the students’ current emotional state. It is essential for
the controller to respond promptly and efficiently, ensuring it can adapt to
the right moment. Consequently, the response time of the feedback control
system, and thus of the entire system, should be minimized. In addition
to response time, another crucial factor is the acceptance of the feedback
system by the students. It is important that the students perceive the
feedback system positively and find it valuable in their learning experience.
Response Time. We evaluate the response time for the adaptation decision
inference using the emotion recognition results. In this analysis, we also
take into consideration the time required by the emotion recognition model. By
examining the response time, we gain insights into how long it takes for the
system to provide a useful suggestion or adaptation once the user’s data
becomes available. To measure the response time, we utilize software timers.
These experiments involve using real lecture slides obtained from the MIT
OpenCourseWare [33], where no adaptation is provided to users but only
recorded for the purpose of analysis. After analyzing the results, we find
that the average response time of the feedback system is approximately 3.1
seconds, which is significantly shorter than the duration of any individual
lecture slide.
System Acceptance. The purpose of the feedback is to enhance the student’s
learning experience with minimal effort on their part. To evaluate the
effectiveness of the feedback system, we introduce another parameter: the
number of manual interventions made by the student to adjust the pace or
content. This parameter serves as a measure of feedback acceptance, where a
lower number of interventions indicates better performance of the feedback
system. We track these interventions using a software counter. Our experiments
show that, on average, students only make approximately 2.2 interventions per
minute. This suggests that the feedback system effectively assists students in
maintaining a suitable teaching pace and content, minimizing the need for
frequent manual interventions.
### V-C Resource Utilization Study
We evaluate the energy consumption and memory usage of SensEmo. In our
experiments, the estimated battery life for a Microsoft Band 2 with the
SensEmo app connected is 5 hours, which exceeds the duration of a typical
class session. Regarding memory usage, the Android application requires only
3.28 MB of internal memory space. Furthermore, the average runtime memory
space utilized by the application is a mere 215 KB. In addition, Table II
shows a detailed breakdown of power usage while the app runs on Android
devices.
TABLE II: Power usage breakdown of SensEmo application running on Android devices

| State | Screen | Bluetooth | Band | Power |
|---|---|---|---|---|
| Standby | Off | Off | Disconnected | $\leq$ 22 mW |
| App | Off | On | Disconnected | 273 mW |
| App | Off | On | Connected | 340 mW |
### V-D Real-world Learning Scenarios
In this section, we conducted real-world experiments to evaluate SensEmo's
impact on learning outcomes in both online remote learning and in-person
classroom settings. For the online remote learning, 22 graduate students were
randomly divided into two groups of 11. One group used SensEmo while the other
did not. All students wore identical smartwatches and viewed lecture slides
from four different lectures (30-50 minutes each) on topics ranging from
biology to geography, without any prior knowledge. Their physiological data
was recorded, and comprehension was tested with a quiz of 10 concept questions
at the end. In the in-person classroom setting, 10 graduate students
participated in two sessions, wearing identical smartwatches. In the first
session, the instructor did not receive real-time feedback. In the second
session, the same instructor received real-time feedback from SensEmo and
adjusted the lecture content and pace based on students’ emotions. Both
sessions concluded with a quiz of 5 conceptual questions. Students also
completed a survey about their emotional states during the sessions.
Learning outcome. For the online setting, students using SensEmo scored 40.0%
higher on the quiz compared to those who did not. Emotion recognition analysis
showed longer durations of curiosity among SensEmo users, suggesting
significant potential learning impact, shown in Figure 7 (left). This suggests
that even a modest increase in the time spent experiencing the desired emotion
can significantly influence learning outcomes. Also, SensEmo’s adaptability in
teaching pace and content further contributed to improved learning
performance. In the classroom setting, students exhibited more curiosity and
less boredom in the session with real-time feedback, shown in Figure 7
(middle). Although confusion increased, it might indicate active learning.
Quiz scores, shown in Figure 7 (right), were higher in the second session,
hinting at SensEmo’s positive impact despite non-randomization and material
control issues. Students also reported that wearing the smartwatch was neither
distracting nor inconvenient.
## VI Conclusion
In this paper, we present SensEmo, the first affective learning system that
uses real-time data from physiological sensors available in commercial
smartwatches. It achieves an average of 88.9% accuracy in inferring users’
emotions based on valence and arousal scales, within learning environments. By
utilizing sensed emotional states, SensEmo enables the adaptation of teaching
materials and pace. The effectiveness of this adaptability has been evaluated
in both online remote learning for individual users and in-person classroom
settings with multiple users. Our findings suggest that integrating wearable
sensing into affective learning systems has the potential to enhance learning
outcomes when compared to traditional approaches.
## Acknowledgements
We thank the anonymous reviewers for their valuable feedback and suggestions
for improving the quality of this paper. Research supported in part by NSF
CNS-1952096, CNS-1553273 (CAREER), Hong Kong RGC GRF 14201924, and CUHK Direct
Grant for Research 4055216.
## References
* [1] C.-H. Su and C.-H. Cheng, “A mobile gamification learning system for improving the learning motivation and achievements,” _Journal of Computer Assisted Learning_ , 2015.
* [2] V. M. G. Barrios, C. Gütl, A. M. Preis, K. Andrews, M. Pivec, F. Mödritscher, and C. Trummer, “AdELE: A framework for adaptive e-learning through eye tracking,” _Proceedings of IKNOW_ , 2004.
* [3] A. Dhall, A. Asthana, R. Goecke, and T. Gedeon, “Emotion recognition using PHOG and LPQ features,” in _2011 IEEE International Conference on Automatic Face & Gesture Recognition_. IEEE, 2011, pp. 878–883.
* [4] A. S. Utane and S. Nalbalwar, “Emotion recognition through speech,” _International Journal of Applied Information Systems (IJAIS)_ , 2013.
* [5] K. Choksi, S. Nagaraj, R. Thielke, and S. Lin, “mDB: Monitoring dysfunctional behaviors for patients with bipolar disorder,” in _42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)_. IEEE, 2020.
* [6] S. Singh, V. Sharma, K. Jain, and R. Bhall, “EDBL-algorithm for detection and analysis of emotion using body language,” in _1st International Conference on Next Generation Computing Technologies (NGCT)_. IEEE, 2015, pp. 820–823.
* [7] R. W. Levenson, “The autonomic nervous system and emotion,” _Emotion Review_.
* [8] P. J. Lang, “International affective picture system (IAPS): Technical manual and affective ratings,” _The Center for Research in Psychophysiology, University of Florida_ , 1995.
* [9] F. Nasoz, K. Alvarez, C. L. Lisetti, and N. Finkelstein, “Emotion recognition from physiological signals using wireless sensors for presence technologies,” _Cognition, Technology & Work_, 2004.
* [10] H. Lu, D. Frauendorfer, M. Rabbi, M. S. Mast, G. T. Chittaranjan, A. T. Campbell, D. Gatica-Perez, and T. Choudhury, “StressSense: Detecting stress in unconstrained acoustic environments using smartphones,” in _Proceedings of the 2012 ACM Conference on Ubiquitous Computing_. ACM, 2012.
* [11] C.-H. Wu, Z.-J. Chuang, and Y.-C. Lin, “Emotion recognition from text using semantic labels and separable mixture models,” _ACM transactions on Asian language information processing (TALIP)_ , 2006.
* [12] C. Epp, M. Lippold, and R. L. Mandryk, “Identifying emotional states using keystroke dynamics,” in _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. ACM, 2011.
* [13] A. Hernando, J. Lazaro, E. Gil, A. Arza, J. M. Garzón, R. Lopez-Anton, C. de la Camara, P. Laguna, J. Aguiló, and R. Bailón, “Inclusion of respiratory frequency information in heart rate variability analysis for stress assessment,” _IEEE journal of biomedical and health informatics_ , vol. 20, no. 4, pp. 1016–1025, 2016.
* [14] M. A. Hernández-Mustieles, Y. E. Lima-Carmona, M. A. Pacheco-Ramírez, A. A. Mendoza-Armenta, J. E. Romero-Gómez, C. F. Cruz-Gómez, D. C. Rodríguez-Alvarado, A. Arceo, J. G. Cruz-Garza, M. A. Ramírez-Moreno _et al._ , “Wearable biosensor technology in education: A systematic review,” _Sensors_ , vol. 24, no. 8, p. 2437, 2024.
* [15] W. L. Romine, N. L. Schroeder, J. Graft, F. Yang, R. Sadeghi, M. Zabihimayvan, D. Kadariya, and T. Banerjee, “Using machine learning to train a wearable device for measuring students’ cognitive load during problem-solving activities based on electrodermal activity, body temperature, and heart rate: Development of a cognitive load tracker for both personal and classroom use,” _Sensors_ , vol. 20, no. 17, p. 4833, 2020.
* [16] R. Wang, W. Wang, A. DaSilva, J. F. Huckins, W. M. Kelley, T. F. Heatherton, and A. T. Campbell, “Tracking depression dynamics in college students using mobile phone and wearable sensing,” _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ , vol. 2, no. 1, pp. 1–26, 2018.
* [17] G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun, “Emotion assessment from physiological signals for adaptation of game difficulty,” _IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans_ , 2011.
* [18] S. Asteriadis, P. Tzouveli, K. Karpouzis, and S. Kollias, “Estimation of behavioral user state based on eye gaze and head pose—application in an e-learning environment,” _Multimedia Tools and Applications_ , 2009.
* [19] H. Huang, H. Chen, and S. Lin, “Magtrack: Enabling safe driving monitoring with wearable magnetics,” in _Proceedings of the 17th annual international conference on mobile systems, applications, and services_ , 2019, pp. 326–339.
* [20] Y. Li, D. S. F. Yu, S. Chen, G. Xing, and H. Chen, “EmoMarker: A privacy-preserving, multi-modal sensing system for dyadic digital biomarkers of expressed emotions for patients with dementia,” in _Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services_ , 2024, pp. 614–615.
* [21] B. Yang, S. Jiang, L. Xu, K. Liu, H. Li, G. Xing, H. Chen, X. Jiang, and Z. Yan, “DrHouse: An LLM-empowered diagnostic reasoning system through harnessing outcomes from sensor data and expert knowledge,” _arXiv preprint arXiv:2405.12541_ , 2024.
* [22] J. M. Gottman, N. S. Jacobson, R. H. Rushe, and J. W. Shortt, “The relationship between heart rate reactivity, emotionally aggressive behavior, and general violence in batterers.” _Journal of family psychology_ , 1995.
* [23] H. Henderi, T. Wahyuningsih, and E. Rahwanto, “Comparison of Min-Max normalization and Z-score normalization in the K-nearest neighbor (kNN) algorithm to test the accuracy of types of breast cancer,” _International Journal of Informatics and Information Systems_ , vol. 4, no. 1, pp. 13–20, 2021.
* [24] K.-H. Choi, J. Kim, O. S. Kwon, M. J. Kim, Y. H. Ryu, and J.-E. Park, “Is heart rate variability (HRV) an adequate tool for evaluating human emotions?–a focus on the use of the International Affective Picture System (IAPS),” _Psychiatry research_ , 2017.
* [25] P. J. Lang, “The emotion probe: studies of motivation and attention.” _American psychologist_ , vol. 50, no. 5, p. 372, 1995.
* [26] J. Posner, J. A. Russell, and B. S. Peterson, “The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology,” _Development and psychopathology_ , 2005.
* [27] B. Kort, R. Reilly, and R. W. Picard, “An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning companion.” in _icalt_ , 2001.
* [28] R. L. Mandryk and M. S. Atkins, “A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies,” _International journal of human-computer studies_ , 2007.
* [29] R. N. Caine, G. Caine, C. McClintic, and K. J. Klimek, _12 brain/mind learning principles in action: Teach for the development of higher-order thinking and executive function_. Corwin Press, 2015.
* [30] E. Seneta, _Non-negative matrices and Markov chains_. Springer Science & Business Media, 2006.
* [31] Z. Shao, R. Chandramouli, K. Subbalakshmi, and C. T. Boyadjiev, “An analytical system for user emotion extraction, mental state modeling, and rating,” _Expert Systems with Applications_ , 2019.
* [32] F. Silveira, B. Eriksson, A. Sheth, and A. Sheppard, “Predicting audience responses to movie content from electro-dermal activity signals,” in _Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing_. ACM, 2013.
* [33] “MIT OpenCourseWare,” https://ocw.mit.edu/.
$\tilde{\phi}_{0}(\eta):=\phi_{0}(2^{N_{0}}\varepsilon_{2}\eta).$ By
the definition (A.13) of para-differential operators and the decomposition (A.29):
$\mathrm{pdo}(a)-{\rm
op}(a)=\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))(a_{2}+a_{R}+a_{LF})).$
By Proposition 20 of [8], for all $s^{\prime}\leq s:$
(A.30)
$\|\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))a_{2})v\|_{H^{s^{\prime}}}\lesssim\sup_{\begin{smallmatrix}|\beta|\leq
2[d/2]+2\\\
\xi\in\mathbb{R}^{d}\end{smallmatrix}}\langle\xi\rangle^{|\beta|-m}\|\partial_{\xi}^{\beta}((1-\tilde{\phi}_{0}(D_{x}))a)\|_{H^{s^{\prime}}}\|v\|_{H^{m+d/2}},$
and also, for $d/2+m^{\prime}\leq s:$
(A.31)
$\|\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))a_{2})v\|_{L^{2}}\lesssim\sup_{\begin{smallmatrix}|\beta|\leq
2[d/2]+2\\\
\xi\in\mathbb{R}^{d}\end{smallmatrix}}\langle\xi\rangle^{|\beta|-m}\|\partial_{\xi}^{\beta}((1-\tilde{\phi}_{0}(D_{x}))a)\|_{H^{d/2+m^{\prime}}}\|v\|_{H^{m-m^{\prime}}}.$
By Proposition 23 of [8], the $a_{R}$ term in (A.29) satisfies the same
bounds, (A.30) and (A.31).
The low-frequency term $a_{LF},$ or $(1-\tilde{\phi}_{0}(D_{x}))a_{LF}$ in
decomposition (A.29), is a finite sum of terms of the form
$\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-q}D_{x})a(\cdot,\xi)\phi_{q}(\xi)).$
Given $v\in L^{2},$ the map
$\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-q}D_{x})a(\cdot,\xi)\phi_{q}(\xi))v$
has compact support in Fourier. In particular, its $H^{s^{\prime}}$ norm, for
$0\leq s^{\prime}\leq s,$ is controlled by its $L^{2}$ norm. We may use the
classical bound (A.17):
$\|\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-q}D_{x})a(\cdot,\xi)\phi_{q}(\xi))v\|_{L^{2}}\lesssim\|(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-q}D_{x})a(\cdot,\xi)\|_{m,1+[d/2],1+[d/2]}\|\phi_{q}(D_{x})v\|_{H^{m}}.$
For fixed $q$ and any $m,$ we have
$\|\phi_{q}(D_{x})v\|_{H^{m}}\lesssim\|v\|_{L^{2}},$ for all $v\in L^{2}.$
Thus
(A.32)
$\|\mathrm{pdo}((1-\tilde{\phi}_{0}(D_{x}))a_{LF})v\|_{H^{s^{\prime}}}\lesssim\|(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-N_{0}}D_{x})a(\cdot,\xi)\|_{m,1+[d/2],1+[d/2]}\|v\|_{L^{2}}.$
With the decomposition (A.29), estimates (A.30) and (A.32) bound the action of
the difference $\mathrm{pdo}(a)-{\rm op}(a)$ on $H^{m+d/2},$ and estimates
(A.31) and (A.32) bound the action of $\mathrm{pdo}(a)-{\rm op}(a)$ on
$H^{m-m^{\prime}},$ with $m^{\prime}\leq s-d/2,$ where $s$ measures the
spatial regularity of $a.$ We note that these estimates do not depend on the
form of $a$ assumed in (A.4), and are valid for any symbol $a$ of order $m$
with an adequate degree of spatial regularity.
For $a$ in the form (A.4), we now put in the semiclassical quantization, and
consider
(A.33) $\|(\mathrm{pdo}_{j}(a)-{\rm op}_{j}(a))v\|_{L^{2}}=\|H_{j}^{-1}\big{(}\mathrm{pdo}(h_{j}a)-{\rm op}(h_{j}a)\big{)}H_{j}v\|_{L^{2}},$
for $v\in H^{m+d/2}(\mathbb{R}^{d})$ or $v\in H^{m}(\mathbb{R}^{d}).$ In view
of bounds (A.30) and (A.31), and the expression of $a$ in terms of $\sigma$ in
(A.4), we now want a control of
$\sup_{\begin{smallmatrix}|\beta|\leq 2[d/2]+2\\\
\xi\in\mathbb{R}^{d}\end{smallmatrix}}\langle\xi\rangle^{|\beta|-m}\|((1-\tilde{\phi}_{0}(D_{x}))\partial_{\xi}^{\beta}h_{j}\sigma(u,\xi))\|_{H^{s^{\prime}}}.$
We have, by definition of $h_{j}$ (3.1):
$\displaystyle\|((1-\tilde{\phi}_{0}(D_{x}))\partial_{\xi}^{\beta}h_{j}\sigma(u,\xi))\|^{2}_{H^{s^{\prime}}}\leq\int_{|\eta|\geq
c}2^{-2jd}\langle\eta\rangle^{2s^{\prime}}|\partial_{\xi}^{\beta}{\mathcal{F}}\big{(}\sigma(u(\cdot),\xi)\big{)}(2^{j}\eta)|^{2}\,d\eta.$
where $c=\varepsilon_{1}/(2^{N_{0}}\varepsilon_{2}).$ The above upper bound is
equal to
$2^{-jd}\int_{|\eta|\geq
2^{j}c}\langle\eta\rangle^{2s^{\prime}}|\partial_{\xi}^{\beta}{\mathcal{F}}\big{(}\sigma(u(\cdot),\xi)\big{)}(\eta)|^{2}\,d\eta,$
which we can bound by
$2^{2js^{\prime}-jd}\int_{|\eta|\geq
2^{j}c}|\eta|^{2s}(2^{j}c)^{-2s}|\partial_{\xi}^{\beta}{\mathcal{F}}\big{(}\sigma(u(\cdot),\xi)\big{)}(\eta)|^{2}\,d\eta\lesssim
2^{-j(2s-2s^{\prime}-d)}\|\partial_{\xi}^{\beta}\sigma(u,\xi)\|_{H^{s}}.$
We now use Lemma A.2 (applied to $\partial_{\xi}^{\beta}\sigma,$ which
satisfies assumption (A.3) with $m$ replaced by $m-|\beta|$), and find that
the contribution of $(h_{j}a)_{2}$ and $(h_{j}a)_{R}$ in the difference (A.33)
is controlled by
$2^{-j(s-d/2)}C(\|u\|_{L^{\infty}})\|u\|_{H^{s}}\|H_{j}v\|_{H^{m+d/2}},\quad\mbox{if we use (A.30),}$
and by
$2^{-j(s-d)}C(\|u\|_{L^{\infty}})\|u\|_{H^{s}}\|H_{j}v\|_{H^{m}},\quad\mbox{if we use (A.31).}$
Finally, we deduce from (A.32) that the low-frequency term $(h_{j}a)_{LF}$
contributes
$\langle\xi\rangle^{m-|\beta|}\|\partial_{\xi}^{\beta}\partial_{x}^{\alpha}(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-N_{0}}D)a\|_{L^{\infty}}\lesssim\langle\xi\rangle^{m-|\beta|}\big{\|}{\mathcal{F}}\Big{(}\partial_{x}^{\alpha}(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-N_{0}}D)\partial_{\xi}^{\beta}h_{j}a\Big{)}\big{\|}_{L^{1}}.$
Since
$(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-N_{0}}D)\partial_{\xi}^{\beta}h_{j}a$
is compactly supported in Fourier (with respect to $x$), the
$\partial_{x}^{\alpha}$ contributes only a constant, and we focus on
$\displaystyle\big{\|}{\mathcal{F}}\Big{(}(1-\tilde{\phi}_{0}(D_{x}))\phi_{0}(2^{-N_{0}}D)\partial_{\xi}^{\beta}h_{j}a\Big{)}\big{\|}_{L^{1}}$
$\displaystyle\leq\int_{c_{1}\leq|\eta|\leq
c_{2}}|{\mathcal{F}}(\partial_{\xi}^{\beta}h_{j}a)(\eta)|\,d\eta$
$\displaystyle=\int_{2^{j}c_{1}\leq|\eta|\leq
2^{j}c_{2}}|{\mathcal{F}}(\partial_{\xi}^{\beta}a)(\eta)|\,d\eta,$
for some $0<c_{1}<c_{2}.$ In the integrand we can write $1\lesssim
2^{-js}|\eta|^{s},$ and then by Hölder’s inequality we bound the above by a
constant times
$2^{jd/2}2^{-js}\|\partial_{\xi}^{\beta}a\|_{H^{s}},$
and we may now apply Lemma A.2 to $\partial_{\xi}^{\beta}a.$ This completes
the proof of Proposition A.3.
### A.5 Operator composition
Given $m_{1},m_{2}\in\mathbb{R},$ given $r>0$, given $a_{1}\in
C^{r}S^{m_{1}},$ $a_{2}\in C^{r}S^{m_{2}}$ if $r\in\mathbb{N},$ or given
$a_{1}\in C^{[r],r-[r]}S^{m_{1}},$ $a_{2}\in C^{[r],r-[r]}S^{m_{2}}$ if
$r\notin\mathbb{N},$ we have
(A.34) ${\rm op}_{j}(a_{1}){\rm op}_{j}(a_{2})=\sum_{0\leq
k<r}2^{-jk}\frac{(-i)^{|\alpha|}}{\alpha!}{\rm
op}_{j}\left(\partial_{\xi}^{\alpha}a_{1}\partial_{x}^{\alpha}a_{2}\right)+2^{-jr}R_{r}(a_{1},a_{2}),$
with a remainder $R_{r}(a_{1},a_{2})$ which maps $H^{m_{1}+m_{2}-r}$ to
$L^{2},$ with norm
(A.35)
$\big{\|}R_{r}(a_{1},a_{2})v\big{\|}_{L^{2}}\lesssim\Big{(}\|a_{1}\|_{m_{1},0,m(d,r)}\|a_{2}\|_{m_{2},r,d}+\|a_{1}\|_{m_{1},r,m(d,r)}\|a_{2}\|_{m_{2},0,d}\Big{)}\|v\|_{j,m_{1}+m_{2}-r},$
for some $m(d,r)$ depending on $d$ and $r.$ For a proof of (A.34)-(A.35), see
for instance Theorem 6.1.4 of [4].
The above fractional-order composition result has a simple proof in the
particular case of a Fourier multiplier and a Sobolev map:
###### Lemma A.5.
Given $s>1+d/2,$ given a Fourier multiplier $\chi(D_{x})$ with
$|\cdot|^{s-1-d/2}{\mathcal{F}}^{-1}\chi\in L^{1}(\mathbb{R}^{d}),$ given
$f\in H^{s-1}(\mathbb{R}^{d}),$ we have
$\displaystyle\Big{\|}[\chi(D_{x}),f]v-\sum_{1\leq|\alpha|\leq[s-1-d/2]}\frac{(-i)^{|\alpha|}}{|\alpha|!}$
$\displaystyle(\partial_{x}^{\alpha}f)(x)(\partial_{\xi}^{\alpha}\chi)(D_{x})v\Big{\|}_{L^{2}}$
$\displaystyle\lesssim\|f\|_{H^{s-1}}\big{\|}|\cdot|^{s-1-d/2}{\mathcal{F}}^{-1}\chi\big{\|}_{L^{1}}\|v\|_{L^{2}}.$
In particular, under the above assumptions, we have
$\displaystyle\Big{\|}[\chi(2^{-j}D_{x}),f]v$
$\displaystyle-\sum_{1\leq|\alpha|\leq[\theta]-1}2^{-j|\alpha|}\frac{(-i)^{|\alpha|}}{|\alpha|!}(\partial_{x}^{\alpha}f)(x)(\partial_{\xi}^{\alpha}\chi)(2^{-j}D_{x})v\Big{\|}_{L^{2}}$
$\displaystyle\leq
2^{-j\theta}\sup_{|\alpha|=[\theta]}\|\partial_{x}^{\alpha}f\|_{0,\theta-[\theta]}\big{\|}|\cdot|^{\theta}{\mathcal{F}}^{-1}\chi\big{\|}_{L^{1}}\|v\|_{L^{2}}.$
The Hölder norm in the upper bound is finite by the Sobolev embedding (A.38).
In the statement of Lemma A.5, we used the convention that the sum over an
empty set is equal to zero. This simple composition result is used in the
proof of Lemma 3.9.
###### Proof.
For any $v\in L^{2}:$
$[\chi(D_{x}),f]v=\int_{\mathbb{R}^{d}}\big{(}{\mathcal{F}}^{-1}\chi\big{)}(x-x^{\prime})(f(x^{\prime})-f(x))v(x^{\prime})\
dx^{\prime},$
and with a Taylor expansion of $f\in H^{s-1},$ with $s-1>d/2:$
(A.36)
$[\chi(D_{x}),f]=\sum_{1\leq\alpha\leq[s-1-d/2]-1}\frac{(-i)^{|\alpha|}}{|\alpha|!}(\partial_{x}^{\alpha}f)(x)(\partial_{\xi}^{\alpha}\chi)(D_{x})+R_{[s-1-d/2]}(\chi,f).$
The remainder is explicitly (with $\theta:=s-1-d/2,$ as in (A.38) below)
(A.37) $\displaystyle R_{[\theta]}(\chi,f)v=\int_{\mathbb{R}^{d}}\int_{0}^{1}\frac{(-i(1-t))^{[\theta]-1}}{([\theta]-1)!}\big{(}{\mathcal{F}}^{-1}\chi\big{)}(x-x^{\prime})\sum_{|\alpha|=[\theta]}\partial_{x}^{\alpha}f(x+t(x^{\prime}-x))\cdot(x^{\prime}-x)^{(\alpha)}v(x^{\prime})\,dx^{\prime}\,dt.$
In (A.37), given $\alpha=(\alpha_{1},\dots,\alpha_{d})\in\mathbb{N}^{d},$
given $y=(y_{1},\dots,y_{d})\in\mathbb{R}^{d},$ we used notation
$\partial_{x}^{\alpha}f(x+ty)\cdot y^{(\alpha)}=y_{1}^{\alpha_{1}}\cdots
y_{d}^{\alpha_{d}}\,\partial_{x_{1}}^{\alpha_{1}}\cdots\partial_{x_{d}}^{\alpha_{d}}f(x+ty).$
By the Sobolev embedding
(A.38) $H^{s-1}\hookrightarrow
C^{[\theta],\theta-[\theta]},\qquad\theta=s-1-d/2,$
given $f\in H^{s-1}$ and $|\alpha|=[\theta],$ the map $\partial_{x}^{\alpha}f$
belongs to $C^{0,\theta-[\theta]}.$ Thus we may write
$\partial_{x}^{\alpha}f(x+t(x^{\prime}-x))=\partial_{x}^{\alpha}f(x)+f_{\alpha}(x,t(x^{\prime}-x))|x^{\prime}-x|^{\theta-[\theta]},$
where $f_{\alpha}$ is bounded in both its arguments. This implies
(A.39) $\displaystyle
R_{[\theta]}(\chi,f)v=\frac{(-i)^{[\theta]}}{[\theta]!}\sum_{|\alpha|=[\theta]}(\partial_{x}^{\alpha}f)(x)(\partial_{\xi}^{\alpha}\chi)(D_{x})v+\tilde{R}_{\theta}(\chi,f)v,$
with notation
$\displaystyle\tilde{R}_{\theta}(\chi,f)v=\int_{\mathbb{R}^{d}}\int_{0}^{1}\frac{(-i(1-t))^{[\theta]-1}}{([\theta]-1)!}\big{(}{\mathcal{F}}^{-1}\chi\big{)}(x-x^{\prime})\tilde{f}(x,t(x^{\prime}-x))|x^{\prime}-x|^{\theta}v(x^{\prime})\,dx^{\prime}\,dt,$
where $\tilde{f}$ is bounded in both its arguments, with
$\|\tilde{f}\|_{L^{\infty}}\leq\sup_{\begin{smallmatrix}|\alpha|=[\theta]\\\
x\neq
y\end{smallmatrix}}\frac{|\partial_{x}^{\alpha}f(x)-\partial_{x}^{\alpha}f(y)|}{|x-y|^{\theta-[\theta]}}=:\sup_{|\alpha|=[\theta]}\|\partial_{x}^{\alpha}f\|_{0,\theta-[\theta]}.$
In particular, given $\chi$ compactly supported and $f\in H^{s-1},$ we find
$\|\tilde{R}_{\theta}(\chi,f)v\|_{L^{2}}\lesssim\||\cdot|^{\theta}{\mathcal{F}}^{-1}\chi\|_{L^{1}}\sup_{|\alpha|=[\theta]}\|\partial_{x}^{\alpha}f\|_{0,\theta-[\theta]}\|v\|_{L^{2}},$
where the upper bound is finite by (A.38).
In semiclassical quantization, that is with $\chi(2^{-j}D_{x})$ in place of
$\chi(D_{x}),$ it suffices to observe that
$\big{\|}|\cdot|^{\theta}{\mathcal{F}}^{-1}(\chi(2^{-j}\cdot))\big{\|}_{L^{1}}=2^{-j\theta}\big{\|}|\cdot|^{\theta}{\mathcal{F}}^{-1}\chi\big{\|}_{L^{1}}.$
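Indeed, writing out the dilation (a short check; constants from the Fourier convention play no role here):
$\mathcal{F}^{-1}\big(\chi(2^{-j}\cdot)\big)(x)=2^{jd}\,({\mathcal{F}}^{-1}\chi)(2^{j}x),\qquad\mbox{so that}\qquad\int_{\mathbb{R}^{d}}|x|^{\theta}\,2^{jd}\big|({\mathcal{F}}^{-1}\chi)(2^{j}x)\big|\,dx=2^{-j\theta}\int_{\mathbb{R}^{d}}|y|^{\theta}\big|({\mathcal{F}}^{-1}\chi)(y)\big|\,dy.$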
∎
### A.6 Gårding’s inequality
For $d_{\star}:=3+d,$ for all $Q\in C^{d_{\star}}S^{0},$ if $\Re e\,Q\geq 0,$
then
(A.40) $\Re
e\,(\mathrm{pdo}_{j}(Q)u,u)_{L^{2}}+2^{-j}\|Q\|_{0,d_{\star},d_{\star}}\|u\|_{L^{2}}^{2}\geq
0.$
Inequality (A.40) is Gårding's classical inequality in a semiclassical
setting, as it can be found for instance in Theorem 4.32 in [19]. (Theorem
4.32 in [19] deals with operators in Weyl quantization. We can for instance
use Lemma 4.1.5 in [10] in order to bound the difference between the operator
in Weyl quantization and the operator in classical quantization. The
difference involves spatial derivatives of the symbol, which in semiclassical
quantization bring out a factor $2^{-j}$.) The specific number of derivatives
$d_{\star}=3+d$ needed in (A.40) is found in the proof of Theorem 1.1.26 in
[10].
The Gårding inequality (A.40) extends to para-differential operators, via the
analysis of Section A.4.1. Indeed, we observe that
$(H_{j}^{-1})^{\star}=H_{j}$ (the adjoint is in the $L^{2}$ sense), so that
$\Re e\,({\rm op}_{j}(Q)u,u)_{L^{2}}=\Re e\,(({\rm
op}(h_{j}Q)-\mathrm{pdo}(h_{j}Q))H_{j}u,H_{j}u)_{L^{2}}+\Re
e\,(\mathrm{pdo}_{j}(Q)u,u)_{L^{2}},$
and, as stated in Remark A.4, we may use bounds (A.31) and (A.32) in order to
bound the action of ${\rm op}(h_{j}Q)-\mathrm{pdo}_{j}(Q)$ as an operator from
$L^{2}$ to $L^{2}.$
Consider the case in which $Q$ is compactly supported, jointly in $(x,\xi),$ and
consider the estimates (A.31) and (A.32) bounding the action of ${\rm
op}_{j}(h_{j}Q)-\mathrm{pdo}_{j}(h_{j}Q).$ We can use the arguments from the
proof of Proposition A.3 to verify the bound
$\langle\xi\rangle^{|\beta|}\|(1-\tilde{\phi}_{0}(D_{x}))(h_{j}\partial_{\xi}^{\beta}Q)\|_{H^{d/2}}\lesssim
2^{-j(s-d)}\|\partial_{\xi}^{\beta}Q\|_{H^{s}},$
for $s$ as large as allowed by the regularity of $Q.$ Since $Q$ is compactly
supported in $x,$ the above is controlled by
$2^{-j(s-d)}\|\partial_{\xi}^{\beta}\partial_{x}^{\alpha}Q\|_{L^{\infty}},$
for $|\alpha|\leq[s]+1.$ We see in the last lines of the proof of Proposition
A.3 that the low-frequency term $Q_{LF}$ contributes a similar term. Thus, for
$Q$ compactly supported in $(x,\xi),$ we find
$\|({\rm op}_{j}(h_{j}Q)-\mathrm{pdo}_{j}(h_{j}Q))v\|_{L^{2}}\lesssim
2^{-j(s-d)}\|Q\|_{0,[s]+1,2([d/2]+1)}\|v\|_{L^{2}},$
hence a paradifferential version of Gårding’s inequality as follows:
(A.41) $\Re e\,({\rm op}_{j}(Q)u,u)_{L^{2}}+\Big{(}2^{-j}\|Q\|_{0,d_{\star},d_{\star}}+2^{-j(s-d)}\|Q\|_{0,[s]+1,2([d/2]+1)}\Big{)}\|u\|_{L^{2}}^{2}\geq 0,$
for all $Q\in C^{\max(d_{\star},[s]+1)}S^{0}$ compactly supported in $(x,\xi),$ all $u\in L^{2},$ and $s>d.$
## Appendix B Rates of growth via Gårding for the flows of operators with
symbols having Hölder spatial regularity
Given $\theta>0$ and $T>0,$ consider a continuous map $Q:t\in[0,T]\to Q(t)\in
C^{[\theta],\theta-[\theta]}S^{0}$ (see the definition of classical classes of
symbols with limited spatial regularity in Appendix A) with support near
$(x^{0},\xi^{0}),$ for some
$(x^{0},\xi^{0})\in\mathbb{R}^{d}\times\mathbb{S}^{d-1},$ in the following
sense:
(B.1)
$\mbox{supp}\,Q(t)\subset\big{\\{}(x,\xi)\in\mathbb{R}^{d}\times\mathbb{R}^{d},\quad|x-x^{0}|+|\xi-\xi^{0}|\leq
R\\},\quad\mbox{for some $0<R<1,$}$
uniformly in $t.$ The family of symmetric matrices $\Re e\,Q(t,\cdot)$ is
continuous and compactly supported. We denote $\gamma_{+}$ an upper spectral
bound:
(B.2) $\Re e\,\,Q(t)\leq\gamma_{+}(t),\quad\mbox{for some
$t\to\gamma_{+}(t)\in\mathbb{R},$ for all $(x,\xi),$}$
and $\gamma_{-}(t)$ a spectral lower bound in restriction to a smaller ball
near $(x^{0},\xi^{0}):$
(B.3) $\displaystyle\gamma_{-}(t)\leq\Re e\,Q(t),$ for some
$\gamma_{-}\in\mathbb{R},$ for all $(x,\xi)$ such that
$|x-x^{0}|+|\xi-\xi^{0}|\leq r,$ for some $0<r<R.$
In (B.2) and (B.3), the matrix $\Re e\,Q$ is the symmetric matrix
$(Q+Q^{*})/2.$ We use notation $\psi,$ like in the main proof, to describe a
smooth space-frequency cut-off such that (B.3) holds on the support of $\psi.$
Consider the initial-value problem
(B.4) $\left\\{\begin{aligned} \partial_{t}u={\rm op}_{j}(Q)u,\\\
u(0)=u_{0}\in L^{2}.\end{aligned}\right.$
Since $Q(t)$ is order 0, the linear operator ${\rm op}_{j}(Q(t))$ maps $L^{2}$
to $L^{2},$ hence (B.4) has a unique global solution $u$ by the Cauchy-
Lipschitz theorem.
We can prove the following lower and upper rates of growth for the solution
$u$ to (B.4):
###### Proposition B.1.
For the solution $u$ to (B.4), for some $C>0$ which depends only on $Q$ and
the dimension $d:$
* •
We have the upper bound
(B.5)
$\|u(t)\|_{L^{2}}\leq\exp\Big{(}\int_{0}^{t}\big{(}\gamma_{+}(t^{\prime})+C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\,\Big{)}\|u_{0}\|_{L^{2}},$
where $\theta_{\star}:=\theta/(\theta+d_{\star}),$ with $d_{\star}$ the number of derivatives of the symbols in Gårding’s inequality (A.40), $\theta>0$ the Hölder regularity of $Q,$ and $\gamma_{+}$ the upper spectral bound introduced in (B.2).
* •
Let $\tilde{\psi}$ be a space-frequency cut-off such that the lower bound
(B.3) holds over the support of $\tilde{\psi}.$ Then, given $u_{0}$ such that
${\rm op}_{j}(\tilde{\psi})u_{0}\neq 0,$ for $t$ such that
(B.6) $0\leq t\leq t_{\star},\quad\mbox{with $t_{\star}$ defined by
$\displaystyle{\int_{0}^{t_{\star}}(\gamma_{+}-\gamma_{-})(t^{\prime})\,dt^{\prime}=j(\theta_{\star}\ln
2-\varepsilon)}$}$
for any $\varepsilon>0,$ where $\theta_{\star}$ is introduced below (B.5) and
the lower rate $\gamma_{-}$ is introduced in (B.3), we have the lower bound,
(B.7)
$\frac{9}{10}\exp\Big{(}\int_{0}^{t}\big{(}\gamma_{-}(t^{\prime})-C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\Big{)}\|{\rm
op}_{j}(\tilde{\psi})u_{0}\|_{L^{2}}\leq\|u(t)\|_{L^{2}},$
for large enough $j,$ depending only on $Q,$ $d,$ and $\varepsilon.$
###### Proof.
We use notations and results from Section A.3, as we regularize $Q$ into
$Q^{\varepsilon}\in C^{\infty}S^{0},$ by spatial convolution with a smoothing
kernel. By (A.19)-(A.22), we have
(B.8) $\|{\rm op}_{j}(Q-Q^{\varepsilon})\|_{L^{2}\to
L^{2}}\lesssim\varepsilon^{\theta},\qquad\mbox{and}\qquad\|Q^{\varepsilon}\|_{0,k,k^{\prime}}\lesssim\varepsilon^{-k},$
using notation (A.9). Via (B.8), the lower and upper bounds (B.3) and (B.2)
for $Q$ translate into lower and upper bounds for $Q^{\varepsilon}.$ We let
$\gamma_{+}^{\varepsilon}=\gamma_{+}+C\varepsilon^{\theta},\qquad\gamma_{-}^{\varepsilon}:=\gamma_{-}-C\varepsilon^{\theta},$
for some $C>0,$ and
$v_{\pm}=\exp\Big{(}-\int_{0}^{t}\gamma^{\varepsilon}_{\pm}(t^{\prime})\,dt^{\prime}\Big{)}u.$
Then, $v_{\pm}$ solve
$\displaystyle\partial_{t}v_{-}={\rm
op}_{j}(Q^{\varepsilon}-\gamma_{-}^{\varepsilon})v_{-}+{\rm
op}_{j}(Q-Q^{\varepsilon})v_{-}-C\varepsilon^{\theta}v_{-},$
$\displaystyle\partial_{t}v_{+}+{\rm
op}_{j}(\gamma_{+}^{\varepsilon}-Q^{\varepsilon})v_{+}={\rm
op}_{j}(Q-Q^{\varepsilon})v_{+}+C\varepsilon^{\theta}v_{+}.$
Since $\Re e\,(\gamma_{+}^{\varepsilon}-Q^{\varepsilon})\geq 0$ in the whole
domain $\mathbb{R}^{d}_{x}\times\mathbb{R}^{d}_{\xi},$ we may apply Gårding’s
inequality (A.40), and obtain
$\frac{1}{2}\partial_{t}\|v_{+}\|_{L^{2}}^{2}\leq
C(2^{-j}\varepsilon^{-d_{\star}}+2^{-j(s-d)}\varepsilon^{-([s]+1)}+\varepsilon^{\theta})\|v_{+}\|_{L^{2}}^{2},$
for any $s$ and for some constant $C>0,$ which depends neither on $j$ nor on
$\varepsilon.$ We choose $\varepsilon$ and $s$ so that the errors above are
the same order of magnitude:
(B.9) $\varepsilon:=2^{-j/(\theta+d_{\star})},$
and $s\geq(1-1/(\theta+d_{\star}))^{-1}(d+(\theta+1)/(\theta+d_{\star})).$
Then all three errors above have size $2^{-j\theta/(\theta+d_{\star})},$ up to
a multiplicative constant which does not depend on $j.$ The upper bound (B.5)
follows.
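For the reader's convenience, the balancing behind this choice is the elementary computation
$2^{-j}\varepsilon^{-d_{\star}}=2^{-j}\,2^{jd_{\star}/(\theta+d_{\star})}=2^{-j\theta/(\theta+d_{\star})},\qquad\varepsilon^{\theta}=2^{-j\theta/(\theta+d_{\star})},$
while the above choice of $s$ makes the middle error term at most of the same size.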
The proof of the lower bound (B.7) is just a bit more delicate, since the
posited lower bound (B.3) on $\Re e\,Q$ is valid only locally, on the support
of $\tilde{\psi}.$ Let $\tilde{\psi}^{\flat}$ such that
$\tilde{\psi}^{\flat}\prec\tilde{\psi},$ and
$w:={\rm op}_{j}(\tilde{\psi}^{\flat})v_{-}.$
Then, $w$ solves
$\partial_{t}w={\rm op}_{j}(\tilde{\psi}^{\flat}){\rm
op}_{j}(Q^{\varepsilon}-\gamma_{-}^{\varepsilon})v_{-}+{\rm
op}_{j}(\tilde{\psi}^{\flat}){\rm op}_{j}(Q-Q^{\varepsilon})v_{-}.$
Using $\tilde{\psi}^{\flat}\prec\tilde{\psi}$ and (A.34), we find
${\rm op}_{j}(\tilde{\psi}^{\flat}){\rm op}_{j}(Q^{\varepsilon})={\rm
op}_{j}(\tilde{\psi}^{\flat}\tilde{\psi}){\rm op}_{j}(Q^{\varepsilon})={\rm
op}_{j}(\tilde{\psi}Q^{\varepsilon}){\rm
op}_{j}(\tilde{\psi}^{\flat})+2^{-j}{\rm op}_{j}(Q^{\varepsilon}_{1}),$
where $Q^{\varepsilon}_{1}\in S^{0}$ is bounded by (A.35), in particular
involves one spatial derivative of $Q^{\varepsilon},$ so that, with the second
bound in (B.8),
$\|{\rm op}_{j}(Q^{\varepsilon}_{1})\|_{L^{2}\to
L^{2}}\lesssim\varepsilon^{-1},$
uniformly in $j.$ Thus the equation in $w$ is
$\partial_{t}w={\rm
op}_{j}(\tilde{\psi}(Q^{\varepsilon}-\gamma_{-}^{\varepsilon}))w+2^{-j}{\rm
op}_{j}(Q^{\varepsilon}_{1})v_{-}+{\rm op}_{j}(\tilde{\psi}^{\flat}){\rm
op}_{j}(Q-Q^{\varepsilon})v_{-}+C\varepsilon^{\theta}w.$
We now apply (A.41):
$\frac{1}{2}\partial_{t}\|w\|_{L^{2}}^{2}+2^{-j\theta/(\theta+d_{\star})}C\|w\|_{L^{2}}^{2}\geq\Re
e\,\Big{(}2^{-j}{\rm op}_{j}(Q^{\varepsilon}_{1})v_{-}+{\rm
op}_{j}(\tilde{\psi}^{\flat}){\rm
op}_{j}(Q-Q^{\varepsilon})v_{-},w\Big{)}_{L^{2}}.$
The $2^{-j\theta/(\theta+d_{\star})}$ prefactor in the Gårding error term is
as in the upper bound, above. The upper bound (B.5) gives an upper bound for
$v_{-}:$
$\|v_{-}(t)\|_{L^{2}}\leq\|u_{0}\|_{L^{2}}\exp\Big{(}\int_{0}^{t}\big{(}\,(\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-})(t^{\prime})+C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\Big{)},$
hence
$\displaystyle\partial_{t}\|w\|_{L^{2}}$
$\displaystyle+C2^{-j\theta_{\star}}\|w\|_{L^{2}}$
$\displaystyle\geq-C\Big{(}2^{-j}\varepsilon^{-1}+\varepsilon^{\theta}\Big{)}\exp\Big{(}\int_{0}^{t}\big{(}\,\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-}+C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\Big{)}\|u_{0}\|_{L^{2}}.$
The above lower bound implies
$\displaystyle\|w(t)\|_{L^{2}}$ $\displaystyle\geq
e^{-tC2^{-j\theta_{\star}}}\|w_{0}\|_{L^{2}}$
$\displaystyle-C\Big{(}2^{-j}\varepsilon^{-1}+\varepsilon^{\theta}\Big{)}\exp\Big{(}\int_{0}^{t}\big{(}(\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-})(t^{\prime})+C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\Big{)}\|u_{0}\|_{L^{2}}.$
Recall the definition of $\varepsilon$ in terms of $j$ in (B.9). We have
$2^{-j}\varepsilon^{-1}\leq 2^{-j\theta/(\theta+d_{\star})}.$ Thus the above
lower bound takes the form
$\displaystyle\|w(t)\|_{L^{2}}$ $\displaystyle\geq
e^{-tC2^{-j\theta_{\star}}}\|w_{0}\|_{L^{2}}$
$\displaystyle-C2^{-j\theta_{\star}}\exp\Big{(}\int_{0}^{t}\big{(}(\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-})(t^{\prime})+C2^{-j\theta_{\star}}\big{)}\,dt^{\prime}\Big{)}\|u_{0}\|_{L^{2}}.$
By assumption on $u_{0},$ the ratio
$c_{0}:=\|w(0)\|_{L^{2}}/\|u_{0}\|_{L^{2}}$ is positive, and we have
$\displaystyle\|w(t)\|_{L^{2}}$ $\displaystyle\geq
e^{-tC2^{-j\theta_{\star}}}\|w_{0}\|_{L^{2}}\Big{(}1-\frac{C}{c_{0}}2^{-j\theta_{\star}}\exp\Big{(}\int_{0}^{t}(\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-}+C2^{-j\theta_{\star}})\,dt^{\prime}\Big{)}\Big{)}.$
Thus a sufficient condition for the lower bound (B.7) is
$\frac{C}{c_{0}}2^{-j\theta_{\star}}\exp\Big{(}\int_{0}^{t}(\,\gamma^{\varepsilon}_{+}-\gamma^{\varepsilon}_{-}+C2^{-j\theta_{\star}})\,dt^{\prime}\Big{)}\leq\frac{1}{10}.$
That is, for some $C>0$ which depends only on $Q$ and the dimension $d,$
$C2^{-j\theta_{\star}}\exp\Big{(}\int_{0}^{t}(\gamma_{+}-\gamma_{-}+C2^{-j\theta_{\star}})\,dt^{\prime}\Big{)}\|u_{0}\|_{L^{2}}\leq\frac{1}{10}\|{\rm
op}_{j}(\tilde{\psi}^{\flat})u_{0}\|_{L^{2}}.$
If $t$ is not too large, in the sense of (B.6), and $j$ is large enough, then
the above inequality holds, and this implies the lower bound (B.7). ∎
## References
* [1] J. Bourgain, D. Li. Strong ill-posedness of the incompressible Euler equation in borderline Sobolev spaces. Invent. Math. 201 (2015), no. 1, 97-157.
* [2] T. Kato, Perturbation theory for linear operators. Reprint of the 1980 edition. Classics in Mathematics. Springer-Verlag, Berlin, 1995.
* [3] G.Métivier, _Remarks on the well posedness of the nonlinear Cauchy problem._ Contemp.Math., vol. 368, Amer. Math. Soc., Providence, RI, 2005.
* [4] G.Métivier, _Para-differential Calculus and applications to the Cauchy problem for nonlinear systems_ , Centro di Ricerca Mathematica Ennio De Giorgi (CRM) Series, 5. Edizioni della Normale, Pisa, 2008.
* [5] B. Morisse, On hyperbolicity and Gevrey well-posedness. Part one: the elliptic case. Annales Henri Lebesgue, Volume 3 (2020) p. 1195–1239.
* [6] B. Morisse, On hyperbolicity and Gevrey well-posedness. Part two: Scalar or degenerate transitions. J. Differential Equations 264 (2018), no. 8, 5221-5262.
* [7] B. Morisse, On hyperbolicity and Gevrey well-posedness. Part three: a model of weakly hyperbolic systems. https://arxiv.org/abs/1803.04724, 2018. To appear in Indiana University Mathematics Journal.
* [8] D. Lannes, Sharp estimates for pseudo-differential operators with symbols of limited smoothness and commutators. J. Funct. Anal. 232 (2006), no. 2, 495–539.
* [9] P. D. Lax, Asymptotic solutions of oscillatory initial value problems. Duke Math. J. 24 (1957) 627–646.
* [10] N. Lerner, _Metrics on the Phase Space and Non-Selfadjoint Operators_. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser 2010. xii+397 pp.
* [11] N. Lerner, Y. Morimoto, C.-J. Xu, Instability of the Cauchy-Kovalevskaya solution for a class of non-linear systems, American J. Math., 132 (2010), 1, 99-123.
* [12] N. Lerner, T. Nguyen, B. Texier, The onset of instability in first-order systems. J. Eur. Math. Soc. (JEMS) 20 (2018), no. 6, 1303-1373.
* [13] Y. Lu, B. Texier, A stability criterion for high-frequency oscillations, Mém. Soc. Math. Fr. 142 (2015).
* [14] S. Mizohata, Some remarks on the Cauchy problem. J. Math. Kyoto Univ. 1961-1962 109–127.
* [15] K. Ndoumajoud, B. Texier, On Métivier’s Lax-Mizohata theorem and extensions to weak defects of hyperbolicity. Part two. Preprint, 2020.
* [16] B. Texier, _Derivation of the Zakharov equations_ , Archive for Rational Mechanics and Analysis 184 (2007), 121–183.
* [17] B. Texier, _Approximations of pseudo-differential flows,_ Indiana Univ. Math. J. 65 (2016), no.1, 243-272.
* [18] B. Texier, _Basic matrix perturbation theory._ Enseign. Math. 64 (2018), no. 3-4, 249–263.
* [19] M. Zworski, _Semi-classical analysis_ , Graduate studies in mathematics, AMS, 2012.
# Twist and Turn Squeezing in a Multi-Mode Bose-Einstein Condensate
Junbang Liu Department of Quantum Science, Research School of Physics,
Australian National University, Canberra, Ngunnawal Country, Australia.
Thomas Bartlett Department of Quantum Science, Research School of Physics,
Australian National University, Canberra, Ngunnawal Country, Australia.
Joseph Hope Department of Quantum Science, Research School of Physics,
Australian National University, Canberra, Ngunnawal Country, Australia. Simon
Haine Department of Quantum Science, Research School of Physics, Australian
National University, Canberra, Ngunnawal Country, Australia.
<EMAIL_ADDRESS>
###### Abstract
Here we examine the generation of Twist and Turn (TNT) Squeezing in a large
atom-number Bose-Einstein Condensate for the purposes of generating quantum-
enhanced states for atom interferometry. Unlike previous analyses, we examine
situations where the multi-mode dynamics is significant, and cannot be
captured by a simple single-mode model. We find that in some regimes, with
careful choice of the rotation parameter, we can still obtain squeezing much
more rapidly than via one-axis twisting (OAT).
## I Introduction
There is currently considerable interest in the production of entangled states
in Bose-Einstein condensates with the motivation of enhancing the sensitivity
of atom interferometers and atomic clocks Pezzè _et al._ (2018); Szigeti _et
al._ (2021). Without many-particle entanglement, the phase sensitivity of such
experiments is fundamentally constrained to the shot-noise limit (SNL)
$\Delta\phi=1/\sqrt{N}$ Giovannetti _et al._ (2006); Pezzé and Smerzi (2009).
In recent years, experiments in atomic systems based on the one-axis twisting
(OAT) squeezing scheme of Kitagawa and Ueda Kitagawa and Ueda (1993); Mølmer
and Sørensen (1999) have demonstrated metrologically useful spin-squeezing
Esteve _et al._ (2008); Leroux _et al._ (2010), and sub-shot-noise phase
detection Gross _et al._ (2010); Riedel _et al._ (2010); Muessel _et al._
(2014) in proof-of-principle experiments. However, typical experiments are
limited to only moderate quantum enhancement due to constraints on the state
preparation time imposed by dephasing Li _et al._ (2008, 2009); Haine
(2018a), multi-mode dynamics Haine and Johnsson (2009); Haine _et al._
(2014), or in the case of atomic gravimetry, expansion of the freely
propagating wave-packets Szigeti _et al._ (2020). This leads to a degree of
quantum enhancement that is considerably less than the theoretical optimum.
Recently, a related method known as Twist and Turn (TNT) squeezing Micheli
_et al._ (2003); Sorelli _et al._ (2019); Mirkhalaf _et al._ (2018) has been
demonstrated Strobel _et al._ (2014); Muessel _et al._ (2015). The TNT
Hamiltonian, which is a specific case of the Lipkin-Meshkov-Glick Hamiltonian
Lipkin _et al._ (1965) using the same nonlinear interactions as OAT with an
additional linear rotation, typically reaches larger degrees of quantum
enhancement for the same interaction time. As TNT is based on the same
interactions that lead to OAT squeezing, it can be implemented in the same
experimental set-ups.
As well as spin-squeezing, it has been shown that TNT dynamics is capable of
generating highly entangled states beyond the Gaussian regime Strobel _et
al._ (2014); Muessel _et al._ (2015). In this case, the metrologically useful
entanglement can be quantified via the quantum Fisher information (QFI)
Demkowicz-Dobrzanski _et al._ (2015); Braunstein and Caves (1994); Paris
(2009); Tóth and Apellaniz (2014), and can be accessed via interaction based
readouts Davis _et al._ (2016); Hosten _et al._ (2016); Fröwis _et al._
(2016); Macrì _et al._ (2016); Linnemann _et al._ (2016); Nolan _et al._
(2017); Szigeti _et al._ (2017); Anders _et al._ (2018); Mirkhalaf _et al._
(2018); Haine (2018b); Hayes _et al._ (2018); Haine (2021).
So far, experimental demonstrations of TNT dynamics have been restricted to
samples of a few hundred atoms. One reason for this is that larger atom
numbers introduce multimode dynamics Li _et al._ (2009); Haine and Johnsson
(2009); Opanchuk _et al._ (2012); Haine _et al._ (2014). These dynamics can
cause the two spin components to spatially separate. In the case of OAT, this
separation can significantly increase the rate of entangling dynamics Riedel
_et al._ (2010); Li _et al._ (2008); Nolan and Haine (2018). However, for
efficient TNT dynamics, we require continuous spatial overlap to enable
continuous coupling between the two spin components. In this paper, we
investigate how this multimode dynamics affects the implementation of TNT
dynamics.
The structure of this paper is as follows: In section II we introduce the
theoretical model used. In section III we derive an effective single-mode
model from our multi-mode model, and recap ideal single-mode behaviour of both
the OAT and TNT hamiltonians. In section IV we identify three parameter
regimes, and discuss how multi-mode dynamics affects the ability to generate
spin-squeezing and entanglement in each of these regimes, before summarising
our findings in section V.
## II Model
We consider an atomic BEC with two relevant atomic states $|a\rangle$ and
$|b\rangle$, confined in a potential $V(x)$. For example, these levels could
be the two hyperfine ground states of 87Rb, in which case
$|a\rangle\equiv|F=1,m_{F}=0\rangle$ and $|b\rangle\equiv|F=2,m_{F}=0\rangle$.
We also allow for continuous coupling between the two states via a radio
frequency or microwave transition, which provides the ‘turn’ in the TNT
dynamics. Introducing the bosonic field operators
$\hat{\psi}_{i}(\mathbf{r})$, which annihilate an atom of state $|i\rangle$
from position $\mathbf{r}$, and obey the usual bosonic commutation relations
$\displaystyle\left[\hat{\psi}_{i}(\mathbf{r})\,,\,\,\hat{\psi}^{\dagger}_{j}(\mathbf{r}^{\prime})\right]=\delta_{ij}\delta(\mathbf{r}-\mathbf{r}^{\prime})\,,$
(1)
the Hamiltonian describing the system is
$\displaystyle\hat{H}$
$\displaystyle=\sum_{j=a,b}\int\hat{\psi}^{\dagger}_{j}(\mathbf{r})\hat{H}_{0}\hat{\psi}_{j}(\mathbf{r})\,\,d^{3}\mathbf{r}$
$\displaystyle+\sum_{j,k=a,b}\frac{U_{jk}}{2}\int\hat{\psi}^{\dagger}_{j}(\mathbf{r})\hat{\psi}^{\dagger}_{k}(\mathbf{r})\hat{\psi}_{j}(\mathbf{r})\hat{\psi}_{k}(\mathbf{r})\,\,d^{3}\mathbf{r}$
$\displaystyle+\frac{\hbar\Omega}{2}\int\left(\hat{\psi}^{\dagger}_{a}(\mathbf{r})\hat{\psi}_{b}(\mathbf{r})+\hat{\psi}^{\dagger}_{b}(\mathbf{r})\hat{\psi}_{a}(\mathbf{r})\right)\,\,d^{3}\mathbf{r}\,,$
(2)
where
$\displaystyle\hat{H}_{0}$
$\displaystyle=\frac{-\hbar^{2}}{2m}\bm{\nabla}^{2}+V(x)$ (3)
is the single particle Hamiltonian, and
$U_{ij}=\frac{4\pi\hbar^{2}a_{ij}}{m}$ (4)
is the inter-particle interactions between atoms in states $|i\rangle$ and
$|j\rangle$, parameterised by the $s$-wave scattering length $a_{ij}$ Chin
_et al._ (2010). The final term in Eq. (2) describes continuous coupling
between states $|a\rangle$ and $|b\rangle$ with Rabi frequency $\Omega$. We
assume that all atoms are initially in state $|a\rangle$ in the ground
motional state before applying a $\frac{\pi}{2}$ pulse at $t=0$ to coherently
couple 50% of the population to state $|b\rangle$, such that the state of each
atom is $\frac{1}{\sqrt{2}}\left(|a\rangle+|b\rangle\right)$. The system then
evolves under Eq. (2).
The goal of the scheme is to produce an entangled state, which when used as
the input state of an atom interferometer, is capable of providing
sensitivities better than the SNL. Atom interferometery is best described via
the SU(2) collective pseudo-spin operators defined by
$\displaystyle\hat{J}_{x}$
$\displaystyle=\frac{1}{2}\int\left(\hat{\psi}^{\dagger}_{b}(\mathbf{r})\hat{\psi}_{a}(\mathbf{r})+\hat{\psi}^{\dagger}_{a}(\mathbf{r})\hat{\psi}_{b}(\mathbf{r})\right)\,d^{3}\mathbf{r}\,,$
(5a) $\displaystyle\hat{J}_{y}$
$\displaystyle=-\frac{i}{2}\int\left(\hat{\psi}^{\dagger}_{b}(\mathbf{r})\hat{\psi}_{a}(\mathbf{r})-\hat{\psi}^{\dagger}_{a}(\mathbf{r})\hat{\psi}_{b}(\mathbf{r})\right)\,d^{3}\mathbf{r}\,,$
(5b) $\displaystyle\hat{J}_{z}$
$\displaystyle=\frac{1}{2}\int\left(\hat{\psi}^{\dagger}_{a}(\mathbf{r})\hat{\psi}_{a}(\mathbf{r})-\hat{\psi}^{\dagger}_{b}(\mathbf{r})\hat{\psi}_{b}(\mathbf{r})\right)\,d^{3}\mathbf{r}\,.$
(5c)
Assuming a simple Mach-Zehnder interferometer composed of a
$\frac{\pi}{2}-\pi-\frac{\pi}{2}$ pulse sequence, if measurements capable of
resolving the mean and variance of the number difference at the output of the
interferometer are made, the phase sensitivity is given by
$\displaystyle\Delta\phi$ $\displaystyle=\frac{\xi}{\sqrt{N}}\,,$ (6)
where $\xi$ is the Wineland spin-squeezing parameter Wineland _et al._ (1992)
defined by
$\displaystyle\xi$
$\displaystyle=\sqrt{\frac{N\mathrm{Var}(\hat{J}_{\theta})}{\langle\hat{J}_{x}\rangle^{2}}}\,,$
(7)
with
$\displaystyle\hat{J}_{\theta}$
$\displaystyle=\hat{J}_{z}\cos\theta+\hat{J}_{y}\sin\theta\,,$ (8)
where the angle $\theta$ is chosen to minimise
$\mathrm{Var}(\hat{J}_{\theta})$. This state is then converted into a state
with reduced fluctuations in number difference by a rotation by $-\theta$
around the $\hat{J}_{x}$ axis, before entering the interferometer. In practice,
this is done via an additional coherent Rabi pulse.
## III Ideal single-mode dynamics
We begin by reviewing the ideal _single-mode_ dynamics of both the OAT and TNT
Hamiltonians. We do this by expanding the field operator into an orthonormal
single-particle basis, and then make the assumption that only one mode is
occupied, i.e.
$\displaystyle\hat{\psi}_{a}(\mathbf{r})$
$\displaystyle=\sum_{j=0}^{\infty}\hat{a}_{j}u_{a,j}(\mathbf{r})\approx
u_{a,0}(\mathbf{r})\hat{a}_{0}$ (9a) $\displaystyle\hat{\psi}_{b}(\mathbf{r})$
$\displaystyle=\sum_{j=0}^{\infty}\hat{b}_{j}u_{b,j}(\mathbf{r})\approx
u_{b,0}(\mathbf{r})\hat{b}_{0}\,.$ (9b)
This approximation has been shown to be reasonably valid in tight confining
potentials with small numbers of atoms Gross _et al._ (2010); Riedel _et
al._ (2010), but breaks down when the population increases to the point where
the energy due to atomic interactions dominates Haine _et al._ (2014). Making
this approximation in Eq. (2) gives
$\displaystyle\hat{H}$
$\displaystyle=\hbar\chi_{aa}\hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a}+\hbar\chi_{bb}\hat{b}^{\dagger}\hat{b}^{\dagger}\hat{b}\hat{b}+2\hbar\chi_{ab}\hat{a}^{\dagger}\hat{a}\hat{b}^{\dagger}\hat{b}$
$\displaystyle+\frac{\hbar\Omega}{2}\left(\eta\hat{a}^{\dagger}\hat{b}+\eta^{*}\hat{b}^{\dagger}\hat{a}\right)\,,$
(10)
where
$\displaystyle\chi_{ij}=\frac{U_{ij}}{2\hbar}\int\left|u_{i}(\mathbf{r})\right|^{2}\left|u_{j}(\mathbf{r})\right|^{2}\,d^{3}\mathbf{r}\,,$
(11)
and
$\displaystyle\eta=\int
u_{a}^{*}(\mathbf{r})u_{b}(\mathbf{r})\,d^{3}\mathbf{r}\,,$ (12)
and we have made the replacements $\hat{a}_{0}\rightarrow\hat{a}$,
$\hat{b}_{0}\rightarrow\hat{b}$, $u_{a(b),0}(\mathbf{r})\rightarrow
u_{a(b)}(\mathbf{r})$ for notational simplicity. Introducing the single-mode
version of the pseudo-spin operators (Eq. (5))
$\displaystyle\hat{J}_{x}$
$\displaystyle=\frac{1}{2}\left(\hat{a}^{\dagger}\hat{b}+\hat{b}^{\dagger}\hat{a}\right)\,,$
(13a) $\displaystyle\hat{J}_{y}$
$\displaystyle=\frac{i}{2}\left(\hat{a}^{\dagger}\hat{b}-\hat{b}^{\dagger}\hat{a}\right)\,,$
(13b) $\displaystyle\hat{J}_{z}$
$\displaystyle=\frac{1}{2}\left(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat{b}\right)\,,$
(13c)
and assuming perfect overlap ($\eta=1$), the Hamiltonian (Eq. (10)) can be
written as
$\displaystyle\hat{H}$
$\displaystyle=\hbar\chi\hat{J}_{z}^{2}+\hbar\chi_{-}(\hat{N}-1)\hat{J}_{z}+\hbar\Omega\hat{J}_{x}\,,$
(14)
where $\chi=\chi_{aa}+\chi_{bb}-2\chi_{ab}$, $\chi_{-}=\chi_{aa}-\chi_{bb}$,
and $\hat{N}=\hat{a}^{\dagger}\hat{a}+\hat{b}^{\dagger}\hat{b}$ is the total
number of atoms. For a fixed number of atoms $N$, when $\chi_{-}=0$, and
$\Omega=\chi N/2$, this is the well studied TNT Hamiltonian, and when
$\Omega=0$ we recover the OAT Hamiltonian. Specifically, we define
$\displaystyle\hat{H}_{\mathrm{OAT}}$
$\displaystyle=\hbar\chi\hat{J}_{z}^{2}\,,$ (15a)
$\displaystyle\hat{H}_{\mathrm{TNT}}$
$\displaystyle=\hbar\chi\hat{J}_{z}^{2}+\hbar\Omega\hat{J}_{x}\,.$ (15b)
In writing Eq. (15a) and Eq. (15b), we have neglected terms that only depend
on $\hat{N}$, as they do not result in any observable effects.
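The single-mode dynamics of Eqs. (15a) and (15b) is straightforward to reproduce numerically. The following minimal sketch (assuming the QuTiP library is available; the atom number, interaction strength, and time grid are illustrative choices rather than values from the text) evolves a coherent spin state under both Hamiltonians and tracks the growth of the pseudo-spin variance discussed below.

```python
# Minimal sketch (QuTiP assumed available) of the ideal single-mode dynamics of
# Eqs. (15a)-(15b): evolve a coherent spin state under the OAT and TNT
# Hamiltonians and track the pseudo-spin variance Var(J_y).
import numpy as np
from qutip import jmat, spin_coherent, sesolve, variance

N = 100                       # number of atoms (illustrative)
j = N / 2                     # collective spin length
chi = 1.0                     # interaction strength; sets the time unit
jx, jy, jz = jmat(j)

H_oat = chi * jz ** 2                        # Eq. (15a), hbar = 1
H_tnt = chi * jz ** 2 + (chi * N / 2) * jx   # Eq. (15b) with Omega = chi*N/2

psi0 = spin_coherent(j, np.pi / 2, 0)        # CSS polarised along J_x
times = np.linspace(0, 0.3 / chi, 200)

for label, H in [("OAT", H_oat), ("TNT", H_tnt)]:
    states = sesolve(H, psi0, times).states
    var_jy = [variance(jy, s) for s in states]
    print(label, "max Var(Jy) =", max(var_jy), "; CSS value N/4 =", N / 4)
```

With these (assumed) parameters the TNT variance grows far more quickly than the OAT one, mirroring the qualitative behaviour described in the following paragraphs.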
Figures 1 and 2 show the evolution of an initial coherent spin state (CSS)
under these Hamiltonians. We visualise the state using the Husimi $Q$-function
Arecchi _et al._ (1972); Agarwal (1998). The most notable difference between
OAT and TNT dynamics is the speed at which the state evolves away from a CSS.
Figure (3) shows the variance of the three pseudo-spins ($\hat{J}_{x}$,
$\hat{J}_{y}$, $\hat{J}_{z}$) for both cases. We see that the timescale for
the variance to increase from $\sim N/4$ (that of a CSS) to $\sim N^{2}/8$
(that of a highly non-classical state, such as a twin Fock-state, for example)
is approximately 40 times faster for the TNT dynamics.
Figure 1: The Husimi $Q$-function $Q(\theta,\varphi)$, for the state evolving
under Eq. (15a) for $N=100$. The $Q$-function is defined as
$Q(\theta,\varphi)=\left|\langle\theta,\varphi|\Psi(t)\rangle\right|^{2}$,
where
$|\theta,\varphi\rangle=e^{i\varphi\hat{J}_{z}}e^{i\theta\hat{J}_{y}}|J_{z}=N/2\rangle$
represents the spin coherent state along $\theta$ and $\varphi$ directions
corresponding to rotating the maximal $\hat{J}_{z}$ eigenstate around
azimuthal and polar angles $\\{\theta,\varphi\\}$ Arecchi _et al._ (1972).
(a): $\chi t=0$ and (b): $\chi t=0.05$, (c): $\chi t=0.1$, (d): $\chi t=0.3$
Figure 2: The Husimi $Q$-function $Q(\theta,\varphi)$, for the state evolving
under Eq. (15b) for $N=100$. The $Q$-function is defined as
$Q(\theta,\varphi)=\left|\langle\theta,\varphi|\Psi(t)\rangle\right|^{2}$,
where
$|\theta,\varphi\rangle=e^{i\varphi\hat{J}_{z}}e^{i\theta\hat{J}_{y}}|J_{z}=N/2\rangle$
represents the spin coherent state along $\theta$ and $\varphi$ directions
corresponding to rotating the maximal $\hat{J}_{z}$ eigenstate around
azimuthal and polar angles $\\{\theta,\varphi\\}$ Arecchi _et al._ (1972).
(a): $\chi t=0$ and (b): $\chi t=0.02$, (c): $\chi t=0.04$, (d): $\chi t=0.06$
Two quantities of significant interest for sensing applications are the spin-
squeezing parameter $\xi$, and the quantum Fisher information. Figure (4a)
shows the spin-squeezing parameter for both cases. We see that in the case of
TNT, $\xi$ decreases significantly faster than for OAT, and OAT reaches a
lower minimum. Specifically, it takes OAT more than twice as long to reach the
minimum $\xi$ achieved by the TNT dynamics. Alternatively, if the state preparation time
is constrained, TNT achieves a value of $\xi$ 2.3 times smaller for the same
state preparation time. Beyond this minimum, ‘ _oversqueezing_ ’ prevents any
further reduction in $\xi$ despite the presence of metrologically useful
entanglement Tóth (2012). When measurements are made that can resolve the full
probability distribution rather than just the mean and variance of the
collective spin, as is required for resolving the full metrological potential
of highly entangled quantum states Haine (2018b), a more relevant metric for
quantifying the sensitivity is the quantum Fisher information (QFI), which
relates to the sensitivity via the quantum Cramér-Rao bound
$\Delta\phi=1/\sqrt{F_{Q}}$. For an initial pure state entering a MZ
interferometer, the QFI is $F_{Q}=4\mathrm{Var}(\hat{J}_{y})$. However, if we
allow for an additional $\hat{J}_{x}$ rotation before the MZ interferometer,
the QFI can be increased further. Specifically, we use the definition
$F_{Q}=4\mathrm{Var}(\hat{J}_{\theta})$ (16)
where $\hat{J}_{\theta}=\cos\theta\hat{J}_{y}+\sin\theta\hat{J}_{z}$, and
$\theta$ is chosen to maximise the QFI. We note that a full probability-
resolving measurement is not required if an interaction-based readout is used
after the interferometer Nolan _et al._ (2017); Mirkhalaf _et al._ (2018);
Haine (2018b). Figure (4b) shows the QFI for both OAT and TNT for $N=10^{5}$
particles. We see that TNT reaches a higher value, and achieves this maximum
significantly more quickly. Specifically, TNT reaches the threshold
$F_{Q}\approx N^{2}/2$, which is the QFI of the highly non-classical twin-Fock
state, $\sim 40$ times faster than OAT. Alternatively, at the time when TNT
reaches this threshold, the QFI is $\sim 337$ times larger than for OAT.
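Both figures of merit can be extracted directly from the states produced in the sketch above. A possible implementation of the Wineland parameter of Eq. (7) and the $\theta$-optimised QFI of Eq. (16) is shown below; it continues from the previous snippet (reusing `np`, `jx`, `jy`, `jz`, `N`, and `variance`), and the angular grid resolution is an illustrative choice.

```python
# Continuing the single-mode sketch above: Wineland parameter, Eqs. (7)-(8),
# and the theta-optimised QFI of Eq. (16), both via a brute-force scan in theta.
from qutip import expect

thetas = np.linspace(0, np.pi, 181)

def wineland_xi(state):
    # xi = sqrt(N * min_theta Var(J_theta) / <J_x>^2)
    v_min = min(variance(np.cos(t) * jz + np.sin(t) * jy, state) for t in thetas)
    return np.sqrt(N * v_min / expect(jx, state) ** 2)

def qfi(state):
    # F_Q = 4 * max_theta Var(cos(theta) J_y + sin(theta) J_z)
    return 4 * max(variance(np.cos(t) * jy + np.sin(t) * jz, state)
                   for t in thetas)
```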
Figure 3: Variance of the three components of the pseudo-spin for (a) OAT and
(b) TNT dynamics. The black dotted line represents the threshold $N^{2}/8$,
which is the approximate value that OAT dynamics plateaus at for long times.
The number of atoms was $N=10^{5}$. Figure 4: (a) Evolution of the spin
squeezing parameter for OAT (blue solid line) and TNT dynamics (red dashed
line). (b) The QFI for OAT (blue solid line) and TNT (red dashed line). The
three horizontal black dotted lines represent $F_{Q}=N$ (the limit for
unentangled particles), $N^{2}/2$ (the long-time plateau for OAT dynamics), and
$N^{2}$ (the Heisenberg limit). The number of particles was $N=10^{5}$.
## IV Multi-Mode Dynamics
We now investigate the effects of multi-mode dynamics. Assuming a cigar shaped
trapping potential where the two transverse dimensions ($y$ & $z$) are much
tighter than the longitudinal dimension ($x$), the dynamics in the $x$
direction is well approximated by the Heisenberg equations of motion
$\displaystyle i\hbar\frac{d}{dt}\hat{\psi}_{a}$
$\displaystyle=\left(\hat{H}_{1D}+\tilde{U}_{aa}\hat{\psi}^{\dagger}_{a}\hat{\psi}_{a}+\tilde{U}_{ab}\hat{\psi}^{\dagger}_{b}\hat{\psi}_{b}\right)\hat{\psi}_{a}+\frac{\hbar\Omega}{2}\hat{\psi}_{b}$
(17a) $\displaystyle i\hbar\frac{d}{dt}\hat{\psi}_{b}$
$\displaystyle=\left(\hat{H}_{1D}+\tilde{U}_{ab}\hat{\psi}^{\dagger}_{a}\hat{\psi}_{a}+\tilde{U}_{bb}\hat{\psi}^{\dagger}_{b}\hat{\psi}_{b}\right)\hat{\psi}_{b}+\frac{\hbar\Omega}{2}\hat{\psi}_{a}$
(17b)
where
$\displaystyle\hat{H}_{1D}$
$\displaystyle=\frac{-\hbar^{2}}{2m}\frac{\partial^{2}}{\partial
x^{2}}+\frac{1}{2}m\omega_{x}^{2}x^{2}\,,$ (18)
and $\tilde{U}_{ij}=U_{ij}/A_{\perp}$ is the dimensionally reduced effective
one dimensional interaction strength obtained by dividing the three
dimensional interaction strength by a parameter characterising the transverse
area of the system. Throughout this work, we use $m=87$ amu and
$A_{\perp}=10^{-10}$ m$^{2}$.
We model the situation where initially all the atoms are in state
$|a\rangle$ in the motional ground state, and then coherently transfer half
the population to state $|b\rangle$. Unless $U_{aa}=U_{bb}=U_{ab}$, this new
state will not be the ground state, and motional dynamics will occur. As
$\chi\propto U_{aa}+U_{bb}-2U_{ab}\neq 0$, we cannot obtain entangling
dynamics without also exciting motional dynamics. We consider three distinct
cases that provide qualitatively different dynamics:
* •
Case I: $U_{aa}=U_{bb}>U_{ab}$. In this case, the two components will undergo
breathing oscillations, but will tend to breathe-together. Specifically, we
choose $a_{aa}=a_{bb}=100.0a_{0}$, $a_{ab}=97.0a_{0}$, where $a_{0}=5.29\times
10^{-11}$ m.
* •
Case II: $a_{bb}>a_{aa}>a_{ab}$. In this case, the components tend to
separate, as one component breathes inwards while the other breathes outwards,
such that the overlap of the two components is significantly decreased.
However, by adjusting the relative atom numbers, a _breathe-together_ solution
exists Sinatra, A. and Castin, Y. (2000). Specifically, we chose
$a_{aa}=95.0a_{0}$, $a_{bb}=100.0a_{0}$, $a_{ab}=90.0a_{0}$.
* •
Case III: $U_{aa}>U_{ab}>U_{bb}$. The two components will tend to separate,
and no breathe-together solution exists. Specifically, we choose
$a_{aa}=100.0a_{0}$, $a_{bb}=95.0a_{0}$, $a_{ab}=97.0a_{0}$. This represents
the scattering parameters of the two hyperfine ground-states of 87Rb.
To illustrate the three cases, we calculate the density distribution under the
mean-field approximation by solving the Gross-Pitaevskii equation Dalfovo _et
al._ (1999), obtained by making the substitution
$\hat{\psi}_{j}(x)\rightarrow\psi_{j}(x)$ in equations (17a,17b). Figures 5,
6, and 7 show the evolution of the density distribution for these three cases.
In particular, note that in case I the two components evolve together, while
in cases II and III the two components spatially separate. This separation
will hinder the ability to implement TNT dynamics, as the varying spatial
overlap will complicate the coherent coupling required for the $\hat{J}_{x}$
rotation. Additionally, the spin-squeezing parameter requires a high degree of
overlap between the modes.
Figure 5: Evolution of the density of each component for Case I.
$|\psi_{a}(x,t)|^{2}$ (blue solid line), $|\psi_{b}(x,t)|^{2}$ (red dashed
line), compared to the initial state
$|\psi_{a}(x,0)|^{2}=|\psi_{b}(x,0)|^{2}=|\psi_{0}(x)|^{2}$ (black dotted
line). Both components vary only slightly from the initial condition, but
remain identical to each other. Figure 6: Evolution of the density of each
component for Case II when a 50/50 beamsplitter is implemented.
$|\psi_{a}(x,t)|^{2}$ (blue solid line), $|\psi_{b}(x,t)|^{2}$ (red dashed
line), compared to the initial state
$|\psi_{a}(x,0)|^{2}=|\psi_{b}(x,0)|^{2}=|\psi_{0}(x)|^{2}$ (black dotted
line). Figure 7: Evolution of the density of each component for Case III.
$|\psi_{a}(x,t)|^{2}$ (blue solid line), $|\psi_{b}(x,t)|^{2}$ (red dashed
line), compared to the initial state
$|\psi_{a}(x,0)|^{2}=|\psi_{b}(x,0)|^{2}=|\psi_{0}(x)|^{2}$ (black dotted
line).
The Gross-Pitaevskii equation is incapable of capturing the evolution of the
quantum statistics, which is required to investigate spin-squeezing and
entanglement. To investigate this effect, we simulate the dynamics of the
system using the Truncated Wigner (TW) method, which has previously been used
to model the dynamics of quantum gases Steel _et al._ (1998); Sinatra _et
al._ (1995); Norrie _et al._ (2006); Drummond and Opanchuk (2017), and unlike
the GPE, can be used to model non-classical particle correlations Haine and
Ferris (2011); Ruostekoski and Martin (2013); Haine _et al._ (2014); Szigeti
_et al._ (2017); Haine (2018b); Szigeti _et al._ (2020). The derivation of
the TW method has been described in detail elsewhere Drummond and Hardman
(1993); Steel _et al._ (1998); Blakie _et al._ (2008). Briefly, the equation
of motion for the Wigner function of the system can be found from the von-
Neumann equation by using correspondences between differential operators on
the Wigner function and the original quantum operators Gardiner and Zoller
(2004). By truncating third- and higher-order derivatives (the TW
approximation), a Fokker–Planck equation (FPE) is obtained. The FPE is then
mapped to a set of stochastic partial differential equations for complex
fields $\psi_{j}(x,t)$, which loosely correspond to the original field
operators $\hat{\psi}_{j}(x,t)$, with initial conditions stochastically
sampled from the appropriate Wigner distribution Blakie _et al._ (2008);
Olsen and Bradley (2009). The complex fields obey the partial differential
equations
$\displaystyle i\hbar\frac{d}{dt}\psi_{a}$
$\displaystyle=\hat{H}_{1D}\psi_{a}+\frac{\hbar\Omega}{2}\psi_{b}$
$\displaystyle+\left(\tilde{U}_{aa}\left(|\psi_{a}|^{2}-\frac{1}{\Delta
x}\right)+\tilde{U}_{ab}\left(|\psi_{b}|^{2}-\frac{1}{\Delta
x}\right)\right)\psi_{a}$ (19a) $\displaystyle i\hbar\frac{d}{dt}\psi_{b}$
$\displaystyle=\hat{H}_{1D}\psi_{b}+\frac{\hbar\Omega}{2}\psi_{a}$
$\displaystyle+\left(\tilde{U}_{ab}\left(|\psi_{a}|^{2}-\frac{1}{\Delta
x}\right)+\tilde{U}_{bb}\left(|\psi_{b}|^{2}-\frac{1}{\Delta
x}\right)\right)\psi_{b}$ (19b)
where $\Delta x$ is the discretisation size of the spatial grid. By averaging
over many trajectories with stochastically sampled initial conditions,
expectation values of quantities corresponding to symmetrically ordered
operators in the full quantum theory can be obtained via the correspondence
$\langle\\{f(\hat{\psi}^{\dagger}_{j},\hat{\psi}_{j})\\}_{\mathrm{sym}}\rangle=\overline{f[\psi_{j}^{*},\psi_{j}]}$,
where ‘sym’ denotes symmetric ordering and the overline denotes the mean over
many stochastic trajectories. The initial conditions for the simulations are
chosen as $\psi_{a}(x,0)=\Psi_{0}(x)+\eta_{a}(x)$,
$\psi_{b}(x,0)=\eta_{b}(x)$, where $\Psi_{0}(x)$ is the ground state of the
single-component time-independent Gross-Pitaevski equation, and
$\eta_{j}(x)$ are complex Gaussian noises satisfying
$\overline{\eta^{*}_{i}(x_{n})\eta_{j}(x_{m})}=\frac{1}{2}\delta_{m,n}\delta_{i,j}/\Delta x$,
for spatial grid points $x_{m}$ and $x_{n}$. At $t=0$, the $\pi/2$ beam
splitting pulse which initiates the dynamics is implemented via the
transformation
$\psi_{a}(x)=\frac{1}{\sqrt{2}}\left(\psi_{a}(x,0)+\psi_{b}(x,0)\right)$,
$\psi_{b}(x)=\frac{1}{\sqrt{2}}\left(\psi_{b}(x,0)-\psi_{a}(x,0)\right)$.
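To make the procedure concrete, here is an illustrative, self-contained sketch of a single truncated Wigner trajectory for Eqs. (19a)-(19b) with $\Omega=0$ (i.e. OAT only). It is not the authors' code: the grid size, time step, trap frequency, and the Gaussian stand-in for the ground-state wavefunction are placeholder assumptions (in practice $\Psi_{0}(x)$ would come from an imaginary-time Gross-Pitaevskii solve), but the noise sampling, the $\pi/2$ beamsplitter, and the split-step evolution follow the equations above.

```python
# One stochastic truncated Wigner trajectory for Eqs. (19a)-(19b), with Omega = 0.
import numpy as np

hbar = 1.054571817e-34
m = 87 * 1.66053907e-27                     # 87 amu, as in the text
N_atoms, omega_x = 1e5, 2 * np.pi * 10.0    # atom number (text), trap freq. (placeholder)
M, L = 256, 200e-6                          # grid points, box length (placeholders)
x = np.linspace(-L / 2, L / 2, M, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)

a0, A_perp = 5.29e-11, 1e-10                # Bohr radius and transverse area (text)
U = lambda a: 4 * np.pi * hbar ** 2 * a / m / A_perp   # \tilde U_ij
Uaa, Ubb, Uab = U(100.0 * a0), U(100.0 * a0), U(97.0 * a0)  # Case I values

# Placeholder Gaussian in lieu of the GPE ground state Psi_0(x).
sigma = 5 * np.sqrt(hbar / (m * omega_x))
psi0 = np.sqrt(N_atoms) * (np.pi * sigma ** 2) ** -0.25 * np.exp(-x ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
def wigner_noise():
    # <|eta|^2> = 1/(2 dx): each quadrature has variance 1/(4 dx).
    return (rng.normal(size=M) + 1j * rng.normal(size=M)) * np.sqrt(0.25 / dx)

psi_a, psi_b = psi0 + wigner_noise(), wigner_noise()
# pi/2 beamsplitter initiating the dynamics:
psi_a, psi_b = (psi_a + psi_b) / np.sqrt(2), (psi_b - psi_a) / np.sqrt(2)

dt, steps = 1e-5, 1000                       # placeholder time step and duration
kinetic = np.exp(-1j * hbar * k ** 2 * dt / (2 * m))
V_trap = 0.5 * m * omega_x ** 2 * x ** 2
for _ in range(steps):
    na = np.abs(psi_a) ** 2 - 1.0 / dx       # Wigner-corrected densities, Eq. (19)
    nb = np.abs(psi_b) ** 2 - 1.0 / dx
    psi_a = psi_a * np.exp(-1j * (V_trap + Uaa * na + Uab * nb) * dt / hbar)
    psi_b = psi_b * np.exp(-1j * (V_trap + Uab * na + Ubb * nb) * dt / hbar)
    psi_a = np.fft.ifft(kinetic * np.fft.fft(psi_a))
    psi_b = np.fft.ifft(kinetic * np.fft.fft(psi_b))

# Symmetrically ordered moments follow by averaging over many such trajectories.
Jz_sample = 0.5 * np.sum(np.abs(psi_a) ** 2 - np.abs(psi_b) ** 2) * dx
print("single-trajectory J_z estimate:", Jz_sample)
```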
### IV.1 Case I
In order to implement TNT dynamics, the rotation parameter $\Omega$ must be
set to the optimum value $\Omega=\chi N/2$. While this is simple to do in the
single-mode model, in the multimode model the effective $\chi$ depends on the
time-dependent density distributions. Furthermore, it has previously been
shown that when strong multimode dynamics are present, estimates of $\chi$
based on Eq. (11) are poor Haine _et al._ (2014). We estimate the effective
$\chi$ by first setting $\Omega=0$ and calculating the variance of the three
pseudo-spin operators, and comparing to ideal single-mode OAT dynamics for the
same number of atoms. The single-mode dynamics resulting from Eq. (15a) is
also calculated via the TW method (Note 1: the quantum dynamics for the single-mode
model is obtained using the TW method by solving the ODEs
$i\dot{\alpha}=\frac{1}{2}\chi(|\alpha|^{2}-|\beta|^{2})\alpha+\Omega\beta$,
$i\dot{\beta}=-\frac{1}{2}\chi(|\alpha|^{2}-|\beta|^{2})\beta+\Omega\alpha$,
where $\alpha$ and $\beta$ are the stochastic variables corresponding to
$\hat{a}_{0}$ and $\hat{b}_{0}$ respectively). Figure (8) shows a comparison
between the multi-mode dynamics and single-mode dynamics, with the parameter
$\chi$ in the single-mode model adjusted to provide the best match to the
multimode dynamics. We use this as our estimate of $\chi$ when choosing a
value of $\Omega$ when implementing TNT dynamics in the multi-mode model.
Figure 8: Comparison between the multi-mode dynamics and ideal single-mode
behaviour for Case I, when only OAT dynamics is implemented. When we choose
$\chi=2.2\times 10^{-3}$ s$^{-1}$, we have excellent agreement in the variance of
the pseudospins. $V(J_{y})$ as calculated from the single-mode model is
indicated via the red dotted line. Figure 9: Comparison between multi-mode OAT
dynamics and ideal OAT single-mode behaviour for Case I, for the spin-
squeezing parameter (a) and QFI (b). Figure 10: Comparison between the multi-
mode dynamics and ideal single-mode behavior when TNT dynamics is implemented.
The value of $\Omega$ is chosen as $\Omega=\chi N/2$, where the value
$\chi=2.2\times 10^{-3}$ s$^{-1}$ was used. $V(J_{z})$ as calculated from the
single-mode model is indicated via the red dotted line.
Figure 11: Comparison between multi-mode TNT dynamics and ideal TNT single-
mode behaviour for Case I, for the spin-squeezing parameter (a) and QFI (b).
We see here that setting $\chi=2.2\times 10^{-3}$ s$^{-1}$ provides excellent
agreement between the single-mode model and the multimode model for the
pseudospin variances (fig. 8), spin-squeezing parameter, and QFI (fig. 9). Using
this value in our choice of $\Omega=N\chi/2$ should therefore result in TNT
dynamics. Figure 10 shows the multimode dynamics compared to the single mode
dynamics. We see that there is good agreement between the single-mode and
multi-mode models, which is unsurprising given that the two components evolve
identically, ensuring that the overlap between the two components remains
constant, and that the relative phase is the same at each point in space. This
ensures the coupling term results in the pure $J_{x}$ rotation required for
TNT dynamics. In this parameter regime, the spin squeezing parameter and QFI
also display excellent agreement with the ideal single-mode dynamics (figure
11). Importantly, we see that development of entanglement (characterised by
large quantum Fisher information) occurs much more rapidly than with OAT
dynamics.
### IV.2 Case II
When $U_{bb}\neq U_{aa}$, the two components will evolve differently,
affecting both the spatial overlap and the relative phase. As before, we
first simulate the system with $\Omega=0$ to investigate the agreement between
the single-mode and multi-mode systems for OAT dynamics. However, as can be
seen in figure (6), the two components begin to separate, which will inhibit
the performance of the TNT dynamics. In order to prevent this, we exploit the
_breathe-together_ solution Li _et al._ (2008). This is achieved by replacing
the initial 50/50 beam-splitter with an asymmetric beamsplitter such that each
component experiences the same interaction strength. Specifically, this is
achieved by choosing the beam-splitter angle such that the ratio of population
in each component, $N_{a}$ and $N_{b}$, satisfy
$\frac{N_{a}}{N_{b}}=\frac{U_{bb}-U_{ab}}{U_{aa}-U_{ab}}\,.$ (20)
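For the Case II scattering lengths quoted earlier ($a_{aa}=95.0a_{0}$, $a_{bb}=100.0a_{0}$, $a_{ab}=90.0a_{0}$), and using the fact that each $U_{ij}$ is proportional to $a_{ij}$, Eq. (20) can be evaluated directly; a trivial check is sketched below.

```python
# Splitting ratio from Eq. (20) for the Case II scattering lengths in the text
# (each U_ij is proportional to a_ij, so the ratio can be computed from the a's).
a_aa, a_bb, a_ab = 95.0, 100.0, 90.0          # in units of a_0
ratio = (a_bb - a_ab) / (a_aa - a_ab)
print("N_a / N_b =", ratio)   # 2.0: about two thirds of the atoms remain in |a>
```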
As can be seen from figure 12, this choice of initial condition results in the
motional dynamics being frozen out. The dynamics of the quantum statistics,
however, is shown in figure 13, as well as the dynamics for the equivalent
single-mode system. The different initial conditions result in reduced final
spin variances for both the multi-mode and ideal single-mode dynamics. By
comparing both cases, we can infer that the effective interaction parameter
is $\chi=5.25\times 10^{-3}$ Hz. Again, we use this value to choose the
appropriate value of $\Omega$ for implementation of TNT dynamics. However, as
the breathe-together solution is asymmetric in population, the optimum value
of $\Omega$ will be slightly different. We account for this by trialling a
range of values, as shown in figure 15. We find that the most effective
entangling dynamics (that is, the dynamics that leads to the largest spin-
variance) is when $\Omega=0.85\chi N/2$.
Figure 12: Evolution of the density of each component for Case II when an
asymmetric beamsplitter is implemented, in order to satisfy the breathe-
together solution. $|\psi_{a}(x,t)|^{2}$ (blue solid line),
$|\psi_{b}(x,t)|^{2}$ (red dashed line), compared to the initial conditions
$|\psi_{a}(x,0)|^{2}$ and $|\psi_{b}(x,0)|^{2}$ (black dotted lines). Both
components deviate only slightly from their initial conditions. Figure 13:
Comparison between multi-mode OAT dynamics and ideal OAT single-mode behaviour
for Case II with the breathe-together initial condition. For the single-mode
simulation, $\chi=5.25\times 10^{-3}$ Hz was found to have the best agreement
with the multimode results. $V(J_{y})$ as calculated from the single-mode
model is indicated via the red dotted line. Figure 14: Comparison between
multi-mode OAT dynamics and ideal OAT single-mode behaviour for Case II with
the breathe-together solution, for the spin-squeezing parameter (a) and QFI
(b). Figure 15: Comparison between single-mode TNT dynamics (a) and multimode
TNT dynamics ((b) and (c)). The initial condition was chosen to satisfy the
breathe-together solution. In (a) and (b), the optimal rotation rate
($\Omega=0.85\chi N/2$) was used, which gives slightly better performance than
the usual TNT solution ($\Omega=\chi N/2$), shown in (c).
In figure 16 we see good agreement between the ideal single-mode and multi-
mode TNT dynamics for the spin-squeezing parameter and QFI. Importantly, the
entangling dynamics occurs $\sim 40$ times faster than for OAT alone.
Figure 16: Comparison between multi-mode TNT dynamics and ideal TNT single-
mode behaviour for Case II with the breathe-together solution, for the spin-
squeezing parameter (a) and QFI (b). $\Omega=0.85\chi N/2$ was used for both
cases.
### IV.3 Case III
When $U_{aa}>U_{ab}>U_{bb}$, there is no breathe-together solution. As before,
we first simulate the system with $\Omega=0$ to investigate the behaviour in a
multimode system for OAT dynamics. However, as can be seen in figure (7), the
two components begin to separate, which will inhibit the performance of the
TNT dynamics. Additionally, as can be seen in figure 17, the asymmetry in
scattering lengths results in an additional rotation of the collective spin
around the $J_{z}$ axis, which will further inhibit TNT dynamics, as we
require a coherent rotation around the collective spin direction. We can
correct for this term by adding an additional rotation of angular frequency
$\omega_{r}$, either by adjusting the detuning between the two levels, or by
dynamically rotating the relevant rotation axis for our TNT dynamics. Figure
18 shows the spin dynamics of the system under OAT dynamics with this
additional correction. While not mimicking the single mode OAT dynamics
perfectly, it displays qualitatively similar behaviour.
Figure 17: Multimode OAT dynamics behaviour for Case III when a 50/50
beamsplitter is implemented. (a) The expectation values of the spin operators,
and (b) the variances of the spin operators. Figure 18: Multimode OAT dynamics
behaviour for Case III when a rotation about the $\hat{J}_{z}$ axis is added
to compensate for the spin-precession. (a) the expectation values of the spin
operators, and (b) the variances of the spin operators.
We have modelled TNT dynamics with this additional rotation term (figure 19).
We see that, while the timescale of the entangling dynamics is significantly
faster than OAT alone, it does not behave as well as ideal TNT dynamics. The
reason for this is that while the additional rotation partially corrects for
the drifting phase difference between the two components, the multimode
dynamics ensures that there is a dynamic and spatially-varying phase
difference, so this cancellation is imperfect. This has an even more
pronounced effect on the spin-squeezing and QFI (figure 21); there is little
improvement in either the rate or depth of spin-squeezing achievable by
implementing TNT dynamics.
Figure 19: Multimode TNT dynamics behaviour for Case III, for the spin
variances (a) and the mean values of the three spin operators (b). Figure 20:
Multimode OAT dynamics behaviour for Case III, for the spin squeeze parameter
(a) and QFI (b). Figure 21: Multimode TNT dynamics behaviour for Case III, for
the spin squeeze parameter (a) and QFI (b).
## V Discussion
Our modelling indicates that in case I and case II systems, TNT can be used to
significantly speed up spin-squeezing and entangling dynamics even in regimes
where significant multi-mode dynamics is present. Importantly, in case II
systems, the breathe-together solution can be used to ensure strong
mode-overlap between the two components. In both of these cases, the spin-
squeezing dynamics of the full system is well approximated by an effective
two-mode model.
However, in case III systems, the implementation of TNT dynamics provides
little-to-no benefit over conventional OAT dynamics. We attribute this to a
spatially varying phase-profile of the two components, which results in a
variation in the rotation axis of the effective TNT rotation.
Our results indicate that in some regimes, spin-squeezing and entanglement
generation via atomic self-interaction is achievable in BECs with large
numbers of atoms, even in the presence of multi-mode dynamics, and that TNT
dynamics can be used to decrease the state-preparation time required in order
to achieve this squeezing. Alternatively, faster entangling dynamics can
result in better spin-squeezing in cases where the state-preparation time is
limited Haine and Hope (2020); Szigeti _et al._ (2020, 2021).
## VI ACKNOWLEDGMENTS
This research was undertaken with the assistance of resources and services
from the National Computational Infrastructure (NCI), which is supported by
the Australian Government. The Australian National University is situated on
land traditionally owned by the Ngunnawal people.
## References
* Pezzè _et al._ (2018) Luca Pezzè, Augusto Smerzi, Markus K. Oberthaler, Roman Schmied, and Philipp Treutlein, “Quantum metrology with nonclassical states of atomic ensembles,” Rev. Mod. Phys. 90, 035005 (2018).
* Szigeti _et al._ (2021) Stuart S. Szigeti, Onur Hosten, and Simon A. Haine, “Improving cold-atom sensors with quantum entanglement: Prospects and challenges,” Applied Physics Letters 118, 140501 (2021), https://doi.org/10.1063/5.0050235 .
* Giovannetti _et al._ (2006) Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone, “Quantum metrology,” Phys. Rev. Lett. 96, 010401 (2006).
* Pezzé and Smerzi (2009) Luca Pezzé and Augusto Smerzi, “Entanglement, nonlinear dynamics, and the Heisenberg limit,” Phys. Rev. Lett. 102, 100401 (2009).
* Kitagawa and Ueda (1993) Masahiro Kitagawa and Masahito Ueda, “Squeezed spin states,” Phys. Rev. A 47, 5138–5143 (1993).
* Mølmer and Sørensen (1999) Klaus Mølmer and Anders Sørensen, “Multiparticle entanglement of hot trapped ions,” Phys. Rev. Lett. 82, 1835–1838 (1999).
* Esteve _et al._ (2008) J. Esteve, C. Gross, A. Weller, S. Giovanazzi, and M. K. Oberthaler, “Squeezing and entanglement in a Bose-Einstein condensate,” Nature 455, 1216 (2008).
* Leroux _et al._ (2010) Ian D. Leroux, Monika H. Schleier-Smith, and Vladan Vuletić, “Implementation of cavity squeezing of a collective atomic spin,” Phys. Rev. Lett. 104, 073602 (2010).
* Gross _et al._ (2010) C. Gross, T. Zibold, E. Nicklas, J. Esteve, and M. K. Oberthaler, “Nonlinear atom interferometer surpasses classical precision limit,” Nature 464, 1165–1169 (2010).
* Riedel _et al._ (2010) Max F. Riedel, Pascal Böhi, Yun Li, Theodor W. Hänsch, Alice Sinatra, and Philipp Treutlein, “Atom-chip-based generation of entanglement for quantum metrology,” Nature 464, 1170–1173 (2010).
* Muessel _et al._ (2014) W. Muessel, H. Strobel, D. Linnemann, D. B. Hume, and M. K. Oberthaler, “Scalable spin squeezing for quantum-enhanced magnetometry with bose-einstein condensates,” Phys. Rev. Lett. 113, 103004 (2014).
* Li _et al._ (2008) Yun Li, Y. Castin, and A. Sinatra, “Optimum spin squeezing in bose-einstein condensates with particle losses,” Phys. Rev. Lett. 100, 210401 (2008).
* Li _et al._ (2009) Yun Li, P. Treutlein, J. Reichel, and A. Sinatra, “Spin squeezing in a bimodal condensate: spatial dynamics and particle losses,” The European Physical Journal B 68, 365–381 (2009).
* Haine (2018a) Simon A Haine, “Quantum noise in bright soliton matterwave interferometry,” New Journal of Physics 20, 033009 (2018a).
* Haine and Johnsson (2009) Simon A. Haine and Mattias T. Johnsson, “Dynamic scheme for generating number squeezing in Bose-Einstein condensates through nonlinear interactions,” Phys. Rev. A 80, 023611 (2009).
* Haine _et al._ (2014) S. A. Haine, J. Lau, R. P. Anderson, and M. T. Johnsson, “Self-induced spatial dynamics to enhance spin squeezing via one-axis twisting in a two-component Bose-Einstein condensate,” Phys. Rev. A 90, 023613 (2014).
* Szigeti _et al._ (2020) Stuart S. Szigeti, Samuel P. Nolan, John D. Close, and Simon A. Haine, “High-precision quantum-enhanced gravimetry with a bose-einstein condensate,” Phys. Rev. Lett. 125, 100402 (2020).
* Micheli _et al._ (2003) A. Micheli, D. Jaksch, J. I. Cirac, and P. Zoller, “Many-particle entanglement in two-component bose-einstein condensates,” Phys. Rev. A 67, 013607 (2003).
* Sorelli _et al._ (2019) Giacomo Sorelli, Manuel Gessner, Augusto Smerzi, and Luca Pezzè, “Fast and optimal generation of entanglement in bosonic josephson junctions,” Phys. Rev. A 99, 022329 (2019).
* Mirkhalaf _et al._ (2018) Safoura S. Mirkhalaf, Samuel P. Nolan, and Simon A. Haine, “Robustifying twist-and-turn entanglement with interaction-based readout,” Phys. Rev. A 97, 053618 (2018).
* Strobel _et al._ (2014) Helmut Strobel, Wolfgang Muessel, Daniel Linnemann, Tilman Zibold, David B. Hume, Luca Pezzè, Augusto Smerzi, and Markus K. Oberthaler, “Fisher information and entanglement of non-gaussian spin states,” Science 345, 424–427 (2014).
* Muessel _et al._ (2015) W. Muessel, H. Strobel, D. Linnemann, T. Zibold, B. Julia-Diaz, and M. K. Oberthaler, “Twist-and-turn spin squeezing in Bose-Einstein condensates,” arXiv:1507.02930 (2015).
* Lipkin _et al._ (1965) H.J. Lipkin, N. Meshkov, and A.J. Glick, “Validity of many-body approximation methods for a solvable model: (i). exact solutions and perturbation theory,” Nuclear Physics 62, 188 – 198 (1965).
* Demkowicz-Dobrzanski _et al._ (2015) R. Demkowicz-Dobrzanski, M. Jarzyna, and J. Kolodynski, “Quantum limits in optical interferometry,” Progress in Optics 60, 345 (2015).
* Braunstein and Caves (1994) Samuel L. Braunstein and Carlton M. Caves, “Statistical distance and the geometry of quantum states,” Phys. Rev. Lett. 72, 3439–3443 (1994).
* Paris (2009) Matteo G. A. Paris, “Quantum estimation for quantum technology,” International Journal of Quantum Information 07, 125–137 (2009).
* Tóth and Apellaniz (2014) Géza Tóth and Iagoba Apellaniz, “Quantum metrology from a quantum information science perspective,” Journal of Physics A: Mathematical and Theoretical 47, 424006 (2014).
* Davis _et al._ (2016) E. Davis, G. Bentsen, and M. Schleier-Smith, “Approaching the Heisenberg limit without single-particle detection,” Phys. Rev. Lett. 116, 053601 (2016).
* Hosten _et al._ (2016) O. Hosten, R. Krishnakumar, N. J. Engelsen, and M. A. Kasevich, “Quantum phase magnification,” Science 352, 1552–1555 (2016).
* Fröwis _et al._ (2016) Florian Fröwis, Pavel Sekatski, and Wolfgang Dür, “Detecting large quantum Fisher information with finite measurement precision,” Phys. Rev. Lett. 116, 090801 (2016).
* Macrì _et al._ (2016) Tommaso Macrì, Augusto Smerzi, and Luca Pezzè, “Loschmidt echo for quantum metrology,” Phys. Rev. A 94, 010102 (2016).
* Linnemann _et al._ (2016) D. Linnemann, H. Strobel, W. Muessel, J. Schulz, R. J. Lewis-Swan, K. V. Kheruntsyan, and M. K. Oberthaler, “Quantum-enhanced sensing based on time reversal of nonlinear dynamics,” Phys. Rev. Lett. 117, 013001 (2016).
* Nolan _et al._ (2017) Samuel P. Nolan, Stuart S. Szigeti, and Simon A. Haine, “Optimal and robust quantum metrology using interaction-based readouts,” Phys. Rev. Lett. 119, 193601 (2017).
* Szigeti _et al._ (2017) Stuart S. Szigeti, Robert J. Lewis-Swan, and Simon A. Haine, “Pumped-up SU(1,1) interferometry,” Phys. Rev. Lett. 118, 150401 (2017).
* Anders _et al._ (2018) Fabian Anders, Luca Pezzè, Augusto Smerzi, and Carsten Klempt, “Phase magnification by two-axis countertwisting for detection-noise robust interferometry,” Phys. Rev. A 97, 043813 (2018).
* Haine (2018b) Simon A. Haine, “Using interaction-based readouts to approach the ultimate limit of detection-noise robustness for quantum-enhanced metrology in collective spin systems,” Phys. Rev. A 98, 030303(R) (2018b).
* Hayes _et al._ (2018) Anthony J Hayes, Shane Dooley, William J Munro, Kae Nemoto, and Jacob Dunningham, “Making the most of time in quantum metrology: concurrent state preparation and sensing,” Quantum Science and Technology 3, 035007 (2018).
* Haine (2021) Simon A Haine, “Searching for signatures of quantum gravity in quantum gases,” New Journal of Physics 23, 033020 (2021).
* Opanchuk _et al._ (2012) B. Opanchuk, M. Egorov, S. Hoffmann, A. I. Sidorov, and P. D. Drummond, “Quantum noise in three-dimensional BEC interferometry,” EPL (Europhysics Letters) 97, 50003 (2012).
* Nolan and Haine (2018) Samuel P. Nolan and Simon A. Haine, “Generating macroscopic superpositions with interacting bose-einstein condensates: Multimode speedups and speed limits,” Phys. Rev. A 98, 063606 (2018).
* Chin _et al._ (2010) Cheng Chin, Rudolf Grimm, Paul Julienne, and Eite Tiesinga, “Feshbach resonances in ultracold gases,” Rev. Mod. Phys. 82, 1225–1286 (2010).
* Wineland _et al._ (1992) D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, “Spin squeezing and reduced quantum noise in spectroscopy,” Phys. Rev. A 46, R6797–R6800 (1992).
* Arecchi _et al._ (1972) F. T. Arecchi, Eric Courtens, Robert Gilmore, and Harry Thomas, “Atomic coherent states in quantum optics,” Phys. Rev. A 6, 2211–2237 (1972).
* Agarwal (1998) G. S. Agarwal, “State reconstruction for a collection of two-level systems,” Phys. Rev. A 57, 671–673 (1998).
* Tóth (2012) Géza Tóth, “Multipartite entanglement and high-precision metrology,” Phys. Rev. A 85, 022322 (2012).
* Sinatra, A. and Castin, Y. (2000) Sinatra, A. and Castin, Y., “Binary mixtures of bose-einstein condensates: Phase dynamics and spatial dynamics,” Eur. Phys. J. D 8, 319–332 (2000).
* Dalfovo _et al._ (1999) Franco Dalfovo, Stefano Giorgini, Lev P. Pitaevskii, and Sandro Stringari, “Theory of Bose-Einstein condensation in trapped gases,” Rev. Mod. Phys. 71, 463–512 (1999).
* Steel _et al._ (1998) M. J. Steel, M. K. Olsen, L. I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham, “Dynamical quantum noise in trapped Bose-Einstein condensates,” Phys. Rev. A 58, 4824–4835 (1998).
* Sinatra _et al._ (1995) A Sinatra, F Castelli, L A Lugiato, P Grangier, and J P Poizat, “Effective two-level model versus three-level model,” Quantum and Semiclassical Optics: Journal of the European Optical Society Part B 7, 405 (1995).
* Norrie _et al._ (2006) A. A. Norrie, R. J. Ballagh, and C. W. Gardiner, “Quantum turbulence and correlations in bose-einstein condensate collisions,” Phys. Rev. A 73, 043617 (2006).
* Drummond and Opanchuk (2017) Peter D. Drummond and Bogdan Opanchuk, “Truncated wigner dynamics and conservation laws,” Phys. Rev. A 96, 043616 (2017).
* Haine and Ferris (2011) S. A. Haine and A. J. Ferris, “Surpassing the standard quantum limit in an atom interferometer with four-mode entanglement produced from four-wave mixing,” Phys. Rev. A 84, 043624 (2011).
* Ruostekoski and Martin (2013) J. Ruostekoski and A. D. Martin, “The truncated Wigner method for Bose gases,” in _Quantum Gases_ (Imperial College Press, 2013) pp. 203–214.
* Drummond and Hardman (1993) P. D. Drummond and A. D. Hardman, “Simulation of quantum effects in raman-active waveguides,” EPL (Europhysics Letters) 21, 279 (1993).
* Blakie _et al._ (2008) P. B. Blakie, A. S. Bradley, M. J. Davis, R. J. Ballagh, and C. W. Gardiner, “Dynamics and statistical mechanics of ultra-cold Bose gases using c-field techniques,” Advances in Physics 57, 363–455 (2008).
* Gardiner and Zoller (2004) C. W. Gardiner and P. Zoller, _Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics_ , 3rd ed. (Springer, Berlin and Heidelberg, 2004).
* Olsen and Bradley (2009) M.K. Olsen and A.S. Bradley, “Numerical representation of quantum states in the positive-P and wigner representations,” Optics Communications 282, 3924 – 3929 (2009).
* Note (1) The quantum dynamics for the single-mode model is obtained using the TW method by solving the ODEs $i\dot{\alpha}=\frac{1}{2}\chi(|\alpha|^{2}-|\beta|^{2})\alpha+\Omega\beta$, $i\dot{\beta}=-\frac{1}{2}\chi(|\alpha|^{2}-|\beta|^{2})\beta+\Omega\alpha$, where $\alpha$ and $\beta$ are the stochastic variables corresponding to $\hat{a}_{0}$ and $\hat{b}_{0}$ respectively.
* Haine and Hope (2020) Simon A. Haine and Joseph J. Hope, “Machine-designed sensor to make optimal use of entanglement-generating dynamics for quantum sensing,” Phys. Rev. Lett. 124, 060402 (2020).
|
# Resolution to Sutner’s Conjecture
William Boyles Department of Mathematics, North Carolina State University,
Raleigh, NC 27695<EMAIL_ADDRESS>
## 1 Introduction
Consider a game played on a simple graph $G=(V,E)$ where each vertex consists
of a clickable light. Clicking any vertex $v$ toggles the on/off state of $v$
and its neighbors. One wins the game by finding a sequence of clicks that
turns off all the lights. When $G$ is a $5\times 5$ grid, this game was
commercially available from Tiger Electronics as Lights Out.
Sutner was one of the first to study these games mathematically. He showed
that for any $G$ the initial configuration of all lights on is solvable [3].
He also found that, letting $d(G)=\text{dim}\left(\ker{(A+I)}\right)$ over the
field $\mathbb{Z}_{2}$, where $A$ is the adjacency matrix of $G$, all
initial configurations are solvable when $d(G)=0$. In general, 1 out of every $2^{d(G)}$
initial configurations is solvable, while each solvable configuration has
$2^{d(G)}$ distinct solutions [3]. When investigating $n\times n$ grid graphs,
Sutner conjectured the following relationship:
$\displaystyle d_{2n+1}$ $\displaystyle=2d_{n}+\delta_{n}\text{,
}\delta_{n}\in\\{0,2\\}$ $\displaystyle\delta_{2n+1}$
$\displaystyle=\delta_{n},$
where $d_{n}=d(G)$ for $G$ an $n\times n$ grid graph [3].
We resolve this conjecture in the affirmative. We use results from Sutner that
give the nullity of an $n\times n$ board as the degree of the GCD of two
polynomials in the ring $\mathbb{Z}_{2}[x]$ [4]. We then apply identities from Hunziker,
Machiavelo, and Park that relate the polynomials of $(2n+1)\times(2n+1)$ grids
and $n\times n$ grids [2]. We then apply a result from Ore about the GCD of
two products [6]. Together, these results allow us to prove Sutner’s
conjecture. We then go further and show exactly for which values of $n$
$\delta_{n}$ is 0 or 2.
## 2 Fibonacci Polynomials
###### Definition 1.
Let $f_{n}(x)$ be the polynomial in the ring $\mathbb{Z}_{2}[x]$ defined
recursively by
$f_{n}(x)=\begin{cases}0&n=0\\\ 1&n=1\\\
xf_{n-1}(x)+f_{n-2}(x)&\text{otherwise }.\end{cases}$
These polynomials are often referred to as Fibonacci polynomials because when
defined over the reals, evaluating $f_{n}(x)$ at $x=1$ gives the $n$th
Fibonacci number. Since coefficients of $f_{n}(x)$ are either 1 or 0, one can
visualize them by coloring squares black or white to represent the
coefficients. For example,
$f_{6}(x)=1x^{5}+0x^{4}+0x^{3}+0x^{2}+1x+0=\blacksquare\square\square\square\blacksquare\square\,.$
Plotting $f_{1}(x)$ through $f_{128}(x)$ in the same way, aligning terms of the
same degree, we see a Sierpinski triangle rotated 90 degrees.
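A quick way to see this pattern is to tabulate the coefficients directly from the recursion in Definition 1; the short Python sketch below prints the resulting bitmap (coefficients are listed lowest degree first, which simply mirrors the picture left-to-right).

```python
# Print the GF(2) coefficient bitmap of f_1(x), ..., f_64(x) using the
# recursion f_n = x*f_{n-1} + f_{n-2} of Definition 1; the Sierpinski
# pattern described in the text appears.
rows = []
f_prev, f_cur = [0], [1]          # coefficient lists for f_0 and f_1
for n in range(1, 65):
    rows.append(f_cur[:])
    nxt = [0] + f_cur             # multiply f_n by x
    for i, c in enumerate(f_prev):
        nxt[i] ^= c               # add f_{n-1} modulo 2
    f_prev, f_cur = f_cur, nxt
width = max(len(r) for r in rows)
for r in rows:
    print("".join("#" if c else "." for c in r).ljust(width, "."))
```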
Sutner, using a well-known connection between the Sierpinski triangle and
the parity of binomial coefficients, notes [4]
$f_{n}(x)=\sum_{i}{\binom{n+i}{2i+1}x^{i}\mod 2}.$
Sutner showed how to calculate $d_{n}$ as the degree of the GCD of two
polynomials in $\mathbb{Z}_{2}[x]$ [4].
###### Theorem 2.1 (Sutner).
For all $n\in\mathbb{N}$,
$d_{n}=\deg{\gcd\left(f_{n+1}(x),f_{n+1}(x+1)\right)}.$
The recursive definition given in Definition 1 and Sutner’s formula in terms of
the parity of binomial coefficients provide brute force ways to calculate
$f_{n}(x)$. Hunziker, Machiavelo, and Park show the following identity that
makes calculating $f_{n}(x)$ easier when $n$ is divisible by powers of 2 [2].
###### Theorem 2.2 (Hunziker, Machiavelo, and Park).
Let $n=b\cdot 2^{k}$ where $b$ and $k$ are non-negative integers. Then
$f_{n}(x)=x^{2^{k}-1}f_{b}^{2^{k}}(x).$
In particular, we will use this result to relate $f_{2n+2}(x)$ and
$f_{4n+4}(x)$ to $f_{n+1}(x)$.
###### Corollary 2.2.1.
The following identities hold:
$\displaystyle f_{2n+2}(x)$ $\displaystyle=xf_{n+1}^{2}(x)$ $\displaystyle
f_{4n+4}(x)$ $\displaystyle=x^{3}f_{n+1}^{4}(x).$
###### Proof.
Notice that $2n+2=(n+1)2^{1}$ and $4n+4=(n+1)2^{2}$. Thus, our desired
identities follow from Theorem 2.2. ∎
Now that we have a way to express $f_{2n+2}(x)$ and $f_{4n+4}(x)$ as a product
of $f_{n+1}(x)$ and a power of $x$, we need a way to express the GCD of
products so we can relate $d_{2n+1}$ and $d_{n}$. This is exactly what a
number-theoretic result from Ore provides [6].
###### Theorem 2.3 (Ore).
Let $a$, $b$, $c$, and $d$ be integers. Let $(a,b)$ denote $\gcd{(a,b)}$. Then
$(ab,cd)=(a,c)(b,d)\left(\frac{a}{(a,c)},\frac{d}{(b,d)}\right)\left(\frac{c}{(a,c)},\frac{b}{(b,d)}\right).$
Although Ore’s result deals specifically with integers, both the integers and
$\mathbb{Z}_{2}[x]$ are Euclidean domains, so the result will still hold.
Hunziker, Machiavelo, and Park also showed the following identity [2].
###### Theorem 2.4 (Hunziker, Machiavelo, and Park).
A polynomial $\tau(x)$ in $\mathbb{Z}_{2}[x]$ divides both $f_{n}(x)$ and
$f_{m}(x)$ if and only if it divides $f_{\gcd{(m,n)}}$. In particular,
$\gcd{\left(f_{m}(x),f_{n}(x)\right)}=f_{\gcd{(m,n)}}(x).$
We specifically will use the following corollary:
###### Corollary 2.4.1.
For some polynomial $\tau(x)$ in $\mathbb{Z}_{2}[x]$, let $n\geq 1$ be the
smallest positive integer such that $\tau(x)$ divides $f_{n}(x)$. Then for all $m\geq
0$, $\tau(x)$ divides $f_{m}(x)$ if and only if $n$ divides $m$.
###### Proof.
Let $\tau(x)$ be some polynomial in $\mathbb{Z}_{2}[x]$. Let $f_{n}(x)$ be the
smallest Fibonacci polynomial such that $\tau(x)$ divides $f_{n}(x)$.
Assume that $\tau(x)$ divides $f_{m}(x)$ for some number $m$. Then $\tau(x)$
is a common factor of $f_{m}(x)$ and $f_{n}(x)$, so Theorem 2.4 tells us that
$\tau(x)$ divides $f_{\gcd{(m,n)}}(x)$. Since $f_{n}(x)$ is the smallest
Fibonacci polynomial that is divisible by $\tau(x)$,
$\gcd{(m,n)}\geq n.$
This inequality only holds if $\gcd{(m,n)}=n$. Thus, $m$ must be a multiple of
$n$ as desired.
Now assume that $m$ is a multiple of $n$. Then $\gcd{(m,n)}=n$. Theorem 2.4
tells us
$\gcd{\left(f_{m}(x),f_{n}(x)\right)}=f_{\gcd{(m,n)}}(x)=f_{n}(x).$
Since $\tau(x)$ divides $f_{n}(x)$, and $f_{n}(x)$ is the GCD of $f_{m}(x)$
and $f_{n}(x)$, $\tau(x)$ must also divide $f_{m}(x)$, as desired. ∎
In particular, we will use the following instances of Corollary 2.4.1 to
determine when $\delta_{n}$ is 0 or 2.
###### Corollary 2.4.2.
The following are true:
1. (i)
$x$ divides $f_{n}(x)$ if and only if $n\equiv 0\mod 2$.
2. (ii)
$x+1$ divides $f_{n}(x+1)$ if and only if $n\equiv 0\mod 2$.
3. (iii)
$x+1$ divides $f_{n}(x)$ if and only if $n\equiv 0\mod 3$.
4. (iv)
$x$ divides $f_{n}(x+1)$ if and only if $n\equiv 0\mod 3$.
###### Proof.
Notice,
1. (i)
We find that $f_{2}(x)=x$ is the smallest Fibonacci polynomial divisible by
$x$, so we apply Corollary 2.4.1 to get the desired result.
2. (ii)
Follows from (i) by substituting $x+1$ for $x$.
3. (iii)
We find that $f_{3}(x)=(x+1)^{2}$ is the smallest Fibonacci polynomial
divisible by $x+1$, so we apply Corollary 2.4.1 to get the desired result.
4. (iv)
Follows from (iii) by substituting $x+1$ for $x$.
∎
## 3 Proof of Sutner’s Conjecture
Finally, we are ready to prove Sutner’s conjecture [3].
###### Theorem 3.1.
For all $n\in\mathbb{N}$,
$d_{2n+1}=2d_{n}+\delta_{n},$
where $\delta_{n}\in\\{0,2\\}$, and $\delta_{2n+1}=\delta_{n}$.
###### Proof.
Let $(a,b)$ denote $\gcd{(a,b)}$. Applying the results from Theorems 2.1, 2.2,
and 2.3,
$\displaystyle d_{2n+1}$
$\displaystyle=\deg\left(f_{2n+2}(x),f_{2n+2}(x+1)\right)$
$\displaystyle=\deg\left(xf^{2}_{n+1}(x),(x+1)f^{2}_{n+1}(x+1)\right)$
$\displaystyle=\deg(x,x+1)\left(f^{2}_{n+1}(x),f^{2}_{n+1}(x+1)\right)\left(\frac{x+1}{(x,x+1)},\frac{f^{2}_{n+1}(x)}{(f^{2}_{n+1}(x),f^{2}_{n+1}(x+1))}\right)\left(\frac{x}{(x,x+1)},\frac{f^{2}_{n+1}(x+1)}{(f^{2}_{n+1}(x),f^{2}_{n+1}(x+1))}\right)$
$\displaystyle=\deg\left(f_{n+1}(x),f_{n+1}(x+1)\right)^{2}\left(x+1,\frac{f^{2}_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))^{2}}\right)\left(x,\frac{f^{2}_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))^{2}}\right)$
$\displaystyle=\deg\left(f_{n+1}(x),f_{n+1}(x+1)\right)^{2}\left(x+1,\frac{f_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))}\right)\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right)$
$\displaystyle=2d_{n}+\deg\left(x+1,\frac{f_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))}\right)\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right).$
Notice that if we substitute $x+1$ for $x$,
$\left(x+1,\frac{f_{n+1}(x)}{(f_{n+1}(x+1),f_{n+1}(x))}\right)\text{ becomes
}\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right).$
Thus, we see that these two remaining GCD terms in our expression for
$d_{2n+1}$ are either both 1 or not 1 simultaneously. This means we can
further simplify to
$d_{2n+1}=2d_{n}+2\deg\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right).$
So, we see that
$d_{2n+1}=2d_{n}+\delta_{n}\text{, where
}\delta_{n}=2\deg\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right).$
Thus, $\delta_{n}\in\\{0,2\\}$ as desired.
What remains is to show that $\delta_{n}=\delta_{2n+1}$. Applying Corollary
2.2.1,
$\displaystyle d_{4n+3}$
$\displaystyle=\deg\left(x^{3}f^{4}_{n+1}(x),(x+1)^{3}f^{4}_{n+1}(x+1)\right)$
$\displaystyle=\deg\left(x^{3},(x+1)^{3}\right)\left(f^{4}_{n+1}(x),f^{4}_{n+1}(x+1)\right)\left(x^{3},\frac{f^{4}_{n+1}(x+1)}{(f^{4}_{n+1}(x),f^{4}_{n+1}(x+1))}\right)\left((x+1)^{3},\frac{f^{4}_{n+1}(x)}{(f^{4}_{n+1}(x),f^{4}_{n+1}(x+1))}\right)$
$\displaystyle=\deg\left(f_{n+1}(x),f_{n+1}(x+1)\right)^{4}\left(x^{3},\frac{f^{4}_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))^{4}}\right)\left((x+1)^{3},\frac{f^{4}_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))^{4}}\right)$
$\displaystyle=\deg\left(f_{n+1}(x),f_{n+1}(x+1)\right)^{4}\left(x^{3},\frac{f^{3}_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))^{3}}\right)\left((x+1)^{3},\frac{f^{3}_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))^{3}}\right)$
$\displaystyle=\deg\left(f_{n+1}(x),f_{n+1}(x+1)\right)^{4}\left(x,\frac{f_{n+1}(x+1)}{(f_{n+1}(x),f_{n+1}(x+1))}\right)^{3}\left(x+1,\frac{f_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))}\right)^{3}$
$\displaystyle=4d_{n}+3\delta_{n}.$
Also, from our work previously in this proof,
$\displaystyle d_{4n+3}$ $\displaystyle=d_{2(2n+1)+1}$
$\displaystyle=2d_{2n+1}+\delta_{2n+1}$
$\displaystyle=2\left(2d_{n}+\delta_{n}\right)+\delta_{2n+1}$
$\displaystyle=4d_{n}+2\delta_{n}+\delta_{2n+1}.$
For these two expressions for $d_{4n+3}$ to be equal, we must have
$\delta_{2n+1}=\delta_{n}$, as desired. ∎
This result may have been proven earlier by Yamagishi [5]. However, Yamagishi
does not mention the connection to Sutner’s conjecture, and the proof provided
is not as direct as the one we provide because Yamagishi’s work is concerned
with tori rather than grids.
## 4 When does $\delta_{n}=2$?
Theorem 3.1 proves Sutner’s conjecture as stated and even gives a formula for
finding $\delta_{n}$. However, this formula is somewhat messy, containing one
polynomial division and two polynomial GCDs. We can improve this formula to
just a modulo operation on $n$. We’ll do so by using the divisibility
properties established in Corollary 2.4.2.
###### Theorem 4.1.
The value of $\delta_{n}$ is 2 if and only if $n+1$ is divisible by 3.
###### Proof.
From our work in Theorem 3.1, we know that
$\delta_{n}=2\deg\left(x+1,\frac{f_{n+1}(x)}{(f_{n+1}(x),f_{n+1}(x+1))}\right).$
So we see that $\delta_{n}$ is 2 exactly when $f_{n+1}(x)$ can be divided
without remainder by $x+1$ more times than $f_{n+1}(x+1)$.
For $n+1$ not divisible by 3, Corollary 2.4.2 tells us that $f_{n+1}(x)$ is
not divisible by $x+1$. So in this case, $\delta_{n}=0$, as desired.
For $n+1$ divisible by 3, let $n+1=b\cdot 2^{k}$ for some integers $b,k\geq 0$
where $b$ is odd. Since $n+1$ is divisible by 3, $b$ must also be divisible by
3. Applying Corollary 2.2.1,
$f_{n+1}(x)=x^{2^{k}-1}f_{b}^{2^{k}}(x)\text{ and
}f_{n+1}(x+1)=(x+1)^{2^{k}-1}f_{b}^{2^{k}}(x+1).$
Since $b$ is an odd multiple of 3, Corollary 2.4.2 tells us that $x+1$ divides
$f_{b}(x)$, but $x+1$ does not divide $f_{b}(x+1)$. So,
$f_{n+1}(x)=x^{2^{k}-1}(x+1)^{2^{k}}g^{2^{k}}(x)\text{ and
}f_{n+1}(x+1)=(x+1)^{2^{k}-1}x^{2^{k}}g^{2^{k}}(x+1),$
for some $g(x)\in\mathbb{Z}_{2}[x]$, where $g(x)$ is not divisible by $x$ and
$g(x+1)$ is not divisible by $x+1$. Hence $x+1$ divides $f_{n+1}(x)$ at least
$2^{k}$ times but divides $f_{n+1}(x+1)$ exactly $2^{k}-1$ times, so
$f_{n+1}(x)$ can be divided without remainder by $x+1$ more times than
$f_{n+1}(x+1)$. So, $\delta_{n}=2$, as desired. ∎
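As a numerical sanity check of Theorems 3.1 and 4.1, the following sketch (assuming sympy is available for GF(2) polynomial arithmetic) computes $d_{n}$ via Theorem 2.1 and verifies both the recurrence $d_{2n+1}=2d_{n}+\delta_{n}$ and the mod-3 criterion for small $n$.

```python
# Verify d_{2n+1} = 2 d_n + delta_n with delta_n = 2 iff 3 | n+1, for small n,
# computing d_n = deg gcd(f_{n+1}(x), f_{n+1}(x+1)) over GF(2) (Theorem 2.1).
from sympy import symbols, Poly, GF

x = symbols('x')

def fib_poly(n):
    """Fibonacci polynomial f_n(x) over GF(2), per Definition 1."""
    f_prev = Poly(0, x, domain=GF(2))
    f_cur = Poly(1, x, domain=GF(2))
    if n == 0:
        return f_prev
    for _ in range(n - 1):
        f_prev, f_cur = f_cur, Poly(x, x, domain=GF(2)) * f_cur + f_prev
    return f_cur

def d(n):
    f = fib_poly(n + 1)
    f_shift = Poly(f.as_expr().subs(x, x + 1), x, domain=GF(2))
    return f.gcd(f_shift).degree()

for n in range(1, 16):
    delta = d(2 * n + 1) - 2 * d(n)
    assert delta in (0, 2)                      # Theorem 3.1
    assert (delta == 2) == ((n + 1) % 3 == 0)   # Theorem 4.1
print("d_5 =", d(5))   # 2, the familiar value for the 5x5 Lights Out grid
```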
## 5 Future Work
There are many other relationships with $d_{n}$, many yet to be proven. For
example, Sutner mentions that for all $k\in\mathbb{N}$, $d_{2^{k}-1}=0$ [3].
We believe that the following relationships hold, but are unaware of a proof.
###### Conjecture 5.1.
There are infinitely many $n$ such that $d_{n}=2$. In particular, for all
$k\in\mathbb{N}$, $d_{2\cdot 3^{k}-1}=2$.
This conjecture is similar to Sutner’s result that shows there are infinitely
many $n$ such that $d_{n}=0$.
###### Conjecture 5.2.
Let $a$ be an odd natural number. If $a$ is not divisible by 21, then for all
$k\in\mathbb{N}$,
$d_{a^{k}-1}=d_{a-1}.$
Goshima and Yamagishi conjectured a similar statement on tori instead of grids
and for $a$ prime [1].
###### Theorem 5.1.
The case $a=3$ of Conjecture 5.2 is equivalent to Conjecture 5.1.
###### Proof.
For $a=3$, Conjecture 5.2 says that for all $k\in\mathbb{N}$,
$d_{3^{k}-1}=d_{3-1}=0.$
Since $3^{k}$ is divisible by 3, Theorem 4.1 tells us that
$\delta_{3^{k}-1}=2$. So, applying Theorem 3.1,
$d_{2\cdot 3^{k}-1}=2d_{3^{k}-1}+\delta_{3^{k}-1}=2,$
exactly what Conjecture 5.1 states. One can apply the same steps in reverse to
show that Conjecture 5.1 implies the $a=3$ case of Conjecture 5.2. ∎
## References
* [1] Masato Goshima and Masakazu Yamagishi, _On the dimension of the space of harmonic functions on a discrete torus_ , Experimental Mathematics 19 (2010), no. 4, 421–429.
* [2] Markus Hunziker, António Machiavelo, and Jihun Park, _Chebyshev polynomials over finite fields and reversibility of $\sigma$-automata on square grids_, Theoretical Computer Science 320 (2004), no. 2, 465–483.
* [3] Klaus Sutner, _Linear cellular automata and the Garden-of-Eden_ , The Mathematical Intelligencer 11 (1989), no. 2, 49–53.
* [4] Klaus Sutner, _$\sigma$-automata and Chebyshev polynomials_ , Theoretical Computer Science 230 (1996), 49–73.
* [5] Masakazu Yamagishi, _Periodic harmonic functions on lattices and Chebyshev polynomials_ , Linear Algebra and its Applications 476 (2015), 1–15.
* [6] Øystein Ore, _Number theory and its history_ , p. 109, McGraw-Hill, 1948, Section 5-4, Problem 2.
|
# Stronger arithmetic equivalence
Andrew V. Sutherland Supported by NSF grant DMS-1522526 and Simons Foundation
grant 550033.
###### Abstract
Motivated by a recent result of Prasad, we consider three stronger notions of
arithmetic equivalence: _local integral equivalence_ , _integral equivalence_
, and _solvable equivalence_. In addition to having the same Dedekind zeta
function (the usual notion of arithmetic equivalence), number fields that are
equivalent in any of these stronger senses must have the same class number,
and solvable equivalence forces an isomorphism of adele rings. Until recently
the only nontrivial example of integral and solvable equivalence arose from a
group-theoretic construction of Scott that was exploited by Prasad. Here we
provide infinitely many distinct examples of solvable equivalence, including a
family that contains Scott’s construction as well as an explicit example of
degree 96. We also construct examples that address questions of Scott, and of
Guralnick and Weiss, and shed some light on a question of Prasad.
## 1 Introduction
Number fields that have the same Dedekind zeta function are said to be
_arithmetically equivalent_. Arithmetically equivalent number fields need not
be isomorphic, but they necessarily have the same normal closure and share
many arithmetic invariants. The first nontrivial example of arithmetically
equivalent number fields was given by Gassmann [15], who showed that all such
examples arise from a simple group-theoretic construction, a _Gassmann triple_
$(G,H_{1},H_{2})$ of finite groups in which $H_{1}$ and $H_{2}$ are subgroups
of $G$ that intersect every $G$-conjugacy class with the same cardinality; see
Proposition 2.6 for several equivalent definitions. Gassmann proved that
number fields $K_{1}$, $K_{2}$ with Galois closure $L$ are arithmetically
equivalent if and only if
$(\operatorname{Gal}(L/\mathbf{Q}),\operatorname{Gal}(L/K_{1}),\operatorname{Gal}(L/K_{2}))$
is a Gassmann triple. The number fields $K_{1}$ and $K_{2}$ are isomorphic if
and only if $\operatorname{Gal}(L/K_{1})$ and $\operatorname{Gal}(L/K_{2})$
are conjugate in $\operatorname{Gal}(L/\mathbf{Q})$; we are thus interested in
_nontrivial Gassmann triples_ $(G,H_{1},H_{2})$, those in which $H_{1}$ and
$H_{2}$ are nonconjugate subgroups of $G$.
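The defining condition is easy to test computationally. As an illustration (not taken from this paper), the following self-contained Python sketch checks the classical nontrivial triple in which $G=\mathrm{GL}_3(\mathbf{F}_{\\!2})$ and $H_{1}$, $H_{2}$ are the stabilizers of a point and of a plane of $\mathbf{P}^{2}(\mathbf{F}_{\\!2})$: the two subgroups meet every conjugacy class of $G$ in the same number of elements, yet they are not conjugate in $G$.

```python
# Brute-force check of a classical nontrivial Gassmann triple:
# G = GL_3(F_2), H1 = stabiliser of the point <e1>, H2 = stabiliser of the
# plane x_1 = 0. H1 and H2 meet every conjugacy class of G with the same
# cardinality, but are not conjugate in G.
from itertools import product

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

def det(A):
    return (A[0][0] * (A[1][1] * A[2][2] + A[1][2] * A[2][1])
          + A[0][1] * (A[1][0] * A[2][2] + A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] + A[1][1] * A[2][0])) % 2

G = [M for M in product(product((0, 1), repeat=3), repeat=3) if det(M) == 1]
assert len(G) == 168

I3 = tuple(tuple(int(i == j) for j in range(3)) for i in range(3))
inv = {A: next(B for B in G if mul(A, B) == I3) for A in G}

def act(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) % 2 for i in range(3))

e1 = (1, 0, 0)
H1 = {A for A in G if act(A, e1) == e1}               # fixes the point <e1>
H2 = {A for A in G if act(tuple(zip(*A)), e1) == e1}  # fixes the plane x_1 = 0
assert len(H1) == len(H2) == 24

# Partition G into conjugacy classes and compare intersection counts.
classes, seen = [], set()
for g in G:
    if g not in seen:
        cls = {mul(mul(a, g), inv[a]) for a in G}
        classes.append(cls)
        seen |= cls
assert [len(c & H1) for c in classes] == [len(c & H2) for c in classes]

# ...yet H1 and H2 are not conjugate in G.
assert all({mul(mul(a, h), inv[a]) for h in H1} != H2 for a in G)
print("Gassmann triple verified; class intersection counts:",
      [len(c & H1) for c in classes])
```

This is exactly the triple underlying the well-known degree-7 arithmetically equivalent number fields; the point of the check is only to illustrate how cheaply the Gassmann condition can be verified for a given $(G,H_{1},H_{2})$.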
Gassmann triples $(G,H_{1},H_{2})$ naturally arise in many other settings,
most notably in the construction of isospectral manifolds. As shown by Sunada
[47], if $\pi\colon M\to M_{0}$ is a normal finite Riemannian covering with
transformation group $G$, the quotient manifolds $M/H_{1}$ and $M/H_{2}$ are
_isospectral_ : they have the same sequence of Laplacian eigenvalues. Unlike
the number field case, $M_{1}$ and $M_{2}$ may be isometric even when $H_{1}$
and $H_{2}$ are nonconjugate, but if $H_{1}$ and $H_{2}$ are nonisomorphic and
$M$ is the universal covering of $M_{0}$, then $M/H_{1}$ and $M/H_{2}$ have
nonisomorphic fundamental groups $H_{1}$ and $H_{2}$ and cannot be isometric;
see [44, §4.2] and [47, Corollary 1]. A consequence of this result is that
there are infinitely many distinct ways in which one cannot “hear the shape of
a drum” [16, 23, 33]. A similar result holds in algebraic geometry: if $X$ is
a projective curve over a number field $K$ and $(G,H_{1},H_{2})$ is a Gassmann
triple with $G\subseteq\operatorname{Aut}(X)$, then the Jacobians of the
quotient curves $X/H_{1}$ and $X/H_{2}$ are isogenous over $K$, as proved by
Prasad and Rajan in [39]. As shown in [1], this result can be generalized to
étale Galois covers of $K$-varieties. There is also a discrete analog to
Sunada’s theorem in which one considers a finite graph $\Gamma$ with
automorphism group $G$: Gassmann triples $(G,H_{1},H_{2})$ can be used to
construct nonisomorphic isospectral graphs $\Gamma/H_{1}$ and $\Gamma/H_{2}$,
subject to conditions on $H_{1}$ and $H_{2}$; see [19]. An introduction to the
topics of arithmetic equivalence and isospectrality can be found in [49].
Subgroups $H_{1},H_{2}$ of $G$ form a Gassmann triple $(G,H_{1},H_{2})$ if and
only if the permutation modules $\mathbf{Q}[H_{1}\backslash G]$ and
$\mathbf{Q}[H_{2}\backslash G]$ given by the $G$-action on right cosets are
isomorphic as $\mathbf{Q}[G]$-modules. We then say that $H_{1}$ and $H_{2}$
are _rationally equivalent_ , and if $\mathbf{Z}[H_{1}\backslash G]$ and
$\mathbf{Z}[H_{2}\backslash G]$ are isomorphic as $\mathbf{Z}[G]$-modules, we
say that $H_{1}$ and $H_{2}$ are _integrally equivalent_. Prasad calls
$(G,H_{1},H_{2})$ a _refined Gassmann triple_ when $H_{1},H_{2}\leq G$ are
integrally equivalent, and shows that if $G$ is the Galois group of a Galois
number field $L/\mathbf{Q}$ then the fixed fields $K_{1}:=L^{H_{1}}$ and
$K_{2}:=L^{H_{2}}$ not only have the same Dedekind zeta function, they
must also have isomorphic idele groups (and in particular, isomorphic class
groups); see [38, Theorem 2]. The first (and so far only) nontrivial example
of a refined Gassmann triple was constructed by Scott [42] more than thirty
years ago. Prasad notes that this example can be realized by number fields,
and that such number fields not only have the same Dedekind zeta function and
isomorphic idele groups, they have isomorphic rings of adeles [38, Theorem 3],
and are thus _locally isomorphic_ , meaning their local algebras are
isomorphic at every place (see Theorem 2.16). Thus even when taken in
aggregate these invariants are not enough to guarantee an isomorphism of
number fields.111One can attach auxiliary $L$-functions to a number field that
in combination with the Dedekind zeta function ensure an isomorphism of number
fields whenever all of these $L$-functions coincide [9, 46]; for function
fields see [4, 8, 45].
As noted by Prasad, Scott’s example is essentially the only nontrivial example
of integral equivalence currently known [38, Remark 1]. It is not clear
whether the particular feature of Scott’s refined Gassmann triple that allowed
Prasad to prove an isomorphism of adele rings is necessarily enjoyed by others
(assuming there are any). Whether Scott’s example is a singular special case
or just the most accessible example of a general phenomenon remains an open
question.
In this article we consider two alternative strengthenings of the notion of
arithmetic equivalence: _local integral equivalence_ and _solvable
equivalence_. The latter implies the former and is sufficient to prove
equality of the number field invariants considered by Prasad, notably
including local isomorphism, which is not obviously implied by integral
equivalence (indeed, we show that it is not implied by the similar but weaker
notion of local integral equivalence).
An attractive feature of both local integral equivalence and solvable
equivalence is that they are much easier conditions to check than integral
equivalence; see Propositions 2.2 and 3.1, and Definition 3.5. In this article
we provide infinitely many nontrivial examples of solvably equivalent triples
$(G,H_{1},H_{2})$ that can be realized as Galois groups of number fields (see
Theorem 3.9), and we prove that
* •
locally integrally equivalent subgroups need not be isomorphic (see §4.1);
* •
locally integrally equivalent number fields need not be locally isomorphic
(see §4.2);
* •
locally integrally equivalent subgroups need not be integrally equivalent (see
§4.3);
* •
solvably equivalent subgroups need not be integrally equivalent (see §4.4).
We construct an explicit example of locally integrally equivalent number
fields of degree 32 arising from a triple $(G,H_{1},H_{2})$ with
$H_{1}\not\simeq H_{2}$, and an explicit example of solvably equivalent number
fields of degree 96 that are not integrally equivalent. Thus solvable
equivalence does not imply integral equivalence; we leave open the question of
whether integral equivalence implies solvable equivalence.
The example in §4.1 negatively answers a question of Guralnick and Weiss [18,
Question 2.11] and is relevant to the question of Prasad [38, Question 1] as
to whether integrally equivalent subgroups must be isomorphic, since it shows
that locally integrally equivalent subgroups need not be. The solvably
equivalent subgroups we construct are all isomorphic, which leads to the
question of whether solvably equivalent subgroups are necessarily isomorphic.
This question is perhaps more accessible than Prasad’s question, since the
only example of integral equivalence currently known arises in a setting where
rational equivalence is already enough to force isomorphism (rationally
equivalent subgroups of ${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$,
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$,
${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$ must be isomorphic, see [38, Question
1] and [48, Remark 3.7]). The example of solvable equivalence given in §4.4
does not arise in this setting, and one can find many others.
The example in §4.2 refines an answer to a question of Stuart and Perlis [37,
§4] given by Mantilla-Soler [30, Theorem 3.7] by showing that the sums of the
ramification indices above a given prime in arithmetically equivalent number
fields need not coincide even when their products do (as they must for number
fields that are locally integrally equivalent; see Proposition 3.1).
The examples in §4.3, §4.4 negatively answer a question of Guralnick and Weiss
[18, Question 2.10] as to whether local integral equivalence implies integral
equivalence (this question appears to have also been addressed in the thesis
of D. Hahn in the case of solvable groups [40]). The example in §4.4 also
addresses a question of Scott [42, Remark 4.3] regarding low rank permutation
modules.
## 2 Background and preparation
In this section we recall background material, set notation, and summarize
some of the results we will use. The material in this section is well known to
experts, but we provide short proofs in cases where we were unable to find a
suitable reference (a few of these results seem to be folklore).
Let $G$ be a finite group. For each subgroup $H\leq G$ we use $[H\backslash
G]$ to denote the transitive $G$-set given by the (right) action of $G$ on
(right) cosets of $H$; this action is faithful if and only if the intersection
of all the $G$-conjugates of $H$ (its _normal core_ in $G$) is the trivial
group. We use $\chi_{H}\colon G\to\mathbf{Z}$ to denote the permutation
character $g\mapsto\\#[H\backslash G]^{g}$ that sends $g$ to the number of
$H$-cosets it fixes (the induced character $1_{H}^{G}$), and note that
$\chi_{H}(g)\neq 0$ if and only if $g$ is conjugate to an element of $H$.
We extend $\chi_{H}$ to subgroups $K$ of $G$ by defining
$\chi_{H}(K)\coloneqq\\#[H\backslash G]^{K}$ (the _mark_ of $K$ on
$[H\backslash G]$), so that $\chi_{H}(\langle g\rangle)=\chi_{H}(g)$;
equivalently, $\chi_{H}(K)$ is the number of singleton fibers in the map
$[H\backslash G]\to[H\backslash G/K]$ defined by $Hg\mapsto HgK$. For $g\in G$
we have $HgK=Hg$ if and only if $gKg^{-1}\subseteq H$, and it follows that
$\chi_{H}(K)=\frac{\\#\bigl{\\{}g\in G:gKg^{-1}\leq
H\bigr{\\}}}{\\#H}=\frac{\\#N_{G}(K)}{\\#H}\\#\bigl{\\{}gKg^{-1}\leq H:g\in
G\bigr{\\}},$ (2.1)
where $N_{G}(K)$ denotes the normalizer of $K$ in $G$.
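To make the definitions concrete, here is a minimal plain-Python sketch (written for this exposition, not taken from any cited source) that computes the mark $\chi_{H}(K)$ for $G=S_{3}$ in two ways: by counting fixed right cosets and via the first equality in (2.1). The representation of permutations, the choice of $H$ and $K$, and the helper names are all choices made here for illustration.

```python
# Minimal sketch: compute the mark chi_H(K) = #[H\G]^K for G = S_3 two ways,
# as a count of fixed right cosets and via formula (2.1).
from itertools import permutations

def mul(p, q):                       # composition: apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = [tuple(p) for p in permutations(range(3))]   # S_3 acting on {0,1,2}
H = [(0, 1, 2), (1, 0, 2)]                       # generated by the transposition (0 1)
K = [(0, 1, 2), (0, 2, 1)]                       # generated by the transposition (1 2)

cosets = {frozenset(mul(h, g) for h in H) for g in G}        # right cosets H\G

def fixed_cosets(K):                 # cosets Hg with HgK = Hg
    return sum(all(frozenset(mul(x, k) for x in c) == c for k in K) for c in cosets)

def chi_via_formula(H, K):           # #{g in G : g K g^{-1} <= H} / #H
    return sum(all(mul(mul(g, k), inv(g)) in H for k in K) for g in G) // len(H)

print(fixed_cosets(K), chi_via_formula(set(H), K))           # both print 1
```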
###### Lemma 2.1.
Let $H$ and $K$ be subgroups of a finite group $G$. The integer $\chi_{H}(K)$
depends only on the $G$-conjugacy classes of $H$ and $K$ and the function
$\chi_{H}$ depends only on the $G$-conjugacy class of $H$.
###### Proof.
Replacing either $H$ or $K$ with a $G$-conjugate does not change the RHS of
(2.1). ∎
If $\mathcal{P}$ is a class of groups (e.g. cyclic groups or solvable groups),
we call its elements _$\mathcal{P}$ -groups_, and refer to the subgroups of a
group $G$ that lie in $\mathcal{P}$ as its _$\mathcal{P}$ -subgroups_. We say
that $\mathcal{P}$ is _subgroup-closed_ if it contains all subgroups of its
elements. A function that maps subgroups of $G$ to subgroups of $G$ is _$G$
-class preserving_ if it maps subgroups to $G$-conjugates.
###### Proposition 2.2.
Let $G$ be a finite group and let $\mathcal{P}$ be a subgroup-closed class of
groups. For any two subgroups $H_{1},H_{2}\leq G$ the following are
equivalent:
1. (i)
There is a $G$-class preserving bijection between the sets of
$\mathcal{P}$-subgroups of $H_{1}$ and $H_{2}$;
2. (ii)
$\\#\\{gKg^{-1}\leq H_{1}:g\in G\\}=\\#\\{gKg^{-1}\leq H_{2}:g\in G\\}$ for
every $\mathcal{P}$-subgroup $K$ of $G$;
3. (iii)
$\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every $\mathcal{P}$-subgroup $K$ of $G$;
4. (iv)
$\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every $\mathcal{P}$-subgroup $K$ of
$H_{1}$ or $H_{2}$;
5. (v)
the $G$-sets $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$ are isomorphic as
$K$-sets for every $\mathcal{P}$-subgroup $K$ of $G$.
6. (vi)
the $G$-sets $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$ are isomorphic as
$K$-sets for every $\mathcal{P}$-subgroup $K$ of $H_{1}$ or $H_{2}$.
###### Proof.
(i) $\Leftrightarrow$ (ii) is immediate, (ii) $\Leftrightarrow$ (iii) follows
from (2.1), (iii) $\Leftrightarrow$ (iv) follows from the fact that
$\chi_{H_{i}}(K)=0$ if $K$ is not conjugate to a subgroup of $H_{i}$, and the
implications (v) $\Rightarrow$ (vi) $\Rightarrow$ (iv) are clear; it thus
suffices to show (iii) $\Rightarrow$ (v).
Let $K$ be a $\mathcal{P}$-subgroup of $G$ and consider the $K$-sets
$X_{1}:=[H_{1}\backslash G]$ and $X_{2}:=[H_{2}\backslash G]$. We have
$\chi_{H_{1}}(k)=\chi_{H_{2}}(k)$ for all $k\in K$, thus
$\\#X_{1}=\chi_{H_{i}}(1)=\\#X_{2}$, and $X_{1}$ and $X_{2}$ both have
$\frac{1}{\\#K}\sum_{k}\chi_{H_{i}}(k)$ orbits. The stabilizer $H\leq K$ of
any element of a $K$-orbit of $X_{1}$ or $X_{2}$ is a $\mathcal{P}$-group,
since $\mathcal{P}$ is subgroup-closed. For every $H\leq K$ the number of
elements of $X_{i}$ fixed by $H$ is $\chi_{H_{i}}(H)$, and these marks
determine the number of $K$-orbits of $X_{i}$ whose stabilizers lie in any
given $K$-conjugacy class (by inverting the table of marks of $K$). Since
$\chi_{H_{1}}(H)=\chi_{H_{2}}(H)$ for all $H\leq K$, the $K$-orbits of $X_{1}$
and $X_{2}$ can be put in a bijection that preserves conjugacy classes of
stabilizers (and thus preserves cardinalities). If $H$ is the stabilizer of an
element of a $K$-orbit $X$, then $X$ is isomorphic to the $K$-set
$[H\backslash K]$. It follows that $X_{1}$ and $X_{2}$ are isomorphic
$K$-sets. ∎
We call subgroups $H_{1},H_{2}\leq G$ that satisfy the equivalent properties
of Proposition 2.2 _$\mathcal{P}$ -equivalent_, and this defines an
equivalence relation on the subgroups of $G$. A necessary condition for
$\mathcal{P}$-equivalence is that $H_{1}$ and $H_{2}$ must have the same
_$\mathcal{P}$ -statistics_, meaning that they contain the same number of
$\mathcal{P}$-subgroups in every isomorphism class of groups. When
$\mathcal{P}$ is the class of cyclic groups this amounts to having the same
_order statistics_ (numbers of elements of each order).
###### Remark 2.3.
Condition (iv) of Proposition 2.2 provides an efficient method for testing
whether two subgroups of $G$ are $\mathcal{P}$-equivalent. Conditions (ii) and
(iii) can both be used to efficiently partition the set of subgroups of $G$
into $\mathcal{P}$-equivalence classes without the need for pairwise testing;
conjugate subgroups lie in the same equivalence class, so it suffices to work
with a set of conjugacy class representatives.
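In the same spirit, a hypothetical helper along the following lines (again illustrative rather than quoted from any implementation) realizes the partitioning strategy of Remark 2.3: subgroups are keyed by their vector of marks against a fixed list of representatives of the $G$-conjugacy classes of $\mathcal{P}$-subgroups, so no pairwise comparisons are needed.

```python
# Sketch: group subgroups of G into P-equivalence classes by their vector of marks
# over a fixed list of representatives of the G-conjugacy classes of P-subgroups.
from collections import defaultdict

def chi(H, K, G, conj):
    # formula (2.1): chi_H(K) = #{g in G : g K g^{-1} <= H} / #H;
    # H and K are collections of group elements, conj(g, k) returns g*k*g^{-1}
    return sum(all(conj(g, k) in H for k in K) for g in G) // len(H)

def p_equivalence_classes(subgroups, p_class_reps, G, conj):
    classes = defaultdict(list)
    for H in subgroups:
        key = tuple(chi(set(H), K, G, conj) for K in p_class_reps)
        classes[key].append(H)
    return list(classes.values())
```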
###### Lemma 2.4.
Let $K,H_{1},H_{2}$ be finite groups with $|H_{1}|=|H_{2}|=n$, let
$\phi_{1}\colon K\to H_{1}$ and $\phi_{2}\colon K\to H_{2}$ be injective group
homomorphisms, and let $\rho_{1}\colon H_{1}\to S_{n}$ and $\rho_{2}\colon
H_{2}\to S_{n}$ be left regular representations of $H_{1}$ and $H_{2}$. Then
$K_{1}\coloneqq\rho_{1}(\phi_{1}(K))$ and
$K_{2}\coloneqq\rho_{2}(\phi_{2}(K))$ are conjugate subgroups of $S_{n}$.
###### Proof.
The $K$-sets $X_{1}=X_{2}=\\{1,\ldots,n\\}$ given by the actions of
$K_{1}\coloneqq\rho_{1}(\phi_{1}(K))$ and
$K_{2}\coloneqq\rho_{2}(\phi_{2}(K))$ are free $K$-sets with the same
number of orbits, hence isomorphic. There is thus a $K$-equivariant map
$\sigma\colon X_{1}\to X_{2}$ with $\sigma\in S_{n}$ satisfying
$\rho_{1}(\phi_{1}(k))\sigma=\sigma\rho_{2}(\phi_{2}(k))$ for $k\in K$, and
$\sigma^{-1}K_{1}\sigma=K_{2}$. ∎
###### Corollary 2.5.
Let $\mathcal{P}$ be a subgroup-closed class of groups. Finite groups $H_{1}$
and $H_{2}$ of the same order can be embedded as $\mathcal{P}$-equivalent
subgroups of some group $G$ if and only if they have the same
$\mathcal{P}$-statistics.
###### Proof.
The necessity of having the same $\mathcal{P}$-statistics is obvious, and
Lemma 2.4 proves sufficiency. ∎
For any integral domain $R$, we use $R[H\backslash G]$ to denote the
corresponding permutation module; this is the free $R$-module with basis
$[H\backslash G]$ equipped with the $R$-linear extension of the $G$-action on
$[H\backslash G]$; we thus view $R[H\backslash G]$ as a (right) $R[G]$-module.
If $H_{1},H_{2}\leq G$ have the same index $n$, after ordering the $G$-sets
$[H_{1}\backslash G]$ and $[H_{2}\backslash G]$, we may uniquely identify each
$R[G]$-module homomorphism $R[H_{1}\backslash G]\to R[H_{2}\backslash G]$ with
a matrix $M\in R^{n\times n}$ whose determinant $\operatorname{det}M$ does not
depend on our choices. If $\rho_{1},\rho_{2}\colon G\to S_{n}$ are the
permutation representations of $G$ acting on $\\{1,\ldots,n\\}$ via our chosen
orderings of $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$, respectively,
then the matrices $M\in R^{n\times n}$ that correspond to elements of
$\operatorname{Hom}_{R[G]}(R[H_{1}\backslash G],R[H_{2}\backslash G])$ are
precisely those that are fixed by the diagonal action of
$\rho_{1}\times\rho_{2}$ on matrix entries; in other words, the entries of $M$
must satisfy
$M_{ij}=M_{\rho_{1}(g)(i),\rho_{2}(g)(j)}\qquad(\text{for all }g\in G).$
We define
$d(H_{1},H_{2}):=\gcd\left\\{\operatorname{det}M:M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])\right\\},$
and extend this definition to all subgroups of $G$ by defining
$d(H_{1},H_{2})=0$ whenever $\\#H_{1}\neq\\#H_{2}$.
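This description makes $\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash G],\mathbf{Z}[H_{2}\backslash G])$ easy to write down in practice: a $\mathbf{Z}$-basis is given by the $0/1$ indicator matrices of the orbits of the diagonal action of $G$ on index pairs, and $d(H_{1},H_{2})$ is then the gcd of the determinants of integer combinations of these basis matrices. The sketch below (illustrative, with hypothetical helper names) computes such a basis from the images of a generating set of $G$ under $\rho_{1}$ and $\rho_{2}$.

```python
# Sketch: a Z-basis of Hom_{Z[G]}(Z[H1\G], Z[H2\G]) as 0/1 orbit-indicator matrices.
# gens1[k] and gens2[k] are the images (as length-n tuples on {0,...,n-1}) of the
# k-th generator of G under rho_1 and rho_2, respectively.
def hom_basis(n, gens1, gens2):
    seen, basis = set(), []
    for start in ((i, j) for i in range(n) for j in range(n)):
        if start in seen:
            continue
        orbit, stack = {start}, [start]
        while stack:                          # close the pair (i, j) under the diagonal action
            i, j = stack.pop()
            for g1, g2 in zip(gens1, gens2):
                pair = (g1[i], g2[j])
                if pair not in orbit:
                    orbit.add(pair)
                    stack.append(pair)
        seen |= orbit
        basis.append([[1 if (i, j) in orbit else 0 for j in range(n)] for i in range(n)])
    return basis
```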
We now give several equivalent conditions for subgroups to be
$\mathcal{P}$-equivalent when $\mathcal{P}$ is the class of cyclic groups.
###### Proposition 2.6.
Let $G$ be a finite group. For all subgroups $H_{1}$ and $H_{2}$ of $G$ the
following are equivalent:
1. (i)
There is a bijection of sets $H_{1}\leftrightarrow H_{2}$ that preserves
$G$-conjugacy;
2. (ii)
$\\#(H_{1}\cap C)=\\#(H_{2}\cap C)$ for every conjugacy class $C$ of $G$;
3. (iii)
$\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every cyclic $K\leq G$;
4. (iv)
The $G$-sets $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$ are isomorphic as
$K$-sets for every cyclic $K\leq G$;
5. (v)
$\mathbf{Q}[H_{1}\backslash G]\simeq\mathbf{Q}[H_{2}\backslash G]$;
6. (vi)
$d(H_{1},H_{2})\neq 0$.
###### Proof.
The equivalence of (i) and (ii) is immediate. The equivalence of (ii) and
(iii) follows from the formula $\chi_{H_{i}}(g)\\#H_{i}=\\#(H_{i}\cap
C(g))\\#Z(g)$, where $C(g)$ is the conjugacy class of $g$ and $Z(g)$ is its
centralizer (in $G$); see [36, Eq. 8]. The equivalence of (iii) and (iv)
follows from applying Proposition 2.2 to the class of cyclic groups. For the
equivalence of (iii) and (v), note that
$\dim_{\mathbf{Q}}(\mathbf{Q}[H_{i}\backslash G]^{K})=\chi_{H_{i}}(K)$ for
cyclic $K\leq G$ and then apply the corollary to [43, Theorem 30]. Clearing
the denominators in
$M\in\operatorname{Hom}_{\mathbf{Q}[G]}(\mathbf{Q}[H_{1}\backslash
G],\mathbf{Q}[H_{2}\backslash G])$ shows the equivalence of (v) and (vi). ∎
###### Remark 2.7.
The condition $K\leq G$ in (iii), (iv) can be replaced by “$K\leq H_{1}$ or
$K\leq H_{2}$” via Proposition 2.2.
###### Definition 2.8.
Subgroups $H_{1}$ and $H_{2}$ of a finite group $G$ that satisfy the
equivalent conditions of Proposition 2.6 are said to be _rationally
equivalent_ (or _Gassmann equivalent_).
A triple of groups $(G,H_{1},H_{2})$ with $H_{1},H_{2}\leq G$ rationally
equivalent is called a _Gassmann triple_ [15]. By Proposition 2.6, rational
equivalence defines an equivalence relation on the subgroups of $G$. Conjugate
subgroups of $G$ are necessarily rationally equivalent, so we may view this as
an equivalence relation on conjugacy classes of subgroups. Rational
equivalence classes may be arbitrarily large [27].
We are interested in the nontrivial rational equivalence classes, those which
contain nonconjugate but rationally equivalent subgroups $H_{1},H_{2}\leq G$.
Equivalently, we are interested in the cases where $[H_{1}\backslash
G]\not\simeq[H_{2}\backslash G]$ as $G$-sets, but $\mathbf{Q}[H_{1}\backslash
G]\simeq\mathbf{Q}[H_{2}\backslash G]$ as $\mathbf{Q}[G]$-modules. Standard
examples include the subgroups
$H_{1}\coloneqq\bigl{\\{}\bigl{[}\begin{smallmatrix}1&*\\\
0&*\end{smallmatrix}\bigr{]}\in{\mathbf{GL}}_{2}(\mathbf{F}_{p})\bigr{\\}}$
and $H_{2}\coloneqq\bigl{\\{}\bigl{[}\begin{smallmatrix}1&0\\\
*&*\end{smallmatrix}\bigr{]}\in{\mathbf{GL}}_{2}(\mathbf{F}_{p})\bigr{\\}}$ of
$G\coloneqq{\mathbf{GL}}_{2}(\mathbf{F}_{p})$, where $p$ is an odd prime [11],
and similar examples in ${\mathbf{GL}}_{n}(\mathbf{F}_{p})$ for $n>2$ and any
prime $p$. In these examples the subgroups $H_{1}$ and $H_{2}$ are not
$G$-conjugate, but transposition gives a bijection $H_{1}\leftrightarrow
H_{2}$ that preserves $G$-conjugacy. The smallest example occurs for the group
$G$ with GAP identifier $\langle 32,43\rangle$, which contains two
nonconjugate rationally equivalent subgroups $H_{1}$ and $H_{2}$ isomorphic to
the Klein $4$-group. (A GAP identifier $\langle m,n\rangle$ denotes the
isomorphism class of an abstract group of order $m$; the positive integer $n$
is an ordinal that distinguishes distinct isomorphism classes of groups of
order $m$. For $m\leq 2000$ not equal to 1024, explicit presentations of these
groups can be found in the small groups database [3], which is available in
both GAP [14] and Magma [5].)
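Readers who wish to experiment can verify condition (ii) of Proposition 2.6 for the standard example by brute force; the following script (written for this exposition and not taken from [11] or any other cited source) does so for $p=3$, and also confirms that $H_{1}$ and $H_{2}$ are not conjugate in $G$.

```python
# Brute-force check of Proposition 2.6(ii) for G = GL_2(F_3) with the subgroups
# H1 = [[1,*],[0,*]] and H2 = [[1,0],[*,*]]: they meet every conjugacy class of G
# in the same number of elements, yet are not conjugate in G.
from itertools import product

p = 3

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2))
                 for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p

def inv(A):
    d = pow(det(A), p - 2, p)                    # inverse of det(A) mod p
    return ((A[1][1] * d % p, -A[0][1] * d % p),
            (-A[1][0] * d % p, A[0][0] * d % p))

G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p != 0]
H1 = {M for M in G if M[0][0] == 1 and M[1][0] == 0}   # [[1,*],[0,*]]
H2 = {M for M in G if M[0][0] == 1 and M[0][1] == 0}   # [[1,0],[*,*]]

def conjugates(x):
    return {mul(mul(g, x), inv(g)) for g in G}

# partition G into conjugacy classes
classes, remaining = [], set(G)
while remaining:
    cls = conjugates(next(iter(remaining)))
    classes.append(cls)
    remaining -= cls

print(all(len(cls & H1) == len(cls & H2) for cls in classes))        # True (rationally equivalent)
print(any({mul(mul(g, h), inv(g)) for h in H1} == H2 for g in G))    # False (not conjugate)
```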
###### Remark 2.9.
Rationally equivalent subgroups necessarily have the same order but need not
be isomorphic. The smallest example of a Gassmann triple $(G,H_{1},H_{2})$
with $H_{1}\not\simeq H_{2}$ arises for $G\simeq\langle 384,5755\rangle$ with
subgroups $H_{1}\simeq\langle 16,3\rangle$ and $H_{2}\simeq\langle
16,10\rangle$. The groups $H_{1}$ and $H_{2}$ are the first of infinitely many
pairs of nonisomorphic groups with the same order statistics (one can take
$(\mathbf{Z}/p\mathbf{Z})^{3}$ and the Heisenberg group
$H_{3}(\mathbf{F}_{\\!p})$ for any odd prime $p$, for example). Corollary 2.5
implies that all such pairs $H_{1}$ and $H_{2}$ can be realized as part of a
Gassmann triple $(G,H_{1},H_{2})$.
The original motivation for studying rational equivalence stems from its
relationship to zeta functions of number fields. Recall that the _Dedekind
zeta function_ of a number field $K$ is defined by
$\zeta_{K}(z)\coloneqq\prod_{\mathfrak{p}}(1-N(\mathfrak{p})^{-z})^{-1},$
where $\mathfrak{p}$ varies over primes of $K$ (nonzero prime ideals of its
ring of integers $\mathcal{O}_{K}$) and
$N(\mathfrak{p})\coloneqq[\mathcal{O}_{K}:\mathfrak{p}]$ is the cardinality of
the residue field at $\mathfrak{p}$ (its absolute norm). The Euler product for
$\zeta_{K}(z)$ defines a holomorphic function on $\operatorname{Re}(z)>1$ that
extends to a meromorphic function on $\mathbf{C}$ with a simple pole at $z=1$
whose residue is given by the _analytic class number formula_ :
$\lim_{z\to
1^{+}}(z-1)\zeta_{K}(z)=\frac{2^{r}(2\pi)^{s}h_{K}R_{K}}{\\#\mu(K)|D_{K}|^{1/2}}.$
(2.2)
Here $r$ and $s$ are the number of real and complex places of $K$ (its
_signature_), $h_{K}$ is the class number, $R_{K}$ is the regulator, $\mu(K)$
is the group of roots of unity in $K^{\times}$, and $D_{K}$ is the
discriminant of $K$.
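As a quick sanity check of (2.2) in the simplest case: for $K=\mathbf{Q}$ we have $r=1$, $s=0$, $h_{K}=R_{K}=1$, $\\#\mu(K)=2$, and $|D_{K}|=1$, so the right-hand side equals $2^{1}(2\pi)^{0}\cdot 1\cdot 1/(2\cdot 1)=1$, which is indeed the residue of the Riemann zeta function $\zeta_{\mathbf{Q}}$ at $z=1$.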
###### Theorem 2.10.
For number fields $K_{1}$ and $K_{2}$ the following are equivalent:
1. (i)
$\zeta_{K_{1}}(z)=\zeta_{K_{2}}(z)$;
2. (ii)
$K_{1}$ and $K_{2}$ have a common Galois closure $L$ with
$\operatorname{Gal}(L/K_{1}),\operatorname{Gal}(L/K_{2})\leq\operatorname{Gal}(L/\mathbf{Q})$
rationally equivalent;
3. (iii)
There is a bijection between the primes of $K_{1}$ and $K_{2}$ that preserves
residue fields.
###### Proof.
These equivalences all follow from [35, Theorem 1]. ∎
###### Definition 2.11.
Number fields $K_{1}$ and $K_{2}$ that satisfy the equivalent conditions of
Theorem 2.10 are said to be _arithmetically equivalent_.
If $K_{1}$ and $K_{2}$ are arithmetically equivalent number fields with common
Galois closure $L$ and we put $G:=\operatorname{Gal}(L/\mathbf{Q})$,
$H_{1}:=\operatorname{Gal}(L/K_{1})$, $H_{2}:=\operatorname{Gal}(L/K_{2})$,
then $(G,H_{1},H_{2})$ is a _faithful_ Gassmann triple, meaning that
$\mathbf{Q}[H_{1}\backslash G]\simeq\mathbf{Q}[H_{2}\backslash G]$ is a
faithful representation of $G$. Equivalently, $H_{1}$ and $H_{2}$ have trivial
normal core in $G$. There is no loss of generality in restricting our
attention to faithful Gassmann triples: if $H_{1},H_{2}\leq G$ are rationally
equivalent then they necessarily have the same normal core $N$, the quotients
$H_{1}/N,H_{2}/N\leq G/N$ are rationally equivalent, and $H_{1}/N$ and
$H_{2}/N$ are conjugate in $G/N$ if and only if $H_{1}$ and $H_{2}$ are
conjugate in $G$.
Arithmetically equivalent number fields share many (but not all) arithmetic
invariants.
###### Theorem 2.12.
Arithmetically equivalent number fields have the same degree, discriminant,
signature, and roots of unity.
###### Proof.
See [35, Theorem 1]. ∎
The analytic class number formula (2.2) implies that if $K_{1}$ and $K_{2}$
are arithmetically equivalent number fields then we must have
$h_{K_{1}}R_{K_{1}}=h_{K_{2}}R_{K_{2}},$
but it may happen that $h_{K_{1}}\neq h_{K_{2}}$ (in which case $R_{K_{1}}\neq
R_{K_{2}}$), and even when $h_{K_{1}}=h_{K_{2}}$ the class groups need not be
isomorphic. (The fields
$\mathbf{Q}[x]/(x^{7}-3x^{6}+10x^{5}-21x^{4}-6x^{3}+58x^{2}-41x-6)$ and
$\mathbf{Q}[x]/(x^{7}-x^{6}+x^{5}+5x^{4}+9x^{3}+5x^{2}-7x-4)$ with LMFDB [29]
labels 7.3.1427382162361.1 and 7.3.1427382162361.2 are an example; see [2] for
analogous exceptions in the context of isospectral Riemannian manifolds.) It
follows from Theorem 2.12 that if $K_{1}$ and $K_{2}$ are arithmetically
equivalent then a prime $p$ of $\mathbf{Q}$ ramifies in $K_{1}$ if and only if
it ramifies in $K_{2}$.
###### Remark 2.13.
The ramified rational primes in arithmetically equivalent number fields
necessarily coincide, but they may have different factorization patterns. This
was shown by Perlis in [35, page 351] for the arithmetically equivalent number
fields $K_{1}\coloneqq\mathbf{Q}(\sqrt[8]{97})$ and
$K_{2}\coloneqq\mathbf{Q}(\sqrt[8]{1552})$ where we have
$2\mathcal{O}_{K_{1}}=\mathfrak{p}_{1}\mathfrak{p}_{2}\mathfrak{p}_{3}^{2}\mathfrak{p}_{4}^{4}\qquad\text{versus}\qquad
2\mathcal{O}_{K_{2}}=\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}^{2}\mathfrak{q}_{4}^{2}.$
In this example the products of the ramification indices differ, but the sums
are the same. As shown by Mantilla-Soler [30, Thm. 3.7], there are cases where
the sums also differ. Indeed, the number fields
$K_{1}:=\mathbf{Q}[x]/(x^{7}-3x^{6}+4x^{5}-5x^{4}+3x^{3}-x^{2}-2x+1)$ and
$K_{2}:=\mathbf{Q}[x]/(x^{7}-x^{5}-2x^{4}-2x^{3}+2x^{2}-x+4)$ with LMFDB
labels 7.3.30558784.1 and 7.3.30558784.2 are arithmetically equivalent with
$2\mathcal{O}_{K_{1}}=\mathfrak{p}_{1}\mathfrak{p}_{2}^{4}\qquad\text{versus}\qquad
2\mathcal{O}_{K_{2}}=\mathfrak{q}_{1}\mathfrak{q}_{2}^{2}.$
This example settled a question of Stuart and Perlis [37, §4].
For a number field $K$ with Galois closure $L$, so that $K$ is the fixed field
of some $H\leq G=\operatorname{Gal}(L/\mathbf{Q})$, the decomposition of
rational primes in $K$ can be computed using the $G$-set $[H\backslash G]$. The lemma
below can be used to explain the examples in Remark 2.13 and to prove part
(iii) of Theorem 2.10.
###### Lemma 2.14.
Let $L$ be a Galois extension of $\mathbf{Q}$ with Galois group $G$, let
$\mathfrak{p}$ be a prime of $L$ above $p:=\mathfrak{p}\cap\mathbf{Q}$ with
decomposition group $D_{\mathfrak{p}}$ and inertia group $I_{\mathfrak{p}}$,
and let $K$ be the fixed field of $H\leq G$.
1. (i)
There is a bijection $[H\backslash G/D_{\mathfrak{p}}]\to\\{\text{primes of
$K$ above $p$}\\}$ defined by $H\sigma
D_{\mathfrak{p}}\mapsto\sigma(\mathfrak{p})\cap K$.
2. (ii)
The prime $\sigma(\mathfrak{p})\cap K$ has ramification index $[H\sigma
I_{\mathfrak{p}}:H\sigma]$ and residue field degree $[H\sigma
D_{\mathfrak{p}}:H\sigma I_{\mathfrak{p}}]$.
###### Proof.
This is well known; see [34, §9] and [51], for example. ∎
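For example, summing the formula in (ii) over the double cosets in (i) recovers the expected degree identity: since the double cosets $H\sigma D_{\mathfrak{p}}$ partition $G$, we get $\sum_{i}e_{i}f_{i}=\\#[H\backslash G]=[K:\mathbf{Q}]$, where the sum runs over the primes of $K$ above $p$ and $e_{i}$, $f_{i}$ denote their ramification indices and residue field degrees.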
Theorem 2.10 implies that if $K_{1}$ and $K_{2}$ are arithmetically equivalent
number fields then for every unramified rational prime $p$ there is a
bijection between the primes of $K_{1}$ above $p$ and the primes of $K_{2}$
above $p$ such that the completions of $K_{1}$ and $K_{2}$ at corresponding
primes above $p$ are isomorphic extensions of $\mathbf{Q}_{p}$, since (up to
isomorphism) there is a unique unramified extension of $\mathbf{Q}_{p}$ of
each degree. By Theorem 2.12, this also holds for the archimedean place
$\infty$ of $\mathbf{Q}$, since the signatures of $K_{1}$ and $K_{2}$
coincide, but as shown by the examples of Remark 2.13, this need not hold at
ramified primes.
###### Definition 2.15.
Two number fields $K_{1}$ and $K_{2}$ are said to be _locally isomorphic_ if
there is a bijection between the places of $K_{1}$ and the places of $K_{2}$
such that the completions at corresponding places are isomorphic (both as
topological rings and as $\mathbf{Q}_{p}$-algebras); equivalently,
$K_{1}\otimes_{\mathbf{Q}}\mathbf{Q}_{p}\simeq
K_{2}\otimes_{\mathbf{Q}}\mathbf{Q}_{p}$ for all $p\leq\infty$. If this holds
for all but finitely many places then $K_{1}$ and $K_{2}$ are said to be
_locally isomorphic almost everywhere_.
For a number field $K$ we use $\mathbf{A}_{K}$ to denote its ring of adeles,
which we may regard both as a topological ring and as an
$\mathbf{A}_{\mathbf{Q}}$-algebra.
###### Theorem 2.16.
Let $K_{1}$ and $K_{2}$ be number fields. The following hold:
1. (i)
$K_{1}$ and $K_{2}$ are locally isomorphic almost everywhere if and only if
they are arithmetically equivalent, and if and only if almost every prime of
$\mathbf{Q}$ has the same number of primes above it in $K_{1}$ and $K_{2}$;
2. (ii)
$K_{1}$ and $K_{2}$ are locally isomorphic if and only if
$\mathbf{A}_{K_{1}}\simeq\mathbf{A}_{K_{2}}$ (as topological rings and
$\mathbf{A}_{\mathbf{Q}}$-algebras);
3. (iii)
if $K_{1}$ and $K_{2}$ are locally isomorphic then there is a natural
isomorphism of their Brauer groups that commutes with all restriction maps
induced by common inclusions of number fields.
###### Proof.
The first equivalence in (i) follows from [35, Theorem 1] and the second was
proved in [37], the forward implication in (ii) is immediate and the reverse
implication is due to Iwasawa [22, Lemma 7] (also see [26, Lemma 3]), and the
implication in (iii) is proved in [28]. ∎
###### Remark 2.17.
The converse of part (iii) of Theorem 2.16 is false. Arithmetically equivalent
number fields with naturally isomorphic Brauer groups need not be locally
isomorphic, as shown in [31].
Theorem 2.16 implies that locally isomorphic number fields are necessarily
arithmetically equivalent, but the converse need not hold. As observed in
Remark 2.13, arithmetically equivalent number fields may have incompatible
ramification indices, which precludes local isomorphism.
###### Remark 2.18.
Locally isomorphic number fields need not have the same class number; the
fields $\mathbf{Q}(\sqrt[8]{-33})$ and $\mathbf{Q}(\sqrt[8]{-528})$ with class
numbers 256 and 128 are an example [12, p. 214].
The following proposition provides an effective way to test for local
isomorphism.
###### Proposition 2.19.
Let $L,K_{1},K_{2}$ be number fields corresponding to a Gassmann triple
$(G,H_{1},H_{2})$, and let $D_{\mathfrak{p}}\subseteq G$ be the decomposition
group of a place $\mathfrak{p}$ of $L$ above a place $p$ of $\mathbf{Q}$. Then
$K_{1}\otimes_{\mathbf{Q}}\mathbf{Q}_{p}\simeq
K_{2}\otimes_{\mathbf{Q}}\mathbf{Q}_{p}$ if and only if $[H_{1}\backslash G]$
and $[H_{2}\backslash G]$ are isomorphic as $D_{\mathfrak{p}}$-sets. These
equivalent conditions necessarily hold for every unramified place $p$ of
$\mathbf{Q}$.
###### Proof.
Recall that for any field $F$ with separable closure $\Omega$ there is a
functorial equivalence between the category of étale $F$-algebras $A$ and the
category of finite $\operatorname{Gal}(\Omega/F)$-sets $S$; see [32, Theorem
8.20]. The $\operatorname{Gal}(\Omega/F)$-action on $S$ is continuous, hence
factors through a finite quotient $Q$, and by a $Q$-set $S$ we mean the
$\operatorname{Gal}(\Omega/F)$-set $S$ with the action of each
$\sigma\in\operatorname{Gal}(\Omega/F)$ given by the action of its projection
to $Q$.
For $i=1,2$, the $G$-set $[H_{i}\backslash G]$ corresponds to the étale
$\mathbf{Q}$-algebra $K_{i}$. If we view $D_{\mathfrak{p}}$ as the Galois
group of the étale $\mathbf{Q}_{p}$-algebra $L\otimes\mathbf{Q}_{p}$, the
$D_{\mathfrak{p}}$-set $[H_{i}\backslash G]$ corresponds to the étale
$\mathbf{Q}_{p}$-algebra $K_{i}\otimes\mathbf{Q}_{p}$.
The last statement follows from (iv) of Proposition 2.6, since if
$\mathfrak{p}$ is unramified then $D_{\mathfrak{p}}$ is cyclic. ∎
Finally we recall the following result on arithmetical isomorphisms which can
be found in [24, IV].
###### Proposition 2.20.
Let $G$ be a finite group with subgroups $H_{1},H_{2}\leq G$, let $R$ be an
integral domain, let $A$ be an $R[G]$-module, and let $A_{1}\coloneqq
A^{H_{1}}$ and $A_{2}\coloneqq A^{H_{2}}$ be the $R$-submodules of $A$ fixed
by $H_{1}$ and $H_{2}$, respectively. Every
$M\in\operatorname{Hom}_{R[G]}(R[H_{1}\backslash G],R[H_{2}\backslash G])$
with $\operatorname{det}M\in R^{\times}$ induces an $R$-module isomorphism
$\delta_{M}\colon A_{1}\to A_{2}$.
###### Proof.
See [24, Theorem IV.1.6a]. ∎
## 3 Stronger forms of arithmetic equivalence
Recall that a finite group $K$ is said to be _cyclic modulo_ $p$ (or
$p$-_hypo-elementary_) if the quotient of $K$ by the intersection of its
$p$-Sylow subgroups (its $p$-core) is cyclic. For the sake of brevity we shall
simply call such a group _$p$ -cyclic_. The class of $p$-cyclic groups
includes all $p$-groups and all cyclic groups.
###### Proposition 3.1.
Let $G$ be a finite group and $p$ a prime. For $H_{1},H_{2}\leq G$ the
following are equivalent:
1. (i)
There is a $G$-class preserving bijection between the sets of $p$-cyclic
subgroups of $H_{1}$ and $H_{2}$;
2. (ii)
$\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every $p$-cyclic $K\leq G$;
3. (iii)
the $G$-sets $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$ are isomorphic as
$K$-sets for every $p$-cyclic $K\leq G$;
4. (iv)
$\mathbf{Z}_{p}[H_{1}\backslash G]\simeq\mathbf{Z}_{p}[H_{2}\backslash G]$;
5. (v)
$\mathbf{F}_{p}[H_{1}\backslash G]\simeq\mathbf{F}_{p}[H_{2}\backslash G]$;
6. (vi)
$p\nmid d(H_{1},H_{2})$.
Moreover, in (ii) and (iii) one can replace “$K\leq G$” with “$K\leq H_{1}$ or
$K\leq H_{2}$”.
###### Proof.
The equivalence of (i), (ii), (iii) is given by Proposition 2.2. The
equivalence of (ii) and (iv) follows from [42, Proposition 3.1] (attributed to
Conlon [10]). The equivalence of (iv) and (v) is given by [17, Theorem
2.9(i)]. The equivalence of (v) and (vi) is immediate, since
$\mathbf{F}_{p}[H_{1}\backslash G]\simeq\mathbf{F}_{p}[H_{2}\backslash G]$ if
and only if there exists
$M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])$ whose reduction modulo $p$ is invertible,
equivalently, $p\nmid\operatorname{det}M$. That the weakened forms of (ii)
and (iii) suffice follows from Proposition 2.2. ∎
###### Definition 3.2.
Let $H_{1}$ and $H_{2}$ be subgroups of a finite group $G$. If $\mathbf{Z}_{p}[H_{1}\backslash
G]\simeq\mathbf{Z}_{p}[H_{2}\backslash G]$ for every prime $p$ then $H_{1}$
and $H_{2}$ are _locally integrally equivalent_ , and if
$\mathbf{Z}[H_{1}\backslash G]\simeq\mathbf{Z}[H_{2}\backslash G]$ then they
are _integrally equivalent_.
###### Remark 3.3.
Two $\mathbf{Z}[G]$-modules that are isomorphic as $\mathbf{Z}_{p}[G]$-modules
for every prime $p$ are said to lie in the same genus [18, 42]; subgroups
$H_{1},H_{2}\leq G$ are locally integrally equivalent if and only if the
permutation modules $\mathbf{Z}[H_{1}\backslash G]$ and
$\mathbf{Z}[H_{2}\backslash G]$ lie in the same genus.
Proposition 3.1 implies that subgroups $H_{1},H_{2}\leq G$ are locally
integrally equivalent if and only if
$d(H_{1},H_{2})=\gcd\left\\{\operatorname{det}M:M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])\right\\}=1,$
in which case there is a finite set of matrices
$M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])$ whose determinants have trivial GCD.
Integral equivalence holds if and only if a singleton set with this property
exists, that is, $\operatorname{det}M=\pm 1$ for some
$M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])$. Rational equivalence only requires
$d(H_{1},H_{2})\neq 0$ and is obviously implied by local integral equivalence.
Essentially only one nontrivial example of integral equivalence is known, due
to Scott [42], in which $G\simeq{\mathbf{PSL}}_{2}(29)$ and $H_{1}$ and
$H_{2}$ are nonconjugate subgroups of $G$ isomorphic to the alternating group
$A_{5}$ that are conjugate in ${\mathbf{PGL}}_{2}(29)$; one can use this
example to construct others, but these all have a subgroup with a quotient
isomorphic to ${\mathbf{PSL}}_{2}(29)$. As noted by Scott and proved in
Theorem 3.9 below, for every prime $p\equiv\pm 29\bmod 120$ the group
${\mathbf{PSL}}_{2}(p)$ contains nonconjugate subgroups isomorphic to $A_{5}$
that are locally integrally equivalent. But with the exception of $p=29$ it is
not known whether these subgroups are also integrally equivalent.
###### Proposition 3.4.
Let $K_{1}$ and $K_{2}$ be number fields with common Galois closure $L$, and
let $H_{1}:=\operatorname{Gal}(L/K_{1})$, $H_{2}:=\operatorname{Gal}(L/K_{2})$
be locally integrally equivalent subgroups of
$G:=\operatorname{Gal}(L/\mathbf{Q})$. Then the following hold:
1. (i)
$K_{1}$ and $K_{2}$ are arithmetically equivalent;
2. (ii)
the class groups of $K_{1}$ and $K_{2}$ are isomorphic;
3. (iii)
the regulators of $K_{1}$ and $K_{2}$ are equal;
4. (iv)
for every prime $p$ the products of the ramification indices of the primes of
$K_{1}$ and $K_{2}$ above $p$ coincide.
###### Proof.
As noted above, local integral equivalence implies rational equivalence, so
(i) follows from Proposition 2.6 and Theorem 2.10. Proposition 3.1 and [36,
Theorem 3] together imply that the class groups of $K_{1}$ and $K_{2}$ have
isomorphic $p$-Sylow subgroups for every prime $p$ and are therefore
isomorphic (since they are abelian), so (ii) holds. Properties (i) and (ii)
together imply (iii), by Theorem 2.12 and the analytic class number formula.
Local integral equivalence implies $d(H_{1},H_{2})=1$, which when combined
with [24, Theorem IV.2.3] implies (iv). ∎
For number fields satisfying the hypothesis of Proposition 3.4, all the
quantities that appear in the analytic class number formula (2.2) must
coincide. However, such fields need not be locally isomorphic, as shown by the
example in §4.2, and locally isomorphic number fields may have different class
numbers and regulators, as shown by the example in Remark 2.18.
We now introduce a strictly stronger notion of equivalence that implies both
local integral equivalence and local isomorphism of corresponding number
fields.
###### Definition 3.5.
Subgroups $H_{1}$ and $H_{2}$ of a finite group $G$ are _solvably equivalent_
if they satisfy the following equivalent properties (as guaranteed by
Proposition 2.2):
1. (i)
There is a $G$-class preserving bijection between the sets of solvable
subgroups of $H_{1}$ and $H_{2}$;
2. (ii)
$\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every solvable $K\leq G$;
3. (iii)
the $G$-sets $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$ are isomorphic as
$K$-sets for every solvable $K\leq G$.
Solvably equivalent subgroups are always locally integrally equivalent, since
$p$-cyclic groups are solvable, but as demonstrated by the example in §4.3,
locally integrally equivalent subgroups need not be solvably equivalent. As
shown by the example in §4.4, solvably equivalent subgroups need not be
integrally equivalent, but it is not clear whether the converse holds; the
integrally equivalent subgroups of ${\mathbf{PSL}}_{2}(29)$ in Scott’s example
are solvably equivalent, but as noted in the introduction, it is not clear
whether this is always true, nor is it clear that integral equivalence
guarantees local isomorphism of corresponding number fields (this is not true
of local integral equivalence, and if it were true for integral equivalence
then property (2) in Theorem 3 in [38] could have been included in Theorem 2
in [38]).
###### Question 3.6.
Is there a Gassmann triple $(G,H_{1},H_{2})$ in which $H_{1}$ and $H_{2}$ are
integrally equivalent but not solvably equivalent? More precisely, is there a
group $G$ containing subgroups $H_{1},H_{2}$ and a solvable subgroup $K$ such
that $\mathbf{Z}[H_{1}\backslash G]$ and $\mathbf{Z}[H_{2}\backslash G]$ are
isomorphic as $\mathbf{Z}[G]$-modules but not as $K$-sets?
###### Proposition 3.7.
Let $K_{1}$ and $K_{2}$ be number fields with the same Galois closure $L$, and
put $H_{1}:=\operatorname{Gal}(L/K_{1})$ and
$H_{2}:=\operatorname{Gal}(L/K_{2})$. If $H_{1}$ and $H_{2}$ are solvably
equivalent subgroups of $G:=\operatorname{Gal}(L/\mathbf{Q})$ then
1. (i)
$K_{1}$ and $K_{2}$ are arithmetically equivalent;
2. (ii)
$K_{1}$ and $K_{2}$ have isomorphic class groups and equal regulators;
3. (iii)
$K_{1}$ and $K_{2}$ are locally isomorphic, and in particular there is a
bijection between the primes of $K_{1}$ and $K_{2}$ that preserves both
inertia degrees and ramification indices;
4. (iv)
the adele rings $\mathbf{A}_{K_{1}}$ and $\mathbf{A}_{K_{2}}$ are isomorphic
(as topological rings and $\mathbf{A}_{\mathbf{Q}}$-algebras).
###### Proof.
Solvable equivalence implies local integral equivalence, so (i) and (ii) both
follow from Proposition 3.4. For each prime $\mathfrak{p}$ of $L$ the
decomposition subgroup
$D_{\mathfrak{p}}\subseteq\operatorname{Gal}(L/\mathbf{Q})$ is solvable, so we
have an isomorphism of $D_{\mathfrak{p}}$-sets $[H_{1}\backslash
G]\simeq[H_{2}\backslash G]$, which implies (iii), by Proposition 2.19, and
(iv) is then implied by Theorem 2.16. ∎
###### Remark 3.8.
In Proposition 3.7, the hypothesis that $H_{1}$ and $H_{2}$ are solvably
equivalent is stronger than necessary. It could be replaced, for example, by
the condition that $\chi_{H_{1}}(K)=\chi_{H_{2}}(K)$ for every $K\leq G$ with
normal subgroups $W\leq I$ such that $W$ is a $p$-group, $I/W$ is cyclic of
order prime to $p$, and $K/I$ is cyclic. Even this is stronger than necessary,
since, for example, it is satisfied by both $C_{2}^{4}$ and
${\mathbf{SL}}_{2}(3)$, neither of which occurs as the Galois group of an
extension of $\mathbf{Q}_{p}$ for any prime $p$ (the former contains too many
normal subgroups of index 2 and the latter was ruled out by Weil in [50,
§15]).
The following theorem gives an infinite family of groups, each of which
contains a pair of nonconjugate solvably equivalent subgroups.
###### Theorem 3.9.
Let $p\equiv\pm 29\bmod 120$ be prime. The group
${\mathbf{SL}}_{2}(\mathbf{F}_{p})$ contains a pair of nonconjugate solvably
equivalent subgroups $H_{1},H_{2}$ whose projective images are nonconjugate
solvably equivalent subgroups of ${\mathbf{PSL}}_{2}(\mathbf{F}_{p})$
isomorphic to the alternating group $A_{5}$.
###### Proof.
It follows from [48, Lemma 3.21.3c] that for $p\equiv\pm 1\bmod 5$, up to
conjugacy in ${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$ there is a unique subgroup
$H_{1}$ of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ with projective image
isomorphic to $A_{5}$; it is isomorphic to
${\mathbf{SL}}_{2}(\mathbf{F}_{5})$. The outer automorphism of
${\mathbf{SL}}_{2}(\mathbf{F}_{p})$ corresponds to conjugation by an element
with nonsquare determinant; let
$\sigma\coloneqq\bigl{[}\begin{smallmatrix}r&0\\\
0&1\end{smallmatrix}\bigr{]}$ be such an element, with
$r\in\mathbf{F}_{\\!p}^{\times}-\mathbf{F}_{\\!p}^{\times 2}$. Conjugation by
$\sigma$ fixes all but four of the conjugacy classes in
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$: it interchanges the conjugacy classes
of $\bigl{[}\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix}\bigr{]}$ and
$\bigl{[}\begin{smallmatrix}1&r\\\ 0&1\end{smallmatrix}\bigr{]}$, and also
those of $\bigl{[}\begin{smallmatrix}-1&-1\\\ 0&-1\end{smallmatrix}\bigr{]}$
and $\bigl{[}\begin{smallmatrix}-1&-r\\\ 0&-1\end{smallmatrix}\bigr{]}$ (these
are the conjugacy classes of elements of order divisible by $p$).
Let $H_{2}\coloneqq\sigma H_{1}\sigma^{-1}$; the groups $H_{1}$ and $H_{2}$
are not conjugate in ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$, by [13, Theorem
4.1]. These groups do not contain any elements of order divisible by $p$,
since $p\geq 29$ and $\\#{\mathbf{SL}}_{2}(\mathbf{F}_{5})=2^{3}\cdot 3\cdot
5$. Conjugation by $\sigma$ thus defines an
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy preserving bijection between
$H_{1}$ and $H_{2}$, implying that $H_{1}$ and $H_{2}$ are rationally
equivalent subgroups of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$.
To show that $H_{1}$ and $H_{2}$ are solvably equivalent, it suffices to show
that $\sigma$ defines an ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy
class preserving bijection of solvable subgroups of $H_{1}$ and $H_{2}$, and
having proved rational equivalence we only need to consider the noncyclic
solvable subgroups of $H_{1}$ and $H_{2}$. Up to isomorphism, there are four
possibilities for the image of such a subgroup in
${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$: $D_{2}$, $D_{3}$, $D_{5}$, and
$A_{4}$, where $D_{2}\coloneqq C_{2}\times C_{2}$ is the Klein group. It
follows from Proposition 3.13 below that there is exactly one
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy class of subgroups isomorphic
to $D_{2}$, $D_{3}$, $D_{5}$, $A_{4}$ when $p\equiv\pm 3\bmod 8$, $p\equiv\pm
5\bmod 12$, $p\equiv\pm 9\bmod 20$, and $p\equiv\pm 3\bmod 8$, respectively.
These constraints are simultaneously met precisely when $p\equiv\pm 29\bmod
120$, and in this situation it is clear that $\sigma$ must define an
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy class preserving bijection of
solvable subgroups of $H_{1}$ and $H_{2}$, since it preserves isomorphism
classes.
Finally, note that the conjugacy class preserving bijection between solvable
subgroups of $H_{1}$ and $H_{2}$ descends to
${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$, while $H_{1}$ and $H_{2}$ both
contain $-1$ and remain nonconjugate in
${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$. ∎
###### Remark 3.10.
Theorem 3.9 accounts for all nontrivial pairs of solvably equivalent subgroups
of ${\mathbf{SL}}_{2}(\mathbf{F}_{p})$, in fact all nontrivial pairs of
locally integrally equivalent subgroups of
${\mathbf{SL}}_{2}(\mathbf{F}_{p})$, as noted by Scott [42]. Up to a central
extension the same applies to subgroups of
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$, since every nonsolvable subgroup of
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$ that does not contain
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ has projective image $A_{5}$ [43, §2].
###### Remark 3.11.
As proved by Zywina [52], the group ${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$
can be realized as the Galois group of a number field for every prime $p$.
This implies that there are infinitely many distinct examples of pairs of
nonisomorphic solvably equivalent number fields whose Galois groups do not
admit a common quotient.
###### Remark 3.12.
As shown in §4.4, subgroups of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ are not
the only source of nontrivial solvably equivalent pairs of subgroups, and one
can do better than the minimal degree 203 admitted by Theorem 3.9: degree 96
is possible.
Recall that each subgroup of ${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$ of order
prime to $p$ can be classified according to the isomorphism class of its image
in ${\mathbf{PGL}}_{2}(\mathbf{F}_{\\!p})$, which must be cyclic, dihedral, or
one of $A_{4}$, $S_{4}$, $A_{5}$; see [43, §2], for example. Note that we
consider $D_{2}:=C_{2}\times C_{2}$ to be a dihedral group. The proposition
below characterizes the isomorphism classes of order prime to $p$ that arise
in ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$, up to conjugacy in
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$; see [48, §3] for an analogous
classification for conjugacy classes of subgroups of
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$ (including those of order divisible by
$p$), which we will use in the proof of the proposition.
We use the notation $2D_{n}$ to denote the binary dihedral group of order
$4n$; these arise as subgroups of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$
containing $-1$ with projective image $D_{n}$, and we similarly define $2A_{4}$,
$2S_{4}$, $2A_{5}$. We say that a conjugacy class of subgroups of
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ is $C_{n}$ (resp. $2D_{n}$, $2A_{4}$,
$2S_{4}$, $2A_{5}$) if it is the conjugacy class of a subgroup isomorphic to
$C_{n}$ (resp. $2D_{n}$, $2A_{4}$, $2S_{4}$, $2A_{5}$).
###### Proposition 3.13.
Let $p>3$ be prime, and let $S$ be the set of integers that divide either
$p-1$ or $p+1$. Up to conjugacy in ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ the
subgroups of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ of order prime to $p$ are
as follows:
* •
For each integer $n\geq 1$ with $p\equiv\pm 1\bmod n$, a single conjugacy
class $C_{n}$.
* •
For each integer $2n>2$ with $p\equiv\pm 1\bmod 4n$, two conjugacy classes
$2D_{n}$.
* •
For each integer $2n>2$ with $p\equiv\pm 1\bmod 2n$ and $p\not\equiv\pm 1\bmod
4n$, a single conjugacy class $2D_{n}$.
* •
Two conjugacy classes $2A_{4}$ if $p\equiv\pm 1\bmod 8$ and one otherwise.
* •
Two conjugacy classes $2S_{4}$ if $p\equiv\pm 1\bmod 8$ and none otherwise.
* •
Two conjugacy classes $2A_{5}\simeq{\mathbf{SL}}(2,5)$ if $p\equiv\pm 1\bmod
5$ and none otherwise.
###### Proof.
Every cyclic subgroup of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ of order prime to
$p$ must be conjugate in ${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$ to a subgroup
of one of the two Cartan subgroups $C$: the _split Cartan_ isomorphic to
$\mathbf{F}_{\\!p}^{\times}\times\mathbf{F}_{\\!p}^{\times}$, or the _nonsplit
Cartan_ isomorphic to $\mathbf{F}_{p^{2}}^{\times}$. The intersection of $C$
with ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ is cyclic of order $p-1$ or $p+1$,
and the intersection of these groups is the cyclic group $\\{\pm 1\\}$ of
order $2=\gcd(p-1,p+1)$. It follows that up to
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy there is a unique cyclic
subgroup $C_{n}$ of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ of order $n$ for
each $n$ dividing $p-1$ or $p+1$, and [13, Theorem 4.1] implies that it is
also unique up to ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy.
For a Cartan subgroup $C$ of ${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$, let
$C^{+}$ denote its normalizer. It follows from [48, Lemma 3.13] that for each
subgroup $H$ of $C\cap{\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ there is at most
one subgroup $D$ of $C^{+}\cap{\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ with
dihedral image in ${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$, and that subgroup must contain $-1$.
It follows from [48, Lemmas 3.16 and 3.18] that there is exactly one such $D$ for
each $H\neq\\{\pm 1\\}$ that contains $-1$, up to conjugacy in
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$. It follows that for each integer
$2n>2$ dividing $p-1$ or $p+1$, up to
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy there is a unique conjugacy
class $2D_{n}$ of ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$, and it follows from
[13, Theorem 4.1] and Remark 3.14 below that this
${\mathbf{GL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy class splits into two
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$-conjugacy classes if and only if
$p\equiv\pm 1\bmod 4n$.
The statements for $2A_{4}$, $2S_{4}$, $2A_{5}$ are immediate from [48, Lemma
3.21] and [13, Theorem 4.1]. ∎
###### Remark 3.14.
There is a minor error in the statement [13, Theorem 4.1] regarding the group
$2D_{2}$, which is denoted $BD_{4\cdot 2}$ in [13]. There are two conjugacy
classes $2D_{2}$ in ${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ when
$\sqrt{2}\in\mathbf{F}_{\\!p}$, equivalently, when $p\equiv\pm 1\bmod 8$, but
only one otherwise; this follows from the fact that the normalizer of
$BD_{4\cdot 2}$ in ${\mathbf{SL}}_{2}(\overline{\mathbf{F}}_{p})$ is $2S_{4}$
(not $2A_{4}$ as claimed in [13]), which is present in
${\mathbf{SL}}_{2}(\mathbf{F}_{\\!p})$ only when
$\sqrt{2}\in\mathbf{F}_{\\!p}$. The author is grateful to Yuval Flicker for
clarifying this point.
## 4 Computational results
In this section we present examples that realize the claims made in the
introduction, including that local integral equivalence does not imply group
isomorphism (§4.1), local isomorphism of number fields (§4.2), or integral
equivalence (§4.3), and that solvable equivalence does not imply integral
equivalence (§4.4). We also give a degree 32 example of locally integrally
equivalent number fields in §4.3 (best possible), and a degree 96 example of
solvably equivalent number fields in §4.4 (best known).
### 4.1 Locally integrally equivalent subgroups need not be isomorphic
In [38, Question 1], Prasad asks if integrally equivalent subgroups are
necessarily isomorphic. This is true in Scott’s example with two subgroups of
${\mathbf{PSL}}_{2}(\mathbf{F}_{29})$ isomorphic to the alternating group
$A_{5}$. The following example shows that locally integrally equivalent
subgroups need not be isomorphic. Let $G$ be the symmetric group $S_{21}$ and
consider the subgroups
$\displaystyle H_{1}\coloneqq\bigl{\langle}\,$ $\displaystyle(4\ 5)(6\ 15\ 7\
14)(8\ 17\ 9\ 16)(10\ 19\ 11\ 18)(12\ 21\ 13\ 20),$ $\displaystyle(1\ 2)(3\
5)(6\ 20\ 8\ 18)(7\ 21\ 9\ 19)(10\ 14\ 12\ 16)(11\ 15\ 13\
17)\,\bigr{\rangle},$ $\displaystyle H_{2}\coloneqq\bigl{\langle}\,$
$\displaystyle(4\ 5)(6\ 16\ 8\ 14)(7\ 17\ 9\ 15)(10\ 20\ 12\ 18)(11\ 21\ 13\
19),$ $\displaystyle(1\ 2)(3\ 5)(6\ 20\ 8\ 18)(7\ 21\ 9\ 19)(10\ 17\ 12\
15)(11\ 16\ 13\ 14)\,\bigr{\rangle},$
with GAP identifiers $\langle 48,12\rangle$ and $\langle 48,13\rangle$,
respectively. Each contains 41 subgroups that are $p$-cyclic for some prime
$p$. These fall into 15 distinct $G$-conjugacy classes and 11 distinct
isomorphism classes, which makes it easy to find a $G$-conjugacy class
preserving bijection between them (if one takes into account the isomorphism
class and the number of subgroups in each conjugacy class, there are only 2
choices to consider). The subgroups $H_{1},H_{2}\leq G$ are thus locally
integrally equivalent, but not isomorphic. This negatively answers Question
2.11 posed by Guralnick and Weiss in [18].
This example is realized by infinitely many number fields: over $\mathbf{Q}$
the Galois group of a generic polynomial of degree 21 is $G=S_{21}$ and the
fixed fields of $H_{1}$ and $H_{2}$ are locally integrally equivalent number
fields of degree $21!/48$. It is one of many that were found by applying
Corollary 2.5 to the class $\mathcal{P}$ of groups that are $p$-cyclic for some
prime $p$: computing $\mathcal{P}$-statistics for the isomorphism classes of
groups of order up to 255 already finds 107 pairs of isomorphism classes with
the same $\mathcal{P}$-statistics, including four isomorphism classes of
groups of order 192 with the same $\mathcal{P}$-statistics. One can often find
permutation representations of degree less than $|H_{1}|=|H_{2}|$ that also
work, as happens above.
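As a lightweight sanity check that does not require GAP or Magma (the script below is written for this exposition and is not the search code described above), one can generate $H_{1}$ and $H_{2}$ inside $S_{21}$ from the generators displayed above and confirm that both have order 48 and identical order statistics, a necessary condition for rational (hence local integral) equivalence.

```python
# Generate H1, H2 <= S_21 from the generators above and compare order statistics.
from collections import Counter

def perm(cycles, n=21):
    img = list(range(n))
    for cyc in cycles:                               # 1-based cycles to a 0-based image tuple
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            img[a - 1] = b - 1
    return tuple(img)

def close(gens):                                     # subgroup generated by gens
    n = len(gens[0])
    group = {tuple(range(n))}
    frontier = list(group)
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = tuple(g[p[i]] for i in range(n))
                if q not in group:
                    group.add(q)
                    new.append(q)
        frontier = new
    return group

def elt_order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = tuple(p[q[i]] for i in range(len(p))), k + 1
    return k

H1 = close([perm([(4,5), (6,15,7,14), (8,17,9,16), (10,19,11,18), (12,21,13,20)]),
            perm([(1,2), (3,5), (6,20,8,18), (7,21,9,19), (10,14,12,16), (11,15,13,17)])])
H2 = close([perm([(4,5), (6,16,8,14), (7,17,9,15), (10,20,12,18), (11,21,13,19)]),
            perm([(1,2), (3,5), (6,20,8,18), (7,21,9,19), (10,17,12,15), (11,16,13,14)])])

stats1 = Counter(elt_order(p) for p in H1)
stats2 = Counter(elt_order(p) for p in H2)
print(len(H1), len(H2), stats1 == stats2)            # expected: 48 48 True
```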
###### Question 4.1.
Are solvably equivalent subgroups of a finite group $G$ necessarily
isomorphic?
Question 4.1 is equivalent to asking whether the $\mathcal{P}$-statistics of a
nonsolvable group determine its isomorphism class, where $\mathcal{P}$
is the class of solvable groups (by Corollary 2.5). For the 1022 isomorphism
classes of nonsolvable groups of order less than 2000, these
$\mathcal{P}$-statistics are all distinct, so any pair of nonisomorphic
solvably equivalent subgroups must have order at least 2000.
### 4.2 Local integral equivalence does not imply local isomorphism of number
fields
Let $G$ be the group $A_{4}\times S_{5}$ with GAP identifier $\langle
1440,5846\rangle$. There is a unique pair of nonconjugate locally integrally
equivalent subgroups $H_{1},H_{2}\leq G$, both of which are isomorphic to the
dihedral group $D_{6}$ of order 12. The groups $G$, $H_{1}$, $H_{2}$ can be
explicitly represented as subgroups of $S_{9}$ via
$\displaystyle G$ $\displaystyle\coloneqq\bigl{\langle}(1\ 2\ 3)(5\ 6\ 7\ 8\
9),\,(1\ 2)(3\ 4)(5\ 6)\bigr{\rangle},$ $\displaystyle H_{1}$
$\displaystyle\coloneqq\bigl{\langle}(1\ 2)(3\ 4)(5\ 6\ 7)(8\ 9),\,(1\ 3)(2\
4)(5\ 6)\bigr{\rangle},$ $\displaystyle H_{2}$
$\displaystyle\coloneqq\bigl{\langle}(1\ 2)(3\ 4)(5\ 6\ 7)(8\ 9),\,(1\ 4)(2\
3)(5\ 6)\bigr{\rangle},$
and $H_{1}\cap H_{2}$ is cyclic of order 6. The four maximal subgroups of
$H_{1}$, isomorphic to $C_{2}^{2},S_{3},S_{3},C_{6}$, correspond to distinct
conjugacy classes of subgroups of $G$, and these are precisely the
$G$-conjugacy classes of the four maximal subgroups of $H_{2}$. There is thus
a $G$-conjugacy preserving bijection between the proper subgroups of $H_{1}$
and $H_{2}$ (all of which are $p$-cyclic for some prime $p$), and the group
$D_{6}\simeq H_{1},H_{2}$ is not $p$-cyclic for any prime $p$. It follows that
$H_{1}$ and $H_{2}$ are locally integrally equivalent subgroups of $G$.
The subgroups $H_{1}$ and $H_{2}$ are not $G$-conjugate, even though they are
$S_{9}$-conjugate, as can be verified by comparing their permutation
characters: $\chi_{H_{1}}(H_{1})=4$ differs from $\chi_{H_{2}}(H_{1})=0$, and
$\chi_{H_{1}}(H_{2})=0$ differs from $\chi_{H_{2}}(H_{2})=4$. The group
$D_{6}$ arises as a Galois group of extensions of $\mathbf{Q}_{p}$ for
$p\not\equiv 1\bmod 6$, and it follows from Proposition 2.19 that if $H_{1}$
is the decomposition group of a prime above $p$ in a Galois extension
$L/\mathbf{Q}$ with Galois group $G$, then the fixed fields $K_{1}\coloneqq
L^{H_{1}}$ and $K_{2}\coloneqq L^{H_{2}}$ are locally integrally equivalent
fields that cannot be locally isomorphic because four primes of $K_{1}$ above
$p$ must have residue field degree 1 and ramification index 1 (corresponding
to the four cosets in $[H_{1}\backslash G]$ fixed by $H_{1}$), but no primes
of $K_{2}$ above $p$ can have residue field degree 1 and ramification index 1.
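The permutation character values quoted above can be recomputed directly from formula (2.1). The script below (illustrative, mirroring the helpers in the earlier sketches rather than any cited implementation) builds $G$, $H_{1}$, and $H_{2}$ from the permutation generators given at the start of this subsection.

```python
# Recompute chi_{H1}(H1), chi_{H2}(H1), chi_{H1}(H2), chi_{H2}(H2) via (2.1)
# for G = <(1 2 3)(5 6 7 8 9), (1 2)(3 4)(5 6)> <= S_9 and the subgroups H1, H2 above.
def perm(cycles, n=9):
    img = list(range(n))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            img[a - 1] = b - 1
    return tuple(img)

def mul(p, q):                                       # apply p, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    img = [0] * len(p)
    for i, pi in enumerate(p):
        img[pi] = i
    return tuple(img)

def close(gens):
    group = {tuple(range(len(gens[0])))}
    frontier = list(group)
    while frontier:
        new = []
        for x in frontier:
            for g in gens:
                y = mul(x, g)
                if y not in group:
                    group.add(y)
                    new.append(y)
        frontier = new
    return group

G  = close([perm([(1,2,3), (5,6,7,8,9)]), perm([(1,2), (3,4), (5,6)])])
H1 = close([perm([(1,2), (3,4), (5,6,7), (8,9)]), perm([(1,3), (2,4), (5,6)])])
H2 = close([perm([(1,2), (3,4), (5,6,7), (8,9)]), perm([(1,4), (2,3), (5,6)])])

def chi(H, K):                                       # formula (2.1)
    return sum(all(mul(mul(g, k), inv(g)) in H for k in K) for g in G) // len(H)

print(len(G), len(H1), len(H2))                      # expected: 1440 12 12
print(chi(H1, H1), chi(H2, H1), chi(H1, H2), chi(H2, H2))   # expected: 4 0 0 4
```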
To realize such an example it suffices to find a pair of linearly disjoint
$A_{4}$ and $S_{5}$ fields such that there is a prime of the compositum
with decomposition group conjugate to $H_{1}$ or $H_{2}$. A search of $A_{4}$
and $S_{5}$ fields in the $L$-functions and modular forms database (LMFDB)
unramified away from 2,3,5,7 finds a suitable pair: we may take the Galois
closures for the fields $F_{1}\coloneqq\mathbf{Q}[x]/(x^{4}-6x^{2}-8x+60)$ and
$F_{2}\coloneqq\mathbf{Q}[x]/(x^{5}+5x^{3}+10x-2)$ with LMFDB labels
4.0.254016.2 and 5.1.500000.1, respectively. The compositum of their Galois
closures is a degree 1440 number field $L$ with Galois group $G$. The 120
primes of $L$ above $2$ all have residue degree 2, ramification index 6,
decomposition group conjugate to $H_{1}$, and inertia group conjugate to
$H_{1}\cap H_{2}$; the local algebra $L\otimes_{\mathbf{Q}}\mathbf{Q}_{2}$ is
isomorphic to $k^{120}$, where $k$ is the unique $D_{6}$-extension of
$\mathbf{Q}_{2}$ of degree 12 containing $\mathbf{Q}_{2}(\sqrt{2})$, with
LMFDB label 2.12.22.60.
Using the GaloisSubgroup function in Magma [5] one can compute defining
polynomials of degree 120 for the number fields $K_{1}\coloneqq L^{H_{1}}$ and
$K_{2}\coloneqq L^{H_{2}}$, and using the $p$-adic valuation extensions method
in Sage [41] one can determine the residue field degrees and ramification
indices of the primes above 2 in $K_{1}$ and $K_{2}$ by computing all
extensions of the $2$-adic valuation of $\mathbf{Q}$ to $K_{1}$ and $K_{2}$.
We have
$\displaystyle 2\mathcal{O}_{K_{1}}$
$\displaystyle=\mathfrak{p}_{1}\mathfrak{p}_{2}\mathfrak{p}_{3}\mathfrak{p}_{4}\mathfrak{p}_{5}^{6}\mathfrak{p}_{6}^{6}\mathfrak{p}_{7}^{6}\mathfrak{p}_{8}^{6}\mathfrak{p}_{9}^{6}\mathfrak{p}_{10}^{6}\mathfrak{p}_{11}^{6}\mathfrak{p}_{12}^{6}\mathfrak{p}_{13}^{2}\mathfrak{p}_{14}^{2}\mathfrak{p}_{15}^{3}\mathfrak{p}_{16}^{3}\mathfrak{p}_{17}^{6}\mathfrak{p}_{18}^{6}\mathfrak{p}_{19}^{6}\mathfrak{p}_{20}^{6},$
$\displaystyle 2\mathcal{O}_{K_{2}}$
$\displaystyle=\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}^{2}\mathfrak{q}_{4}^{2}\mathfrak{q}_{5}^{3}\mathfrak{q}_{6}^{3}\mathfrak{q}_{7}^{3}\mathfrak{q}_{8}^{3}\mathfrak{q}_{9}^{6}\mathfrak{q}_{10}^{6}\mathfrak{q}_{11}^{6}\mathfrak{q}_{12}^{6}\mathfrak{q}_{13}\mathfrak{q}_{14}\mathfrak{q}_{15}^{6}\mathfrak{q}_{16}^{6}\mathfrak{q}_{17}^{6}\mathfrak{q}_{18}^{6}\mathfrak{q}_{19}^{6}\mathfrak{q}_{20}^{6},$
where the primes $\mathfrak{p}_{i}$ of $K_{1}$ and $\mathfrak{q}_{i}$ of
$K_{2}$ have residue degree 1 for $i\leq 12$ and residue degree $2$ for
$i>12$.
###### Remark 4.2.
This example can be viewed as a refinement of the example of Mantilla-Soler
[30] noted in Remark 2.13: the sums 82 and 86 of the ramification indices
differ. But in the Mantilla-Soler example the products of the ramification
indices also differ, which is possible because the subgroups are rationally
equivalent but not locally integrally equivalent. Proposition 3.4 shows that
this is not possible when the subgroups are locally integrally equivalent. To
our knowledge, this is the first example of a pair of arithmetically
equivalent number fields and a prime $p$ for which the sums of the
ramification indices of the primes above $p$ differ but the products do not.
Finally, we note that the groups $H_{1}$ and $H_{2}$ are isomorphic to
$D_{6}$, hence solvable, but the values of the permutation characters
$\chi_{H_{1}}$ and $\chi_{H_{2}}$ differ on these groups, as noted above, so
they are not solvably equivalent, which shows that solvable equivalence is a
strictly stronger condition (as one would expect).
### 4.3 A minimal degree example of local integral equivalence
An exhaustive search of isomorphism classes of groups of order less than 1024
in the small groups database [3] finds 74 groups $G$ that contain nonconjugate
$H_{1},H_{2}\leq G$ that are locally integrally equivalent and have trivial
normal core in $G$ (meaning that $(G,H_{1},H_{2})$ is a faithful Gassmann
triple). The order of $G$ is necessarily not a prime power, since $p$-groups
can be locally integrally equivalent only if they are conjugate, so only
1,206,112 of the 11,759,892 groups of order less than 1024 need to be checked.
Of these, two have order 384, seventeen have order 576, fifty have order 768,
and five have order 864, with the index of $H_{1},H_{2}$ in $G$ taking values
in $\\{32,48,64,72\\}$.
The two groups $G$ of order 384 have GAP identifiers $\langle
384,18046\rangle$ and $\langle 384,18050\rangle$, and are isomorphic to
transitive permutation groups of degree 32 with LMFDB labels 32T9403 and
32T9408, following the labeling convention in [7]. Both are (nonsplit)
2-extensions of $D_{4}\times S_{4}$, making it feasible to explicitly
construct examples of nonconjugate number fields $K_{1}$ and $K_{2}$ of degree
32 with common Galois closure $L$ with $G=\operatorname{Gal}(L/\mathbf{Q})$,
and $H_{1}=\operatorname{Gal}(L/K_{1})$ and
$H_{2}=\operatorname{Gal}(L/K_{2})$ locally integrally equivalent, by taking a
quadratic extension of the compositum of the Galois closure of two suitably
chosen $D_{4}$ and $S_{4}$ quartic number fields. Below we describe one such
example in detail.
The Magma computer algebra system [5] includes a database of transitive
permutation groups of degree up to 48 whose construction is described in [7,
20, 21]. An exhaustive analysis of the 40,238 transitive groups of degree less
than 32 finds none that contain a pair of locally integrally equivalent
subgroups of index equal to the degree. The following example thus achieves
the minimal possible degree $32$; for comparison, the minimal degree of
arithmetically equivalent number fields is $7$; see [6].
We begin with the $D_{4}$ field $\mathbf{Q}[x]/(x^{4}-6x^{2}-9)$ and the
$S_{4}$ field $\mathbf{Q}[x]/(x^{4}-2x^{3}-6x+3)$, with LMFDB labels
4.2.9216.1 and 4.2.3888.1, which are linearly disjoint over $\mathbf{Q}$. The
compositum of their Galois closures coincides with the splitting field of the
polynomial
$x^{16}+12x^{14}+72x^{12}+120x^{10}-234x^{8}+108x^{6}+396x^{4}-432x^{2}+81,$
which has Galois group $D_{4}\times S_{4}$. The number fields
$K_{1}\coloneqq\mathbf{Q}[x]/(f_{1}(x))$,
$K_{2}\coloneqq\mathbf{Q}[x]/(f_{2}(x))$ defined by
$\displaystyle f_{1}$ $\displaystyle\coloneqq
x^{32}+12x^{28}+72x^{24}+120x^{20}-234x^{16}+108x^{12}+396x^{8}-432x^{4}+81,$
$\displaystyle f_{2}$ $\displaystyle\coloneqq
x^{32}-12x^{28}+72x^{24}-120x^{20}-234x^{16}-108x^{12}+396x^{8}+432x^{4}+81,$
have the same Galois closure $L$ of degree 384. The group
$G\coloneqq\operatorname{Gal}(L/\mathbf{Q})$ is the transitive permutation
group 32T9403, generated by
$\displaystyle\sigma_{0}$
$\displaystyle\coloneqq(3,4,5,6,7,8)(9,10,11,12,13,14)(15,16,17,18,19,20)(21,22,23)(24,25,26)(27,28)(29,30)(31,32),$
$\displaystyle\sigma_{1}$
$\displaystyle\coloneqq(3,5)(6,8)(9,10)(11,14)(12,13)(15,17)(18,20)(21,24)(22,26)(23,25)(27,31)(28,32),$
$\displaystyle\sigma_{2}$
$\displaystyle\coloneqq(1,2)(3,17)(4,16)(5,15)(6,20)(7,19)(8,18)(9,13)(10,12)(22,23)(25,26)(29,30),$
$\displaystyle\sigma_{3}$
$\displaystyle\coloneqq(1,3,2,15)(4,9,16,12)(5,24,17,21)(6,30,18,29)(7,22,19,25)(8,14,20,11)(10,32,13,28)(23,27,26,31).$
The group $G$ contains exactly two conjugacy classes of subgroups of index 32
with trivial normal core, represented by
$H_{1}\coloneqq\langle\sigma_{1},\sigma_{2}\rangle$ and
$H_{2}\coloneqq\langle\sigma_{0},\sigma_{2}\rangle$, both isomorphic to
$D_{6}$. If we view $G$ as acting on the roots of $f_{1}(x)$, then under a
suitable ordering of roots we have $H_{1}=\operatorname{Gal}(L/K_{1})$ and
$H_{2}=\operatorname{Gal}(L/K_{2})$. The subgroups $H_{1}$ and $H_{2}$ are
locally integrally equivalent but not integrally equivalent. Indeed, for a
suitable choice of bases for $[H_{1}\backslash G]$ and $[H_{2}\backslash G]$,
every $M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])$ has the form
$M:=\setcounter{MaxMatrixCols}{32}\begin{bmatrix}x_{8}&x_{8}&x_{5}&x_{8}&x_{8}&x_{7}&x_{8}&x_{5}&x_{8}&x_{2}&x_{1}&x_{8}&x_{6}&x_{8}&x_{8}&x_{7}&x_{7}&x_{8}&x_{7}&x_{4}&x_{3}&x_{3}&x_{1}&x_{7}&x_{8}&x_{6}&x_{2}&x_{4}&x_{5}&x_{6}&x_{7}&x_{8}\\\
x_{8}&x_{8}&x_{7}&x_{8}&x_{8}&x_{5}&x_{8}&x_{7}&x_{8}&x_{3}&x_{4}&x_{8}&x_{7}&x_{8}&x_{8}&x_{5}&x_{6}&x_{8}&x_{5}&x_{4}&x_{2}&x_{2}&x_{1}&x_{6}&x_{8}&x_{7}&x_{3}&x_{1}&x_{7}&x_{7}&x_{6}&x_{8}\\\
x_{7}&x_{6}&x_{8}&x_{3}&x_{7}&x_{8}&x_{5}&x_{1}&x_{2}&x_{8}&x_{7}&x_{5}&x_{8}&x_{3}&x_{7}&x_{8}&x_{8}&x_{2}&x_{1}&x_{5}&x_{8}&x_{8}&x_{6}&x_{8}&x_{6}&x_{4}&x_{8}&x_{7}&x_{8}&x_{8}&x_{4}&x_{7}\\\
x_{8}&x_{1}&x_{6}&x_{8}&x_{8}&x_{2}&x_{8}&x_{5}&x_{8}&x_{7}&x_{8}&x_{4}&x_{5}&x_{8}&x_{1}&x_{7}&x_{7}&x_{8}&x_{7}&x_{8}&x_{5}&x_{6}&x_{8}&x_{2}&x_{8}&x_{6}&x_{7}&x_{8}&x_{3}&x_{3}&x_{7}&x_{4}\\\
x_{8}&x_{8}&x_{6}&x_{8}&x_{8}&x_{7}&x_{8}&x_{6}&x_{8}&x_{2}&x_{4}&x_{8}&x_{5}&x_{8}&x_{8}&x_{7}&x_{7}&x_{8}&x_{7}&x_{1}&x_{3}&x_{3}&x_{4}&x_{7}&x_{8}&x_{5}&x_{2}&x_{1}&x_{6}&x_{5}&x_{7}&x_{8}\\\
x_{5}&x_{7}&x_{8}&x_{2}&x_{6}&x_{8}&x_{7}&x_{4}&x_{3}&x_{8}&x_{5}&x_{7}&x_{8}&x_{2}&x_{5}&x_{8}&x_{8}&x_{3}&x_{1}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{7}&x_{1}&x_{8}&x_{6}&x_{8}&x_{8}&x_{4}&x_{6}\\\
x_{8}&x_{4}&x_{7}&x_{8}&x_{8}&x_{3}&x_{8}&x_{7}&x_{8}&x_{5}&x_{8}&x_{1}&x_{7}&x_{8}&x_{1}&x_{5}&x_{6}&x_{8}&x_{6}&x_{8}&x_{7}&x_{7}&x_{8}&x_{3}&x_{8}&x_{7}&x_{6}&x_{8}&x_{2}&x_{2}&x_{5}&x_{4}\\\
x_{8}&x_{8}&x_{7}&x_{8}&x_{8}&x_{6}&x_{8}&x_{7}&x_{8}&x_{3}&x_{1}&x_{8}&x_{7}&x_{8}&x_{8}&x_{6}&x_{5}&x_{8}&x_{6}&x_{1}&x_{2}&x_{2}&x_{4}&x_{5}&x_{8}&x_{7}&x_{3}&x_{4}&x_{7}&x_{7}&x_{5}&x_{8}\\\
x_{5}&x_{7}&x_{8}&x_{7}&x_{6}&x_{8}&x_{7}&x_{8}&x_{6}&x_{1}&x_{3}&x_{7}&x_{8}&x_{7}&x_{6}&x_{8}&x_{8}&x_{5}&x_{8}&x_{2}&x_{1}&x_{4}&x_{2}&x_{8}&x_{7}&x_{8}&x_{4}&x_{3}&x_{8}&x_{8}&x_{8}&x_{5}\\\
x_{4}&x_{8}&x_{2}&x_{8}&x_{1}&x_{6}&x_{4}&x_{7}&x_{8}&x_{5}&x_{8}&x_{8}&x_{2}&x_{8}&x_{8}&x_{3}&x_{3}&x_{8}&x_{5}&x_{8}&x_{7}&x_{7}&x_{8}&x_{5}&x_{1}&x_{7}&x_{6}&x_{8}&x_{7}&x_{7}&x_{6}&x_{8}\\\
x_{7}&x_{3}&x_{8}&x_{5}&x_{7}&x_{1}&x_{6}&x_{8}&x_{7}&x_{8}&x_{7}&x_{3}&x_{8}&x_{6}&x_{2}&x_{8}&x_{8}&x_{7}&x_{8}&x_{5}&x_{8}&x_{8}&x_{6}&x_{4}&x_{5}&x_{8}&x_{8}&x_{7}&x_{1}&x_{4}&x_{8}&x_{2}\\\
x_{4}&x_{8}&x_{3}&x_{8}&x_{1}&x_{7}&x_{1}&x_{6}&x_{8}&x_{7}&x_{8}&x_{8}&x_{3}&x_{8}&x_{8}&x_{2}&x_{2}&x_{8}&x_{7}&x_{8}&x_{5}&x_{6}&x_{8}&x_{7}&x_{4}&x_{5}&x_{7}&x_{8}&x_{5}&x_{6}&x_{7}&x_{8}\\\
x_{7}&x_{5}&x_{8}&x_{3}&x_{7}&x_{8}&x_{6}&x_{4}&x_{2}&x_{8}&x_{7}&x_{6}&x_{8}&x_{3}&x_{7}&x_{8}&x_{8}&x_{2}&x_{4}&x_{6}&x_{8}&x_{8}&x_{5}&x_{8}&x_{5}&x_{1}&x_{8}&x_{7}&x_{8}&x_{8}&x_{1}&x_{7}\\\
x_{8}&x_{4}&x_{5}&x_{8}&x_{8}&x_{2}&x_{8}&x_{6}&x_{8}&x_{7}&x_{8}&x_{1}&x_{6}&x_{8}&x_{4}&x_{7}&x_{7}&x_{8}&x_{7}&x_{8}&x_{6}&x_{5}&x_{8}&x_{2}&x_{8}&x_{5}&x_{7}&x_{8}&x_{3}&x_{3}&x_{7}&x_{1}\\\
x_{7}&x_{5}&x_{8}&x_{5}&x_{7}&x_{8}&x_{5}&x_{8}&x_{7}&x_{1}&x_{2}&x_{6}&x_{8}&x_{6}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{3}&x_{4}&x_{1}&x_{3}&x_{8}&x_{6}&x_{8}&x_{4}&x_{2}&x_{8}&x_{8}&x_{8}&x_{7}\\\
x_{1}&x_{8}&x_{3}&x_{8}&x_{4}&x_{7}&x_{4}&x_{5}&x_{8}&x_{7}&x_{8}&x_{8}&x_{3}&x_{8}&x_{8}&x_{2}&x_{2}&x_{8}&x_{7}&x_{8}&x_{6}&x_{5}&x_{8}&x_{7}&x_{1}&x_{6}&x_{7}&x_{8}&x_{6}&x_{5}&x_{7}&x_{8}\\\
x_{5}&x_{2}&x_{8}&x_{7}&x_{6}&x_{4}&x_{7}&x_{8}&x_{5}&x_{8}&x_{6}&x_{2}&x_{8}&x_{7}&x_{3}&x_{8}&x_{8}&x_{6}&x_{8}&x_{7}&x_{8}&x_{8}&x_{7}&x_{1}&x_{7}&x_{8}&x_{8}&x_{5}&x_{1}&x_{4}&x_{8}&x_{3}\\\
x_{1}&x_{8}&x_{2}&x_{8}&x_{4}&x_{5}&x_{1}&x_{7}&x_{8}&x_{6}&x_{8}&x_{8}&x_{2}&x_{8}&x_{8}&x_{3}&x_{3}&x_{8}&x_{6}&x_{8}&x_{7}&x_{7}&x_{8}&x_{6}&x_{4}&x_{7}&x_{5}&x_{8}&x_{7}&x_{7}&x_{5}&x_{8}\\\
x_{6}&x_{7}&x_{8}&x_{2}&x_{5}&x_{8}&x_{7}&x_{1}&x_{3}&x_{8}&x_{6}&x_{7}&x_{8}&x_{2}&x_{6}&x_{8}&x_{8}&x_{3}&x_{4}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{7}&x_{4}&x_{8}&x_{5}&x_{8}&x_{8}&x_{1}&x_{5}\\\
x_{8}&x_{1}&x_{7}&x_{8}&x_{8}&x_{3}&x_{8}&x_{7}&x_{8}&x_{6}&x_{8}&x_{4}&x_{7}&x_{8}&x_{4}&x_{6}&x_{5}&x_{8}&x_{5}&x_{8}&x_{7}&x_{7}&x_{8}&x_{3}&x_{8}&x_{7}&x_{5}&x_{8}&x_{2}&x_{2}&x_{6}&x_{1}\\\
x_{8}&x_{8}&x_{5}&x_{1}&x_{8}&x_{7}&x_{8}&x_{3}&x_{1}&x_{7}&x_{8}&x_{8}&x_{6}&x_{4}&x_{8}&x_{7}&x_{7}&x_{4}&x_{2}&x_{8}&x_{5}&x_{6}&x_{8}&x_{7}&x_{8}&x_{3}&x_{7}&x_{8}&x_{6}&x_{5}&x_{2}&x_{8}\\\
x_{3}&x_{7}&x_{4}&x_{7}&x_{3}&x_{8}&x_{2}&x_{8}&x_{5}&x_{8}&x_{5}&x_{7}&x_{1}&x_{7}&x_{6}&x_{1}&x_{4}&x_{6}&x_{8}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{2}&x_{8}&x_{8}&x_{6}&x_{8}&x_{8}&x_{8}&x_{5}\\\
x_{8}&x_{8}&x_{7}&x_{4}&x_{8}&x_{5}&x_{8}&x_{2}&x_{1}&x_{5}&x_{8}&x_{8}&x_{7}&x_{1}&x_{8}&x_{6}&x_{5}&x_{4}&x_{3}&x_{8}&x_{7}&x_{7}&x_{8}&x_{6}&x_{8}&x_{2}&x_{6}&x_{8}&x_{7}&x_{7}&x_{3}&x_{8}\\\
x_{2}&x_{6}&x_{4}&x_{5}&x_{2}&x_{8}&x_{3}&x_{8}&x_{7}&x_{8}&x_{7}&x_{5}&x_{1}&x_{6}&x_{7}&x_{4}&x_{1}&x_{7}&x_{8}&x_{6}&x_{8}&x_{8}&x_{5}&x_{8}&x_{3}&x_{8}&x_{8}&x_{7}&x_{8}&x_{8}&x_{8}&x_{7}\\\
x_{8}&x_{8}&x_{6}&x_{4}&x_{8}&x_{7}&x_{8}&x_{3}&x_{4}&x_{7}&x_{8}&x_{8}&x_{5}&x_{1}&x_{8}&x_{7}&x_{7}&x_{1}&x_{2}&x_{8}&x_{6}&x_{5}&x_{8}&x_{7}&x_{8}&x_{3}&x_{7}&x_{8}&x_{5}&x_{6}&x_{2}&x_{8}\\\
x_{6}&x_{7}&x_{8}&x_{7}&x_{5}&x_{8}&x_{7}&x_{8}&x_{5}&x_{4}&x_{3}&x_{7}&x_{8}&x_{7}&x_{5}&x_{8}&x_{8}&x_{6}&x_{8}&x_{2}&x_{4}&x_{1}&x_{2}&x_{8}&x_{7}&x_{8}&x_{1}&x_{3}&x_{8}&x_{8}&x_{8}&x_{6}\\\
x_{7}&x_{3}&x_{8}&x_{6}&x_{7}&x_{4}&x_{5}&x_{8}&x_{7}&x_{8}&x_{7}&x_{3}&x_{8}&x_{5}&x_{2}&x_{8}&x_{8}&x_{7}&x_{8}&x_{6}&x_{8}&x_{8}&x_{5}&x_{1}&x_{6}&x_{8}&x_{8}&x_{7}&x_{4}&x_{1}&x_{8}&x_{2}\\\
x_{6}&x_{2}&x_{8}&x_{7}&x_{5}&x_{1}&x_{7}&x_{8}&x_{6}&x_{8}&x_{5}&x_{2}&x_{8}&x_{7}&x_{3}&x_{8}&x_{8}&x_{5}&x_{8}&x_{7}&x_{8}&x_{8}&x_{7}&x_{4}&x_{7}&x_{8}&x_{8}&x_{6}&x_{4}&x_{1}&x_{8}&x_{3}\\\
x_{2}&x_{5}&x_{1}&x_{6}&x_{2}&x_{8}&x_{3}&x_{8}&x_{7}&x_{8}&x_{7}&x_{6}&x_{4}&x_{5}&x_{7}&x_{1}&x_{4}&x_{7}&x_{8}&x_{5}&x_{8}&x_{8}&x_{6}&x_{8}&x_{3}&x_{8}&x_{8}&x_{7}&x_{8}&x_{8}&x_{8}&x_{7}\\\
x_{3}&x_{7}&x_{1}&x_{7}&x_{3}&x_{8}&x_{2}&x_{8}&x_{6}&x_{8}&x_{6}&x_{7}&x_{4}&x_{7}&x_{5}&x_{4}&x_{1}&x_{5}&x_{8}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{2}&x_{8}&x_{8}&x_{5}&x_{8}&x_{8}&x_{8}&x_{6}\\\
x_{8}&x_{8}&x_{7}&x_{1}&x_{8}&x_{6}&x_{8}&x_{2}&x_{4}&x_{6}&x_{8}&x_{8}&x_{7}&x_{4}&x_{8}&x_{5}&x_{6}&x_{1}&x_{3}&x_{8}&x_{7}&x_{7}&x_{8}&x_{5}&x_{8}&x_{2}&x_{5}&x_{8}&x_{7}&x_{7}&x_{3}&x_{8}\\\
x_{7}&x_{6}&x_{8}&x_{6}&x_{7}&x_{8}&x_{6}&x_{8}&x_{7}&x_{4}&x_{2}&x_{5}&x_{8}&x_{5}&x_{7}&x_{8}&x_{8}&x_{7}&x_{8}&x_{3}&x_{1}&x_{4}&x_{3}&x_{8}&x_{5}&x_{8}&x_{1}&x_{2}&x_{8}&x_{8}&x_{8}&x_{7}\\\
\end{bmatrix}$
for some $x_{1},\ldots,x_{8}\in\mathbf{Z}$ corresponding to the decomposition
of $G$ into eight double cosets $H_{1}gH_{2}$, consisting of
$2,2,2,2,3,3,6,12$ right cosets of $H_{1}$, respectively. A (nontrivial)
calculation finds that
$\displaystyle\operatorname{det}M=
-\,(2(x_{2}-x_{3})^{2}+3(x_{5}-x_{6})^{2})^{8}
\cdot(2(x_{1}-x_{4})+(x_{5}+x_{6}-2x_{7}))^{6}
\cdot(2(x_{1}+x_{2}+x_{3}+x_{4})-(x_{5}+x_{6}+2x_{7}+4x_{8}))^{3}
\cdot(2(x_{1}-x_{2}-x_{3}+x_{4})-(x_{5}+x_{6}+2x_{7}-4x_{8}))^{3}
\cdot(2(x_{1}-x_{4})-3(x_{5}+x_{6}-2x_{7}))^{2}
\cdot(2(x_{1}+x_{2}+x_{3}+x_{4})+3(x_{5}+x_{6}+2x_{7}+4x_{8}))
\cdot(2(x_{1}-x_{2}-x_{3}+x_{4})+3(x_{5}+x_{6}+2x_{7}-4x_{8})).$
The assignment
$x_{1}=x_{2}=1,\quad x_{3}=-1,\quad x_{4}=x_{5}=x_{6}=x_{7}=x_{8}=0$
gives $\operatorname{det}M=2^{32}$, while the assignment
$x_{5}=1,\quad x_{1}=x_{2}=x_{3}=x_{4}=x_{6}=x_{7}=x_{8}=0$
gives $\operatorname{det}M=3^{12}$; thus $d(H_{1},H_{2})=1$. It follows that
$H_{1}$ and $H_{2}$ are locally integrally equivalent, by Proposition 3.1. But
no assignment of $x_{1},\ldots,x_{8}\in\mathbf{Z}$ makes
$\operatorname{det}M=\pm 1$; indeed, any such assignment would require all 7
factors of $\operatorname{det}M$ listed above to have values in $\\{\pm 1\\}$,
which is not possible. Thus $H_{1}$ and $H_{2}$ are not integrally equivalent;
as noted in the introduction, this negatively answers Question 2.10 of
Guralnick and Weiss in [18].
There are infinitely many nonisomorphic variations of this example; replacing
$f_{1}(x)$ and $f_{2}(x)$ with $f_{1}(x\sqrt{T})$ and $f_{2}(x\sqrt{T})$
yields polynomials with Galois group $G$ over $\mathbf{Q}(T)$; for almost all
squarefree $a\in\mathbf{Z}$ the substitution $T=a$ yields nonisomorphic
$K_{1},K_{2}$ ramified at primes dividing $a$.
### 4.4 A degree 96 example of solvable equivalence
The results of §4.3 imply that any group $G$ that contains nonconjugate
solvably equivalent subgroups must have order at least $32\cdot 60=1920$,
since nonconjugate locally integrally equivalent subgroups must have index at
least 32, and nonsolvable groups must have order at least 60. A search of the
small groups database shows that there are no such $G$ of order 1920 or 1980,
and a search of transitive groups of degree up to 48 and order at most
$48\cdot 60=2880$ finds no such $G$, which implies a lower bound of 2940.
An exhaustive search of transitive groups of degree up to 48 and order at most
48,000 finds transitive groups of degrees 12, 16, 20, 24, 30, 32, 36, and 40
that contain nonconjugate solvably equivalent subgroups, including examples of
index 96, 192, 384, 576, 672, and 768. The first example of index 96 occurs
for the transitive group 16T1654 of order 5760, which is the smallest order we
found. This group $G$ contains five conjugacy classes of subgroups isomorphic
to $A_{5}$, of which exactly two have representatives $H_{1}$ and $H_{2}$ with
the property that every proper subgroup of $H_{1}$ is also a proper subgroup
of $H_{2}$. $H_{1}$ and $H_{2}$ are thus solvably equivalent subgroups of index
96. There are 5 double cosets $H_{1}gH_{2}$, consisting of $5,6,10,15,60$ right
cosets of $H_{1}$, respectively; each
$M\in\operatorname{Hom}_{\mathbf{Z}[G]}(\mathbf{Z}[H_{1}\backslash
G],\mathbf{Z}[H_{2}\backslash G])$ can thus be viewed as a matrix in
indeterminates $x_{1},x_{2},x_{3},x_{4},x_{5}\in\mathbf{Z}$, and we have
$\displaystyle\operatorname{det}M=
-\,(5x_{1}+6x_{2}+10x_{3}+15x_{4}+60x_{5})
\cdot(x_{1}-6x_{2}-10x_{3}+3x_{4}+12x_{5})^{5}
\cdot(3x_{1}+2x_{2}-2x_{3}-7x_{4}+4x_{5})^{15}
\cdot(3x_{1}-2x_{2}+2x_{3}+x_{4}-4x_{5})^{30}
\cdot(x_{1}+2x_{2}-2x_{3}+3x_{4}-4x_{5})^{45}.$
By solving 32 systems of linear equations, one finds that no assignment of
$x_{1},x_{2},x_{3},x_{4},x_{5}\in\mathbf{Z}$ makes every factor in
$\operatorname{det}M$ equal to $\pm 1$. Thus $H_{1}$ and $H_{2}$ are not
integrally equivalent.
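The check just described can be carried out mechanically; the following sketch (our own illustration, not the authors' code) transcribes the coefficients of the five linear factors from the factorization displayed above and solves all $2^{5}=32$ sign systems, looking for integral solutions.

```python
from itertools import product
from sympy import Matrix

# Rows: coefficients of the five linear factors of det M in x1,...,x5,
# transcribed from the factorization displayed above.
A = Matrix([
    [5,  6,  10, 15,  60],
    [1, -6, -10,  3,  12],
    [3,  2,  -2, -7,   4],
    [3, -2,   2,  1,  -4],
    [1,  2,  -2,  3,  -4],
])

integral_solutions = []
for signs in product([1, -1], repeat=5):       # the 32 possible sign patterns
    b = Matrix(signs)
    try:
        x = A.solve(b)                         # rational solution, if one exists
    except Exception:
        continue                               # singular/inconsistent system
    if all(xi.is_integer for xi in x):
        integral_solutions.append((signs, list(x)))

print(integral_solutions)   # expected: [] -- no integer assignment works
```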
The regular inverse Galois problem for 16T1654 has a known solution (it is a
quotient of 12T277), and thus there are infinitely many pairs of solvably
equivalent number
fields $K_{1}$ and $K_{2}$ with Galois closure $L$ that satisfy
$\operatorname{Gal}(L/\mathbf{Q})=G$, $\operatorname{Gal}(L/K_{1})=H_{1}$,
$\operatorname{Gal}(L/K_{2})=H_{2}$. For example, we may take $L$ as the
splitting field of
$x^{16}-2x^{15}+3x^{14}-16x^{13}+18x^{12}-10x^{10}+40x^{9}-39x^{8}+54x^{7}+23x^{6}+16x^{5}-140x^{4}-188x^{3}-28x^{2}+104x-4,$
corresponding to the number field with LMFDB label
16.4.711702043399998895292416.2. The field $L$ contains solvably equivalent
subfields $K_{1}$ and $K_{2}$ of degree 96 that are necessarily arithmetically
equivalent, locally isomorphic, and have isomorphic class groups, by
Proposition 3.7. One can find 190 examples of 16T1654 number fields in the
Klüners and Malle Database of Number Fields [25].
###### Remark 4.3.
In [42, Remark 4.3] Scott raises several questions related to integral
permutation modules that lie in the same genus, which in our setting
corresponds to local integral equivalence. The _rank_ of a group $G$ acting on
a finite set $\Omega$ is the number of orbits of the diagonal action on
$\Omega\times\Omega$. Scott shows that if the rank of $G$ acting on $\Omega$
is 2 or 3 then local integral equivalence of $\mathbf{Z}[G]$-modules $\Omega$
and $\Omega^{\prime}$ implies an isomorphism of $G$-sets [42, Proposition
4.1]. His example with $G={\mathbf{PSL}}_{2}(29)$ proves that this does not
hold when the rank is $8$. The example in §4.4 shows that this also fails to
hold when the rank is $5$.
## Acknowledgments
The author would like to thank Alex Bartel for introducing him to the notion
of a Gassmann triple at an LMFDB workshop at Oregon State University in 2015.
The author is also grateful to Yuval Flicker for the clarification noted in
Remark 3.14, and to David Roe and Raymond van Bommel for their assistance in
finding a particularly efficient method to compute ramification indices that
is exploited in §4.2.
## References
* [1] Donu Arapura, Justin Katz, D.B. McReynolds, and P. Solapurkar, Integral Gassmann equivalence of algebraic and hyperbolic manifolds, Math. Z. 291 (2019), 179–214. (MR3936064)
* [2] Alex Bartel and Aurel Page, Torsion homology and regulators of isospectral manifolds, J. Topology 9 (2016), 1237–1256. (MR3620456)
* [3] Hans Ulrich Besche, Bettina Eick, and E.A. O’Brien, The groups of order at most $2000$, Electron. Res. Announc. Amer. Math. Soc. 7 (2001), 1–4. (MR1826989)
* [4] Jeremy Booher and José-Felipe Voloch, Recovering algebraic curves from $L$-function of Hilbert class fields, Res. Number Theory 6 (2020), article no. 43. (MR4167327)
* [5] Wieb Bosma, John J. Cannon, Claus Fieker, and Allan Steel (Eds.), Handbook of Magma functions, v2.25, 2020.
* [6] Wieb Bosma and Bart de Smit, On arithmetically equivalent number fields of small degree, in Algorithmic Number Theory, Fifth International Symposium, ANTS-V, C. Fieker and D.R. Kohel (Eds.), Lec. Notes Comp. Sci. 2369 (2002), 67–79. (MR204107)
* [7] John J. Cannon and Derek Holt, The transitive permutation groups of degree 32, Experiment. Math. 17 (2008) 307–314. (MR2355702)
* [8] Gunther Cornelissen, Aristides Kontogeorgis, and Lotte van der Zalm, Arithmetic equivalence for function fields, the Goss zeta function and a generalisation, J. Number Theory 130 (2010), 1000–1012. (MR2600417)
* [9] Gunther Cornelissen, Bart de Smit, Xin Li, Matilde Marcolli, and Harry Smit, Characterization of global fields by Dirichlet $L$-series, Res. Number Theory 5 (2019), article no. 7. (MR3887225)
* [10] S.B. Conlon, Monomial representations under integral similarity, J. Algebra 13 (1969), 496–508. (MR0252527)
* [11] Bart de Smit, Generating arithmetically equivalent number fields with elliptic curves, in Algorithmic Number Theory, Third International Symposium (ANTS-III), J.P. Buhler (Ed.), Lec. Notes Comp. Sci. 1423 (1998), 392–399. (MR1726087)
* [12] Bart de Smit and Robert Perlis, Zeta functions do not determine class numbers, Bull. Amer. Math. Soc. 31 (1994), 213–215. (MR1260520)
* [13] Yuval Flicker, Conjugacy classes of finite subgroups of ${\mathbf{SL}}(2,F),{\mathbf{SL}}(3,F)$, J. Théor. Nombres Bordeaux 31 (2019), 555–571. (MR4102614)
* [14] The GAP group, GAP — Groups, Algorithms, and Programming, Version 4.8.3, 2016.
* [15] Fritz Gassmann, Bemerkungen zu der vorstehenden Arbeit von Hurwitz (comments on the article Über Beziehungen zwischen den Primidealen eines algebraischen Körpers und den Substitutionen seiner Gruppe by Hurwitz) Math. Z. 25 (1926), 655-665.
* [16] Carolyn Gordon, David L. Webb, and Scott Wolpert, One cannot hear the shape of a drum, Bull. Amer. Math. Soc. (N.S.) 27 (1992), 134–138. (MR1136137)
* [17] Robert M. Guralnick and David B. Wales, Subgroups inducing the same permutation representation, II, J. Algebra 96 (1985), 94–113. (MR0808843)
* [18] Robert M. Guralnick and Al Weiss, Transitive permutation lattices in the same genus and embedding groups, Contemp. Math. 153 (1993), 21–33. (MR1247496)
* [19] Lorenz Halbeisen and Norbert Hungerbühler, Generation of isospectral graphs, J. Graph Theory 31 (1999), 255–265. (MR1688950)
* [20] Derek Holt, Gordon Royle, Gareth Tracey, The transitive groups of degree $48$ and some applications, Journal of Algebra, published online 29 June 2021.
* [21] Alexander Hulpke, Constructing transitive permutation groups, J. Symbolic Comput. 39 (2005), 1–30. (MR2168238)
* [22] Kenkichi Iwasawa, On the rings of valuation vectors, Ann. of Math. 57 (1953), 331–356. (MR0053970)
* [23] Mark Kac, Can one hear the shape of a drum?, Amer. Math. Monthly 73 (1966) no. 4, 1–23. (MR0201237)
* [24] Norbert Klingen, Arithmetical similarities, Oxford Science Publications, 1998. (MR1638821)
* [25] Jürgen Klüners and Gunter Malle, A database for field extensions of the rationals, LMS J. Comput. Math. 4 (2001), 182–196. (MR1901356)
* [26] Keiichi Komatsu, On the adele rings of algebraic number fields, Kodai Math. Sem. Rep. 28 (1976), 78–84. (MR0424760)
* [27] Keiichi Komatsu, On adele rings of arithmetically equivalent fields, Acta Arith. 43 (1984), 93–95. (MR0736723)
* [28] Benjamin Linowitz, D.B. McReynolds, and Nicholas Miller, Locally equivalent correspondences, Ann. Inst. Fourier (Grenoble) 67 (2017), 451–482. (MR3669503)
* [29] The LMFDB Collaboration, The $L$-functions and modular forms database, 2021, available at www.lmfdb.org.
* [30] Guillermo Mantilla-Soler, On a question of Perlis and Stuart regarding arithmetic equivalence, New York J. Math. 25 (2019), 558–573. (MR3982253).
* [31] D.B. McReynolds, Geometric spectra and commensurability, Canad. J. Math. 67 (2015), 184–197. (MR3292699)
* [32] J.S. Milne, Fields and Galois theory, v4.61, 2020, available at www.jmilne.org/math.
* [33] John Milnor, Eigenvalues of the Laplace operator on certain manifolds, Proc. Natl. Acad. Sci. 51 (1964), 542. (MR162204)
* [34] Jürgen Neukirch, Algebraic number theory, Springer, 1999. (MR1697859)
* [35] Robert Perlis, On the equation $\zeta_{k}(s)=\zeta_{k^{\prime}}(s)$, J. Number Theory 9 (1977), 342–360. (MR0447188)
* [36] Robert Perlis, On the class numbers of arithmetically equivalent fields, J. Number Theory 10 (1978), 489–509. (MR0515057)
* [37] Donna Joy Stuart and Robert Perlis, A new characterization of arithmetic equivalence, J. Number Theory 53 (1995), 300–308. (MR1348765)
* [38] Dipendra Prasad, A refined notion of arithmetically equivalent number fields and curves with isomorphic Jacobians, Advances Math. 312 (2017), 198–208. (MR3635810)
* [39] Dipendra Prasad and Conjeeveram S. Rajan, On an Archimedean analogue of Tate’s conjecture, J. Number Theory 99 (2003), 180–184. (MR1957251)
* [40] Klaus W. Roggenkamp, Permutation modules in the same genus, results from D. Hahn’s Ph.D. thesis (English summary), in _Darstellungstheorietage Jena 1996_ , B. Kühlshammer and K. Rosenbaum, eds., Sitzungsber. Math.-Naturwiss. Kl., Akad. Gemein. Wiss. Erfurt, Erfurt, Germany, 1996, 211-223. (MR1441096)
* [41] The Sage Development Team, _Sage Mathematics Software (Version 9.2)_ , 2020, http://www.sagemath.org.
* [42] Leonard L. Scott, Integral equivalence of permutation representations, in Group theory (Granville, OH, 1992), 262–274, World Sci. Publ., 1993. (MR1348907)
* [43] Jean-Pierre Serre, Linear representations of finite groups, Springer, 1977. (MR0450380)
* [44] Igor R. Shafarevich, Basic algebraic geometry 2, English third edition, Springer, 2013; translated from 2007 Russian third edition by Miles Reid. (MR3100288)
* [45] Pavel Solomatin, Global fields and their $L$-functions, PhD Thesis, Universiteit Leiden, 2021.
* [46] Harry Smit, $L$-series and homomorphisms of number fields, arXiv:1910.12321.
* [47] Toshikazu Sunada, Riemannian coverings and isospectral manifolds, Ann. of Math. 121 (1985), 169–186. (MR0782558)
* [48] Andrew V. Sutherland, Computing images of Galois representations attached to elliptic curves, Forum Math. Sigma 4 (2016), 79 pages. (MR3482279)
* [49] Andrew V. Sutherland, Arithmetic equivalence and isospectrality, preprint, 2018.
* [50] André Weil, Exercices dyadiques, Invent. Math. 27 (1974), 1–22. (MR0379445)
* [51] Melanie Matchett Wood, How to determine the splitting type of a prime, 2011.
* [52] David Zywina, The inverse Galois problem for ${\mathbf{PSL}}_{2}(\mathbf{F}_{\\!p})$, Duke Math. J. 164 (2015), 2253–2292. (MR3397386)
[avs] Andrew V. Sutherland
Department of Mathematics
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, Massachusetts 02139
USA
<EMAIL_ADDRESS>
https://math.mit.edu/~drew
|
# Smart Vectorizations for Single and Multiparameter Persistence
Baris Coskunuzer <EMAIL_ADDRESS>, Mathematical Sciences, University of Texas at Dallas, USA
Cuneyt Gurcan Akcora <EMAIL_ADDRESS>, Computer Science, University of Manitoba, Canada
Zhiwei Zhen <EMAIL_ADDRESS>, Mathematical Sciences, University of Texas at Dallas, USA
Ignacio Segovia Dominguez <EMAIL_ADDRESS>, Mathematical Sciences, University of Texas at Dallas, USA
Murat Kantarcioglu <EMAIL_ADDRESS>, Computer Science, University of Texas at Dallas, USA
Yulia R. Gel <EMAIL_ADDRESS>, Mathematical Sciences, University of Texas at Dallas, USA
###### Abstract
The machinery of topological data analysis is becoming increasingly popular in a
broad range of machine learning tasks, ranging from anomaly detection and
manifold learning to graph classification. Persistent homology is one of the
key approaches here, allowing us to systematically assess the evolution of
various hidden patterns in the data as we vary a scale parameter. The
extracted patterns, or homological features, along with information on how
long such features persist throughout the considered filtration of a scale
parameter, convey a critical insight into salient data characteristics and
data organization.
In this work, we introduce two new and easily interpretable topological
summaries for single and multi-parameter persistence, namely, saw functions
and multi-persistence grid functions, respectively. Compared to the existing
topological summaries which tend to assess the numbers of topological features
and/or their lifespans at a given filtration step, our proposed saw and multi-
persistence grid functions allow us to explicitly account for essential
complementary information such as the numbers of births and deaths at each
filtration step.
These new topological summaries can be regarded as the complexity measures of
the evolving subspaces determined by the filtration and are of particular
utility for applications of persistent homology on graphs. We derive
theoretical guarantees on the stability of the new saw and multi-persistence
grid functions and illustrate their applicability for graph classification
tasks.
## 1 Introduction
Topological data analysis (TDA) has been recently applied to many different
domains ranging from cryptocurrency transaction analysis to anomaly detection.
Persistent homology-based approaches have emerged as one of the key TDA
techniques that allow systematic assessment of the evolution of various hidden patterns
in the data as a function of a scale parameter. The extracted patterns, or
homological features, along with information on how long such features persist
throughout the considered filtration of a scale parameter, convey a critical
insight into underlying data properties. In the past, such homological
features have been successfully used as an input to various machine learning
tasks to enable efficient and accurate models that are robust to noise in the
data sets.
In this work, we provide new and easily interpretable topological summaries
for single and multi-parameter persistence, namely, saw functions and multi-
persistence grid functions (MPGFs). Compared to the existing topological
summaries which tend to assess the numbers of topological features and/or
their lifespans at a given filtration step, our proposed saw functions allow
us to extract additional information that captures the numbers of births and
deaths at each filtration step. On the other hand, MPGFs are one of the first
topological summaries in the Multi-Persistence case that need no slicing. In
our empirical evaluations, we show that these summaries are useful in building
efficient and accurate graph classification models.
One important contribution of our work is to leverage these interpretable
features to build accurate graph classification models even with limited
training data. This is important because existing work on graph convolutional
networks (GCNs) for graph classification requires large amounts of data and an
expensive training process, without providing insight into why a particular
classification result is produced. Compared to GCN-based approaches, the
proposed single and multi-parameter persistence features provide insights into
changes in the underlying sets or graphs as the scale parameter varies and
enable building efficient machine learning models on these extracted features.
Especially in application domains where the size of the data set is limited,
building machine learning models such as random forests on the topological
summaries for single and multi-parameter persistence results in prediction
accuracy close to that of the best GCN models, while providing insight into
why certain features are useful.
Contributions of our paper can be summarized as follows:
* •
We introduce two new topological summaries for single- and multi-persistence:
Saw functions and Multi-persistence Grid Functions (MPGFs). The new summaries
are interpretable as complexity measures of the space and, contrary to the
existing descriptors, contain salient complementary information on the
underlying data structure, such as the numbers of births and deaths of
topological features at each filtration step.
* •
We prove theoretical stability of the new topological summaries. Our MPGF is
one of the first successful and provably stable vectorizations of multi-
parameter persistence without slicing. The theory of multi-parameter
persistence suffers from many technical difficulties in defining persistence
diagrams in general settings. Our MPGFs bypass these problems by working
directly with the subspaces defined by the filtration, and deliver 2D descriptors.
* •
The proposed new topological summaries are computationally simple and
tractable. Our MPGF takes a few minutes for most datasets, but on average
provides results that differ by as little as 3.53% from popular graph neural
network solutions.
* •
To the best of our knowledge, this is the first paper bringing the machinery
of multi-persistence to graph learning. We discuss the utility and limitations
of multi-persistence for graph studies.
## 2 Related Work
Topological Summaries: While constructing the right filtration for a given
space $X$ is quite important, the vectorization and topological summaries of
the information produced by persistent homology are crucial to obtaining the
desired results, since the summary must be in a form that ML tools can use so
that the extracted topological information can address the question
effectively. However, persistence diagrams (PDs) are multisets that do not
live in a Hilbert space and, hence, cannot be fed directly into an ML model.
The approaches to this issue fall into two categories. The first is
vectorization (Di Fabio and Ferri, 2015; Bubenik, 2015; Adams et al., 2017),
which embeds PDs into $\mathbb{R}^{d}$; the second is kernelization (Kusano et
al., 2016; Le and Yamada, 2018; Zieliński et al., 2019; Zhao and Wang, 2019),
which induces kernel Hilbert spaces from PDs. Both approaches enjoy certain
stability guarantees and some of their key parameters are learnable. However,
the resulting performance of such topological summaries as classifiers is
often highly sensitive to the choice of the fine-tuned kernel and
transformation parameters. In this paper, we take a middle road: our
topological summaries both keep most of the information produced in PDs and do
not need fine calibration.
In (Chung and Lawson, 2019), the authors bring together, in a unified way, the
class of vectorizations that are single-variable functions on the domain of
thresholds, and call them Persistence Curves. This framework is very general,
and most current vectorizations can be interpreted under this umbrella; in
particular, Betti functions, lifespans, persistence landscapes, Betti entropy,
and several other summaries belong to this class. Another framework for
describing topological summaries of persistent homology, especially in the
graph case, is PersLay (Carrière et al., 2020). In that work, the authors
define a general framework for vectorizations of persistence diagrams which
can be used as a neural network layer; the vectorizations in this class are
very general and can incorporate kernels in their construction.
TDA for graph classification: Recently, TDA methods have been successfully
applied to the graph classification tasks often in combination with DL and
learnable kernelization of PDs (Hofer et al., 2017; Togninalli et al., 2019;
Rieck et al., 2019; Le and Yamada, 2018; Zhao and Wang, 2019; Hofer et al.,
2019; Kyriakis et al., 2021). Furthermore, as mentioned above, Carrière et al.
(2020) unified several current approaches to PD representations to obtain the
most suitable topological summary with a learnable set of parameters, under an
umbrella infrastructure. Finally, most recently, Cai and Wang (2020) obtained
successful results by combining filtrations with different vectorization
methods.
## 3 Background
In this section, we provide a brief introduction to persistent homology.
Homology $H_{k}(X)$ is an essential invariant in algebraic topology, which
captures the $k$-dimensional holes (connected components, loops, cavities) in
the topological space $X$. Persistent homology uses this invariant to keep
track of the changes in a controlled sequence of topological spaces induced by
the original space $X$. For basic background on persistent homology, see
(Edelsbrunner and Harer, 2010; Zomorodian and Carlsson, 2005).
### 3.1 Persistent Homology
For a given metric space $X$, consider a continuous function
$f:X\to\mathbf{R}$. Define $X_{\alpha}=\\{x\in X\mid f(x)\leq\alpha\\}$ as the
$\alpha$-sublevel set of $X$. Choose $\\{\alpha_{i}\\}$ as an increasing
sequence of numbers from $\alpha_{0}=\min f$ to $\alpha_{N}=\max f$. Then,
$\\{X_{\alpha_{i}}\\}$ gives an exhaustion of the space $X$, i.e.
$X_{\alpha_{0}}\subset X_{\alpha_{1}}\subset...\subset X_{\alpha_{N}}=X$. By
using $X_{\alpha_{i}}$ itself, or inducing natural topological spaces
$\widehat{X}_{\alpha_{i}}$ (e.g. VR-complexes), one obtains a sequence of
topological spaces
$\widehat{X}_{\alpha_{0}}\subset\widehat{X}_{\alpha_{1}}\subset\ldots\subset\widehat{X}_{\alpha_{N}}$,
which is called a filtration induced by $f$. For each $i$, one can compute the
$k^{th}$ homology group $H_{k}(\widehat{X}_{\alpha_{i}})$, which describes the
$k^{th}$ dimensional holes in $\widehat{X}_{\alpha_{i}}$. The rank of the
homology group $H_{k}(\widehat{X}_{\alpha_{i}})$ is called the Betti number
$\mathcal{B}_{k}(\alpha_{i})$, which is simply the number of $k$-dimensional
holes in the space $\widehat{X}_{\alpha_{i}}$.
By using persistent homology, we keep track of the topological changes in the
sequence $\\{\widehat{X}_{\alpha_{i}}\\}$ as follows. When a topological
feature $\sigma$ (connected component, loop, cavity) appears in
$H_{k}(\widehat{X}_{\alpha_{i}})$, we mark $b_{\sigma}=\alpha_{i}$ as its birth
time. The feature $\sigma$ can disappear at a later step
$\widehat{X}_{\alpha_{j}}$ by merging with another feature or by being filled
in. Then, we mark $d_{\sigma}=\alpha_{j}$ as its death time. We say $\sigma$
persists along $[b_{\sigma},d_{\sigma})$. The longer the length
($d_{\sigma}-b_{\sigma}$) of the interval, the more persistent the feature
$\sigma$ is (Adams and Coskunuzer, 2021). The multiset
$PD_{k}(X,f)=\\{(b_{\sigma},d_{\sigma})\mid\sigma\in
H_{k}(\widehat{X}_{\alpha_{i}})\\}$ is called the persistence diagram of
$(X,f)$; it is the collection of $2$-tuples marking the birth and death times
of the $k$-dimensional holes $\\{\sigma\\}$ in $\\{\widehat{X}_{\alpha_{i}}\\}$.
Similarly, we call the collection of intervals $\\{[b_{\sigma},d_{\sigma})\\}$
the barcode of $(X,f)$, which stores the same information as the persistence
diagram.
Even though the constructions are very similar, the interpretation and meaning
of the topological changes recorded by persistent homology for $(X,f)$ depend
on the initial space $X$ and the filter function $f$. For example, when the
initial space $X$ is a point cloud and $f(p)=d(p,X)$ is the distance function
in the ambient space, this sublevel construction coarsely captures the shape
of the point cloud as follows: the sublevel set $\\{p\mid d(p,X)\leq\epsilon\\}$
is a union of $\epsilon$-balls $\bigcup_{x\in X}N_{\epsilon}(x)$, and it
naturally induces relevant complexes (Vietoris-Rips, Cech, Clique) that
capture the coarse geometry of the set. In this case, by the celebrated Nerve
Theorem, persistent homology of this filtration captures the shape of the
underlying manifold structure of the point cloud (Edelsbrunner and Harer,
2010).
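To make the point-cloud case concrete, here is a short sketch (our own illustration, assuming the GUDHI package is available; any Vietoris-Rips implementation would do) that builds a Rips filtration on a noisy circle and reads off the persistence intervals; the single long interval in dimension 1 reflects the loop.

```python
import numpy as np
import gudhi  # assumed available; any Vietoris-Rips implementation would do

# Sample points near a unit circle; the 1-dimensional persistence diagram
# should contain one long interval corresponding to the loop.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += 0.05 * np.random.default_rng(0).normal(size=points.shape)

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
st.compute_persistence()
print("H0 intervals:", st.persistence_intervals_in_dimension(0)[:3], "...")
print("H1 intervals:", st.persistence_intervals_in_dimension(1))
```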
On the other hand, when we use a graph $G$ as our initial space $X$, and $f$
is a suitable filter function on $G$ (e.g. degree), the sublevel filtration
gives a sequence of subgraphs (or complexes induced by them). In particular,
the filter function $f$ orders the nodes and determines the direction in which
these subgraphs evolve. In this sense, the filter function largely governs the
behavior of the filtration and the resulting persistent homology. Hence, the
output describes not only the features (or shape) of the topological space $X$
but also the behavior of the filter function $f$ on $X$. While in the point
cloud case persistent homology can be said to coarsely capture the shape of
the point cloud, in the graph case the meaning of the persistent homology
output is quite different. In the following section, we discuss the
interpretation of the persistent homology induced by these different filter
functions as measuring the topological complexity of these subspaces.
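As a concrete illustration of the graph case (a self-contained sketch of our own, using only the Python standard library), the 0-dimensional persistence of the degree sublevel filtration can be computed with a union-find structure and the elder rule: a vertex enters at its degree, an edge enters at the larger degree of its endpoints, and when an edge merges two components the younger one dies.

```python
def degree_sublevel_pd0(edges):
    """0-dimensional persistence of the sublevel filtration by vertex degree.

    Returns (birth, death) pairs; death is None for components that never die.
    """
    nodes = sorted({v for e in edges for v in e})
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1

    parent = {v: v for v in nodes}
    birth = dict(deg)                       # a component is born with its first vertex

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    pd0 = []
    # process edges in increasing order of their filtration value
    for u, v in sorted(edges, key=lambda e: max(deg[e[0]], deg[e[1]])):
        t = max(deg[u], deg[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # the edge closes a cycle: no H0 change
        young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
        pd0.append((birth[young], t))       # elder rule: the younger component dies
        parent[young] = old
    roots = {find(v) for v in nodes}
    pd0.extend((birth[r], None) for r in roots)
    return pd0

# Example: a path 0-1-2-3 with an extra vertex 4 attached to node 2.
print(degree_sublevel_pd0([(0, 1), (1, 2), (2, 3), (2, 4)]))
# Output includes zero-length bars (b == d), which are often discarded.
```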
While persistent homology originated with a single filter function, one
can generalize the filtration idea to several functions. In particular, a
single-parameter filtration can be viewed as chopping the manifold into slices
and bringing these slices together in an ordered fashion. For topologists,
this is the Morse Theoretic point of view to analyze a given space $X$ by
using a filter function (Morse function) $f$ (Mischaikow and Nanda, 2013).
This approach can naturally be adapted to several filter functions. By slicing
up the space with more than one filter function at the same time, we can get a
much finer look at the topological changes in the space in different filtering
directions. This is called Multiparameter Persistence. By construction,
multiparameter persistence gives more refined information on the topological
space, and the theory has been growing rapidly in the past few years (Carrière
and Blumberg, 2020; Vipond, 2020; Harrington et al., 2019).
As a simple example for the construction of multiparameter persistence,
consider two filter functions $f,g:X\to\mathbf{R}$ on the space $X$. Let
$\mathcal{I}=\\{(\alpha_{i},\beta_{j})\\}$ be the partially ordered index set where
$(\alpha_{i},\beta_{j})\leq(\alpha_{i^{\prime}},\beta_{j^{\prime}})$ if
$\alpha_{i}\leq\alpha_{i^{\prime}}$ and $\beta_{j}\leq\beta_{j^{\prime}}$.
Then, define $X_{ij}=\\{x\in X\mid f(x)\leq\alpha_{i}\mbox{ and
}g(x)\leq\beta_{j}\\}$. Then, for each fixed $i_{o}$, we get a single-parameter
filtration
$\widehat{X}_{i_{o}1}\subset\widehat{X}_{i_{o}2}\subset...\subset\widehat{X}_{i_{o}m_{2}}$
where $\widehat{X}_{ij}$ is the complex induced by $X_{ij}$. Multiparameter
persistence enables one to look at the space from more than one direction, and
to detect topological changes with respect to more than one filter function at
the same time. The advantage of multiparameter persistence over doing single
parameter persistence multiple times is that it gives a finer filtration to
understand the space, and one can further detect the interrelation between the
filtering functions.
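To make this construction concrete, the following toy sketch (our own illustration, assuming the networkx package; it is not the MPGF summary introduced later) tabulates the number of connected components $\mathcal{B}_{0}(X_{ij})$ of the bi-filtration of a graph over a grid of thresholds for two node functions $f$ and $g$.

```python
import networkx as nx

def betti0_grid(G, f, g, alphas, betas):
    """Betti-0 numbers of the bi-filtration X_ij = {v : f(v) <= alpha_i and g(v) <= beta_j}.

    G is a networkx graph; f and g are dicts of node values.
    Returns a nested list indexed by (i, j)."""
    grid = []
    for a in alphas:
        row = []
        for b in betas:
            nodes = [v for v in G.nodes if f[v] <= a and g[v] <= b]
            H = G.subgraph(nodes)             # induced subgraph on the sublevel nodes
            row.append(nx.number_connected_components(H) if nodes else 0)
        grid.append(row)
    return grid

# Example: two filter functions on a small graph (degree and node label).
G = nx.Graph([(0, 1), (1, 2), (3, 4)])
f = dict(G.degree())                  # first filter: degree
g = {v: v for v in G.nodes}           # second filter: node label
print(betti0_grid(G, f, g, alphas=[1, 2], betas=[2, 4]))   # -> [[2, 3], [1, 2]]
```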
### 3.2 Interpretation of Persistent Homology as Complexity Measure
In the previous section, we discussed that the interpretation of persistent
homology output highly depends on the filter function $f:X\to\mathbf{R}$. When
$f(p)=d(p,X)$ is the distance function to space $X$, persistent homology
captures the coarse shape of the point cloud given in ambient space. This
point cloud mostly comes from a dataset embedded in a feature space. Then
persistent homology gives information on patterns, topological features of the
manifold structure where the dataset lies. Therefore, this output gives
crucial information on the data and enables one to see the hidden patterns in
it.
Even though, in its origins, persistent homology used filtrations induced by
the distance function described above, the construction allows different types
of filter functions. When we choose a different filter function on the space,
we get a completely different filtration, and naturally the persistent
homology output changes completely. Over the years, the technique of
persistent homology has been highly developed and generalized to many
different settings. Depending on the question at hand, one applies suitable
filter functions to the space and obtains a filtration whose features and
patterns are then detected by the topological tools.
For example, when the filter function $f:X\to\mathbf{R}$ is coming from a
qualitative property of the set $X$, we can interpret the persistent homology
output as the records of “complexity changes” in the space with respect to
this filter function. This is because, in this setting, the persistent
homology keeps track of the topological changes in the sequence of subspaces
$\\{\widehat{X}_{\alpha}\\}$ determined by the filter function. As $\alpha$
evolves, the topological complexity of $X_{\alpha}$ changes. One can describe
topological complexity as the number of components, loops, cavities at a given
threshold level $\alpha$. In this setting, birth and death times of
topological features are not determined by the position of the points in $X$
with respect to each other, but they are determined by the filter function.
This new persistent homology output does not solely depend on the space $X$,
but mostly depends on the filter function $f$. Hence, one can also interpret
the persistent homology as valuable information on the behavior of the filter
function $f$ on $X$.
For instance, when $X$ is a graph $G$, and $f$ is the degree function on the
vertices of $G$, then the persistent homology first describes the topological
changes in the subgraphs generated by low degree nodes, and then evolves
toward subgraphs containing high-degree nodes. The persistent homology output
records how the complexity changes along the filtration as the degree
threshold increases.
Motivated by this interpretation of persistent homology, we present two novel
topological summaries that capture the “complexity changes” notion described
above as easily interpretable single-variable and multi-variable functions.
## 4 Novel Topological Summaries: Saw Functions
In the following, we interpret persistent homology as a tool to keep track of
the topological complexity changes in $X$ with respect to a filter
$f:X\to\mathbf{R}$, and propose two new topological summaries. We define our
first topological summary, the Saw Function, for the single-parameter case.
### 4.1 Saw Functions
The Saw Function counts the live topological features in the intervals
$(\alpha_{i},\alpha_{i+1}]$, similar to the Betti functions (Edelsbrunner and
Harer, 2010). The advantage of Saw Functions is that they remedy the loss of
several essential pieces of information when vectorizing persistence diagrams.
When passing from persistence diagrams to Betti functions, one loses most of
the information on birth and death times, as the latter only count the number
of live features at time $\alpha_{i}$. The death/birth information for the
whole interval $(\alpha_{i},\alpha_{i+1})$ is lost during this transition.
With Saw Functions, we capture this information by keeping the number of
deaths and births in the interval $(\alpha_{i},\alpha_{i+1})$ in a natural
way.
Throughout the construction, we use the sublevel filtration as above, but the
same construction can easily be adapted to superlevel or similar filtrations.
Given $(X,f)$ and $\mathcal{I}=\\{\alpha_{i}\\}$, let
$\\{\widehat{X}_{\alpha_{i}}\\}$ be the sublevel filtration as described in
Section 3.1, i.e.
$\widehat{X}_{\alpha_{0}}\subset\widehat{X}_{\alpha_{1}}\subset\ldots\subset\widehat{X}_{\alpha_{N}}$.
Now, let $\mathcal{B}_{k}(\alpha_{i})$ be the $k^{th}$ Betti number of
$\widehat{X}_{\alpha_{i}}$. This defines a function
$\mathcal{B}_{k}:[\alpha_{0},\infty)\to\mathbf{N}$, called the $k^{th}$ Betti
function of the filtration $(X,f,\mathcal{I})$, as follows:
$\mathcal{B}_{k}(t)=\mathcal{B}_{k}(\alpha_{j})$ where $\alpha_{j}$ is the
largest number in $\mathcal{I}$ with $\alpha_{j}\leq t$. By definition,
$\mathcal{B}_{k}$ is a piecewise constant function which only changes values
at $\mathcal{I}$. See the black function in Figure 1.
Let $PD_{k}(X)$ be the persistence diagram of $(X,f)$ at the $k^{th}$ level.
Here, we give the construction of the Saw Function $\mathcal{S}_{0}(t)$ for
$k=0$; higher levels are similar.
Let $PD_{0}(X)=\\{(b_{j},d_{j})\\}$ where $b_{j},d_{j}$ represent the birth
and death of the $j^{th}$ component in the filtration. Notice that
$b_{j},d_{j}\in\mathcal{I}=\\{\alpha_{i}\\}$ for any $j$. Let
$\mathbf{b}(\alpha_{i})=\sharp\\{b_{j}=\alpha_{i}\\}$ and
$\mathbf{d}(\alpha_{i})=\sharp\\{d_{j}=\alpha_{i}\\}$, i.e.
$\mathbf{b}(\alpha_{i})$ is the number of births at
$t\in(\alpha_{i-1},\alpha_{i}]$, and $\mathbf{d}(\alpha_{i})$ is the number of
deaths at $t\in(\alpha_{i-1},\alpha_{i}]$. The regular Betti functions only
capture the information
$\mathbf{b}(\alpha_{i})-\mathbf{d}(\alpha_{i})=\mathcal{B}_{k}(\alpha_{i})-\mathcal{B}_{k}(\alpha_{i-1})$.
With Saw Functions, we keep the number of births and deaths in the intervals
$(\alpha_{i-1},\alpha_{i}]$, and embed it naturally in the function as
follows.
To define Saw Functions, we introduce a simple generator function for each
point $(b_{j},d_{j})\in PD_{0}(X)$. Let $\chi_{j}=\chi(I_{j})$ be the
characteristic function of the half-closed interval $I_{j}=[b_{j},d_{j})$,
i.e. $\chi_{j}(t)=1$ for $t\in I_{j}$, and $\chi_{j}(t)=0$ otherwise. Notice
that $\mathcal{B}_{0}(t)=\sum_{j}\chi_{j}$ by construction.
To include the counts of births and deaths in the function, we modify
$\chi_{j}$. In general, for a given filtration $(X,f,\mathcal{I})$, we choose
a suitable lag parameter as follows: assuming the thresholds
$\\{\alpha_{i}\\}$ are somewhat evenly distributed, as in the generic case, we
let $\lambda\sim\frac{1}{4}\mbox{average}(\\{|\alpha_{i+1}-\alpha_{i}|\\})$.
The reason for the coefficient $1/4$, and how to choose $\lambda$ in the
general case, is explained in Remark 4.1. For simplicity, we assume our
thresholds are integer valued with $\alpha_{i+1}-\alpha_{i}=1$. One can think
of $X$ as a graph $G$ and $f$ as the degree or eccentricity function on the
vertices of $G$. For this case, set $\lambda=1/4$. Then, modify $\chi_{j}$ so
that it ramps up linearly right after the birth and ramps down linearly just
before the death:
$\widehat{\chi}_{j}(t)=\left\\{\begin{array}[]{cc}4(t-b_{j})&t\in[b_{j},b_{j}+\frac{1}{4}]\\\
4(d_{j}-t)&t\in[d_{j}-\frac{1}{4},d_{j}]\\\
\chi_{j}(t)&\mbox{otherwise}\end{array}\right.$
Now, define the Saw Function by adding up each generator $\widehat{\chi}_{j}$
for each $(b_{j},d_{j})\in PD_{0}(X)$ as follows:
$\mathcal{S}_{0}(t)=\sum_{j}\widehat{\chi}_{j}(t)$. For general $k$, we have
$\mathcal{S}_{k}(t)=\sum_{(b^{k}_{j},d^{k}_{j})\in
PD_{k}(X)}\widehat{\chi}^{k}_{j}(t),$
where $PD_{k}(X)$ is the $k^{th}$ persistence diagram of $X$.
Figure 1: $\mathcal{B}_{0}(t)$ is a step function with
$\mathcal{B}_{0}(1)=30$, $\mathcal{B}_{0}(2)=25$, $\mathcal{B}_{0}(3)=15$, and
$\mathcal{B}_{0}(4)=20$. We get $\mathcal{S}_{0}(t)$ by modifying
$\mathcal{B}_{0}(t)$ with birth and death times $\mathbf{b}(n),\mathbf{d}(n)$.
Here, $\mathbf{b}(0)=30$, $\mathbf{b}(1)=15$, $\mathbf{b}(2)=10$,
$\mathbf{b}(3)=5$, $\mathbf{d}(1)=20$, $\mathbf{d}(2)=20$, $\mathbf{d}(3)=0$
and $\mathbf{d}(4)=10$.
Notice that with this definition, we have a piecewise linear function
$\mathcal{S}_{k}(t)$ with the following properties. For
$n\in\mathcal{I}\subset\mathbf{N}$,
* •
$\mathcal{S}_{k}(t)=\mathcal{B}_{k}(t)$ for
$t\in[n+\frac{1}{4},n+\frac{3}{4}]$
* •
$\mathcal{B}_{k}(n)-\mathcal{S}_{k}(n)=\mathbf{b}_{k}(n)$ where
$\mathbf{b}_{k}(n)$ is the number of births of $k$-cycles at time $t\in(n-1,n]$.
* •
$\mathcal{B}_{k}(n-1)-\mathcal{S}_{k}(n)=\mathbf{d}_{k}(n)$ where
$\mathbf{d}_{k}(n)$ is the number of deaths of $k$-cycles at time $t\in(n-1,n]$.
Now, when the filter function is integer valued with $\lambda={1}/{4}$, the
explicit description of the Saw Function $\mathcal{S}_{k}(t)$ is as follows.
For any $n\in\mathcal{I}$ and $t\geq\alpha_{0}$, we have
$\mathcal{S}_{k}(t)=\left\\{\begin{array}[]{ll}\mathcal{B}_{k}(t)-4\mathbf{d}_{k}(n)(t-n+\frac{1}{4})&t\in[n-\frac{1}{4},n)\\\
\mathcal{B}_{k}(t)-\mathbf{b}_{k}(n)+4\mathbf{b}_{k}(n)(t-n)&t\in[n,n+\frac{1}{4}]\\\
\mathcal{B}_{k}(t)&\mbox{ otherwise}\\\ \end{array}\right.$
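A minimal numerical sketch of this construction (our own illustration, assuming numpy): each $(b_{j},d_{j})$ contributes a generator that ramps up over $[b_{j},b_{j}+\lambda]$, equals 1 in between, and ramps down over $[d_{j}-\lambda,d_{j}]$; the Saw Function is their sum.

```python
import numpy as np

def hat_chi(t, b, d, lam=0.25):
    """Modified generator: ramps up on [b, b+lam], down on [d-lam, d], 1 in between."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mid = (t > b + lam) & (t < d - lam)
    up = (t >= b) & (t <= b + lam)
    down = (t >= d - lam) & (t <= d)
    out[mid] = 1.0
    out[up] = (t[up] - b) / lam
    out[down] = (d - t[down]) / lam
    return out

def saw_function(t, pd, lam=0.25):
    """S_0(t): sum of modified generators over the finite (birth, death) pairs."""
    return sum(hat_chi(t, b, d, lam) for (b, d) in pd)

# Example: four 0-dimensional classes with integer birth/death thresholds.
pd = [(0, 2), (0, 3), (1, 2), (1, 3)]
grid = np.linspace(0, 3, 13)                   # step 0.25
print(np.round(saw_function(grid, pd), 2))
# At t = 2 the value dips to B(1) - d(2) = 4 - 2 = 2, reflecting the two deaths there.
```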
The novelty of Saw Functions is that they transform most of the information
produced by persistence diagrams, in a simple way, into interpretable
functions. As described earlier, if one considers persistent homology as
machinery that keeps track of the complexity changes with respect to the
filter function, Saw Functions summarize the output in an interpretable way:
one immediately reads off the number of live topological features at a given
time, as well as the numbers of births and deaths at a given instance.
###### Remark 4.1 (Choice of the Lag Parameter $\lambda$).
In our sample construction, we chose
$\lambda\sim\frac{1}{4}\mbox{average}(\\{|\alpha_{i+1}-\alpha_{i}|\\})$ under
the assumption that the thresholds $\\{\alpha_{i}\\}$ are evenly distributed.
The reason for the coefficient $1/4$ should be clear from Figure 1. The lag
parameter $\lambda>0$ marks two points in the interval
$(\alpha_{i},\alpha_{i+1})$, e.g.
$\alpha_{i}<\alpha_{i}+\lambda<\alpha_{i+1}-\lambda<\alpha_{i+1}$. Notice that
on the interval $[\alpha_{i}+\lambda,\alpha_{i+1}-\lambda]$, our Saw Functions
coincide with the Betti functions. We chose $\lambda=1/4$ in our setting so
that the length ($\alpha_{i+1}-\alpha_{i}-2\lambda$) of the interval
$[\alpha_{i}+\lambda,\alpha_{i+1}-\lambda]$ equals the length ($2\lambda$) of
the interval $(\alpha_{i}-\lambda,\alpha_{i}+\lambda)$ on which the Betti
function and the Saw Function differ, i.e.
$\alpha_{i+1}-\alpha_{i}-2\lambda=2\lambda$. Depending on the threshold set
$\\{\alpha_{i}\\}$ and the problem at hand, one can also choose varying lag
parameters $\\{\lambda_{i}\\}$, i.e.
$(\alpha_{i}-\lambda_{i},\alpha_{i}+\lambda_{i})$.
### 4.2 Comparison with Similar Summaries:
In applications, since we also embed birth and death information into the
function, comparing persistent homologies of different spaces via Saw
Functions as their vectorization gives a finer understanding of their
differences and produces finer descriptors. In particular, if we compare Saw
Functions with Betti functions, Persistence Curves, and similar summaries
(Umeda, 2017; Chen et al., 2020), the main differences are two-fold.
In non-technical terms, the summaries above are piecewise constant and do not
store information on the birth/death times in the threshold intervals
$(\alpha_{i},\alpha_{i+1})$. Embedding this information into Saw Functions
gives more thorough information about the topological changes near the
threshold instances. If the numbers of births and deaths at a threshold
instance are both high, Betti functions and similar summaries see only their
difference. In Saw Functions, such high numbers around a threshold instance
can be interpreted as an anomaly (or high activity) in the slice
$f^{-1}((\alpha_{i},\alpha_{i+1}))$, and the deep zigzag pieces exactly
quantify this anomaly.
In technical terms, we can describe this difference as follows. Let
$(X^{+},f^{+},\mathcal{I}^{+})$ and $(X^{-},f^{-},\mathcal{I}^{-})$ describe
two filtrations. For simplicity, assume
$\mathcal{I}^{+}=\mathcal{I}^{-}=\mathcal{I}$. Assume that they both induce
the same Betti functions $\mathcal{B}_{k}^{+}(t)=\mathcal{B}_{k}^{-}(t)$ for
$t\in[\alpha_{0},\infty)$ at the $k^{th}$ level. Notice that this implies that
$\mathbf{b}^{+}_{k}(\alpha_{i})-\mathbf{d}^{+}_{k}(\alpha_{i})=\mathbf{b}^{-}_{k}(\alpha_{i})-\mathbf{d}^{-}_{k}(\alpha_{i})$
for any $i$, by definition. On the other hand, as Saw Functions store the
numbers of births and deaths, $\mathcal{S}^{+}_{k}(t)$ may be very different
from $\mathcal{S}^{-}_{k}(t)$, as follows.
To see the difference in the $L^{1}$-metric, let
$d_{1}(f,g)=\int_{\alpha_{0}}^{\infty}|f(t)-g(t)|\,dt$ represent the
$L^{1}$-distance between $f$ and $g$. Then, we have
$d_{1}(\mathcal{S}^{+}_{k},\mathcal{S}^{-}_{k})=\sum_{i=1}^{N}\lambda\cdot|\mathbf{b}^{+}_{k}(\alpha_{i})-\mathbf{b}^{-}_{k}(\alpha_{i})|=\sum_{i=1}^{N}\lambda\cdot|\mathbf{d}^{+}_{k}(\alpha_{i})-\mathbf{d}^{-}_{k}(\alpha_{i})|$.
Here, the second equality comes from the fact that
$\mathbf{b}^{+}_{k}(\alpha_{i})-\mathbf{d}^{+}_{k}(\alpha_{i})=\mathbf{b}^{-}_{k}(\alpha_{i})-\mathbf{d}^{-}_{k}(\alpha_{i})$
by assumption. In other words, while the two filtrations induce exactly the
same Betti functions $\mathcal{B}^{+}_{k}(t)=\mathcal{B}^{-}_{k}(t)$, the
induced Saw Functions can be very far from each other in the space of
functions, which shows that they give much finer descriptors.
Similarly, from the $L^{2}$-metric perspective, Saw Functions produce much
finer information than similar summaries in the literature because of the
slope information in the zigzags. In particular, let
$d_{2}(f,g)=\int_{\alpha_{0}}^{\infty}|f(t)-g(t)|+|f^{\prime}(t)-g^{\prime}(t)|\,dt$
be the $L^{2}$-distance. With the notation above,
$(X^{\pm},f^{\pm},\mathcal{I})$ induce exactly the same Betti functions
$\mathcal{B}^{+}_{k}(t)=\mathcal{B}^{-}_{k}(t)$, while
$d_{2}(\mathcal{S}^{+}_{k},\mathcal{S}^{-}_{k})=\sum_{i=1}^{N}(\lambda+1)\cdot|\mathbf{b}^{+}_{k}(\alpha_{i})-\mathbf{b}^{-}_{k}(\alpha_{i})|$.
Notice that the distance is much higher in the $L^{2}$-metric, as the lag
parameter $\lambda$ can be a small number. In practice, if the numbers of
births and deaths near a threshold, $\mathbf{b}_{k}(\alpha_{i})$ and
$\mathbf{d}_{k}(\alpha_{i})$, are both large and close to each other, Betti
functions and similar summaries do not see this phenomenon, as they only
detect the difference
$\mathbf{b}_{k}(\alpha_{i})-\mathbf{d}_{k}(\alpha_{i})$. However, in Saw
Functions, we observe a steep slope and a deep zigzag near the threshold
$\alpha_{i}$, which can be interpreted as “high activity” or an “anomaly” near
the slice $f^{-1}((\alpha_{i-1},\alpha_{i}))$, and which can be an effective
tool for classification purposes.
Tension at a threshold: Motivated by the idea above, we define the $k$-tension
of $\alpha_{i}$ as
$\tau_{k}(\alpha_{i})=\mathbf{b}_{k}(\alpha_{i})+\mathbf{d}_{k}(\alpha_{i})$,
which represents the amount of activity (total number of births and deaths)
in the interval $(\alpha_{i-1},\alpha_{i}]$. As discussed above, while Betti
functions only detect the difference
$\mathbf{b}_{k}(\alpha_{i})-\mathbf{d}_{k}(\alpha_{i})$, Saw Functions also
detect the sum
$\tau_{k}(\alpha_{i})=\mathbf{b}_{k}(\alpha_{i})+\mathbf{d}_{k}(\alpha_{i})$
at a given threshold. In particular, the tension $\tau_{k}(\alpha_{i})$
measures the steepness of the zigzag at $\alpha_{i}$, as it is proportional to
the sum of the absolute values of the slopes of the Saw Function at that
point.
Here, we only discuss the comparison of Saw Functions with Betti functions,
but similar arguments can also be used to compare other persistence curves or
similar vectorizations.
Another advantage of Saw Functions over other common summaries is that one
does not need to choose suitable parameters or kernels in the process. In
other summaries, one needs to make fine-tuned choices of the parameters or
kernel functions, which can highly distort the output if not done properly.
Saw Functions as Persistence Curves:
If we consider Saw Functions in the Persistence Curves Framework (Chung and
Lawson, 2019), our Saw Functions can be interpreted as a new Persistence Curve
with the following inputs. With their notation, our Saw Function
$\mathcal{S}_{k}(t)$ corresponds to the persistence curve with generating
function $\psi(b,d,t)=\widehat{\chi}_{(b,d)}(t)$ (defined in Section 4.1)
and summary statistic $T=\sum$ (summation), i.e.
$\mathcal{S}_{k}(t)=\sum_{(b,d)\in PD}\widehat{\chi}_{(b,d)}(t)$. Note that
with this notation the Betti function $\mathcal{B}_{k}(t)=\sum_{(b,d)\in
PD}\chi_{(b,d)}(t)$ where $\chi_{(b,d)}(t)$ is the usual characteristic
function for interval $[b,d)$, i.e. $\chi_{(b,d)}(t)=1$ when $t\in[b,d)$ and
$\chi_{(b,d)}(t)=0$ otherwise.
### 4.3 Stability of Saw Functions
In this part, we discuss the stability of Saw Functions as vectorizations of
persistence diagrams. In other words, we compare the change in persistence
diagrams with the change they cause in Saw Functions.
Let $(X^{+},f^{+},\mathcal{I}^{+})$ and $(X^{-},f^{-},\mathcal{I}^{-})$ define
two filtrations. Let $PD_{k}(X^{\pm})$ be the corresponding persistence
diagrams in $k^{th}$ level. Define $p^{th}$ Wasserstein distance
$\mathcal{W}_{p}$ between persistence diagrams as follows. We use
$\mathcal{W}_{p}(X^{+},X^{-})$ notation instead of
$\mathcal{W}_{p}(PD_{k}(X^{+}),PD_{k}(X^{-}))$ for short. Let
$PD_{k}(X^{+})=\\{q_{j}^{+}\\}\cup\Delta^{+}$ and
$PD_{k}(X^{-})=\\{q_{l}^{-}\\}\cup\Delta^{-}$ where $\Delta^{\pm}$ represents
the diagonal with infinite multiplicity. Here,
$q_{j}^{+}=(b^{+}_{j},d_{j}^{+})\in PD_{k}(X^{+})$ represents the birth and
death times of a $k$-dimensional cycle $\sigma_{j}$. Let
$\phi:PD_{k}(X^{+})\to PD_{k}(X^{-})$ represent a bijection (matching). With
the diagonal $\Delta^{\pm}$ included on both sides, such bijections exist even
if the cardinalities $|\\{q_{j}^{+}\\}|$
and $|\\{q_{l}^{-}\\}|$ are different. Then,
$\mathcal{W}_{p}(X^{+},X^{-})=\min_{\phi}(\sum_{j}\|q_{j}^{+}-\phi(q_{j}^{+})\|_{\infty}^{p})^{\frac{1}{p}},\quad
p\in\mathbb{Z}^{+}.$
In turn, the bottleneck distance is
$\mathcal{W}_{\infty}(X^{+},X^{-})=\max_{j}\|q_{j}^{+}-\phi(q_{j}^{+})\|_{\infty}$.
We compare the Wasserstein distance between persistence diagrams with the
distance between the corresponding Saw Functions. Let
$\mathcal{S}_{k}^{\pm}(t)$ be the $k^{th}$ Saw Functions as defined above.
Assuming $\mathcal{S}^{\pm}=0$ outside their domains $I^{\pm}$, the
$L^{p}$-distance between these two functions is defined as
$d_{p}(\mathcal{S}^{+},\mathcal{S}^{-})=\biggl{(}\int_{I^{+}\cup
I^{-}}|\mathcal{S}^{+}(t)-\mathcal{S}^{-}(t)|^{p}\ dt\biggr{)}^{\frac{1}{p}}.$
Now, we state our stability result:
###### Theorem 4.1.
Let $(X^{\pm},f^{\pm},\mathcal{I}^{\pm})$ be as defined in previous paragraph.
Then, Saw Functions are stable for $p=1$ and unstable for $p=\infty$; i.e.,
$d_{1}(\mathcal{S}^{+}(t),\mathcal{S}^{-}(t))\leq
C\cdot\mathcal{W}_{1}(X^{+},X^{-}).$
Proof: For simplicity, we first prove the result for Betti functions, and
modify it for Saw Functions. Notice that $\mathcal{B}_{k}(t)=\sum_{i}\chi_{i}$
and $\mathcal{S}_{k}(t)=\sum_{i}\widehat{\chi}_{i}$, where the index $i$
runs over the points of $PD_{k}(X)=\\{q_{i}\\}$ with $q_{i}=(b_{i},d_{i})$.
That is, $\chi_{i}$ is the characteristic function of the interval
$[b_{i},d_{i})$. Then,
$|\mathcal{B}_{k}^{+}(t)-\mathcal{B}_{k}^{-}(t)|\leq|\sum_{i}\chi^{+}_{i}-\sum_{j}\chi_{j}^{-}|\leq\sum_{i}|\chi^{+}_{i}-\chi^{-}_{\phi(i)}|$
(1)
Notice that given $q^{+}=(b^{+},d^{+})$ and $q^{-}=(b^{-},d^{-})$, assuming
$d^{-}\leq d^{+}$, there are two cases for $\chi^{+}-\chi^{-}$. In case 1,
$d^{-}\leq b^{+}$, which gives
$\int_{\alpha_{0}}^{\infty}|\chi^{+}-\chi^{-}|=|d^{+}-b^{+}|+|d^{-}-b^{-}|\leq|d^{+}-d^{-}|+|b^{+}-b^{-}|$.
In case 2, $d^{-}>b^{+}$ and we get
$\int_{\alpha_{0}}^{\infty}|\chi^{+}-\chi^{-}|=|d^{+}-d^{-}|+|b^{+}-b^{-}|$.
Hence,
$\int_{\alpha_{0}}^{\infty}|\chi^{+}_{i}(t)-\chi^{-}_{\phi(i)}(t)|\,dt\leq|b_{i}^{+}-b_{\phi(i)}^{-}|+|d_{i}^{+}-d_{\phi(i)}^{-}|\leq 2\max\bigl\{|b_{i}^{+}-b_{\phi(i)}^{-}|,|d_{i}^{+}-d_{\phi(i)}^{-}|\bigr\}=2\|q^{+}_{i}-q^{-}_{\phi(i)}\|_{\infty}. \qquad (2)$
Now, in view of (1), we obtain
$d_{p}(\mathcal{B}_{k}^{+}(t),\mathcal{B}_{k}^{-}(t))=\biggl(\int_{\alpha_{0}}^{\infty}|\mathcal{B}_{k}^{+}(t)-\mathcal{B}_{k}^{-}(t)|^{p}\,dt\biggr)^{\frac{1}{p}}\leq\biggl(\int_{\alpha_{0}}^{\infty}\Bigl[\sum_{i}|\chi^{+}_{i}(t)-\chi^{-}_{\phi(i)}(t)|\Bigr]^{p}\,dt\biggr)^{\frac{1}{p}}.$
By (2), the stability result for $p=1$ follows from
$d_{1}(\mathcal{B}_{k}^{+}(t),\mathcal{B}_{k}^{-}(t))=\int_{\alpha_{0}}^{\infty}\sum_{i}|\chi^{+}_{i}(t)-\chi^{-}_{\phi(i)}(t)|\,dt\leq\sum_{i}2\|q^{+}_{i}-q^{-}_{\phi(i)}\|_{\infty}=2\mathcal{W}_{1}(X^{+},X^{-}).$
This finishes the proof for Betti functions. Notice that Equation 2 is still
true when we replace $\chi^{\pm}_{i}$ with $\widehat{\chi}_{i}^{\pm}$. Then,
when we replace Betti functions $\mathcal{B}^{\pm}_{k}(t)$ with
$\mathcal{S}^{\pm}_{k}(t)$ and $\chi^{\pm}_{i}$ with
$\widehat{\chi}_{i}^{\pm}$ in the inequalities above, we get the same result
for Saw Functions, i.e.
$d_{1}(\mathcal{S}_{k}^{+}(t),\mathcal{S}_{k}^{-}(t))\leq
2\mathcal{W}_{1}(X^{+},X^{-})$. The proof follows. $\Box$
Note that when $p>1$, Equation 4.3 above may no longer be true. So, the
stability for $1<p<\infty$ is not known. For $p=\infty$, we have the following
counterexample.
Counterexample for $p=\infty$ Case:
Here, we describe a counterexample showing that the analogous statement for
$p=\infty$, i.e.
$d_{\infty}(\mathcal{S}^{+}(t),\mathcal{S}^{-}(t))\leq
C\cdot\mathcal{W}_{\infty}(X^{+},X^{-})$, cannot hold. We define two persistence diagrams,
where one has $n$ off-diagonal elements, and the other has no off-diagonal
element. Let $PD_{k}(X^{+})=\\{(1,2)^{n}\\}\cup\Delta$, i.e. $(1,2)$ has
multiplicity $n$. Let $PD_{k}(X^{-})=\Delta$, i.e. only diagonal elements.
Note that it is straightforward to construct such $X^{\pm}$ for any $k$ and $n$.
Then, as $d_{\infty}((1,2),\Delta)=\sqrt{2}/2$, we have
$\mathcal{W}_{\infty}(X^{+},X^{-})=\sqrt{2}/2$. On the other hand, each copy
of the point $(1,2)$ contributes $1$ to the Betti function in the interval
$[1,2)$. This means $\mathcal{B}^{+}_{k}(t)=n$ for $t\in[1,2)$ while
$\mathcal{B}^{-}_{k}(t)=0$ for any $t$. This gives
$d_{\infty}(\mathcal{S}_{k}^{+},\mathcal{S}_{k}^{-})=\sup_{t}|\mathcal{S}_{k}^{+}(t)-\mathcal{S}_{k}^{-}(t)|=n$
while $\mathcal{W}_{\infty}(X^{+},X^{-})=\sqrt{2}/2$. This shows that there is
no $C>0$ with $d_{\infty}(\mathcal{S}_{k}^{+}(t),\mathcal{S}_{k}^{-}(t))\leq
C\cdot\mathcal{W}_{\infty}(X^{+},X^{-})$ in general.
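The counterexample can also be checked numerically with a few lines of Python; the snippet below follows the text in taking the distance from $(1,2)$ to the diagonal to be $\sqrt{2}/2$ and compares it with the sup-distance of the Betti functions, which grows with $n$.

```python
# Numerical check of the p = infinity counterexample: PD^+ has n copies of
# (1, 2), PD^- has only the diagonal.  Following the text, matching each copy
# of (1, 2) to the diagonal costs sqrt(2)/2, while the sup-distance between
# the Betti functions equals n, so no uniform constant C can work.
import numpy as np

n = 50
W_inf = np.sqrt(2) / 2                                   # W_infty(X^+, X^-)
ts = np.linspace(0.5, 2.5, 1001)
betti_plus = np.array([n if 1 <= t < 2 else 0 for t in ts])
d_inf = betti_plus.max()                                 # B^-_k is identically 0
print(W_inf, d_inf)                                      # 0.707..., 50
```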
## 5 Multi-Persistence Grid Functions (MPGF)
In this part, we introduce our second topological summary: Multi-Persistence
Grid Functions (MPGF). As in Saw Functions, we aim to use persistent
homology as a complexity measure of evolving subspaces determined by the
filtration. In this case, instead of one filter function, we use multiple
filter functions. One can consider this approach as a vectorization of
multiparameter persistent homology.
First, to keep the setup simple, we construct MPGFs in the $2$-parameter setting.
Then, we give the natural generalization to any number of parameters. Here, we
only consider sublevel filtrations for more than one function at the same
time.
### 5.1 MPGFs for 2-parameter
A Short Outline: Before the formal construction, we give
an outline of the setup. Given two filter functions $f,g:X\to\mathbf{R}$ with
their threshold sets $\\{\alpha_{i}\\}$ and $\\{\beta_{j}\\}$ respectively,
one can chop $X$ into finer pieces $X^{ij}=\\{x\in X\mid
f(x)\leq\alpha_{i},\ g(x)\leq\beta_{j}\\}$. The subspaces $\\{X^{ij}\\}$
induce a natural filtration; however, because of the partially ordered nature of
the threshold set $\\{(\alpha_{i},\beta_{j})\\}$, obtaining persistent homology
in the multiparameter setting is highly technical (Lesnick, 2019b). We bypass these
technicalities by going directly to the subspaces $\\{X^{ij}\\}$ and
considering their Betti numbers $\mathcal{B}_{k}(X^{ij})$ as the subspaces’
topological complexity measure. Then, by using the grid
$\\{(\alpha_{i},\beta_{j})\\}$ in the threshold domain $\mathcal{R}$, we assign these
Betti numbers to the corresponding boxes $\Delta_{ij}$ as in Figure 2. This
construction defines a $2$-parameter function $\mathcal{G}_{k}(x,y)$ which we
call the MPGF.
Next is the formal construction of MPGFs. Let $X$ be a metric space, and
$f,g:X\to\mathbf{R}$ be two filter functions. Define $F:X\to\mathbf{R}^{2}$ as
$F(x)=(f(x),g(x))$. Let $\mathcal{I}_{1}=\\{\alpha_{1},...\alpha_{m_{1}}\\}$
be the set of thresholds for the first filtration $f$, where $\alpha_{1}=\min
f(x)<\alpha_{2}...<\alpha_{m_{1}}=\max f(x)$. Similarly,
$\mathcal{I}_{2}=\\{\beta_{1},...,\beta_{m_{2}}\\}$ is defined for $g$. Let
$\delta_{1}={(\alpha_{m_{1}}-\alpha_{1})}/{(m_{1}-1)}$,
$\delta_{2}={(\beta_{m_{2}}-\beta_{1})}/{(m_{2}-1)}$,
$\alpha_{0}=\alpha_{1}-\delta_{1}$, and $\beta_{0}=\beta_{1}-\delta_{2}$. We
define our MPGFs on the rectangle
$\mathcal{R}=(\alpha_{0},\alpha_{m_{1}}]\times(\beta_{0},\beta_{m_{2}}]$.
Notice that the points $\\{(\alpha_{i},\beta_{j})\\}$ give a grid in our
rectangle $\mathcal{R}$.
Let $\Delta_{ij}=(\alpha_{i-1},\alpha_{i}]\times(\beta_{j-1},\beta_{j}]$ for
$1\leq i\leq m_{1}$ and $1\leq j\leq m_{2}$. Then,
$\mathcal{R}=\bigcup_{i,j}\Delta_{ij}$. Let
$\Omega_{ij}=(\alpha_{0},\alpha_{i}]\times(\beta_{0},\beta_{j}]$ (see Figure
2).
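A minimal Python sketch of this grid construction (our own helper, not from the paper's code) computes $\delta_{1},\delta_{2}$, prepends $\alpha_{0}$ and $\beta_{0}$, and enumerates the boxes $\Delta_{ij}$:

```python
# A small helper (ours): compute delta_1, delta_2, prepend alpha_0 and beta_0,
# and list the boxes Delta_ij = (alpha_{i-1}, alpha_i] x (beta_{j-1}, beta_j].
import numpy as np

def grid_boxes(alphas, betas):
    a, b = np.asarray(alphas, float), np.asarray(betas, float)
    d1 = (a[-1] - a[0]) / (len(a) - 1)
    d2 = (b[-1] - b[0]) / (len(b) - 1)
    a = np.concatenate(([a[0] - d1], a))          # prepend alpha_0
    b = np.concatenate(([b[0] - d2], b))          # prepend beta_0
    return [((a[i - 1], a[i]), (b[j - 1], b[j]))
            for i in range(1, len(a)) for j in range(1, len(b))]

print(len(grid_boxes([0, 1, 2, 3], [0, 2, 4])))   # 4 * 3 = 12 boxes
```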
Figure 2: Filtration. The rectangle $\mathcal{R}$ is the domain of $\mathcal{G}_{k}$
and consists of $m_{1}\times m_{2}$ small rectangles $\\{\Delta_{ij}\\}$.
$\mathcal{G}_{k}$ is constant on each $\Delta_{ij}$, where its value is the $k^{th}$ Betti
number of the complex coming from $F^{-1}(\Omega_{ij})$.
Define $X^{ij}=\\{x\in X\mid f(x)\leq\alpha_{i}\mbox{ and
}g(x)\leq\beta_{j}\\}$, i.e. $X^{ij}=F^{-1}(\Omega_{ij})$. Then, let
$\widehat{X}^{ij}$ be the complex generated by $X^{ij}$. Here, $\widehat{X}^{ij}$
can be the set $X^{ij}$ itself, or the Vietoris–Rips (VR) or a similar complex induced by $X^{ij}$.
Then, for each fixed $i_{0}$,
$\widehat{X}^{i_{0}1}\subset\widehat{X}^{i_{0}2}\subset...\subset\widehat{X}^{i_{0}m_{2}}$
gives a single parameter filtration for $(X^{i_{0}},g)$ where
$X^{i_{0}}=f^{-1}((-\infty,\alpha_{i_{0}}])$.
###### Definition 5.1.
Let $\mathcal{B}_{k}(\widehat{X}^{ij})$ be the $k^{th}$-Betti number of
$\widehat{X}^{ij}$. Define $k^{th}$ MPGF of the filtration for the filter
function $F:X\to\mathbf{R}^{2}$ as
$\mathcal{G}_{k}(x)=\mathcal{B}_{k}(\widehat{X}^{ij})$
for any $x\in\Delta_{ij}$.
That is, to every rectangle $\Delta_{ij}$ of the threshold domain we
assign the Betti number of the space $\widehat{X}^{ij}$ corresponding to these
thresholds. Notice that $\mathcal{G}_{k}:\mathcal{R}\to\mathbf{N}$ is
constant on each small rectangle $\Delta_{ij}\subset\mathcal{R}$. The
algorithm is outlined in Alg. 1.
Complexity: Algorithm 1 requires computation of Betti numbers at each of the
$m_{1}\times m_{2}$ grid cells. Betti computations require finding the rank of
the homology group, which has a complexity of $\tilde{O}(|A|+r^{\omega})$,
where $|A|$ is the number of non-zero entries of the boundary matrix, $r$ is its
rank, and $\omega<2.38$ is the matrix multiplication exponent (Cheung et al.,
2013).
Example: In Figure 3, we describe a simple example to show how MPGF works. In
the example, $X$ is a graph $G$, and $f,g:V\to\mathbf{N}$ are integer-valued
functions on the vertices. Here, $\widehat{X}^{ij}$ is the subgraph $G^{ij}$
generated by $V^{ij}=\\{v\in V\mid f(v)\leq i,\ g(v)\leq j\\}$. In the grid,
we count the number of topological features in $X^{ij}$ and record this
number in the box
$\Delta_{ij}=(\alpha_{i-1},\alpha_{i}]\times(\beta_{j-1},\beta_{j}]$. In the
example, we show the MPGF $\mathcal{G}_{0}$ for $0$-cycles (connected components).
Why are MPGFs useful? A multiparameter filtration slices up the space $X$ in a much
finer and more controlled way than a single parameter filtration, so that we can see
the topological changes with respect to both functions at the same time. Then,
by using MPGFs for the corresponding multiparameter persistence, we can
analyze and interpret both functions’ behavior on the space $X$, and the
interrelation between the two functions. In many cases, we have several important
descriptors that are not directly connected but each give very
valuable information on the dataset. In that case, when both functions are
used for multiparameter persistence, together they provide
much finer information on the dataset, and the functions’ undisclosed
relationship with each other can be observed. One very popular case is
time-series datasets, where one function is time and the other is another
important qualitative descriptor of the dataset.
Figure 3: Grid. In the graph $G$, black numbers are node degrees whereas red
numbers are node eccentricities (i.e., the max distance to any node in the graph).
On the right, $G^{i}$ represents the subgraph corresponding to the sublevel set
$f(v)\leq i$, where $f$ is the degree function. The subgraph $G^{ij}$ of
$G^{i}$ is defined by $g(v)\leq j$, where $g$ is the eccentricity function.
MPGF $\mathcal{G}_{0}(\Delta_{ij})=\mathcal{B}_{0}(G^{ij})$.
Algorithm 1 Multi-Persistence Graph Classification
Input: graph $G$, functions $f,g$, grid sizes $m_{1},m_{2}$.
Output: A matrix of Betti 0 and Betti 1 values defined for a grid filtration.
Compute $f$ and $g$ values for all $v\in G$
Fix thresholds $\alpha_{i}$ for $1\leq i\leq m_{1}$ and $\beta_{j}$ for $1\leq j\leq m_{2}$.
Initialize matrix $grid$
for $1\leq i\leq m_{1}$ do
Define $G^{i}=\\{v\in G\mid f(v)\leq\alpha_{i}\\}$.
Set $G^{i}$ as the original input space for the filtration function
$g:G^{i}\to\mathbf{R}$
for $1\leq j\leq m_{2}$ do
Set subgraph $G^{ij}=\\{v\in G^{i}\mid g(v)\leq\beta_{j}\\}$.
$grid[i,j,0]=\mathcal{B}_{0}(G^{ij})$
$grid[i,j,1]=\mathcal{B}_{1}(G^{ij})$
end for
end for
RETURN $grid$
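Below is a hedged Python sketch of Algorithm 1 using networkx rather than the authors' R implementation. It uses the fact that for a graph (a $1$-dimensional complex) $\mathcal{B}_{0}$ is the number of connected components and $\mathcal{B}_{1}=|E|-|V|+\mathcal{B}_{0}$; the second filter $g$ in the example is hypothetical.

```python
# A hedged sketch of Algorithm 1 with networkx (the paper's implementation is
# in R).  For a graph, Betti 0 = number of connected components and
# Betti 1 = |E| - |V| + Betti 0.
import numpy as np
import networkx as nx

def mpgf_grid(G, f, g, alphas, betas):
    """f, g: dicts mapping each node to its filter value; alphas, betas:
    increasing threshold lists.  Returns an (|alphas|, |betas|, 2) array."""
    grid = np.zeros((len(alphas), len(betas), 2), dtype=int)
    for i, a in enumerate(alphas):
        Gi = G.subgraph([v for v in G.nodes if f[v] <= a])         # G^i
        for j, b in enumerate(betas):
            Gij = Gi.subgraph([v for v in Gi.nodes if g[v] <= b])  # G^{ij}
            b0 = nx.number_connected_components(Gij)
            b1 = Gij.number_of_edges() - Gij.number_of_nodes() + b0
            grid[i, j] = (b0, b1)
    return grid

G = nx.cycle_graph(4)
f = dict(G.degree())                      # degree filter, as in Figure 3
g = {v: v for v in G.nodes}               # hypothetical second filter
print(mpgf_grid(G, f, g, alphas=[1, 2], betas=[1, 3])[1, 1])   # [1 1] for C_4
```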
### 5.2 MPGFs for any number of parameters
We generalize the idea above in any number of parameters as follows: Let
$F:X\to\mathbf{R}^{d}$ be our filter function. In particular, let
$F=(f_{1},f_{2},..,f_{d})$. Let $\mathcal{I}_{i}=\\{\alpha_{ij}\mid 1\leq
j\leq m_{i}\\}$ be the thresholds of the filter function $f_{i}$. In
particular, $\\{\alpha_{i_{0}j}\\}$ are the thresholds of $f_{i_{0}}$ where
$\alpha_{i_{0}1}=\min_{X}f_{i_{0}}(x)<\alpha_{i_{0}2}<\ldots<\alpha_{i_{0}m_{i_{0}}}=\max_{X}f_{i_{0}}(x)$.
Again, let $\delta_{i}={(\alpha_{im_{i}}-\alpha_{i1})}/{(m_{i}-1)}$, and let
$\alpha_{i0}=\alpha_{i1}-\delta_{i}$. Define the $d$-dimensional rectangular box
$\mathcal{R}=(\alpha_{10},\alpha_{1m_{1}}]\times(\alpha_{20},\alpha_{2m_{2}}]\times\ldots\times(\alpha_{d0},\alpha_{dm_{d}}].$
Notice that the $d$-tuples
$\\{(\alpha_{1j_{1}},\alpha_{2j_{2}},...\alpha_{dj_{d}})\\}$ give a
$d$-dimensional grid in $\mathcal{R}$, where $1\leq j_{i}\leq m_{i}$. Similarly, define
the small $d$-dimensional boxes
$\Delta_{j_{1}j_{2}..j_{d}}=(\alpha_{1(j_{1}-1)},\alpha_{1j_{1}}]\times(\alpha_{2(j_{2}-1)},\alpha_{2j_{2}}]\times\ldots\times(\alpha_{d(j_{d}-1)},\alpha_{dj_{d}}].$
Then, again we have
$\mathcal{R}=\bigcup_{j_{1},..j_{d}}\Delta_{j_{1}j_{2}..j_{d}}$. In other
words, $\mathcal{R}$ is the union of $N=m_{1}\cdot m_{2}\cdot...\cdot m_{d}$
small boxes.
Similarly, define the subspace $X^{j_{1}j_{2}..j_{d}}=\\{x\in X\mid
f_{i}(x)\leq\alpha_{ij_{i}}\mbox{ for }1\leq i\leq d\\}$ of $X$ induced by the
thresholds $j_{1},j_{2},...,$ and $j_{d}$. Let
$\widehat{X}^{j_{1}j_{2}..j_{d}}$ be the complex induced by
$X^{j_{1}j_{2}..j_{d}}$. Here, $\widehat{X}^{j_{1}j_{2}..j_{d}}$ can be the
subspace $X^{j_{1}j_{2}..j_{d}}$ itself, or the VR (or similar) complex
induced by $X^{j_{1}j_{2}..j_{d}}$.
Let $\mathcal{B}_{k}(\widehat{X}^{j_{1}j_{2}..j_{d}})$ be the $k^{th}$ Betti number
of the space $\widehat{X}^{j_{1}j_{2}..j_{d}}$. Define the $k^{th}$ MPGF of the
filtration $F:X\to\mathbf{R}^{d}$ as
$\mathcal{G}_{k}(x)=\mathcal{B}_{k}(\widehat{X}^{j_{1}j_{2}..j_{d}})$
for any $x\in\Delta_{j_{1}j_{2}..j_{d}}$. Notice that
$\mathcal{G}_{k}:\mathcal{R}\to\mathbf{N}$ is constant on each small box
$\Delta_{j_{1}j_{2}..j_{d}}\subset\mathcal{R}$.
Algorithm for MPGFs in any number of parameters: We fix $j_{1},...,j_{d-1}$
for $1\leq j_{i}\leq m_{i}$. Define $X^{j_{1}j_{2}..j_{d-1}}=\\{x\in X\mid
f_{i}(x)\leq\alpha_{ij_{i}}\mbox{ for }1\leq i\leq d-1\\}$. Then, set
$X^{j_{1}j_{2}..j_{d-1}}$ as the original input space for the filtration
function $f_{d}:X^{j_{1}j_{2}..j_{d-1}}\to\mathbf{R}$. Compute the persistence
diagrams, and corresponding Betti functions for this filtration. Then, in
particular $\widehat{X}^{j_{1}j_{2}..j_{d-1}\widehat{j}}$ is the
$\widehat{j}^{th}$ complex in this filtration.
$\mathcal{G}_{k}(\Delta_{j_{1}j_{2}..j_{d-1}\widehat{j}})=\mathcal{B}_{k}(\widehat{X}^{j_{1}j_{2}..j_{d-1}\widehat{j}})$,
the $k^{th}$ Betti number of $\widehat{X}^{j_{1}j_{2}..j_{d-1}\widehat{j}}$. In
other words, if one can obtain $\widehat{X}^{j_{1}..j_{d}}$ and compute its
$k^{th}$ Betti number directly, then
$\mathcal{G}_{k}(\Delta_{j_{1}...j_{d}})=\mathcal{B}_{k}(\widehat{X}^{j_{1}...j_{d}})$.
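A minimal sketch of the $d$-parameter loop follows, with `betti_of_subspace` left as a placeholder for whichever complex/Betti-number computation is used (e.g. the graph-based one sketched after Algorithm 1):

```python
# A minimal sketch of the d-parameter case; betti_of_subspace is a placeholder
# for whatever complex / Betti-number computation is used.
from itertools import product

def mpgf_d(points, filters, threshold_lists, betti_of_subspace, k=0):
    grid = {}
    for idx in product(*[range(len(ts)) for ts in threshold_lists]):
        # X^{j_1...j_d}: points below the chosen threshold in every filter
        subspace = [x for x in points
                    if all(filters[i](x) <= threshold_lists[i][idx[i]]
                           for i in range(len(filters)))]
        grid[idx] = betti_of_subspace(subspace, k)
    return grid
```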
### 5.3 Comparison with other Multiparameter Persistence Summaries
The crucial advantage of our MPGFs over the existing topological summaries of
the multiparameter persistence is its simplicity and generality. Except for
some special cases, Multiparameter Persistence theory suffers from the problem
of the nonexistence of barcode decomposition because of the partially ordered
structure of the index set $\\{(\alpha_{i},\beta_{j})\\}$ (Thomas, 2019;
Lesnick, 2019a). The existing approaches remedy this issue with a slicing
technique, studying one-dimensional fibers of the multiparameter domain. In
particular, in the threshold domain
$\Omega=[\alpha_{0},\alpha_{N}]\times[\beta_{0},\beta_{M}]$ in the $xy$-plane,
one needs to find a good slice (line segment) $ax+by=c$ in $\Omega$, where the
multipersistence restricted to such a slice can be read as a single parameter
persistence. This makes the summaries vulnerable to the choice of "good
slicing" (the choice of $a,b,c$) and to the process of combining the multiple
single-parameter pieces of information back into a multiparameter setting. For
example, in (Carrière and Blumberg, 2020), the authors define a similar grid
object called the "Multiparameter Persistence Image", where they slice the domain
$\mathcal{R}$ and consider the restricted single parameter persistence diagrams in the slices.
Then, they remedy the nonexistence of the barcode decomposition by matching
nearby barcodes into the same family, which they call a vineyard decomposition.
Using this, for each grid point, they take a
weighted sum of each barcode family to see the “topological importance” of
that grid point in this decomposition. They use 3 types of fibering (vineyard
family) for the domain $\Omega$, which are expected to give good results for a
generic case. However, their construction heavily depends on these slicing
choices, and when filtration functions do not interact well, different slicing
might give different images.
In our MPGFs, we take a more direct approach to construct our summaries. We
bypass the serious technical issues in defining multiparameter persistence
diagrams and directly go to the topological behavior of the subspaces induced
by each grid. This gives a simple and general method that can be applied to
any topological space with more than one filtration function. The topological
summary is much simpler and more interpretable, as one can easily read the changes
in the graph. It also enables one to read the interrelation between the
filtration functions, and their behavior on the space. Furthermore, as it does
not use any slicing, it directly captures the multidimensional information
produced by filtration functions.
Life Span Information: Notice that in both Saw Functions and Multi-Persistence
Grid Functions, when vectorizing persistence diagrams, we consider persistent
homology as a complexity measure during the evolution of the filter function.
While these functions give the count of live topological features, they seem
to miss important information from persistence diagrams, i.e. life spans. The life
span $d_{\sigma}-b_{\sigma}$ of a topological feature $\sigma$ is crucial
information showing how persistent the feature is. In particular, if
$d_{\sigma}-b_{\sigma}$ is large, it is an essential/persistent feature, while
if it is short, it can be considered nonessential/noise. In that sense, our
topological summaries carry the individual life span information not
directly, but indirectly. In particular, any persistent feature $\sigma$
contributes to the Saw or MPGF functions for as many intervals (boxes) as it
survives.
### 5.4 Stability of MPGFs
In this part, we discuss the stability of MPGFs. Even though multiparameter
persistence theory has been developing very fast over the past few
years, there are still technical issues to overcome before this technique can be used in
general. As mentioned before, because of the partially ordered nature of the
threshold domain in the multiparameter case, the single parameter persistent
homology theory does not generalize immediately to the multiparameter case.
Similarly, distance definitions have related problems in this setting. There
are several distance functions (interleaving, matching) suggested in the
literature for multiparameter persistence (Thomas, 2019). The theory is very
active, and there are several promising results in recent years (Lesnick,
2015; Carrière and Blumberg, 2020). Here, we conjecture that our MPGFs are
stable with respect to the matching distance, and we sketch a proof of the
conjecture in a special case.
Let $(X^{+},F^{+},\mathcal{I}^{+})$ and $(X^{-},F^{-},\mathcal{I}^{-})$ define
two filtrations where $F^{\pm}=(f^{\pm},g^{\pm})$, and
$\mathcal{I}^{\pm}=\\{(\alpha_{i}^{\pm},\beta_{j}^{\pm})\\}$. Let
$\mathfrak{D}_{M}(X^{+},X^{-})$ represent the matching distance of
multiparameter persistence modules induced by $(X^{+},F^{+},\mathcal{I}^{+})$
and $(X^{-},F^{-},\mathcal{I}^{-})$ (Thomas, 2019).
Next, assuming $\mathcal{G}^{\pm}=0$ outside of their domains
$\mathcal{R}^{\pm}$, define the $L^{1}$-distance between the induced MPGFs as
$\mathbf{d}(\mathcal{G}^{+},\mathcal{G}^{-})=\int\int_{\mathcal{R}^{+}\cup\mathcal{R}^{-}}|\mathcal{G}^{+}(x,y)-\mathcal{G}^{-}(x,y)|\,dx\,dy.$
Now, we have the following conjecture for the stability of MPGFs.
###### Conjecture 5.1.
Let $(X^{\pm},F^{\pm},\mathcal{I}^{\pm})$ be as defined above. Then, MPGFs are
stable, i.e.
$\mathbf{d}(\mathcal{G}^{+},\mathcal{G}^{-})\leq
C\cdot\mathfrak{D}_{M}(X^{+},X^{-}).$
Here, the main motivation for this conjecture is that it naturally holds in
the technically simplest case. Again, the partially ordered nature of the
thresholds in the multiparameter setting prevents having a good barcode
decomposition for the generators as in single parameter persistent
homology (Lesnick, 2019b). However, there are simple cases where a good barcode
decomposition exists (Lesnick, 2019b). In such a $2$-parameter setting, a
barcode $\mathcal{B}^{\pm}_{i}$ is not a single rectangle but a connected union of
rectangles (with vertices at thresholds). Let
$\\{\mathcal{B}^{\pm}_{i}\\}$ be the set of barcodes in our multiparameter
persistent homology for $(X^{\pm},F^{\pm},\mathcal{I}^{\pm})$. Then, the
induced MPGFs $\mathcal{G}^{\pm}$ can be defined just like single parameter
persistent homology as follows. In particular, let
$\chi_{\mathcal{B}^{\pm}_{i}}(x,y)$ be the characteristic function for the
barcode $\mathcal{B}^{\pm}_{i}$, then we have
$\mathcal{G}^{\pm}(x,y)=\sum_{i}\chi_{\mathcal{B}^{\pm}_{i}}(x,y).$
In that case, our proof of Theorem 4.1 goes through in a similar way by replacing the
generator functions $\widehat{\chi}^{\pm}_{j}(t)$ in the proof with the
characteristic functions $\chi_{\mathcal{B}^{\pm}_{i}}(x,y)$ of barcodes of
multiparameter persistence. Since having a good barcode decomposition is a very special
case, we leave this as a conjecture for the general setting.
### 5.5 An alternative definition for MPGFs
Even though MPGFs give the Betti numbers of the corresponding subspaces
$\\{X^{ij}\\}$, when restricted to a single parameter, MPGFs are different from
regular Betti functions. While single parameter Betti functions are defined so that
$\mathcal{B}_{k}([\alpha_{i},\alpha_{i+1}))=\mathcal{B}_{k}(\alpha_{i})$, when
MPGFs are restricted to a single parameter, we slide the interval one unit back so
that $\mathcal{G}_{k}((\alpha_{i-1},\alpha_{i}])=\mathcal{G}_{k}(\alpha_{i})$.
Here, both $\mathcal{G}_{k}(\alpha_{i})$ and $\mathcal{B}_{k}(\alpha_{i})$
represent the $k^{th}$ Betti number of the space $X^{i}=\\{x\in X\mid
f(x)\leq\alpha_{i}\\}$. We chose this presentation for MPGFs since it is more
natural and easier to read in the multiparameter case, as can be seen in Figure 2
in the main text. In particular, if one wants to define MPGFs consistent with
the single parameter Betti functions, one needs to replace the half open
intervals $(\alpha_{i},\alpha_{i+1}]\times(\beta_{j},\beta_{j+1}]$ in the
original definition with
$[\alpha_{i},\alpha_{i+1})\times[\beta_{j},\beta_{j+1})$ starting from
$\alpha_{0}=\min f$ and $\beta_{0}=\min g$; then the same construction
gives a definition of MPGFs consistent with the single parameter Betti
functions. In this definition, the extra column goes to the right, not to the
left. Compare Figure 4 with Figure 2 in the main text. This
alternative definition carries exactly the same information as our MPGFs, but, as
mentioned above, our definition is more user friendly in the multiparameter case.
Figure 4: The rectangle $\mathcal{R}$ is the domain of $\mathcal{B}_{k}$ and consists of
$m_{1}\times m_{2}$ small rectangles $\\{\Delta_{ij}\\}$. $\mathcal{B}_{k}$ is
constant on each $\Delta_{ij}$, where its value is the $k^{th}$ Betti number of the complex
coming from $F^{-1}(\Omega_{ij})$.
Betti Function Notation: When restricted to a single parameter, our MPGFs above
are slightly different from regular Betti functions. While single parameter
Betti functions are defined as
$\mathcal{B}_{k}([\alpha_{i},\alpha_{i+1}))=\mathcal{B}_{k}(\alpha_{i})$, in
our notation, we slide the interval back so that
$\mathcal{B}_{k}((\alpha_{i-1},\alpha_{i}])=\mathcal{B}_{k}(\alpha_{i})$.
This is because this presentation is more natural and easier to read in the
multiparameter case, as can be seen in Figure 2. Recall that
$\mathcal{G}_{k}(\Delta_{ij})=\mathcal{B}_{k}(X_{ij})$ where
$X_{ij}=F^{-1}(\Omega_{ij})$. With our notation, we choose $\Delta_{ij}$
inside $\Omega_{ij}$, which is more natural in the multiparameter case. However, if
we want to define the MPGF $\mathcal{G}_{k}$ as a direct generalization of the Betti
function $\mathcal{B}_{k}$, then we need to use the alternative definition
described above. Then, with this definition, $\Delta_{ij}$ will be outside of
$\Omega_{ij}$ as in Figure 4.
Superlevel filtrations for MPGFs:
If the goal is to use superlevel filtration in original MPGFs, we apply the
following reverse approach:
Let $X$ be a metric space, and $f,g:X\to\mathbf{R}$ be two filtration
functions. Let $F:X\to\mathbf{R}^{2}$ be defined as $F(x)=(f(x),g(x))$. Let
$\mathcal{I}_{1}=\\{\alpha_{1},\ldots,\alpha_{m_{1}}\\}$ be the set of
thresholds for the first filtration $f$, where $\alpha_{1}=\min
f(x)<\alpha_{2}<\ldots<\alpha_{m_{1}}=\max f(x)$. Let
$\mathcal{I}_{2}=\\{\beta_{1},\ldots,\beta_{m_{2}}\\}$ be defined similarly for
$g$. Now, let $\delta_{1}={(\alpha_{m_{1}}-\alpha_{1})}/{(m_{1}-1)}$ and
$\delta_{2}={(\beta_{m_{2}}-\beta_{1})}/{(m_{2}-1)}$. Let
$\alpha_{m_{1}+1}=\alpha_{m_{1}}+\delta_{1}$, and
$\beta_{m_{2}+1}=\beta_{m_{2}}+\delta_{2}$.
Let $\Delta_{ij}=[\alpha_{i},\alpha_{i+1})\times[\beta_{j},\beta_{j+1})$ for
$1\leq i\leq m_{1}$ and $1\leq j\leq m_{2}$. Then,
$\mathcal{R}=\bigcup_{i,j}\Delta_{ij}$. Let
$\Omega_{ij}=[\alpha_{i},\alpha_{m_{1}+1})\times[\beta_{j},\beta_{m_{2}+1})$. That
is, in the sublevel filtration, we take the lower left part of $\mathcal{R}$ as
$\Omega_{ij}$, while in the superlevel filtration, we consider the upper right
corner of $\mathcal{R}$ as $\Omega_{ij}$.
If $X^{ij}=\\{x\in X\mid f(x)\geq\alpha_{i}\mbox{ and }g(x)\geq\beta_{j}\\}$,
i.e. $X^{ij}=F^{-1}(\Omega_{ij})$, then $\widehat{X}^{ij}$ is defined as the
complex generated by $X^{ij}$. Now, let $\mathcal{B}_{k}(\widehat{X}^{ij})$ be
the $k^{th}$-Betti number of $\widehat{X}^{ij}$. The $k^{th}$ MPGF of the
filtration $F:X\to\mathbf{R}^{2}$ is then given by
$\mathcal{G}_{k}(x)=\mathcal{B}_{k}(\widehat{X}^{ij})$ for any
$x\in\Delta_{ij}$.
A recursive algorithm to compute $\mathcal{G}_{k}$ can be described as
follows. One can proceed by slicing $\mathcal{R}$ inductively. Let
$X^{d}_{j_{d}}=\\{x\in X\mid f_{d}(x)\leq\alpha_{dj_{d}}\\}$. Let
$\widehat{X}^{d}_{j_{d}}$ be the complex induced by $X^{d}_{j_{d}}$. This
defines a $(d-1)$-dimensional slice in $\mathcal{R}$. Then, one can do a
$(d-1)$-parameter filtration on $\widehat{X}^{d}_{j_{d}}$ with the filtration
functions $\\{f_{1},..,f_{d-1}\\}$. We then start the process with
$X^{d}_{j_{d}}$ as the original input space for the $(d-1)$-parameter filtration, finish
the process inductively, and complete the function $\mathcal{G}_{k}$. Both
methods have the same computational efficiency, but coding the second, inductive
approach could be easier.
## 6 Application to Graph Classification
In the following, as a case study, we will apply our topological summaries to
the graph classification problem. Our task is to classify graph instances of a
dataset. In particular, let $G$ be a graph with vertex set $V=\\{v_{i}\\}$ and
edge set $E=\\{e_{ij}\\}$, i.e. $e_{ij}\in E$ if there is an edge between the
vertices $v_{i}$ and $v_{j}$ in $G$. Let $f:V\to\mathbf{R}$ be a function
defined on the vertices of $G$. Let $\mathcal{I}=\\{\alpha_{i}\\}$ be the
threshold set, an increasing sequence from $\alpha_{1}=\min f$ to
$\alpha_{N}=\max f$. Let $V^{i}=\\{v\in V\mid f(v)\leq\alpha_{i}\\}$. Let
$G^{i}$ be the subgraph generated by $V^{i}$, i.e. $e_{ij}\in G^{i}$ if
$v_{i},v_{j}\in V^{i}$. Then, $G^{1}\subset G^{2}\subset...\subset G^{N}$
defines the sublevel filtration for $(G,f)$. Similarly, one can define
$\widehat{G}^{i}$ as the clique complex of $G^{i}$. Then,
$\widehat{G}^{1}\subset\widehat{G}^{2}\subset...\subset\widehat{G}^{N}$
defines another natural filtration defined by $(G,f)$. We call this the clique
sublevel filtration.
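As an illustration of the clique (flag) sublevel filtration just described, the following sketch assumes the gudhi SimplexTree API (insert, expansion, persistence); it is not the authors' implementation, and the degree function is used only as an example filter.

```python
# A hedged sketch of the clique sublevel filtration, assuming the gudhi
# SimplexTree API: vertices enter at their f-values, an edge enters when both
# endpoints have entered, and expansion() fills in higher cliques.
import gudhi
import networkx as nx

def clique_sublevel_persistence(G, f, max_dim=2):
    st = gudhi.SimplexTree()
    for v in G.nodes:
        st.insert([v], filtration=f[v])
    for u, v in G.edges:
        st.insert([u, v], filtration=max(f[u], f[v]))
    st.expansion(max_dim)            # clique (flag) complex up to max_dim
    return st.persistence()          # list of (dimension, (birth, death))

G = nx.cycle_graph(4)
print(clique_sublevel_persistence(G, dict(G.degree())))
```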
### 6.1 Datasets
In experiments, we used COX2 (Kersting et al., 2016), DHFR (Kersting et al.,
2016), BZR (Kersting et al., 2016), PROTEINS (Borgwardt et al., 2005), NCI1
(Wale et al., 2008) and DHFR (Schomburg et al., 2004) for binary and multi-
class classification of chemical compounds, whereas IMDB-BINARY, IMDB-MULTI,
REDDIT-BINARY and REDDIT-5K (Yanardag and Vishwanathan, 2015) are social
datasets. Table 1 shows characteristics of the graphs.
Dataset | NumGraphs | NumClasses | AvgNumNodes | AvgNumEdges
---|---|---|---|---
BZR | 405 | 2 | 35.75 | 38.36
COX2 | 467 | 2 | 41.22 | 43.45
DHFR | 467 | 2 | 42.43 | 44.54
FRANKENSTEIN | 4337 | 2 | 16.90 | 17.88
IMDB-BINARY | 1000 | 2 | 19.77 | 96.53
IMDB-MULTI | 1500 | 3 | 13.00 | 65.94
NCI1 | 4110 | 2 | 29.87 | 32.30
PROTEINS | 1113 | 2 | 39.06 | 72.82
REDDIT-BINARY | 2000 | 2 | 429.63 | 497.75
REDDIT-MULTI-5K | 4999 | 5 | 508.82 | 594.87
Table 1: Characteristics of the datasets used in experiments.
Reddit5K has five and ImdbMulti has three label classes. All other datasets
have binary labels.
We adopt the following traditional node functions (Hage and Harary, 1995) in
graph classification: 0) degree, 1) betweenness and 2) closeness centrality,
3) hub/authority score, 4) eccentricity, 5) Ollivier-Ricci and 6) Forman-Ricci
curvatures. Ollivier and Forman functions are implemented in Python by Ni et
al. (Ni et al., 2019). We compute functions 0-4 by using the iGraph library in
R.
We implement our methods on an Intel(R) Core i7 CPU @1.90GHz and 16Gb of RAM
without parallelization. Our R code is available at
https://github.com/cakcora/PeaceCorps.
### 6.2 Models
In graph classification, we will compare our results to five well-known graph
representation learning methods benchmarked in a recent in-depth study
(Errica et al., 2020). These are GIN (Xu et al., 2018), DGCNN
(Zhang et al., 2018), DiffPool (Ying et al., 2018), ECC (Simonovsky and
Komodakis, 2017) and GraphSage (Hamilton et al., 2017).
For our results, we employ Random Forest (Breiman, 2001), SVM (Noble, 2006),
XGBoost (Chen et al., 2015) and KNN with Dynamic Time warping (Müller, 2007)
to classify graphs. Best models were chosen by 10-fold cross-validation on the
training set, and all accuracy results are computed by using the best models
on out-of-bag test sets.
Random Forest consistently gave the best accuracy results for both Single and
Multi-Persistence experiments. SVM with Radial Kernel yielded the next best
results. In the rest of this paper, we will demonstrate the Random Forest
results. In Random Forest, we experimented with the number of predictors
sampled for splitting at each node (mtry) from $\sqrt{n}-3,\ldots,\sqrt{n}+3$,
where $n$ is the number of dataset features. Our models use $ntree=500$ trees.
All results are averaged over 30 runs. In Single Persistence, we use Saw
Signatures of length 100. In Multi-Persistence, 10x10 grids yielded the best
results.
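For reference, here is a hedged scikit-learn sketch of this model selection step (the authors use R's randomForest; `X` and `y` stand for the feature matrix of Saw Signatures or flattened MPGF grids and the graph labels):

```python
# A hedged scikit-learn sketch of the model selection above; X and y are
# placeholders for the feature matrix and the graph labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def fit_random_forest(X, y):
    n_features = X.shape[1]
    base = int(np.sqrt(n_features))
    mtry_grid = [m for m in range(base - 3, base + 4) if 1 <= m <= n_features]
    search = GridSearchCV(
        RandomForestClassifier(n_estimators=500),   # ntree = 500
        param_grid={"max_features": mtry_grid},     # mtry = sqrt(n) - 3 .. sqrt(n) + 3
        cv=10,                                      # 10-fold cross-validation
    )
    return search.fit(X, y)
```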
We compute classification results on ten datasets, but only six of those have
been used in the five benchmarked methods. As such, we report multi-persistence
results for those six datasets in Table 4, and for the rest of the datasets
in Table 5.
### 6.3 Results
#### 6.3.1 Single Persistence
Single Persistence results are shown in Table 2. Each row corresponds to a
model that uses Saw Signatures from the associated Betti functions only. We
report the best-performing filtration function for each dataset for Bettis 0
and 1. Betweenness and Closeness filtrations yield the most accurate
classification results in four and five cases, respectively. Curvature methods
Ollivier and Forman are computationally costly, and generally yield
considerably lower accuracy values. Although curvature methods yield the best
results for Protein, the closeness filter reaches similar (i.e., $0.698$ and
$0.695$) accuracy values for the dataset. We advise against using curvature
methods due to their computational costs.
Table 2: Single-persistence classification results for best performing (in
accuracy) filtrations. The N(%) column of the table reports the percentage of
graph instances that can be classified by using the associated model. Some
Betti 1 models cannot classify all graph instances, because these graphs do
not form Betti 1 values (i.e., 1-dimensional holes do not exist).
dataset | filt | betti | acc | N(%)
---|---|---|---|---
IMDBBinary | betw | 0 | 64.8 $\pm$ 3.5 | 100
IMDBBinary | betw | 1 | 65.6 $\pm$ 2.7 | 100
IMDBMulti | clos | 0 | 41.1 $\pm$ 3.4 | 100
IMDBMulti | clos | 1 | 44.2 $\pm$ 3.8 | 98
NCI1 | clos | 0 | 69.8 $\pm$ 1.3 | 100
NCI1 | clos | 1 | 59.9 $\pm$ 1.8 | 82.7
Protein | ollivier | 0 | 70.9 $\pm$ 3.2 | 100
Protein | forman | 1 | 72.2 $\pm$ 3.1 | 100
Reddit5k | betw | 0 | 47.8 $\pm$ 1.1 | 100
Reddit5k | clos | 1 | 41.7 $\pm$ 1.8 | 91.8
RedditBinary | deg | 0 | 86.0 $\pm$ 1.5 | 100
RedditBinary | betw | 1 | 69.4 $\pm$ 2.0 | 86.0
Table 3: Single-persistence classification results for best performing
filtrations (in accuracy) for additional datasets.
dataset | filt | betti | acc | N (%)
---|---|---|---|---
BZR | deg | 0 | 84.3$\pm$3.5 | 100
BZR | deg | 1 | 85.4$\pm$5.2 | 60.4
COX2 | betw | 0 | 77.4$\pm$3.3 | 100
COX2 | ricci | 1 | 82.3$\pm$4.6 | 76.1
DHFR | clos | 0 | 79.8$\pm$2.0 | 100
DHFR | clos | 1 | 70.7$\pm$3.4 | 95.6
FRANKENSTEIN | betw | 0 | 67.0$\pm$1.3 | 100
FRANKENSTEIN | deg | 1 | 72.1$\pm$7.2 | 4.4
Figure 5: Timings. Single filtration costs of the three functions in
experiments. Most dataset filtrations take a few minutes to complete. At most,
2000 RedditBinary and 4999 RedditMulti graphs take 4709 (betw) and 920 (clos)
seconds, respectively.
Figure 6: Running time comparison results for two datasets. (a)Proteins (b)
NCI1. For our method, the cost is for a complete end-to-end run. For other
methods, the costs are for 10 epochs only. Note that in practice, these
methods are run for 1000 epochs or more (Errica et al., 2020). Hence our end-
to-end run times are considerably shorter than those of the five methods.
#### 6.3.2 Multi Persistence
Multi-Persistence results are shown in Table 4 for best filtration grids. We
achieved the best results over betweenness, closeness, and degree filtration
pairs across all datasets. On average, our method provides accuracy that
differs by as little as 3.53% from the five popular graph neural network
solutions. Especially for the large Reddit graphs, our method ranks 3rd among
the methods.
Table 4: Multi-persistence classification accuracy results for best performing filtrations.
dataset | Filt1 | Filt2 | MPGFs | DGCNN | GIN | DiffPool | ECC | GraphSage
---|---|---|---|---|---|---|---|---
ImdbB | deg | betw | 67.8 $\pm$ 2.7 | 70.0 | 71.23 | 68.4 | 67.67 | 68.80
ImdbM | betw | clos | 44.3 $\pm$ 3.4 | 47.8 | 48.53 | 45.64 | 43.49 | 47.56
Nci1 | deg | betw | 74.0 $\pm$ 1.6 | 74.4 | 80.04 | 76.93 | 76.18 | 76.02
Protein | betw | clos | 73.8 $\pm$ 2.8 | 75.5 | 73.25 | 73.73 | 72.30 | 73.01
Reddit5K | clos | betw | 51.6 $\pm$ 1.2 | 49.20 | 56.09 | 53.78 | OOR | 50.02
RedditB | deg | clos | 89.0 $\pm$ 0.9 | 87.7 | 89.93 | 89.08 | OOR | 84.32
dataset | Filt1 | Filt2 | Acc
---|---|---|---
BZR | deg | clos | 84.3$\pm$3.5
COX2 | deg | betw | 79.0$\pm$4.0
DHFR | clos | betw | 79.5$\pm$2.3
FRANKENSTEIN | deg | betw | 69.4$\pm$1.3
Table 5: Additional results for multi-persistence classification for most
accurate filtrations. These datasets are not studied by our competitors.
A major advantage of our method is its low computational costs, compared to
those of the five neural network solutions, which can take days. For example,
as Table 4 shows, some methods run out of resources and cannot even complete
the task. In comparison, as Figure 5 shows, our method takes a few minutes to
compute filtrations for all but the two large Reddit networks. In Multi-
Persistence, filtrations can be further parallelized for each row or column of
the grid to reduce Multi-Persistence time costs.
Comparison: Our MPGF takes a few minutes for most datasets and, on average,
provides results that differ by as little as 3.53% from popular graph
neural network solutions, with the caveat that Single Persistence with Betti 1
may not classify all graph instances. We demonstrate the additional results in
Table 3 for Single Persistence and Table 5 for Multi Persistence for the BZR,
COX2, DHFR and FRANKENSTEIN datasets. Except for the DHFR dataset, Betti 1
results are more accurate than Betti 0 results.
## 7 Lessons Learned
Efficiency: Single persistence yields lower (i.e., 1%–6%) accuracy values than
Graph Neural Network solutions, but it is considerably faster to compute. In
general, single persistence can be preferred when the classified graphs are
too large to be handled with Graph Neural networks.
Multi-Filtration: In filtrations, we aimed to capture local (degree,
eccentricity) and global (closeness and betweenness) graph structure around
each node. We hypothesized that multi-persistence would benefit from looking
at a graph from both local and global connections. This intuition proved
correct, but we also observed that in some cases filtrations with global
functions (e.g., betw and clos) reached the highest accuracy values (Table 4).
Especially in datasets with $\geq 3$ classes, global connections may play a
more important role.
Figure 7: Variability. The impact of filter variability on accuracy. Low
variability implies having the same view on the graph from both filters, which
seems to affect performance negatively. The first two data points with 0 and
$0.0007$ variability yield the lowest accuracy in MPGFs.
Variability is good: In Multi-Persistence, filtrations may yield similar
numbers of persistent features across thresholds. This is due to the dominant
impact of edge connectivity in the graph, which causes similar persistent
holes to form even when different filtrations are used. Consider the zero-
dimensional holes created by two filters. We observe that when filters (e.g.,
betw, deg) create different numbers of zero-dimensional holes across all
thresholds, MPGFs tend to learn models that reach higher accuracy. We compute
a variability (variance) of Betti 0 numbers created by two filters to capture
this behavior. Figure 7 shows the relationship between variability and
accuracy. As the variability increases, MPGFs can learn better models.
## 8 Conclusion
In this paper, we have brought a new perspective to persistent homology as a
complexity measure, and with this motivation, we have defined two new
topological summaries in the single and multi-parameter settings. These
summaries turn out to be very interpretable and computationally efficient, with a
success rate comparable to state-of-the-art models. We have only applied
these summaries in a graph classification setting, but multi-persistence has
enough promise to be useful in many complex problems.
## References
* Adams and Coskunuzer (2021) Henry Adams and Baris Coskunuzer. Geometric approaches on persistent homology. _arXiv preprint arXiv:2103.06408_ , 2021.
* Adams et al. (2017) Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. _JMLR_ , 18(1):218–252, 2017.
* Borgwardt et al. (2005) Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. _Bioinformatics_ , 21(suppl_1):i47–i56, 2005\.
* Breiman (2001) Leo Breiman. Random forests. _Machine learning_ , 45(1):5–32, 2001.
* Bubenik (2015) Peter Bubenik. Statistical topology using persistence landscapes. _JMLR_ , 16:77–102, 2015.
* Cai and Wang (2020) Chen Cai and Yusu Wang. Understanding the power of persistence pairing via permutation test. _arXiv preprint arXiv:2001.06058_ , 2020.
* Carrière and Blumberg (2020) Mathieu Carrière and Andrew Blumberg. Multiparameter persistence image for topological machine learning. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Carrière et al. (2020) Mathieu Carrière, Frédéric Chazal, Yuichi Ike, Théo Lacombe, Martin Royer, and Yuhei Umeda. Perslay: A neural network layer for persistence diagrams and new graph topological signatures. In _AISTATS_ , pages 2786–2796, 2020.
* Chen et al. (2015) Tianqi Chen, Tong He, Michael Benesty, Vadim Khotilovich, Yuan Tang, Hyunsu Cho, et al. Xgboost: extreme gradient boosting. _R package version 0.4-2_ , 1(4), 2015.
* Chen et al. (2020) Yen-Chi Chen, Adrian Dobra, et al. Measuring human activity spaces from gps data with density ranking and summary curves. _Annals of Applied Statistics_ , 14(1):409–432, 2020.
* Cheung et al. (2013) Ho Yee Cheung, Tsz Chiu Kwok, and Lap Chi Lau. Fast matrix rank algorithms and applications. _Journal of the ACM (JACM)_ , 60(5):1–25, 2013\.
* Chung and Lawson (2019) Yu-Min Chung and Austin Lawson. Persistence curves: A canonical framework for summarizing persistence diagrams. _arXiv preprint arXiv:1904.07768_ , 2019.
* Di Fabio and Ferri (2015) Barbara Di Fabio and Massimo Ferri. Comparing persistence diagrams through complex vectors. In _ICIAP_ , pages 294–305, 2015.
* Edelsbrunner and Harer (2010) Herbert Edelsbrunner and John Harer. _Computational topology: an introduction_. American Mathematical Soc., 2010.
* Errica et al. (2020) Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=HygDF6NFPB.
* Hage and Harary (1995) Per Hage and Frank Harary. Eccentricity and centrality in networks. _Social networks_ , 17(1):57–63, 1995.
* Hamilton et al. (2017) William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. _arXiv preprint arXiv:1706.02216_ , 2017.
* Harrington et al. (2019) Heather A Harrington, Nina Otter, Hal Schenck, and Ulrike Tillmann. Stratifying multiparameter persistent homology. _SIAM Journal on Applied Algebra and Geometry_ , 3(3):439–471, 2019.
* Hofer et al. (2017) Christoph Hofer, Roland Kwitt, Marc Niethammer, and Andreas Uhl. Deep learning with topological signatures. _arXiv preprint arXiv:1707.04041_ , 2017.
* Hofer et al. (2019) Christoph D Hofer, Roland Kwitt, and Marc Niethammer. Learning representations of persistence barcodes. _JMLR_ , 20(126):1–45, 2019.
* Kersting et al. (2016) Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016. http://graphkernels.cs.tu-dortmund.de.
* Kusano et al. (2016) Genki Kusano, Yasuaki Hiraoka, and Kenji Fukumizu. Persistence weighted gaussian kernel for topological data analysis. In _ICML_ , pages 2004–2013, 2016.
* Kyriakis et al. (2021) Panagiotis Kyriakis, Iordanis Fostiropoulos, and Paul Bogdan. Learning hyperbolic representations of topological features. In _International Conference on Learning Representations_ , 2021.
* Le and Yamada (2018) Tam Le and Makoto Yamada. Persistence fisher kernel: A riemannian manifold kernel for persistence diagrams. In _NIPS_ , pages 10007–10018, 2018.
* Lesnick (2019a) M Lesnick. Multiparameter persistence lecture notes, 2019a. https://www.albany.edu/~ML644186/AMAT_840_Spring_2019/Math840_Notes.pdf.
* Lesnick (2015) Michael Lesnick. The theory of the interleaving distance on multidimensional persistence modules. _Foundations of Computational Mathematics_ , 15(3):613–650, 2015.
* Lesnick (2019b) Michael Lesnick. Multiparameter persistence. In _Lecture Notes_ , 2019b.
* Mischaikow and Nanda (2013) Konstantin Mischaikow and Vidit Nanda. Morse theory for filtrations and efficient computation of persistent homology. _Discrete & Computational Geometry_, 50(2):330–353, 2013.
* Müller (2007) Meinard Müller. Dynamic time warping. _Information retrieval for music and motion_ , pages 69–84, 2007\.
* Ni et al. (2019) Chien-Chun Ni, Yu-Yao Lin, Feng Luo, and Jie Gao. Community detection on networks with ricci flow. _Scientific reports_ , 9(1):1–12, 2019.
* Noble (2006) William S Noble. What is a support vector machine? _Nature biotechnology_ , 24(12):1565–1567, 2006\.
* Rieck et al. (2019) Bastian Rieck, Christian Bock, and Karsten Borgwardt. A persistent weisfeiler-lehman procedure for graph classification. In _ICML_ , pages 5448–5458, 2019.
* Schomburg et al. (2004) Ida Schomburg, Antje Chang, Christian Ebeling, Marion Gremse, Christian Heldt, Gregor Huhn, and Dietmar Schomburg. Brenda, the enzyme database: updates and major new developments. _Nucleic acids research_ , 32(suppl_1):D431–D433, 2004.
* Simonovsky and Komodakis (2017) Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 3693–3702, 2017.
* Thomas (2019) Ashleigh Linnea Thomas. _Invariants and Metrics for Multiparameter Persistent Homology_. PhD thesis, Duke University, 2019.
* Togninalli et al. (2019) Matteo Togninalli, Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, and Karsten Borgwardt. Wasserstein weisfeiler-lehman graph kernels. In _NeurIPS_ , pages 6439–6449, 2019.
* Umeda (2017) Yuhei Umeda. Time series classification via topological data analysis. _Information and Media Technologies_ , 12:228–239, 2017\.
* Vipond (2020) Oliver Vipond. Multiparameter persistence landscapes. _Journal of Machine Learning Research_ , 21(61):1–38, 2020.
* Wale et al. (2008) Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. _Knowledge and Information Systems_ , 14(3):347–375, 2008.
* Xu et al. (2018) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_ , 2018.
* Yanardag and Vishwanathan (2015) Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In _Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining_ , pages 1365–1374, 2015.
* Ying et al. (2018) Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. _arXiv preprint arXiv:1806.08804_ , 2018.
* Zhang et al. (2018) Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 32, 2018.
* Zhao and Wang (2019) Qi Zhao and Yusu Wang. Learning metrics for persistence-based summaries and applications for graph classification. In _NeurIPS_ , pages 9859–9870, 2019.
* Zieliński et al. (2019) Bartosz Zieliński, Michał Lipiński, Mateusz Juda, Matthias Zeppelzauer, and Paweł Dłotko. Persistence bag-of-words for topological data analysis. In _IJCAI_ , pages 4489–4495, 2019.
* Zomorodian and Carlsson (2005) Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. _Discrete & Computational Geometry_, 33(2):249–274, 2005.
Original Article
Qinan Wang, M.S.
Lyle School of Engineering, Southern Methodist University, Dallas, TX, 75275, USA
<EMAIL_ADDRESS>
# Intraday trading strategy based on time series and machine learning for
Chinese stock market
Qinan Wang Lyle School of Engineering, Southern Methodist University, Dallas,
TX, 75275, USA Yaomu Zhou Lyle School of Engineering, Southern Methodist
University, Dallas, TX, 75275, USA Junhao Shen Lyle School of Engineering,
Southern Methodist University, Dallas, TX, 75275, USA
###### Abstract
This article proposes an intraday trading strategy under T+1 using
Markowitz optimization and a Multilayer Perceptron (MLP) with published stock
data obtained from the Shenzhen Stock Exchange and Shanghai Stock Exchange.
The empirical results reveal the profitability of Markowitz portfolio
optimization and validate the intraday stock price prediction using an MLP. The
findings further combine the Markowitz optimization and the MLP with the trading
strategy to clarify this strategy’s feasibility.
###### keywords:
Stock price, Prediction, Markowitz optimization, Multilayer Perceptron,
Validation
## 1 Introduction
The development of the Chinese stock market (A shares) since its inception has
been impressive. Especially in recent years, the Chinese stock market has been
strong, which seems to signal a bull market. Most listed companies have
outperformed expectations and performed well. Except for a few losers, all
sectors have performed strongly. This has raised attention worldwide. We find
it invaluable to research quantitative methods for the stock market under
these conditions.
In China’s stock market, the regulator implements T+1. That is, stocks can
only be sold on the day after they are bought. This makes it hard to
design a daily trading scheme in this market. However, the price of a stock
fluctuates every day. The maximum amplitude of a single stock can even reach
20% (the daily cap and collar volatility difference: the upper limit is 10%, the lower
limit is -10%). Given this circumstance, we can carry out the following
operations: open positions one day in advance. On the following day, sell the
stocks at a high intraday price and buy back the same quantity of positions at a
low price of the day; or buy at a low price first and sell at a high
price. At this point, we have effectively achieved a T+0-like operation. We make a
profit from the price difference within the day. During this process, the
number of shares in the stock account will not change, but the amount of money available in the
account will increase.
In previous publications, the stock price prediction has been researched in
many models. Tang et al. compared their model of autoregressive moving average
generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) with
the finite mixture of autoregressive (AR), a finite mixture of the
autoregressive moving average (ARMA) and a finite mixture of autoregressive
generalized autoregressive conditional heteroscedasticity (AR-GARCH) models
for finance exchange rate prediction. The experimental simulation indicated
that the mixture of the ARMA-GARCH model derives a GEM algorithm with better
performance than others. [1] In 2007, Ince et al. utilized technical
indicators with heuristic models, kernel principal component analysis, and
factor analysis to select the most useful inputs for a predicting model. They
used different inputs for multilayer perceptron (MLP) networks and support
vector regression (SVR), and comparison studies examined the different inputs
required by SVR and MLP networks. Furthermore, the proposed heuristic models produced
better results than the data mining method. Besides, there was no difference
between the MLP and SVR techniques in their mean square error values. [2] In 2009, Gong et
al. created a new method with Logistic Regression to forecast the next month’s
stock price trend based on the current month. They chose the Shenzhen
Development stock A (SDSA) from the RESSET Financial Research Database as a study
case. Compared with other models, e.g., the RBF-ANN forecasting model, their
model was less complicated and at least 83% accurate in prediction. [3] In
2010, Budiman et al. used the Artificial Neural Network (ANN) and ARIMA methods
to predict the stock of ANTM (PT. Aneka Tambang). The study showed
that the ANN method had a smaller error than the ARIMA approach. [4]
Three years later, Alkhatib et al. forecasted the stock prices of a sample of six
companies on the Jordanian stock exchange utilizing the K-nearest neighbor
algorithm and a nonlinear regression method to help investors, management,
decision-makers, and users make accurate decisions. Based on the prediction
results, which were rational and reasonable, the KNN method performed excellently with
the smallest error. Compared with the real stock price data, the forecasting
result and the actual stock price were almost parallel. [5] In 2016, Persio et
al. predicted stock prices by using the Artificial Neural Network approach.
They also considered MLP, CNN, and LSTM recurrent neural network techniques, and
S&P500 historical time series to predict the stock price trend. As a result,
they indicated that neural networks could forecast future movements of financial
time series and proposed more ways to improve the results. [6] In recent years,
there has been some research on predicting stock prices with deep learning methods as
algorithms have developed. For example, in 2019, Nikou et al. studied the
forecasting power of Machine Learning models in a stock market. They
created four machine learning models to make predictions. The results showed that the deep
learning approach performed best. The second-best method was support
vector regression, which had less error than the neural network and random forest
methods. [7]
Most of the existing research is only for price prediction, since the T+1 regulation
of China’s stock market makes intraday trading difficult. It is therefore innovative to
research an intraday trading strategy for China’s stock market. Besides,
Guresen et al. provided a detailed survey of 25 articles on NN
applications in finance and applied a multi-layer perceptron model, a
dynamic artificial neural network, and hybrid networks to predict the NASDAQ Stock
Exchange Index. They concluded that NN-based solutions outperform other
statistical approaches in most cases. [8] Hence, we combine prediction by a
multilayer perceptron (MLP) with practical operation to design the strategy.
This strategy is validated by simulated trading.
In this strategy, to assure both the profit of the original holdings and of the intraday
trading, we select a portfolio consisting of promising stocks and manage risk.
To avoid unexpected volatility of certain industries or companies, we pick
three stocks from each of 10 industries. These industries are Science
and Technology, Basis Materials, Consumption (periodic), Finance, Service,
Means of Production, Energy, Consumption (aperiodic), Health Care, and
Transportation. In each industry, we use the top three performing stocks of
the last month. On the first day of trading, we open positions for all stocks
in the portfolio. On the following days, we trade two times each day to gain
from the intraday volatility. The daily trading hours for continuous bidding
of A shares are from 9:30 to 11:30 in the morning and from 13:00 to 15:00 in
the afternoon. The trading hours are only 4 hours a day, and there is a
1.5-hour break at noon. We use one-minute trading data from 9:30 to 11:20 to
train the multilayer perceptron (MLP). This model is used to predict the stock
price from 13:00 to 14:50. With the predicted price at 14:50, we make a trading
decision based on the difference between this prediction and the price at 11:20. If the
predicted price is lower than the price at 11:20, we sell our holding positions
before the morning close and buy them back at 14:50. If the predicted price
is higher than the price at 11:20, we buy additional positions before the
morning close and sell them at 14:50. The model is retrained every day
for a possible large gap between successive days. The goal of this article is
to gain profit by intraday trend prediction.
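A minimal sketch of the decision rule just described follows; `model`, `morning_features`, and `price_1120` are placeholders for the MLP and the data pipeline introduced in the following sections.

```python
# A minimal sketch of the intraday decision rule; model, morning_features and
# price_1120 are placeholders (the MLP and the data are described below).
def intraday_decision(model, morning_features, price_1120):
    """Predict the 14:50 price from the morning minute data and compare it
    with the 11:20 price to choose the direction of the intraday trade."""
    predicted_1450 = float(model.predict(morning_features))
    if predicted_1450 < price_1120:
        # expected drop: sell holdings before the morning close,
        # buy the same quantity back at 14:50
        return "sell_then_buy"
    if predicted_1450 > price_1120:
        # expected rise: buy extra positions before the morning close,
        # sell them at 14:50
        return "buy_then_sell"
    return "hold"
```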
In the next section, we describe the data we used in the analyses. The Method
section contains Markowitz portfolio optimization, multi-layer perceptron
(MLP), daily volatility, and measurement of return. The Results section
includes the results of portfolio optimization, modelling, and an evaluation
of the return of the strategy. The last section discusses the model
performance, further improvement, and comparison with other models.
## 2 Data
Figure 1: Histogram of daily volatility
Given their performance earlier in 2020, we pick 30 stocks for the portfolio. The
experimental data include one-minute close prices for October and daily close prices
from January 2020 to September 2020, collected from Xueqiu Finance. The daily data
are used in the Markowitz optimization, and the minute data are used for the
intraday trading. The data are plotted in Figure 4 and Figure 5 (see Appendix), and
ticker information is given in Table 3 (see Appendix). In Figure 1, the daily
volatility ranges from 0 to 0.03, and it exceeds 0.002 with probability about 0.6.
This indicates potential earnings for the intraday trading strategy.
## 3 Method
### 3.1 Markowitz portfolio optimization
#### 3.1.1 Model
Markowitz's portfolio theory was introduced in 1952. It uses the means and variances
of individual stock returns to find the efficient frontier of an investment
portfolio, i.e. the portfolios with the lowest variance at each level of expected
return. According to Markowitz's model, minimizing a portfolio's risk requires
investing in diverse stocks whose returns have low correlations. Starting from the
relationship between the return and risk of risky assets, the model addresses the
selection of an optimal asset portfolio. The portfolio selection model thus focuses
on risk diversification and determines the portfolio quantitatively.
The parametric optimization form of the Markowitz model has the following
mathematical representation:
$\begin{array}{l}\max\mathbb{E}\left(r_{P}\right)=\max\sum_{i=1}^{n}\omega_{i}\mu_{i}\\ \min\sigma_{P}=\min\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\omega_{i}\omega_{j}\sigma_{ij}}\\ 0\leq\omega_{i}\leq 1,\quad i=1,\ldots,n\\ \sum_{i=1}^{n}\omega_{i}=1\end{array}$
where $\omega_{i}$ is the percentage of capital invested in asset $i$; $r_{i}$ is
the return on asset $i$; $\mu_{i}$ is the expected return on asset $i$;
$\sigma_{ij}$ is the covariance between the returns on assets $i$ and $j$;
$\mathbb{E}\left(r_{P}\right)$ is the expected return of the portfolio; and
$\sigma_{P}$ is the risk of the portfolio. [9] The Markowitz model is based on
several assumptions concerning the behavior of investors and financial
markets: [10]
* •
A probability distribution of possible returns over some holding period can be
estimated by investors.
* •
Investors have single-period utility functions in which they maximize utility
within the framework of diminishing marginal utility of wealth.
* •
Variability about the possible values of return is used by investors to
measure risk.
* •
Investors care only about the means and variance of the returns of their
portfolios over a particular period.
* •
Expected return and risk as used by investors are measured by the first two
moments of the probability distribution of returns: expected value and
variance.
* •
Return is desirable; risk is to be avoided.
* •
Financial markets are frictionless.
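To make the formulation above concrete, the following is a minimal sketch (not the authors' code) of the minimum-variance side of the problem using `scipy`; the DataFrame `daily_returns` of daily returns for the 30 stocks and the function name are illustrative assumptions.

```python
# Minimal sketch of the minimum-variance Markowitz optimization described above.
# Assumes `daily_returns` is a pandas DataFrame of daily simple returns
# (rows = trading days from January to September 2020, columns = the 30 tickers).
import numpy as np
import pandas as pd
from scipy.optimize import minimize

def min_variance_weights(daily_returns: pd.DataFrame) -> np.ndarray:
    cov = daily_returns.cov().values          # sample covariance matrix (sigma_ij)
    n = cov.shape[0]

    def portfolio_variance(w):
        return w @ cov @ w                    # sigma_P^2 = sum_i sum_j w_i w_j sigma_ij

    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # weights sum to 1
    bounds = [(0.0, 1.0)] * n                 # no short selling: 0 <= w_i <= 1
    w0 = np.full(n, 1.0 / n)                  # start from the equal-weight portfolio
    result = minimize(portfolio_variance, w0, method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x
```

The same constraints (weights summing to one, no short selling) are used in the random-portfolio search of Section 4.1.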
#### 3.1.2 Sharpe ratio
The Sharpe ratio measures the return on an investment per unit of risk. By adjusting
the return for risk, it allows us to compare the performance of different
investments on a common scale; without such a risk adjustment, the returns and risk
performance of different securities portfolios cannot be compared fairly.
$\text{ Sharpe Ratio }=\frac{R_{p}-R_{rf}}{\sigma_{p}}$
where $R_{p}$ is the expected portfolio/asset return, $R_{rf}$ is the risk-
free rate of return, $\sigma_{p}$ is portfolio/asset standard deviation.
### 3.2 Multilayer Perceptron (MLP)
The architecture of artificial neural networks (ANNs) is based on layers connected
by nodes called neurons, analogous to the biological neurons of the brain. [11] Each
connection transmits a signal between neurons in a manner similar to a synapse. [12]
The MLP, a feedforward ANN, contains three main parts: one input layer, one or more
hidden layers, and one output layer, and can be successfully employed for
prediction, classification, signal processing, and error filtering. [13] Each node
applies a nonlinear activation function, and the MLP is trained with the
backpropagation learning algorithm. [14, 15] As a popular and frequently used ANN
technique, the MLP is employed here to predict the price movement. Figure 2 shows
the architecture of the developed network. [16]
Figure 2: Multilayer Perceptron structure
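As an illustration of how such an MLP can be applied to one-step-ahead price prediction, the following sketch uses scikit-learn's `MLPRegressor` on lagged price windows; the function name, the hidden-layer size, and the toy price series are assumptions, not the authors' implementation.

```python
# Minimal sketch of an MLP for one-step-ahead price prediction on lagged windows.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_and_predict_next(prices: np.ndarray, n_lags: int = 60) -> float:
    """Fit an MLP on lagged windows of `prices` and predict the next value."""
    X = np.array([prices[i:i + n_lags] for i in range(len(prices) - n_lags)])
    y = prices[n_lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)  # one hidden layer (size assumed)
    mlp.fit(X, y)
    return float(mlp.predict(prices[-n_lags:].reshape(1, -1))[0])

# Toy usage with a synthetic price series (the real inputs are one-minute close prices).
toy_prices = 100 + np.cumsum(np.random.default_rng(0).normal(0, 0.1, size=200))
print(fit_and_predict_next(toy_prices))
```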
### 3.3 Daily volatility
To verify the feasibility of the strategy, we need the probability mass function
(pmf) of the volatility. Instead of being represented by the standard deviation, the
volatility is defined by the formula:
$\sigma=\frac{|p_{a}-p_{b}|}{p_{a}}$
where $p_{a}$ is the price of the stock at 11:20, $p_{b}$ is the price of the
stock at 14:50, $\sigma$ is the daily volatility.
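A small sketch of this calculation, assuming `price_1120` and `price_1450` are aligned pandas Series of the 11:20 and 14:50 prices (one entry per stock per trading day); both names are illustrative assumptions.

```python
# Minimal sketch of the daily-volatility calculation defined above.
import pandas as pd

def daily_volatility(price_1120: pd.Series, price_1450: pd.Series) -> pd.Series:
    return (price_1120 - price_1450).abs() / price_1120   # sigma = |p_a - p_b| / p_a

# Empirical probability that the daily volatility exceeds 0.002 (cf. Figure 1):
# prob_over = (daily_volatility(price_1120, price_1450) > 0.002).mean()
```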
### 3.4 Measurement of return
#### 3.4.1 Monthly return
In this strategy, we hold origin positions for 30 stocks. The monthly
return for each stock is defined by the formula:
$r_{m}=\frac{p_{e}-p_{s}}{p_{s}}$
where $r_{m}$ is the monthly return of the stock; $p_{e}$ is the selling
price; $p_{s}$ is the buying price.
#### 3.4.2 Daily return
In this strategy, we trade a stock twice per day: selling in the morning and buying
in the afternoon, or vice versa. We assume that the amount traded each day equals
the origin position. Given a stock's daily buying price, daily selling price, and
origin position price, the daily return of the stock is defined by the formula:
$r_{t}=\frac{s_{t}-b_{t}}{o}$
where $r_{t}$ is the return of the stock at time $t$, $s_{t}$ is the selling
price at time $t$, $b_{t}$ is the buying price of the stock at time $t$, $o$
is the origin position price.
#### 3.4.3 Yearly return
Given any stock and its daily or monthly return, the formulas of expected
yearly return for this stock are:
$\mathbb{E}\left(r_{y}\right)=(1+\mathbb{E}\left(r_{m}\right))^{12}-1$
$\mathbb{E}\left(r_{y}\right)=(1+\sum_{t=1}^{n}\mathbb{E}\left(r_{t}\right))^{12}-1$
where $r_{y}$ is the yearly return, $r_{m}$ is the monthly return, $r_{t}$ is
the daily return, $n$ is trading days of that month, $\mathbb{E}(\cdot)$ is
the expectation function.
#### 3.4.4 Portfolio return
Given any set of risky assets and a set of weights that describes how the
portfolio investment is split, the general formula of the expected return for
$n$ assets is:
$\mathbb{E}\left(r_{P}\right)=\sum_{i=1}^{n}w_{i}\mathbb{E}\left(r_{i}\right)$
where $\sum_{i=1}^{n}w_{i}=1.0$; $n$ is the number of securities; $w_{i}$ is the
proportion of the funds invested in security $i$; $r_{i}$ and $r_{P}$ are the
returns on the $i$th security and on portfolio $P$; and $\mathbb{E}(\cdot)$ is the
expectation function.
The return computation is nothing more than finding the weighted average
return of the securities included in the portfolio.[10]
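The return measures of this section can be summarized in a few lines of code; the array names below are illustrative assumptions.

```python
# Minimal sketch of the return measures in Section 3.4, assuming `weights` and
# `monthly_returns` are aligned numpy arrays for the 30 stocks.
import numpy as np

def portfolio_return(weights: np.ndarray, returns: np.ndarray) -> float:
    return float(np.dot(weights, returns))          # E(r_P) = sum_i w_i E(r_i)

def yearly_from_monthly(r_m: float) -> float:
    return (1.0 + r_m) ** 12 - 1.0                  # compound a monthly return over 12 months

def yearly_from_daily(daily_returns_month: np.ndarray) -> float:
    return (1.0 + daily_returns_month.sum()) ** 12 - 1.0  # sum daily returns of a month, then compound

# Example with the October figure quoted in Section 4.3:
# yearly_from_monthly(0.0943) is roughly 1.95, i.e. about the 194.87% yearly return reported there.
```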
## 4 Results
### 4.1 Portfolio optimization
The quadratic programming optimization has two constraints: the weights sum to 1,
and each weight is greater than or equal to 0 (short selling is not allowed). The
stock returns and standard deviations are calculated from the daily close prices
from January to September 2020. Considering different investment risk preferences,
we found two optimal weight vectors: one with the greatest Sharpe ratio and one with
the smallest standard deviation. To calculate the Sharpe ratio, the risk-free
interest rate is set to 4%.
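A hedged sketch of the random-portfolio search behind Figure 3 is given below; the 50,000 random weight groups and the 4% risk-free rate follow the text, while `daily_returns`, the 252-day annualization, and the function name are illustrative assumptions.

```python
# Minimal sketch of the random-portfolio search used to locate the two optimal
# portfolios. Assumes `daily_returns` is a DataFrame of daily returns for the
# 30 stocks (January-September 2020).
import numpy as np

def random_portfolio_search(daily_returns, n_portfolios=50_000, risk_free=0.04, seed=0):
    rng = np.random.default_rng(seed)
    mu = daily_returns.mean().values * 252            # annualized expected returns (assumption)
    cov = daily_returns.cov().values * 252            # annualized covariance matrix (assumption)
    n = len(mu)

    best_sharpe, best_vol = None, None
    for _ in range(n_portfolios):
        w = rng.random(n)
        w /= w.sum()                                  # random weights summing to 1
        ret = w @ mu
        vol = np.sqrt(w @ cov @ w)
        sharpe = (ret - risk_free) / vol              # Sharpe ratio as defined in Section 3.1.2
        if best_sharpe is None or sharpe > best_sharpe[0]:
            best_sharpe = (sharpe, ret, vol, w)       # greatest-Sharpe-ratio portfolio
        if best_vol is None or vol < best_vol[2]:
            best_vol = (sharpe, ret, vol, w)          # smallest-volatility portfolio
    return best_sharpe, best_vol
```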
Figure 3: Expected returns and Sharpe ratios for random portfolios.
Red rhombus is the optimal portfolio with the greatest Sharpe ratio. Blue
rhombus is the optimal portfolio with the smallest standard deviation.
In Figure 3, the points correspond to 50,000 randomly generated weight groups for
the 30 stocks. The optimal portfolio with the greatest Sharpe ratio has an expected
yearly return of 65.44% and a yearly volatility of 0.27; its Sharpe ratio is 0.51.
The optimal portfolio with the smallest standard deviation has an expected yearly
return of 17.95% and a yearly volatility of 0.21. In the following analysis, we use
the greatest-Sharpe-ratio portfolio for a high return. Table 1 shows the weight of
each stock in this optimal portfolio.
As a result of the optimization, the greatest weight is 0.0818 for 'SZ300122' and
the smallest weight is 0.0049 for 'SZ000725'; the two stocks have October monthly
returns of 14.98% and -3.67%, respectively. Greater weights (over 0.05) are given to
'SZ002475', 'SZ002594', 'SZ002607', 'SZ300122', 'SH600519', 'SH601888', 'SH601899',
and 'SH603288'. In the order of the stocks in Table 1 from top to bottom, they
contribute -0.02%, 0.27%, 0.12%, -0.02%, 0.45%, 0.02%, 0.61%, -0.27%, 2.75%, 1.46%,
0.86%, 1.23%, -0.04%, -0.01%, 0.84%, 0.02%, -0.09%, -0.15%, -0.01%, -0.03%, -0.05%,
-0.02%, -0.05%, 0, 0.01%, 0.01%, 0.76%, -0.70%, -0.02%, and 0.08%, respectively, to
the portfolio monthly return. Judging by these contributions, the top-performing
stocks are 'SZ002594', 'SZ002607', and 'SZ300015', whose monthly contributions to
the portfolio each exceed 1%.
With the greatest-Sharpe-ratio portfolio, the origin holdings achieve a monthly
return of 8.02% in October, corresponding to a yearly return of 152.35%.
Table 1: Portfolio weights and October monthly returns of the 30 stocks
Ticker symbol | Weight | Close price on 2020-9-30 | Close price on 2020-10-30 | Monthly return (%)
---|---|---|---|---
SZ000002 | 0.0110 | 28.02 | 27.49 | -1.89
SZ000333 | 0.0373 | 72.60 | 77.87 | 7.26
SZ000651 | 0.0128 | 53.30 | 58.43 | 9.62
SZ000725 | 0.0049 | 4.91 | 4.73 | -3.67
SZ000858 | 0.0425 | 221.00 | 244.35 | 10.57
SZ002352 | 0.0091 | 81.20 | 82.80 | 1.97
SZ002415 | 0.0345 | 38.11 | 44.90 | 17.82
SZ002475 | 0.0673 | 57.13 | 54.86 | -3.97
SZ002594 | 0.0733 | 116.24 | 159.81 | 37.48
SZ002607 | 0.0692 | 32.63 | 39.52 | 21.12
SZ300015 | 0.0409 | 51.42 | 62.26 | 21.08
SZ300122 | 0.0818 | 139.31 | 160.18 | 14.98
SH600009 | 0.0090 | 68.78 | 66.10 | -3.90
SH600028 | 0.0058 | 3.91 | 3.90 | -2.56
SH600104 | 0.0348 | 19.13 | 23.15 | 24.01
SH600276 | 0.0226 | 89.82 | 88.84 | 1.09
SH600309 | 0.0068 | 69.30 | 78.49 | -13.26
SH600346 | 0.0368 | 18.56 | 19.30 | -3.99
SH600519 | 0.0782 | 1668.50 | 1670.02 | -0.09
SH601012 | 0.0200 | 75.01 | 75.99 | -1.31
SH601088 | 0.0435 | 16.47 | 16.65 | -1.09
SH601225 | 0.0047 | 8.39 | 8.75 | -4.29
SH601318 | 0.0259 | 76.26 | 77.83 | -2.06
SH601398 | 0.0140 | 4.92 | 4.92 | 0
SH601766 | 0.0060 | 5.49 | 5.39 | 1.82
SH601857 | 0.0075 | 4.11 | 4.07 | 0.97
SH601888 | 0.0719 | 222.94 | 199.40 | 10.56
SH601899 | 0.0513 | 6.15 | 6.99 | -13.66
SH601939 | 0.0094 | 6.15 | 6.29 | -2.28
SH603288 | 0.0672 | 162.10 | 160.20 | 1.17
### 4.2 Model
To predict with the MLP, we train a new model for each trading day using that day's
data before 11:20. For each model, 60 lags of the input time series are used as
inputs. One hundred networks are trained, and the result is their ensemble forecast;
to reduce the effect of outliers, the forecasts are combined with the median. The
best number of hidden nodes is found by 5-fold cross-validation.
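The daily model-fitting step can be sketched as follows; this is an illustration using scikit-learn rather than the authors' implementation, and the candidate hidden-layer sizes and function names are assumptions.

```python
# Minimal sketch of the daily model-fitting step: 60 lags as inputs, hidden-layer
# size chosen by 5-fold cross-validation, 100 networks combined with the median.
# Assumes `morning_prices` is a numpy array of one-minute close prices from 9:30
# to 11:20 of the current day.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

def train_daily_ensemble(morning_prices, n_lags=60, n_networks=100):
    X = np.array([morning_prices[i:i + n_lags]
                  for i in range(len(morning_prices) - n_lags)])
    y = morning_prices[n_lags:]

    # Pick the number of hidden nodes by 5-fold cross-validation (candidate sizes assumed).
    search = GridSearchCV(MLPRegressor(max_iter=2000),
                          {"hidden_layer_sizes": [(h,) for h in (5, 10, 20, 40)]},
                          cv=5)
    search.fit(X, y)
    best_size = search.best_params_["hidden_layer_sizes"]

    # Train an ensemble of networks with different random initializations.
    return [MLPRegressor(hidden_layer_sizes=best_size, max_iter=2000,
                         random_state=k).fit(X, y) for k in range(n_networks)]

def ensemble_forecast(models, last_window):
    # The median over the ensemble reduces the effect of outlier forecasts.
    preds = [m.predict(last_window.reshape(1, -1))[0] for m in models]
    return float(np.median(preds))
```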
We use this prediction model for paper trading. The model predicts the stock price
from 13:00 to 14:50, and we make the trading decision based on the difference
between the predicted price at 14:50 and the price at 11:20. If the predicted price
is lower than the price at 11:20, we sell the holding positions before the morning
close and buy them back at 14:50. If the predicted price is higher than the price at
11:20, we buy additional positions before the morning close and sell them at 14:50.
Given the trading fee of 0.05%, if the difference between the predicted price at
14:50 and the price at 11:20 is not more than 0.1% of the latter, we do not trade on
that day. To calculate the daily return, we use the close price on September 30th as
the origin position price. The October sum of daily returns for each stock in the
portfolio is given in Table 2; the daily earnings and model predictions for each
stock are shown in the Appendix.
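The trading rule and daily-return bookkeeping described above can be expressed compactly; the function and variable names below are illustrative assumptions.

```python
# Minimal sketch of the trading decision and daily-return calculation for one
# stock on one trading day.
def daily_trade_return(predicted_1450, price_1120, real_1450, origin_price,
                       threshold=0.001):
    # Skip the trade if the predicted move is within 0.1% of the 11:20 price
    # (the round-trip fee of 2 x 0.05% would eat the expected gain).
    if abs(predicted_1450 - price_1120) <= threshold * price_1120:
        return 0.0
    if predicted_1450 < price_1120:
        # Predicted fall: sell before the morning close, buy back at 14:50.
        sell, buy = price_1120, real_1450
    else:
        # Predicted rise: buy before the morning close, sell at 14:50.
        sell, buy = real_1450, price_1120
    return (sell - buy) / origin_price      # r_t = (s_t - b_t) / o
```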
In Table 2, 'SZ000333', 'SZ000651', 'SZ000725', 'SZ000858', 'SZ002352',
'SZ002415', 'SZ002475', 'SZ002594', 'SZ002607', 'SZ300015', 'SH600028',
'SH600309', 'SH600346', 'SH601088', 'SH601398', 'SH601857', and 'SH603288' have
positive returns. The number of stocks with positive returns is 17 and the number
with negative returns is 13. The top-performing stocks in the intraday trading
strategy are 'SZ002594', 'SZ002607', and 'SZ002475', with sums of daily returns of
14.13%, 10.10%, and 7.09%, respectively. Notably, they are also given greater
weights of 0.0733, 0.0692, and 0.0673.
For 'SZ002594', in Table 12 (see Appendix), the highest daily return is 6.27% and
the lowest is -5.43%. For 'SZ002607', in Table 13 (see Appendix), the highest daily
return is 3.88% and the lowest is -2.21%. For 'SZ002475', in Table 11 (see
Appendix), the highest daily return is 2.94% and the lowest is -0.68%. All three
stocks posted strong gains over the month but also posted big losses in some
trading sessions.
Stocks with high intraday volatility, such as 'SZ300122', 'SH600104', and
'SH601318', are prone to large losses. The October sums of daily returns for the
three stocks are -11.48%, -7.11%, and -4.04%, respectively, and their weights are
0.0818, 0.0348, and 0.0259. For 'SZ300122', in Table 15 (see Appendix), the highest
daily return is 8.48% and the lowest is -7.90%. For 'SH600104', in Table 18 (see
Appendix), the highest daily return is 1.05% and the lowest is -3.08%. For
'SH601318', in Table 26 (see Appendix), the highest daily return is 1.08% and the
lowest is -1.32%. All three stocks posted gains in some sessions but suffered large
losses in others, which left them with negative sums over the month.
In the prediction, the model is accurate when the price rises and falls
periodically. However, in periods of sustained rise or fall, the model typically
predicts reversion to the mean, which tends to generate large trading losses.
Combining the prediction model with the intraday trading strategy, we have a
return of 1.41% in October and a yearly return of 18.24%.
Table 2: October sum of daily returns for the 30 stocks
Ticker symbol | October sum of daily return (%)
---|---
SZ000002 | -1.96
SZ000333 | 3.66
SZ000651 | 2.35
SZ000725 | 1.22
SZ000858 | 0.88
SZ002352 | 2.38
SZ002415 | 0.25
SZ002475 | 7.09
SZ002594 | 14.13
SZ002607 | 10.10
SZ300015 | 2.40
SZ300122 | -11.48
SH600009 | -1.45
SH600028 | 1.29
SH600104 | -7.11
SH600276 | -1.51
SH600309 | 5.25
SH600346 | 6.74
SH600519 | -3.41
SH601012 | -1.15
SH601088 | 0.14
SH601225 | -0.7
SH601318 | -4.04
SH601398 | 0.61
SH601766 | -0.19
SH601857 | 1.69
SH601888 | -0.72
SH601899 | -1.12
SH601939 | -1.78
SH603288 | 4.72
### 4.3 Portfolio return
Total portfolio returns are divided into origin position gains and intraday trading
gains. As mentioned above, the origin position gain for October is 8.02% and the
intraday trading gain for October is 1.41%. The total portfolio return for October
is therefore 9.43%, corresponding to a yearly return of 194.87%.
## 5 Discussion
The purpose of the current study was to combine a prediction model with a practical
intraday trading strategy under the T+1 constraint in China's stock market. The
study has shown that the prediction model and the strategy are profitable for
trading. The findings will be of interest to fund companies that invest in China's
stock market and are interested in intraday trading under T+1.
More broadly, further research is needed on some practical trading problems. The
first is setting a stop loss: when the predicted trend is the reverse of the actual
trend and the loss within a day becomes too large, say 3%, we should trade ahead of
time to avoid a larger loss. The second is setting a higher threshold for the
trading decision. Because the trading fee is 0.05% per trade, trading too many times
will cause losses even if the strategy is profitable. In this article, we set the
threshold to 0.1%: if the difference between the predicted price at 14:50 and the
real price at 11:20 is within 0.1% of the origin position price, we do not trade on
that day. To reduce the trading fee, the threshold could be increased to 0.2% or
more.
## References
* Tang et al. [2003] Tang H, Chiu KC, Xu L. Finite mixture of ARMA-GARCH model for stock price prediction. In: Proceedings of the Third International Workshop on Computational Intelligence in Economics and Finance (CIEF’2003), North Carolina, USA; 2003. p. 1112–1119.
* Ince and Trafalis [2007] Ince H, Trafalis TB. Kernel principal component analysis and support vector machines for stock price prediction. Iie Transactions 2007;39(6):629–637.
* Gong and Sun [2009] Gong J, Sun S. A new approach of stock price prediction based on logistic regression model. In: 2009 International Conference on New Trends in Information and Service Science IEEE; 2009. p. 1366–1371.
* Wijaya et al. [2010] Wijaya YB, Kom S, Napitupulu TA. Stock price prediction: comparison of Arima and artificial neural network methods-An Indonesia Stock’s Case. In: 2010 Second International Conference on Advances in Computing, Control, and Telecommunication Technologies IEEE; 2010. p. 176–179.
* Alkhatib et al. [2013] Alkhatib K, Najadat H, Hmeidi I, Shatnawi MKA. Stock price prediction using k-nearest neighbor (kNN) algorithm. International Journal of Business, Humanities and Technology 2013;3(3):32–44.
* Di Persio and Honchar [2016] Di Persio L, Honchar O. Artificial neural networks architectures for stock price prediction: Comparisons and applications. International journal of circuits, systems and signal processing 2016;10(2016):403–413.
* Nikou et al. [2019] Nikou M, Mansourfar G, Bagherzadeh J. Stock price prediction using DEEP learning algorithm and its comparison with machine learning algorithms. Intelligent Systems in Accounting, Finance and Management 2019;26(4):164–174.
* Guresen et al. [2011] Guresen E, Kayakutlu G, Daim TU. Using artificial neural network models in stock market index prediction. Expert Systems with Applications 2011;38(8):10389–10397.
* Markowitz and Todd [2000] Markowitz HM, Todd GP. Mean-variance analysis in portfolio choice and capital markets, vol. 66. John Wiley & Sons; 2000.
* Chen et al. [2010] Chen WP, Chung H, Ho KY, Hsu TL. Portfolio optimization models and mean–variance spanning tests. In: Handbook of quantitative finance and risk management Springer; 2010.p. 165–184.
* Ecer [2013] Ecer F. Artificial Neural Networks in Predicting Financial Performance: An Application for Turkey’s Top 500 Companies. Economic Computation and Economic Cybernetics Studies and Research 2013;47(2):103–114.
* Ardabili et al. [2016] Ardabili SF, Mahmoudi A, Gundoshmian TM. Modeling and simulation controlling system of HVAC using fuzzy and predictive (radial basis function, RBF) controllers. Journal of Building Engineering 2016;6:301–308.
* Ecer [2013] Ecer F. Comparing the bank failure prediction performance of neural networks and support vector machines: The Turkish case. Economic research-Ekonomska istraživanja 2013;26(3):81–98.
* Gundoshmian et al. [2019] Gundoshmian TM, Ardabili S, Mosavi A, Várkonyi-Kóczy AR. Prediction of combine harvester performance using hybrid machine learning modeling and response surface methodology. In: International Conference on Global Research and Education Springer; 2019. p. 345–360.
* Ardabili [2014] Ardabili S. Simulation and comparison of control system in mushroom growing rooms environment. PhD thesis, Thesis of Master science. Department of mechanic of agricultural machinery …; 2014.
* Ecer et al. [2020] Ecer F, Ardabili S, Band SS, Mosavi A. Training Multilayer Perceptron with Genetic Algorithms and Particle Swarm Optimization for Modeling Stock Price Index Prediction. Entropy 2020;22(11):1239.
## Appendix
Table 3: Ticker symbol reference Number of Stock | Ticker symbol | Company
---|---|---
01 | SZ000002 | CHINA VANKE CO., Ltd.
02 | SZ000333 | Midea Group
03 | SZ000651 | Gree Electric Appliances Inc. of Zhuhai
04 | SZ000725 | Boe Technology Group Co., Ltd.
05 | SZ000858 | Wuliangye Yibin Co.,Ltd.
06 | SZ002352 | SF Express (Group) Co., Ltd.
07 | SZ002415 | Hangzhou Hikvision Digital Technology Co., Ltd.
08 | SZ002475 | Shenzhen Luxshare Precision Industry Co.,Ltd.
09 | SZ002594 | BYD Co., Ltd.
10 | SZ002607 | Offcn Education Technology Co., Ltd.
11 | SZ300015 | Aier Eye Hospital Group Co., Ltd.
12 | SZ300122 | Chongqing Zhifei Biological Products Co., Ltd.
13 | SH600009 | Shanghai Airport Authority
14 | SH600028 | Sinopec, the China Petroleum and Chemical Corporation
15 | SH600104 | SAIC Motor Corporation Limited
16 | SH600276 | Jiangsu Hengrui Medicine Co., Ltd.
17 | SH600309 | Wanhua Chemical Group Co., Ltd.
18 | SH600346 | Hengli Group
19 | SH600519 | Kweichow Moutai Co., Ltd.
20 | SH601012 | LONGi Green Energy Technology Co.,Ltd.
21 | SH601088 | China Shenhua Energy Co., Ltd.
22 | SH601225 | Shaanxi Coal and Chemical Industry Group Co., Ltd.
23 | SH601318 | Ping An Insurance (Group) Company of China, Ltd.
24 | SH601398 | Industrial and Commercial Bank of China Limited
25 | SH601766 | CRRC Corporation Limited
26 | SH601857 | China National Petroleum Corporation
27 | SH601888 | China Tourism Group Duty Free Corporation Limited.
28 | SH601899 | Zijin Mining Group Co., Limited
29 | SH601939 | China Construction Bank Corporation
30 | SH603288 | Foshan Haitian Flavouring & Food Co. Ltd
Figure 4: Minute close price on 2020-10-09
Figure 5: Daily close price from January to September 2020
Table 4: SZ000002 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 27.90 | 27.90 | 27.89 | 0
2020-10-12 | 28.37 | 28.26 | 28.39 | 0
2020-10-13 | 27.90 | 27.95 | 27.75 | -0.18
2020-10-14 | 27.75 | 27.79 | 27.49 | -0.14
2020-10-15 | 27.77 | 27.73 | 27.80 | -0.14
2020-10-16 | 27.81 | 27.83 | 28.20 | 0.07
2020-10-19 | 27.96 | 27.74 | 28.11 | -0.79
2020-10-20 | 27.42 | 27.45 | 27.44 | 0
2020-10-21 | 27.35 | 27.60 | 27.22 | -0.89
2020-10-22 | 27.79 | 27.99 | 27.96 | 0.71
2020-10-23 | 27.93 | 27.95 | 27.97 | 0.07
2020-10-26 | 27.99 | 27.99 | 27.96 | 0
2020-10-27 | 27.61 | 27.50 | 27.69 | -0.39
2020-10-28 | 27.00 | 26.98 | 26.98 | 0
2020-10-29 | 27.09 | 27.45 | 27.24 | 1.28
2020-10-30 | 27.88 | 27.44 | 28.36 | -1.57
Table 5: SZ000333 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 73.88 | 73.38 | 73.96 | -0.69
2020-10-12 | 73.16 | 73.70 | 73.15 | 0
2020-10-13 | 74.90 | 75.91 | 74.92 | 0
2020-10-14 | 75.27 | 75.41 | 75.32 | 0
2020-10-15 | 76.73 | 76.29 | 76.54 | 0.6
2020-10-16 | 74.73 | 75.10 | 75.02 | 0.51
2020-10-19 | 75.12 | 74.84 | 74.41 | 0.39
2020-10-20 | 76.03 | 76.19 | 75.82 | -0.22
2020-10-21 | 77.45 | 77.31 | 79.25 | -0.19
2020-10-22 | 77.46 | 78.22 | 77.35 | -1.05
2020-10-23 | 77.36 | 75.64 | 77.44 | 0
2020-10-26 | 76.37 | 76.53 | 76.91 | 0.22
2020-10-27 | 75.95 | 75.99 | 75.91 | 0
2020-10-28 | 76.48 | 76.89 | 74.73 | -0.56
2020-10-29 | 79.78 | 81.64 | 82.51 | 2.56
2020-10-30 | 79.28 | 77.76 | 77.31 | 2.09
Table 6: SZ000651 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 54.99 | 54.65 | 55.02 | 0
2020-10-12 | 55.77 | 55.77 | 55.70 | 0
2020-10-13 | 55.76 | 55.89 | 55.66 | -0.24
2020-10-14 | 57.54 | 57.64 | 57.67 | 0.19
2020-10-15 | 58.37 | 57.91 | 58.37 | 0
2020-10-16 | 58.21 | 57.73 | 58.33 | -0.90
2020-10-19 | 57.81 | 57.25 | 57.63 | 1.05
2020-10-20 | 57.56 | 57.64 | 57.72 | 0.15
2020-10-21 | 57.87 | 57.92 | 57.88 | 0
2020-10-22 | 58.35 | 58.56 | 58.33 | 0
2020-10-23 | 58.28 | 58.51 | 57.61 | -0.43
2020-10-26 | 58.00 | 58.10 | 57.28 | -0.19
2020-10-27 | 57.63 | 57.34 | 57.73 | -0.54
2020-10-28 | 57.26 | 57.31 | 57.39 | 0.09
2020-10-29 | 57.28 | 58.12 | 57.92 | 1.58
2020-10-30 | 59.30 | 58.45 | 59.22 | 1.59
Table 7: SZ000725 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 5.13 | 5.18 | 5.27 | 1.02
2020-10-12 | 5.24 | 5.23 | 5.23 | 0.20
2020-10-13 | 5.16 | 5.17 | 5.15 | -0.20
2020-10-14 | 5.06 | 5.05 | 5.12 | -0.20
2020-10-15 | 5.00 | 4.97 | 5.01 | -0.61
2020-10-16 | 4.89 | 4.91 | 4.88 | -0.41
2020-10-19 | 4.88 | 4.85 | 4.88 | 0
2020-10-20 | 4.79 | 4.92 | 4.80 | 2.65
2020-10-21 | 4.80 | 4.85 | 4.81 | 1.02
2020-10-22 | 4.89 | 4.88 | 4.90 | -0.20
2020-10-23 | 4.83 | 4.79 | 4.83 | 0
2020-10-26 | 4.79 | 4.79 | 4.80 | 0
2020-10-27 | 4.71 | 4.74 | 4.70 | -0.61
2020-10-28 | 4.72 | 4.79 | 4.70 | -1.43
2020-10-29 | 4.87 | 4.83 | 5.13 | -0.81
2020-10-30 | 4.87 | 4.83 | 5.13 | -0.81
Table 8: SZ000858 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 229.23 | 226.94 | 234.61 | -1.04
2020-10-12 | 237.99 | 241.99 | 245.37 | 1.81
2020-10-13 | 241.61 | 242.52 | 241.61 | 0
2020-10-14 | 239.39 | 239.61 | 228.42 | -0.09
2020-10-15 | 238.81 | 238.85 | 238.82 | 0
2020-10-16 | 237.12 | 236.93 | 238.11 | -0.08
2020-10-19 | 236.27 | 235.69 | 237.12 | -0.26
2020-10-20 | 239.20 | 240.98 | 239.53 | 0.81
2020-10-21 | 240.94 | 241.86 | 241.02 | 0
2020-10-22 | 240.30 | 242.61 | 240.70 | 1.05
2020-10-23 | 240.66 | 237.10 | 241.41 | -1.61
2020-10-26 | 233.87 | 233.17 | 235.84 | -0.32
2020-10-27 | 232.54 | 234.10 | 232.51 | 0
2020-10-28 | 239.09 | 240.40 | 243.81 | 0.59
2020-10-29 | 253.13 | 251.03 | 266.74 | -0.95
2020-10-30 | 254.94 | 243.75 | 245.35 | 0.99
Table 9: SZ002352 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 86.20 | 85.37 | 87.05 | -1.02
2020-10-12 | 86.40 | 86.91 | 85.96 | -0.63
2020-10-13 | 89.97 | 92.26 | 89.54 | -2.82
2020-10-14 | 92.07 | 92.82 | 92.44 | 0.92
2020-10-15 | 89.75 | 90.3 | 88.93 | -0.68
2020-10-16 | 89.30 | 89.66 | 89.43 | 0.44
2020-10-19 | 89.59 | 89.37 | 89.98 | -0.27
2020-10-20 | 88.48 | 90.60 | 88.59 | 2.61
2020-10-21 | 89.44 | 89.50 | 89.33 | -0.07
2020-10-22 | 86.37 | 88.01 | 86.06 | -2.02
2020-10-23 | 86.49 | 85.02 | 84.93 | 1.81
2020-10-26 | 87.98 | 87.17 | 87.82 | 1.00
2020-10-27 | 86.14 | 87.04 | 86.17 | 0
2020-10-28 | 86.18 | 86.82 | 85.95 | -0.79
2020-10-29 | 85.54 | 86.33 | 85.78 | 0.97
2020-10-30 | 84.88 | 82.50 | 83.42 | 2.93
Table 10: SZ002415 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 38.19 | 38.10 | 38.01 | 0.24
2020-10-12 | 37.96 | 38.65 | 37.96 | 0
2020-10-13 | 39.01 | 39.03 | 38.75 | -0.05
2020-10-14 | 38.80 | 38.97 | 39.04 | 0.45
2020-10-15 | 39.63 | 39.43 | 39.55 | 0.52
2020-10-16 | 38.45 | 38.52 | 38.67 | 0.18
2020-10-19 | 38.98 | 39.26 | 39.04 | 0.73
2020-10-20 | 39.18 | 39.19 | 39.15 | 0
2020-10-21 | 39.39 | 39.20 | 39.32 | 0.5
2020-10-22 | 39.39 | 39.24 | 39.03 | 0.39
2020-10-23 | 40.24 | 39.00 | 40.34 | -3.25
2020-10-26 | 43.00 | 42.56 | 46.38 | -1.15
2020-10-27 | 44.00 | 44.09 | 45.94 | 0.24
2020-10-28 | 44.31 | 44.22 | 44.15 | 0.24
2020-10-29 | 44.55 | 45.01 | 44.72 | 1.21
2020-10-30 | 46.00 | 45.01 | 45.99 | 0
Table 11: SZ002475 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 61.13 | 61.36 | 61.49 | 0.40
2020-10-12 | 61.04 | 61.98 | 61.21 | 1.65
2020-10-13 | 61.75 | 61.34 | 61.08 | 0.72
2020-10-14 | 59.44 | 59.88 | 59.80 | 0.77
2020-10-15 | 60.25 | 59.92 | 60.01 | 0.58
2020-10-16 | 58.11 | 58.40 | 56.08 | -0.51
2020-10-19 | 58.91 | 58.49 | 58.96 | 0
2020-10-20 | 58.75 | 59.17 | 58.73 | 0
2020-10-21 | 57.62 | 57.74 | 57.62 | 0
2020-10-22 | 57.15 | 57.42 | 57.28 | 0.47
2020-10-23 | 57.20 | 56.78 | 57.40 | -0.74
2020-10-26 | 57.58 | 57.21 | 57.67 | -0.65
2020-10-27 | 56.56 | 57.50 | 56.70 | 1.65
2020-10-28 | 59.10 | 59.49 | 58.63 | -0.68
2020-10-29 | 58.71 | 58.43 | 58.42 | 0.49
2020-10-30 | 56.64 | 54.96 | 55.31 | 2.94
Table 12: SZ002594 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 123.68 | 120.00 | 123.56 | 0
2020-10-12 | 121.03 | 128.36 | 121.39 | 6.31
2020-10-13 | 129.20 | 129.29 | 128.69 | -0.08
2020-10-14 | 130.32 | 130.89 | 132.67 | 0.49
2020-10-15 | 134.38 | 131.94 | 134.27 | 0
2020-10-16 | 125.88 | 127.69 | 119.85 | -1.56
2020-10-19 | 127.25 | 127.10 | 127.43 | -0.13
2020-10-20 | 131.51 | 138.80 | 132.12 | 6.27
2020-10-21 | 137.09 | 138.00 | 136.99 | 0
2020-10-22 | 140.50 | 144.52 | 144.37 | 3.46
2020-10-23 | 141.00 | 137.52 | 142.19 | -2.99
2020-10-26 | 141.86 | 139.95 | 141.92 | 0
2020-10-27 | 137.85 | 140.37 | 137.88 | 0
2020-10-28 | 147.91 | 151.40 | 154.99 | 3
2020-10-29 | 158.02 | 163.59 | 159.63 | 4.79
2020-10-30 | 166.48 | 160.17 | 167.42 | -5.43
Table 13: SZ002607 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 33.07 | 32.51 | 32.95 | 1.72
2020-10-12 | 31.94 | 32.07 | 31.02 | -0.40
2020-10-13 | 32.41 | 32.94 | 32.87 | 1.62
2020-10-14 | 36.25 | 36.25 | 36.19 | 0
2020-10-15 | 36.32 | 36.69 | 35.98 | -1.13
2020-10-16 | 37.88 | 38.64 | 37.95 | 2.33
2020-10-19 | 37.70 | 38.17 | 37.76 | 1.44
2020-10-20 | 37.84 | 37.73 | 37.98 | -0.34
2020-10-21 | 38.08 | 38.09 | 37.99 | -0.03
2020-10-22 | 38.67 | 38.71 | 38.76 | 0.12
2020-10-23 | 39.31 | 37.89 | 39.27 | 4.35
2020-10-26 | 37.23 | 37.34 | 36.99 | -0.34
2020-10-27 | 38.10 | 38.06 | 38.17 | -0.12
2020-10-28 | 39.47 | 38.75 | 40.80 | -2.21
2020-10-29 | 39.87 | 39.62 | 41.18 | -0.77
2020-10-30 | 40.58 | 39.32 | 40.52 | 3.86
Table 14: SZ300015 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 52.54 | 53.28 | 52.53 | 0
2020-10-12 | 55.25 | 57.47 | 57.36 | 4.32
2020-10-13 | 56.80 | 58.12 | 54.84 | -2.57
2020-10-14 | 57.90 | 57.72 | 57.49 | 0.35
2020-10-15 | 57.15 | 57.16 | 56.24 | -0.02
2020-10-16 | 56.55 | 57.18 | 56.60 | 0
2020-10-19 | 56.40 | 56.20 | 56.78 | -0.39
2020-10-20 | 56.91 | 57.43 | 56.99 | 1.01
2020-10-21 | 57.67 | 58.25 | 57.44 | -1.13
2020-10-22 | 57.78 | 58.80 | 57.76 | 0
2020-10-23 | 58.58 | 56.96 | 58.65 | -3.15
2020-10-26 | 56.91 | 57.00 | 56.54 | -0.18
2020-10-27 | 60.30 | 61.50 | 60.95 | 2.33
2020-10-28 | 62.70 | 63.14 | 64.55 | 0.86
2020-10-29 | 62.75 | 62.58 | 62.57 | 0.33
2020-10-30 | 62.38 | 62.05 | 62.12 | 0.64
Table 15: SZ300122 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 148.41 | 149.98 | 150.60 | 1.13
2020-10-12 | 153.97 | 156.10 | 150.39 | -1.53
2020-10-13 | 163.38 | 156.41 | 167.50 | -5.00
2020-10-14 | 160.90 | 157.95 | 161.15 | -2.12
2020-10-15 | 154.10 | 156.55 | 145.62 | -1.76
2020-10-16 | 157.74 | 161.87 | 159.79 | 2.96
2020-10-19 | 159.57 | 158.60 | 159.76 | -0.70
2020-10-20 | 155.85 | 160.01 | 155.50 | -2.99
2020-10-21 | 164.12 | 162.61 | 164.03 | 0
2020-10-22 | 154.45 | 153.79 | 155.32 | -0.47
2020-10-23 | 153.80 | 141.99 | 153.26 | 8.48
2020-10-26 | 143.25 | 142.92 | 146.33 | -0.24
2020-10-27 | 143.78 | 146.30 | 143.25 | -1.81
2020-10-28 | 145.28 | 147.06 | 144.63 | -1.28
2020-10-29 | 151.00 | 162.00 | 147.61 | -7.90
2020-10-30 | 160.74 | 158.30 | 160.07 | 1.75
Table 16: SH600009 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 69.25 | 69.27 | 68.98 | -0.03
2020-10-12 | 69.89 | 69.91 | 69.81 | -0.03
2020-10-13 | 69.73 | 69.92 | 68.98 | -0.28
2020-10-14 | 69.19 | 70.38 | 69.17 | 0
2020-10-15 | 70.00 | 69.62 | 69.55 | 0.55
2020-10-16 | 69.55 | 69.50 | 69.75 | -0.07
2020-10-19 | 69.32 | 69.00 | 69.50 | -0.47
2020-10-20 | 69.44 | 69.52 | 70.21 | 0.12
2020-10-21 | 69.36 | 69.69 | 69.47 | 0.48
2020-10-22 | 68.29 | 67.84 | 68.23 | 0
2020-10-23 | 67.31 | 67.16 | 67.49 | -0.22
2020-10-26 | 67.20 | 67.38 | 67.18 | 0
2020-10-27 | 66.31 | 66.23 | 66.52 | -0.12
2020-10-28 | 66.58 | 67.09 | 66.53 | 0
2020-10-29 | 66.31 | 66.3 | 66.77 | -0.01
2020-10-30 | 67.08 | 66.14 | 67.62 | -1.37
Table 17: SH600028 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 3.94 | 3.94 | 3.94 | 0
2020-10-12 | 3.95 | 3.96 | 4.03 | 0.26
2020-10-13 | 3.93 | 3.94 | 3.89 | -0.26
2020-10-14 | 3.92 | 3.90 | 3.91 | 0.51
2020-10-15 | 3.92 | 3.91 | 3.92 | -0.26
2020-10-16 | 3.94 | 3.93 | 3.93 | 0.26
2020-10-19 | 3.92 | 3.92 | 3.92 | 0
2020-10-20 | 3.92 | 3.92 | 3.92 | 0
2020-10-21 | 3.92 | 3.93 | 3.99 | 0.26
2020-10-22 | 3.93 | 3.92 | 3.92 | 0.26
2020-10-23 | 3.90 | 3.90 | 3.93 | 0
2020-10-26 | 3.90 | 3.90 | 3.90 | 0
2020-10-27 | 3.88 | 3.88 | 3.88 | 0
2020-10-28 | 3.86 | 3.87 | 3.87 | 0.26
2020-10-29 | 3.88 | 3.88 | 3.87 | 0
2020-10-30 | 3.92 | 3.9 | 3.92 | 0
Table 18: SH600104 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 19.87 | 19.77 | 19.29 | 0.52
2020-10-12 | 19.96 | 20.07 | 20.00 | 0.58
2020-10-13 | 20.05 | 20.65 | 20.02 | -3.14
2020-10-14 | 20.34 | 20.53 | 20.33 | 0
2020-10-15 | 20.71 | 20.51 | 20.83 | -1.05
2020-10-16 | 20.39 | 20.59 | 20.43 | 1.05
2020-10-19 | 20.39 | 20.37 | 20.51 | -0.10
2020-10-20 | 20.80 | 21.11 | 20.81 | 0
2020-10-21 | 21.41 | 21.57 | 21.39 | -0.84
2020-10-22 | 21.17 | 21.19 | 21.66 | 0.10
2020-10-23 | 21.49 | 21.05 | 21.49 | 0
2020-10-26 | 21.26 | 21.18 | 21.36 | -0.42
2020-10-27 | 21.13 | 21.12 | 21.20 | -0.05
2020-10-28 | 21.59 | 21.67 | 21.91 | 0.42
2020-10-29 | 22.61 | 22.82 | 22.52 | -1.10
2020-10-30 | 23.65 | 23.06 | 23.79 | -3.08
Table 19: SH600276 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 91.31 | 91.97 | 91.33 | 0
2020-10-12 | 93.85 | 94.03 | 94.59 | 0.20
2020-10-13 | 93.66 | 93.75 | 93.46 | -0.10
2020-10-14 | 93.24 | 92.95 | 93.27 | 0
2020-10-15 | 93.23 | 93.34 | 92.97 | -0.12
2020-10-16 | 92.56 | 92.83 | 91.00 | -0.30
2020-10-19 | 91.83 | 91.27 | 91.99 | -0.62
2020-10-20 | 90.51 | 91.13 | 90.22 | -0.69
2020-10-21 | 89.64 | 89.50 | 89.64 | 0
2020-10-22 | 89.31 | 89.39 | 89.15 | -0.09
2020-10-23 | 89.20 | 88.29 | 89.27 | 0
2020-10-26 | 89.01 | 89.20 | 89.03 | 0
2020-10-27 | 88.96 | 89.23 | 89.21 | 0.30
2020-10-28 | 90.84 | 91.30 | 90.94 | 0.51
2020-10-29 | 91.05 | 91.50 | 90.77 | -0.50
2020-10-30 | 88.56 | 88.65 | 87.35 | -0.10
Table 20: SH600309 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 72.02 | 72.16 | 72.16 | 0.20
2020-10-12 | 76.01 | 77.03 | 78.66 | 1.47
2020-10-13 | 77.02 | 79.37 | 76.97 | 0
2020-10-14 | 80.48 | 80.24 | 80.63 | -0.35
2020-10-15 | 79.70 | 79.61 | 79.65 | 0
2020-10-16 | 78.75 | 79.73 | 79.40 | 1.41
2020-10-19 | 79.02 | 79.30 | 79.36 | 0.40
2020-10-20 | 79.11 | 80.01 | 79.31 | 1.30
2020-10-21 | 79.51 | 79.59 | 79.46 | 0
2020-10-22 | 78.42 | 78.91 | 78.54 | 0.71
2020-10-23 | 79.73 | 77.40 | 81.01 | -3.36
2020-10-26 | 76.34 | 76.32 | 77.01 | -0.03
2020-10-27 | 77.33 | 77.60 | 77.22 | -0.39
2020-10-28 | 79.78 | 82.30 | 79.97 | 3.64
2020-10-29 | 79.00 | 78.83 | 76.16 | 0.25
2020-10-30 | 80.70 | 78.25 | 80.65 | 0
Table 21: SH600346 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 19.05 | 19.01 | 18.90 | 0.22
2020-10-12 | 19.84 | 19.89 | 20.30 | 0.27
2020-10-13 | 20.47 | 20.88 | 21.10 | 2.21
2020-10-14 | 20.87 | 21.07 | 20.90 | 1.08
2020-10-15 | 20.79 | 20.57 | 20.74 | 1.19
2020-10-16 | 20.42 | 20.37 | 20.41 | 0
2020-10-19 | 20.52 | 20.38 | 19.55 | 0.75
2020-10-20 | 20.21 | 20.49 | 20.26 | 1.51
2020-10-21 | 20.30 | 20.09 | 20.30 | 0
2020-10-22 | 19.33 | 19.48 | 18.70 | -0.81
2020-10-23 | 19.66 | 19.50 | 19.56 | 0.86
2020-10-26 | 19.53 | 19.46 | 20.11 | -0.38
2020-10-27 | 19.04 | 19.20 | 18.29 | -0.86
2020-10-28 | 19.52 | 20.34 | 19.53 | 0
2020-10-29 | 20.01 | 19.99 | 20.00 | 0
2020-10-30 | 19.62 | 19.49 | 19.56 | 0.70
Table 22: SH600519 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 1703.75 | 1696.41 | 1703.15 | 0
2020-10-12 | 1742.00 | 1748.24 | 1736.77 | -0.37
2020-10-13 | 1741.31 | 1739.13 | 1741.28 | 0
2020-10-14 | 1730.13 | 1729.03 | 1729.02 | 0
2020-10-15 | 1732.50 | 1724.30 | 1732.14 | 0
2020-10-16 | 1722.99 | 1713.50 | 1730.24 | -0.57
2020-10-19 | 1696.00 | 1697.68 | 1693.43 | -0.10
2020-10-20 | 1729.90 | 1733.38 | 1730.82 | 0
2020-10-21 | 1727.52 | 1731.80 | 1727.46 | 0
2020-10-22 | 1731.85 | 1743.00 | 1729.72 | -0.67
2020-10-23 | 1739.00 | 1722.99 | 1742.37 | -0.96
2020-10-26 | 1645.84 | 1642.00 | 1649.04 | -0.23
2020-10-27 | 1618.55 | 1625.31 | 1618.72 | 0
2020-10-28 | 1657.57 | 1665.29 | 1659.36 | 0.46
2020-10-29 | 1705.00 | 1688.87 | 1757.35 | -0.97
2020-10-30 | 1688.98 | 1666.60 | 1688.38 | 0
Table 23: SH601012 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 82.43 | 81.24 | 82.18 | 1.59
2020-10-12 | 82.51 | 82.75 | 82.07 | -0.32
2020-10-13 | 81.85 | 82.54 | 81.72 | -0.92
2020-10-14 | 80.46 | 80.74 | 81.99 | 0.37
2020-10-15 | 78.27 | 77.64 | 76.49 | 0.84
2020-10-16 | 76.37 | 76.02 | 76.87 | -0.47
2020-10-19 | 72.00 | 71.09 | 69.06 | 1.21
2020-10-20 | 71.46 | 73.14 | 71.31 | -2.24
2020-10-21 | 72.21 | 72.47 | 71.59 | -0.35
2020-10-22 | 70.19 | 70.83 | 70.12 | -0.85
2020-10-23 | 69.69 | 67.71 | 70.21 | -2.64
2020-10-26 | 69.49 | 70.20 | 69.69 | 0.95
2020-10-27 | 69.46 | 71.38 | 69.44 | 0
2020-10-28 | 72.12 | 72.39 | 71.70 | -0.36
2020-10-29 | 73.88 | 74.33 | 74.08 | 0.6
2020-10-30 | 75.88 | 74.80 | 75.32 | 1.44
Table 24: SH601088 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 16.74 | 16.56 | 16.76 | -1.09
2020-10-12 | 16.80 | 16.73 | 16.80 | 0
2020-10-13 | 16.66 | 16.77 | 16.81 | 0.67
2020-10-14 | 16.59 | 16.66 | 16.58 | 0
2020-10-15 | 16.91 | 16.77 | 17.02 | -0.85
2020-10-16 | 16.98 | 16.97 | 17.04 | -0.06
2020-10-19 | 16.88 | 16.86 | 16.93 | -0.12
2020-10-20 | 16.71 | 16.82 | 16.96 | 0.67
2020-10-21 | 16.73 | 16.76 | 16.74 | 0
2020-10-22 | 16.70 | 16.87 | 16.72 | 1.03
2020-10-23 | 16.67 | 16.71 | 16.70 | 0.24
2020-10-26 | 16.65 | 16.57 | 16.64 | 0
2020-10-27 | 16.77 | 16.75 | 17.05 | -0.12
2020-10-28 | 16.57 | 16.64 | 16.59 | 0.43
2020-10-29 | 16.48 | 16.55 | 16.50 | 0.43
2020-10-30 | 16.81 | 16.63 | 16.84 | -1.09
Table 25: SH601225 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 8.96 | 8.89 | 9.33 | -0.83
2020-10-12 | 8.99 | 8.99 | 8.98 | 0
2020-10-13 | 9.11 | 9.25 | 9.09 | -1.67
2020-10-14 | 9.09 | 9.13 | 9.10 | 0
2020-10-15 | 9.40 | 9.14 | 9.45 | -3.10
2020-10-16 | 9.30 | 9.35 | 9.35 | 0.60
2020-10-19 | 9.28 | 9.17 | 9.25 | 1.31
2020-10-20 | 9.12 | 9.17 | 9.24 | 0.60
2020-10-21 | 9.03 | 9.08 | 9.04 | 0.60
2020-10-22 | 8.92 | 9.01 | 8.94 | 1.07
2020-10-23 | 8.99 | 8.90 | 8.98 | 1.07
2020-10-26 | 8.83 | 8.91 | 8.82 | -0.95
2020-10-27 | 8.83 | 8.88 | 8.85 | 0.60
2020-10-28 | 8.81 | 8.92 | 8.82 | 0
2020-10-29 | 8.74 | 8.79 | 8.74 | 0
2020-10-30 | 8.90 | 8.74 | 8.91 | 0
Table 26: SH601318 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 78.41 | 77.88 | 78.92 | -0.69
2020-10-12 | 80.38 | 80.96 | 80.01 | -0.76
2020-10-13 | 81.11 | 81.46 | 82.15 | 0.46
2020-10-14 | 81.29 | 81.35 | 81.29 | 0
2020-10-15 | 82.12 | 81.50 | 80.97 | 0.81
2020-10-16 | 82.16 | 82.13 | 82.28 | -0.04
2020-10-19 | 82.73 | 81.91 | 81.98 | 1.08
2020-10-20 | 81.19 | 81.26 | 81.31 | 0.09
2020-10-21 | 81.98 | 82.50 | 82.02 | 0
2020-10-22 | 81.35 | 82.36 | 81.25 | -1.32
2020-10-23 | 84.51 | 83.75 | 84.79 | -1.00
2020-10-26 | 82.88 | 82.10 | 83.24 | -1.02
2020-10-27 | 81.10 | 81.29 | 81.12 | 0
2020-10-28 | 79.29 | 79.65 | 78.97 | -0.47
2020-10-29 | 78.81 | 79.16 | 78.40 | -0.46
2020-10-30 | 78.31 | 77.76 | 78.40 | -0.72
Table 27: SH601398 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 4.91 | 4.92 | 4.93 | 0.20
2020-10-12 | 4.94 | 4.96 | 5.02 | 0.41
2020-10-13 | 4.93 | 4.92 | 4.99 | -0.20
2020-10-14 | 4.92 | 4.92 | 4.92 | 0
2020-10-15 | 4.95 | 4.93 | 4.94 | 0.41
2020-10-16 | 5.00 | 4.99 | 5.08 | -0.20
2020-10-19 | 5.03 | 5.02 | 5.03 | 0
2020-10-20 | 4.96 | 4.97 | 4.97 | 0.20
2020-10-21 | 5.02 | 5.05 | 5.02 | 0
2020-10-22 | 5.04 | 5.05 | 5.04 | 0
2020-10-23 | 5.07 | 5.07 | 5.08 | 0
2020-10-26 | 5.04 | 5.02 | 5.05 | -0.41
2020-10-27 | 5.02 | 5.03 | 5.02 | 0
2020-10-28 | 5.00 | 5.01 | 5.00 | 0
2020-10-29 | 4.98 | 4.97 | 4.93 | 0.20
2020-10-30 | 4.93 | 4.92 | 4.93 | 0
Table 28: SH601766 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 5.56 | 5.56 | 5.56 | 0
2020-10-12 | 5.64 | 5.65 | 5.63 | -0.18
2020-10-13 | 5.62 | 5.61 | 5.61 | 0.18
2020-10-14 | 5.58 | 5.58 | 5.67 | 0
2020-10-15 | 5.56 | 5.55 | 5.57 | -0.18
2020-10-16 | 5.57 | 5.59 | 5.58 | 0.36
2020-10-19 | 5.59 | 5.57 | 5.46 | 0.36
2020-10-20 | 5.54 | 5.55 | 5.53 | -0.18
2020-10-21 | 5.52 | 5.60 | 5.52 | 0
2020-10-22 | 5.55 | 5.54 | 5.53 | 0.18
2020-10-23 | 5.56 | 5.56 | 5.55 | 0
2020-10-26 | 5.56 | 5.54 | 5.56 | 0
2020-10-27 | 5.52 | 5.52 | 5.53 | 0
2020-10-28 | 5.49 | 5.51 | 5.49 | 0
2020-10-29 | 5.48 | 5.46 | 5.46 | 0.36
2020-10-30 | 5.46 | 5.40 | 5.47 | -1.09
Table 29: SH601857 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 4.13 | 4.14 | 4.13 | 0
2020-10-12 | 4.16 | 4.17 | 4.23 | 0.24
2020-10-13 | 4.15 | 4.14 | 4.14 | 0.24
2020-10-14 | 4.11 | 4.11 | 4.07 | 0
2020-10-15 | 4.11 | 4.10 | 4.11 | 0
2020-10-16 | 4.11 | 4.12 | 4.12 | 0.24
2020-10-19 | 4.11 | 4.11 | 4.18 | 0
2020-10-20 | 4.09 | 4.09 | 4.09 | 0
2020-10-21 | 4.09 | 4.11 | 4.09 | 0
2020-10-22 | 4.09 | 4.09 | 4.09 | 0
2020-10-23 | 4.12 | 4.13 | 4.14 | 0.24
2020-10-26 | 4.09 | 4.11 | 4.10 | 0.49
2020-10-27 | 4.10 | 4.09 | 4.09 | 0.24
2020-10-28 | 4.09 | 4.09 | 4.08 | 0
2020-10-29 | 4.06 | 4.07 | 4.06 | 0
2020-10-30 | 4.10 | 4.08 | 4.10 | 0
Table 30: SH601888 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 201.10 | 204.84 | 198.49 | -1.689
2020-10-12 | 208.30 | 211.60 | 215.31 | 1.48
2020-10-13 | 209.90 | 214.79 | 209.15 | -2.19
2020-10-14 | 208.11 | 209.77 | 208.28 | 0
2020-10-15 | 202.91 | 202.48 | 202.72 | 0
2020-10-16 | 202.72 | 203.40 | 202.91 | 0
2020-10-19 | 200.01 | 199.99 | 200.40 | -0.01
2020-10-20 | 200.59 | 201.16 | 198.40 | -0.26
2020-10-21 | 195.71 | 195.47 | 195.37 | 0.11
2020-10-22 | 191.53 | 195.93 | 192.37 | 1.97
2020-10-23 | 193.41 | 194.30 | 189.56 | -0.4
2020-10-26 | 185.81 | 184.07 | 186.45 | -0.78
2020-10-27 | 184.76 | 184.74 | 184.15 | 0.01
2020-10-28 | 197.18 | 199.79 | 197.33 | 0
2020-10-29 | 202.81 | 201.80 | 210.94 | -0.45
2020-10-30 | 203.34 | 200.05 | 203.04 | 1.48
Table 31: SH601899 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 6.24 | 6.21 | 6.23 | 0.49
2020-10-12 | 6.54 | 6.56 | 6.77 | 0.33
2020-10-13 | 6.50 | 6.59 | 6.49 | -1.46
2020-10-14 | 6.52 | 6.49 | 6.51 | 0.49
2020-10-15 | 6.57 | 6.56 | 6.58 | 0
2020-10-16 | 6.62 | 6.60 | 6.65 | -0.33
2020-10-19 | 6.61 | 6.67 | 6.64 | 0.98
2020-10-20 | 6.72 | 6.77 | 6.68 | -0.81
2020-10-21 | 6.91 | 6.95 | 6.98 | 0.65
2020-10-22 | 6.91 | 6.96 | 6.90 | 0
2020-10-23 | 6.96 | 6.87 | 6.97 | -1.46
2020-10-26 | 7.00 | 7.03 | 7.03 | 0.49
2020-10-27 | 7.01 | 7.06 | 7.00 | 0
2020-10-28 | 7.06 | 7.16 | 7.06 | 0
2020-10-29 | 7.05 | 7.02 | 7.17 | -0.49
2020-10-30 | 7.16 | 6.98 | 7.16 | 0
Table 32: SH601939 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 6.14 | 6.13 | 6.14 | 0
2020-10-12 | 6.18 | 6.23 | 6.17 | -0.81
2020-10-13 | 6.19 | 6.17 | 6.19 | 0
2020-10-14 | 6.17 | 6.16 | 6.16 | 0
2020-10-15 | 6.23 | 6.22 | 6.23 | 0
2020-10-16 | 6.40 | 6.40 | 6.38 | 0
2020-10-19 | 6.57 | 6.51 | 6.64 | -0.98
2020-10-20 | 6.34 | 6.35 | 6.10 | -0.16
2020-10-21 | 6.38 | 6.43 | 6.38 | 0
2020-10-22 | 6.42 | 6.46 | 6.42 | 0
2020-10-23 | 6.50 | 6.49 | 6.51 | -0.16
2020-10-26 | 6.44 | 6.45 | 6.43 | -0.16
2020-10-27 | 6.39 | 6.38 | 6.40 | -0.16
2020-10-28 | 6.30 | 6.28 | 6.31 | -0.33
2020-10-29 | 6.30 | 6.30 | 6.28 | 0
2020-10-30 | 6.33 | 6.27 | 6.28 | 0.98
Table 33: SH603288 daily return Date | Price at 11:20 | Real price at 14:50 | Predicted price at 14:50 | Daily return (%)
---|---|---|---|---
2020-10-9 | 164.56 | 162.62 | 163.56 | 1.20
2020-10-12 | 171.30 | 172.28 | 182.68 | 0.60
2020-10-13 | 173.20 | 172.03 | 176.88 | -0.72
2020-10-14 | 171.25 | 171.80 | 171.43 | 0.34
2020-10-15 | 169.36 | 170.87 | 168.93 | -0.93
2020-10-16 | 168.27 | 168.41 | 168.82 | 0.09
2020-10-19 | 164.46 | 163.13 | 163.12 | 0.82
2020-10-20 | 167.71 | 169.59 | 172.16 | 1.16
2020-10-21 | 168.60 | 168.99 | 169.20 | 0.24
2020-10-22 | 167.18 | 168.20 | 167.24 | 0
2020-10-23 | 166.50 | 164.96 | 166.40 | 0
2020-10-26 | 161.49 | 162.00 | 161.92 | 0.31
2020-10-27 | 163.84 | 164.23 | 163.63 | -0.24
2020-10-28 | 165.89 | 166.00 | 167.59 | 0.07
2020-10-29 | 167.90 | 168.02 | 167.59 | -0.07
2020-10-30 | 162.45 | 159.45 | 157.43 | 1.85
# Hints of a new leptophilic Higgs sector?
Yoav Afik1,2, P. S. Bhupal Dev3, Anil Thapa4
1Enrico Fermi Institute, University of Chicago, Chicago, Illinois 60637, USA
2Experimental Physics Department, CERN, 1211 Geneva, Switzerland
3Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, Missouri 63130, USA
4Department of Physics, University of Virginia, Charlottesville, Virginia 22904-4714, USA
###### Abstract
We show that a new leptophilic Higgs sector can resolve some intriguing
anomalies in current experimental data across multiple energy ranges.
Motivated by the recent CMS excess in the resonant $e\mu$ channel at 146 GeV,
we consider a leptophilic two-Higgs-doublet model, and propose a novel
resonant production mechanism for the neutral components of the second Higgs
doublet at the LHC using the lepton content of the proton. Interestingly, the
same Yukawa coupling $Y_{e\mu}\sim 0.65-0.81$ that explains the CMS excess
also addresses the muon $(g-2)$ anomaly. Moreover, the new Higgs doublet also
resolves the recent CDF $W$-boson mass anomaly. The relevant model parameter
space will be completely probed by future LHC data.
## I Introduction
Using the Higgs boson as the keystone for new physics searches is well-
motivated [1], as an extended Higgs sector could potentially address some of
the pressing issues plaguing the Standard Model (SM), including the gauge
hierarchy problem, stability of the electroweak vacuum, mechanism of
electroweak symmetry breaking, origin of the fermion masses and mixing,
matter-antimatter asymmetry, and the nature of dark matter. Therefore, even
though the measured properties of the 125-GeV Higgs boson discovered at the
LHC [2, 3] are thus far consistent with the SM expectations [4, 5], further
precision Higgs studies, as well as direct searches for additional Higgs
bosons, must continue.
An interesting aspect of beyond-the-SM (BSM) physics is lepton flavor
violation (LFV), which is forbidden in the SM by an accidental global
symmetry. In fact, the observation of neutrino oscillations [6, 7, 8, 9, 10]
necessarily implies LFV. However, despite intense experimental efforts, no
corresponding LFV in the charged lepton sector has been observed [11].
Therefore, alternative searches for LFV involving exotic Higgs decays ($h\to
e\mu,e\tau,\mu\tau$) could be powerful probes of BSM physics [12, 13, 14, 15,
16, 17, 18]. Both ATLAS and CMS Collaborations have performed such LFV Higgs
searches with the $\sqrt{s}=13$ TeV LHC Run-2 data [19, 20, 21, 22, 23].
Although no evidence for LFV decays of the 125 GeV Higgs boson was found, CMS
has reported an intriguing $3.8\sigma$ local ($2.8\sigma$ global) excess in
the resonant $e\mu$ search around 146 GeV, with a preferred cross section of
$\sigma(pp\to H\to e\mu)=3.89^{+1.25}_{-1.13}$ fb [23]. If confirmed, this
would be a clear sign of BSM physics. In this letter, we take the CMS $e\mu$
excess at face value and provide the simplest possible interpretation in terms
of leptophilic neutral scalars within a two-Higgs-doublet model (2HDM). In
this context, we propose a novel resonant production channel for the
leptophilic neutral (pseudo)scalars at the LHC using the lepton parton
distribution function (PDF) of the proton [24, 25, 26, 27]; see Fig. 1. We
show that this scenario can explain the CMS excess with a Yukawa coupling
$Y_{e\mu}\sim 0.55-0.81$, while being consistent with all existing
constraints.
Another interesting feature of our solution is its intimate connection to two
other outstanding anomalies in current experimental data, namely, the
$(g-2)_{\mu}$ anomaly [28, 29, 30] and the CDF $W$-mass anomaly [31]. We
emphasize that probing a leptophilic light Higgs sector at the energy and
intensity frontiers is a worthwhile study in its own right, irrespective of the
future status of these anomalies.
## II Model Setup
Figure 1: A representative Feynman diagram for resonant production of
leptophilic scalar fields at hadron colliders through lepton PDF.
Here we propose an economical scenario with a leptophilic 2HDM to explain the
CMS excess. We work in the Higgs basis [32], where only one neutral Higgs
acquires a nonzero vacuum expectation value, $v$. In this basis, the scalar
fields can be parameterized as
$H_{1}=\left(\begin{array}{c}G^{+}\\ \frac{1}{\sqrt{2}}\left(v+H_{1}^{0}+iG^{0}\right)\end{array}\right),\ H_{2}=\left(\begin{array}{c}H^{+}\\ \frac{1}{\sqrt{2}}\left(H_{2}^{0}+iA\right)\end{array}\right),$ (1)
where $(G^{+},G^{0})$ are the Goldstone modes, eaten up by $W$ and $Z$ after
electroweak symmetry breaking, $(H^{0}_{1},H^{0}_{2})$ and $A$ are the neutral
$\mathcal{CP}$-even and $\mathcal{CP}$-odd scalars respectively, and $H^{+}$
is a charged scalar field. In the alignment/decoupling limit [33, 34, 35, 36],
we identify $H_{1}^{0}\equiv h$ as the observed $125$ GeV SM-like Higgs boson,
whereas the $H_{2}$-sector does not couple to the SM gauge bosons. This is in
agreement with the LHC data [37, 38, 39]. We assume the mixing angle $\theta$
between the $\mathcal{CP}$-even scalar $H_{2}^{0}\equiv H$ and the SM Higgs
boson is small, and the only relevant production mechanism for $H$ (and $A$)
at colliders is via its leptonic Yukawa interactions:
$-{\cal L}_{Y}\supset Y_{\alpha\beta}\bar{L}_{\alpha}H_{2}\ell_{\beta,R}+{\rm H.c.}$ (2)
For either $Y_{e\mu}\neq 0$ or $Y_{\mu e}\neq 0$, with all other
$Y_{\alpha\beta}$ involving electrons or muons assumed to be small, the
dominant contribution to the $pp\to H/A\to e\mu$ signal comes from the
$s$-channel Feynman diagram shown in Fig. 1, where the $H/A$ is produced
resonantly using the lepton PDF of the proton, and then decays to
$e^{\mp}\mu^{\pm}$ final states with a branching ratio (BR) determined by the
structure of the Yukawa coupling matrix $Y$ in Eq. (2). There is a sub-
dominant contribution to the same final state from a $t$-channel exchange of
$H/A$, not shown in Fig. 1, but included in our calculation.
We estimate the signal cross section numerically using MadGraph5_aMC@NLO [40]
at leading order (LO) parton-level with the LUXlep-NNPDF31 PDF (82400) [41,
42, 43, 25]. The default MadGraph5 cuts are applied at parton-level, and the
default LO dynamical scale is used, which is the transverse mass calculated by
a $k_{t}$-clustering of the final-state partons [44]. The cross section result
including both $H$ and $A$ contributions is shown by the blue curve in Fig. 2
left panel as a function of $|Y_{e\mu}|$ (also applicable for $|Y_{\mu e}|$)
for $m_{H/A}=146$ GeV and assuming ${\rm BR}(H/A\to e\mu)=70\%$ (explained
below), where the thickness accounts for the theory uncertainty due to scale
(${}^{+39.4\%}_{-30.3\%}$) and PDF ($\pm 4.5\%$) variation. The horizontal
green (yellow) shaded region explains the CMS excess at $1\sigma$ ($2\sigma$).
The corresponding ATLAS search [19] is not directly comparable with the CMS
analysis, but a back-of-the-envelope calculation from the sideband data mildly
disfavors a narrow-width excess at 146 GeV, and a rough scaling of background
gives a ballpark upper limit of about 3.0 fb on the cross section [45], as
shown by the horizontal dashed line in Fig. 2. We find that $Y_{e\mu}\sim
0.55-0.81$ can explain the CMS excess at $2\sigma$. For such values of the
leptonic Yukawa coupling, any quark Yukawa couplings of the second Higgs
doublet $H_{2}$ must be small; otherwise, they would be ruled out by
chirality-enhanced meson decays, such as $\pi^{+}\to e^{+}\nu$. Thus our
proposal is different from other scalar interpretations of the CMS excess [46,
47], which used quark couplings to enhance the production cross section.
Figure 2: Left: Total $e\mu$ production cross section from $H/A$ (blue band)
at $\sqrt{s}=13$ TeV LHC as a function of the Yukawa coupling $Y_{e\mu}$ (or
$Y_{\mu e}$) in our leptophilic 2HDM with $m_{H}\simeq m_{A}=146$ GeV. Right:
Same as left panel but in the $Y_{e\mu}-Y_{\mu e}$ plane. See text for
details.
## III Constraints
The large $Y_{e\mu/\mu e}$ couplings of the neutral components, as well as the
charged component, of the leptophilic Higgs doublet, are subject to a number
of other constraints, and also give rise to other interesting phenomena, as
discussed below.
### III.1 Neutral sector
Even if we choose only the off-diagonal entries $Y_{e\mu/\mu e}\neq 0$, small
diagonal entries $Y_{\ell\ell}\sim\sin\theta y_{\ell}$ (with $\ell=e,\mu$)
will be induced via the $h-H$ mixing and the SM Yukawa couplings
$y_{\ell}\equiv\sqrt{2}m_{\ell}/v$ (with $y_{\mu}\simeq 6\times 10^{-4}$ and
$y_{e}\simeq 3\times 10^{-6}$). But the products $Y_{e\mu}Y_{ee}$ and
$Y_{e\mu}Y_{\mu\mu}$ are subject to strong LFV constraints [48]. Using the
general LFV formula [49] and the current MEG limit on $\mu\to e\gamma$ [50],
we require $Y_{ee}\lesssim 9\times 10^{-5}$ and $Y_{\mu\mu}\lesssim 6\times
10^{-5}$, which gives an upper limit of $\sin\theta\lesssim 0.1$ on the Higgs
mixing.
The same $Y_{e\mu\ (\mu e)}$ coupling gives an additional contribution to the
$e^{+}e^{-}\to\mu^{+}\mu^{-}$ cross section via $t$-channel $H/A$ exchange,
and therefore, is constrained by LEP measurements, which are in good agreement
with the SM prediction [51, 52]. Naively, the contact interaction bounds from
LEP data would kill the parameter space for ${\cal O}(1)$ Yukawa couplings
[48]. However, this bound is not directly applicable, if neutral scalars are
lighter than the LEP center-of-mass energy $\sqrt{s}=209$ GeV. A dedicated
analysis [53] comparing the 2HDM cross section, which includes the
interference between the $H/A$-mediated diagrams with the SM processes,
against the LEP dimuon data imposes the constraint $Y_{e\mu}<0.8$, thus ruling
out the parameter space shown by the brown-shaded region in Fig. 2. The same
bounds are also applicable to the $Y_{\mu e}$ coupling; see Fig. 4 for
different masses. The LEP limit can be significantly improved at future lepton
colliders, such as the $\sqrt{s}=1$ TeV ILC [54] with integrated luminosity
$L=500~{}{\rm~{}fb}^{-1}$ (cf. the dashed curve in Fig. 4), which can probe
$Y_{e\mu}$ (or $Y_{\mu e}$) up to 0.1 [53, 55, 56].
As for the hadron collider constraints on light neutral scalars, most of the
Tevatron/LHC searches are done in the context of either MSSM or general 2HDM,
and rely on the gluon fusion or vector boson fusion production mechanisms.
None of these searches are applicable for us, because the leptophilic $H/A$
does not directly couple to the quarks, and in the alignment limit ($\theta\to
0$), also does not couple to the $W/Z$ bosons. This also suppresses other
production channels like pair-production of $HA$.
The most important constraint on the neutral scalar sector comes from low-
energy process of muonium ($M_{\mu}=e^{-}\mu^{+}$)-antimuonium
($\overline{M}_{\mu}=e^{+}\mu^{-}$) oscillation [57, 58, 59, 60]. The MACS
experiment at PSI puts an upper bound on the oscillation probability
$P(M_{\mu}\leftrightarrow\overline{M}_{\mu})<8.2\times 10^{-11}$ at $90\%$ CL
[61], while a sensitivity at the level of $\mathcal{O}(10^{-14})$ is expected
at the proposed MACE experiment [62]. In our 2HDM setup, the oscillation
probability receives contributions from both $H$ and $A$ [63, 60]; see Appendix A.
If $H$ and $A$ are highly non-degenerate, i.e. only either $H$ or $A$
dominantly contributes, the MACS bound requires $Y_{e\mu}<0.18$ for
$m_{H/A}=146$ GeV, as shown (for illustration only) by the vertical purple
line in Fig. 2 left panel, which rules out the LFV coupling needed to explain
the CMS excess with a single scalar/pseudoscalar. However, for $m_{H}\simeq
m_{A}$, there is a cancellation in the
$M_{\mu}\leftrightarrow\overline{M}_{\mu}$ amplitude which allows for either
$Y_{e\mu}$ or $Y_{\mu e}$ to be large, but not both. This is depicted by the
gray-shaded region in Fig. 2 right panel for $m_{H}\simeq m_{A}=146$ GeV. In
this limit, even the future MACE sensitivity cannot rule out the CMS excess
region.
Thus far, it seems that either the $Y_{e\mu}$ or the $Y_{\mu e}$ coupling can be
taken to be large to explain the CMS excess, while remaining consistent with the
current constraints. However, as discussed below, a combination of the LHC
charged Higgs constraints and the global fit to non-standard neutrino
interactions (NSI) precludes the possibility of a large $Y_{\mu e}$ coupling,
as shown by the horizontal purple-shaded region in Fig. 2 right panel.
Therefore, the only viable possibility is to have a large $Y_{e\mu}$ coupling
and small $Y_{\mu e}$ coupling (the lower right band of the CMS excess region
in Fig. 2 right panel).
### III.2 Charged sector
At LEP, $H^{\pm}$ can be pair produced either through the $s$-channel Drell-Yan
process via $\gamma/Z$, or through $t$-channel light-neutrino exchange. It can also be
singly produced either in association with a $W$ boson or through the Drell-
Yan channel in association with the leptons [48]. Once produced, the charged
scalar decays into $\nu_{\alpha}\ell_{\beta,R}$ through the Yukawa coupling
$Y_{\alpha\beta}$, which has the same signature as the right-handed slepton
decay into lepton plus massless neutralino in SUSY models:
$e^{+}e^{-}\to\tilde{\ell}_{R}^{+}\tilde{\ell}_{R}^{-}\to\ell_{R}^{+}\tilde{\chi}^{0}\ell_{R}^{-}\tilde{\chi}^{0}$.
We can therefore reinterpret the LEP slepton searches [64, 65, 66, 67, 68] to
derive a bound on light charged scalars. Depending on the branching ratio
${\rm BR}(H^{+}\to\ell^{+}\nu)$, the LEP limit on the charged scalar mass varies
between 80 and 100 GeV [48].
Similarly at the LHC, a pair of charged scalars can be produced through
$s$-channel Drell-Yan process via $\gamma/Z$, followed by decays into
$\nu_{\alpha}\ell_{\beta,R}$. By reinterpreting the LHC searches for right-
handed sleptons, one can therefore put bounds on the charged scalar mass as a
function of BR in the massless neutralino limit. From an ATLAS analysis of the
LHC Run-2 data [69], we obtain a lower bound of $m_{H^{+}}>425$ GeV at 90% CL
for ${\rm BR}(H^{+}\to\mu^{+}\nu_{e})=1$. As we will see below, for
$m_{H}=m_{A}=146$ GeV, the charged Higgs boson cannot be too much heavier due
to the electroweak precision data (EWPD) constraints. Therefore, we would need
additional decay channels in order to make ${\rm
BR}(H^{+}\to\mu^{+}\nu_{e})<1$ and relax the LHC constraints.
## IV Resolving the $W$-boson mass anomaly
The mass splitting between the neutral and charged components of the
$SU(2)_{L}$ doublet $H_{2}$ breaks the custodial symmetry of the SM at the
loop level. The change in the relationship between the $W$ and $Z$ boson
masses can be used to accommodate the recent CDF $W$-mass anomaly, which
currently stands at $7\sigma$ [31]. This effect can be parameterized by the
oblique parameters $S$ and $T$ [70, 71], which modify the $W$ mass as [72]
$\displaystyle m_{W}\simeq m_{W}^{\rm
SM}\left[1-\frac{\alpha(S-2\cos^{2}{\theta_{w}}T)}{4(\cos^{2}{\theta_{w}}-\sin^{2}{\theta_{w}})}\right],$
(3)
where $\theta_{w}$ is the electroweak mixing angle. We incorporate the global
electroweak fit [73] with the new CDF data to show allowed ranges for the
scalar masses $(m_{A},m_{H^{+}})$ with the choice of $m_{H}=146$ GeV in Fig. 3
(blue band). While explaining the CDF $W$-mass shift, the model remains marginally
consistent with the PDG global fit [74], as can be seen from the red
region in Fig. 3. We find that the CDF anomaly prefers a significant splitting
between $m_{A}$ and $m_{H^{+}}$. For $m_{H}=m_{A}=146$ GeV, we require
$m_{H^{+}}\simeq$ 228–234 GeV to explain the CDF anomaly at $2\sigma$.
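As a rough cross-check of the size of this effect (not the full global fit of Ref. [73]), the sketch below evaluates the standard one-loop contribution of the second doublet to the $T$ parameter in the alignment limit, neglects $S$, and inserts the result into Eq. (3) for the benchmark masses quoted above; the simplifications are ours.

```python
import numpy as np

# Rough cross-check of the W-mass shift from the second doublet (alignment limit).
# Simplifications (ours, not the paper's fit): keep only the T parameter (S ~ 0) and
# use the standard one-loop formula for an extra scalar doublet.
v, alpha, sw2 = 246.22, 1.0 / 127.9, 0.231   # GeV, alpha(mZ), sin^2(theta_w)
mW_SM = 80.357                               # GeV, SM prediction
mH, mA, mHp = 146.0, 146.0, 230.0            # GeV, benchmark masses from the text

def F(x, y):
    """Oblique loop function; F(x, x) = 0."""
    return 0.0 if x == y else (x + y) / 2 - x * y / (x - y) * np.log(x / y)

T = (F(mHp**2, mH**2) + F(mHp**2, mA**2) - F(mH**2, mA**2)) / (16 * np.pi**2 * alpha * v**2)

cw2 = 1.0 - sw2
mW = mW_SM * (1 + alpha * cw2 * T / (2 * (cw2 - sw2)))   # Eq. (3) with S -> 0
print(f"Delta T ~ {T:.3f},  W-mass shift ~ {(mW - mW_SM) * 1e3:.0f} MeV")
# For these inputs this gives Delta T ~ 0.12 and a shift of roughly 55-60 MeV,
# i.e. the right order of magnitude for the CDF-preferred upward shift.
```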
Figure 3: $2\sigma$ allowed ranges from EWPD global fit for the charged and
neutral Higgs masses in the alignment limit of our 2HDM scenario.
To reconcile the CDF-preferred $m_{H^{+}}$ region with the LHC constraint
$m_{H^{+}}>425$ GeV, we reinterpret the slepton search limit as a function of
the charged Higgs mass and ${\rm BR}(H^{+}\to\mu^{+}\nu_{e})$, using the
publicly available cross section limits given as a function of the slepton
mass from the auxiliary material of Ref. [69], as well as from an earlier
ATLAS analysis [75]. We find that to lower the $m_{H^{+}}$ bound to $\sim 230$
GeV, as required by the CDF anomaly, we need ${\rm
BR}(H^{+}\to\mu^{+}\nu_{e})<0.7$ (0.95) according to the cross section limits
reported in Ref. [75] ([69]). We therefore fix ${\rm
BR}(H^{+}\to\mu^{+}\nu_{e})=0.7$ for our analysis of the CMS excess in Fig. 2.
For the purpose of our discussion here, we are agnostic about the detailed
structure of the Yukawa coupling matrix, which could account for the remaining
30% BR. Additional nonzero entries in the Yukawa matrix are viable, albeit
requiring potential adjustments to suppress LFV. One example texture that fits
our branching ratio requirement is $Y_{e\mu}=0.71$, $Y_{\tau\tau}=0.46$, and
all other Yukawa entries negligible. This choice does not lead to trilepton
LFV decays but does induce the radiative LFV decay $\mu\to e\gamma$ via a two-
loop process involving the tauon in the Barr-Zee diagram [76, 15]. However, it
is also important to consider other diagrams such as the two-loop Barr-Zee
diagram from the charged Higgs, which depends on the quartic coupling
$\lambda(H_{2}^{\dagger}H_{2})(H_{1}^{\dagger}H_{2})$, and depending on the
sign of $\lambda$, can destructively interfere with the tau-loop-induced
diagram. We find that the LFV constraints can be satisfied for the above
choice of Yukawa couplings for a relatively small quartic coupling of order
$\mathcal{O}(10^{-3})$.
We note here that instead of a large $Y_{e\mu}$ coupling, if we had allowed a
large $Y_{\mu e}$ coupling, it would imply the coupling of charged Higgs
$H^{-}$ to electrons and muon neutrinos. This leads to a $\nu_{\mu}-e$
coherent scattering in matter via $t$-channel exchange of the charged Higgs,
and hence, generates an NSI of the type $\varepsilon_{\mu\mu}=|Y_{\mu
e}|^{2}/(4\sqrt{2}G_{F}m_{H^{+}}^{2})$ [48]. From a recent global analysis of
NSI constraints, we get a 90% CL bound of $\varepsilon_{\mu\mu}<0.015$
[77]. (This limit is derived from the bound on
$\varepsilon_{\tau\tau}-\varepsilon_{\mu\mu}$ [77] (see also Ref. [78]), which
is stronger than the individual bound on $\varepsilon_{\mu\mu}$; in our model,
$\varepsilon_{\mu\mu}$ and $\varepsilon_{\tau\tau}$ cannot be
simultaneously large due to strong charged LFV constraints, so the
bound on $\varepsilon_{\tau\tau}-\varepsilon_{\mu\mu}$ also applies to
$\varepsilon_{\mu\mu}$.) For $m_{H^{+}}\sim 230$ GeV, this gives an upper bound
of $Y_{\mu e}\simeq 0.23$, which is shown by the purple-shaded region in Fig.
2 right panel.
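For concreteness, the short check below inverts the $\varepsilon_{\mu\mu}$ expression for the maximal coupling, using only the numbers already quoted above.

```python
import numpy as np

# Numerical check of the quoted NSI bound on Y_mue (all inputs taken from the text).
GF = 1.1664e-5           # GeV^-2, Fermi constant
mHp = 230.0              # GeV, charged Higgs mass preferred by the CDF fit
eps_max = 0.015          # 90% CL bound on eps_mumu from the global NSI fit [77]

# eps_mumu = |Y_mue|^2 / (4*sqrt(2)*GF*mHp^2), inverted for the maximal coupling
Ymue_max = np.sqrt(eps_max * 4 * np.sqrt(2) * GF * mHp**2)
print(f"|Y_mue| < {Ymue_max:.2f}")   # ~0.23, as quoted in the text
```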
## V Muon Anomalous Magnetic Moment
The same $Y_{e\mu}$ coupling also contributes to the $(g-2)_{\mu}$ via the
neutral and charged Higgs loops [79, 80]; see Appendix B. The combined result
of the Brookhaven [28] and Fermilab [29] $(g-2)_{\mu}$ experiments is
$4.2\sigma$ away from the 2020 global average of the SM prediction [81]:
$\Delta a_{\mu}({\rm WP})=(251\pm 59)\times 10^{-11}$. (This was recently
updated to $\Delta a_{\mu}({\rm WP})=(249\pm 48)\times 10^{-11}$ [30], but
there is no noticeable change in our results.) This discrepancy is however
reduced to only $1.5\sigma$ if we use the ab-initio lattice calculation from
the BMW collaboration [82] (other lattice calculations now agree with the BMW
result in the “intermediate distance regime” [83, 84, 85, 86], but a more
thorough and complete analysis is ongoing), which gives $\Delta a_{\mu}({\rm
BMW})=(107\pm 70)\times 10^{-11}$ [87]. The extra contribution from the
neutral Higgs sector in our 2HDM scenario can explain the $(g-2)_{\mu}$
anomaly at $1\sigma$, as shown by the red (orange) shaded region in Fig. 2,
using the BMW (WP) value for the SM prediction. We find that the $1\sigma$ WP-
preferred region is excluded by LEP constraint on $Y_{e\mu}$ for $m_{H}\simeq
m_{A}=146$ GeV, whereas part of the $1\sigma$ BMW-preferred region is still
allowed, while simultaneously explaining the CMS excess and the CDF $W$-mass
anomaly.
Fig. 4 shows the range of the $(g-2)_{\mu}$ anomaly-preferred region at
$1\sigma$ in the neutral Higgs mass-coupling plane. For comparison, the green
bar at 146 GeV shows the CMS excess region, whereas the purple shaded region
around it is the exclusion region derived from CMS data [23]. The gray-shaded
region shows the LEP exclusion from $e^{+}e^{-}\to\mu^{+}\mu^{-}$ data [53].
The magenta region is excluded at $2\sigma$ from the precision $Z$-width
measurements [74], because for $m_{H/A}<m_{Z}$, an additional decay mode
$Z\to\ell_{\alpha}^{+}\ell_{\beta}^{-}H/A\to 4\ell$ opens up. The vertical
cyan (blue) line is the indirect lower bound on the neutral Higgs mass,
derived using a combination of the electroweak precision constraint on the
mass splitting between the neutral and charged Higgs sectors using the CDF
(PDG) value of $m_{W}$, and the LEP lower limit of $\sim 100$ GeV on the
charged Higgs mass.
Figure 4: The CMS excess at $1\sigma$ (green) and 95% CL exclusion (purple) in
the mass-coupling plane, contrasted with the $1\sigma$ regions preferred by
$(g-2)_{\mu}$. Also shown are the constraints from LEP dilepton, $Z\to 4\ell$,
EWPD, and the future ILC and HL-LHC sensitivities.
From Fig. 4, we find that if we use the WP value for $g-2$, only a narrow band
around $m_{H/A}\simeq 25$ GeV can explain the $g-2$ anomaly at $1\sigma$. On
the other hand, if we use the BMW value, most of the parameter space for
$m_{H/A}>25$ GeV is currently allowed. Future sensitivity projections from HL-
LHC [88] and ILC [54] can cover most of the remaining allowed parameter space,
irrespective of the status of the CMS excess. In general, a dedicated neutral
scalar search in the LFV dilepton channels beyond 160 GeV could completely
probe the $(g-2)_{\mu}$-allowed region.
## VI Discussion and Conclusion
Both ATLAS and the CMS collaborations searched for new bosons decaying into
opposite-sign and different flavor light leptons ($e^{\pm}\mu^{\mp}$) [19,
23]. In the CMS analysis, machine-learning techniques are used to enhance the
sensitivity where an excess is observed. ATLAS, on the other hand, did not
perform such a dedicated, BDT-optimized resonance search, and did not
interpret the results for masses different from the SM value of
$\sim 125$ GeV. Therefore, naively, it could be that the CMS analysis is
sensitive to a signal hypothesis which was not reachable by ATLAS. Although a
similar excess at 146 GeV is disfavored by ATLAS at $1\sigma$ (as shown in our
Fig. 2) [45], this is only a ballpark estimate and not entirely conclusive; a
dedicated interpretation of the ATLAS results is required.
Both analyses generated signal samples with two mechanisms: gluon-fusion (ggH)
and vector-boson-fusion (VBF). The contribution of the ggH mechanism to the
total cross section is significantly higher [23], and therefore it has the
dominant effect on the results. In order to validate the use of the results by
simply comparing cross sections, we compared the kinematic distributions of
the leptons between the ggH mechanism and a direct production with leptons
from the proton, and found good agreement.
It is also interesting that CMS reported excesses in the diphoton [89] and
ditau [90] channels at 95 GeV, but only with $2.9\sigma$ local ($1.3\sigma$
global) and $2.6\sigma$ local ($2.3\sigma$ global) significances,
respectively. These can be accommodated with an extended Higgs sector [91, 92,
93, 94, 95], but a common explanation together with the 146 GeV $e\mu$ excess
seems difficult, and requires further investigation.
In conclusion, the leptophilic 2HDM provides the simplest explanation for the
CMS $e\mu$ excess at 146 GeV. It also simultaneously resolves the CDF $W$-mass
and the $(g-2)_{\mu}$ anomalies. A minimal extension of this 2HDM by a singlet
charged scalar leads to the Zee model of radiative neutrino mass generation
[96]. Should the CMS excess be confirmed, a detailed neutrino oscillation fit
(similar to what was done in Ref. [48]) with large $Y_{e\mu}$ entry could be
performed, which might also lead to concrete predictions in the neutrino
sector, including NSI, as well as for charged LFV decays.
###### Acknowledgements.
We thank Saurabh Nangia for valuable technical help with using lepton PDF. AT
thanks Julian Heeck for valuable discussion. BD thanks Ashutosh Kotwal and
David Toback for convincing him that the CDF $W$-mass anomaly should be taken
seriously. BD also thanks the Mitchell Institute at Texas A&M University for
local hospitality, where part of this work was done. BD is supported in part
by the US Department of Energy grant No. DE-SC 0017987 and by a URA VSP
fellowship. The work of AT is supported in part by the National Science
Foundation under Grant PHY-2210428. YA is supported by the National Science
Foundation under Grant No. PHY-2013010.
## Appendix A Muonium-antimuonium oscillation
The muonium-antimuonium oscillation probability in our 2HDM scenario is given
by [63, 60]
$P(M_{\mu}\to\overline{M}_{\mu})\simeq\frac{64\alpha^{6}m_{\rm
red}^{6}\tau_{\mu}^{2}}{\pi^{2}}G_{M\overline{M}}^{2}\,,$ (4)
where $\alpha$ is the fine-structure constant, $m_{\rm
red}=m_{e}m_{\mu}/(m_{e}+m_{\mu})$ is the reduced mass of the electron-muon
system, $\tau_{\mu}$ is the muon lifetime, and $G_{M\overline{M}}$ is the
Wilson coefficient which, in our 2HDM scenario, is given by [60]
$G_{M\overline{M}}^{2}\simeq
0.32\,\left|\frac{3G_{3}}{2}+\frac{G_{45}}{4}\right|^{2}+0.13\,\left|\frac{G_{45}}{4}-0.68\,G_{3}\right|^{2},$
(5)
with the following coefficients in the alignment limit:
$\displaystyle G_{45}$ $\displaystyle\equiv-\frac{Y_{e\mu}^{*2}+Y_{\mu
e}^{2}}{8\sqrt{2}}\left(\frac{1}{m_{H}^{2}}-\frac{1}{m_{A}^{2}}\right),$ (6)
$\displaystyle G_{3}$ $\displaystyle\equiv-\frac{Y_{e\mu}^{*}Y_{\mu
e}}{8\sqrt{2}}\left(\frac{1}{m_{H}^{2}}+\frac{1}{m_{A}^{2}}\right).$ (7)
We find that for $m_{H}\simeq m_{A}$, there is a cancellation in the $G_{45}$
amplitude (at the level of 6%), while the $G_{3}$ amplitude vanishes if we
consider only $Y_{e\mu}$ (or $Y_{\mu e}$).
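A rough numerical evaluation of Eqs. (4)-(7) illustrates both statements; the sketch below assumes that only $Y_{e\mu}$ is nonzero (so $G_{3}=0$) and is meant as an order-of-magnitude check, not a reproduction of the full MACS analysis.

```python
import numpy as np

# Order-of-magnitude evaluation of the muonium oscillation probability, Eqs. (4)-(7).
# Assumption (ours): only Y_emu is nonzero, so G_3 = 0.
alpha, me, mmu = 1.0 / 137.036, 0.000511, 0.105658   # GeV units
tau_mu = 2.197e-6 / 6.582e-25                        # muon lifetime in GeV^-1
m_red = me * mmu / (me + mmu)                        # reduced mass

def prob(Yemu, mH, mA):
    G3 = 0.0                                                       # Eq. (7) with Y_mue = 0
    G45 = -Yemu**2 / (8 * np.sqrt(2)) * (1 / mH**2 - 1 / mA**2)    # Eq. (6)
    GMM2 = 0.32 * abs(1.5 * G3 + G45 / 4) ** 2 + 0.13 * abs(G45 / 4 - 0.68 * G3) ** 2
    return 64 * alpha**6 * m_red**6 * tau_mu**2 / np.pi**2 * GMM2  # Eq. (4)

print(f"single scalar (A decoupled): P ~ {prob(0.18, 146.0, 1e6):.1e}  (MACS: 8.2e-11)")
print(f"degenerate mH = mA:          P ~ {prob(0.18, 146.0, 146.0):.1e}  (cancellation)")
# A coupling Y_emu ~ 0.18 gives a probability at the level of the MACS bound when only
# one scalar contributes, while for exactly degenerate masses the G_45 term cancels.
```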
## Appendix B Lepton anomalous magnetic moment
The expression for one-loop contribution of neutral and charged scalars to
$(g-2)_{\mu}$ is given by
$\displaystyle\Delta a_{\mu}$
$\displaystyle\simeq\frac{m_{\mu}^{2}}{16\pi^{2}}\left[\frac{1}{m_{H}^{2}}\left\\{\frac{|Y_{e\mu}|^{2}+|Y_{\mu
e}|^{2}}{6}-2\frac{m_{e}}{m_{\mu}}\left(\frac{3}{4}+\log\left(\frac{m_{e}}{m_{H}}\right)\right)\Re(Y_{\mu
e}Y_{e\mu})\right\\}\right.$
$\displaystyle\left.\qquad\qquad+\frac{1}{m_{A}^{2}}\left\\{\frac{|Y_{e\mu}|^{2}+|Y_{\mu
e}|^{2}}{6}+2\frac{m_{e}}{m_{\mu}}\left(\frac{3}{4}+\log\left(\frac{m_{e}}{m_{A}}\right)\right)\Re(Y_{\mu
e}Y_{e\mu})\right\\}\right.$
$\displaystyle\left.\qquad\qquad-\frac{1}{m_{H^{+}}^{2}}\frac{|Y_{e\mu}|^{2}}{6}\right].$
(8)
In the limit of $m_{H}\simeq m_{A}$, the terms proportional to $m_{e}m_{\mu}$
cancel. These terms also vanish in the limit of $Y_{\mu e}\to 0$, or if the
Yukawa couplings are real. For complex Yukawa couplings, there will be
additional strong constraints from electron electric dipole moment [97]. For
our scenario with small $Y_{\mu e}$, Eq. (8) reduces to the simple expression
$\displaystyle\Delta a_{\mu}$
$\displaystyle\simeq\frac{m_{\mu}^{2}|Y_{e\mu}|^{2}}{96\pi^{2}}\left(\frac{1}{m_{H}^{2}}+\frac{1}{m_{A}^{2}}-\frac{1}{m_{H^{+}}^{2}}\right).$
(9)
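A quick numerical reading of Eq. (9) for the benchmark masses of the main text illustrates the statements in Sec. V; the coupling values below are chosen for illustration only.

```python
import numpy as np

# Evaluation of the simplified expression Eq. (9) for the benchmark masses of the text.
mmu = 0.105658                      # GeV, muon mass
mH, mA, mHp = 146.0, 146.0, 230.0   # GeV

def delta_amu(Yemu):
    return mmu**2 * Yemu**2 / (96 * np.pi**2) * (1 / mH**2 + 1 / mA**2 - 1 / mHp**2)

for Y in (0.65, 0.8, 1.1):
    print(f"Y_emu = {Y:4.2f}  ->  Delta a_mu ~ {delta_amu(Y):.2e}")
# Y_emu ~ 0.65 lands near the lower edge of the 1-sigma BMW range (~3.7e-10), the LEP
# limit Y_emu < 0.8 reaches ~5.7e-10, and the BMW central value (~1.1e-9) would need
# Y_emu ~ 1.1, beyond the LEP bound -- consistent with Fig. 2.
```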
The same Yukawa coupling $Y_{e\mu}$ also contributes to $(g-2)_{e}$, and
$\Delta a_{e}$ is given by Eq. (9) with the replacement
$m_{\mu}\leftrightarrow m_{e}$. Due to the $m_{e}^{2}$ suppression, the
corresponding bound on $Y_{e\mu}$ is much weaker. Moreover, it is not clear
whether the $(g-2)_{e}$ result is anomalous. Although the experimental value
of $a_{e}$ has been measured very precisely [98], the SM prediction [99]
relies on the measurement of the fine-structure constant, and currently there
is a $5.5\sigma$ discrepancy between the Paris Rb determination of $\alpha$
[100] and the Berkeley Cs determination [101]. The recent Northwestern result
sits in between [98]. Until the discrepant $\alpha$ measurements are resolved,
we cannot draw any meaningful constraints from $(g-2)_{e}$.
## References
* [1] S. Dawson et al., “Report of the Topical Group on Higgs Physics for Snowmass 2021: The Case for Precision Higgs Physics,” in Snowmass 2021. 9, 2022. [2209.07510].
* [2] ATLAS Collaboration, G. Aad et al., “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Phys. Lett. B 716 (2012) 1–29, [1207.7214].
* [3] CMS Collaboration, S. Chatrchyan et al., “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC,” Phys. Lett. B 716 (2012) 30–61, [1207.7235].
* [4] CMS Collaboration, A. Tumasyan et al., “A portrait of the Higgs boson by the CMS experiment ten years after the discovery,” Nature 607 no. 7917, (2022) 60–68, [2207.00043].
* [5] ATLAS Collaboration, “A detailed map of Higgs boson interactions by the ATLAS experiment ten years after the discovery,” Nature 607 no. 7917, (2022) 52–59, [2207.00092]. [Erratum: Nature 612, E24 (2022)].
* [6] Super-Kamiokande Collaboration, Y. Fukuda et al., “Evidence for oscillation of atmospheric neutrinos,” Phys. Rev. Lett. 81 (1998) 1562–1567, [hep-ex/9807003].
* [7] SNO Collaboration, Q. R. Ahmad et al., “Measurement of the rate of $\nu_{e}+d\to p+p+e^{-}$ interactions produced by 8B solar neutrinos at the Sudbury Neutrino Observatory,” Phys. Rev. Lett. 87 (2001) 071301, [nucl-ex/0106015].
* [8] Double Chooz Collaboration, Y. Abe et al., “Indication of Reactor $\bar{\nu}_{e}$ Disappearance in the Double Chooz Experiment,” Phys. Rev. Lett. 108 (2012) 131801, [1112.6353].
* [9] Daya Bay Collaboration, F. P. An et al., “Observation of electron-antineutrino disappearance at Daya Bay,” Phys. Rev. Lett. 108 (2012) 171803, [1203.1669].
* [10] RENO Collaboration, J. K. Ahn et al., “Observation of Reactor Electron Antineutrino Disappearance in the RENO Experiment,” Phys. Rev. Lett. 108 (2012) 191802, [1204.0626].
* [11] L. Calibbi and G. Signorelli, “Charged Lepton Flavour Violation: An Experimental and Theoretical Introduction,” Riv. Nuovo Cim. 41 no. 2, (2018) 71–174, [1709.00294].
* [12] J. L. Diaz-Cruz and J. J. Toscano, “Lepton flavor violating decays of Higgs bosons beyond the standard model,” Phys. Rev. D 62 (2000) 116005, [hep-ph/9910233].
* [13] G. Blankenburg, J. Ellis, and G. Isidori, “Flavour-Changing Decays of a 125 GeV Higgs-like Particle,” Phys. Lett. B 712 (2012) 386–390, [1202.5704].
* [14] R. Harnik, J. Kopp, and J. Zupan, “Flavor Violating Higgs Decays,” JHEP 03 (2013) 026, [1209.1397].
* [15] A. Crivellin, J. Heeck, and P. Stoffer, “A perturbed lepton-specific two-Higgs-doublet model facing experimental hints for physics beyond the Standard Model,” Phys. Rev. Lett. 116 no. 8, (2016) 081801, [1507.07567].
* [16] J. Herrero-Garcia, N. Rius, and A. Santamaria, “Higgs lepton flavour violation: UV completions and connection to neutrino masses,” JHEP 11 (2016) 084, [1605.06091].
* [17] M. Endo, S. Iguro, and T. Kitahara, “Probing $e\mu$ flavor-violating ALP at Belle II,” JHEP 06 (2020) 040, [2002.05948].
* [18] R. K. Barman, P. S. B. Dev, and A. Thapa, “Constraining lepton flavor violating Higgs couplings at the HL-LHC in the vector boson fusion channel,” Phys. Rev. D 107 no. 7, (2023) 075018, [2210.16287].
* [19] ATLAS Collaboration, G. Aad et al., “Search for the Higgs boson decays $H\to ee$ and $H\to e\mu$ in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector,” Phys. Lett. B 801 (2020) 135148, [1909.10235].
* [20] CMS Collaboration, A. M. Sirunyan et al., “Search for lepton flavour violating decays of a neutral heavy Higgs boson to $\mu\tau$ and e$\tau$ in proton-proton collisions at $\sqrt{s}=$ 13 TeV,” JHEP 03 (2020) 103, [1911.10267].
* [21] CMS Collaboration, A. M. Sirunyan et al., “Search for lepton-flavor violating decays of the Higgs boson in the $\mu\tau$ and e$\tau$ final states in proton-proton collisions at $\sqrt{s}$ = 13 TeV,” Phys. Rev. D 104 no. 3, (2021) 032013, [2105.03007].
* [22] ATLAS Collaboration, G. Aad et al., “Searches for lepton-flavour-violating decays of the Higgs boson into $e\tau$ and $\mu\tau$ in $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector,” JHEP 07 (2023) 166, [2302.05225].
* [23] CMS Collaboration, A. Hayrapetyan et al., “Search for the lepton-flavor violating decay of the Higgs boson and additional Higgs bosons in the e$\mu$ final state in proton-proton collisions at $\sqrt{s}$ = 13 TeV,” Phys. Rev. D 108 no. 7, (2023) 072004, [2305.18106].
* [24] V. Bertone, S. Carrazza, D. Pagani, and M. Zaro, “On the Impact of Lepton PDFs,” JHEP 11 (2015) 194, [1508.07002].
* [25] L. Buonocore, P. Nason, F. Tramontano, and G. Zanderighi, “Leptons in the proton,” JHEP 08 no. 08, (2020) 019, [2005.06477].
* [26] L. Buonocore, P. Nason, F. Tramontano, and G. Zanderighi, “Photon and leptons induced processes at the LHC,” JHEP 12 (2021) 073, [2109.10924].
* [27] H. K. Dreiner, V. M. Lozano, S. Nangia, and T. Opferkuch, “Lepton PDFs and multipurpose single-lepton searches at the LHC,” Phys. Rev. D 107 no. 3, (2023) 035011, [2112.12755].
* [28] Muon g-2 Collaboration, G. W. Bennett et al., “Final Report of the Muon E821 Anomalous Magnetic Moment Measurement at BNL,” Phys. Rev. D 73 (2006) 072003, [hep-ex/0602035].
* [29] Muon g-2 Collaboration, B. Abi et al., “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm,” Phys. Rev. Lett. 126 no. 14, (2021) 141801, [2104.03281].
* [30] Muon g-2 Collaboration, D. P. Aguillard et al., “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.20 ppm,” Phys. Rev. Lett. 131 no. 16, (2023) 161802, [2308.06230].
* [31] CDF Collaboration, T. Aaltonen et al., “High-precision measurement of the $W$ boson mass with the CDF II detector,” Science 376 no. 6589, (2022) 170–176.
* [32] S. Davidson and H. E. Haber, “Basis-independent methods for the two-Higgs-doublet model,” Phys. Rev. D 72 (2005) 035004, [hep-ph/0504050]. [Erratum: Phys.Rev.D 72, 099902 (2005)].
* [33] J. F. Gunion and H. E. Haber, “The CP conserving two Higgs doublet model: The Approach to the decoupling limit,” Phys. Rev. D 67 (2003) 075019, [hep-ph/0207010].
* [34] M. Carena, I. Low, N. R. Shah, and C. E. M. Wagner, “Impersonating the Standard Model Higgs Boson: Alignment without Decoupling,” JHEP 04 (2014) 015, [1310.2248].
* [35] P. S. B. Dev and A. Pilaftsis, “Maximally Symmetric Two Higgs Doublet Model with Natural Standard Model Alignment,” JHEP 12 (2014) 024, [1408.3405]. [Erratum: JHEP 11, 147 (2015)].
* [36] D. Das and I. Saha, “Search for a stable alignment limit in two-Higgs-doublet models,” Phys. Rev. D 91 no. 9, (2015) 095024, [1503.02135].
* [37] J. Haller, A. Hoecker, R. Kogler, K. Mönig, T. Peiffer, and J. Stelzer, “Update of the global electroweak fit and constraints on two-Higgs-doublet models,” Eur. Phys. J. C 78 no. 8, (2018) 675, [1803.01853].
* [38] O. Eberhardt, A. P. n. Martínez, and A. Pich, “Global fits in the Aligned Two-Higgs-Doublet model,” JHEP 05 (2021) 005, [2012.09200].
* [39] A. Karan, V. Miralles, and A. Pich, “Updated global fit of the ATHDM with heavy scalars,” [2307.15419].
* [40] J. Alwall et al., “The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations,” JHEP 07 (2014) 079, [1405.0301].
* [41] A. Manohar, P. Nason, G. P. Salam, and G. Zanderighi, “How bright is the proton? A precise determination of the photon parton distribution function,” Phys. Rev. Lett. 117 no. 24, (2016) 242002, [1607.04266].
* [42] A. V. Manohar, P. Nason, G. P. Salam, and G. Zanderighi, “The Photon Content of the Proton,” JHEP 12 (2017) 046, [1708.01256].
* [43] NNPDF Collaboration, V. Bertone, S. Carrazza, N. P. Hartland, and J. Rojo, “Illuminating the photon content of the proton within a global PDF analysis,” SciPost Phys. 5 no. 1, (2018) 008, [1712.07053].
* [44] S. Catani, Y. L. Dokshitzer, M. H. Seymour, and B. R. Webber, “Longitudinally invariant $K_{t}$ clustering algorithms for hadron hadron collisions,” Nucl. Phys. B 406 (1993) 187–224.
* [45] K. Leney, “Searches in the BSM Higgs sector.” https://indico.in2p3.fr/event/29681/contributions/122562/attachments/76697/111308/02-KLeney-V1.pdf. 57th Rencontres de Moriond, Electroweak Interactions & Unified Theories, La Thuile, Italy (2023).
* [46] R. Primulando, J. Julio, N. Srimanobhas, and P. Uttayarat, “A new Higgs boson with electron-muon flavor-violating couplings,” Phys. Lett. B 845 (2023) 138129, [2304.13757].
* [47] N. Koivunen and M. Raidal, “Production and decays of 146 GeV flavons into e$\mu$ final state at the LHC,” JHEP 11 (2023) 014, [2305.00014].
* [48] K. S. Babu, P. S. B. Dev, S. Jana, and A. Thapa, “Non-Standard Interactions in Radiative Neutrino Mass Models,” JHEP 03 (2020) 006, [1907.09498].
* [49] L. Lavoura, “General formulae for $f_{1}\to f_{2}\gamma$,” Eur. Phys. J. C 29 (2003) 191–195, [hep-ph/0302221].
* [50] MEG Collaboration, A. M. Baldini et al., “Search for the lepton flavour violating decay $\mu^{+}\rightarrow\mathrm{e}^{+}\gamma$ with the full dataset of the MEG experiment,” Eur. Phys. J. C 76 no. 8, (2016) 434, [1605.05081].
* [51] OPAL Collaboration, G. Abbiendi et al., “Tests of the standard model and constraints on new physics from measurements of fermion pair production at 189-GeV to 209-GeV at LEP,” Eur. Phys. J. C 33 (2004) 173–212, [hep-ex/0309053].
* [52] LEP, ALEPH, DELPHI, L3, OPAL, LEP Electroweak Working Group, SLD Electroweak Group, SLD Heavy Flavor Group Collaboration, “A Combination of preliminary electroweak measurements and constraints on the standard model,” [hep-ex/0312023].
* [53] R. K. Barman, R. Dcruz, and A. Thapa, “Neutrino masses and magnetic moments of electron and muon in the Zee Model,” JHEP 03 (2022) 183, [2112.04523].
* [54] T. Barklow, J. Brau, K. Fujii, J. Gao, J. List, N. Walker, and K. Yokoya, “ILC Operating Scenarios,” [1506.07830].
* [55] P. S. B. Dev, R. N. Mohapatra, and Y. Zhang, “Lepton Flavor Violation Induced by a Neutral Scalar at Future Lepton Colliders,” Phys. Rev. Lett. 120 no. 22, (2018) 221804, [1711.08430].
* [56] P. S. B. Dev, R. N. Mohapatra, and Y. Zhang, “Probing TeV scale origin of neutrino mass at future lepton colliders via neutral and doubly-charged scalars,” Phys. Rev. D 98 no. 7, (2018) 075028, [1803.11167].
* [57] B. Pontecorvo, “Mesonium and anti-mesonium,” Sov. Phys. JETP 6 (1957) 429. http://jetp.ras.ru/cgi-bin/dn/e_006_02_0429.pdf.
* [58] U. D. Jentschura, G. Soff, V. G. Ivanov, and S. G. Karshenboim, “The Bound mu+ mu- system,” Phys. Rev. A 56 (1997) 4483, [physics/9706026].
* [59] T. E. Clark and S. T. Love, “Muonium - anti-muonium oscillations and massive Majorana neutrinos,” Mod. Phys. Lett. A 19 (2004) 297–306, [hep-ph/0307264].
* [60] T. Fukuyama, Y. Mimura, and Y. Uesaka, “Models of the muonium to antimuonium transition,” Phys. Rev. D 105 no. 1, (2022) 015026, [2108.10736].
* [61] L. Willmann et al., “New bounds from searching for muonium to anti-muonium conversion,” Phys. Rev. Lett. 82 (1999) 49–52, [hep-ex/9807011].
* [62] A.-Y. Bai et al., “Snowmass2021 Whitepaper: Muonium to antimuonium conversion,” in Snowmass 2021. 3, 2022. [2203.11406].
* [63] R. Conlin and A. A. Petrov, “Muonium-antimuonium oscillations in effective field theory,” Phys. Rev. D 102 no. 9, (2020) 095001, [2005.10276].
* [64] ALEPH Collaboration, A. Heister et al., “Search for scalar leptons in e+ e- collisions at center-of-mass energies up to 209-GeV,” Phys. Lett. B 526 (2002) 206–220, [hep-ex/0112011].
* [65] ALEPH Collaboration, A. Heister et al., “Absolute mass lower limit for the lightest neutralino of the MSSM from e+ e- data at $\sqrt{s}$ up to 209 GeV,” Phys. Lett. B 583 (2004) 247–263.
* [66] DELPHI Collaboration, J. Abdallah et al., “Searches for supersymmetric particles in e+ e- collisions up to 208-GeV and interpretation of the results within the MSSM,” Eur. Phys. J. C 31 (2003) 421–479, [hep-ex/0311019].
* [67] L3 Collaboration, P. Achard et al., “Search for scalar leptons and scalar quarks at LEP,” Phys. Lett. B 580 (2004) 37–49, [hep-ex/0310007].
* [68] OPAL Collaboration, G. Abbiendi et al., “Search for anomalous production of dilepton events with missing transverse momentum in e+ e- collisions at $\sqrt{s}$ = 183 GeV to 209 GeV,” Eur. Phys. J. C 32 (2004) 453–473, [hep-ex/0309014].
* [69] ATLAS Collaboration, G. Aad et al., “Search for electroweak production of charginos and sleptons decaying into final states with two leptons and missing transverse momentum in $\sqrt{s}=13$ TeV $pp$ collisions using the ATLAS detector,” Eur. Phys. J. C 80 no. 2, (2020) 123, [1908.08215].
* [70] M. E. Peskin and T. Takeuchi, “A New constraint on a strongly interacting Higgs sector,” Phys. Rev. Lett. 65 (1990) 964–967.
* [71] M. E. Peskin and T. Takeuchi, “Estimation of oblique electroweak corrections,” Phys. Rev. D 46 (1992) 381–409.
* [72] I. Maksymyk, C. P. Burgess, and D. London, “Beyond S, T and U,” Phys. Rev. D 50 (1994) 529–535, [hep-ph/9306267].
* [73] C.-T. Lu, L. Wu, Y. Wu, and B. Zhu, “Electroweak precision fit and new physics in light of the W boson mass,” Phys. Rev. D 106 no. 3, (2022) 035034, [2204.03796].
* [74] Particle Data Group Collaboration, R. L. Workman et al., “Review of Particle Physics,” PTEP 2022 (2022) 083C01.
* [75] ATLAS Collaboration, M. Aaboud et al., “Search for electroweak production of supersymmetric particles in final states with two or three leptons at $\sqrt{s}=13\,$TeV with the ATLAS detector,” Eur. Phys. J. C 78 no. 12, (2018) 995, [1803.02762].
* [76] V. Ilisie, “New Barr-Zee contributions to $\mathbf{(g-2)_{\mu}}$ in two-Higgs-doublet models,” JHEP 04 (2015) 077, [1502.04199].
* [77] P. Coloma, M. C. Gonzalez-Garcia, M. Maltoni, J. a. P. Pinheiro, and S. Urrea, “Global constraints on non-standard neutrino interactions with quarks and electrons,” JHEP 08 (2023) 032, [2305.07698].
* [78] IceCube Collaboration, R. Abbasi et al., “Non-standard neutrino interactions in IceCube,” PoS EPS-HEP2021 (2022) 245.
* [79] J. P. Leveille, “The Second Order Weak Correction to (G-2) of the Muon in Arbitrary Gauge Models,” Nucl. Phys. B 137 (1978) 63–76.
* [80] M. Lindner, M. Platscher, and F. S. Queiroz, “A Call for New Physics : The Muon Anomalous Magnetic Moment and Lepton Flavor Violation,” Phys. Rept. 731 (2018) 1–82, [1610.06587].
* [81] T. Aoyama et al., “The anomalous magnetic moment of the muon in the Standard Model,” Phys. Rept. 887 (2020) 1–166, [2006.04822].
* [82] S. Borsanyi et al., “Leading hadronic contribution to the muon magnetic moment from lattice QCD,” Nature 593 no. 7857, (2021) 51–55, [2002.12347].
* [83] M. Cè et al., “Window observable for the hadronic vacuum polarization contribution to the muon g-2 from lattice QCD,” Phys. Rev. D 106 no. 11, (2022) 114502, [2206.06582].
* [84] Extended Twisted Mass Collaboration, C. Alexandrou et al., “Lattice calculation of the short and intermediate time-distance hadronic vacuum polarization contributions to the muon magnetic moment using twisted-mass fermions,” Phys. Rev. D 107 no. 7, (2023) 074506, [2206.15084].
* [85] Fermilab Lattice, HPQCD,, MILC Collaboration, A. Bazavov et al., “Light-quark connected intermediate-window contributions to the muon g-2 hadronic vacuum polarization from lattice QCD,” Phys. Rev. D 107 no. 11, (2023) 114514, [2301.08274].
* [86] RBC, UKQCD Collaboration, T. Blum et al., “Update of Euclidean windows of the hadronic vacuum polarization,” Phys. Rev. D 108 no. 5, (2023) 054507, [2301.08696].
* [87] H. Wittig, “Progress on $(g-2)_{\mu}$ from Lattice QCD,” in 57th Rencontres de Moriond on Electroweak Interactions and Unified Theories. 6, 2023. [2306.04165].
* [88] I. Zurbano Fernandez et al., “High-Luminosity Large Hadron Collider (HL-LHC): Technical design report,” tech. rep., CERN, Geneva, 12, 2020.
* [89] CMS Collaboration, “Search for a standard model-like Higgs boson in the mass range between 70 and 110 GeV in the diphoton final state in proton-proton collisions at $\sqrt{s}=13$ TeV,” CMS-PAS-HIG-20-002.
* [90] CMS Collaboration, A. Tumasyan et al., “Searches for additional Higgs bosons and for vector leptoquarks in $\tau\tau$ final states in proton-proton collisions at $\sqrt{s}$ = 13 TeV,” JHEP 07 (2023) 073, [2208.02717].
* [91] T. Biekötter, S. Heinemeyer, and G. Weiglein, “The CMS di-photon excess at 95 GeV in view of the LHC Run 2 results,” Phys. Lett. B 846 (2023) 138217, [2303.12018].
* [92] D. Azevedo, T. Biekötter, and P. M. Ferreira, “2HDM interpretations of the CMS diphoton excess at 95 GeV,” JHEP 11 (2023) 017, [2305.19716].
* [93] P. Escribano, V. M. Lozano, and A. Vicente, “Scotogenic explanation for the 95 GeV excesses,” Phys. Rev. D 108 no. 11, (2023) 115001, [2306.03735].
* [94] T. Biekötter, S. Heinemeyer, and G. Weiglein, “The 95.4 GeV di-photon excess at ATLAS and CMS,” [2306.03889].
* [95] S. Bhattacharya, G. Coloretti, A. Crivellin, S.-E. Dahbi, Y. Fang, M. Kumar, and B. Mellado, “Growing Excesses of New Scalars at the Electroweak Scale,” [2306.17209].
* [96] A. Zee, “A Theory of Lepton Number Violation, Neutrino Majorana Mass, and Oscillation,” Phys. Lett. B 93 (1980) 389. [Erratum: Phys.Lett.B 95, 461 (1980)].
* [97] T. S. Roussy et al., “An improved bound on the electron’s electric dipole moment,” Science 381 (2023) 46, [2212.11841].
* [98] X. Fan, T. G. Myers, B. A. D. Sukra, and G. Gabrielse, “Measurement of the Electron Magnetic Moment,” Phys. Rev. Lett. 130 no. 7, (2023) 071801, [2209.13084].
* [99] T. Aoyama, T. Kinoshita, and M. Nio, “Theory of the Anomalous Magnetic Moment of the Electron,” Atoms 7 no. 1, (2019) 28.
* [100] L. Morel, Z. Yao, P. Cladé, and S. Guellati-Khélifa, “Determination of the fine-structure constant with an accuracy of 81 parts per trillion,” Nature 588 no. 7836, (2020) 61–65.
* [101] R. H. Parker, C. Yu, W. Zhong, B. Estey, and H. Müller, “Measurement of the fine-structure constant as a test of the Standard Model,” Science 360 (2018) 191, [1812.04130].
# Neural Networks Assisted Metropolis-Hastings for Bayesian Estimation of
Critical Exponent on Elliptic Black Hole Solution in 4D Using Quantum
Perturbation Theory
Armin <EMAIL_ADDRESS>, Department of Mathematics and Statistics,
Memorial University of Newfoundland, St. John’s, NL, Canada. Ehsan
<EMAIL_ADDRESS>. Roberto J. <EMAIL_ADDRESS>.
###### Abstract
It is well-known that the critical gravitational collapse produces continuous
self-similar solutions characterized by the Choptuik critical exponent,
$\gamma$. We examine the solutions in the domains of the linear perturbation
equations, considering the numerical measurement errors. Specifically, we
study quantum perturbation theory for the four-dimensional Einstein-axion-
dilaton system of the elliptic class of $\text{SL}(2,\mathbb{R})$
transformations. We develop a novel artificial neural network-assisted
Metropolis-Hastings algorithm based on quantum perturbation theory to find the
distribution of the critical exponent in a Bayesian framework. Unlike existing
methods, this new probabilistic approach identifies the available
deterministic solution and explores the range of physically distinguishable
critical exponents that may arise due to numerical measurement errors.
## 1 Introduction
Black holes are known to be the final state of gravitational collapse, and they
have a well-known characteristic property: they are entirely defined by their mass,
angular momentum, and charge. Choptuik [1] showed that there appears to be
another, fourth quantity that characterizes the collapse itself.
Following up on Christodoulou’s work on the spherically symmetric collapse of
scalar fields [2], Choptuik investigated the idea that critical behaviour can
exhibit discrete spacetime self-similarity. If $p$ denotes the amplitude of the
scalar-field fluctuation, a black hole forms only when $p$ exceeds a critical value
$p_{\text{crit}}$. Moreover, for values of $p$ above this
threshold, the mass of the black hole $M_{\text{bh}}$ obeys a scaling law
$r_{S}(p)\propto M_{\text{bh}}(p)\propto(p-p_{\text{crit}})^{\gamma}\,.$ (1)
The critical exponent in 4d for a single real scalar field is given by
$\gamma\simeq 0.37$ [1, 3, 4], while for general dimension ($d\geq 4$) [5, 6]
these relations are modified as
$r_{S}(p)\propto(p-p_{\text{crit}})^{\gamma}\,,\quad
M_{\text{bh}}(p)\sim(p-p_{\text{crit}})^{(d-3)\gamma}.$ (2)
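As an illustration of how $\gamma$ is extracted in practice, the sketch below fits the scaling law (1) to synthetic near-critical data; the data are made up here, whereas in actual studies $M_{\text{bh}}$ comes from a sequence of collapse simulations.

```python
import numpy as np

# Extracting the Choptuik exponent from the scaling law (1) via a log-log fit.
# The data are synthetic: we generate them with a known gamma and recover it.
rng = np.random.default_rng(2)
gamma_true, p_crit = 0.37, 1.0
p = p_crit + np.logspace(-6, -2, 30)                        # supercritical amplitudes
M_bh = (p - p_crit) ** gamma_true * np.exp(0.02 * rng.standard_normal(30))

slope, intercept = np.polyfit(np.log(p - p_crit), np.log(M_bh), 1)
print(f"fitted gamma ~ {slope:.3f}")                         # close to 0.37
```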
Various numerical simulations with other matter fields have been carried out [7, 8, 9, 10,
11, 12]; for instance, the collapse of a perfect fluid was studied in [13, 14, 5,
15]. The value $\gamma\simeq 0.36$ found in [14] led to the suggestion
in [16] that $\gamma$ might be universal for all matter fields coupled to
gravity in four dimensions. It was first shown in [5, 15, 17] that
the critical exponent can be obtained by working out the perturbations of the
solutions. To this end, one perturbs any field $h$
(be it the metric or the matter content) as
$h=h_{0}+\varepsilon\,h_{-\kappa},$ (3)
where the perturbation $h_{-\kappa}$ carries a scaling exponent
$-\kappa\in\mathbb{C}$ that labels the different modes. Among the possible
values of $\kappa$, the most relevant mode $\kappa^{*}$ is the one with the
largest value of $\real(\kappa)$.
It was argued in [5, 15, 17] that $\kappa^{*}$ is related to the critical
exponent by $\gamma=\frac{1}{\real\kappa^{*}}$. Axial symmetry was
analyzed in [18], and the critical solution for shock waves was
investigated in [19].
The axion-dilaton critical collapse solution coupled to
gravity in four dimensions was first determined in [20], where the value
$\gamma\simeq 0.2641$ was found, thereby casting doubt on the universality
of $\gamma$ in four dimensions. The first motivation for studying critical
solutions in the axion-dilaton system is the AdS/CFT correspondence [21],
which relates the Choptuik exponent to the imaginary part of quasinormal
modes, and the dual conformal field theory [22]. Additional motivations
include the holographic description of black hole formation [6] and the
broader physics of black holes and their applications to holography and string
theory [23, 24].
In type IIB string theory, there is significant interest in exploring
gravitational collapse in spaces that can asymptotically approach
$AdS_{5}\times S^{5}$ where the matter content is described by the axion-
dilaton system and the self-dual 5-form field. The paper [25] recently
investigated entire families of distinguishable continuous self-similar
solutions of the Einstein-axion-dilaton system in four and five dimensions,
including all three conjugacy classes of $\mathrm{SL}(2,\mathbb{R})$. This
study builds upon the works [26, 27]. By applying the advanced analytic and
numerical methods described in [28], the perturbed critical solution of a
four-dimensional elliptic critical collapse was computed and the
value of $\gamma$ was determined to be around 0.2641, as previously reported in [20]. The
findings provide strong confidence in our ability to determine other critical
exponents across different dimensions and for various classes of solutions.
The study by [29] utilized regression models to estimate nonlinear critical
functions. Subsequently, [30] introduced several methods, including truncated
power basis, natural spline, and penalized B-spline regression models, to
estimate the nonlinear functions relevant to black hole physics, specifically
in the axion-dilaton case. Recently, in [31], artificial neural networks were
applied to find black hole solutions within the parabolic class in higher
dimensions. Additionally, in [32], the Hamiltonian Monte Carlo method was
proposed to analyze the complexity of elliptic black holes within a Bayesian
framework. Lastly, [33] employed the sequential Monte Carlo approach to
investigate the multimodal posterior distributions of critical functions in
hyperbolic equations. This probabilistic approach helps identify existing
solutions in the literature and finds all possible solutions that might arise
due to measurement errors.
Unlike all existing methods in the literature, we treat the critical exponent
in this work as a random variable for the first time, thereby better capturing
the uncertainty and numerical measurement errors associated with the perturbed
equations of motion. To achieve this, we have developed a novel statistical
method that integrates the power and flexibility of Markov Chain Monte Carlo
(MCMC) and artificial neural networks, estimating the posterior probability
distribution of the critical exponent within a Bayesian framework. Our
approach introduces a generic formalism based on artificial neural networks
and the Metropolis-Hastings algorithm [34, 35] to estimate the critical exponent,
addressing the inherent complexity of the perturbed equations. Using quantum perturbation theory, we
apply the Bayesian estimation method to determine the critical exponent for
the elliptic black hole solution in 4d, estimating the Choptuik exponent
within the domain of the equations. We investigate the range of possible
values for the critical exponent in the Einstein-axion-dilaton system’s 4d
elliptic black hole solution. In this novel iterative approach, each iteration
combines the Metropolis-Hastings algorithm with artificial neural networks to
simultaneously solve the nonlinear perturbed equations of motion and identify
the stochastically most likely values of the critical exponent. We have
investigated the Bayesian estimation of the critical exponent based on various
perturbed equations independently and by considering all the perturbed
equations simultaneously. This was done to examine the impact of various
perturbed equations on stochastic accept-or-reject transitions. Unlike
traditional methods, this new probabilistic approach provides the established
solution and explores the range of physically distinguishable critical
exponents that may arise due to numerical measurement errors.
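As a schematic illustration of the accept-or-reject step (the actual likelihood, the neural-network surrogate of the perturbed equations, and the proposal tuning are described in Sections 4 and 5), a minimal random-walk Metropolis-Hastings update for the critical exponent could look as follows; the residual function here is a hypothetical stand-in for the perturbed equations of motion.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(kappa):
    # Hypothetical stand-in: in the actual method the residuals of the perturbed
    # equations of motion (Section 3) are evaluated, e.g. via a neural-network surrogate.
    return kappa - 1.0 / 0.2641                  # toy residual, minimized at kappa = 1/gamma

def log_posterior(kappa, sigma=0.05):
    # Gaussian likelihood on the residual with a flat prior on kappa > 0
    return -np.inf if kappa <= 0 else -0.5 * (residual(kappa) / sigma) ** 2

def metropolis_hastings(n_steps=20000, step=0.1, kappa0=3.0):
    chain, kappa, logp = [], kappa0, log_posterior(kappa0)
    for _ in range(n_steps):
        prop = kappa + step * rng.standard_normal()      # symmetric random-walk proposal
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:     # stochastic accept-or-reject
            kappa, logp = prop, logp_prop
        chain.append(kappa)
    return np.array(chain)

samples = metropolis_hastings()[5000:]                   # discard burn-in
gamma = 1.0 / samples                                    # gamma = 1 / Re(kappa*)
print(f"posterior mean gamma = {gamma.mean():.4f} +/- {gamma.std():.4f}")
```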
This paper is structured as follows: Section 2 explores the Einstein-axion-
dilaton system. Section 3 details the quantum perturbation analysis and the
perturbed equations of motion. Section 4 presents the statistical methods
employed, including polynomial regression, artificial neural networks, and
Markov Chain Monte Carlo. Section 5 provides the numerical studies that
examine the posterior distribution and Bayesian estimation of the critical
exponent.
## 2 The Einstein-axion-dilaton System
The Einstein-axion-dilaton system coupled to gravity in $d$ dimensions is
defined by the following action:
$S=\int
d^{d}x\sqrt{-g}\left(R-\frac{1}{2}\frac{\partial_{a}\tau\partial^{a}\bar{\tau}}{(\imaginary\tau)^{2}}\right)\;,$
(4)
which is known by the effective action of type II string theory [36, 37]. The
axion-dilaton is expressed by $\tau\equiv a+ie^{-\phi}$. This action enjoys
the $\mathrm{SL}(2,\mathbb{R})$ symmetry
$\tau\rightarrow M\tau\equiv\frac{\alpha\tau+\beta}{\gamma\tau+\delta}\;,$ (5)
where $\alpha$, $\beta$, $\gamma$, $\delta$ are real parameters satisfying
$\alpha\delta-\beta\gamma=1$. If the quantum effects are considered then the
$\mathrm{SL}(2,\mathbb{R})$ symmetry gets exchanged to
$\mathrm{SL}(2,\mathbb{Z})$ and this S-duality was revealed to be the non-
perturbative symmetry of IIB string theory [38, 39, 40]. Taking the above
action, one derives the equations of motion as
$R_{ab}=\tilde{T}_{ab}\equiv\frac{1}{4(\imaginary\tau)^{2}}(\partial_{a}\tau\partial_{b}\bar{\tau}+\partial_{a}\bar{\tau}\partial_{b}\tau)\,,$
(6)
$\nabla^{a}\nabla_{a}\tau+\frac{i\nabla^{a}\tau\nabla_{a}\tau}{\imaginary\tau}=0\,.$
(7)
We take the spherical symmetry for both background and perturbations and the
general form of the metric in $d$ dimensions is given by
$ds^{2}=(1+u(t,r))(-b(t,r)^{2}dt^{2}+dr^{2})+r^{2}d\Omega_{d-2}^{2}\,,$ (8)
$\tau=\tau(t,r).$ (9)
The angular part of the metric in $d$ dimensions is $d\Omega_{d-2}^{2}$. One
can look for a scale-invariant solution by requiring that under a
scale transformation $(t,r)\rightarrow(\Lambda t,\Lambda r)$ the line
element changes as $ds^{2}\rightarrow\Lambda^{2}ds^{2}$. Hence all the
functions must be scale invariant, i.e. $u(t,r)=u(z)$, $b(t,r)=b(z)$, with
$z\equiv-r/t$. The effective action (4) is invariant under the
$\mathrm{SL}(2,\mathbb{R})$ transformation (5), hence $\tau$ only needs to
be invariant up to an $\mathrm{SL}(2,\mathbb{R})$ transformation; that is, if
$\tau(\Lambda t,\Lambda r)=M(\Lambda)\tau(t,r)$ (10)
holds, we call a system $(g,\tau)$ respecting the above properties a
continuous self-similar (CSS) solution. It is worth highlighting that the different
cases correspond to the conjugacy classes of
$\evaluated{\derivative{M}{\Lambda}}_{\Lambda=1}$ [25]. Hence, $\tau$ can take
three different forms [26], called the elliptic, hyperbolic and parabolic cases.
In this paper, we deal with the elliptic ansatz and its form for the axion-
dilaton case can be cast as follows
$\displaystyle\tau(t,r)=i\frac{1-(-t)^{i\omega}f(z)}{1+(-t)^{i\omega}f(z)}\,,$
(11)
where $\omega$ is a real parameter to be determined and the function $f(z)$ needs
to satisfy $\absolutevalue{f(z)}<1$ for the elliptic case. By substituting the
CSS ansätze into the equations of motion, we obtain a differential
system of equations for $u(z)$, $b(z)$, $f(z)$. Making use of spherical
symmetry, one can show that $u(z)$ and $u^{\prime}(z)$ can be expressed in terms of
$b(z)$ and $f(z)$, so that we are finally left with the ordinary differential
equations (ODEs)
$\displaystyle b^{\prime}(z)$ $\displaystyle=B(b(z),f(z),f^{\prime}(z))\,,$
(12) $\displaystyle f^{\prime\prime}(z)$
$\displaystyle=F(b(z),f(z),f^{\prime}(z))\,.$ (13)
These equations in elliptic 4d can be written in a closed form as
$\displaystyle 0$ $\displaystyle=$ $\displaystyle
b^{\prime}+{z(b^{2}-z^{2})\over
b(-1+|f|^{2})^{2}}f^{\prime}\bar{f}^{\prime}-{i\omega(b^{2}-z^{2})\over
b(-1+|f|^{2})^{2}}(f\bar{f}^{\prime}-\bar{f}f^{\prime})-{\omega^{2}z|f|^{2}\over
b(-1+|f|^{2})^{2}},$ $\displaystyle 0$ $\displaystyle=$ $\displaystyle
f^{\prime\prime}-{z(b^{2}+z^{2})\over b^{2}(-1+|f|^{2})^{2}}f^{\prime
2}\bar{f}^{\prime}+{2\over(1-|f|^{2})}\left(1-{i\omega(b^{2}+z^{2})\over
2b^{2}(1-|f|^{2})}\right)\bar{f}f^{\prime 2}$ (14)
$\displaystyle+{i\omega(b^{2}+2z^{2})\over
b^{2}(-1+|f|^{2})^{2}}ff^{\prime}\bar{f}^{\prime}+{2\over z}\left(1+{i\omega
z^{2}(1+|f|^{2})\over(b^{2}-z^{2})(1-|f|^{2})}\right.$
$\displaystyle+\left.{\omega^{2}z^{4}|f|^{2}\over
b^{2}(b^{2}-z^{2})(1-|f|^{2})^{2}}\right)f^{\prime}+{\omega^{2}z\over
b^{2}(-1+|f|^{2})^{2}}f^{2}\bar{f}^{\prime}+$
$\displaystyle{2i\omega\over(b^{2}-z^{2})}\left(\frac{1}{2}-{i\omega(1+|f|^{2})\over
2(1-|f|^{2})}\right.-\left.{\omega^{2}z^{2}|f|^{2}\over
2b^{2}(-1+|f|^{2})^{2}}\right)f.$
The above equations have five singularities [26], located at $z=\pm 0$,
$z=\infty$ and $z=z_{\pm}$, where the last two are defined by
$b(z_{\pm})=\pm z_{\pm}$. The point $z=z_{+}$ is merely a coordinate singularity [20,
26], so $\tau$ should be regular across it; this constraint
translates into requiring a finite value of $f^{\prime\prime}(z)$ as
$z\rightarrow z_{+}$.
The vanishing of the divergent part of
$f^{\prime\prime}(z)$ thus provides a complex-valued
constraint at $z_{+}$, which can be written as
$G(b(z_{+}),f(z_{+}),f^{\prime}(z_{+}))=0$, where the final form of $G$ for the
elliptic class is given by [25]
$\displaystyle G(f(z_{+}),f^{\prime}(z_{+}))=$
$\displaystyle\,2z\bar{f}(z_{+})\left(-2\omega^{2}\right)f^{\prime}(z_{+})$
(15)
$\displaystyle+f(z_{+})\bar{f}(z_{+})\left(2z_{+}\bar{f}(z_{+})(-2+2i\omega+2)f^{\prime}(z_{+})+2i\omega\left(2+\omega^{2}\right)\right)$
$\displaystyle-\frac{2z_{+}(2+2i\omega-2)f^{\prime}(z_{+})}{f(z_{+})}$
$\displaystyle+2\omega(\omega-i)f(z_{+})^{2}\bar{f}(z_{+})^{2}-2\omega(\omega+i)\,.$
The initial conditions are fixed by the smoothness of the solution. Writing
$f(z)$ in polar form as $f(z)=f_{m}(z)e^{if_{a}(z)}$, we can see that all
equations are invariant under a global redefinition of the phase of $f(z)$, so
we may set $f_{a}(0)=0$. Imposing the regularity condition at $z=0$ and
making use of the residual symmetries of the equations of motion (14), we
obtain the boundary conditions of the equations of motion as
$\displaystyle b(0)=1,f_{m}(0)=x_{0},\quad\quad\quad\quad
f_{m}^{\prime}(0)=f_{a}^{\prime}(0)=f_{a}(0)=0\;,$ (16)
where $x_{0}$ is a real parameter with $0<x_{0}<1$. Therefore, we have two
constraints (the vanishing of the real and imaginary parts of $G$) and two
parameters $(\omega,x_{0})$. The discrete solution in four dimensions was
found in [28]. Note that these solutions are explicitly found by a root-finding
procedure combined with numerical integration of all the equations of motion. For
instance, the solution in the 4d elliptic case was found to be [41, 26]
$\omega=1.176,\quad\absolutevalue{f(0)}=0.892,\quad z_{+}=2.605$ (17)
The corresponding black hole solutions in higher dimensions for the axion-dilaton
system are obtained in [42].
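Before turning to the perturbations, the small numerical check below (not part of the original derivation) illustrates why the condition $\absolutevalue{f(z)}<1$ is required for the elliptic ansatz (11): since $\absolutevalue{(-t)^{i\omega}}=1$ for $t<0$, it guarantees $\imaginary\tau=e^{-\phi}>0$, i.e. a real dilaton.

```python
import numpy as np

# Consistency check of the elliptic ansatz (11): |f| < 1 implies Im(tau) > 0.
rng = np.random.default_rng(1)
omega, t = 1.176, -0.5                        # omega from Eq. (17); any t < 0
phase = (-t) ** (1j * omega)                  # pure phase, |phase| = 1

for _ in range(5):
    f = 0.99 * np.sqrt(rng.uniform()) * np.exp(2j * np.pi * rng.uniform())   # |f| < 1
    w = phase * f
    tau = 1j * (1 - w) / (1 + w)              # Eq. (11)
    assert tau.imag > 0                       # upper half-plane, as required for e^{-phi}
print("all sampled f with |f| < 1 give Im(tau) > 0")
```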
## 3 Quantum Perturbation Analysis
In this section, we would like to derive the quantum perturbation equations
for black holes in the elliptic class in four dimensions.
The method is general and applies to all
dimensions and all matter content. Some of the steps follow [43] and
[44], and by means of algebraic manipulations we remove $u(t,r)$ and its derivatives
from all the equations. (Similar
perturbations of spherically symmetric solutions in Horava gravity were
carried out in [45].) One can perturb the exact solutions $h_{0}$ (where $h$
denotes either $b$ or $f$) explored in Section 2 as
$h(t,r)=h_{0}(z)+\varepsilon\,h_{1}(t,r)$ (18)
where $\varepsilon$ is a small number. By expanding equations in powers of
$\varepsilon$, the zeroth order part results in the background equations which
have been studied in Section 2 and the linearized equations for the
perturbations $h_{1}(t,r)$ are explored due to the linear terms in
$\varepsilon$. One can consider the perturbations of the form
$\displaystyle h(z,t)=h_{0}(z)+\varepsilon(-t)^{-\kappa}h_{1}(z).$ (19)
The four-dimensional axion-dilaton system is known to be stable, and we
consider $\real\kappa>0$. We can explore the spectrum of $\kappa$ by solving
the equations for $h_{1}(z)$. The general
solution of the first-order equations is then obtained as a linear
combination of these modes. We seek the mode $\kappa^{*}$
with the largest real part (assuming a mode growing as
$t\rightarrow 0$, i.e. $\real\kappa>0$), which is related to the Choptuik
critical exponent by [5, 15, 17]
$\displaystyle\gamma=\frac{1}{\real\kappa^{*}}.$ (20)
We consider only real modes $\kappa^{*}$. The values $\kappa=0$ and
$\kappa=1$ are gauge modes, associated with $U(1)$ re-definitions of the
phase of $f$ and with time translations [28], and these modes have
been eliminated from the calculations.
### 3.1 Perturbation Equations in 4D For the Elliptic Class
The derivation of the perturbations has been explained in [28]. Here, however, we
want to analyze the quantum perturbations, both treating the
perturbed equations independently and considering all the perturbed
equations simultaneously, in order to assess their impact on the stochastic accept-reject
transitions used to find the posterior distribution and Bayesian estimate of
the critical exponent. We briefly summarize all the perturbation equations below.
The perturbation ansätze (19) for the $b$ and $\tau$ functions are
$b(t,r)=b_{0}(z)+\varepsilon\,(-t)^{-\kappa}b_{1}(z)\ ,$ (21)
$\tau(t,r)=i\frac{1-(-t)^{i\omega}f(t,r)}{1+(-t)^{i\omega}f(t,r)}\ ,$ (22)
$f(t,r)\equiv f_{0}(z)+\varepsilon(-t)^{-\kappa}f_{1}(z).$ (23)
Using the ansätze (21) for the metric functions, we can calculate the Ricci
tensor for the metric as a function of $\varepsilon$ to find the zeroth-order
and first-order parts from the limiting behaviours as
$R^{(0)}_{ab}=\lim_{\varepsilon\rightarrow 0}R_{ab}(\varepsilon)\,\quad
R^{(1)}_{ab}=\lim_{\varepsilon\rightarrow
0}\derivative{R_{ab}(\varepsilon)}{\varepsilon}\,.$ (24)
The same method is applied to the right-hand side $\tilde{T}_{ab}$ of the
field equations, which results in
$\tilde{T}^{(0)}_{ab}=\lim_{\varepsilon\rightarrow
0}\tilde{T}_{ab}(\varepsilon)\,\quad\tilde{T}^{(1)}_{ab}=\lim_{\varepsilon\rightarrow
0}\derivative{\tilde{T}_{ab}(\varepsilon)}{\varepsilon}.$ (25)
Indeed, the Einstein Field Equations (EFEs) hold order by order, so that
$R^{(0)}_{ab}=\tilde{T}^{(0)}_{ab}\,,\quad R^{(1)}_{ab}=\tilde{T}^{(1)}_{ab}.$
(26)
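The order-by-order extraction in Eqs. (24)-(26) can be mimicked symbolically; the toy snippet below, with a hypothetical quantity standing in for $R_{ab}(\varepsilon)$, simply illustrates the limit-and-derivative prescription.

```python
import sympy as sp

# Toy illustration of the order-by-order extraction in Eqs. (24)-(25):
# expand a quantity in epsilon and read off the zeroth- and first-order parts.
eps, z = sp.symbols('epsilon z')
b0, b1 = sp.Function('b0')(z), sp.Function('b1')(z)

R = (b0 + eps * b1) ** 2                       # hypothetical stand-in for R_ab(epsilon)

R0 = R.subs(eps, 0)                            # epsilon -> 0 limit:      b0(z)**2
R1 = sp.diff(R, eps).subs(eps, 0)              # derivative at epsilon=0: 2*b0(z)*b1(z)
print(R0, R1)
```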
As explained, one can express $u$ and its first derivative in terms of
$b_{0}$, $f$ and their first derivatives. We now combine the field equations so
as to eliminate the second-derivative terms in $b(t,r)$.
This combination is known as the Hamiltonian constraint, which can be written as
$C(\varepsilon)\equiv
R_{tt}+b^{2}\,R_{rr}-\tilde{T}_{tt}-b^{2}\,\tilde{T}_{rr}=0\,.$ (27)
One finds the lowest-order contribution as a first-order equation relating
$b_{0}^{\prime}$ to $b_{0}$, $f_{0}$, $f_{0}^{\prime}$ as follows
$\displaystyle
b_{0}^{\prime}=\frac{((z^{2}-b_{0}^{2})f_{0}^{\prime}(z\bar{f}_{0}^{\prime}+i\omega\bar{f}_{0})+\omega
f_{0}(\omega
z\bar{f}_{0}-i(z^{2}-b_{0}^{2})\bar{f}_{0}^{\prime}))}{b_{0}(1-f_{0}\bar{f}_{0})^{2}}\,.$
(28)
In a similar way, the first correction is given by
$\evaluated{\derivative{C(\varepsilon)}{\varepsilon}}_{\varepsilon=0}=0\ ,$
(29)
which is a first-order equation relating $b_{1}^{\prime}$ to $b_{0}$,
$b_{0}^{\prime}$, $f_{0}$, $f_{0}^{\prime}$, $f_{0}^{\prime\prime}$, and the
other perturbations $b_{1}$, $f_{1}$, $f_{1}^{\prime}$; it is
linear in all perturbations. For the elliptic class in 4d one obtains the
linearized equations for $b_{1}^{\prime}$ as follows
$\displaystyle(L_{1})b_{1}^{\prime}$
$\displaystyle=rt((t-2t)b_{0}+rb_{0}^{\prime})\bigg{(}-\frac{2f_{1}^{\prime}\bar{f}_{0}^{\prime}r^{2}}{t^{4}s_{0}^{2}}-\frac{2b_{1}b_{0}^{\prime}}{rt}+\frac{\kappa
2b_{0}b_{1}b_{0}^{\prime}}{r\left((t-2t)b_{0}+rb_{0}^{\prime}\right)}$ (30)
$\displaystyle\quad-\frac{4i\omega
b_{0}b_{1}\bar{f}_{0}f_{0}^{\prime}}{rts_{0}^{2}}+\frac{2i\omega
b_{0}^{2}\bar{f}_{1}f_{0}^{\prime}}{rts_{0}^{3}}+\frac{2\kappa\bar{f}_{1}f_{0}^{\prime}r}{t^{3}s_{0}^{2}}+\frac{2i\omega\bar{f}_{1}f_{0}^{\prime}r}{t^{3}s_{0}^{2}}$
$\displaystyle\quad-\frac{2\kappa
b_{0}^{2}\bar{f}_{1}f_{0}^{\prime}}{rts_{0}^{2}}+\frac{2i\omega\bar{f}_{0}f_{1}^{\prime}r}{t^{3}s_{0}^{2}}-\frac{2i\omega
b_{0}^{2}\bar{f}_{0}f_{1}^{\prime}}{rts_{0}^{2}}+\frac{4b_{0}b_{1}f_{0}^{\prime}\bar{f}_{0}^{\prime}}{t^{2}s_{0}^{2}}$
$\displaystyle\quad+\frac{2b_{0}^{2}f_{1}^{\prime}\bar{f}_{0}^{\prime}}{t^{2}s_{0}^{2}}+\frac{1}{rt^{4}s_{0}^{3}}2f_{1}\Big{(}t\omega\bar{f}_{0}^{2}(rt(-i\kappa+\omega)f_{0}-2i(r^{2}-t^{2}b_{0}^{2})f_{0}^{\prime})$
$\displaystyle\quad\quad+t(\kappa-i\omega)(-r^{2}+t^{2}b_{0}^{2})\bar{f}_{0}^{\prime}+\bar{f}_{0}\big{(}rt^{2}\omega(i\kappa+\omega)+t(\kappa+i\omega)$
$\displaystyle\quad\quad\times(r^{2}-t^{2}b_{0}^{2})f_{0}\bar{f}_{0}^{\prime}+2r(r^{2}-t^{2}b_{0}^{2})f_{0}^{\prime}\bar{f}_{0}^{\prime}\big{)}\Big{)}-\frac{2f_{0}^{\prime}\bar{f}_{1}^{\prime}r^{2}}{t^{4}s_{0}^{2}}+\frac{2b_{0}^{2}f_{0}^{\prime}\bar{f}_{1}^{\prime}}{t^{2}s_{0}^{2}}$
$\displaystyle\quad-\frac{2if_{0}(\bar{f}_{1}(r\omega
t^{2}(\kappa+i\omega)+\omega
t(2r^{2}-t^{2}b_{0}^{2})\bar{f}_{0}f_{0}^{\prime}+2ir(r^{2}-t^{2}b_{0}^{2})f_{0}^{\prime}\bar{f}_{0}^{\prime}))}{rt^{4}s_{0}^{3}}$
$\displaystyle\quad-\frac{2if_{0}t\omega(-\bar{f}_{1}^{\prime}r^{2}+2t^{2}b_{0}b_{1}\bar{f}_{0}^{\prime}+t^{2}b_{0}^{2}\bar{f}_{1}^{\prime})}{rt^{4}s_{0}^{3}}+\frac{4i\omega
f_{0}^{2}\left(r^{2}-t^{2}b_{0}^{2}\right)\bar{f}_{1}\bar{f}_{0}^{\prime}}{rt^{3}s_{0}^{3}}$
$\displaystyle+\frac{2i\omega
f_{0}^{2}\left(\bar{f}_{0}\left(-\bar{f}_{1}^{\prime}r^{2}+t(\kappa-i\omega)\bar{f}_{1}r+2t^{2}b_{0}b_{1}\bar{f}_{0}^{\prime}+t^{2}b_{0}^{2}\bar{f}_{1}^{\prime}\right)\right)}{rt^{3}s_{0}^{3}}\bigg{)}\,,$
where
$L_{1}=2b_{0}((\kappa-1)tb_{0}+rb_{0}^{\prime}),\quad
s_{0}=(f_{0}\bar{f}_{0}-1).$ (31)
It can be seen that (30) is invariant under dilations
$(t,r)\rightarrow(e^{\lambda}t,e^{\lambda}r)$, so that, by changing
coordinates to $(t,z)$, all the factors of $t$ cancel out and the final
result depends only on $z$. Hence, we are left with real, linear ordinary
differential equations.
If we apply the perturbation ansätze to the $\tau$ equation of motion (7),
extract the zeroth- and first-order parts, and remove all occurrences of
$u_{0}^{\prime}$, $u_{1}$ and $u_{1}^{\prime}$, the resulting zeroth-order
part is an equation involving $b_{0}$, $b_{0}^{\prime}$, $f_{0}$,
$f_{0}^{\prime}$ and $f_{0}^{\prime\prime}$; substituting $b_{0}^{\prime}$
according to (12) and solving for $f_{0}^{\prime\prime}$, we obtain a
second-order equation for $f_{0}$, which is indeed the explicit form of (13).
The first-order part involves $b_{1}$, $b_{1}^{\prime}$, $f_{1}$,
$f_{1}^{\prime}$ and $f_{1}^{\prime\prime}$ linearly, while depending
non-linearly on the zeroth-order functions and their derivatives, which enter
as background functions. Hence, the perturbation equation for
$f_{1}^{\prime\prime}$ is given by
$\displaystyle(L_{3})f_{1}^{\prime\prime}=$ $\displaystyle
r^{2}\bigg{(}it\omega
f_{0}^{2}(\bar{f}_{1}b_{0}^{\prime}+\bar{f}_{0}b_{1}^{\prime})+f_{1}b_{0}^{\prime}(\kappa
t-it\omega-t(\kappa-2i\omega)f_{0}\bar{f}_{0}+rf_{0}^{\prime}\bar{f}_{0})$
(32)
$\displaystyle\quad-r(b_{1}^{\prime}f_{0}^{\prime}+b_{0}^{\prime}f_{1}^{\prime})+f_{0}\big{(}b_{1}^{\prime}(-it\omega+r\bar{f}_{0}f_{0}^{\prime})+rb_{0}^{\prime}(\bar{f}_{1}f_{0}^{\prime}+\bar{f}_{0}f_{1}^{\prime})\big{)}\bigg{)}$
$\displaystyle+rt^{2}b_{0}^{2}\big{(}f_{1}\bar{f}_{0}b_{0}^{\prime}f_{0}^{\prime}-b_{1}^{\prime}f_{0}^{\prime}-b_{0}^{\prime}f_{1}^{\prime}+f_{0}(\bar{f}_{1}b_{0}^{\prime}f_{0}^{\prime}+\bar{f}_{0}(b_{1}^{\prime}f_{0}^{\prime}+b_{0}^{\prime}f_{1}^{\prime}))\big{)}$
$\displaystyle-t^{2}b_{0}^{3}\bigg{(}2r\bar{f}_{1}f_{0}^{\prime
2}-2tf_{1}^{\prime}+4r\bar{f}_{0}f_{0}^{\prime}f_{1}^{\prime}+f_{1}\bar{f}_{0}(2tf_{0}^{\prime}-rf_{0}^{\prime\prime})$
$\displaystyle\quad+f_{0}\big{(}2t\bar{f}_{0}f_{1}^{\prime}+\bar{f}_{1}(2tf_{0}^{\prime}-rf_{0}^{\prime\prime})\big{)}\bigg{)}$
$\displaystyle-
rb_{0}\bigg{(}t^{2}\omega(\omega-i)f_{0}^{2}\bar{f}_{1}+2r\big{(}-r\bar{f}_{1}f_{0}^{\prime
2}+(t(1+\kappa-i\omega)-2r\bar{f}_{0}f_{0}^{\prime})f_{1}^{\prime}\big{)}$
$\displaystyle\quad\quad+f_{1}\Big{(}t^{2}(-\kappa^{2}+\kappa(-1+2i\omega)+\omega(i+\omega))+t^{2}(\kappa+\kappa^{2}+2i\kappa\omega$
$\displaystyle\quad\quad\quad+2\omega(-i+\omega)\big{)}\bar{f}_{0}f_{0}+r\bar{f}_{0}(2t(-1+2\kappa-i\omega)f_{0}^{\prime}+rf_{0}^{\prime\prime})\Big{)}$
$\displaystyle\quad\quad+rf_{0}(-2t(1+\kappa+i\omega)\bar{f}_{0}f_{1}^{\prime}+\bar{f}_{1}(-2it(-i+\omega)f_{0}^{\prime}+rf_{0}^{\prime\prime}))\bigg{)}$
$\displaystyle-
b_{1}\bigg{(}rt^{2}\omega(-i+i\kappa+\omega)f_{0}^{2}\bar{f}_{0}-t\big{(}r^{2}(-2+\kappa+2i\omega)+6t^{2}b_{0}^{2}$
$\displaystyle\quad-2rtb_{0}b_{0}^{\prime}\big{)}f_{0}^{\prime}-2r(r^{2}-3t^{2}b_{0}^{2})\bar{f}_{0}f_{0}^{\prime
2}-r(r^{2}-3t^{2}b_{0}^{2})f_{0}^{\prime\prime}$
$\displaystyle\quad+f_{0}\Big{(}rt^{2}\omega(i-i\kappa+\omega)+\bar{f}_{0}\big{(}t(r^{2}(-2+\kappa-2i\omega)+6t^{2}b_{0}^{2}$
$\displaystyle\quad-2rtb_{0}b_{0}^{\prime})f_{0}^{\prime}+r(r^{2}-3t^{2}b_{0}^{2})f_{0}^{\prime\prime}\big{)}\Big{)}\bigg{)}\,,$
where
$L_{3}=rb_{0}(r^{2}-t^{2}b_{0}^{2})s_{0}\,.$ (33)
The above equations are scale-invariant. Hence, after a change of coordinates
to $(t,z)$ they reduce to ordinary differential equations. The resulting
system of linear ODEs is given by
$\displaystyle b_{1}^{\prime}$
$\displaystyle=B_{1}(b_{1},f_{1},f_{1}^{\prime})\,,$ (34) $\displaystyle
f_{1}^{\prime\prime}$ $\displaystyle=F_{1}(b_{1},f_{1},f_{1}^{\prime})\,.$
(35)
Note that the equations for $b_{1}$ and $f_{1}$ are linear in the
perturbations but still depend non-linearly on the unperturbed solution, and
they depend quadratically on $\kappa$. They also contain the same
singularities as the unperturbed system of equations, i.e. they are singular
at $z=0$ and where $b^{2}(z)=z^{2}$. The modes are found by searching for the
values of $\kappa$ for which the perturbed equations (34), (35) admit smooth
solutions satisfying the proper boundary conditions, described next.
We now focus on the boundary conditions needed to solve (34) and (35). First
of all, at $z=0$, we re-scale the time coordinate so that $b_{1}(0)=0$. Also,
using the regularity condition for the axion-dilaton at $z=0$, we find that
$f_{1}^{\prime}(0)=0$, so that the remaining freedom in $f$ is the unknown
complex parameter $f_{1}(0)$. On the other hand, at $z_{+}$ (we
recall that $z_{+}$ is defined by the equation $b(z_{+})=z_{+}$) all equations
and perturbations are regular so that all the second derivatives
$\partial_{r}^{2}f(t,r)$, $\partial_{r}\partial_{t}f(t,r)$,
$\partial_{t}^{2}f(t,r)$ should be finite as $z\rightarrow z_{+}$. Hence,
$f_{0}^{\prime\prime}(z)$ and $f_{1}^{\prime\prime}(z)$ are also finite as
$z\rightarrow z_{+}$. We introduce $\beta=b_{0}(z)-z$ and then expand
$f_{0}^{\prime\prime},f_{1}^{\prime\prime}$ near the singularity, as
$\displaystyle f_{0}^{\prime\prime}(\beta)$
$\displaystyle=\frac{1}{\beta}G(h_{0})+\mathcal{O}(1)\,,$ (36) $\displaystyle
f_{1}^{\prime\prime}(\beta)$
$\displaystyle=\frac{1}{\beta^{2}}\bar{G}(h_{0})+\frac{1}{\beta}H(h_{0},h_{1}|\kappa)+\mathcal{O}(1)\,,$
(37)
where we have defined the boundary-data vectors
$\displaystyle h_{0}=(b_{0}(z_{+}),f_{0}(z_{+}),f_{0}^{\prime}(z_{+})),\quad
h_{1}=(b_{1}(z_{+}),f_{1}(z_{+}),f_{1}^{\prime}(z_{+}))\;.$ (38)
The unperturbed complex constraint at $z_{+}$ is $G(h_{0})=0$, and we have
checked that it implies $\bar{G}(h_{0})=0$ at $z_{+}$ as well, that is,
$G(h_{0})=0\,\Rightarrow\,\bar{G}(h_{0})=0\,.$ (39)
Hence, we are left just with the complex-valued constraint
$H(h_{0},h_{1}|\kappa)=0\,,$ (40)
that is linear in $h_{1}$.
We now try to solve this constraint for $f_{1}^{\prime}(z_{+})$ in terms of
$f_{1}(z_{+})$, $b_{1}(z_{+})$, $\kappa$ and $h_{0}$. Hence, this condition
reduces the number of free parameters in the boundary conditions at $z_{+}$ to
a real number $b_{1}(z_{+})$ and a complex $f_{1}(z_{+})$. In conclusion, we
have six real unknowns, which are $\kappa$ and the five-component vector as
$X=(\real f_{1}(0),\,\imaginary f_{1}(0),\,\real f_{1}(z_{+}),\,\imaginary
f_{1}(z_{+}),\,b_{1}(z_{+}))\;,$ (41)
while the system of linear ODEs (34) and (35) has total real order five.
Following the numerical procedure of [28], given a set of boundary conditions
$X$, we integrate from $z=0$ to an intermediate point $z_{\text{mid}}$ and,
similarly, we integrate backwards from $z_{+}$ to
$z_{\text{mid}}$. Finally, we collect the values of all functions
$(b_{1},\real f_{1},\imaginary f_{1},\real f_{1}^{\prime},\imaginary
f_{1}^{\prime})$ at $z_{\text{mid}}$ and encode the difference between the two
integrations in a difference function $D(\kappa;X)$. By definition,
$D(\kappa;X)$ is linear in $X$ thus it has a representation in matrix form
$D(\kappa;X)=A(\kappa)X\;,$ (42)
where $A(\kappa)$ is a $5\times 5$ real matrix depending on $\kappa$. Thus, we
only need to find the non-trivial zeroes of $D(\kappa;X)$, which amounts to
solving $\det A(\kappa)=0$. We carry out a root search for the determinant as
a function of $\kappa$; the largest root is related to the Choptuik exponent
through (20). Note that the perturbed equations of motion are singular
whenever the factor
$\left(\kappa-1-z\frac{b_{0}^{\prime}}{b_{0}}\right)$ in the denominator
vanishes, so that the numerical procedure fails at a particular point. We can
estimate the values of $\kappa$ giving rise to this singular behaviour, and we
can also determine the entire region that leads to numerical failure. However,
this apparent problem does not affect our evaluation of the critical exponent
because, in most cases, the most relevant mode $\kappa^{*}$ lies outside that
failure region.
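To make the procedure concrete, the following is a minimal Python sketch of the two-sided shooting construction of $D(\kappa;X)=A(\kappa)X$ and the root search for $\det A(\kappa)=0$. The right-hand-side function, the value of $z_{+}$, and the resolution of the constraint at $z_{+}$ are placeholders for the actual expressions obtained from (34), (35) and (40); this is not the code of [28].

```python
# A minimal sketch of the two-sided shooting procedure and the root search
# for det A(kappa) = 0.  The function `rhs` below is a hypothetical
# placeholder; in practice it encodes (34)-(35) on the closed-form background.
import numpy as np
from scipy.integrate import solve_ivp

Z_PLUS = 2.6   # assumed illustrative value of z_+ from the background solution
Z_MID = 1.0    # intermediate matching point

def rhs(z, y, kappa):
    """Real 5-dimensional first-order system for
    y = (b1, Re f1, Im f1, Re f1', Im f1').  Placeholder implementation."""
    b1, fr, fi, dfr, dfi = y
    db1 = 0.0 * b1          # placeholder for B1(b1, f1, f1') of eq. (34)
    d2fr, d2fi = 0.0, 0.0   # placeholders for Re/Im of F1(b1, f1, f1') of eq. (35)
    return [db1, dfr, dfi, d2fr, d2fi]

def mismatch(kappa, X):
    """D(kappa; X): difference at z_mid between the forward and backward
    solutions determined by the boundary data
    X = (Re f1(0), Im f1(0), Re f1(z+), Im f1(z+), b1(z+))."""
    # forward integration from z = 0 with b1(0) = 0 and f1'(0) = 0
    y0 = [0.0, X[0], X[1], 0.0, 0.0]
    left = solve_ivp(rhs, (1e-6, Z_MID), y0, args=(kappa,), rtol=1e-8)
    # backward integration from z_+; f1'(z+) is fixed by the constraint H = 0
    dfr_p, dfi_p = 0.0, 0.0   # placeholder: solve H(h0, h1 | kappa) = 0
    yp = [X[4], X[2], X[3], dfr_p, dfi_p]
    right = solve_ivp(rhs, (Z_PLUS, Z_MID), yp, args=(kappa,), rtol=1e-8)
    return left.y[:, -1] - right.y[:, -1]

def det_A(kappa):
    """Assemble A(kappa) column by column from unit boundary vectors,
    exploiting the linearity of D(kappa; X) in X."""
    A = np.column_stack([mismatch(kappa, e) for e in np.eye(5)])
    return np.linalg.det(A)

# scan kappa and locate sign changes of det A(kappa)
kappas = np.linspace(3.0, 4.0, 51)
dets = [det_A(k) for k in kappas]
```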
## 4 Statistical Approach
In this section, we detail the novel approach we have developed to estimate
the distribution of the critical exponent $\kappa$ in the elliptic 4d case of
the equations of motion with perturbations. In Figure 1 we show the main steps
of the proposed pipeline.
Figure 1: Bayesian approach with artificial-neural-network-assisted
Metropolis-Hastings for the estimation of the distribution of $\kappa$. We
start by estimating a closed form for the unperturbed equations of motion via
polynomial regression. Then an ANN-assisted Metropolis-Hastings is run,
sampling $\kappa$ from a prior distribution and using an ANN solver for the
perturbed equations of motion in each Metropolis-Hastings iteration.
Technically, we offer a new Bayesian approach with artificial neural networks
(ANNs) assisted Metropolis-Hastings for the estimation of the distribution of
the mentioned $\kappa$ that is related to the critical exponent $\gamma$. To
do so, we must treat $\kappa$ as a random variable, so we use the Bayesian
strategy to find the posterior distribution of $\kappa$ based on the perturbed
DE system. We emphasize again that this is the first time in the literature that
the critical exponent is treated as a random variable. As can be seen in
Figure 1, our approach starts with the use of a polynomial regression technique
to estimate the closed form of the unperturbed critical collapse functions
under analysis. To properly feed the polynomial regression technique, we
follow an ANN-based approach [32] to numerically find the unperturbed critical
collapse functions in the entire domain of the DE system. Our approach uses
the ANN estimates of the unperturbed critical collapse functions so that the
polynomial regression technique can be used for the estimation of the final
closed form. As our objective is the Bayesian analysis of the perturbed
equations, we use the estimated closed-form to incorporate the perturbations
related to $\kappa$. The last step consists of the implementation of ANN-
assisted Metropolis-Hastings to estimate the distribution of $\kappa$, which
is related to the critical exponent. We detail each step in the following
subsections, providing all the details for the statistical methods employed.
### 4.1 Polynomial regression for the estimation of the closed form of the
unperturbed equations
Polynomial regression models are common tools in data science to model
multivariate data and find the relationship between variables. Let $({\bf
x},{\bf Z})$ be the input multivariate data, where ${\bf x}$ denotes the
response vector of sample size $n$ and ${\bf Z}$ denotes the design matrix of
size $(n\times p)$ with $p$ independent variables $({\bf z}_{1},\ldots,{\bf
z}_{p})^{\top}$. Given the training data of size $n$, the multivariate
regression model is
$\displaystyle x_{i}={\bf z}_{i}^{\top}{\bf\beta}+\epsilon_{i},$ (43)
where $\epsilon_{i}$ are independent and identically distributed (iid)
variables from the standard Gaussian distribution. The regression model (43)
essentially converts predicting the response differential equation variable
$x=g({\bf z})$ into estimating the unknown coefficients of the model. The
least squares (LS) method is a common approach to estimating the coefficients
of the regression model (43), minimizing the $l_{2}$ norm between the
predicted and observed responses as
$\displaystyle{\widehat{\beta}}=\arg\min_{\beta}||{\bf x}-{\bf
Z}{\bf\beta}||^{2}_{2}.$ (44)
One can easily show that the LS estimate of the unknown coefficients, as the
solution to (44), is given by $\widehat{\bf\beta}_{LS}=({\bf Z}^{\top}{\bf
Z})^{-1}{\bf Z}^{\top}{\bf x}$. Hence, the differential equation response
$x(z)$ at every space-time point $z_{new}$ is predicted by
${\widehat{x}}_{new}={\bf z}_{new}^{\top}{\widehat{\beta}}_{LS}$ [46].
The polynomial regression model extends the properties of multivariate
regression by employing the higher-order terms of the independent variables to
better predict the nonlinear pattern of the response function $x=g(z)$. The
polynomial regression of order $J$ is given by
$\displaystyle x_{i}=\sum_{j=1}^{J}z_{i}^{j}\beta_{j}+\epsilon_{i},$ (45)
where $\beta=(\beta_{1},\ldots,\beta_{J})$ are the model coefficients that
should be estimated. One can easily represent (45) in the matrix form (43),
where the columns of the design matrix ${\bf Z}$ consist of the explanatory
variable $z_{i}$, $i=1,\ldots,n$, raised to the polynomial powers
$j=1,\ldots,J$. From the least squares method (44), one can predict the
non-linear responses of the differential equations at every point $z_{new}$ in
the domain of the equations of motion by ${\widehat{x}}_{new}={\bf
z}_{new}^{\top}{\widehat{\beta}}_{LS}$.
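As a minimal illustration, the following Python sketch fits the cubic polynomial regression (45) by least squares (44) on synthetic data; the grid and the response values stand in for the ANN-generated estimates used in practice, and an intercept column is included since the fitted closed forms below contain a constant term.

```python
# A minimal sketch of cubic polynomial regression fitted by least squares.
# `z` and `x` are synthetic stand-ins for the sampled grid and the values of
# one unperturbed collapse function.
import numpy as np

z = np.linspace(0.0, 2.6, 1000)            # sample points in the DE domain
x = 1.0 - 0.2 * z + 0.5 * z**2 - 0.004 * z**3 + 0.01 * np.random.randn(z.size)

# design matrix with an intercept column followed by z^1, ..., z^J
J = 3
Z = np.column_stack([np.ones_like(z)] + [z**j for j in range(1, J + 1)])

# least-squares estimate beta_hat = (Z^T Z)^{-1} Z^T x
beta_hat, *_ = np.linalg.lstsq(Z, x, rcond=None)

# predict the response at a new point z_new
z_new = 1.3
x_new = np.array([z_new**j for j in range(J + 1)]) @ beta_hat
```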
Technically, we follow the described polynomial regression technique to first
obtain a closed form of the unperturbed equations of motion. As detailed
above, the polynomial regression needs pairs of corresponding inputs and
outputs. To generate them for our problem, we follow an ANN-based approach
[32] to numerically find the unperturbed critical collapse functions in the
entire domain of the DE system. In other words, we employ the technique in
[32] to generate the training data for the polynomial regressor.
### 4.2 ANNs assisted Metropolis-Hastings for the estimation of the
distribution of the critical exponent
We now have a closed form of the unperturbed equations of motion thanks to the
polynomial regression detailed in the previous subsection. However, our
objective is to find the distribution of the parameter $\kappa$ in the
perturbed equations of motion.
With the estimations of the closed form for the DE variables
$b_{0}(z),|f_{0}(z)|$ and $\arg(f_{0}(z))$ of the unperturbed equations of
motion, we proceed to incorporate these estimates and update the perturbed
equations of motion (34) and (35). We consider these perturbed equations of
motion, as our underlying DE system to estimate the distribution of the
critical exponent.
Consider a system of $H$ differential equations where ${\bf
x}(t)=\left(x_{1}(t),\ldots,x_{H}(t)\right)$ represents the multivariate
solutions evaluated at space-time $t$ to the system of differential equations
(DEs)
$\displaystyle\frac{d}{dt}x_{h}(t)=g_{h}({\bf x}(t)|\theta),$ (46)
where the DE variable $x_{h}(t)$ corresponds to the $h$-th DE and $\theta$
denotes the collection of the unknown parameters of the DE. In our particular
case $\theta=\kappa$. Due to the high non-linearity of the black hole
equations of motion, the exact solution to the DE system cannot be obtained.
Rather than the exact DE solution, one may observe a perturbed version of the
DE variable with numerical measurement errors through numerical experiments.
To incorporate this uncertainty into the statistical methods, let ${\bf
y}_{h}=(y_{h1},\ldots,y_{hn})$ denote the observed version of the DE variable
$x_{h}(t)$ at $n$ space-time points. Hence, let $y_{hi}$ follow a Gaussian
distribution with mean $x_{h}(t_{i}|\theta)$ and variance $\sigma_{h}^{2}$.
Given the Gaussian distribution of the observed data, the uncertainty in the
DE data can be modelled by the likelihood function of ${\bf y}$ as
$\displaystyle L(\theta|{\bf
y})=\prod_{h=1}^{H}\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\sigma_{h}^{2}}}\exp\left\{-{(y_{hi}-x_{h}(t_{i}|\theta))^{2}}/{2\sigma_{h}^{2}}\right\}.$
(47)
The likelihood function translates the problem of finding the solution to a
system of DEs into a Gaussian process with unknown parameters. One can
estimate the unknown parameters of the DE system given the observed DE
variables by maximizing the likelihood function.
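As an illustration, a minimal sketch of evaluating (47) in log space is given below; the solver interface and the noise scales are assumptions made for the example.

```python
# A minimal sketch of the (log-)likelihood (47) for the observed perturbed
# DE variables given a candidate kappa.  `solve_perturbed` is a hypothetical
# stand-in for the ANN solver returning x_h(t_i | kappa) on the grid.
import numpy as np

def log_likelihood(kappa, y_obs, t_grid, sigmas, solve_perturbed):
    """y_obs: dict of observed DE variables, e.g. {'b1': ..., 'g': ...};
    sigmas: matching noise scales, e.g. {'b1': 7.0, 'g': 2.0}."""
    logL = 0.0
    x_model = solve_perturbed(kappa, t_grid)      # dict of model solutions
    for h, y_h in y_obs.items():
        resid = y_h - x_model[h]
        logL += -0.5 * np.sum(resid**2) / sigmas[h]**2 \
                - y_h.size * np.log(sigmas[h])
    return logL
```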
Owing to the non-linear nature of the DE variables, the likelihood function
(47) is highly non-linear with multiple optima, indicating the sensitivity of
the maximum likelihood estimate of the DE parameters.
Technically, in this work, we propose a Bayesian approach incorporating the
numerical measurement errors in estimating the critical exponent parameter in
the equations of motion (34), (35). This framework requires prior knowledge of
model parameters, which is incorporated into the analysis through a prior
probability density function. Let $\pi(\theta;{\bf\alpha})$ denote
this prior distribution, with ${\bf\alpha}$ representing the vector of hyper-
parameters. In the experiments, see Section 5, we employ different prior
distributions.
We first generate $\kappa$ candidates from a proposal distribution. Then we
apply fully connected ANNs to find the solution of the perturbed equations of
motion corresponding to each $\kappa$ candidate. In other words, for each
$\kappa$ candidate the model generates a solution using ANNs, assisting the
deployed Bayesian approach in an iterative fashion. We now show in detail how
this ANN step is done.
Standard ANNs encompass multi-layer perceptrons, which transfer the data
between the layers through linear and non-linear functions [47]. In this
research, we focus on fully connected ANNs to deal with the non-linearity of
the elliptic perturbed equations of motion in the Einstein-axion-dilaton
system in 4d. Let $\mathcal{N}^{L}({\bf x},t,\psi)$ denote a neural network
with $L$ layers which maps the input space $\mathbb{R}^{in}$ to
$\mathbb{R}^{out}$. Let ${\bf W}^{l}$ and ${\bf b}^{l}$ denote the weight
matrix and bias vector, which regress the neurons in layer $l$ on those in
layer $l-1$, respectively. Accordingly, the response vector in layer $l$ is
given by
$\displaystyle\begin{array}{ll}\text{input layer:}&N^{0}({\bf x},t,\psi)={\bf x},\\
\text{hidden layers:}&N^{l}({\bf x},t,\psi)=\eta({\bf W}^{l}N^{l-1}({\bf x},t,\psi)+{\bf b}^{l}),\quad 1\leq l\leq L-1,\\
\text{output layer:}&N^{L}({\bf x},t,\psi)=\eta({\bf W}^{L}N^{L-1}({\bf x},t,\psi)+{\bf b}^{L}),\end{array}$
where the output of layer $l$ is obtained after applying the activation
function, that is $\eta(\cdot)$.
The universal approximation theorem guarantees that neural networks can
approximate any continuous function, making them a state-of-the-art method
for solving nonlinear equations [47]. One must define a loss function
$\mathcal{L}$ to assess the performance of the ANNs and a back-propagation
algorithm to adjust the weight matrices and bias parameters of the ANNs based
on the gradient of the loss function. The back-propagation algorithm
calculates the gradients of the loss with respect to the weights and biases to
ensure that the estimates of the ANN's coefficients meet the requirements of
the loss function. This process translates solving the differential equations
of motion into an optimization problem, estimating the differential variables
by minimizing the squared residuals of the ANNs. Accordingly, for this
purpose, we introduce the $l_{2}$ loss function as
$\mathcal{L}\left(\mathcal{N}^{L}({\bf
x},t,\psi)\right)=\left(\mathcal{N}^{L}({\bf x},t,\psi)-g\right)^{2},$
where $\mathcal{N}^{L}({\bf x},t,\psi)$ represents the trained neural network
approximating the underlying function.
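For concreteness, the following is a minimal PyTorch sketch of this idea for a toy first-order ODE. The actual experiments in Section 5 use the neurodiffeq package, so the code below is only an independent illustration of training a fully connected network on the squared DE residual, with the network size chosen to match the architecture used there; the toy right-hand side and initial condition are assumptions.

```python
# A minimal PyTorch sketch of the ANN-as-DE-solver idea: a fully connected
# network (4 hidden layers of 16 neurons) is trained to minimise the squared
# residual of a toy ODE x'(z) = -x(z) with x(0) = 1.  The perturbed system
# is handled analogously; this is not the paper's code.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

z = torch.linspace(0.0, 1.0, 200).reshape(-1, 1).requires_grad_(True)
x0 = 1.0                                   # assumed initial condition x(0) = 1

for epoch in range(50):                    # 50 epochs, as in the experiments
    # enforce the initial condition via the trial solution x = x0 + z * N(z)
    x = x0 + z * net(z)
    dxdz, = torch.autograd.grad(x, z, torch.ones_like(x), create_graph=True)
    residual = dxdz - (-x)                 # toy right-hand side g(x, z) = -x
    loss = (residual**2).mean()            # l2 loss on the DE residual
    opt.zero_grad()
    loss.backward()
    opt.step()
```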
Overall, at this point, our method provides a solution of the system of DEs
pertinent to the perturbed elliptic class of black hole equations of motion
for a particular sampled value of $\kappa$. However, our objective is to find
the probability distribution of the critical exponent, so that it can be
considered, for the first time in the literature, as a random variable.
Therefore, for each of the solutions associated with sampled values of
$\kappa$ we must compute the likelihood function (47) and find its acceptance
probability.
Using information contained in the likelihood function (47) and the prior
distribution, one can find the posterior distribution of the unknown
parameters $\pi({\bf\theta}|{\bf y})$. The posterior distribution enables us
to capture the pattern of the parameters of the DE system using all the
uncertainty involved in calculating the DE variables. The posterior
distribution of $\theta$ is calculated by
$\displaystyle\pi({\bf\theta}|{\bf y})=\frac{L({\bf\theta}|{\bf
y})\pi({\bf\theta};{\bf\alpha})}{\int_{\theta}L({\bf\theta}|{\bf
y})\pi({\bf\theta};{\bf\alpha})\,d{\bf\theta}}.$ (48)
Nevertheless, the posterior distribution (48) lacks a closed-form solution
because of the high non-linearity of the DE variables and the complex marginal
distributions of the DE variables. For this reason, one has to use iterative
methods such as Markov Chain Monte Carlo to find numerically the posterior
distribution of the parameters of the DE system [48].
Markov Chain Monte Carlo (MCMC) is a widely accepted numerical technique
within the Bayesian framework to estimate the posterior distribution of the DE
system. The power of the MCMC approach relies on the fact that it turns the
estimation task into sampling from the posterior distribution of interest.
Extensive research has been conducted in the literature about the properties
of the MCMC approach, including Metropolis-Hastings, Gibbs sampling, and
Sequential Monte Carlo [49, 35]. In a nutshell, MCMC employs various
probabilistic techniques to generate a Markov chain of samples approximating
the target posterior distribution. When the chain is run long enough, it
eventually reaches a stationary state that fluctuates around the true form of
the posterior distribution.
For our implementation, we follow the Metropolis-Hastings (MH) algorithm [34],
which is an established MCMC approach. The MH method calculates the
probability of the transition between the current state of the chain
$\theta^{(t)}$ and a candidate state $\theta^{*}$ simulated stochastically
from an independent proposal distribution $q(\theta)$. The MH method applies a
probabilistic step to accept or reject the proposed candidate. Let
$\theta^{(t)}$ represent the $t$-th state of the MCMC chain. Then we accept
the proposed state $\theta^{*}$ as the next state of the chain with
probability
$\displaystyle\min\left\\{1,\frac{\pi({\bf\theta}^{*}|{\bf
y})q({\bf\theta}^{(t)})}{\pi({\bf\theta}^{(t)}|{\bf
y})q({\bf\theta}^{*})}\right\\}.$ (49)
If the candidate ${\bf\theta}^{*}$ is accepted, then
${\bf\theta}^{(t+1)}={\bf\theta}^{*}$; otherwise,
${\bf\theta}^{(t+1)}={\bf\theta}^{(t)}$. The entire MCMC method is replicated
$N$ times to generate a sequence of $\\{{\bf\theta}^{(t)};t=1,\ldots,N\\}$
samples from the target posterior distribution $\pi({\bf\theta}|{\bf y})$.
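A minimal sketch of this accept-reject loop with an independent Gaussian proposal is shown below; the log-likelihood and log-prior functions, as well as the proposal parameters and chain length, are assumptions for illustration.

```python
# A minimal sketch of the Metropolis-Hastings loop (49) with an independent
# Gaussian proposal.  `log_likelihood` is a function such as the one sketched
# above and `log_prior` encodes pi(theta; alpha); both are assumed inputs.
import numpy as np

def metropolis_hastings(log_likelihood, log_prior, n_iter=2000,
                        prop_mean=3.5, prop_sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    log_q = lambda k: -0.5 * ((k - prop_mean) / prop_sd) ** 2
    chain = [prop_mean]
    log_post = log_likelihood(chain[0]) + log_prior(chain[0])
    for _ in range(n_iter - 1):
        cand = rng.normal(prop_mean, prop_sd)        # independent proposal
        log_post_cand = log_likelihood(cand) + log_prior(cand)
        # acceptance ratio of (49) in log space
        log_alpha = (log_post_cand + log_q(chain[-1])) - (log_post + log_q(cand))
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            chain.append(cand)
            log_post = log_post_cand
        else:
            chain.append(chain[-1])
    burn = n_iter // 10                              # discard the first 10%
    return np.array(chain)[burn:]
```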
Overall, we have detailed a novel model that allows for a Bayesian estimation
of the critical exponent on the elliptic black hole solution in 4d. In the
next section, we show all the numerical studies performed, and for the first
time in the literature, we are able to provide a probability distribution of
$\kappa$.
## 5 Numerical Studies
In this section, we plan to find the distribution of the critical exponent
$\kappa$ in the elliptic 4d equations of motion in the Bayesian framework. To
do so, the critical exponent is treated as a random variable, so we use the
Bayesian strategy to find the posterior distribution of $\kappa$ based on the
perturbed DE system. In this numerical study, we investigate the equations of
motion as the input DE system.
Hatefi et al. (2023) in [32] proposed ANNs to solve for the unperturbed DE
variables. We applied the properties of polynomial regression and found the
closed-form polynomial regression estimates for the unperturbed critical
collapse functions. Since the goal of this research is to estimate the
critical exponent, and the critical exponent is only observed in the perturbed
DE variables, we first followed [32] and used the ANNs based on
$\omega=1.176$ to numerically find the unperturbed critical collapse functions
in the entire domain of the DE system. To find the regression estimate of the
unperturbed critical collapse functions, we generated equally spaced points
$\{z_{1},\ldots,z_{n}\}$ of size $n=1000$ from the domain of the DE system. We
obtained the ANN estimates of the unperturbed critical collapse functions
evaluated at each of the 1000 points. Once we obtained the ANN estimates, we
used 750 observations as training and 250 as testing data to estimate the
closed-form polynomial regression of the unperturbed critical collapse
functions, as described in Subsection 4.1. Hence, the closed-form polynomial
estimates of the unperturbed critical collapse functions are derived by
$\displaystyle\widehat{b_{0}(z)}$
$\displaystyle=1.005-0.187z+0.480z^{2}-0.004z^{3},$ (50)
$\displaystyle\widehat{|f_{0}(z)|}$
$\displaystyle=0.919-0.122z-0.028z^{2}-0.002z^{3},$ (51)
$\displaystyle\widehat{\arg(f_{0}(z))}$
$\displaystyle=-0.011+0.041z+0.047z^{2}-0.012z^{3}.$ (52)
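For reference, the closed-form estimates (50)-(52) can be evaluated on a grid as in the following sketch; the upper end of the grid is an assumed illustrative value rather than the actual $z_{+}$.

```python
# Evaluating the closed-form estimates (50)-(52) on a grid.  np.polyval
# expects the coefficients ordered from the highest power down.
import numpy as np

z = np.linspace(0.0, 2.6, 200)    # illustrative grid; the physical domain ends at z_+
b0_hat    = np.polyval([-0.004,  0.480, -0.187,  1.005], z)   # eq. (50)
absf0_hat = np.polyval([-0.002, -0.028, -0.122,  0.919], z)   # eq. (51)
argf0_hat = np.polyval([-0.012,  0.047,  0.041, -0.011], z)   # eq. (52)
```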
Figure 2: The polynomial estimates of the unperturbed critical collapse
functions $b_{0}(z),\arg(f_{0}(z))$ and $|f_{0}(z)|$ of the equations of
motion based on polynomial orders of $l=1$ (black), $l=2$ (red) and $l=3$
(blue). The dots show the true functional form evaluated at the validation
points.
Figure 2 shows the training and test performance of the polynomial estimates
for the critical collapse functions for polynomial orders $l=1,2,3$. One can
easily see that the polynomial of order $l=3$ can capture very well the non-
linearity of the unperturbed critical collapse functions. Once we found the
unperturbed DE variables $b_{0}(z),|f_{0}(z)|$ and $\arg(f_{0}(z))$, we
incorporated these estimates and updated the perturbed equations of motion. In
the next step, we consider these perturbed equations of motion as our
underlying DE system to estimate the critical exponent.
We proposed an artificial neural network-assisted Metropolis-Hastings to
obtain the posterior distribution of $\kappa$. To do so, we first generated
$\kappa$ candidates from a proposal distribution. Here, we considered two
proposal distributions: the uniform distribution on (3,4) and the Gaussian
distribution with mean 3.5 and variance 1. These choices avoid the singularity
issue, and their range includes the established solution available in the
literature [28].
In the next step, we apply fully connected neural networks using the $\kappa$
candidate and find the solution to the perturbed equations of motion
corresponding to the $\kappa$ candidate. We implemented the fully connected
ANNs using the Python package neurodiffeq [50]. The neural networks used
contain 4 hidden layers, each with 16 neurons. The training was done for 50
epochs. We then computed the likelihood function (47) and found the acceptance
probability of the Metropolis-Hastings algorithm under a non-informative
uniform prior. Because the ranges of the DE variables differ dramatically, we
assign $\sigma_{b_{1}}=7$ and $\sigma_{g}=2$ in the likelihood function of
$\kappa$ given the perturbed DE variables.
It is shown in the literature that the perturbed $b_{1}(z)$ DE variable is
linear with respect to $\kappa$; however, the perturbed $f_{1m}(z)=g_{m}(z)$
contains the higher orders of $\kappa$ [28]. To investigate the effects of the
different DE variables on the MCMC chains and the Bayesian distribution of
$\kappa$, we evaluated the performance of the stochastic accept-reject method
based on three scenarios. These scenarios include the accept-reject step based
on only the observed perturbed DE variable of $b_{1}(z)$, accept-reject based
on only the observed perturbed DE variable $g_{m}(z)$ and accept-reject based
on both observed perturbed DE variables $g_{m}(z)$ and $b_{1}(z)$. Finally,
the entire neural network-assisted Metropolis-Hastings approach was carried
out for 2000 iterations, where in each iteration the perturbed equations of
motion were computed using artificial neural networks. We discard the first
10% of the history of the MCMC chain to wash out the effect of the
initialization step on the performance of the MCMC chain.
Figures 3-8 show the posterior distributions and trace plots of the $\kappa$
parameter based on 1800 MCMC samples (after discarding the burn-in period)
using our artificial neural networks-assisted Metropolis-Hastings approach.
The results are based on three accept-reject approaches and two proposal
distributions, namely the uniform and Gaussian distributions. From Figures
3-8, we observe that the posterior mean and posterior mode, as two Bayesian
estimates of $\kappa$, range between 3.2 and 3.8, which verifies the finding
in the literature that $\kappa^{*}\approx 3.7858$ for the equations of motion
in the 4d elliptic class [28]. From the trace plots, we observe that the MCMC
chain readily explores the domain of the parameter of interest and supports
solutions between 3.2 and 3.8.
As one needs the ANNs in each iteration of the MCMC chain to evaluate the
accept-reject steps, we show the results of the loss function of ANNs in
Figures 9-14 (in the Appendix). To show the convergence of the ANNs, we
computed the difference between training and validation loss functions in the
last epoch of the ANNs over 1800 MCMC samples. In each figure, we also show
the entire trajectory of the training and validation loss functions over all
the epochs for a randomly selected MCMC sample. From Figures 9-14, we clearly
observe that the loss differences are very small, of order $10^{-6}$,
fluctuating around zero. This highlights the convergence of the ANNs in
estimating the critical exponent in a Bayesian framework.
The unique solution for the elliptic case in four dimensions was obtained in
[28]. Indeed, from the behaviour of $\det A(\kappa)$ near the last crossing of
the horizontal axis, the critical value was estimated to be
$\kappa^{*}_{4E}\approx 3.7858\,,$ (53)
which gives rise to the Choptuik exponent $\gamma_{4E}\approx 0.2641$. In this
paper, we have used various statistical methods to explore the entire range of
the critical exponent rather than only the last crossing. Surprisingly, our
result
$3.2<\kappa^{*}_{4E}<3.8$ (54)
is in perfect agreement with the literature, see [41]. Hence, we can take this
very non-trivial match as evidence that our numerical setups are reliable and
can be extended to any dimension and any matter content.
Figure 3: The posterior distribution and trace plot of the $\kappa$ based on
MCMC samples from the Metropolis-Hastings algorithm using only the perturbed
DE variable $b_{1}(z)$ to accept or reject the transitions under the Uniform
proposal distribution. Figure 4: The posterior distribution and trace plot of
the $\kappa$ based on MCMC samples from the Metropolis-Hastings algorithm
using only the perturbed DE variable $f_{1m}(z)=g_{m}(z)$ to accept or reject
the transitions under the Uniform proposal distribution. Figure 5: The
posterior distribution and trace plot of the $\kappa$ based on MCMC samples
from the Metropolis-Hastings algorithm using both the perturbed DE variables
$b_{1}(z)$ and $g_{m}(z)$ to accept or reject the transitions under the
Uniform proposal distribution.
## 6 Conclusion
In this paper, we advance the current literature on critical exponents for
black hole solutions by examining all solutions within the domains of the
linear perturbation equations of motion. Specifically, we investigate the
quantum perturbation theory for a four-dimensional Einstein-axion-dilaton
system under an elliptic class of $\text{SL}(2,\mathbb{R})$ transformations.
Utilizing quantum perturbation theory, we propose artificial neural network-
assisted Metropolis-Hastings algorithms for Bayesian estimation of the
critical exponent of the elliptic black hole solution in 4d.
Unlike conventional methods, we investigated the range of possible values for
critical exponents using quantum perturbation theory. We conducted a thorough
analysis not only of the perturbed differential equation (DE) variables
$b_{1}(z)$ and $f_{1m}(z)=g_{m}(z)$ but also of the DE variables for both
$b_{1}(z)$ and $g_{m}(z)$ equations simultaneously. This comprehensive
approach allowed us to assess their effects on stochastic accept-reject
transitions within the posterior distributions of the critical exponent. Our
innovative probabilistic method stands out from existing techniques by
offering the established solution while simultaneously exploring the entire
spectrum of physically distinguishable critical exponents, accounting for
potential numerical measurement errors. This advancement provides a more
robust and inclusive understanding of critical exponents, highlighting the
limitations of previous approaches and paving the way for more accurate
predictions in complex systems.
Our new methods introduce innovative approaches for exploring the potential
range of allowed values for the critical exponent, applicable to all different
conjugacy classes of the $\mathrm{SL}(2,\mathbb{R})$ transformation. Moreover,
it is important to emphasize that these methods can be extended to various
types of matter content. This is a direction we are keen to pursue in future
research.
It is interesting to note that, besides the critical exponents’ dependence on
matter content, dimension, and the various solutions of self-similar critical collapse
[25], these exponents can span an entire range of values rather than being
confined to a single localized value. Therefore, we conclude that the
conjecture regarding the universality of the Choptuik exponent does not hold.
However, some universal behaviours might be embedded in the combinations of
critical exponents and other parameters of the given theory that our current
analysis has not accounted for. Our new findings clearly indicate that the
standard expectations of statistical mechanics do not transfer to the context
of critical gravitational collapse.
## 7 Acknowledgments
E. Hatefi would like to thank Pablo Diaz, E. Hirschmann, Philip Siegmann, L.
Alvarez-Gaume, and A. Sagnotti for their useful discussions and support. E.
Hatefi is supported by the María Zambrano Grant of the Ministry of
Universities of Spain. Armin Hatefi acknowledges the support from the Natural
Sciences and Engineering Research Council of Canada (NSERC).
## References
* [1] M.W. Choptuik, Universality and Scaling in Gravitational Collapse of a Massless Scalar Field, Phys. Rev. Lett. 70, 9 (1993).
* [2] D. Christodoulou, “The Problem of a Self-gravitating Scalar Field,” Commun. Math. Phys. 105 (1986) 337; “Global Existence of Generalized Solutions of the Spherically Symmetric Einstein Scalar Equations in the Large,” Commun. Math. Phys. 106 (1986) 587;“The Structure and Uniqueness of Generalized Solutions of the Spherically Symmetric Einstein Scalar Equations,” Commun. Math. Phys. 109 (1987) 591.
* [3] R. S. Hamade and J. M. Stewart, “The Spherically symmetric collapse of a massless scalar field,” Class. Quant. Grav. 13 (1996) 497 [arXiv:gr-qc/9506044].
* [4] C. Gundlach, “Critical phenomena in gravitational collapse,” Phys. Rept. 376 (2003) 339 [gr-qc/0210101].
* [5] T. Koike, T. Hara, and S. Adachi, “Critical Behavior in Gravitational Collapse of Radiation Fluid: A Renormalization Group (Linear Perturbation) Analysis,” Phys. Rev. Lett. 74 (1995) 5170 [gr-qc/9503007].
* [6] L. Alvarez-Gaume, C. Gomez and M. A. Vazquez-Mozo, “Scaling Phenomena in Gravity from QCD,” Phys. Lett. B 649 (2007) 478 [hep-th/0611312].
* [7] M. Birukou, V. Husain, G. Kunstatter, E. Vaz and M. Olivier, “Scalar field collapse in any dimension,” Phys. Rev. D 65 (2002) 104036 [gr-qc/0201026].
* [8] V. Husain, G. Kunstatter, B. Preston and M. Birukou, “Anti-de Sitter gravitational collapse,” Class. Quant. Grav. 20 (2003) L23 [gr-qc/0210011].
* [9] E. Sorkin and Y. Oren, “On Choptuik’s scaling in higher dimensions,” Phys. Rev. D 71, 124005 (2005) [arXiv:hep-th/0502034].
* [10] J. Bland, B. Preston, M. Becker, G. Kunstatter and V. Husain, “Dimension-dependence of the critical exponent in spherically symmetric gravitational collapse,” Class. Quant. Grav. 22 (2005) 5355 [gr-qc/0507088].
* [11] E. W. Hirschmann and D. M. Eardley, “Universal scaling and echoing in gravitational collapse of a complex scalar field,” Phys. Rev. D 51 (1995) 4198 [gr-qc/9412066].
* [12] J. V. Rocha and M. Tomašević, “Self-similarity in Einstein-Maxwell-dilaton theories and critical collapse,” Phys. Rev. D 98 (2018) no.10, 104063 [arXiv:1810.04907 [gr-qc]].
* [13] L. Alvarez-Gaume, C. Gomez, A. Sabio Vera, A. Tavanfar and M. A. Vazquez-Mozo,“Critical gravitational collapse: towards a holographic understanding of the Regge region,” Nucl. Phys. B 806 (2009) 327 [arXiv:0804.1464 [hep-th]].
* [14] C. R. Evans and J. S. Coleman, “Observation of critical phenomena and selfsimilarity in the gravitational collapse of radiation fluid,” Phys. Rev. Lett. 72 (1994) 1782 [gr-qc/9402041].
* [15] D. Maison, “Non-Universality of Critical Behaviour in Spherically Symmetric Gravitational Collapse,” Phys. Lett. B 366 (1996) 82 [gr-qc/9504008].
* [16] A. Strominger and L. Thorlacius, “Universality and scaling at the onset of quantum black hole formation,” Phys. Rev. Lett. 72 (1994) 1584 [hep-th/9312017].
* [17] E. W. Hirschmann and D. M. Eardley, “Critical exponents and stability at the black hole threshold for a complex scalar field,” Phys. Rev. D 52 (1995) 5850 [gr-qc/9506078].
* [18] A. M. Abrahams and C. R. Evans, “Critical behavior and scaling in vacuum axisymmetric gravitational collapse,” Phys. Rev. Lett. 70 (1993) 2980.
* [19] L. Alvarez-Gaume, C. Gomez, A. Sabio Vera, A. Tavanfar and M. A. Vazquez-Mozo, “Critical formation of trapped surfaces in the collision of gravitational shock waves,” JHEP 0902, 009 (2009) [arXiv:0811.3969 [hep-th]].
* [20] E. W. Hirschmann and D. M. Eardley, “Criticality and bifurcation in the gravitational collapse of a selfcoupled scalar field,” Phys. Rev. D 56 (1997) 4696 [gr-qc/9511052].
* [21] J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,”Int. J. Theor. Phys. 38 (1999), 1113-1133, Adv. Theor. Math. Phys. 2, arXiv:hep-th/9711200, E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys. 2 (1998), 253-291,hep-th/9802150, S. Gubser, I. R. Klebanov and A. M. Polyakov, “Gauge theory correlators from noncritical string theory,” Phys. Lett. B 428 (1998), 105-114, hep-th/9802109.
* [22] D. Birmingham, “Choptuik scaling and quasinormal modes in the AdS / CFT correspondence,” Phys. Rev. D 64 (2001), 064024 [arXiv:hep-th/0101194 [hep-th]].
* [23] E. Hatefi, A. Nurmagambetov and I. Park, “ADM reduction of IIB on $\mathcal{H}^{p,q}$ to dS braneworld,” JHEP 04 (2013), 170, arXiv:1210.3825 , “$N^{3}$ entropy of $M5$ branes from dielectric effect,” NPB866 (2013), 58-71, arXiv:1204.2711, S. de Alwis, R. Gupta, E. Hatefi and F. Quevedo, “Stability, Tunneling and Flux Changing de Sitter Transitions in the Large Volume String Scenario,” JHEP 11 (2013), 179, arXiv:1308.1222; E. Hatefi and I. Y. Park, Phys. Rev. D 85 (2012), 125039 [arXiv:1203.5553 [hep-th]]; E. Hatefi, JHEP 05 (2010), 080,[arXiv:1003.0314 [hep-th]].
* [24] E. Hatefi, Phys. Rev. D 86 (2012), 046003, [arXiv:1203.1329 [hep-th]]; E. Hatefi and I. Y. Park, Nucl. Phys. B 864 (2012), 640-663 [arXiv:1205.5079 [hep-th]]; E. Hatefi, JCAP 09 (2013), 011, [arXiv:1211.5538 [hep-th]]; E. Hatefi, JHEP 07 (2013), 002, [arXiv:1304.3711 [hep-th]]; E. Hatefi, JHEP 04 (2013), 070, [arXiv:1211.2413 [hep-th]]; E. Hatefi, JHEP 11 (2013), 204, [arXiv:1307.3520 [hep-th]].
* [25] R. Antonelli and E. Hatefi, “On self-similar axion-dilaton configurations,” , JHEP 03 (2020), 074 [arXiv:1912.00078 [hep-th]].
* [26] L. Álvarez-Gaumé and E. Hatefi, “Critical Collapse in the Axion-Dilaton System in Diverse Dimensions,” Class. Quant. Grav. 29 (2012) 025006 [arXiv:1108.0078 [gr-qc]].
* [27] L. Álvarez-Gaumé and E. Hatefi, “More On Critical Collapse of Axion-Dilaton System in Dimension Four,” JCAP 1310 (2013) 037 [arXiv:1307.1378 [gr-qc]].
* [28] R. Antonelli and E. Hatefi, “On Critical Exponents for Self-Similar Collapse,” JHEP 03 (2020), 180 [arXiv:1912.06103 [hep-th]].
* [29] E. Hatefi and A. Hatefi, “Estimation of Critical Collapse Solutions to Black Holes with Nonlinear Statistical Models,” Mathematics 10 (2022) no.23, 4537 [arXiv:2110.07153 [gr-qc]].
* [30] E. Hatefi and A. Hatefi, “Nonlinear statistical spline smoothers for critical spherical black hole solutions in 4-dimension,” Annals Phys. 446 (2022), 169112 [arXiv:2201.00949 [gr-qc]].
* [31] E. Hatefi, A. Hatefi and R. J. López-Sastre, “Analysis of black hole solutions in parabolic class using neural networks,” Eur. Phys. J. C 83 (2023) no.07, 623 [arXiv:2302.04619 [gr-qc]].
* [32] A. Hatefi, E. Hatefi and R. J. López-Sastre, “Modeling the complexity of elliptic black hole solution in 4D using Hamiltonian Monte Carlo with stacked neural networks,” JHEP 10 (2023), 034 [arXiv:2307.14515 [gr-qc]].
* [33] A. Hatefi and E. Hatefi, “Sequential Monte Carlo with cross-validated neural networks for complexity of hyperbolic black hole solutions in 4D,” Eur. Phys. J. C 83 (2023) no.11, 1083 [arXiv:2308.07907 [hep-th]].
* [34] Robert, C. P., Casella, G.,“ Monte Carlo statistical methods“ (Vol. 2). New York: Springer, (1999).
* [35] Murphy, K. P., “Probabilistic machine learning: Advanced topics“. MIT Press, 2023.
* [36] A. Sen, “Strong - weak coupling duality in four-dimensional string theory,” Int. J. Mod. Phys. A 9 (1994) 3707 [hep-th/9402002].
* [37] J. H. Schwarz, “Evidence for nonperturbative string symmetries,” Lett. Math. Phys. 34 (1995) 309 [hep-th/9411178].
* [38] M.B. Green, J.H. Schwarz and E. Witten, 1987 Superstring Theory Vols I,II, Cambridge University Press,
* [39] J. Polchinski, 1998 String Theory, Vols I,II, Cambridge University Press
* [40] A. Font, L. E. Ibanez, D. Lust and F. Quevedo, “Strong - weak coupling duality and nonperturbative effects in string theory,” Phys. Lett. B 249 (1990) 35.
* [41] D. M. Eardley, E. W. Hirschmann and J. H. Horne, “S duality at the black hole threshold in gravitational collapse,” Phys. Rev. D 52 (1995) 5397 [arXiv:gr-qc/9505041].
* [42] E. Hatefi and E. Vanzan, “On Higher Dimensional Self-Similar Axion-Dilaton Solutions,” Eur. Phys. J. C 80 (2020), 10 [arXiv:2005.11646 [hep-th]].
* [43] R. S. Hamade, J. H. Horne and J. M. Stewart, “Continuous Self-Similarity and $S$-Duality,” Class. Quant. Grav. 13 (1996) 2241 [arXiv:gr-qc/9511024].
* [44] E. Hatefi and A. Kuntz, “On perturbation theory and critical exponents for self-similar systems,” Eur. Phys. J. C 81 (2021) no.15 [arXiv:2010.11603 [hep-th]].
* [45] A. Ghodsi and E. Hatefi, “Extremal rotating solutions in Horava Gravity,” Phys. Rev. D 81 (2010) 044016 [arXiv:0906.1237 [hep-th]].
* [46] F. Harrell Jr, “ Regression modeling strategies: with applications to linear models, logistic and ordinal regression, and survival analysis,“ Springer (2015).
* [47] Goodfellow, I., Bengio, Y., & Courville, A. “Deep learning”, MIT press. (2016)
* [48] Girolami, M., “Bayesian inference for differential equations. “ Theoretical Computer Science, 408(1), 4-16, (2008).
* [49] Bishop, C. M., and Nasrabadi, N. M., “Pattern recognition and machine learning“ (Vol. 4, No. 4, p. 738). New York: Springer, (2006).
* [50] F. Chen, D. Sondak, P. Protopapas, P.Mattheakis, M. Liu, S.Agarwal, D. & Di Giovanni, M., “NeuroDiffEq: A Python package for solving differential equations with neural networks,” Journal Of Open Source Software. 5, 1931 (2020)
## Appendix
Figure 6: The posterior distribution and trace plot of the $\kappa$ based on
MCMC samples from the Metropolis-Hastings algorithm using only the perturbed
DE variable $b_{1}(z)$ to accept or reject the transitions under the Gaussian
proposal distribution. Figure 7: The posterior distribution and trace plot of
the $\kappa$ based on MCMC samples from the Metropolis-Hastings algorithm
using only the perturbed DE variable $f_{1m}(z)=g_{m}(z)$ to accept or reject
the transitions under the Gaussian proposal distribution. Figure 8: The
posterior distribution and trace plot of the $\kappa$ based on MCMC samples
from the Metropolis-Hastings algorithm using both the perturbed DE variables
$b_{1}(z)$ and $g_{m}(z)$ to accept or reject the transitions under the
Gaussian proposal distribution. Figure 9: The difference between the train
and validation losses in the last epochs over MCMC samples (A) and the
histogram of the loss differences (B). The train and validation losses for one
random MCMC sampling (C). The MCMC samples are from the Metropolis-Hastings
algorithm, using only the perturbed DE variable $b_{1}(z)$ to accept or reject
the transitions under the Uniform proposal distribution. Figure 10: The
difference between the train and validation losses in the last epochs over
MCMC samples (A) and the histogram of the loss differences (B). The train and
validation losses for one random MCMC sampling (C). The MCMC samples are from
the Metropolis-Hastings algorithm, using only the perturbed DE variable
$g_{m}(z)$ to accept or reject the transitions under the Uniform proposal
distribution. Figure 11: The difference between the train and validation
losses in the last epochs over 1800 MCMC samples (A) and the histogram of the
loss differences (B). The train and validation losses for one random MCMC
sampling (C). The MCMC samples are from the Metropolis-Hastings algorithm,
using both perturbed DE variables $b_{1}(z)$ and $g_{m}(z)$ to accept or
reject the transitions under the Uniform proposal distribution. Figure 12:
The difference between the train and validation losses in the last epochs over
MCMC samples (A) and the histogram of the loss differences (B). The train and
validation losses for one random MCMC sampling (C). The MCMC samples are from
the Metropolis-Hastings algorithm, using only perturbed DE variables
$b_{1}(z)$ to accept or reject the transitions under the Gaussian proposal
distribution. Figure 13: The difference between the train and validation
losses in the last epochs over MCMC samples (A) and the histogram of the loss
differences (B). The train and validation losses for one random MCMC sampling
(C). The MCMC samples are from the Metropolis-Hastings algorithm, using only
the perturbed DE variable $g_{m}(z)$ to accept or reject the transitions under
the Gaussian proposal distribution. Figure 14: The difference between the
train and validation losses in the last epochs over MCMC samples (A) and the
histogram of the loss differences (B). The train and validation losses for one
random MCMC sampling (C). The MCMC samples are from the Metropolis-Hastings
algorithm, using both perturbed DE variables $b_{1}(z)$ and $g_{m}(z)$ to
accept or reject the transitions under the Gaussian proposal distribution.
We observe that NMF-LR achieves the smallest reconstruction error among all methods but suffers for the classification task. This is expected from the construction of the dataset, as the synthetic images are nonnegative linear combinations of images of digits `2' and `5', and the same dictionary atoms are also used as filters for label prediction. On the other hand, logistic regression on pixels does not compress the data matrix, so we assigned it a relative reconstruction error of 1. In Figure \ref{fig:MNIST_pareto}, we observe that, except in a few instances, most SDL models lie between the two extremes of (1) LR and (2) NMF-LR, in the sense of achieving significantly better classification performance (in terms of both accuracy and F-score) with small reconstruction error. For instance, SDL-filt with $\xi=10$ achieves a relative reconstruction error of about 0.22 and classification accuracy above $80\%$, which is more than double the performance of NMF-LR and also about $11\%$ better than the LR accuracy. It is interesting to note that SDL-conv-filt with $\xi=0.1$ achieves the best classification accuracy of about $91\%$, but its reconstruction error is quite large at around $0.7$.
For a more qualitative analysis, we plot and compare various estimated dictionaries $\hat{\W}$. Figure \ref{fig:MNIST_filt_dict} shows how the dictionary matrix $\hat{\W}$ estimated by SDL-filter changes with the tuning parameter $\xi\in\{0.01, 0.1, 1\}$. When $\xi=1$, the combined SDL loss in \eqref{eq:ASDL_1} puts significant weight on the matrix factorization term, so the learned dictionary $\hat{\W}$ should be reconstructive of the synthesized images. Indeed, the learned atoms in Figure \ref{fig:MNIST_filt_dict} (left) show the shapes of digits `5' and `2'. Further, the second atom, resembling `2', is associated with a negative regression coefficient, indicating that being close to `2' may be partially aligned with being close to `7', which corresponds to being a negative example. On the other hand, decreasing the value of $\xi$ increases the amount of supervision. The learned atoms in Figure \ref{fig:MNIST_filt_dict} (middle and right) resemble the digits `2' and `5' less; instead, some abstract shapes with large positive (for $\xi=1$) and large negative (for $\xi=0.01$) regression coefficients are learned, and the resulting classification accuracies increase. The other atoms resemble the shape of `8', which is seemingly learned from superpositions of `2' and `5'. One can regard the learned dictionary atoms as the `best effort' of SDL-filter in balancing the partially aligned tasks of reconstruction and discrimination over the space of images formed by linear combinations of `2' and `5'.
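To make the role of the tuning parameter concrete, the following is a schematic Python sketch of evaluating a combined objective of the SDL-filter type: a logistic classification loss on the filtered features $\W^{\top}{\bf x}$ plus $\xi$ times the reconstruction error. The exact objective used in our experiments is \eqref{eq:ASDL_1}; the function names, shapes and normalization below are only illustrative assumptions.
\begin{verbatim}
import numpy as np

def sdl_filter_objective(W, H, beta, b, X, y, xi):
    """Schematic combined objective: logistic loss on the filtered features
    beta^T (W^T x_i) + b, plus xi times the matrix factorization error.
    X: (p, n) data, y: (n,) labels in {0, 1}, W: (p, r), H: (r, n),
    beta: (r,) regression coefficients, b: intercept."""
    activations = beta @ (W.T @ X) + b
    probs = 1.0 / (1.0 + np.exp(-activations))
    class_loss = -np.mean(y * np.log(probs + 1e-12)
                          + (1 - y) * np.log(1 - probs + 1e-12))
    recon_loss = np.linalg.norm(X - W @ H, "fro") ** 2 / X.shape[1]
    return class_loss + xi * recon_loss
\end{verbatim}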
%These three $\hat{W}_x$ were obtained by the filter-based method. When $\xi=0$, the loss function does not contain matrix factorization loss, therefore, the $\hat{W}_x$ has highlighted pixels rather than the shape of a specific number related to the class (left). On the contrary, when $\xi=1$, $\hat{W}_x$ is very close to the image of original basis matrix $W_{{true}_X}$ (right). $\hat{W}_x$ with $\xi = 0.5$ was obtained as the combination of logistic regression loss and matrix factorization loss with equal weight (middle). The second basis matrix looks 8 generated by the combination of 2 and 5 properly to predict the second class, 8.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{Figures/MNIST_filt_dict_xi.pdf}
\caption{Estimated dictionary matrix $\hat{\W}$ from SDL-filter depending on the level of tuning parameter $\xi\in \{ 1,0.1,0.01 \}$. The smaller the tuning parameter is the stronger the supervision effect is.}
\label{fig:MNIST_filt_dict}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{Figures/MNIST_dict_comparison.pdf}
\caption{Estimated basis matrix $\hat{W}_x$ from filter based method depending on the level of tuning parameter $\xi$. $\xi = 0$ (Left), $\xi = 0.5$ (Middle), $\xi = 1$ (Right)}
\label{fig:MNIST_dict_comparison}
\end{figure}
In Figure \ref{fig:MNIST_dict_comparison}, we plot the estimated dictionaries $\hat{\W}$ from all five methods except LR, with chosen levels of the tuning parameter $\xi$. It is interesting that NMF learned almost identical atoms (with inferior classification performance) that resemble the typical shape of nonnegative linear combinations of digits `2' and `5', instead of learning separate atoms of shapes `2' and `5' individually. For convex SDL algorithms, we find that typically a larger tuning parameter is required for fast convergence, which we also observed in Figure \ref{fig:benchmark_MNIST}.
\section{Applications}
\subsection{Supervised Topic Modeling on fake job postings dataset}
%Fake job postings have been causing financial losses to individuals seeking jobs.
According to the Better Business Bureau, a non-profit organization that monitors and evaluates job postings, there were 3,434 fake job postings reported in 2019. These scam postings result in substantial financial losses, with an average loss per victim of \$3,000 according to FBI reports [7]. In this section, we use our methods to classify fake job postings on the `real-or-fake-job posting-prediction' dataset from Kaggle [6]. In doing so, the new method can also simultaneously learn the topics that are most effective in classifying fake job postings. We compare the performance of classifiers based on multiple metrics such as accuracy and F-score, and identify the factors that are highly associated with fake job postings. Identifying the relevant characteristics of scams and building prediction models will help prevent potential financial losses in advance.
\subsubsection{Dataset description and preliminary analysis}
\label{sec:fakejob_setup}
There are 17,880 postings and 15 variables in the dataset including binary variables, categorical variables, and textual information of \textit{job description}. Among the 17,880 postings, 17,014 are true job postings (95.1\%) and 866 are fraudulent postings (4.84\%), which shows a high imbalance between the two classes. This imbalance is the main characteristic of the dataset. We coded fake job postings as positive examples and true job postings as negative examples. Due to the high imbalance, the accuracy of classification can be trivially high (e.g., by classifying everything to be negative), and hence achieving a high F-score is of importance.
In our experiments, we represented each job posting as a $p=2480$ dimensional word frequency vector computed from its \textit{job description} and augmented with $q=72$ auxiliary covariates of binary and categorical variables, including indicators of whether the posting has a company logo and whether the posted job is in the United States. For computing the word frequency vectors, we represent the job description variable as a term/document frequency matrix with Term Frequency-Inverse Document Frequency (TF-IDF) normalization [60]. TF-IDF measures the relative importance of each term in a collection of documents. If a word is common to all documents, then it is less likely to carry an important meaning. %After filtering,
The top 2480 most frequent words were used for the analysis. %The below Figure \ref{fig:Figure4} shows the 10 most frequent words from the variable according to fraudulent and true job postings.
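As an illustration, a TF-IDF matrix restricted to the 2480 most frequent words can be computed as in the following sketch; the list \texttt{descriptions} holds hypothetical raw job-description strings standing in for the Kaggle data.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer

# hypothetical raw job-description strings; in practice these come from the
# 'real-or-fake-job posting-prediction' dataset
descriptions = [
    "Earn money fast from home, click the link to apply now",
    "Our client seeks a developer to join the website team",
    "Join our team and work with clients on data projects",
]

vectorizer = TfidfVectorizer(max_features=2480, stop_words="english")
X_tfidf = vectorizer.fit_transform(descriptions)  # sparse (n_postings, <=2480)
vocab = vectorizer.get_feature_names_out()
\end{verbatim}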
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Figures/fakejob_LR_coefficients.pdf}
\caption{The top 20 variables with largest coefficients from logistic regression on $p=2480$ words in job description (left) and $p+q=2480+72$ words and auxiliary covariates combined (right). In the left panel, red bars indicate the words that appear as dominant keywords in the forthcoming topic modeling analysis. In the right panel, blue bars indicate words and grey bars indicate auxiliary covariates.}
\label{fig:fakejob_LR}
\end{figure}
As a preliminary analysis, we first apply logistic regression either on the $p$-dimensional word frequency vectors or on the $(p+q)$-dimensional combined feature vectors (Figure \ref{fig:fakejob_LR} right). For the former experiment, Figure \ref{fig:fakejob_LR} left shows 10 words each with positive and negative regression coefficients with the largest absolute values. The results indicate that having a large frequency of words such as `earn', `money', and `link' is positively correlated with being a fake job, whereas postings with a high frequency of words such as `team', `client', and `website' as well as with company logo and jobs outside of the US are more likely to be true job postings.
\subsubsection{Supervised Topic Modeling with Auxiliary Covariates}
Topic modeling is a classical technique in text data analysis that seeks to find a small number of `topics', which are groups of words that share semantic context. The grounding assumption is that a given text may be built upon such topics as latent variables. Methods such as nonnegative matrix factorization (NMF) [40] and latent Dirichlet allocation (LDA) [65, 15, 33] have been successfully used to detect or estimate such latent semantic factors. Also, `supervised' topic modeling techniques have been studied, where one seeks to group words not only by their semantic contexts but also using their `functional contexts' that are provided by additional class labels. See, for example, [47] for LDA-based approaches and [29] for NMF combined with linear regression model (see \eqref{eq:SDL_regression_H1}).
Here we mainly compare two methods, namely, (1) NMF with logistic regression and (2) SDL-filter with nonnegativity constraints on $\W$ and $\H$. However, we do compare the performance of all four SDL models in Figures <ref> and <ref>. We note that, for the purpose of topic modeling, it is crucial to use a nonnegativity constraint on the dictionary matrix $\W$ in the SDL model (<ref>), as word frequencies are nonnegative and we would like to decompose a given document's word frequencies additively rather than subtractively for better interpretability (see, e.g., [40]).
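For concreteness, the following sketch illustrates the plain NMF-plus-logistic-regression baseline with $r=25$ topics using scikit-learn; it is only a schematic stand-in for our implementation, with synthetic placeholders for the TF-IDF matrix, vocabulary and labels.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

# synthetic stand-ins for the TF-IDF matrix, vocabulary, and labels
rng = np.random.default_rng(0)
X_tfidf = rng.random((100, 60))                  # nonnegative document-term matrix
vocab = np.array([f"word{i}" for i in range(60)])
y = rng.integers(0, 2, size=100)                 # fake (1) / true (0) labels

nmf = NMF(n_components=25, init="nndsvda", max_iter=500)
codes = nmf.fit_transform(X_tfidf)               # (n_postings, 25) topic codes
topics = nmf.components_                         # (25, n_words) topic-word matrix

clf = LogisticRegression(max_iter=1000).fit(codes, y)
top_words = [vocab[topics[k].argsort()[::-1][:10]] for k in range(25)]
\end{verbatim}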
[Figure: Comparison between (supervised) topics learned by NMF and SDL-filter for the fake job posting data. (a) Four out of 25 topics learned by NMF are shown together with the corresponding logistic regression coefficients. (b) Four out of 20 supervised topics learned by SDL-filter with tuning parameter $\xi=5$ are shown together with the corresponding logistic regression coefficients. (c) Similar to (b) but with tuning parameter $\xi=1$. (d) Nine out of 20 supervised topics (white background) and 72 auxiliary covariates (dark background) learned by SDL-filter with 72 auxiliary covariates are shown with their corresponding logistic regression coefficients. The corresponding classification accuracy and F-scores are shown in the subtitles, with fake job postings being the positive examples. For topic wordclouds (white background), word sizes are proportional to their frequency.]
First consider Figure <ref> (a), which shows topics (displayed as wordclouds) learned by NMF and their associated regression coefficients. Namely, after learning a dictionary matrix $\W\in\R^{p\times 25}$ by NMF from the job description matrix of shape $p\times n$ with $n=17,780$, each of the $r=25$ columns of $\W$ becomes a topic frequency vector, and the top 10 words with the highest frequencies are shown as a wordcloud. NMF was able to find topics that summarize specific job information. More specifically, the upper right and lower left topics correspond to beauty- and healthcare-related jobs. However, as can be seen from the low F-score reported in Figure <ref> (a), while the 25 topics learned by NMF give generic job descriptions, they may not be helpful in determining whether a job posting is fake. The main reason is that the dataset is highly imbalanced: since most of the postings are true job postings (95\%), when we first conduct dimension reduction based on NMF, the learned topics are mainly determined by the dominant true job postings rather than by the fake ones.
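The NMF-LR baseline described above can be sketched as follows; this is only an illustration of the two-stage pipeline (dimension reduction by NMF, then logistic regression on the topic codes), not the exact experimental script, and it continues the variables of the earlier snippets.
\begin{verbatim}
# Minimal sketch of the NMF-LR baseline: 25 topics by NMF, then logistic
# regression on the 25-dimensional topic representation of each posting.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

nmf = NMF(n_components=25, init="nndsvd", max_iter=500)
H_codes = nmf.fit_transform(X_words)     # (n, 25): topic weights per posting
W_topics = nmf.components_               # (25, 2480): word weights per topic
clf = LogisticRegression(max_iter=1000).fit(H_codes, y)

words = np.array(vectorizer.get_feature_names_out())
for k in range(25):                      # top-10 words per topic (wordclouds)
    print(k, round(clf.coef_.ravel()[k], 3), words[np.argsort(W_topics[k])[-10:]])
\end{verbatim}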
On the other hand, selected supervised topics (out of $20$ total) learned by SDL-filter with $\xi=5$ and $\xi=1$ are shown in Figures <ref> (b) and (c). In each case, the upper left and lower right topics are the ones with the largest positive and negative regression coefficients, respectively, and the upper right and lower left ones are manually selected for illustration purposes. For $\xi=1$ in Figure <ref> (c), notice that the upper left topic with positive regression coefficient consists of words that appear frequently in fake job postings (e.g., `money', `earn', `link'), while the lower right topic uses words from true job postings (e.g., `team', `client', `website'); both groups of words were also detected by logistic regression in Figure <ref>. Topics with neutral regression coefficients are mainly used to reconstruct the data matrix rather than for classification purposes. Note that the corresponding F-score of 0.43 is achieved using only 20 variables (topics) and is on par with the F-scores obtained by logistic regression using $p=2480$ or $p+q=2552$ variables in Figure <ref>.
Increasing the tuning parameter $\xi$ from 1 to 5 weakens the supervision effect. Accordingly, the two neutral topics in Figure <ref> (b) become generic job descriptions similar to those found by NMF in Figure <ref> (a), but the two extreme ones (upper left and lower right) maintain similar content and large absolute values of their regression coefficients.
We also conduct a similar analysis using SDL-filter with $\xi=0.001$ and $r=20$ topics along with the 72 auxiliary covariates obtained after one-hot-encoding the categorical variables. In Figure <ref> (d), we show the covariates with the largest absolute regression coefficients, which are a mix of supervised topics (white background) and auxiliary variables (dark background). This setting achieves the best F-score of $0.52$ using $20+72$ variables, far fewer than the full 2552 variables, while enjoying better interpretability. We see that SDL-filter automatically combines words that are positively or negatively associated with fake job postings in an ensemble with auxiliary covariates. In other words, SDL-filter seems to perform supervised topic modeling on the text data while simultaneously incorporating auxiliary covariates for improved performance.
\subsubsection{Evaluation of Model Performance}
We provide a summary of the classification accuracy and F-score of various settings in Table <ref> (see Tables <ref> and <ref> for the full results). Note that due to the high imbalance in the dataset (only 5\% fake job postings), obtaining high classification accuracy is trivial (e.g., by classifying all postings as true), so a high F-score is more important. One can see that SDL-filter shows the best overall classification performance, both in terms of accuracy and F-score, and that its performance improves when using the auxiliary covariates. In contrast to their comparable performance on the semi-synthetic MNIST data in Figure <ref>, the convex SDL algorithms show mediocre performance on the fake job postings dataset. It seems that for larger datasets in high dimension, the convex SDL methods require more extensive hyperparameter tuning than the nonconvex ones.
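The following toy computation illustrates why accuracy alone is uninformative on this dataset: a classifier that labels every posting as true already attains roughly 95\% accuracy but an F-score of zero for the fake class.
\begin{verbatim}
# Minimal sketch: accuracy vs. F-score under a 95%/5% class imbalance.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([0] * 95 + [1] * 5)   # 5% positives (fake postings)
y_pred = np.zeros_like(y_true)          # classify everything as a true posting
print(accuracy_score(y_true, y_pred))   # 0.95
print(f1_score(y_true, y_pred))         # 0.0 (with an undefined-metric warning)
\end{verbatim}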
[Table: Best average F-score over five runs for each of the six methods on the fake job postings data in Section <ref>, with (left) and without (right) the auxiliary covariates, for tuning parameter $\xi\in \{0.01, 0.1, 1, 5, 10\}$. See Tables <ref> and <ref> in the Appendix for more details.]
[Figure: Pareto plot of relative reconstruction error vs. classification accuracy/F-score for various models on the fake job postings dataset.]
As in Figure <ref> for the semi-synthetic MNIST dataset, we also provide a Pareto plot in Figure <ref> to evaluate the performance of various SDL models on the fake job posting dataset against the benchmark models of logistic regression (LR) and NMF followed by logistic regression (NMF-LR). Recall that the Pareto plot shows how a model simultaneously performs on the two objectives of reducing the reconstruction error $\lVert \X_{\textup{data}} - \W\H \rVert$ and increasing the classification accuracy. As before, increasing the tuning parameter $\xi$ in the various SDL models seems to interpolate between the two extremes of LR and NMF-LR. We observe that SDL-filter performs best overall, in some cases achieving both goals with better classification performance than LR. The inferior performance of the convex SDL models on the real dataset, in contrast to their superior performance on the semi-synthetic dataset in Figure <ref>, indicates that in practice the convex SDL models require more hyperparameter tuning. For instance, we did not try to fine-tune the stepsize, which we fixed at $\tau=0.01$ throughout the experiments.
We also report that the topics learned by SDL-feature share similar characteristics with respect to the tuning parameter $\xi$ in Figure <ref>. We also mention that the topics learned by the convex SDL algorithms, which cannot be used with a nonnegativity constraint on the dictionary matrix $\W$, turn out to be uninformative and not very interpretable. This is expected since the use of nonnegativity constraints on $\W$ and $\H$ (i.e., strong constraints on the SDL model, see \ref{assumption:A1}) is crucial for matrix-factorization-based topic modeling (see [40]). We omit the figures for the topics of SDL-feature and the convex SDL models.
\subsection{Supervised dictionary learning on chest X-ray images for pneumonia detection}
Pneumonia is an acute respiratory infection that affects the lungs. According to WHO reports, it accounts for 15\% of all deaths of children under 5 years old, killing 808,694 children in 2017. Moreover, about 15\% of COVID-19 patients currently suffer from severe pneumonia [73]. Chest X-rays are an inexpensive way to diagnose pneumonia, but rapid radiological interpretation is not always available. A successful statistical model for classifying pneumonia from chest X-ray images would enable rapid pneumonia diagnosis with high accuracy, which would reduce the burden on clinicians and support their decision-making. In this section, we apply our SDL methods to chest X-ray images for pneumonia detection.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/pneumonia_cell.jpeg}
\caption{The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse “interstitial” pattern in both lungs. The figure and the description are excerpted from [35].}
\label{fig:pneumonia_cell}
\end{figure}
The pneumonia dataset was first introduced in [35]. It contains a total of 5,863 chest X-ray images from children, consisting of 4,273 pneumonia images and 1,583 images of healthy subjects. The images were collected from pediatric patients one to five years old at the Guangzhou Women and Children's Medical Center, Guangzhou. All images were initially screened for quality control, and two expert physicians diagnosed the images. In [35], a highly accurate image classification system (with a classification accuracy of 92.8\%) was developed using sophisticated deep neural network image classifiers. We intend to demonstrate that our SDL methods yield interesting and promising results for medical image classification tasks while being significantly simpler and easier to train than deep neural network models.
In order to apply our SDL methods, we resize each chest X-ray image into a $180\times 180$ pixel image. Vectorizing each image, we obtain the data matrix $\X_{\textup{data}}\in \R^{32,400\times 5,863}$. We label pneumonia images with 1 and normal images with 0, obtaining the label matrix $\Y_{\textup{label}} \in \{0,1\}^{1\times 5,863}$. We used the deterministic train/test split provided by the original reference [35], where the train and test sets consist of 5,216 and 624 images, respectively. Standard logistic regression with the $p=180^2=32,400$ individual pixels as explanatory variables yields a classification accuracy of 82\%. However, it is not entirely reasonable to expect individual pixels in the image to be correlated with pneumonia. Instead, we may associate certain latent shapes with pneumonia by using dictionary learning methods.
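A minimal sketch of this preprocessing and of the pixel-level logistic regression baseline is shown below; the directory layout and file pattern are placeholders for the dataset of [35].
\begin{verbatim}
# Minimal sketch: resize each X-ray to 180x180 grayscale, flatten to a
# 32,400-dimensional vector, and fit a pixel-level logistic regression.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.linear_model import LogisticRegression

def load_folder(folder, label):
    X, y = [], []
    for path in Path(folder).glob("*.jpeg"):            # placeholder file pattern
        img = Image.open(path).convert("L").resize((180, 180))
        X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        y.append(label)
    return X, y

X0, y0 = load_folder("chest_xray/train/NORMAL", 0)       # placeholder paths
X1, y1 = load_folder("chest_xray/train/PNEUMONIA", 1)
X_train, y_train = np.vstack(X0 + X1), np.array(y0 + y1)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
\end{verbatim}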
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{Figures/pneumonia_dict.pdf}
\caption{25 dictionary atoms learned from chest X-ray images by NMF and SDL-feature with $\xi=50$. Corresponding logistic regression coefficients, as well as classification performances, are also shown. For SDL-feature, we used an $L_{1}$ regularization coefficient of $5$ for the code matrix $\H$ and no $L_{2}$-regularization. Positive regression coefficients indicate a positive correlation with having pneumonia. There is a clear contrast between the two extreme atoms (upper left and lower right) according to their correlation with pneumonia.}
\label{fig:pneumonia_dict}
\end{figure}
Figure \ref{fig:pneumonia_dict} shows 25 dictionary atoms of size $180\times 180$ learned by NMF (left) and SDL-feature (right), together with their corresponding logistic regression coefficients. For SDL-feature, we used the tuning parameter $\xi=50$ and an $L_{1}$-regularization coefficient of $5$ on the code matrix $\H$. Namely, we used Algorithm \ref{algorithm:SDL} for the feature-based SDL model in \eqref{eq:ASDL_1} together with an additional $L_{1}$-regularization term $\lambda \lVert \H \rVert_{1}$ added to the loss function in \eqref{eq:ASDL_1} with $\lambda=5$. Using $L_{1}$-regularization on the code matrix is standard in the dictionary learning literature [50, 52, 49] in order to learn sparse representations over an over-complete dictionary. For this particular application, we find that such regularization improves the classification accuracy more than using $L_{2}$-regularization on any of the factors. We found the values $\xi=50$ and $\lambda=5$ by a grid search over $\xi\in\{0.01, 0.1, 1, 10, 50, 100\}$ and $\lambda\in\{0,1,5,10\}$.
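To illustrate the role of the $L_{1}$ penalty on the code matrix (and not the paper's Algorithm \ref{algorithm:SDL} itself), the following sketch computes a sparse nonnegative code for a single image against a fixed dictionary by solving $\min_{\h\ge 0}\, \lVert \x - \W\h\rVert_{2}^{2} + \lambda \lVert \h \rVert_{1}$; the dictionary and image below are random stand-ins.
\begin{verbatim}
# Minimal sketch of L1-regularized (sparse) nonnegative coding with a fixed
# dictionary W; illustrative only, not the paper's SDL training algorithm.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((32400, 25)))   # stand-in dictionary, 25 atoms
x = np.abs(rng.standard_normal(32400))         # stand-in vectorized image
# sklearn's Lasso minimizes ||x - W h||^2 / (2 n) + alpha ||h||_1, so alpha is
# only an illustrative rescaling of the penalty lambda discussed in the text.
coder = Lasso(alpha=5.0 / len(x), positive=True, fit_intercept=False,
              max_iter=5000)
h = coder.fit(W, x).coef_                      # sparse nonnegative code
print(np.count_nonzero(h), "active atoms out of", len(h))
\end{verbatim}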
Note that the code matrix $\H$ is constrained to be nonnegative during optimization. Since the probability of pneumonia is computed by applying the logistic function to $\Beta (\mathbf{1},\H^{T})^{T}$, a positive entry of $\Beta$ indicates that the corresponding atom is associated with pneumonia, with the magnitude of the regression coefficient indicating the strength of the association, whereas a negative entry of $\Beta$ suggests that the atom is associated with normal images. Comparing the NMF and SDL-feature dictionaries in Figure \ref{fig:pneumonia_dict}, we find that the regression coefficients for the NMF atoms are quite neutral, whereas three atoms learned by SDL-feature (two upper left and one lower right) have regression coefficients an order of magnitude larger in absolute value. A closer investigation shows that the atom with regression coefficient $-0.066$ has almost no signal (dark) around the lung, whereas the two atoms with regression coefficients 0.052 and 0.046 show the opposite contrast, having most of their signal (bright) around the lung and weak signal elsewhere. Although one should exercise extra caution when interpreting machine learning results as clinical statements, this observation seems to be well aligned with some basic characteristics of normal and pneumonia chest X-ray images as shown in Figure \ref{fig:pneumonia_cell} (see also the caption).
Lastly, we report some additional details of these experiments. First, training SDL-feature on the chest X-ray dataset and making predictions is very efficient: the entire process of training and testing takes under 2 minutes on an average laptop computer (100 iterations on an Apple M1 chip). Second, we find that SDL-filter achieves higher classification performance, with accuracy $84.3\%$ and F-score $87.9\%$, using the tuning parameter $\xi=0.01$, $r=20$ atoms, and the same $L_{1}$-regularization coefficient of $5$ on $\H$. However, the learned atoms are noisy and not as interpretable as the ones learned by SDL-feature in Figure \ref{fig:pneumonia_dict} (right). Training and testing take about ten minutes on the same machine. Third, the convex SDL models take much longer in this case (about an hour) for the same number of iterations, and we omit the results.
\section{Concluding Remarks}
In this paper, we provided a comprehensive treatment of a large class of supervised dictionary learning methods in terms of model construction, optimization algorithms and their convergence properties, and statistical estimation guarantees for the corresponding generative models. SDL models balance the two objectives of data modeling by latent factors and label classification, achieving simultaneous dimension reduction and classification, which makes them suitable for coping with high-dimensional data. We demonstrated that our methods show performance comparable to classical models for classifying fake job postings as well as chest X-ray images for pneumonia, while learning basis topics or images that are directly associated with fake jobs or pneumonia, respectively. In addition, the new methods achieve this comparable performance with a much smaller number of variables and with more homogeneous and interpretable models.
Our method can potentially be used for a number of high-dimensional classification problems, especially in areas where interpretability is required, such as natural language processing and biomedical image processing. While a number of sophisticated deep-learning-based approaches are gaining popularity due to their extreme success in diverse problems including image classification and voice recognition, an inherent downside is the loss of interpretability due to severe over-parameterization and the sophisticated design of such algorithms. In this work, we showed that combining two classical methods, nonnegative matrix factorization and logistic regression, can achieve comparable performance while maintaining the transparency of the method and the interpretability of the results.
One of the main techniques we developed in this work is a `double lifting procedure', which transforms the SDL problem into a CAFE problem \eqref{eq:CAFE} and then into a CALE problem \eqref{eq:CALE}; we can then use the globally guaranteed low-rank projected gradient descent (Algorithm \ref{algorithm:LPGD}) to efficiently find the global optimum of the resulting CALE problem and pull that solution back to the original space. While this approach has proven to be quite powerful in analyzing SDL problems in the present work, it is interesting to note that our double-lifting technique does not immediately apply to the supervised PCA model proposed by Ritchie et al. [61] for finding a low-dimensional subspace that is also effective for a regression task:
\begin{align}\label{eq:SPCA_regression_W1}
\min_{\W\in \R^{p\times r},\, \W^{T}\W=\I_{r},\, \Beta\in \R^{1\times r} } \left[ f\left( \W \begin{bmatrix} \Beta ,\, \W^{T} \end{bmatrix} \right):= \lVert \Y_{\textup{label}} - \Beta^{T} \W^{T}\X_{\textup{data}} \rVert_{F}^{2} + \xi \lVert \X_{\textup{data}} - \W \W^{T} \X_{\textup{data}} \rVert_{F}^{2}\right].
\end{align}
Even though we can view the objective function on the right hand side of \eqref{eq:SPCA_regression_W1} as a function of the product $\W[\Beta,\W^{T}]$, the two matrix factors $\W$ and $[\Beta,\W^{T}]$ are \textit{not} decoupled as before. We leave it for future investigation to devise a lifting technique for SPCA and related problems and to obtain a strong global convergence guarantee.
\section*{Acknowledgements}
HL is partially supported by NSF DMS-2206296 and DMS-2010035.
\small{
\bibliographystyle{amsalpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
% \MRhref is called by the amsart/book/proc definition of \MR.
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\begin{thebibliography}{RWRY11}
[1]
Woody Austin, Dylan Anderson, and Joydeep Ghosh, \emph{Fully supervised
non-negative matrix factorization for feature extraction}, IGARSS 2018-2018
IEEE International Geoscience and Remote Sensing Symposium, IEEE, 2018,
[2]
Alekh Agarwal, Sahand Negahban, and Martin~J Wainwright, \emph{Fast global
convergence rates of gradient methods for high-dimensional statistical
recovery}, Advances in Neural Information Processing Systems \textbf{23}
[3]
Sanae Amani and Christos Thrampoulidis, \emph{Ucb-based algorithms for
multinomial logistic regression bandits}, Advances in Neural Information
Processing Systems \textbf{34} (2021).
[4]
Herv{\'e} Abdi and Lynne~J Williams, \emph{Principal component analysis}, Wiley
interdisciplinary reviews: computational statistics \textbf{2} (2010), no.~4,
[5]
Christopher~M Bishop et~al., \emph{Neural networks for pattern recognition},
Oxford university press, 1995.
[6]
Shivam Bansa, \emph{fake job postings dataset},
\url{https://www.kaggle.com/datasets/shivamb/real-or-fake-fake-jobposting-prediction}
[7]
\bysame, \emph{FBI report on fake job postings},
\url{https://www.thesslstore.com/blog/fake-jobs-cybercriminals-prey-on-job-seekers-via-fake-job-postings/}
[8]
Michael~W Berry and Murray Browne, \emph{Email surveillance using non-negative
matrix factorization}, Computational \& Mathematical Organization Theory
\textbf{11} (2005), no.~3, 249--264.
[9]
Michael~W Berry, Murray Browne, Amy~N Langville, V~Paul Pauca, and Robert~J
Plemmons, \emph{Algorithms and applications for approximate nonnegative
matrix factorization}, Computational statistics \& data analysis \textbf{52}
(2007), no.~1, 155--173.
[10]
Amir Beck, \emph{First-order methods in optimization}, SIAM, 2017.
[11]
Dimitri~P Bertsekas, \emph{Nonlinear programming}, Journal of the Operational
Research Society \textbf{48} (1997), no.~3, 334--334.
[12]
\bysame, \emph{Nonlinear programming}, Athena scientific Belmont, 1999.
[13]
Rostyslav Boutchko, Debasis Mitra, Suzanne~L Baker, William~J Jagust, and
Grant~T Gullberg, \emph{Clustering-initiated factor analysis application for
tissue classification in dynamic brain positron emission tomography}, Journal
of Cerebral Blood Flow \& Metabolism \textbf{35} (2015), no.~7, 1104--1111.
[14]
Christopher~M Bishop and Nasser~M Nasrabadi, \emph{Pattern recognition and
machine learning}, vol.~4, Springer, 2006.
[15]
David~M Blei, Andrew~Y Ng, and Michael~I Jordan, \emph{Latent dirichlet
allocation}, Journal of machine Learning research \textbf{3} (2003), no.~Jan,
[16]
Dankmar B{\"o}hning, \emph{Multinomial logistic regression algorithm}, Annals
of the institute of Statistical Mathematics \textbf{44} (1992), no.~1,
[17]
Moody~T Chu, Robert~E Funderlic, and Robert~J Plemmons, \emph{Structured low
rank approximation}, Linear algebra and its applications \textbf{366} (2003),
[18]
Yang Chen, Xiao Wang, Cong Shi, Eng~Keong Lua, Xiaoming Fu, Beixing Deng, and
Xing Li, \emph{Phoenix: A weight-based network coordinate system using matrix
factorization}, IEEE Transactions on Network and Service Management
\textbf{8} (2011), no.~4, 334--347.
[19]
Rick Durrett, \emph{Probability: theory and examples}, fourth ed., Cambridge
Series in Statistical and Probabilistic Mathematics, Cambridge University
Press, Cambridge, 2010.
[20]
Michael Elad and Michal Aharon, \emph{Image denoising via sparse and redundant
representations over learned dictionaries}, IEEE Transactions on Image
processing \textbf{15} (2006), no.~12, 3736--3745.
[21]
Jianqing Fan and Runze Li, \emph{Variable selection via nonconcave penalized
likelihood and its oracle properties}, Journal of the American statistical
Association \textbf{96} (2001), no.~456, 1348--1360.
[22]
David~G Feingold and Richard~S Varga, \emph{Block diagonally dominant matrices
and generalizations of the gerschgorin circle theorem.}, Pacific Journal of
Mathematics \textbf{12} (1962), no.~4, 1241--1250.
[23]
Mehrdad~J Gangeh, Pouria Fewzee, Ali Ghodsi, Mohamed~S Kamel, and Fakhri
Karray, \emph{Multiview supervised dictionary learning in speech emotion
recognition}, IEEE/ACM Transactions on Audio, Speech, and Language Processing
\textbf{22} (2014), no.~6, 1056--1068.
[24]
Mehrdad~J Gangeh, Ahmed~K Farahat, Ali Ghodsi, and Mohamed~S Kamel,
\emph{Supervised dictionary learning and sparse representation-a review},
arXiv preprint arXiv:1502.05928 (2015).
[25]
Gene~H Golub and Christian Reinsch, \emph{Singular value decomposition and
least squares solutions}, Linear algebra, Springer, 1971, pp.~134--151.
[26]
Luigi Grippof and Marco Sciandrone, \emph{Globally convergent block-coordinate
techniques for unconstrained optimization}, Optimization methods and software
\textbf{10} (1999), no.~4, 587--637.
[27]
Luigi Grippo and Marco Sciandrone, \emph{On the convergence of the block
nonlinear gauss--seidel method under convex constraints}, Operations research
letters \textbf{26} (2000), no.~3, 127--136.
[28]
Roger~A Horn and Charles~R Johnson, \emph{Matrix analysis}, Cambridge
university press, 2012.
[29]
Jamie Haddock, Lara Kassab, Sixian Li, Alona Kryshchenko, Rachel Grotheer,
Elena Sizikova, Chuntian Wang, Thomas Merkh, RWMA Madushani, Miju Ahn,
et~al., \emph{Semi-supervised nmf models for topic modeling in learning
tasks}, arXiv preprint arXiv:2010.07956 (2020).
[30]
Nathan Halko, Per-Gunnar Martinsson, and Joel~A Tropp, \emph{Finding structure
with randomness: Probabilistic algorithms for constructing approximate matrix
decompositions}, SIAM review \textbf{53} (2011), no.~2, 217--288.
[31]
Prateek Jain, Raghu Meka, and Inderjit Dhillon, \emph{Guaranteed rank
minimization via singular value projection}, Advances in Neural Information
Processing Systems \textbf{23} (2010).
[32]
Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi, \emph{Low-rank matrix
completion using alternating minimization}, Proceedings of the forty-fifth
annual ACM symposium on Theory of computing, 2013, pp.~665--674.
[33]
Hamed Jelodar, Yongli Wang, Chi Yuan, Xia Feng, Xiahui Jiang, Yanchao Li, and
Liang Zhao, \emph{Latent dirichlet allocation (lda) and topic modeling:
models, applications, a survey}, Multimedia Tools and Applications
\textbf{78} (2019), no.~11, 15169--15211.
[34]
Tamara~G Kolda and Brett~W Bader, \emph{Tensor decompositions and
applications}, SIAM review \textbf{51} (2009), no.~3, 455--500.
[35]
Daniel~S Kermany, Michael Goldbaum, Wenjia Cai, Carolina~CS Valentim, Huiying
Liang, Sally~L Baxter, Alex McKeown, Ge~Yang, Xiaokang Wu, Fangbing Yan,
et~al., \emph{Identifying medical diagnoses and treatable diseases by
image-based deep learning}, Cell \textbf{172} (2018), no.~5, 1122--1131.
[36]
Jingu Kim, Yunlong He, and Haesun Park, \emph{Algorithms for nonnegative matrix
and tensor factorizations: A unified view based on block coordinate descent
framework}, Journal of Global Optimization \textbf{58} (2014), no.~2,
[37]
Donghyun Kim, Chanyoung Park, Jinoh Oh, Sungyoung Lee, and Hwanjo Yu,
\emph{Convolutional matrix factorization for document context-aware
recommendation}, Proceedings of the 10th ACM conference on recommender
systems, 2016, pp.~233--240.
[38]
Koulik Khamaru and Martin Wainwright, \emph{Convergence guarantees for a class
of non-convex and non-smooth optimization problems}, International Conference
on Machine Learning, PMLR, 2018, pp.~2601--2610.
[39]
Guillaume Lecu{\'e} and Shahar Mendelson, \emph{Sparse recovery under weak
moment assumptions}, Journal of the European Mathematical Society \textbf{19}
(2017), no.~3, 881--904.
[40]
Daniel~D Lee and H~Sebastian Seung, \emph{Learning the parts of objects by
non-negative matrix factorization}, Nature \textbf{401} (1999), no.~6755,
[41]
Daniel Lee and H~Sebastian Seung, \emph{Algorithms for non-negative matrix
factorization}, Advances in neural information processing systems \textbf{13}
(2000), 556--562.
[42]
Daniel~D Lee and H~Sebastian Seung, \emph{Algorithms for non-negative matrix
factorization}, Advances in neural information processing systems, 2001,
[43]
Johannes Leuschner, Maximilian Schmidt, Pascal Fernsel, Delf Lachmund, Tobias
Boskamp, and Peter Maass, \emph{Supervised non-negative matrix factorization
methods for maldi imaging applications}, Bioinformatics \textbf{35} (2019),
no.~11, 1940--1947.
[44]
Hanbaek Lyu, \emph{Convergence and complexity of block coordinate descent with
diminishing radius for nonconvex optimization}, arXiv preprint
arXiv:2012.03503 (2020).
[45]
Julien Mairal, \emph{Optimization with first-order surrogate functions},
International Conference on Machine Learning, 2013, pp.~783--791.
[46]
\bysame, \emph{Stochastic majorization-minimization algorithms for large-scale
optimization}, Advances in Neural Information Processing Systems, 2013,
[47]
Jon Mcauliffe and David Blei, \emph{Supervised topic models}, Advances in
neural information processing systems \textbf{20} (2007).
[48]
Julien Mairal, Francis Bach, and Jean Ponce, \emph{Task-driven dictionary
learning}, IEEE transactions on pattern analysis and machine intelligence
\textbf{34} (2011), no.~4, 791--804.
[49]
Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro, \emph{Online
learning for matrix factorization and sparse coding}, Journal of Machine
Learning Research \textbf{11} (2010), no.~Jan, 19--60.
[50]
Julien Mairal, Michael Elad, and Guillermo Sapiro, \emph{Sparse representation
for color image restoration}, IEEE Transactions on Image Processing
\textbf{17} (2007), no.~1, 53--69.
[51]
Raghu Meka, Prateek Jain, and Inderjit~S Dhillon, \emph{Guaranteed rank
minimization via singular value projection}, arXiv preprint arXiv:0909.5457
[52]
Julien Mairal, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, and Francis
Bach, \emph{Supervised dictionary learning}, Advances in Neural Information
Processing Systems \textbf{21} (2008), 1033--1040.
[53]
Yu~Nesterov, \emph{Gradient methods for minimizing composite functions},
Mathematical programming \textbf{140} (2013), no.~1, 125--161.
[54]
Sahand Negahban and Martin~J Wainwright, \emph{Estimation of (near) low-rank
matrices with noise and high-dimensional scaling}, The Annals of Statistics
\textbf{39} (2011), no.~2, 1069--1097.
[55]
Gabriel Peyr{\'e}, \emph{Sparse modeling of textures}, Journal of Mathematical
Imaging and Vision \textbf{34} (2009), no.~1, 17--31.
[56]
Dohyung Park, Anastasios Kyrillidis, Srinadh Bhojanapalli, Constantine
Caramanis, and Sujay Sanghavi, \emph{Provable non-convex projected gradient
descent for a class of constrained matrix optimization problems}, stat
\textbf{1050} (2016), 4.
[57]
Dohyung Park, Anastasios Kyrillidis, Constantine Carmanis, and Sujay Sanghavi,
\emph{Non-square matrix sensing without spurious local minima via the
burer-monteiro approach}, Artificial Intelligence and Statistics, PMLR, 2017,
[58]
Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, and Sujay Sanghavi,
\emph{Finding low-rank solutions via nonconvex matrix factorization,
efficiently and provably}, SIAM Journal on Imaging Sciences \textbf{11}
(2018), no.~4, 2165--2204.
[59]
Michael~JD Powell, \emph{On search directions for minimization algorithms},
Mathematical programming \textbf{4} (1973), no.~1, 193--201.
[60]
Fabian Pedregosa, Ga{\"e}l Varoquaux, Alexandre Gramfort, Vincent Michel,
Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron
Weiss, Vincent Dubourg, et~al., \emph{Scikit-learn: Machine learning in
python}, the Journal of machine Learning research \textbf{12} (2011),
[61]
Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra~S Sripada, and
Clayton Scott, \emph{Supervised pca: A multiobjective approach}, arXiv
preprint arXiv:2011.05309 (2020).
[62]
Benjamin Recht, Maryam Fazel, and Pablo~A Parrilo, \emph{Guaranteed
minimum-rank solutions of linear matrix equations via nuclear norm
minimization}, SIAM review \textbf{52} (2010), no.~3, 471--501.
[63]
Bin Ren, Laurent Pueyo, Guangtun~Ben Zhu, John Debes, and Gaspard Duch{\^e}ne,
\emph{Non-negative matrix factorization: robust extraction of extended
structures}, The Astrophysical Journal \textbf{852} (2018), no.~2, 104.
[64]
Pradeep Ravikumar, Martin~J Wainwright, Garvesh Raskutti, and Bin Yu,
\emph{High-dimensional covariance estimation by minimizing $\ell_{1}$-penalized
log-determinant divergence}, Electronic Journal of Statistics \textbf{5}
(2011), 935--980.
[65]
Mark Steyvers and Tom Griffiths, \emph{Probabilistic topic models}, Handbook of
latent semantic analysis \textbf{427} (2007), no.~7, 424--440.
[66]
Arkadiusz Sitek, Grant~T Gullberg, and Ronald~H Huesman, \emph{Correction for
ambiguous solutions in factor analysis using a penalized least squares
objective}, IEEE transactions on medical imaging \textbf{21} (2002), no.~3,
[67]
Tao Sun, Yuejiao Sun, and Wotao Yin, \emph{On markov chain gradient descent},
Advances in Neural Information Processing Systems, 2018, pp.~9896--9905.
[68]
Stephen Tu, Ross Boczar, Max Simchowitz, Mahdi Soltanolkotabi, and Ben Recht,
\emph{Low-rank solutions of linear matrix equations via procrustes flow},
International Conference on Machine Learning, PMLR, 2016, pp.~964--973.
[69]
Gui-Xian Tian and Ting-Zhu Huang, \emph{Inequalities for the minimum eigenvalue
of m-matrices}, The Electronic Journal of Linear Algebra \textbf{20} (2010),
[70]
Tian Tong, Cong Ma, and Yuejie Chi, \emph{Accelerating ill-conditioned low-rank
matrix estimation via scaled gradient descent}, Journal of Machine Learning
Research \textbf{22} (2021), no.~150, 1--63.
[71]
Leo Taslaman and Bj{\"o}rn Nilsson, \emph{A framework for regularized
non-negative matrix factorization, with application to the analysis of gene
expression data}, PloS one \textbf{7} (2012), no.~11, e46331.
[72]
Roman Vershynin, \emph{High-dimensional probability: An introduction with
applications in data science}, vol.~47, Cambridge university press, 2018.
[73]
\emph{{WebMD} coronavirus and pneumonia},
\url{https://www.webmd.com/lung/covid-and-pneumonia#1}, Accessed: 2021-03-13.
[74]
Stephen~J Wright, \emph{Coordinate descent algorithms}, Mathematical
Programming \textbf{151} (2015), no.~1, 3--34.
[75]
Michael~E Wall, Andreas Rechtsteiner, and Luis~M Rocha, \emph{Singular value
decomposition and principal component analysis}, A practical approach to
microarray data analysis, Springer, 2003, pp.~91--109.
[76]
Rachel Ward, Xiaoxia Wu, and Leon Bottou, \emph{Adagrad stepsizes: Sharp
convergence over nonconvex landscapes}, International Conference on Machine
Learning, PMLR, 2019, pp.~6677--6686.
[77]
Lingxiao Wang, Xiao Zhang, and Quanquan Gu, \emph{A unified computational and
statistical framework for nonconvex low-rank matrix estimation}, Artificial
Intelligence and Statistics, PMLR, 2017, pp.~981--990.
[78]
Yi~Xu, Zhuoning Yuan, Sen Yang, Rong Jin, and Tianbao Yang, \emph{On the
convergence of (stochastic) gradient descent with extrapolation for
non-convex optimization}, arXiv preprint arXiv:1901.10682 (2019).
[79]
Pavel Yaskov, \emph{Controlling the least eigenvalue of a random gram matrix},
Linear Algebra and its Applications \textbf{504} (2016), 108--123.
[80]
Yael Yankelevsky and Michael Elad, \emph{Structure-aware classification using
supervised dictionary learning}, 2017 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2017, pp.~4421--4425.
[81]
Shijie Zhao, Junwei Han, Jinglei Lv, Xi~Jiang, Xintao Hu, Yu~Zhao, Bao Ge, Lei
Guo, and Tianming Liu, \emph{Supervised dictionary learning for inferring
concurrent brain networks}, IEEE transactions on medical imaging \textbf{34}
(2015), no.~10, 2036--2045.
[82]
Qiang Zhang and Baoxin Li, \emph{Discriminative k-svd for dictionary learning
in face recognition}, 2010 IEEE computer society conference on computer
vision and pattern recognition, IEEE, 2010, pp.~2691--2698.
[83]
Qinqing Zheng and John Lafferty, \emph{A convergent gradient descent algorithm
for rank minimization and semidefinite programming from random linear
measurements}, arXiv preprint arXiv:1506.06081 (2015).
[84]
Tuo Zhao, Zhaoran Wang, and Han Liu, \emph{A nonconvex optimization framework
for low rank matrix estimation}, Advances in Neural Information Processing
Systems \textbf{28} (2015), 559.
\end{thebibliography}
\addresseshere
\newpage
\appendix
\section{Proof of main results}
\subsection{Proof of Theorem \ref{thm:CALE_LPGD}}
We first establish Theorem \ref{thm:CALE_LPGD}, which shows exponential convergence of the low-rank projected gradient descent (Algorithm \ref{algorithm:LPGD}) for the CALE problem \eqref{eq:CALE}. The proof is similar to the standard argument showing exponential convergence of projected gradient descent with a fixed step size for constrained strongly convex problems (see, e.g., \cite[Thm. 10.29]{beck2017first}). However, for a strongly convex minimization problem with a low-rank-constrained matrix parameter, the constraint set of low-rank matrices is not convex, so one cannot use the non-expansiveness of the convex projection operator. Indeed, the rank-$r$ projection $\Pi_{r}$ by truncated SVD is not guaranteed to be non-expansive. To circumvent this issue, we approximate the rank-$r$ projection by a suitable linear projection onto a carefully chosen linear subspace, an approach used in [77]. One can then show that the rank-$r$ projection is at most $2$-Lipschitz in a suitable sense, so if the contraction constant in the standard analysis of projected gradient descent for strongly convex objectives is small enough ($<1/2$), then one still retains exponential convergence overall.
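For intuition, the following sketch implements the rank-$r$ projection $\Pi_{r}$ by truncated SVD and iterates one low-rank projected gradient step on a simple stand-in quadratic objective; it is not the CALE loss of the paper, and the dimensions and step size below are arbitrary.
\begin{verbatim}
# Minimal sketch of low-rank projected gradient descent (LPGD): a gradient
# step followed by the rank-r projection via truncated SVD.  The quadratic
# objective f(X) = ||X - X_target||_F^2 / 2 is a stand-in, not the CALE loss.
import numpy as np

def rank_r_projection(Y, r):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]      # best rank-r approximation

rng = np.random.default_rng(0)
X_target = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))  # rank 5
grad = lambda X: X - X_target                  # gradient of the stand-in loss

X, tau, r = np.zeros((50, 40)), 0.5, 5
for _ in range(100):
    X = rank_r_projection(X - tau * grad(X), r)
print(np.linalg.norm(X - X_target))            # ~0: converges to the minimizer
\end{verbatim}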
\begin{lemma}(Linear approximation of rank-$r$ projection)\label{lem:rank_r_lin_appx}
Fix $\Y\in \R^{d_{1}\times d_{2}}$, $R\ge r \in \mathbb{N}$, and denote $\X=\Pi_{r}(\Y)$ and $\hat{\X} = \Pi_{\mathcal{A}}(\Y)$, where $\mathcal{A}\subseteq \R^{d_{1}\times d_{2}}$ is a linear subspace. Let $\X=\U\bSigma \V^{T}$ denote the SVD of $\X$. Suppose there exist $\overline{\U}\in \R^{d_{1}\times R}$ and $\overline{\V} \in \R^{d_{2}\times R}$ such that
\begin{align}
\mathcal{A} = \left\{ \A \in \R^{d_{1}\times d_{2}}\,\big|\, \textup{col}(\A^{T}) \subseteq \textup{col}(\overline{\V}),\, \textup{col}(\A) \subseteq \textup{col}(\overline{\U}) \right\}, \quad \textup{col}(\U)\subseteq \textup{col}(\overline{\U}), \quad \textup{col}(\V)\subseteq \textup{col}(\overline{\V}).
\end{align}
Then $\X=\Pi_{r}(\hat{\X})$.
\end{lemma}
\begin{proof}
Write $\Y-\X=\dot{\U} \dot{\bSigma} \dot{\V}^{T}$ for its SVD. Let $d:=\rank(\Y)$ and let $\sigma_{1}\ge \dots \ge \sigma_{d}>0$ denote the nonzero singular values of $\Y$. Since $\X=\Pi_{r}(\Y) = \U\bSigma\V^{T}$ and $\Y=\U\bSigma\V^{T} + \dot{\U} \dot{\bSigma} \dot{\V}^{T}$, the matrix $\bSigma$ consists of the top $r$ singular values of $\Y$ and the remaining $d-r$ singular values are contained in $\dot{\bSigma}$. Furthermore, $\textup{col}(\U) \perp \textup{col}(\dot{\U})$.
Now, since $\X\in \mathcal{A}$ and $\Pi_{\mathcal{A}}$ is linear, we get
\begin{align}\label{eq:3r_r_approx1_lem}
\hat{\X} = \Pi_{\mathcal{A}}( \X + (\Y -\X) ) = \U \bSigma \V^{T} + \Pi_{\mathcal{A}}(\dot{\U} \dot{\bSigma} \dot{\V}^{T}).
\end{align}
Let $\bZ:= \Pi_{\mathcal{A}}(\dot{\U} \dot{\bSigma} \dot{\V}^{T})$ and write its SVD as $\bZ = \widetilde{\U}\widetilde{\bSigma}\widetilde{\V}^{T}$. Then note that $(\U^{T}\overline{\U} \, \overline{\U}^{T})^{T}=\overline{\U}\,\overline{\U}^{T}\U=\U$ since $\overline{\U}\,\overline{\U}^{T}:\R^{d_{1}}\rightarrow \R^{d_{1}}$ is the orthogonal projection onto $\textup{col}(\overline{\U})\supseteq \textup{col}(\U)$. Hence $\U^{T}\overline{\U}\,\overline{\U}^{T} = \U^{T}$, so we get
\begin{align}
\U^{T} \bZ = \left( \U^{T} \overline{\U}\, \overline{\U}^{T}\right) \dot{\U} \dot{\bSigma} \dot{\V}^{T} \overline{\V}\, \overline{\V}^{T} = \left( \U^{T} \dot{\U} \right) \dot{\bSigma} \dot{\V}^{T} \overline{\V}\, \overline{\V}^{T} = O.
\end{align}
It follows that $\U^{T}\widetilde{\U}=O$, since $ \U^{T}\widetilde{\U} = \U^{T} \bZ \widetilde{\V} (\widetilde{\bSigma})^{-1} = O$. Therefore, rewriting \eqref{eq:3r_r_approx1_lem} gives the SVD of $\hat{\X}$ as
\begin{align}
\hat{\X} = \begin{bmatrix}
\U & \widetilde{\U}
\end{bmatrix}
\begin{bmatrix}
\bSigma & O\\
O & \widetilde{\bSigma}
\end{bmatrix}
\begin{bmatrix}
\V & \widetilde{\V}
\end{bmatrix}^{T}.
\end{align}
Furthermore, $\lVert \Pi_{\mathcal{A}}(\dot{\U} \dot{\bSigma} \dot{\V}^{T}) \rVert_{2} \le \lVert\dot{\bSigma} \rVert_{2} =\sigma_{r+1}$, so $\bSigma$ consists of the top $r$ singular values of $\hat{\X}$. It follows that $\X=\U \bSigma \V^{T}$ is the best rank-$r$ approximation of $\hat{\X}$, as desired.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm:CALE_LPGD}]
Denote $\bZ^{\star}=[\X^{\star}, \bGamma^{\star}]\in \Param\subseteq \R^{d_{1}\times d_{2}}\times \R^{d_{3}\times d_{4}}$. Let $\mathcal{A}$ denote a linear subspace of $\R^{d_{1}\times d_{2}}$. Denote
\begin{align}
\hat{\bZ}_{t} =\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Pi_{\Param} \left( \bZ_{t-1} - \tau \nabla f(\bZ_{t-1}) \right) \right).
\end{align}
We will choose $\mathcal{A}$ in such a way that
\begin{align}\label{eq:mathcal_A_subspace_cond}
\X_{t} = \Pi_{r}(\hat{\X}_{t}) \in \argmin_{\X, \rank(\X)\le r} \lVert \hat{\X}_{t} - \X \rVert_{F}, \quad \bZ^{\star}\in \mathcal{A} \times \R^{d_{3}\times d_{4}}.
\end{align}
For instance, $\mathcal{A}=\R^{d_{1}\times d_{2}}$ satisfies the above conditions, although this choice does not give optimal control on the variance term (the second term in the right hand side of \eqref{eq:CALE_LPGD_thm}). We will first derive a general bound with such $\mathcal{A}$, and at the end of the proof, we will give a specific construction of such $\mathcal{A}$ to obtain the bound in the assertion.
Denote $\Delta \bZ^{\star}:=\bZ^{\star} - \Pi_{\Param}\left( \bZ^{\star} - \tau \nabla f(\bZ^{\star})\right)$. Using $\bZ^{\star}\in \mathcal{A}\times \R^{d_{3}\times d_{4}}$ and the linearity of the projection $\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}$, write
\begin{align}
\bZ^{\star} &= \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}(\bZ^{\star}) \\
&=\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}} \left( \Pi_{\Param}(\bZ^{\star} - \tau \nabla f(\bZ^{\star})) \right) + \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}} \left( \bZ^{\star}- \Pi_{\Param}(\bZ^{\star} - \tau \nabla f(\bZ^{\star})) \right) \\
&=\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}} \left( \Pi_{\Param}(\bZ^{\star} - \tau \nabla f(\bZ^{\star})) \right) + \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}} \left( \Delta \bZ^{\star} \right).
\end{align}
Namely, the first term above is a one-step update of a projected gradient descent at $\bZ^{\star}$ over $\Param$ with stepsize $\tau$, and the second term above is the error term. If $\bZ^{\star}$ is a stationary point of $f$ over $\Param$, then $-\nabla f(\bZ^{\star})$ lies in the normal cone of $\Param$ at $\bZ^{\star}$, so $\bZ^{\star}$ is invariant under the projected gradient descent and the error term above (the second term in the last expression) is zero. If $\bZ^{\star}$ is only approximately stationary, then the error above is nonzero.
Now, recall that $\hat{\bZ}_{t}$ is obtained by using the orthogonal projection $\Pi_{\mathcal{A}}$ onto the linear subspace $\mathcal{A}$ instead of the rank-$r$ projection $\Pi_{r}$ to obtain the matrix coordinate of $\hat{\bZ}_{t}$. Notice that $\Pi_{\Param}$ and $\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}$ are non-expansive, being projections onto convex sets, while the rank-$r$ projection $\Pi_{r}$ is not in general. Also using the linearity of the subspace projection $\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}$, we get
\begin{align}
&\lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F} \\
&\qquad = \left\lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Pi_{\Param} \left( \bZ_{t-1} - \tau \nabla f(\bZ_{t-1}) \right) \right) - \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Pi_{\Param} \left( \bZ^{\star} - \tau \nabla f(\bZ^{\star}) \right) \right) +\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Delta \bZ^{\star} \right) \right\rVert_{F} \\
&\qquad \le \left\lVert \bZ_{t-1} - \tau \nabla f(\bZ_{t-1}) - \bZ^{\star} + \tau \nabla f(\bZ^{\star}) \right\rVert_{F} + \lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Delta \bZ^{\star} \right) \rVert_{F} \\
&\qquad \le \max( |1-\tau L| ,\, |1-\tau \mu| ) \, \lVert \bZ_{t-1} - \bZ^{\star}\rVert_{F} + \lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Delta \bZ^{\star} \right) \rVert_{F}.
\end{align}
The last inequality follows from the fact that $\bZ_{t-1}$ and $\bZ^{\star}$ have rank $\le r$ in their matrix coordinates, together with the restricted strong convexity and smoothness properties (Definition \ref{def:RSC}). Namely, fix $\X,\Y\in \R^{d_{1}\times d_{2}}\times \R^{d_{3}\times d_{4}}$ whose first matrix components have rank $\le r$. Assuming $f$ is twice continuously differentiable,
\begin{align}
\X - \tau \nabla f(\X) - \Y + \tau \nabla f(\Y) &= (\X-\Y) - \tau ( \nabla f(\X) - \nabla f(\Y)) \\
&= \int_{0}^{1} \left( \I-\tau \nabla^{2}f(\X + s(\Y-\X)) \right)(\X-\Y)\,ds.
\end{align}
Using the inequality $\lVert \A \B \rVert_{F} \le \lVert \A \rVert_{2} \lVert \B \rVert_{F}$, this gives
\begin{align}
\lVert \X - \tau \nabla f(\X) - \Y + \tau \nabla f(\Y) \rVert_{F} & \le \sup_{\bZ=[\bZ_{1},\bZ_{2}]: \, \rank(\bZ_{1})\le r} \lVert \I-\tau \nabla^{2}f( \bZ)\rVert_{2} \, \lVert \X - \Y \rVert_{F} \\
&\le \eta \,\lVert \X - \Y \rVert_{F},
\end{align}
where $\eta:=\max( |1-\tau L| ,\, |1-\tau \mu| )$. Indeed, for the second inequality above, note that the eigenvalues of $\nabla^{2} f(\bZ)$ are contained in $[\mu,L]$, so the eigenvalues of $\I-\tau \nabla^{2}f(\bZ)$ are between $\min(1-\tau L,\, 1-\tau \mu)$ and $\max(1-\tau L,\, 1-\tau \mu)$; hence $\lVert \I-\tau \nabla^{2}f(\bZ)\rVert_{2}\le \eta$. Combining with the previous inequality, it follows that
\begin{align}\label{eq:pf_LPGD_pf1}
\lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F} \le \eta \, \lVert \bZ_{t-1} - \bZ^{\star}\rVert_{F} + \lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left(\Delta \bZ^{\star}\right) \rVert_{F}.
\end{align}
Next, recall the notation $\bZ_{t}=[\X_{t}, \bGamma_{t}]$ and $\hat{\bZ}_{t}=[\hat{\X}_{t}, \bGamma_{t}]$. By construction, we have $\X_{t} =\Pi_{r}( \hat{\X}_{t})$, so $\X_{t}$ is the best rank-$r$ approximation of $\hat{\X}_{t}$ in the sense that $\X_{t}=\argmin_{\X, \rank(\X)\le r} \lVert \hat{\X}_{t} - \X \rVert_{F}$. Then observe that
\begin{align}
\lVert \bZ_{t} - \bZ^{\star} \rVert_{F} &\le \lVert \bZ_{t} - \hat{\bZ}_{t} \rVert_{F} + \lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F} \\
&= \lVert \X_{t} - \hat{\X}_{t} \rVert_{F} + \lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F} \\
&\le \lVert\X^{\star} - \hat{\X}_{t} \rVert_{F} + \lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F} \le 2 \lVert \hat{\bZ}_{t} - \bZ^{\star} \rVert_{F},
\end{align}
so by combining with \eqref{eq:pf_LPGD_pf1}, we get
\begin{align}
\lVert \bZ_{t} - \bZ^{\star} \rVert_{F} \le 2\eta \, \lVert \bZ_{t-1} - \bZ^{\star}\rVert_{F}+ \lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left(\Delta \bZ^{\star} \right) \rVert_{F}.
\end{align}
Note that $0\le \eta<1/2$ if and only if $\tau\in (\frac{1}{2\mu}, \frac{3}{2L})$, and this interval is non-empty if and only if $L/\mu<3$. Hence for such a choice of $\tau$ we have $0<2\eta<1$, so
by a recursive application of the above inequality, we obtain
\begin{align}\label{eq:linear_conv_ineq_pf1}
\lVert \bZ_{t} - \bZ^{\star} \rVert_{F} \le (2\eta)^{t}\, \lVert \bZ_{0} - \bZ^{\star}\rVert_{F} + \frac{1}{1-2\eta} \lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left(\Delta \bZ^{\star} \right) \rVert_{F}.
\end{align}
Finally, we bound the variance term in the last expression by choosing a suitable linear subspace $\mathcal{A}\subseteq \R^{d_{1}\times d_{2}}$ satisfying \eqref{eq:mathcal_A_subspace_cond}. Note that $\Delta \bZ^{\star}=\tau [\Delta \X^{\star}, \Delta \bGamma^{\star}]$, where the latter is defined in the statement of Theorem \ref{thm:CALE_LPGD}. Recall that $\bZ^{\star}=[\X^{\star}, \bGamma^{\star}]$. Let $\X^{\star} = \U^{\star} \bSigma^{\star} (\V^{\star})^{T}$ denote the SVD of $\X^{\star}$. For each iteration $t$, denote $\bZ_{t} = [\X_{t}, \bGamma_{t}]$ and let $\X_{t} = \U_{t} \bSigma_{t} \V_{t}^{T}$ denote the SVD of $\X_{t}$. Since $\X_{t}$ and $\X^{\star}$ have rank at most $r$, each of $\U^{\star}$, $\U_{t}$, $\V^{\star}$, and $\V_{t}$ has at most $r$ columns. Define a matrix $\U_{3r}$ so that its columns form a basis for the subspace spanned by the columns of $[\U^{\star}, \U_{t-1}, \U_{t}]$. Then $\U_{3r}$ has at most $3r$ columns. Similarly, let $\V_{3r}$ be a matrix whose columns form a basis for the subspace spanned by the columns of $[\V^{\star}, \V_{t-1}, \V_{t}]$. Then $\V_{3r}$ has at most $3r$ columns. Now, define the subspace
\begin{align}\label{eq:mathcal_A_construction}
\mathcal{A} := \left\{ \Delta\in \R^{d_{1}\times d_{2}}\,|\, \textup{span}(\Delta^{T}) \subseteq \textup{span}(\V_{3r}),\, \textup{span}(\Delta) \subseteq \textup{span}(\U_{3r}) \right\}.
\end{align}
Note that $\mathcal{A}$ is a linear subspace (in particular, a convex subset) of $\R^{d_{1}\times d_{2}}$. Also note that, by definition, $\X^{\star}, \X_{t}, \X_{t-1}\in \mathcal{A}$. Let $\Pi_{\mathcal{A}}$ denote the orthogonal projection operator onto $\mathcal{A}$. More precisely, for each $\X\in \R^{d_{1}\times d_{2}}$, we have
\begin{align}
\Pi_{\mathcal{A}}(\X) = \U_{3r}\U_{3r}^{T} \X \V_{3r}\V_{3r}^{T}.
\end{align}
Then by Lemma \ref{lem:rank_r_lin_appx}, we have $\X_{t} =\Pi_{r}( \hat{\X}_{t})$. Hence $\mathcal{A}$ in \eqref{eq:mathcal_A_construction} satisfies \eqref{eq:mathcal_A_subspace_cond}. Therefore, \eqref{eq:linear_conv_ineq_pf1} holds for the $\mathcal{A}$ chosen as in \eqref{eq:mathcal_A_construction}.
Now, note that $\Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}(\Delta \X^{\star}, \Delta \bGamma^{\star}) = [\Pi_{\mathcal{A}}(\Delta \X^{\star}) , \Delta \bGamma^{\star} ]$ and that every element of $\mathcal{A}$ has rank at most $3r$. Thus by the triangle inequality,
\begin{align}\label{eq:LPGD_last_gap_bound}
\lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Delta \X^{\star}, \Delta \bGamma^{\star} \right) \rVert_{F} &\le \lVert \Pi_{\mathcal{A}}( \Delta \X^{\star}) \rVert_{F} + \lVert \Delta \bGamma^{\star} \rVert_{F} \le \sqrt{ 3r} \lVert \Delta \X^{\star} \rVert_{2} + \lVert \Delta \bGamma^{\star} \rVert_{F}.
\end{align}
This completes the proof of \textbf{(i)}.
Next, we show \textbf{(ii)}. Suppose $\mathbf{Z}^{\star}$ is a stationary point of $f$ over $\Param$. Then $\Delta \bZ^{\star}=O$, so the first part of the assertion follows from \textbf{(i)}. For the second part, suppose that $\nabla f$ is $L'$-Lipschitz over $\Param$ for some $L'>0$. Then by the Cauchy--Schwarz inequality,
\begin{align}
\left| f(\bZ_{n}) - f(\bZ^{\star}) \right| & = \left| \int_{0}^{1} \left\langle \nabla f \left(\bZ_{n} + s(\bZ^{\star} - \bZ_{n}) \right),\, \bZ_{n}-\bZ^{\star} \right\rangle \,ds \right| \\
&\le \int_{0}^{1}\left\lVert \nabla f \left(\bZ_{n} + s(\bZ^{\star} - \bZ_{n}) \right) \right\rVert \lVert \bZ_{n}-\bZ^{\star} \rVert \,ds \\
&\le \int_{0}^{1} \left( \lVert \nabla f (\bZ^{\star}) \rVert + (1-s) L'\lVert \bZ_{n}-\bZ^{\star} \rVert \right) \lVert \bZ_{n}-\bZ^{\star} \rVert \,ds \\
&\le \left( \lVert \nabla f (\bZ^{\star}) \rVert + L'\lVert \bZ_{n}-\bZ^{\star} \rVert \right) \lVert \bZ_{n}-\bZ^{\star} \rVert.
\end{align}
Then \eqref{eq:PSGD_linear_conv2} follows by combining the above inequality with \textbf{(i)}.
\end{proof}
\begin{remark}\label{rmk:pf_thm_LPGD}
Note that in \eqref{eq:LPGD_last_gap_bound}, we could have used the following crude bound
\begin{align}\label{eq:LPGD_last_gap_bound2}
\left\lVert \Pi_{\mathcal{A}\times \R^{d_{3}\times d_{4}}}\left( \Delta \X^{\star}, \Delta \bGamma^{\star} \right) \right\rVert_{F} \le \left\lVert \left[ \Delta \X^{\star}, \Delta \bGamma^{\star} \right] \right\rVert_{F} &\le
\lVert \Delta \X^{\star} \rVert_{F} + \lVert \Delta \bGamma^{\star} \rVert_{F} \\
&\le \sqrt{\rank(\Delta \X^{\star})} \lVert \Delta \X^{\star} \rVert_{2} + \lVert \Delta \bGamma^{\star} \rVert_{F},
\end{align}
which is also the bound we would have obtained if we chose the trivial linear subspace $\mathcal{A}=\R^{d_{1}\times d_{2}}$ in the proof of Theorem \ref{thm:CALE_LPGD} above. While we know $\rank(\X^{\star})\le r$, we do not have an a priori bound on $\rank(\Delta \X^{\star})$, which could be much larger than $3r$. The smarter choice of the subspace $\mathcal{A}$ used in the proof of Theorem \ref{thm:CALE_LPGD} ensures that we only need the factor $\sqrt{3r}$ in place of the unknown factor $\sqrt{\rank(\Delta \X^{\star})}$, as in \eqref{eq:LPGD_last_gap_bound}.
\end{remark}
\subsection{Proof of Theorems \ref{thm:SDL_LPGD} and \ref{thm:SDL_LPGD_feat}}
Next, we prove Theorems \ref{thm:SDL_LPGD} and \ref{thm:SDL_LPGD_feat}, which amounts to verifying that the hypotheses of Theorem \ref{thm:CALE_LPGD} hold for the SDL problems in \eqref{eq:SDL_filt_1} and \eqref{eq:SDL_feat_1}.
We begin with some preliminary computations. Let $\a_{s}$ denote the activation corresponding to the $s$th sample (see \eqref{eq:ASDL_1}). More precisely, $\a_{s}=\A^{T}\x_{s}+\bGamma^{T}\x'_{s}$ for the filter-based model with $\A\in \R^{p\times \kappa}$, and $\a_{s}=\A[:,s]+\bGamma^{T}\x'_{s}$ for the feature-based model with $\A\in \R^{\kappa\times n}$. In both cases, $\B\in \R^{p\times n}$ and $\bGamma\in \R^{q\times \kappa}$. Then the objective function $f$ in \eqref{eq:SDL_filt_CALE} can be written as
\begin{align}\label{eq:f_SDL_pf0}
f(\A, \B, \bGamma)&:=\left( -\sum_{s=1}^{n} \sum_{j=0}^{\kappa} \mathbf{1}(y_{s}=j) \log g_{j}( \a_{s} ) \right) + \xi \lVert \X_{\textup{data}} -\B\rVert_{F}^{2} + \nu \left( \lVert \A \rVert_{F}^{2}+\lVert \bGamma \rVert_{F}^{2} \right) \\
&= \sum_{s=1}^{n} \left( \log \left( 1+\sum_{c=1}^{\kappa} h( \a_{s}[c] ) \right) - \sum_{j=1}^{\kappa} \mathbf{1}(y_{s}=j) \log h( \a_{s}[j] ) \right) + \xi \lVert \X_{\textup{data}} -\B\rVert_{F}^{2} + \nu \left( \lVert \A \rVert_{F}^{2} + \lVert \bGamma \rVert_{F}^{2} \right),
\end{align}
where $\a_{s}[i]\in \R$ denotes the $i$th component of $\a_{s}\in \R^{\kappa}$. In the proofs below, we compute the Hessian of $f$ above explicitly for the filter- and feature-based cases and use Theorem \ref{thm:CALE_LPGD} to derive the result. Recall the functions $\dot{\h}$ and $\ddot{\H}$ introduced in \ref{assumption:A4}. For each label $y\in \{0,\dots,\kappa\}$ and activation $\a\in \R^{\kappa}$, the negative log-likelihood of observing label $y$ under the probability distribution $\g(\a)$ defined in \eqref{eq:ell_log_likelihood} can be written as
\begin{align}
\ell_{0}(y,\a) := \log\left( 1+\sum_{c=1}^{\kappa} h(\a[c]) \right) - \sum_{c=1}^{\kappa}\mathbf{1}(y=c) \log h(\a[c]).
\end{align}
Then we have the following relations
\begin{align}
\nabla_{\a} \ell_{0}(y,\a) = \dot{\h}(y,\a), \qquad \nabla_{\a}\nabla_{\a^{T}} \ell_{0}(y,\a) = \ddot{\H}(y,\a).
\end{align}
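As a quick sanity check on these relations, one can compare the closed-form gradient with a finite-difference approximation of $\ell_{0}$ for a concrete choice of the link function. The following minimal sketch is illustrative only and assumes $h=\exp$ (so that $\g$ is the softmax over $\kappa$ classes plus a baseline class $y=0$); it is not part of the argument.
\begin{verbatim}
import numpy as np

# Finite-difference check of the gradient relation above, assuming h = exp.
kappa = 5
rng = np.random.default_rng(0)
a = rng.normal(size=kappa)
y = 3  # a label in {0, 1, ..., kappa}; y = 0 is the baseline class

def ell0(y, a):
    # negative log-likelihood of label y under g(a), with h = exp
    val = np.log(1.0 + np.sum(np.exp(a)))
    if y >= 1:
        val -= a[y - 1]          # log h(a[y]) = a[y] when h = exp
    return val

def hdot(y, a):
    # candidate formula for the gradient of ell0 when h = exp:
    # class probabilities minus the one-hot encoding of y
    p = np.exp(a) / (1.0 + np.sum(np.exp(a)))
    e = np.zeros(kappa)
    if y >= 1:
        e[y - 1] = 1.0
    return p - e

eps = 1e-6
num_grad = np.array([(ell0(y, a + eps * np.eye(kappa)[j])
                      - ell0(y, a - eps * np.eye(kappa)[j])) / (2 * eps)
                     for j in range(kappa)])
print(np.max(np.abs(num_grad - hdot(y, a))))   # should be of order 1e-9
\end{verbatim}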
\begin{proof}[\textbf{Proof of Theorem} \ref{thm:SDL_LPGD}]
Let $f=f_{\textup{SDL-filt}}$ denote the loss function for the filter-based SDL model in \eqref{eq:SDL_filt_CALE}. Fix $\bZ_{1},\bZ_{2}\in \Param\subseteq \R^{d_{1}\times d_{2}}\times \R^{d_{3}\times d_{4}} $. By \ref{assumption:A1} $\Param$ is convex, so $t\bZ_{1} + (1-t) \bZ_{2}\in \Param$ for all $t\in [0,1]$. Then by the mean value theorem, there exists $t^{*}\in [0,1]$ such that for $\bZ^{*}=t^{*}\bZ_{1} + (1-t^{*})\bZ_{2}$,
\begin{align}\label{eq:RSC_thm_pf}
f(\bZ_{2}) - f(\bZ_{1}) - \langle \nabla f(\bZ_{1}), \, \bZ_{2}-\bZ_{1} \rangle = \frac{1}{2}\left( \vect(\bZ_{2}) - \vect(\bZ_{1}) \right)^{T} \nabla_{\vect(\bZ)}\nabla_{\vect(\bZ)^{T}} f( \bZ^{*} ) \left( \vect(\bZ_{2}) - \vect(\bZ_{1}) \right).
\end{align}
Hence, according to Theorem \ref{thm:CALE_LPGD}, it suffices to verify that
for some $\mu,L>0$ such that $L/\mu<3$,
\begin{align}
\mu \I \preceq \nabla_{\vect(\bZ)}\nabla_{\vect(\bZ)^{T}} f( \bZ^{*} ) \preceq L \I
\end{align}
for all $\bZ^{*}=[\X,\bGamma]$ with $\rank(\X)\le r$.
%According to Theorem \ref{thm:CALE_LPGD}. it boils down to computing the gradient and the Hessian of the SDL loss function $f_{\textup{SDL-filt}}$ in \eqref{eq:SDL_filt_CALE}.
To this end, let $\a_{s}$ denote the activation corresponding to the $s$th sample (see \eqref{eq:ASDL_1}). More precisely, $\a_{s}=\A^{T}\x_{s}+\bGamma^{T}\x'_{s}$ for the filter-based model we consider here. As discussed above, the objective function $f$ in \eqref{eq:SDL_filt_CALE} can be written as \eqref{eq:f_SDL_pf0}. Denote
\begin{align}\label{eq:thm_SDL_filt_as_def}
\a_{s}= \A^{T}\x_{s}+\bGamma^{T}\x_{s}' =: \vast[ \left\langle \underbrace{\begin{bmatrix}
\A[:,j] \\ \bGamma[:,j] \end{bmatrix}}_{=:\u_{j}} ,\, \underbrace{\begin{bmatrix}
\x_{s} \\ \x_{s}' \end{bmatrix}}_{=:\bphi_{s}} \right\rangle; \,\, j=1,\dots,\kappa \vast]^{T}\in \R^{\kappa},
\end{align}
where we have introduced the notations $\u_{j}\in \R^{(p+q)\times 1}$ for $j=1,\dots,\kappa$ and $\bphi_{s}\in \R^{(p+q) \times 1 }$ for $s=1,\dots, n$. Denote $\U:=[\u_{1},\dots,\u_{\kappa}]\in \R^{(p+q)\times \kappa}$, which is a matrix parameter that combines $\A$ and $\bGamma$. Also denote $\bPhi=(\bphi_{1},\dots,\bphi_{n})\in \R^{(p+q)\times n}$, the combined feature matrix of the $n$ observations. Then we can compute the gradient and the Hessian of $f$ above as follows:
\begin{align}\label{eq:SDL_filt_gradients}
&\nabla_{\vect(\U)} f(\U,\B) = \left( \sum_{s=1}^{n} \dot{\h}(y_{s},\U^{T}\bphi_{s}) \otimes \bphi_{s} \right) + 2\nu \vect(\U), \quad \nabla_{\B} f(\U,\B) = 2\xi (\B-\X_{\textup{data}}) \\
&\nabla_{\vect(\U)}\nabla_{\vect(\U)^{T}} f(\U,\B) = \left( \sum_{s=1}^{n} \ddot{\H}(y_{s},\U^{T}\bphi_{s}) \otimes \bphi_{s}\bphi_{s}^{T}\right) + 2\nu \I_{(p+q)\kappa}, \\
&\nabla_{\vect(\B)}\nabla_{\vect(\B)^{T}} f(\U,\B) = 2\xi \I_{pn}, \qquad \nabla_{\vect(\B)}\nabla_{\vect(\U)^{T}} f(\U,\B) = O,
\end{align}
where $\otimes$ above denotes the Kronecker product and the functions $\dot{\h}$ and $\ddot{\H}$ are defined in \eqref{eq:Hddot_def}.
Recall that the eigenvalues of $\A\otimes \B$, where $\A$ and $\B$ are two square matrices, are given by $\lambda_{i}\mu_{j}$, where $\lambda_{i}$ and $\mu_{j}$ run over all eigenvalues of $\A$ and $\B$, respectively. Hence, denoting $\H_{\U}:=\sum_{s=1}^{n} \ddot{\H}(y_{s},\U^{T}\bphi_{s}) \otimes \bphi_{s}\bphi_{s}^{T}$ and using \ref{assumption:A2}-\ref{assumption:A3}, we can deduce
\begin{align}\label{eq:MNL_evals_bounds}
\lambda_{\min}(\H_{\U}) &\ge n \lambda_{\min}\left( n^{-1} \bPhi \bPhi^{T} \right) \min_{1\le s \le n,\, \U} \lambda_{\min}\left( \ddot{\H}(y_{s},\U^{T}\bphi_{s}) \right) \ge n \delta^{-}\alpha^{-} \ge n \mu^{*}>0, \\
\lambda_{\max}(\H_{\U}) &\le n \lambda_{\max}\left( n^{-1}\bPhi \bPhi^{T} \right) \max_{1\le s \le n,\, \U} \lambda_{\max}\left( \ddot{\H}(y_{s},\U^{T}\bphi_{s}) \right) \le n \delta^{+}\alpha^{+}\le n L^{*}.
\end{align}
This holds for all $\A,\B,\bGamma$ such that $\rank([\A,\B])\le r$ and under the convex constraint in \ref{assumption:A1} (also recall that $\U$ is the vertical stack of $\A$ and $\bGamma$). Hence we conclude that the objective function $f_{\textup{SDL-filt}}$ in \eqref{eq:SDL_filt_CALE} verifies the RSC and RSM properties (Def. \ref{def:RSC}) with parameters $\mu=\min(2\xi,2\nu+n \mu^{*})$ and $L=\max(2\xi, 2\nu + n L^{*})$. It is straightforward to verify that $L/\mu<3$ if and only if \eqref{eq:thm_SDL_LGPD_cond} holds. This verifies \eqref{eq:RSC_thm_pf} for the chosen parameters $\mu$ and $L$. Then the rest follows from Theorem \ref{thm:CALE_LPGD}.
\end{proof}
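The eigenvalue estimate \eqref{eq:MNL_evals_bounds} in the proof above rests on the standard fact that the eigenvalues of a Kronecker product $\A\otimes\B$ are the pairwise products of the eigenvalues of $\A$ and $\B$. A minimal numerical sanity check of this fact (illustrative only, using hypothetical random symmetric matrices) is sketched below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); A = A + A.T   # symmetric test matrices
B = rng.normal(size=(4, 4)); B = B + B.T

eigs_kron = np.sort(np.linalg.eigvalsh(np.kron(A, B)))
eigs_prod = np.sort(np.outer(np.linalg.eigvalsh(A),
                             np.linalg.eigvalsh(B)).ravel())
print(np.max(np.abs(eigs_kron - eigs_prod)))   # ~1e-12: the spectra agree
\end{verbatim}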
Next, we prove Theorem \ref{thm:SDL_LPGD_feat}, the exponential convergence of Algorithm \ref{alg:SDL_feat_LPGD} for the feature-based SDL in \eqref{eq:SDL_feat_1}.
\begin{proof}[\textbf{Proof of Theorem} \ref{thm:SDL_LPGD_feat}]
We will use the same setup as in the proof of Theorem \ref{thm:SDL_LPGD}. The main part of the argument is the computation of the Hessian of the loss function $f:=f_{\textup{SDL-feat}}$ in \eqref{eq:SDL_feat_CALE}, which is straightforward but a bit more involved than the corresponding computation for the filter-based case in the proof of Theorem \ref{thm:SDL_LPGD}. To this end, let $\a_{s}$ denote the activation corresponding to the $s$th sample (see \eqref{eq:ASDL_1}). More precisely, $\a_{s}=\A[:,s]+\bGamma^{T}\x'_{s}$ for the feature-based model we consider here. Recall the objective function $f$ in \eqref{eq:SDL_feat_CALE} re-written in \eqref{eq:f_SDL_pf0}. We will compute the gradient and the Hessian of $f$ below.
Recall that for the feature-based model we consider here, we have $\a_{s}=\A[:,s]+\bGamma^{T}\x'_{s}$, where in this case $\A\in \R^{\kappa\times n}$ (see \eqref{eq:SDL_feat_CALE}). Denote
\begin{align}
\a_{s}=\I_{\kappa} \A[:,s]+\bGamma^{T}\x_{s}' =: \vast[ \left\langle \underbrace{\begin{bmatrix}
\I_{\kappa}[:,j] \\ \bGamma[:,j] \end{bmatrix}}_{=:\v_{j}} ,\, \underbrace{\begin{bmatrix}
\A[:,s] \\ \x_{s}' \end{bmatrix}}_{=:\bpsi_{s}} \right\rangle; \,\, j=1,\dots,\kappa \vast]^{T}\in \R^{\kappa}.
\end{align}
Note that for the feature-based model here, $\A[:,s]$ is concatenated with the auxiliary covariate $\x'_{s}$, whereas we concatenated $\A[:,j]$ with $\bGamma[:,j]$ for the filter-based case (see \eqref{eq:thm_SDL_filt_as_def})\footnote{This is because for the feature-based model, the column $\A[:,s]\in \R^{\kappa}$ for $s=1,\dots,n$ represent a feature of the $s$th sample, whereas for the filter-based model, $\A[:,j]$ for $j=1,\dots,\kappa$ represents the $j$th filter that is applied to the feature $\x_{s}$ of the $s$th sample.}.
A straightforward computation shows the following gradient formulas:
\begin{align}\label{eq:SDL_feat_gradients}
&\nabla_{\vect(\bGamma)} f(\A,\B,\bGamma) = \left( \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}) \otimes \x'_{s} \right) + 2\nu \vect(\bGamma), \\
&\nabla_{\vect(\A)} f(\V,\B) = \begin{bmatrix}
\dot{\h}(y_{1},\a_{1}) \\
\vdots \\
\dot{\h}(y_{n},\a_{n})
\end{bmatrix}
+ 2\nu \vect(\A), \quad \nabla_{\B} f(\A,\B,\bGamma) = 2\xi(\B-\X_{\textup{data}}) \\
&\nabla_{\vect(\bGamma)}\nabla_{\vect(\bGamma)^{T}} f(\A,\B,\bGamma) = \left( \sum_{s=1}^{n} \ddot{\H}(y_{s},\a_{s}) \otimes \x_{s}'(\x_{s}')^{T}\right) + 2\nu \I_{q\kappa}, \\
&\nabla_{\vect(\A)}\nabla_{\vect(\A)^{T}} f(\A,\B,\bGamma) = \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots, \ddot{\H}(y_{n},\a_{n}) \right) + 2\nu \I_{\kappa n} \\
&\nabla_{\vect(\bGamma)}\nabla_{\vect(\A)^{T}} f(\A,\B,\bGamma) = \left[ \ddot{\H}(y_{1},\a_{1})\otimes \x'_{1}, \dots, \ddot{\H}(y_{1},\a_{n})\otimes \x'_{n} \right] \in \R^{\kappa q \times \kappa n} \\
&\nabla_{\vect(\B)}\nabla_{\vect(\B)^{T}} f(\A,\B,\bGamma) = 2\xi \I_{pn}, \qquad \nabla_{\vect(\B)}\nabla_{\vect(\V)^{T}} f(\A,\B,\bGamma) = O.
\end{align}
From this we will compute the eigenvalues of the Hessian $\H_{\textup{feat}}$ of the loss function $f$. In order to illustrate our computation in a simple setting, we first assume $\kappa=1=q$, which corresponds to binary classification $\kappa=1$ with one-dimensional auxiliary covariates $q=1$. In this case, we have
\begin{align}
\H_{\textup{feat}}&:=\nabla_{\vect(\A,\bGamma,\B)} \nabla_{\vect(\A,\bGamma,\B)^{T}} f(\A,\B,\bGamma) \\
&=\begin{bmatrix}
\ddot{h}(y_{1},\a_{1})+2\nu & 0 & \dots & 0 & \ddot{h}(y_{1},\a_{1}) x_{1}' & O \\
0 & \ddot{h}(y_{2},\a_{2})+2\nu &\dots & 0 & \ddot{h}(y_{2},\a_{2}) x_{2}' & O \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & \dots & 0 & \ddot{h}(y_{n},\a_{n})+2\nu & \ddot{h}(y_{n},\a_{n}) x_{n}' & O \\
\ddot{h}(y_{1},\a_{1}) x_{1}' & \ddot{h}(y_{2},\a_{2}) x_{2}' & \dots & \ddot{h}(y_{n},\a_{n}) x_{n}' & \left( \frac{1}{n}\sum_{s=1}^{n}\ddot{h}(y_{s},\a_{s}) (x_{s}')^{2} \right) + 2\nu & O \\
O&O&\dots &O &O & 2\xi \I_{pn}
\end{bmatrix},
\end{align}
where we denoted $\ddot{h}=\ddot{h}_{11}\in \R$ and $x'_{s}=\x_{s}'\in \R$ for $s=1,\dots,n$. In order to compute the eigenvalues of the above matrix, we will use the following formula for the determinant of a $3\times 3$ block matrix (with $O$ representing zero matrices of appropriate sizes):
\begin{align}
\det\left( \begin{bmatrix}
A & B & O \\
B^{T} & C & O \\
O & O & D
\end{bmatrix} \right)
= \det\left( C - B^{T}A^{-1}B \right) \det(A) \det(D).
\end{align}
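This block-determinant formula is valid whenever $A$ is invertible. As a sanity check (illustrative only, with hypothetical random blocks; not part of the proof), one can verify it numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
nA, nC, nD = 4, 3, 2
A = rng.normal(size=(nA, nA)); A = A @ A.T + np.eye(nA)   # invertible block
B = rng.normal(size=(nA, nC))
C = rng.normal(size=(nC, nC)); C = C + C.T
D = rng.normal(size=(nD, nD))

M = np.block([[A,                  B,                  np.zeros((nA, nD))],
              [B.T,                C,                  np.zeros((nC, nD))],
              [np.zeros((nD, nA)), np.zeros((nD, nC)), D                 ]])
lhs = np.linalg.det(M)
rhs = (np.linalg.det(C - B.T @ np.linalg.inv(A) @ B)
       * np.linalg.det(A) * np.linalg.det(D))
print(abs(lhs - rhs) / abs(lhs))   # relative error of order 1e-14
\end{verbatim}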
This yields the following simple formula for the characteristic polynomial of $\H_{\textup{feat}}$:
\begin{align}
\det( \H_{\textup{feat}} - \lambda \I) &= \left( \sum_{s=1}^{n} \ddot{h}(y_{s},\a_{s}) (x_{s}')^{2} - \sum_{s=1}^{n} \frac{(\ddot{h}(y_{s},\a_{s}))^{2} (x_{s}')^{2}}{\ddot{h}(y_{s},\a_{s})+2\nu } + 2\nu - \lambda \right) (2\xi - \lambda)^{pn} \prod_{s=1}^{n} \left( \ddot{h}(y_{s},\a_{s}) + 2\nu- \lambda\right)\\
&=\left( \sum_{s=1}^{n} \frac{2\nu \ddot{h}(y_{s},\a_{s}) (x_{s}')^{2}}{ \ddot{h}(y_{s},\a_{s})+2\nu } + 2\nu - \lambda \right) (2\xi - \lambda)^{pn} \prod_{s=1}^{n} \left( \ddot{h}(y_{s},\a_{s}) + 2\nu- \lambda\right).
\end{align}
By \ref{assumption:A4}, we know that $\ddot{h}(y_{s},\a_{s})>0$ for all $s=1,\dots,n$, so the first term in the parenthesis in the above display is lower bounded by $2\nu-\lambda$. It follows that
\begin{align}
\lambda_{\min}(\H_{\textup{feat}}) & \ge \min(2\xi, \alpha^{-}+2\nu), \\
\lambda_{\max}(\H_{\textup{feat}}) &\le \max\left( 2\nu + \alpha^{+}\sum_{s=1}^{n} (x_{s}')^{2} ,\, 2\xi,\, \alpha^{+} + 2\nu\right).
\end{align}
Now we generalize the above computation to the general $\kappa,q\ge 1$ case. First, note the general form of the Hessian below:
\begin{align}
&\H_{\textup{feat}}:=\nabla_{\vect(\A,\bGamma,\B)} \nabla_{\vect(\A,\bGamma,\B)^{T}} f(\A,\B,\bGamma) \\
&\quad =
\begin{bmatrix}
\ddot{\H}(y_{1},\a_{1})+2\nu \I_{\kappa} & 0 & \dots & 0 & (\ddot{\H}(y_{1},\a_{1})\otimes \x_{1}')^{T} & O \\
0 & \ddot{\H}(y_{2},\a_{2})+2\nu \I_{\kappa} &\dots & 0 & (\ddot{\H}(y_{2},\a_{2})\otimes \x_{2}')^{T} & O \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & \dots & 0 & \ddot{\H}(y_{n},\a_{n})+2\nu \I_{\kappa} & (\ddot{\H}(y_{n},\a_{n})\otimes \x_{n}')^{T} & O \\
\ddot{\H}(y_{1},\a_{1})\otimes \x_{1}' & \ddot{\H}(y_{2},\a_{2})\otimes \x_{2}' & \dots & \ddot{\H}(y_{n},\a_{n})\otimes \x_{n}' & \begin{matrix} \sum_{s=1}^{n}\ddot{\H}(y_{s},\a_{s}) \otimes \x_{s}'(\x_{s}')^{T} \\ + 2\nu \I_{q\kappa} \end{matrix} & O \\
O&O&\dots &O &O & 2\xi \I_{pn}
\end{bmatrix}.
\end{align}
Note that for any square symmetric matrix $B$ and a column vector $\x$ of matching size,
\begin{align}
B\otimes \x \x^{T} - (B\otimes \x)(B+\lambda \I)^{-1} (B\otimes \x)^{T} &= \left( B-B(B+\lambda \I)^{-1} B \right) \otimes (\x\x^{T}) \\
&= \lambda (B+\lambda \I)^{-1} B \otimes \x\x^{T} \\
& \preceq B \otimes \x\x^{T},
\end{align}
where the last semidefinite inequality uses $\lambda(B+\lambda \I)^{-1}B\preceq B$, and the second equality follows from the Woodbury identity for the matrix inverse (e.g., see [28]). Hence by a similar computation as before, we obtain
\begin{align}
\det( \H_{\textup{feat}} - \lambda \I)
%&\quad = \left( \sum_{s=1}^{n} \ddot{\H}(y_{s},\a_{s})\otimes \x_{s}'(\x_{s})^{T} - \sum_{s=1}^{n}\ddot{\H}(y_{s},\a_{s}) \left( \ddot{\H}(y_{s},\a_{s}) + 2\nu \I\right)^{-1} \ddot{\H}(y_{s},\a_{s}) \otimes \x_{s}'(\x_{s}')^{T} + 2\nu \I_{q\kappa} - \lambda \right) (2\xi - \lambda) \prod_{s=1}^{n} \left( \ddot{h}(\y_{s},\a_{s}) + 2\nu- \lambda\right)\\
& = \det\left( \sum_{s=1}^{n} 2\nu \left( \ddot{\H}(y_{s},\a_{s}) + 2\nu \I_{\kappa} \right)^{-1} \ddot{\H}(y_{s},\a_{s}) \otimes \x_{s}'(\x_{s}')^{T} + (2\nu -\lambda) \I_{q\kappa} \right) (2\xi - \lambda)^{pn} \\
&\hspace{5cm} \times \prod_{s=1}^{n} \det\left( \ddot{\H}(y_{s},\a_{s}) + (2\nu - \lambda)\I_{\kappa} \right).
\end{align}
It follows that
\begin{align}
\lambda_{\min}(\H_{\textup{feat}}) & \ge \min(2\xi, \alpha^{-}+2\nu), \\
\lambda_{\max}(\H_{\textup{feat}}) &\le \max\left(2\nu +\alpha^{+} n \lambda_{\max}\left( n^{-1}\X_{\textup{aux}} \X_{\textup{aux}}^{T} \right),\, 2\xi,\, \alpha^{+} + 2\nu\right).
\end{align}
Then the rest follows from Theorem \ref{thm:CALE_LPGD}.
\end{proof}
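One step in the proof above uses the algebraic identity $B-B(B+\lambda \I)^{-1}B=\lambda(B+\lambda \I)^{-1}B$ for a symmetric positive semidefinite matrix $B$ and $\lambda>0$. A minimal numerical check (illustrative only, with a hypothetical random PSD matrix) is sketched below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n, lam = 4, 0.7
B = rng.normal(size=(n, n)); B = B @ B.T          # symmetric PSD test matrix
R = np.linalg.inv(B + lam * np.eye(n))
lhs = B - B @ R @ B
rhs = lam * R @ B
print(np.max(np.abs(lhs - rhs)))                  # ~1e-14
\end{verbatim}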
\section{Proof of Theorem \ref{thm:SDL_BCD}}
In this section, we prove Theorem \ref{thm:SDL_BCD} only for the case of filter-based SDL in \eqref{eq:ASDL_1}. An almost identical argument will show the assertion for the feature-based case.
Recall the filter-based SDL loss function $f(\A,\B,\bGamma)$ in \eqref{eq:f_SDL_pf0} in terms of the combined variables $[\A,\B,\bGamma]$, where $\A=\W\Beta$ and $\B=\W\H$. For convenience, recall that $\A\in \R^{p\times \kappa}$, $\W\in \R^{p\times r}$, $\Beta\in \R^{r\times \kappa}$, $\H\in \R^{r\times n}$, and $\bGamma\in \R^{q\times \kappa}$. In the following computations, we will use the \textit{commutation matrix} $\C^{(a,b)}$, which is a special instance of an $ab\times ab$ permutation matrix. Namely, for all integers $a,b\ge 1$, there exists a unique matrix $\C^{(a,b)}\in \{0,1\}^{ab\times ab}$ such that for all $A\in \R^{a\times b}$, we have $\C^{(a,b)}\vect(A) = \vect(A^{T})$. Note that $(\C^{(a,b)})^{T}=\C^{(b,a)}$. Furthermore, $(\C^{(a,b)})^{T}\C^{(a,b)}=\I_{ab}$ since $(\C^{(a,b)})^{T}\C^{(a,b)}\vect(A)=\C^{(b,a)}\vect(A^{T})=\vect(A)$. Hence $\C^{(a,b)}$ is positive semi-definite. Throughout this section, we denote $\bZ=[\W,\H,\Beta,\bGamma]$ for the combined SDL parameters.
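A commutation matrix can be constructed explicitly by suitably permuting the rows of an identity matrix. The following minimal sketch (illustrative only; the sizes and matrices are hypothetical) checks the defining property $\C^{(a,b)}\vect(A)=\vect(A^{T})$ and the relation $(\C^{(a,b)})^{T}\C^{(a,b)}=\I_{ab}$ numerically.
\begin{verbatim}
import numpy as np

def commutation_matrix(a, b):
    # C of size (ab x ab) with C @ vec(A) = vec(A.T) for A of shape (a, b),
    # where vec(.) stacks columns (column-major order).
    C = np.zeros((a * b, a * b))
    for i in range(a):
        for j in range(b):
            # vec(A) index of A[i, j] is j*a + i; vec(A.T) index is i*b + j
            C[i * b + j, j * a + i] = 1.0
    return C

a, b = 3, 4
rng = np.random.default_rng(3)
A = rng.normal(size=(a, b))
C = commutation_matrix(a, b)
print(np.allclose(C @ A.flatten(order="F"), A.T.flatten(order="F")))  # True
print(np.allclose(C.T @ C, np.eye(a * b)))                            # True
\end{verbatim}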
\begin{lemma}[Derivatives of the filter-based SDL objective in separate variables]
\label{lem:SDL_filt_BCD_derivatives}
Let $L(\bZ)$ denote the objective of the filter-based SDL in \eqref{eq:ASDL_1}. Suppose \ref{assumption:A4} holds. Recall $\dot{\h}$ and $\ddot{\H}$ defined in \eqref{eq:Hddot_def}. Let $\a_{s}:=\Beta^{T}\W^{T}\x_{s}+\bGamma^{T}\x'_{s}$ for $s=1,\dots,n$ and $\K:=[\dot{\h}(y_{1},\a_{1}),\dots, \dot{\h}(y_{n},\a_{n})]\in \R^{\kappa\times n}$. Then we have
\begin{align}\label{eq:SDL_filt_BCD_derivatives1}
\nabla_{\W} \, L(\bZ) &= \X_{\textup{data}} \K^{T} \Beta^{T} + 2\xi(\W\H-\X_{\textup{data}})\H^{T},\qquad
\nabla_{\Beta} \, L(\bZ) = \W^{T} \X_{\textup{data}} \K^{T} \\
\nabla_{\bGamma} \, L(\bZ) &= \X_{\textup{aux}}\K^{T}, \qquad \nabla_{\H} \, L(\bZ) = 2\xi \W^{T}(\W\H-\X_{\textup{data}}).
\end{align}
Furthermore, for diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\W)}\nabla_{\vect(\W)^{T}} \, L(\bZ)
&= (\Beta \otimes \X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\Beta \otimes \X_{\textup{data}})^{T} +2\xi (\H\H^{T} \otimes \I_{p} ), \\
\nabla_{\vect(\H)}\nabla_{\vect(\H)^{T}} \, L(\bZ) & = 2\xi (\I_{n}\otimes \W^{T}\W), \\
\nabla_{\vect(\Beta)}\nabla_{\vect(\Beta)^{T}} \, L(\bZ)
&=(\I_{\kappa}\otimes \W^{T}\X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}, \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\bGamma)^{T}} \, L(\bZ)
&= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{r}\otimes \X_{\textup{aux}})^{T}.
\end{align}
Lastly, for the off-diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\Beta)}\nabla_{\vect(\W)^{T}} \, L(\bZ)
&= \C^{(\kappa, n)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})\diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\Beta \otimes \X_{\textup{data}})^{T} + \C^{(\kappa, r)} (\I_{r}\otimes \X_{\textup{data}} \K^{T} )^{T}, \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\W)^{T}} \, L(\bZ) &= O,\\
\nabla_{\vect(\H)}\nabla_{\vect(\W)^{T}} \, L(\bZ) &= 2\xi \left[ (\H^{T}\otimes \W^{T}) + (\I_{r}\otimes \H^{T}\W^{T}) - \C^{(n\times r)} (\I_{r}\otimes \X_{\textup{data}})^{T} \right],\\
\nabla_{\vect(\Beta)}\nabla_{\vect(\H)^{T}} \, L(\bZ) &= \nabla_{\vect(\bGamma)}\nabla_{\vect(\H)^{T}} \, L(\bZ) = O,\\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\Beta)^{T}} \, L(\bZ) &= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}.
\end{align}
\end{lemma}
\begin{proof}
Setting $\nu=0$ in \eqref{eq:f_SDL_pf0}, we have $L(\bZ)=f(\A,\B,\bGamma)$. Recall the gradients of $f(\A,\B,\bGamma)$ in \eqref{eq:SDL_filt_gradients}. By using the chain rule and noting that $\A[:,j]=\W \Beta[:,j]$, we can compute
\begin{align}
\nabla_{\W} f(\A,\B,\bGamma) &= \left( \sum_{s=1}^{n} \sum_{j=1}^{\kappa} \frac{\partial f(\A,\B,\bGamma)}{\partial \A[:,j]} \frac{\partial \A[:,j]}{\partial \W}\right) + 2\xi(\W\H-\X_{\textup{data}})\H^{T} \\
&= \sum_{s=1}^{n}\sum_{j=1}^{\kappa} \x_{s} \dot{h}_{j}(y_{s},\a_{s}) \Beta[:,j]^{T} + 2\xi(\W\H-\X_{\textup{data}})\H^{T} \\
&= \X_{\textup{data}} \K^{T} \Beta^{T} + 2\xi(\W\H-\X_{\textup{data}})\H^{T},
\end{align}
By a similar computation, we can also compute, for each $j=1,\dots,\kappa$,
\begin{align}
\nabla_{\Beta[:,j]} L(\bZ) &= \sum_{s=1}^{n} \frac{\partial \A[:,j]}{\partial \Beta[:,j]}\frac{\partial f(\A,\B,\bGamma)}{\partial \A[:,j]} = \sum_{s=1}^{n} \W^{T} \dot{h}_{j}(y_{s},\a_{s}) \x_{s},
\end{align}
so we get $\nabla_{\Beta} f(\A,\B,\bGamma) = \W^{T} \X_{\textup{data}} \K^{T}$. A similar computation shows the remaining two gradients.
Next, recall the relations for vectorizing products of matrices: for $A\in \R^{a\times b}$, $B\in \R^{b\times c}$, and $C\in \R^{c\times d}$,
\begin{align}
\vect(AB) &= (\I_{c}\otimes A) \vect(B) = (B^{T} \otimes \I_{a}) \vect(A), \\
\vect(ABC) &= (C^{T}\otimes A)\vect(B) = (\I_{d}\otimes AB) \vect(C) = (C^{T}B^{T}\otimes \I_{a}) \vect(A).
\end{align}
From this the previous calculation yields
\begin{align}
\nabla_{\vect(\W)} f(\A,\B,\bGamma)
&= \vect\left( \X_{\textup{data}} \K^{T} \Beta^{T} \right) + 2\xi \vect(\W\H\H^{T}) - 2\xi \vect(\X_{\textup{data}} \H^{T}) \\
&= (\Beta \otimes \X_{\textup{data}}) \vect(\K^{T}) + 2\xi (\H\H^{T} \otimes \I_{p} ) \vect(\W) - 2\xi \vect(\X_{\textup{data}} \H^{T}).
\end{align}
Note that $\vect(\K^{T})^{T}=(\C^{(\kappa,n)} \vect(\K))^{T}=\vect(\K)^{T} \C^{(n,\kappa)}$. Hence we get
\begin{align}
\nabla_{\vect(\W)}\nabla_{\vect(\W)^{T}} f(\A,\B,\bGamma)
&= \nabla_{\vect(\W)}\left( \vect(\K)^{T} \C^{(n,\kappa)} (\Beta \otimes \X_{\textup{data}})^{T} + 2\xi \vect(\W)^{T} (\H\H^{T} \otimes \I_{p} ) - 2\xi \vect(\X_{\textup{data}} \H^{T})^{T} \right)\\
&= (\Beta \otimes \X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\Beta \otimes \X_{\textup{data}})^{T} +2\xi (\H\H^{T} \otimes \I_{p} ).
\end{align}
Similarly, we can compute
\begin{align}
\nabla_{\vect(\Beta)}\nabla_{\vect(\Beta)^{T}} f(\A,\B,\bGamma) &=\nabla_{\vect(\Beta)} \vect(\W^{T} \X_{\textup{data}} \K^{T})^{T}\\
&= \nabla_{\vect(\Beta)} \vect(\K)^{T} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T} \\
&=(\I_{\kappa}\otimes \W^{T}\X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}.
\end{align}
Also note that
\begin{align}
\nabla_{\vect(\bGamma)}\nabla_{\vect(\bGamma)^{T}} f(\A,\B,\bGamma) &= \nabla_{\vect(\bGamma)} \vect(\X_{\textup{aux}} \K^{T})^{T} \\
&= \nabla_{\vect(\bGamma)} \vect(\K)^{T} (\I_{r}\otimes \X_{\textup{aux}})^{T} \\
&= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{r}\otimes \X_{\textup{aux}})^{T}.
\end{align}
Similarly, we get
\begin{align}
\nabla_{\vect(\H)}\nabla_{\vect(\H)^{T}} f(\A,\B,\bGamma) &= \nabla_{\vect(\H)} \left( 2\xi \vect(\W^{T}\W\H)^{T} - 2\xi \vect(\W^{T}\X_{\textup{data}})^{T}\right) \\
&= 2\xi \nabla_{\vect(\H)} \vect(\H)^{T} (\I_{n}\otimes \W^{T}\W) = 2\xi(\I_{n}\otimes \W^{T}\W).
\end{align}
Next, we compute the off-diagonal terms in the Hessian of $f$. First, we compute
\begin{align}
\nabla_{\vect(\Beta)}\nabla_{\vect(\W)^{T}} f(\A,\B,\bGamma) &= \nabla_{\vect(\Beta)} \vect(\X_{\textup{data}} \K^{T} \Beta^{T})^{T} \\
&= \left( \frac{\partial}{\partial \vect(\Beta) } \vect(\Beta)^{T}\C^{(\kappa, r)} \right) (\I_{r}\otimes \X_{\textup{data}} \K^{T} )^{T} + \left( \frac{\partial}{\partial \vect(\Beta)} \C^{(\kappa\times n)} \vect(\K^{T}) \right) (\Beta \otimes \X_{\textup{data}})^{T} \\
&= \C^{(\kappa, r)} (\I_{r}\otimes \X_{\textup{data}} \K^{T} )^{T} \\
&\qquad + \C^{(\kappa, n)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})\diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\Beta \otimes \X_{\textup{data}})^{T}.
\end{align}
Second, note that $\nabla_{\vect(\bGamma)}\nabla_{\vect(\W)^{T}} f(\A,\B,\bGamma)=O$. Third, for the forthcoming computation, we claim that
\begin{align}
\nabla_{\vect(\H)} \vect(\H\H^{T})^{T} = (\H^{T}\otimes \I_{r}) + (\I_{r}\otimes \H^{T}).
\end{align}
One can directly verify the above when $\H$ consists of a single column, and the general case can be easily obtained from there. Also note that using the commutation matrix, we can write $\vect(\H^{T})^{T}=(\C^{(r,n)} \vect(\H))^{T}=\vect(\H)^{T} \C^{(n,r)}$. Now observe that
\begin{align}
\nabla_{\vect(\H)}\nabla_{\vect(\W)^{T}} f(\A,\B,\bGamma) &= 2\xi \nabla_{\vect(\H)} \left[ \vect(\W\H\H^{T}) - \vect(\X_{\textup{data}} \H^{T}) \right]^{T} \\
&= 2\xi \nabla_{\vect(\H)} \left[ \vect(\H\H^{T})^{T} (\I_{r}\otimes \W)^{T} - \vect(\H^{T})^{T}(\I_{r}\otimes \X_{\textup{data}})^{T} \right] \\
&= 2\xi \left[ \left( \nabla_{\vect(\H)} \vect(\H\H^{T})^{T} \right) (\I_{r}\otimes \W)^{T} - \left( \nabla_{\vect(\H)} \vect(\H^{T})^{T} \right) (\I_{r}\otimes \X_{\textup{data}})^{T} \right] \\
&= 2\xi \left[ \left( (\H^{T}\otimes \I_{r}) + (\I_{r}\otimes \H^{T}) \right) (\I_{r}\otimes \W)^{T} - \C^{(n\times r)} (\I_{r}\otimes \X_{\textup{data}})^{T} \right].
\end{align}
Then one can use the mixed-product property to further simplify the last expression as in the assertion. Fourth, noting that $\vect(\K^{T})^{T}=(\C^{(\kappa,n)} \vect(\K))^{T}=\vect(\K)^{T} \C^{(n,\kappa)}$, we can compute
\begin{align}
\nabla_{\vect(\bGamma)}\nabla_{\vect(\Beta)^{T}} f(\A,\B,\bGamma) &= \nabla_{\vect(\bGamma)} \vect(\W^{T} \X_{\textup{data}} \K^{T})^{T} \\
&= \nabla_{\vect(\bGamma)} \vect(\K^{T})^{T} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}\\
&= \nabla_{\vect(\bGamma)} \vect(\K)^{T} \C^{(n,\kappa)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T} \\
&= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}.
\end{align}
The remaining second derivatives vanish, as is straightforward to verify.
\end{proof}
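The vectorization identities recalled in the proof above are easy to verify numerically. The following sketch is illustrative only (hypothetical random matrices) and checks all of the displayed relations.
\begin{verbatim}
import numpy as np

def vec(M):
    return M.flatten(order="F")   # column-major vectorization, matching vec(.)

rng = np.random.default_rng(4)
a, b, c, d = 3, 4, 5, 2
A = rng.normal(size=(a, b)); B = rng.normal(size=(b, c)); C = rng.normal(size=(c, d))

# vec(AB) = (I_c kron A) vec(B) = (B^T kron I_a) vec(A)
print(np.allclose(vec(A @ B), np.kron(np.eye(c), A) @ vec(B)))
print(np.allclose(vec(A @ B), np.kron(B.T, np.eye(a)) @ vec(A)))
# vec(ABC) = (C^T kron A) vec(B) = (I_d kron AB) vec(C) = (C^T B^T kron I_a) vec(A)
print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))
print(np.allclose(vec(A @ B @ C), np.kron(np.eye(d), A @ B) @ vec(C)))
print(np.allclose(vec(A @ B @ C), np.kron(C.T @ B.T, np.eye(a)) @ vec(A)))
\end{verbatim}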
\begin{remark}[Derivatives for the feature-based SDL objective in separate variables]
\label{rmk:feat_SDL_gradient}
Arguing similarly as in the proof of Lemma \ref{lem:SDL_filt_BCD_derivatives}, we can compute the derivatives of the feature-based SDL objective in separate variables as follows. Let $L(\bZ)$ denote the objective of the feature-based SDL in \eqref{eq:ASDL_1}. Suppose \ref{assumption:A4} holds. Recall $\dot{\h}$ defined in \eqref{eq:Hddot_def}. Let $\a_{s}:=\Beta^{T}\h_{s}+\bGamma^{T}\x'_{s}$ for $s=1,\dots,n$, where $\H=[\h_{1},\dots,\h_{n}]\in \R^{r\times n}$ is the code matrix. Let $\K:=[\dot{\h}(y_{1},\a_{1}),\dots, \dot{\h}(y_{n},\a_{n})]\in \R^{\kappa\times n}$. Then we have
\begin{align}\label{eq:SDL_feat_BCD_derivatives1}
\nabla_{\W} \, L(\bZ) &= 2\xi(\W\H-\X_{\textup{data}})\H^{T},\qquad
\nabla_{\Beta} \, L(\bZ) = \H \K^{T} \\
\nabla_{\bGamma} \, L(\bZ) &= \X_{\textup{aux}}\K^{T}, \qquad \nabla_{\H} \, L(\bZ) = \Beta \K + 2\xi \W^{T}(\W\H-\X_{\textup{data}}).
\end{align}
\end{remark}
\begin{lemma}\label{lem:SDL_BCD_smooth}
\label{lem:BCD_hypothesis}
Assume the hypothesis of Theorem \ref{thm:SDL_BCD} is true. Then the loss function $L$ in \eqref{eq:ASDL_1} is convex in each of the block coordinates $\W$, $\H$, $\Beta$, and $\bGamma$. Furthermore, its gradient is continuous and $M$-Lipschitz for some $M>0$ on the admissible parameter space.
\end{lemma}
\begin{proof}
Notice that the diagonal terms in the Hessian given in Lemma \ref{lem:SDL_filt_BCD_derivatives} are positive semidefinite. (An explicit lower bound on the eigenvalues can also be computed.) This is enough to conclude that $L$ is convex in each factor $\W$, $\H$, $\Beta$, and $\bGamma$ while the other three are held fixed (i.e., $L$ is multiconvex). The second part of the assertion follows easily from the first derivative computations in Lemma \ref{lem:SDL_filt_BCD_derivatives} and the compactness assumption in Theorem \ref{thm:SDL_BCD}. %The second part of the assertion also follows easily from Lemma \ref{lem:SDL_filt_BCD_derivatives}, \ref{assumption:A5_cpt}, and the dominated convergence theorem.
\end{proof}
Now we are ready to derive Theorem \ref{thm:SDL_BCD}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:SDL_BCD}}]
The result follows immediately from the main result in [44]. In order to apply that result, we need to verify that 1) the filter-based SDL loss function $L$ in \eqref{eq:ASDL_1} is multiconvex and 2) the gradient of $L$ is $M$-Lipschitz for some constant $M>0$. Under \ref{assumption:A4} and the assumed compactness of the parameter space, both of these hypotheses are verified in Lemma \ref{lem:BCD_hypothesis}. This shows the assertion.
\end{proof}
\section{Proof of Theorem \ref{thm:SDL_BCD_STAT_filt}}
\label{section:appendix_SDL_BCD_STAT}
Throughout this section, let $\mathcal{L}(\bZ)=\mathcal{L}_{n}(\W,\h,\Beta,\bGamma)$ denote the objective in \eqref{eq:SDL_likelihood_BCD_filter}. Denote
\begin{align}\label{eq:def_L_bar_STAT}
\bar{\mathcal{L}}(\bZ) := \E_{(\x,\x',y)}\left[ \mathcal{L}_{1}(\bZ) \right].
\end{align}
\begin{lemma}[Derivatives of the filter-based SDL objective in separate variables]
\label{lem:SDL_filt_BCD_STAT_derivatives}
Let $\mathcal{L}(\bZ)=\mathcal{L}_{n}(\W,\h,\Beta,\bGamma)$ denote the objective in \eqref{eq:SDL_likelihood_BCD_filter}. Suppose \ref{assumption:A4} holds. Recall $\dot{\h}$ and $\ddot{\H}$ defined in \eqref{eq:Hddot_def}. Let $\a_{s}:=\Beta^{T}\W^{T}\x_{s}+\bGamma^{T}\x'_{s}$ for $s=1,\dots,n$ and $\K:=[\dot{\h}(y_{1},\a_{1}),\dots, \dot{\h}(y_{n},\a_{n})]\in \R^{\kappa\times n}$. Also denote $\H=[\h,\dots,\h]\in \R^{r\times n}$. Then we have
\begin{align}
\nabla_{\W} \, \mathcal{L}(\bZ) &= \X_{\textup{data}} \K^{T} \Beta^{T} + 2\xi(\W\H-\X_{\textup{data}})\H^{T} + 2\nu\W,\qquad
\nabla_{\Beta} \, \mathcal{L}(\bZ) = \W^{T} \X_{\textup{data}} \K^{T} + 2\nu \Beta, \\
\nabla_{\bGamma} \, \mathcal{L}(\bZ) &= \X_{\textup{aux}}\K^{T} + 2\nu \bGamma, \qquad \nabla_{\h} \, \mathcal{L}(\bZ) = 2\xi \W^{T}(n\W\h-\sum_{s=1}^{n}\x_{s}) + 2\nu \h.
\end{align}
Furthermore, for diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\W)}\nabla_{\vect(\W)^{T}} \, \mathcal{L}(\bZ)
&= (\Beta \otimes \X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\Beta \otimes \X_{\textup{data}})^{T} +2\xi (\H\H^{T} \otimes \I_{p} ) + 2\nu \I_{pr}, \\
\nabla_{\h}\nabla_{\h^{T}} \, \mathcal{L}(\bZ) & = 2\xi n\W^{T}\W + 2\nu \I_{r}, \\
\nabla_{\vect(\Beta)}\nabla_{\vect(\Beta)^{T}} \, \mathcal{L}(\bZ)
&=(\I_{\kappa}\otimes \W^{T}\X_{\textup{data}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T} + 2\nu \I_{r\kappa}, \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\bGamma)^{T}} \, \mathcal{L}(\bZ)
&= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\I_{r}\otimes \X_{\textup{aux}})^{T} + 2\nu \I_{q\kappa}
\end{align}
Lastly, for the off-diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\Beta)}\nabla_{\vect(\W)^{T}} \, \mathcal{L}(\bZ)
&= (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})\diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) (\Beta \otimes \X_{\textup{data}})^{T} + \C^{(\kappa, r)} (\I_{r}\otimes \X_{\textup{data}} \K^{T} )^{T}, \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\W)^{T}} \, \mathcal{L}(\bZ) &= O,\\
\nabla_{\h}\nabla_{\vect(\W)^{T}} \, \mathcal{L}(\bZ) &= 2\xi \left[ n(\h^{T}\otimes \W^{T}) + n(\I_{r}\otimes \h^{T}\W^{T}) - (\mathbf{1}_{1\times n}\otimes \I_{r}) \C^{(n\times r)} (\I_{r}\otimes \X_{\textup{data}})^{T} \right],\\
\nabla_{\vect(\Beta)}\nabla_{\h^{T}} \, \mathcal{L}(\bZ) &= \nabla_{\vect(\bGamma)}\nabla_{\h^{T}} \, \mathcal{L}(\bZ) = O,\\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\Beta)^{T}} \, \mathcal{L}(\bZ) &= (\I_{r}\otimes \X_{\textup{aux}}) \diag\left( \ddot{\H}(y_{1},\a_{1}),\dots,\ddot{\H}(y_{n},\a_{n})\right) \C^{(n,\kappa)} (\I_{\kappa}\otimes \W^{T}\X_{\textup{data}})^{T}.
\end{align}
\end{lemma}
\begin{proof}
For $\H=[\h,\dots,\h]$, note that
\begin{align}
\nabla_{\h} \vect(\H)^{T} = \mathbf{1}_{1\times n}\otimes \I_{r} ,\qquad \nabla_{\h} \vect(\H\H^{T})^{T} = n(\h^{T}\otimes \I_{r}) + n(\I_{r}\otimes \h^{T}).
\end{align}
Then the assertion follows from similar computations as in the proof of Lemma \ref{lem:SDL_filt_BCD_derivatives}.
\end{proof}
\begin{lemma}\label{lem:SDL_BCD_STAT_smooth}
\label{lem:BCD_STAT_hypothesis}
Let $\mathcal{L}_{n}(\bZ)=\mathcal{L}_{n}(\W,\h,\Beta,\bGamma)$ denote the objective in \eqref{eq:SDL_likelihood_BCD_filter}. Assume the hypothesis of Theorem \ref{thm:SDL_BCD_STAT_filt} holds. Then $\mathcal{L}_{n}$ is convex in each of the block coordinates $\W$, $\h$, $\Beta$, and $\bGamma$ for all $\nu\ge 0$. Furthermore, its gradient is continuous and $M$-Lipschitz for some $M>0$ on the admissible parameter space.
\end{lemma}
\begin{proof}
The argument is identical to the proof of Lemma \ref{lem:BCD_hypothesis}, using Lemma \ref{lem:SDL_filt_BCD_STAT_derivatives} instead of Lemma \ref{lem:SDL_filt_BCD_derivatives}. For the multi-convexity part, recall that the $\ddot{\H}(y_{s},\a_{s})$'s are positive definite due to \ref{assumption:A4} and the commutation matrices $\C^{(a,b)}$ are also positive definite.
\end{proof}
\begin{lemma}[Derivatives of the expected filter-based SDL objective]
\label{lem:SDL_filt_BCD_STAT_derivatives}
Suppose a single data point $(\x,\x',y)$ is sampled according to the generative model \eqref{eq:SDL_prob_BCD_filt} and let $\bar{\mathcal{L}}$ be as in \eqref{eq:def_L_bar_STAT}. Assume the hypothesis of Theorem \ref{thm:SDL_BCD_STAT_filt} holds.
Recall $\dot{\h}$ and $\ddot{\H}$ defined in \eqref{eq:Hddot_def}. Let $\a:=\Beta^{T}\W^{T}\x+\bGamma^{T}\x'$ and $\xi:=(2\sigma^{2})^{-1}$. Then we have
\begin{align}
\nabla_{\W} \, \bar{\mathcal{L}}(\bZ) &= \E[\x \, \dot{\h}(y,\a)^{T}] \Beta^{T} + 2\xi(\W\h-\W^{\star}\h^{\star})\h^{T} + 2\nu \W,\qquad
\nabla_{\Beta} \, \bar{\mathcal{L}}(\bZ) = \W^{T} \E[\x \,\dot{\h}(y,\a)^{T}] + 2\nu \Beta\\
\nabla_{\bGamma} \, \bar{\mathcal{L}}(\bZ) &= \E[\x' \dot{\h}(y,\a)^{T}] + 2\nu \bGamma \qquad \nabla_{\h} \, \bar{\mathcal{L}}(\bZ) = 2\xi \W^{T}(\W\h-\W^{\star}\h^{\star}) + 2\nu \h.
\end{align}
Furthermore, for diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\W)}\nabla_{\vect(\W)^{T}} \, \bar{\mathcal{L}}(\bZ)
&=\E\left[ (\Beta \otimes \x) \ddot{\H}(y,\a) (\Beta \otimes \x)^{T} \right]+2\xi (\h\h^{T} \otimes \I_{p} ) + 2\nu \I_{pr} \, \\
\nabla_{\h}\nabla_{\h^{T}} \, \bar{\mathcal{L}}(\bZ) & = 2\xi \W^{T}\W + 2\nu \I_{r},\\
\nabla_{\vect(\Beta)}\nabla_{\vect(\Beta)^{T}} \, \bar{\mathcal{L}}(\bZ)
&=\E\left[(\I_{\kappa}\otimes \W^{T}\x) \ddot{\H}(y,\a) (\I_{\kappa}\otimes \W^{T}\x)^{T}\right] + 2\nu \I_{r\kappa}, \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\bGamma)^{T}} \, \bar{\mathcal{L}}(\bZ)
&= \E\left[(\I_{r}\otimes \x') \ddot{\H}(y,\a) (\I_{r}\otimes \x')^{T}\right]+ 2\nu \I_{q\kappa}.
\end{align}
Lastly, for the off-diagonal terms in the Hessian, we have
\begin{align}
\nabla_{\vect(\Beta)}\nabla_{\vect(\W)^{T}} \, \bar{\mathcal{L}}(\bZ)
&= \E\left[ (\I_{\kappa}\otimes \W^{T}\x) \ddot{\H}(y,\a) (\Beta \otimes \x)^{T} + \C^{(\kappa\times r)} (\I_{r}\otimes \x \, \dot{\h}(y, \a)^{T} )^{T} \right], \\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\W)^{T}} \, \bar{\mathcal{L}}(\bZ) &= O,\\
\nabla_{\h}\nabla_{\vect(\W)^{T}} \, \bar{\mathcal{L}}(\bZ) &= 2\xi \left[ (\h^{T}\otimes \W^{T}) + (\I_{r}\otimes (\W\h - \W^{\star}\h^{\star})^{T}) \right],\\
\nabla_{\vect(\Beta)}\nabla_{\h^{T}} \, \bar{\mathcal{L}}(\bZ) &= \nabla_{\vect(\bGamma)}\nabla_{\h^{T}} \, \bar{\mathcal{L}}(\bZ) = O,\\
\nabla_{\vect(\bGamma)}\nabla_{\vect(\Beta)^{T}} \, \bar{\mathcal{L}}(\bZ) &= \E\left[ (\I_{r}\otimes \x') \ddot{\H}(y,\a) (\I_{\kappa}\otimes \W^{T}\x)^{T} \right].
\end{align}
\end{lemma}
\begin{proof}
According to Lemmas \ref{lem:SDL_filt_BCD_STAT_derivatives} and \ref{lem:SDL_BCD_STAT_smooth}, $\mathcal{L}$ is twice continuously differentiable and both $\nabla \mathcal{L}$ and $\nabla^{2} \mathcal{L}$ are bounded within the (compact) parameter space. Hence by the monotone convergence theorem,
\begin{align}
\nabla \bar{\mathcal{L}} = \E\left[ \nabla \mathcal{L}\right], \qquad \nabla^{2} \bar{\mathcal{L}} = \E\left[ \nabla^{2} \mathcal{L}\right].
\end{align}
Hence we can simply specialize the derivatives of $\mathcal{L}$ we computed in Lemma \ref{lem:SDL_filt_BCD_STAT_derivatives} for the single sample case $n=1$ and then take the expectation. In doing so, we use the fact that $\C^{(1\times a)}=\I_{a}$, where $\C^{(a,b)}$ denotes the commutation matrix defined above the statement of Lemma \ref{lem:SDL_filt_BCD_derivatives}. %The derivation is straightforward and details are omitted.
\end{proof}
\begin{lemma}\label{lem:SDL_BCD_avg_smooth}
\label{lem:BCD_STAT_hypothesis}
Suppose a single data point $(\x,\x',y)$ is sampled according to the generative model \eqref{eq:SDL_prob_BCD_filt} with true parameters $\bZ^{\star}=[\W^{\star}, \H^{\star}, \Beta^{\star}, \bGamma^{\star}]$ and $\blambda^{\star}$. Let $\bar{\mathcal{L}}$ be as in \eqref{eq:def_L_bar_STAT} and assume the hypothesis of Theorem \ref{thm:SDL_BCD_STAT_filt} holds. Then the following hold:
\begin{description}
\item[(i)] $\bar{\mathcal{L}}$ is convex in each of the block coordinates $\W$, $\h$, $\Beta$, and $\bGamma$ for all $\nu\ge 0$.
\item[(ii)] $\nabla \bar{\mathcal{L}}$ is continuous and $M$-Lipschitz for some $M>0$ on the admissible parameter space.
\item[(iii)] $\nabla^{2} \bar{\mathcal{L}}(\bZ^{\star})$ is positive definite if $\nu > \lambda_{+}$, where
\begin{align}
\lambda_{+}:= \frac{1}{2}\max\left( \begin{matrix} 2\xi \lVert \h^{\star} \rVert_{2} \, \lVert \W^{\star} \rVert_{2} + \alpha^{+} \lVert \Beta^{\star} \rVert_{2} \lVert \W^{\star} \rVert_{2} \E\left[ \Vert \x\x^{T} \rVert_{2} \right] + \gamma_{\max}\sigma \sqrt{2p\pi} \\
\hspace{3cm} - \alpha^{-} \, \lambda_{\min}(\Beta^{\star} (\Beta^{\star})^{T} ) \E\left[ \lambda_{\min}( \x\x^{T} ) \right] - 2\xi \lambda_{\min} (\h^{\star}(\h^{\star})^{T} ), \\[10pt]
2\xi \lVert \h^{\star} \rVert_{2} \, \lVert \W^{\star} \rVert_{2} - 2\xi \lambda_{\min}( (\W^{\star})^{T}\W^{\star}), \\[10pt]
\alpha^{+} \lVert \Beta^{\star} \rVert_{2} \lVert \W^{\star} \rVert_{2} \E\left[ \Vert \x\x^{T} \rVert_{2} \right] + \gamma_{\max}\sigma \sqrt{2p\pi} +\alpha^{+} \E\left[ \lVert \x'(\W^{\star})^{T}\x \rVert_{2}\right] \\
\hspace{4cm} - \alpha^{-} \, \E\left[ \lambda_{\min}( (\W^{\star})^{T} \x\x^{T} \W^{\star} ) \right], \\[10pt]
\alpha^{+} \E\left[ \lVert \x'(\W^{\star})^{T}\x \rVert_{2}\right] - \lambda_{\min}(\x'(\x')^{T})
\end{matrix}\right) \label{eq:nu_min_BCD_STAT_explicit}.
\end{align}
\end{description}
\end{lemma}
\begin{proof}
Parts \textbf{(i)} and \textbf{(ii)} follow easily from Lemma \ref{lem:SDL_filt_BCD_STAT_derivatives} as in the proof of Lemma \ref{lem:BCD_STAT_hypothesis}. Now we argue for \textbf{(iii)}. Denote $\xi=(2\sigma^{2})^{-1}$ and $\a^{\star}=(\W^{\star}\Beta^{\star})^{T}\x+(\bGamma^{\star})^{T}\x' $. Recall that according to Lemma \ref{lem:SDL_filt_BCD_STAT_derivatives}, we can write the Hessian of $\bar{\mathcal{L}}$ at the true parameter $\bZ^{\star}$ as the $4\times 4$ block matrix $(A_{ij})_{1\le i,j\le 4} + 2\nu \I$ in \eqref{eq:expected_hessian_block}. Recall \ref{assumption:A4}. The diagonal blocks satisfy
\begin{align}
A_{11}&=\E\left[ (\Beta^{\star} \otimes \x) \ddot{\H}(y,\a^{\star}) (\Beta^{\star} \otimes \x)^{T} \right]+2\xi (\h^{\star}(\h^{\star})^{T} \otimes \I_{p} ) \\
&\qquad \succeq \left( \alpha^{-} \, \lambda_{\min}(\Beta^{\star} (\Beta^{\star})^{T} ) \E\left[ \lambda_{\min}( \x\x^{T} ) \right] + 2\xi \lambda_{\min} (\h^{\star}(\h^{\star})^{T}) \right) \I_{pr}, \\
A_{22} & = 2\xi (\W^{\star})^{T}\W^{\star} \succeq 2\xi \lambda_{\min}( (\W^{\star})^{T}\W^{\star}) \I_{r},\\
A_{33}&=\E\left[(\I_{\kappa}\otimes (\W^{\star})^{T}\x) \ddot{\H}(y,\a^{\star}) (\I_{\kappa}\otimes (\W^{\star})^{T}\x)^{T}\right] \succeq \alpha^{-} \E\left[ \lambda_{\min}( (\W^{\star})^{T}\x \x^{T} \W^{\star} ) \right] \I_{r\kappa}, \\
A_{44}&= \E\left[(\I_{r}\otimes \x') \ddot{\H}(y,\a^{\star}) (\I_{r}\otimes \x')^{T}\right] \succeq \alpha^{-} \lambda_{\min}(\x'(\x')^{T}) \I_{rq},
\end{align}
and the relevant off-diagonal blocks are given by
\begin{align}
A_{21} &= 2\xi (\h^{\star})^{T}\otimes (\W^{\star})^{T} \\
A_{31} &= \E\left[ (\I_{\kappa}\otimes (\W^{\star})^{T}\x) \ddot{\H}(y,\a^{\star}) (\Beta^{\star} \otimes \x)^{T} + \C^{(\kappa\times r)} (\I_{r}\otimes \x \, \dot{\h}(y,\a^{\star})^{T} )^{T} \right] \\
A_{43} &= \E\left[ (\I_{r}\otimes \x') \ddot{\H}(y,\a^{\star}) (\I_{\kappa}\otimes (\W^{\star})^{T}\x)^{T} \right].
\end{align}
If $\nu$ is large enough so that the following condition is satisfied
\begin{align}
\lambda_{\min}(A_{ii}) + 2\nu > \sum_{j\ne i} \lVert A_{ij} \rVert_{2} \quad \forall 1\le i \le 4,
\end{align}
then the Hessian $\nabla^{2} \bar{\mathcal{L}}$ is block diagonally dominant and is positive definite (see [22]). Thus it suffices to take
\begin{align}\label{eq:lambda_lower_bd0}
\nu &> \frac{1}{2}\max\left( \begin{matrix} \lVert A_{12} \rVert_{2} +\lVert A_{13} \rVert_{2} - \lambda_{\min}(A_{11}), \\
\lVert A_{12} \rVert_{2}- \lambda_{\min}(A_{22}), \\
\lVert A_{13} \rVert_{2} +\lVert A_{34} \rVert_{2} - \lambda_{\min}(A_{33}), \\
\lVert A_{34} \rVert_{2} - \lambda_{\min}(A_{44})
\end{matrix}\right).
\end{align}
Note that (see \ref{assumption:A4} for the definition of $\gamma_{\max}$ and $\alpha^{\pm}$)
\begin{align}
\lVert \E\left[ \x \, \dot{\h}(y,\a^{\star})^{T} \right] \rVert_{2} &\le \lVert \E\left[ \W^{\star} \h^{\star} \dot{\h}(y,\a^{\star})^{T} \right] \rVert_{2} + \lVert \E\left[ \beps \dot{\h}(y,\a^{\star})^{T} \right] \rVert_{2} \\
&\le \lVert \W^{\star} \h^{\star} \E\left[ \dot{\h}(y,\a^{\star})^{T} \right] \rVert_{2} + \lVert \E\left[ \beps \dot{\h}(y,\a^{\star})^{T} \right] \rVert_{2} \\
&= \lVert \E\left[ \beps \,\dot{\h}(y, \a^{\star})^{T} \right] \rVert_{2} \\
&\le \gamma_{\max}\sigma \sqrt{2p\pi}.
\end{align}
Using this, we get
\begin{align}
\lVert A_{12} \rVert_{2} &= \lVert A_{21} \rVert_{2} =2\xi \lVert \h^{\star} \rVert_{2} \, \lVert \W^{\star} \rVert_{2}, \\
\lVert A_{13} \rVert_{2} &\le \alpha^{+} \lVert \Beta^{\star} \rVert_{2} \lVert \W^{\star} \rVert_{2} \E\left[ \Vert \x\x^{T} \rVert_{2} \right] + \gamma_{\max}\sigma \sqrt{2p\pi}, \\
\lVert A_{43} \rVert_{2} &\le \alpha^{+} \E\left[ \lVert \x'(\W^{\star})^{T}\x \rVert_{2}\right].
\end{align}
Using these upper bounds on the operator norms of the off-diagonal blocks and the lower bounds on the eigenvalues of the diagonal blocks above, the right-hand side of \eqref{eq:lambda_lower_bd0} is upper bounded by $\lambda_{+}$ defined in the assertion, so $\nu>\lambda_{+}$ suffices.
\end{proof}
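The proof above reduces positive definiteness of the expected Hessian to the block diagonal dominance criterion cited from [22]. The following sketch (illustrative only, with hypothetical random symmetric blocks) shows the criterion in action: once $\nu$ exceeds the block-Gershgorin threshold, the shifted matrix is numerically positive definite.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
sizes = [3, 2, 4]                               # hypothetical block sizes
m = sum(sizes)
A = rng.normal(size=(m, m)); A = (A + A.T) / 2  # symmetric block matrix (A_ij)
idx = np.cumsum([0] + sizes)
blk = [[A[idx[i]:idx[i+1], idx[j]:idx[j+1]] for j in range(3)] for i in range(3)]

# smallest nu with lambda_min(A_ii) + 2*nu > sum_{j != i} ||A_ij||_2 for all i
gap = [sum(np.linalg.norm(blk[i][j], 2) for j in range(3) if j != i)
       - np.linalg.eigvalsh(blk[i][i]).min() for i in range(3)]
nu = max(max(gap) / 2.0, 0.0) + 1e-3
print(np.linalg.eigvalsh(A + 2 * nu * np.eye(m)).min() > 0)   # True
\end{verbatim}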
\section{Proof of Theorems \ref{thm:SDL_LPGD_STAT} and \ref{thm:SDL_LPGD_STAT_feat}}
We first recall the following standard concentration bounds:
\begin{lemma}[Generalized Hoeffding's inequality for sub-gaussian variables]
\label{lem:hoeffding}
Let $X_{1},\dots,X_{n}$ denote i.i.d. random vectors in $\R^{d}$ such that $\E[\exp(X_{k}[i]^{2}/K^{2})]\le 2$ for some constant $K>0$ for all $1\le k \le n$ and $1\le i \le d$. Fix a vector $\a=(a_{1},\dots,a_{n})^{T}\in \R^{n}$. Then for each $t>0$,
\begin{align}
\P\left( \left\lVert \sum_{k=1}^{n} a_{k}X_{k} \right\rVert_{1} > t \right) \le 2d \exp\left( \frac{-t^{2}}{K^{2} d^{2}\lVert \a \rVert_{2}^{2}} \right)
\end{align}
\end{lemma}
\begin{proof}
This follows from \cite[Thm. 2.6.2]{vershynin2018high} together with a union bound over the $d$ coordinates.
\end{proof}
\begin{lemma}[2-norm of matrices with independent sub-gaussian entries]
\label{lem:matrix_norm_bd}
Let $\A$ be an $m\times n$ random matrix with independent sub-gaussian entries $\A_{ij}$ of mean zero. Denote by $K$ the maximum sub-gaussian norm of the $\A_{ij}$, that is, $K>0$ is the smallest number such that $\E[\exp(\A_{ij}^{2}/K^{2}) ]\le 2$. Then for each $t>0$,
\begin{align}
\P\left( \lVert \A \rVert_{2} \ge 3K(\sqrt{m}+\sqrt{n}+t) \right) \le 2\exp(-t^{2}).
\end{align}
\end{lemma}
\begin{proof}
See \cite[Thm. 4.4.5]{vershynin2018high}
\end{proof}
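To get a feel for the scaling in Lemma \ref{lem:matrix_norm_bd}, one can sample a matrix with i.i.d.\ standard normal entries (for which the sub-gaussian norm $K$ is an absolute constant) and compare its operator norm with $\sqrt{m}+\sqrt{n}$. The sketch below is purely illustrative; the sizes are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
m, n = 200, 500
A = rng.normal(size=(m, n))              # i.i.d. N(0,1) entries
op_norm = np.linalg.norm(A, 2)           # largest singular value of A
print(op_norm, np.sqrt(m) + np.sqrt(n))  # op_norm is of order sqrt(m)+sqrt(n)
\end{verbatim}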
\begin{proof}[\textbf{Proof of Theorem \ref{thm:SDL_LPGD_STAT}}]
Let $\mathcal{L}_{n}$ denote the $L_{2}$-regularized joint negative log-likelihood function in \eqref{eq:SDL_likelihood_conv_filter} without the last three terms, and define the expected loss function $\bar{\mathcal{L}}_{n}(\bZ):= \E_{\beps_{i},\beps_{i}',1\le i \le n}\left[ \mathcal{L}_{n}(\bZ) \right]$. We omit the constant terms in these functions. Define the following gradient mappings of $\bZ^{\star}$ with respect to the empirical loss $\mathcal{L}_{n}$ and the expected loss $\bar{\mathcal{L}}_{n}$:
\begin{align}
G(\bZ^{\star}, \tau)=\frac{1}{\tau}\left( \bZ^{\star} - \Pi_{\Param} \left( \bZ^{\star} - \tau \nabla \mathcal{L}_{n}(\bZ^{\star}) \right)\right), \qquad \bar{G}(\bZ^{\star},\tau):=\frac{1}{\tau}\left( \bZ^{\star} - \Pi_{\Param} \left( \bZ^{\star} - \tau \nabla \bar{\mathcal{L}}_{n}(\bZ^{\star}) \right)\right).
\end{align}
It is elementary to show that the true parameter $\bZ^{\star}$ is a stationary point of $\bar{\mathcal{L}}- \nu (\lVert \A\rVert_{F}^{2} + \lVert \bGamma \rVert_{F}^{2}) $ over $\Param\subseteq \R^{p\times (\kappa+n)}\times \R^{q\times \kappa}$. Hence we have $\bar{G}(\bZ^{\star},\tau)= 2\nu[\A^{\star},O,\bGamma^{\star}]$, so we may write
\begin{align}\label{eq:grad_mapping_compare_stationary}
G(\bZ^{\star}, \tau) &= G(\bZ^{\star}, \tau) - \bar{G}(\bZ^{\star}, \tau) + 2\nu[\A^{\star},O,\bGamma^{\star}] \\
&= \frac{1}{\tau}\left[ \Pi_{\Param}\left( \bZ^{\star}-\tau\nabla \mathcal{L}_{n}(\bZ^{\star}) \right) - \Pi_{\Param}\left( \bZ^{\star}-\tau\nabla \bar{\mathcal{L}}_{n}(\bZ^{\star}) \right) \right] + 2\nu[\A^{\star},O,\bGamma^{\star}]
\end{align}
First, suppose $\bZ^{\star}-\tau \nabla \mathcal{L}_{n} (\bZ^{\star})\in \Param$ (in particular, this is the case when $\Param$ equals the whole space). Then we can disregard the projection $\Pi_{\Param}$ in the above display, so we get
\begin{align}
G(\bZ^{\star}, \tau) - 2\nu[\A^{\star},O,\bGamma^{\star}] = \nabla \mathcal{L}_{n}(\bZ^{\star}) - \nabla \bar{\mathcal{L}}(\bZ^{\star}) =: [\Delta \X^{\star}, \Delta \bGamma^{\star}].
\end{align}
According to Theorem \ref{thm:SDL_LPGD}, it now suffices to show that $G(\bZ^{\star}, \tau)$ above is small with high probability. We use the notation $\U=[\A^{T}, \bGamma^{T}]^{T}$, $\U^{\star}=[(\A^{\star})^{T}, (\bGamma^{\star})^{T}]^{T}$, $\bPhi=[\bphi_{1},\dots,\bphi_{n}]=[\X_{\textup{data}}^{T}, \X_{\textup{aux}}^{T}]^{T}$ (see also the proof of Theorem \ref{thm:SDL_LPGD}). Denote $\a_{s}=\U^{T}\bphi_{s}$ and $\a_{s}^{\star}=(\U^{\star})^{T}\bphi_{s}$ for $s=1,\dots,n$ and introduce the following random quantities
\begin{align}
\mathtt{Q}_{1}:= \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}^{\star})\in \R^{\kappa} ,\quad \mathtt{Q}_{2}:=\sum_{s=1}^{n} \beps_{s}\in \R^{p} ,\quad \mathtt{Q}_{3}:=\sum_{s=1}^{n} \beps_{s}'\in \R^{q}, \quad \mathtt{Q}_{4}:= [ \beps_{1},\dots,\beps_{n}] \in \R^{p\times n}.
\end{align}
Recall that
\begin{align}
&\nabla_{\vect(\U)} \mathcal{L}_{n}(\U,\B) = \left( \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}) \otimes \bphi_{s} \right) + 2\nu \vect(\U), \quad \nabla_{\B} \mathcal{L}_{n}(\U,\B) = \frac{2}{2\sigma^{2}} (\B-\X_{\textup{data}}), \\
&\nabla_{\vect(\U)} \bar{\mathcal{L}}_{n}(\U,\B) = \left( \sum_{s=1}^{n} \E\left[ \dot{\h}(y_{s},\a_{s}) \otimes \bphi_{s} \right] \right) + 2\nu \vect(\U), \quad \nabla_{\B} \bar{\mathcal{L}}_{n}(\U,\B) = \frac{2}{2\sigma^{2}} (\B-\B^{\star}),
\end{align}
where $\dot{\h}$ is defined in \eqref{eq:Hddot_def}. Note that
\begin{align}
\E\left[ \dot{\h}(y_{s},\a_{s}) \,\bigg|\, \bphi_{s} \right] &= \left[ \left(\frac{h'(\a[j])}{1+\sum_{c=1}^{\kappa} h(\a[c])} - g_{j}(\a_{s}^{\star})\frac{h'(\a[j])}{h(\a[j])} \right)_{\a=\a_{s}} \, ; \, j=1,\dots,\kappa \right]\\
&= \left[ \left(\frac{h'(\a[j])}{1+\sum_{c=1}^{\kappa} h(\a[c])} - \frac{h(\a_{s}^{\star}[j])}{1+\sum_{c=1}^{\kappa} h(\a_{s}^{\star}[c])} \frac{h'(\a[j])}{h(\a[j])} \right)_{\a=\a_{s}} \, ; \, j=1,\dots,\kappa \right],
\end{align}
so the above vanishes when $\a_{s}=\a_{s}^{\star}$. Hence
\begin{align}\label{eq:dot_h_exp_vanish}
\E\left[ \dot{\h}(y_{s},\a_{s}^{\star}) \otimes \bphi_{s} \right] = \E\left[ \E\left[ \dot{\h}(y_{s},\a_{s}^{\star}) \otimes \bphi_{s}\,\bigg|\, \bphi_{s} \right] \right] =\mathbf{0},
\end{align}
Hence we can compute the following gradients
\begin{align}
\nabla_{\vect(\A)} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\left( \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}) \otimes \x_{s} \right) \\
\nabla_{\vect(\bGamma)} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\left( \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}) \otimes \x_{s}' \right) \\
\nabla_{\B} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\frac{2}{2\sigma^{2}} (\B^{\star}-\X_{\textup{data}}) = \frac{2}{2\sigma^{2}} [ \beps_{1},\dots,\beps_{n}] \\
\nabla_{\blambda} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\frac{2}{2\sigma^{2}} \sum_{s=1}^{n}\beps_{s}' .
\end{align}
It follows that (recall the definition of $\gamma_{\max}$ in \ref{assumption:A4})
\begin{align}
\lVert \nabla_{\A} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A^{\star},\B^{\star},\bGamma^{\star}) \rVert_{2} &= \left\lVert\sum_{s=1}^{n} (\B^{\star}[:,s]+\beps_{s}) \dot{\h}(y_{s},\a_{s}^{\star})^{T} \right\rVert_{2} \\
& \le \left\lVert \sum_{s=1}^{n} \B^{\star}[:,s] \dot{\h}(y_{s},\a_{s}^{\star})^{T} \right\rVert_{2} +\left\lVert \sum_{s=1}^{n} \beps_{s} \dot{\h}(y_{s},\a_{s}^{\star})^{T} \right\rVert_{2} \\
&\le \lVert \B^{\star}\rVert_{\infty} \left\lVert \mathtt{Q}_{1} \right\rVert_{2} + \gamma_{\max} \left\lVert \mathtt{Q}_{2} \right\rVert_{2}.
\end{align}
Similarly, we have
\begin{align}
\lVert \Delta \bGamma^{\star} \rVert_{F}= \lVert \nabla_{\bGamma} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A^{\star},\B^{\star},\bGamma^{\star}) \rVert_{F} &= \lVert \nabla_{\vect(\bGamma)} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A^{\star},\B^{\star},\bGamma^{\star}) \rVert_{2} \\
&\le q\lVert \blambda^{\star}\rVert_{\infty} \left\lVert \mathtt{Q}_{1} \right\rVert_{2} + q\gamma_{\max} \left\lVert \mathtt{Q}_{3} \right\rVert_{2}.
\end{align}
Using the fact that $\lVert [A,B] \rVert_{2}\le \lVert A \rVert_{2} + \lVert B \rVert_{2} $ for two matrices $A,B$ with the same number of rows, we have
\begin{align}\label{eq:SDL_MLE_pf_bd_Q}
\left\lVert \Delta \X^{\star} \right\rVert_{2} &\le \left\lVert \nabla_{\A} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A^{\star},\B^{\star},\bGamma^{\star}) \right\rVert_{2} + \left\lVert \nabla_{\B} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A^{\star},\B^{\star},\bGamma^{\star}) \right\rVert_{2} \\
&\le \lVert \B^{\star}\rVert_{\infty} \left\lVert \mathtt{Q}_{1} \right\rVert_{2} + n\gamma_{\max} \left\lVert \mathtt{Q}_{2} \right\rVert_{2} +\frac{2}{2\sigma^{2}} \left\lVert \mathtt{Q}_{4} \right\rVert_{2}.
\end{align}
Thus, combining the above bounds, we obtain
\begin{align}\label{eq:SDL_MLE_pf_bd_Q2}
S:= \sqrt{3r} \lVert \Delta \X^{\star} \rVert_{2} + \lVert \Delta \bGamma^{\star} \rVert_{F} \le \sum_{i=1}^{4} c_{i} \lVert \mathtt{Q}_{i} \rVert_{2},
\end{align}
where the constants $c_{1},\dots,c_{4}>0$ are given by
\begin{align}\label{eq:c_constants_Q}
c_{1}=\left( \sqrt{3r} \lVert \B^{\star}\rVert_{\infty} + q\lVert \blambda^{\star} \rVert_{\infty} \right), \quad c_{2}=\gamma_{\max}\left( q + \sqrt{3r} \right),\quad c_{3}=q\gamma_{\max}, \quad c_{4}=\frac{2\sqrt{3r}}{2\sigma^{2}}.
\end{align}
Next, we will use concentration inequalities to argue that the right hand side in \eqref{eq:SDL_MLE_pf_bd_Q2} is small with high probability and obtain the following tail bound on $S$: %While the first three terms in \eqref{eq:SDL_MLE_pf_bd_Q2} can be made $O(1/\sqrt{n})$ with high probability, unfortunately, the last term $\mathtt{Q}_{4}$ is typically of order $O(\sqrt{p}+\sqrt{n})$ and it cannot be made small, unless the noise level $\sigma>0$ is significantly small. Nontheless, we will use concentration inequalities to obtain the following tail bound
\begin{align}\label{eq:S_tail_bound}
\P\left(S>c \sqrt{n} \log n + 3C\sigma(\sqrt{p} + \sqrt{n}+ c\sqrt{\log n}) \right) \le \frac{1}{n},
\end{align}
where $C>0$ is an absolute constant and $c>0$ can be written explicitly in terms of the constants we use in this proof. Recall that for a random variable $Z$, its sub-Gaussian norm, denoted $\lVert Z \rVert_{\psi_{2}}$, is the smallest number $K>0$ such that $\E[\exp(Z^{2}/K^{2})]\le 2$. The constant $C>0$ above is the sub-Gaussian norm of the standard normal variable, which satisfies $C\le 36e/\log 2$. Using a union bound with Lemmas \ref{lem:hoeffding} and \ref{lem:matrix_norm_bd}, for each $t,t'>0$, we get
\begin{align}\label{eq:S_bd_pf}
&\P\left( S > (c_{1}+c_{2}+c_{3}+c_{4})t + 3C\sigma(\sqrt{p} + \sqrt{n}+ t') \right) \\
&\qquad \le \left( \sum_{i=1}^{3} \P\left( \lVert \mathtt{Q}_{i} \rVert_{2}>t\right) \right) + \P\left( \lVert \mathtt{Q}_{4} \rVert_{2}> 3C\sigma(\sqrt{p} + \sqrt{n}+ t') \right) \\
&\qquad \le 2\kappa \exp\left( \frac{-t^{2} }{C_{1}^{2} \kappa^{2} n} \right) + 2p \exp\left( \frac{-t^{2} }{ (C \sigma)^{2} p^{2} n } \right) + 2q \exp\left( \frac{-t^{2} }{ (C \sigma')^{2} q^{2} n } \right) + \exp(-(t')^{2}).
\end{align}
Indeed, for bounding $\P(\lVert \mathtt{Q}_{1}\rVert_{2}>t)$, we used Lemma \ref{lem:hoeffding} with sub-Gaussian norm $C_{1}=K=\gamma_{\max}/\sqrt{\log 2} $ for the bounded random vector $\dot{\h}(y_{s},\a_{s})$ (see \cite[Ex. 2.5.8]{vershynin2018high}); for $\P(\lVert \mathtt{Q}_{2}\rVert_{2}>t)$ and $\P(\lVert \mathtt{Q}_{3}\rVert_{2}>t)$, we used Lemma \ref{lem:hoeffding} with $K=C \sigma$ and $K=C \sigma'$, respectively; for the last term involving $\mathtt{Q}_{4}$, we used Lemma \ref{lem:matrix_norm_bd} with $K=C\sigma$. Observe that in order to make the last expression in \eqref{eq:S_bd_pf} small, we will choose $t=c_{5}\sqrt{n}\log n$ and $t'=c_{5}\sqrt{\log n}$, where $c_{5}>0$ is a constant to be determined. This yields
\begin{align}
\P\left( S> c \sqrt{n} \log n + 3C\sigma(\sqrt{p} + \sqrt{n}+ c\sqrt{\log n}) \right) \le n^{-c_{6}},
\end{align}
where $c=c_{5}\sum_{i=1}^{4} c_{i}$ and $c_{6}>0$ is an explicit constant that grows in $c_{5}$. We assume $c_{5}>0$ is such that $c_{6}\ge 1$. This shows \eqref{eq:S_tail_bound}.
To finish, we use Theorem \ref{thm:SDL_LPGD} to deduce that with probability at least $1-1/n$,
\begin{align}
\lVert \bZ_{t} - \bZ^{\star} \rVert_{F} \le \rho^{t} \, \lVert \bZ_{0} - \bZ^{\star}\rVert_{F} + \frac{\tau}{1-\rho}\left( c \sqrt{n} \log n + 3C\sigma(\sqrt{p} + \sqrt{n}+ c\sqrt{\log n}) \right) + \frac{2\nu\tau}{1-\rho}\left( \lVert \A^{\star} \rVert_{2} + \lVert \bGamma^{\star} \rVert_{F} \right)
\end{align}
Note that $\tau<\frac{3}{2L}$ with $L = \max(2\xi,\, 2\nu+ n L^{*}) \ge n L^{*}$, so $\tau<\frac{3}{2nL^{*}}$. So this yields the desired result.
Second, suppose $\bZ^{\star}-\tau \nabla \mathcal{L}_{n}(\bZ^{\star})\notin \Param$. Then we cannot directly simplify the expression \eqref{eq:grad_mapping_compare_stationary}. In this case, we take the Frobenius norm and use non-expansiveness of the projection operator (onto the convex set $\Param$):
\begin{align}\label{eq:grad_mapping_compare_stationary2}
\lVert G(\bZ^{\star}, \tau) \rVert_{F} &= \frac{1}{\tau} \left\lVert\left[ \Pi_{\Param}\left( \bZ^{\star}-\tau\nabla \mathcal{L}_{n}(\bZ^{\star}) \right) - \Pi_{\Param}\left( \bZ^{\star}-\tau\nabla \bar{\mathcal{L}}_{n}(\bZ^{\star}) \right) \right] \right\rVert_{F} \\
&\le \lVert \nabla \mathcal{L}_{n}(\bZ^{\star})- \nabla \bar{\mathcal{L}}_{n}(\bZ^{\star}) \rVert_{F} \\
& \le \lVert \Delta \X^{\star} \rVert_{F} + \lVert \Delta \bGamma^{\star} \rVert_{F}.
\end{align}
According to Remark \ref{rmk:pf_thm_LPGD}, we also have Theorem \ref{thm:CALE_LPGD} (and hence Theorem \ref{thm:SDL_LPGD}) with $\sqrt{3r} \lVert \Delta \X^{\star} \rVert_{2}$ replaced with $\lVert \Delta \X^{\star} \rVert_{F}$. Then an identical argument shows
\begin{align}\label{eq:SDL_MLE_pf_bd_Q4}
S':= \lVert \Delta \X^{\star} \rVert_{F} + \lVert \Delta \bGamma^{\star} \rVert_{F} \le c_{1} \lVert \mathtt{Q}_{1} \rVert_{2} + c_{2}\lVert \mathtt{Q}_{2} \rVert_{2} + c_{3}\lVert \mathtt{Q}_{3} \rVert_{2} + c_{4}\lVert \mathtt{Q}_{4} \rVert_{F},
\end{align}
where the constants $c_{1},\dots,c_{4}>0$ are the same as in \eqref{eq:c_constants_Q}. So we have
\begin{align}
\lVert \bZ_{t} - \bZ^{\star} \rVert_{F} \le \rho^{t} \, \lVert \bZ_{0} - \bZ^{\star}\rVert_{F} + \frac{\tau}{1-\rho}(S' + 2\nu (\lVert \A^{\star} \rVert_{2}+\lVert \bGamma^{\star} \rVert_{F})).
\end{align}
Then an identical argument with the inequality $\lVert \mathtt{Q}_{4}\rVert_{F} \le \sqrt{\min(p,n)} \lVert \mathtt{Q}_{4}\rVert_{2}$ shows
\begin{align}\label{eq:S_bd_pf2}
&\P\left( S' > (c_{1}+c_{2}+c_{3}+c_{4})t + 3C\sigma(\sqrt{p} + \sqrt{n}+ t') \sqrt{\min(p,n)} \right) \\
&\qquad \le \left( \sum_{i=1}^{3} \P\left( \lVert \mathtt{Q}_{i} \rVert_{2} >t\right) \right) + \P\left( \lVert \mathtt{Q}_{4} \rVert_{2} > 3C\sigma(\sqrt{p} + \sqrt{n}+ t') \right),
\end{align}
and the assertion follows similarly as before.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:SDL_LPGD_STAT_feat}}]
The argument is entirely similar to the proof of Theorem \ref{thm:SDL_LPGD_STAT}. Indeed, denoting $\a_{s}=\A[:,s]+\bGamma^{T}\x_{s}'$ for $s=1,\dots,n$ and keeping the other notations the same as in the proof of Theorem \ref{thm:SDL_LPGD_STAT}, we can compute the following gradients
\begin{align}
\nabla_{\A} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &= \begin{bmatrix}
\dot{\h}(y_{1},\a_{1}),\dots,
\dot{\h}(y_{n},\a_{n})
\end{bmatrix} \\
\nabla_{\vect(\bGamma)} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\left( \sum_{s=1}^{n} \dot{\h}(y_{s},\a_{s}) \otimes \x_{s}' \right) \\
\nabla_{\B} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\frac{2}{2\sigma^{2}} (\B^{\star}-\X_{\textup{data}}) = \frac{2}{2\sigma^{2}} [ \beps_{1},\dots,\beps_{n}] \\
\nabla_{\blambda} (\mathcal{L}_{n} - \bar{\mathcal{L}}_{n})(\A,\B,\bGamma) &=\frac{2}{2\sigma^{2}} \sum_{s=1}^{n}\beps_{s}' .
\end{align}
Hence repeating the same argument as before, using concentration inequalities for the following random quantities
\begin{align}
\mathtt{Q}_{1}:= \begin{bmatrix}
\dot{\h}(y_{1},\a_{1}),\dots,
\dot{\h}(y_{n},\a_{n})
\end{bmatrix} \in \R^{\kappa\times n} ,\quad \mathtt{Q}_{2}:=\sum_{s=1}^{n} \beps_{s}\in \R^{p} ,\quad \mathtt{Q}_{3}:=\sum_{s=1}^{n} \beps_{s}'\in \R^{q}, \quad \mathtt{Q}_{4}:= [ \beps_{1},\dots,\beps_{n}] \in \R^{p\times n},
\end{align}
one can bound the size of $G(\bZ^{\star},\tau)$ with high probability. The rest of the details are omitted.
\end{proof}
\section{Proof of Theorem \ref{thm:MLE_local_consistency}}
In this section, we prove the non-asymptotic local consistency of the constrained and regularized MLE, stated in Theorem \ref{thm:MLE_local_consistency}. We combine a classical approach in [21] with concentration inequalities, namely, a classical Berry-Esseen bound for deviations from the standard normal distribution and a uniform McDiarmid bound (Lemma \ref{lem:uniform_McDirmid}). The former is used to control the linear term in the second-order Taylor expansion of the log-likelihood function, and the latter is used to control the second-order term. By using an $\eps$-net argument, the latter concentration inequality can be extended to a setting where the random variables are parameterized within a compact set.
\begin{lemma}[A uniform McDiarmid inequality]
\label{lem:uniform_McDirmid}
Let $(X_{n})_{n\ge 1}$ be i.i.d. random vectors in $\R^{d}$ from a distribution $\pi$. Fix a compact parameter space $\Param\subseteq \R^{p}$ and, for each $\param\in \Param$, let $f_{\param}:\R^{d}\rightarrow \R$ be a bounded function such that
\begin{align}\label{eq:McDirmid_cond}
\lVert f_{\param} - f_{\param'} \rVert_{\infty} \le L\lVert\param-\param' \rVert ,\qquad \forall \param,\param'\in \Param
\end{align}
for some constant $L>0$. Further assume that $\E_{X\sim \pi}[f_{\param}(X)]=0$ for all $\param\in \Param$. Then there exist constants $C,M>0$ such that for each $n \ge 1$ and $\eta>0$,
\begin{align}
\P\left( \sup_{\param\in \Param} \left| \frac{1}{n}\sum_{k=1}^{n} f_{\param}(X_{k}) \right| \ge \eta \right) \le C \exp\left( p\log (1/\eta) - \frac{\eta^{2}n}{ 2M^{2} }\right).
\end{align}
\end{lemma}
\begin{proof}
Since $\Param\subseteq \R^{p}$ is compact, it can be covered by a finite number of $L^{2}$-balls of any given radius $\eps>0$. Let $\mathcal{U}_{\eps}$ be such a cover using the least number of balls of radius $\eps$, and let $N(\eps) = |\mathcal{U}_{\eps}|$ denote this covering number. Moreover, let $\textup{diam}(\Param)$ denote the diameter of $\Param$, which is finite since $\Param$ is compact. Then $\Param$ is contained in a $p$-dimensional box of side length $\textup{diam}(\Param)$. It follows that there exists a constant $K>0$, depending only on $\textup{diam}(\Param)$ and $p$, such that
\begin{align}\label{eq:bd_N(eps)}
N(\eps)\le K\left( \frac{\textup{diam}(\Param)}{\eps} \right)^{p}.
\end{align}
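To see how this covering bound feeds into the final exponent (a bookkeeping remark added here for clarity; it simply anticipates the choice $\eps=\eta/(2L)$ made below), note that
\begin{align}
\log N\big(\eta/(2L)\big) \le \log K + p\log\left( \frac{2L\,\textup{diam}(\Param)}{\eta} \right) = p\log(1/\eta) + \log K + p\log\big(2L\,\textup{diam}(\Param)\big),
\end{align}
and the last two terms are absorbed into the constant $C$ in the statement of the lemma.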
Next, fix $\eta>0$, $\param\in \Param$, and $\eps>0$. Let $\param_{1},\cdots,\param_{N(\eps)}$ be the centers of the balls in the open cover $\mathcal{U}_{\eps}$. Then there exists $1\le j \le N(\eps)$ such that $\lVert \param - \param_{j}\rVert <\eps$. By the Lipschitz hypothesis \eqref{eq:McDirmid_cond},
\begin{align}
\lVert f_{\param} - f_{\param_{j}} \rVert_{\infty} \le L\eps.
\end{align}
Denote $H_{n}(\param):= n^{-1}\sum_{k=1}^{n} f_{\param}(X_{k})$. Then this yields, almost surely,
\begin{align}
| H_{n}(\param) - H_{n}(\param_{j}) | \le L\eps.
\end{align}
Furthermore, since $\Param$ is compact and each $f_{\param}$ is bounded, \eqref{eq:McDirmid_cond} implies that $\lVert f_{\param} \rVert_{\infty}$ is uniformly bounded in $\param$ by some constant, say, $M>0$. It follows that for each $\param\in \Param$, $H_{n}(\param)$ changes its value by at most $2M/n$ when one of $X_{1},\dots,X_{n}$ is replaced arbitrarily. Therefore, by the standard McDiarmid inequality (see, e.g., \cite[Thm. 2.9.1]{vershynin2018high}) and a union bound, choosing $\eps=\eta/(2L)$,
\begin{align}
|
# Is the Molecular Weight Dependence of the Glass Transition Temperature
Driven by a Chain End Effect?
William F. Drayer and David S. Simmons
###### Abstract
The immense dependence of the glass transition temperature $T_{g}$ on
molecular weight $M$ is one of the most fundamentally and practically
important features of polymer glass formation. Here, we report on molecular
dynamics simulation of three model linear polymers of substantially different
complexity demonstrating that the 70-year-old canonical explanation of this
dependence (a simple chain end dilution effect) is likely incorrect at leading
order. Our data show that end effects are present only in relatively stiff
polymers and, furthermore, that the magnitude of this end effect diminishes on
cooling. Instead, we find that $T_{g}(M)$ trends are dominated by
shifts in $T_{g}$ throughout the entire polymer chain rather than through a
chain end effect. We show that these data are consistent with a generic two-
barrier model of $T_{g}$ and its $M$-dependence, motivated by the Elastically
Collective Nonlinear Langevin Equation (ECNLE) theory. More broadly, this work
indicates both a need to reassess the canonical understanding of $T_{g}(M)$ in
linear polymers (and macromolecules at large) and an opportunity to reveal new
glass formation physics with renewed study of $M$ effects on $T_{g}$.
Department of Materials Science and Engineering, University of Pennsylvania,
Philadelphia, PA 19104, USA
Department of Chemical, Biological and Materials Engineering, University of
South Florida, Tampa, FL 33620, USA
## 1 Introduction
A diverse array of systems solidify on laboratory timescales through the glass
transition, a poorly understood phenomenon wherein relaxation times
dramatically grow on cooling over a finite range of temperature $T$ 1, 2, 3.
One central feature of this transition is a profound dependence on molecular
weight $M$ or polymer degree of polymerization $N$: the glass transition
temperature $T_{g}$ commonly differs over 200 K between the small molecule
(e.g., monomer) and the infinite $M$ polymer limits 4. Indeed, while this has
historically been understood as an issue of polymer physics only, more recent work
has suggested a continuum of $T_{g}$ size dependence spanning from polymers
with large $M$ down to the genuine small molecule limit4.
The canonical textbook explanation 5, 6, 7, 8, 9, 10 for this trend was
established in the early 1950’s by Fox and Flory (FF) 11 and Ueberreiter and
Kanig (UK) 12. In the case of the former, specifically for polystyrene
polymers, end groups are suggested to “act like a foreign substance in
disrupting the local configurational order of the styrene units,” 11 which in
turn has been interpreted as a chain end free volume effect present in
polymers more generally.
Ueberreiter and Kanig are more explicit in their interpretation of chain ends
and their impact on $T_{g}$ at large. The original publication 12 in fact has
sections entitled “Polymers as Mixtures of End and Middle Groups” and “Chain
End Groups Acting as Plasticizers.” Their discussion of end groups states that
“[they] have a greater expansion coefficient according to an improved mobility
which is due to their privileged position” and later that “[i]t therefore
seems reasonable to treat the end groups as plasticizers.” Finally, they
remark in their summary that “[t]he end groups act as plasticizers and cause
the ‘self-plasticization’ of the polymer.”
These arguments are reasonable and intuitively appealing, with the ends
exhibiting some combination of faster dynamics or higher free volume due to
having one less bonded neighbor and the middle groups exhibiting slower
dynamics and/or lower free volume due to their extra bond relative to the end
groups. The effect on $T_{g}$ for growing chain length is then simply a
dilution of this chain end effect just like that of volume. One would expect
chain ends to exhibit enhanced mobility or free volume relative to other chain
segments, and that the infinite molecular weight limit is reached when the
majority of segments are beyond the dynamical or structural influence of chain
ends and thus exhibit mobility characteristic of an infinite chain. Indeed,
given that interactions cannot be infinite range, it _must_ follow that any
enhancement in mobility by the chain ends must radiate outward from the ends
via some gradient along the backbone or through space. However, this
underlying local mechanism has never been fully validated.
Despite the long-standing predominance of this perspective, and perhaps in
part due to the lack of a direct test to date, questions have emerged over
whether this represents the sole, or even the dominant, mechanism driving the
$M$ dependence of $T_{g}$. Early work by Cowie 13 argued for the presence of
three regimes of $T_{g}(M)$ behavior — a complexity not captured by the two-
parameter FF and UK forms. Novikov and Rössler have suggested that the
canonical scenario is missing a distinct mechanism that dominates in the low
$M$ limit 4. Indeed, their work emphasizes a continuity between the molecular
weight dependence of $T_{g}$ for polymers and that for small molecules, with
the former merging into the latter in the low molecular weight limit. In
addition to suggesting the need for an additional mechanism, this work
therefore also emphasizes the breadth of the importance of understanding how
molecular size impacts $T_{g}$ in both polymers and small molecules. Other
distinct scenarios have suggested that the $T_{g}(M)$ dependence is driven by
the growth of chain stiffness or intramolecular activation barriers with $M$,
neither directly mediated by any chain end effect 14, 15. Even the basic
physical rationale for the FF and UK perspectives has been reconsidered, with
Zaccone and Terentjev 16 showing that the FF equation can be derived from a
chain connectivity rather than chain end dilution argument.
Data published by Miwa et al. are particularly interesting from the
perspective of this discussion 17. They reported that chain ends exhibit a
local drop in the temperature of a spin transition (which they argue is
proportional to $T_{g}$) for spin-labeled polystyrene, but that the chain end
spin transition was itself $M$-dependent, even at fairly high $M$. While the
enhancement in mobility at chain ends seems in accord with canonical chain end
dilution models, its local $M$ dependence is not; at least to leading order,
the polymer is modelled in FF and UK with chain ends that exhibit enhanced
mobility or lower $T_{g}$ for all $M$, with a chain of infinite length
infinitely diluting this effect wherein almost all polymer segments exhibit
$T_{g}$ of the infinite limit, $T_{g,\infty}$. The overall $M$ dependence is
instead expected to emerge at the mean chain level by averaging.
To assess whether the $T_{g}(M)$ dependence is predominantly driven by chain
end effects as anticipated by the FF and UK models, we measure local dynamics
in molecular dynamics (MD) simulations of three well-established polymers
models: a freely-jointed chain (FJC) 18, 19, a freely-rotating chain (FRC) 19,
and OPLS all-atom polystyrene (AAPS) 20, 21, 22. These models span a range of
complexity and strength of intramolecular correlations, which prior studies
23, 14, 15, 3, 24 have suggested play an important role in $M$ effects on
$T_{g}$.
## Methodology
### Model and Simulation Details
We study three models spanning from a fully-flexible bead-spring chain to a
chemically realistic polymer in this work. All simulations are performed in
LAMMPS 25. The simplest model, the freely-jointed chain (FJC), uses the
standard finite extensible nonlinear elastic (FENE) bond potential 18,
$E=-0.5KR_{0}^{2}\ln\left[1-\left(\frac{r}{R_{0}}\right)^{2}\right]+4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]+\epsilon,$
with particle size $\sigma=1$, interaction strength $\epsilon=1$, FENE elastic
constant $K=30$, and maximum bond elongation $R_{0}=1.5$. To increase chain
stiffness, we employ an angle potential to model a freely-rotating chain
(FRC),
$E=K_{\theta}[\cos(\theta)-\cos(\theta_{0})]^{2}.$
We set the bending constant $K_{\theta}=25$ and bending equilibrium angle
$\cos\theta_{0}=-0.333$ (which corresponds to 109.5 degrees), as done in prior
work 19. Bead-spring simulations utilize the Stoermer-Verlet time integration
algorithm as implemented in LAMMPS with a timestep of $\tau=0.005$. Both the
FJC and FRC span chain lengths of $4\leq N\leq 60$ beads, with a total bead
count of 30000 per simulation.
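For concreteness, the bond and angle energies above can be written out in a few lines of Python. This is only an illustrative sketch of the potential-energy expressions with the parameters quoted in the text (reduced Lennard-Jones units); it is not the LAMMPS implementation used for the production simulations.

```python
import numpy as np

def fene_lj_bond_energy(r, K=30.0, R0=1.5, epsilon=1.0, sigma=1.0):
    """FENE bond plus shifted LJ repulsion for the bead-spring (FJC) model.

    Note: in Kremer-Grest-type models the LJ part is typically truncated at
    r = 2^(1/6) sigma; this sketch just evaluates the expression as written.
    """
    fene = -0.5 * K * R0**2 * np.log(1.0 - (r / R0)**2)
    lj = 4.0 * epsilon * ((sigma / r)**12 - (sigma / r)**6) + epsilon
    return fene + lj

def frc_angle_energy(theta, K_theta=25.0, cos_theta0=-0.333):
    """Freely-rotating-chain bending energy, E = K_theta*(cos(theta) - cos(theta0))^2."""
    return K_theta * (np.cos(theta) - cos_theta0)**2

# Example: a bond near its typical length, and a bend at the equilibrium angle (~0 by construction)
print(fene_lj_bond_energy(0.97))
print(frc_angle_energy(np.deg2rad(109.5)))
```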
To analyze a model with realistic chemical structure, we perform additional
analysis of OPLS all-atom polystyrene (AAPS) simulations first published in
prior work 21, 22. Full details of those simulations can be found in those
prior publications. Degrees of polymerization for AAPS range from $3\leq N\leq
400$ chemical repeat units, with a total chemical repeat unit count per
simulation of approximately 800 (e.g., there are 160 chains for $N=5$ and two
chains for $N=400$).
We utilize the PreSQ simulation protocol 21, 22 wherein simulations begin with
a high temperature anneal before sequential linear quenches and further
isothermal annealing sufficient to yield equilibrium relaxation times at the
mean system level. These simulations are performed in the isothermal-isobaric
($NPT$) ensemble using the Nose-Hoover thermostat and barostat, with both
damping parameters set to $\tau=2$.
### Analysis Details
Relaxation is determined using the self-part of the intermediate scattering
function,
$F_{s}(q,t)=\left\langle\frac{1}{S}\sum^{S}_{k=1}\frac{1}{N}\sum^{N}_{j=1}\exp{\left(-i\mathbf{q}\cdot\left(\mathbf{r}_{j}(t+s_{k})-\mathbf{r}_{j}(s_{k})\right)\right)}\right\rangle_{|\mathbf{q}|=q}$
(1)
choosing a wavenumber near the first peak in the structure factor:
7.07196/$\sigma_{LJ}$ (where $\sigma_{LJ}$ is the Lennard-Jones unit of length
and is of order 1 nm in real units) for both the freely-jointed and freely-
rotating chain models and 1.19952/Å for the all-atom polystyrene model. We
show representative data for $F_{s}$ as a function of time for each model with
a chain length of $N=10$ for all models, with the addition of $N=100$ for
AAPS, plotting both the mean system data and chain end data in the SI. The
slow relaxation process within these relaxation functions is then fit to a
stretched exponential and the alpha relaxation time is defined as the time at
which this function decays to 0.2, as done in several prior works 26, 27, 28,
21, 22.
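A minimal sketch of this procedure in Python (array names and shapes are hypothetical; this is not the analysis code used for the paper): compute $F_s$ for a set of wavevectors of the chosen magnitude, fit the slow decay to a stretched exponential, and solve the fit for the time at which it reaches 0.2.

```python
import numpy as np
from scipy.optimize import curve_fit

def self_isf(positions, qvecs, lags):
    """Self-intermediate scattering function, averaged over time origins, particles,
    and a set of wavevectors with |q| near the first structure-factor peak.

    positions: (n_frames, n_particles, 3) unwrapped coordinates
    qvecs:     (n_q, 3) wavevectors of the chosen magnitude
    lags:      iterable of integer frame separations (each >= 1)
    """
    fs = np.empty(len(lags))
    for i, dt in enumerate(lags):
        dr = positions[dt:] - positions[:-dt]          # displacements over lag dt
        phase = np.einsum('qk,tpk->tpq', qvecs, dr)    # q . dr for every origin/particle/q
        fs[i] = np.cos(phase).mean()                   # Re<exp(-i q.dr)>; the imaginary part averages out
    return fs

def kww(t, tau, beta, A):
    """Stretched-exponential (KWW) form used for the slow (alpha) decay."""
    return A * np.exp(-(t / tau) ** beta)

def alpha_time(t, fs, threshold=0.2):
    """Fit the slow decay to KWW and return the time at which the fit crosses `threshold`."""
    (tau, beta, A), _ = curve_fit(kww, t, fs, p0=(t[len(t) // 2], 0.7, 1.0), maxfev=10000)
    return tau * np.log(A / threshold) ** (1.0 / beta)
```

In practice the KWW fit would be restricted to the post-ballistic, slow portion of $F_s$, as in the prior works cited above.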
We perform this analysis two ways. First, we compute a relaxation time for the
entire system by summing in Equation 1 over all segments in the system.
Second, we perform the analysis for particular repeat unit locations within
the chain. In the bead spring models, for example, we compute a relaxation
time for chain ends by summing over only end beads. We also do this for beads
bonded to chain ends, beads bonded to those beads, and so on, in general
computing a mean relaxation time at each position $i$ within the chain, where
$i=1$ is the chain end, $i=2$ denotes repeat units bonded to a chain end, and
so on. We perform a similar analysis for AAPS, but in this case for each
repeat unit location, we average Equation 1 over all atoms within that repeat
unit location in all chains.
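The bookkeeping for the position-resolved version amounts to labeling each bead (or repeat unit) by its distance from the nearest chain end and restricting the average to one label at a time. A small illustrative sketch, using the same $i=1,\dots,N/2$ indexing convention described above (the code itself is hypothetical):

```python
import numpy as np

def index_from_end(N):
    """Label the beads of an N-bead linear chain by distance from the nearest end (1 = chain end)."""
    j = np.arange(N)
    return np.minimum(j, N - 1 - j) + 1

# Example: a melt of 3000 chains of length N = 10, beads stored chain after chain.
# Any per-bead quantity (F_s contributions, displacements, ...) can then be averaged
# over one position label at a time with a boolean mask.
labels = np.tile(index_from_end(10), 3000)
end_mask = (labels == 1)   # all chain-end beads (i = 1)
mid_mask = (labels == 5)   # middle-most pairs for N = 10 (i = N/2)
```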
After computing these relaxation times across a range of temperature, $T_{g}$
is then quantified by fitting these relaxation time $\tau$ and temperature $T$
data to the MYEGA functional form, suggested by Mauro et al. 29, which may be
written as
$\log{\tau}=\log{\tau_{\infty}}+\frac{A}{T}\exp{\left(\frac{B}{T}\right)},$ (2)
where $\tau_{\infty}$, $A$, and $B$ are the fitting parameters. We rewrite
this self-consistently in terms of the glass transition temperature, replacing
$A$ as a fitting parameter, as
$\log{\tau}=\log{\tau_{\infty}}+\left(\log\tau_{g}-\log{\tau_{\infty}}\right)\frac{T_{g}}{T}\exp{\left(B\left(\frac{1}{T}-\frac{1}{T_{g}}\right)\right)}$
(3)
in order to conveniently obtain standard errors on $T_{g}$ from a least-
squares regression upon choosing a timescale $\tau_{g}$ for $T_{g}$. We define
$T_{g}$ at an extrapolated experimental timescale of 100 seconds for AAPS, to
allow for experimental comparability (this comparison has been validated in
prior work at the mean system level 22). For the FJC and FRC, we instead
employ a computational timescale $T_{g}$ convention of $\tau_{g}=10^{4}$
dimensionless Lennard-Jones time units, due to the lack of experimental
analogue, as is standard for bead models 30, 31, 32.
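A hedged sketch of how Equation 3 can be fit for $T_g$ and its standard error with an off-the-shelf least-squares routine (array names, initial guesses, and units are illustrative only; the example values assume reduced LJ units):

```python
import numpy as np
from scipy.optimize import curve_fit

def myega_tg_form(T, Tg, log_tau_inf, B, log_tau_g):
    """Equation 3: the MYEGA form rewritten with Tg as an explicit fit parameter."""
    return log_tau_inf + (log_tau_g - log_tau_inf) * (Tg / T) * np.exp(B * (1.0 / T - 1.0 / Tg))

def fit_tg(T, log_tau, log_tau_g, p0=(0.45, -1.0, 1.0)):
    """Fit log(tau) vs T to Eq. 3 at a chosen tau_g convention; return Tg and its standard error."""
    model = lambda T, Tg, log_tau_inf, B: myega_tg_form(T, Tg, log_tau_inf, B, log_tau_g)
    popt, pcov = curve_fit(model, T, log_tau, p0=p0, maxfev=20000)
    return popt[0], np.sqrt(pcov[0, 0])

# Usage (reduced LJ units, tau_g = 10^4): Tg, Tg_err = fit_tg(T, log_tau, log_tau_g=4.0)
```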
The same process is used for both mean system (cf. Figure 1, filled markers)
and repeat unit specific (cf. Figure 1, crosses and open markers, and Figure
2, all markers) analysis by employing their respective relaxation times as
described above; we thus obtain $T_{g}$ both at the whole-system level and at the
level of particular repeat unit positions within the polymer (averaged over many
chains). We note that the definition of a local $\tau$ and thus $T_{g}$ in
this manner has extensive precedent, particularly from the perspective of
interfacial and nanoconfinement effects on dynamics in glass-formers 33, 34,
35, 36, 37, 38, 39, 40, 41. We especially highlight how Priestley and
coworkers have reported on the $T_{g}$ of individual segment locations as a
function of number of bonds from a block copolymer junction, which is very
similar to our segment-specific $T_{g}$ reported here 42.
We also compute local values of the Debye-Waller factor, $\langle
u^{2}\rangle$, which is a measure of dynamical free volume 43, 44, 21, 45, 46,
47, 48, 49, 50, glassy elasticity 51, 52, and particle localization 53, 54,
55, 56, 57 that quantifies the space accessed by a segment within a cage of
its transient neighbors. Specifically, we choose the mean-square displacement
(MSD) at a time delta of 0.275 (or $\approx 10^{-0.55}$) dimensionless time
units for both the FJC and FRC and 0.711 ps (or $\approx 10^{-0.15}$ ps) for
AAPS (consistent with prior work21). As $\langle u^{2}\rangle$ is a (nearly
linear, see SI) function of temperature, we choose a consistent $T$ near the
lowest accessed by simulation for each model to assess trends with respect to
$N$. The chosen temperatures (0.4 for the FJC, 0.6 for the FRC, and 500 K for
AAPS) are at worst a slight extrapolation for the largest chain lengths (as
the longest chains exhibit the largest $T_{g}$ values) with the majority of
the data being interpolations. Plots of the MSD data for select systems and
$\langle u^{2}\rangle$ as a function of temperature are found in the SI (for
the same systems as done for the $F_{s}$ data).
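Operationally, $\langle u^{2}\rangle$ as used here is just the mean-square displacement at one fixed, short lag; a minimal sketch assuming unwrapped coordinates (the frame lag corresponding to the quoted times depends on how often trajectory frames were written):

```python
import numpy as np

def debye_waller(positions, lag_frames):
    """<u^2>: mean-square displacement at one fixed, caging-timescale lag.

    positions:  (n_frames, n_particles, 3) unwrapped coordinates
    lag_frames: integer lag corresponding to t ~ 0.275 tau_LJ (FJC/FRC) or ~0.711 ps (AAPS)
    """
    dr = positions[lag_frames:] - positions[:-lag_frames]
    return float(np.mean(np.sum(dr ** 2, axis=-1)))
```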
## 2 Results
### 2.1 Quantifying Chain End Effects
Figure 1: Mean system and chain end $T_{g}$, normalized by its value for the
highest $M$ simulated for a given system, plotted for the systems shown in the
legend. Error bars are standard errors on $T_{g}$ as a fit parameter for AAPS
(standard errors are smaller than the data points for the FJC and FRC).
We begin by considering how varying $M$ (by means of degree of polymerization
or chain length $N$) impacts $T_{g}$ for these systems. We initially define
$T_{g}$ for AAPS on the experimental timescale (100 seconds) by extrapolating
the fit to Equation 3. This extrapolation has been robustly validated
against experiment for polystyrene based on this simulation model over a wide
range of chain lengths in prior work 22, such that we can reliably infer
experimental-timescale glass formation behavior from these simulations at much
shorter times. For the two coarse-grained systems, consistent with prior work 31, 58,
59, 60 we define $T_{g}(M)$ on a computational timescale of $10^{4}\tau_{LJ}$
(Lennard-Jones time units). As shown in Fig. 1, all simulated systems exhibit
an appreciable dependence of their mean system $T_{g}$ on $N$, in a manner
comparable to experiment. This dependence is stronger in models with stronger
intramolecular correlations (AAPS is the most stiff while the FJC is the most
flexible), consistent with prior reasoning and common trends in experimental
polymers 23, 4, 14, 3, 15.
Figure 2: $T_{g}$ (a-c), $\langle u^{2}\rangle$ (d-f), and $\log\tau$ (g-i)
plotted as a function of bead index $i$: chain ends are labeled $i=1$ (either
bead for FJC and FRC or monomer for AAPS), $i=2$ indicates repeat units bonded
to chain ends, and so on until $i=N/2$, which indicates pairs of middle-most
repeat units for even $N$ (for odd AAPS chain lengths, the last data point is
a single monomer). Each row corresponds to each model, as labeled inside each
panel. Error bars are standard errors on $T_{g}$ as a fit parameter (within
the data points for FRC). Note that error bars increase with chain length for
AAPS due to reduced statistical sampling.
Additionally shown in Fig. 1, all three models exhibit negligible or weak
local reduction in $T_{g}$ at chain ends, relative to the overall magnitude of
the trend of mean $T_{g}$ with $N$. Essentially no $T_{g}$ end effect is
observed in the FJC model and there is less than a 1% end reduction for the
FRC model ($\sim 4$ K in real units for typical glassy polymers). In AAPS the
end effect is of order 30K, relative to a shift of nearly 200K in mean $T_{g}$
with $M$ (consistent with Miwa et al.’s experimental findings 17). We expand
upon this in Figure 2a-c, which shows that local variations in $T_{g}$ along
the backbone near the chain end are weak or absent in all three models. Even
in AAPS, where a modest $T_{g}$ end effect is present, it does not appreciably
propagate to covalently connected segments. We instead observe a whole-chain
effect of increasing $T_{g}$ with respect to $N$ regardless of $i$, in
contrast to the expected chain-end effect. Evidently, a direct $T_{g}$ end
effect is vastly too small to account for the much larger variation in mean-
chain $T_{g}$ with $N$ observed in Fig. 1 for all three models.
Could a more pronounced end effect still be present in free volume, which is
the underlying proposition of the FF model? Interestingly, Fig. 2d-f indicate
that trends in $\langle u^{2}\rangle$ with $M$ and chain location are
nonuniversal. As with $T_{g}$, shifts in $\langle u^{2}\rangle$ with $M$ in
the FJC model are nearly uniform through the whole chain, with at most a very
weak chain end gradient. In contrast, AAPS exhibits a significant chain end
effect of enhanced $\langle u^{2}\rangle$. The FRC model exhibits a mix of
these two scenarios. This trend is perhaps as expected: intramolecular
correlations contribute to the cage scale in stiffer polymers, such that the
reduced bond connectivity near chain ends relieves caging more significantly
in these systems. However, as shown by the FJC, it is evidently possible for a
polymer to exhibit at least a 10% drop in $T_{g}$ without a significant chain
end effect in either $T_{g}$ or $\langle u^{2}\rangle$. This suggests that
chain end mobility or free volume effects cannot be the sole driver of the
$T_{g}(M)$ dependence. In contrast, data for FRC and AAPS indicate that even
the presence of an appreciable enhancement in chain-end $\langle u^{2}\rangle$
does not necessarily translate to a substantial suppression in chain end
$T_{g}$.
Figure 3: $\tau$ vs $T$ for $N=100$ AAPS, for chain ends (open symbols) and
mean system (green symbols). Curves in corresponding colors are fits to the
MYEGA functional form 29. Green vertical lines (rightmost of each pair) denote
the $T$ at which the mean system $\tau$ equals $10^{-11}$s (dotted),
$10^{-9}$s (dot-dashed), and $10^{2}$s (dashed), respectively. Black vertical
lines (leftmost of each pair) denote the $T$ at which the chain ends exhibit
these same $\tau$. The $T$ difference, $\Delta T_{g}$, is reported for each of
the three timescales. Heavy red vertical segments highlight the chain end
$\tau$ reduction relative to the mean system at each timescale.
Figure 4:
Magnitude of chain end $T_{g}$ effect vs conventional timescale $\tau_{g}$ for
AAPS chain lengths indicated in the legend. Filled markers denote points for
which both mid-chain and end-chain $T_{g}$ values are interpolated from
simulation data; grey markers are points for which the mid-chain $T_{g}$ is
interpolated and end-chain values are mildly extrapolated; open markers denote
points for which $T_{g}$ values are extrapolated for all monomers.
How does enhanced chain end $\langle u^{2}\rangle$ not directly lead to
suppressed chain end $T_{g}$? In Fig. 2g-i we report relaxation time $\tau$
gradients along the chain backbone at the same $T$ for which we reported
$\langle u^{2}\rangle$ end gradients. At least for AAPS, the chain end $\tau$
is indeed considerably reduced for $T$ well above that of $T_{g}$ when defined
on the experimental timescale. The failure of this increase to lead to a
chain-end $T_{g}$ enhancement lies in a subtle feature of its $T$ dependence,
which we demonstrate in Fig. 3. The chain-end mobility enhancement strengthens
on cooling on an _absolute_ basis; however, this strengthening is insufficient
to keep up with the growth in the overall activation barrier of relaxation on
cooling, such that it becomes _relatively_ weaker in its implications for
$T_{g}$. We further show this effect in Fig. 4, which emphasizes that this
effect occurs within computationally accessible timescales and is not merely a
result of extrapolation. Figures 3 and 4 demonstrate how the magnitude of the
$T_{g}$ end effect shrinks as the timescale $\tau_{g}$ that defines $T_{g}$ is
increased.
### 2.2 Theoretical Interpretations
Our data indicate that the chain end mobility effects intuited by FF and UK
are present at high $T$ in semiflexible chains (although negligible in quite
flexible chains) but diminish in significance upon cooling and become sub-
dominant by the experimental $T_{g}$ timescale. Within the context of many
classical theories of glass formation, this observation seems surprising. Many
of these theories, including free volume theory 61, 62 and classical entropy
theories 63, postulate the presence of a single dominant activation barrier to
relaxation in glass-forming liquids. This barrier is postulated to grow on
cooling in a manner that results from a _multiplicative product_ of the high
temperature barrier with a temperature-dependence amplification factor
associated with a growing cooperative lengthscale. As an example, within the
Adam-Gibbs theory of glass formation, $\log(\tau)$ goes as a high-$T$ barrier
times a cooperativity factor over $k_{B}T$ 63. A reduction in either high-$T$
activation barrier or cooperativity at the chain ends would thus not be
expected to diminish in importance on cooling. A similar intuition would seem
to hold for free volume approaches given the inverse proportionality of the
activation barrier to a single quantity (the free volume). It is this
intuition, that the alteration in activation barriers near the chain end
becomes highly important in the glass formation range, that drives the
classical FF and UK viewpoints.
More recently, a distinct alternative perspective has emerged that views
glassy super-Arrhenius behavior as emerging from an _additive_ , rather than
multiplicative, growth in the barrier on cooling 54, 55, 64. In particular,
the Elastically Collective Nonlinear Langevin Equation (ECNLE) theory of glass
formation formulates its activation barrier as a sum of a local barrier (which
grows relatively weakly on cooling) and a collective elastic barrier (which is
predicted to emerge and then grow relatively strongly on cooling towards
$T_{g}$) 54, 55.
We can understand our results within the context of these newer perspectives.
Consider a generic two-barrier model wherein the total activation barrier
$F_{tot}^{mid}$ in the mid-chain is a sum of a local barrier $F_{loc}^{mid}$
and a collective barrier $F_{coll}^{mid}$:
$F_{tot}^{mid}\left(N,T\right)=F_{loc}^{mid}\left(N,T\right)+F_{coll}^{mid}\left(N,T\right).$
(4)
Alterations of this barrier at the chain end are rooted in alterations to
intramolecular correlations that are intrinsically present arbitrarily far
above $T_{g}$, and therefore most directly impact the local barrier (since the
collective barrier is absent far above $T_{g}$). We thus model the end effect
as a fractional reduction of the local barrier by a factor $\alpha^{end}$,
which we model as roughly temperature-invariant (but expect to be chemistry
dependent and larger for stiffer chains) since it reflects a truncation of
intramolecular barriers that are mainly steric and bonding in nature and
therefore relatively athermal:
$F_{tot}^{end}\left(N,T\right)={{\alpha}^{end}}F_{loc}^{mid}\left(N,T\right)+F_{coll}^{mid}\left(N,T\right).$
(5)
One can quantify the expected temperature dependence of the chain-end
relaxation time gradient within this perspective by employing the total
barrier forms above within a generalized activation law to compute the ratio
of chain-end to chain-mid relaxation times:
$\log\left(\frac{{{\tau}_{end}}\left(N,T\right)}{{{\tau}_{mid}}\left(N,T\right)}\right)=\frac{\left(1-{{\alpha}^{end}}\right){{F}_{loc}^{mid}}\left(N,T\right)}{kT}.$
(6)
This equation anticipates that the enhanced chain end mobility (relative to
the mid-chain) should grow on cooling, as seen above in Fig. 3 (the vertical
red bars, moving from right to left), simply because of the reduction in
temperature and any growth on cooling of $F_{loc}^{mid}$.
We can further combine Equations 4 and 5 with a generalized activation law as
above, rearrange them to solve for temperature, apply each at its
corresponding local $T_{g}$, and take their ratio. This gives
$\frac{T_{g}^{end}}{T_{g}^{mid}}=R\left[1-\left(1-{{\alpha}^{end}}\right)x_{loc}^{mid}\left(N,T_{g}^{end}\right)\right],$
(7)
in which we define two dimensionless ratios: a prefactor
$R={F_{tot}^{mid}\left(N,T_{g}^{end}\right)}/{F_{tot}^{mid}\left(N,T_{g}^{mid}\right)}$
and $x_{loc}^{mid}\left(N,T\right),$ which is the fraction of the total
barrier in the mid-chain that is contributed by the local barrier at $T$ such
that
$x_{loc}^{mid}\left(N,T_{g}^{end}\right)={F_{loc}^{mid}\left(N,T_{g}^{end}\right)}/{F_{tot}^{mid}\left(N,T_{g}^{end}\right)}.$
For a weak $T_{g}$ end effect, we can approximate the relaxation process as
Arrhenius over the limited temperature range involved, giving $R\approx 1$.
Equation 7 then predicts that the $T_{g}$ end effect shrinks on cooling, even as the
$\tau$ end effect grows: the fraction of the total barrier contributed by the local
barrier, $x_{loc}^{mid}$, shrinks on cooling 54, so the end effect becomes diluted
within the faster-growing overall barrier. Indeed, this is the behavior we see in
Fig. 4.
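The qualitative content of Equations 6 and 7 can be illustrated with a toy numerical example in Python. The barrier forms and parameter values below are entirely hypothetical, chosen only to mimic a weakly $T$-dependent local barrier and a collective barrier that grows rapidly on cooling; they are not taken from ECNLE theory or fit to any data.

```python
import numpy as np

# Hypothetical barrier forms (in units of kT at T = 1): a local barrier that grows mildly
# on cooling and a collective barrier that switches on and grows strongly at low T.
F_loc  = lambda T: 5.0 + 1.0 / T
F_coll = lambda T: 8.0 * np.exp(-(T - 0.4) / 0.1)
alpha_end = 0.8                                   # fractional local-barrier reduction at chain ends

T = np.linspace(0.45, 1.0, 200)
log_tau_mid = (F_loc(T) + F_coll(T)) / T          # generalized activation law: log tau ~ F_tot / kT
log_tau_end = (alpha_end * F_loc(T) + F_coll(T)) / T

# Eq. 6: the chain-end speedup (in decades of tau) grows on cooling ...
speedup = log_tau_mid - log_tau_end

# ... while the fraction of the barrier that is local shrinks, so the Tg end effect
# (Eq. 7 with R ~ 1) becomes relatively weaker at lower T / longer tau_g conventions.
x_loc = F_loc(T) / (F_loc(T) + F_coll(T))
tg_ratio = 1.0 - (1.0 - alpha_end) * x_loc        # T_g^end / T_g^mid evaluated at temperature T

print(speedup[0], speedup[-1])    # larger at the low-T end of the range (T[0] = 0.45)
print(tg_ratio[0], tg_ratio[-1])  # closer to 1 (weaker end effect) at the low-T end
```

By construction, this sketch shows the same qualitative trend as Figs. 3 and 4: the relaxation-time end effect grows on cooling while the corresponding $T_g$ ratio moves toward unity.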
### 2.3 What causes $T_{g}(M)$ if not chain end effects?
Across three models, chain ends evidently do not exhibit sufficiently reduced
$T_{g}$ values to account for the dependence of mean $T_{g}$ on chain length
on the basis of a chain end dilution effect. This would appear to demand a
reevaluation of the textbook understanding of the $T_{g}(M)$ dependence. Our
data indicate that $T_{g}(M)$ variations are primarily driven by the $M$
dependence of the mid-chain (or whole-chain) activation barrier
$F_{tot}^{mid}(N,T)$, which in turn may result from whole-chain trends in some
combination of the local and collective barriers. This type of scenario has
been predicted within the ECNLE theory, where both local and collective
elastic contributions to this activation barrier grow with increasing size of
the fundamental dynamical repeat unit. This unit is taken to be the entire
molecule in small rigid molecules 54, 55 and as the Kuhn segment in polymers
14, 24. In the small molecule case, this leads to the prediction that
$T_{g}\sim\sqrt{M}$ 54, 55, which is consistent with the small molecule limit
identified by Novikov and Rössler 4. In the polymer case, the $M$ dependence
follows from the growth of the Kuhn segment with increasing $N$ (as measured
by growth of the chain’s characteristic ratio $C_{N}$): a reflection of
increases in effective chain stiffness with $M$ 14, 24.
Figure 5: $T_{g}$ plotted as a function of normalized $C_{N}$, each
normalized by their value for the longest chain of that type simulated, for
the FJC (blue circles) and FRC (orange diamonds). Lines in corresponding
colors are linear fits.
Indeed, prior studies have found that variation of $T_{g}$ with $M$ tracks
with variations in $C_{N}$ for at least polystyrene, poly(methyl
methacrylate), and polyethylene. This has also been reported in polydimethyl
siloxane, although this is in dispute 15. While we cannot add to the extant
data for PS $T_{g}$ vs $C_{N}$ correlations due to the limited number of
chains that can be simulated in an AA simulation accessing the glass formation
range, Fig. 5 illustrates that $T_{g}$ is proportional to $C_{N}$ for our two
bead-spring systems, adding to evidence that $T_{g}(M)$ closely tracks with
$C_{N}(M)$ as $M$ is varied for a given polymer, in line with the ECNLE
scenario 14, 24.
It is not clear whether this $C_{N}$ scenario alone fully accounts for the
$T_{g}(M)$ dependence given suggestions that $T_{g}(M)$ exhibits multiple
regimes, particularly in stiffer polymers. Baker et al. have argued that this
may result from nontrivial variations in chain conformational statistics,
combined with an intramolecular dynamical facilitation effect 15. While not
excluding that scenario, our results suggest a potential alternate scenario
given that we find chain end effects on $T_{g}$ to be present, if weak, in our
stiffer systems. It may be that the multiple regimes observed in some stiffer
polymers reflect a combination of a leading order stiffness effect with a
perhaps second-order end effect with parallels to FF and UK.
## 3 Conclusions
We have reported on local dynamics for three model polymers of varying degrees
of complexity and stiffness, ranging from a freely-jointed polymer chain, with
only excluded volume added relative to a random walk in three dimensions, to a
fully atomistic polystyrene chain which indeed agrees with experimental
$T_{g}(N)$ quite well 22. Surprisingly, we find that chain-end mobility
enhancements in dynamics or free volume are insufficient to account for the
shift in overall system $T_{g}$ with varying M. Indeed, the freely-jointed-
chain model exhibits an almost complete absence of chain end gradient in
$T_{g}$, relaxation time, or Debye-Waller factor $\langle u^{2}\rangle$ (one
measure of a dynamic free volume). Nevertheless, this model exhibits a 10%
reduction in $T_{g}$ from the highest to lowest molecular weight simulated.
The results for the freely-rotating-chain model and all-atom polystyrene are
more complex, exhibiting an enhancement in $\langle u^{2}\rangle$ at the chain
end. However, as noted above, this does not translate into appreciable $T_{g}$
gradients at chain ends. Within many classical theories of the glass
transition, this would be difficult to understand. However, we show that it
can be understood in terms of a simple two-barrier scenario of the glass
transition inspired by the Elastically Collective Nonlinear Langevin Equation
(ECNLE) theory of glass formation 54, 55, which predicts additive
contributions to the activation barrier for relaxation from a local barrier
and a longer-ranged collective barrier that grows more strongly on cooling.
If chain-end enhancements in $\langle u^{2}\rangle$ primarily alter the local
barrier contribution, we show that they become less important at low
temperatures where the collective barrier dominates.
Does the absence of a locally-driven chain-end $T_{g}$ suppression
sufficiently large to account for the $T_{g}(M)$ dependence via spatial
averaging genuinely provide compelling evidence that some other mechanism must
be dominant? As we discuss in the introduction, any model in which faster
dynamics or lower $T_{g}$ are effectively _nucleated_ by a chain end effect
requires that this effect must emanate over a _finite_ distance from the chain
end, since correlations in supercooled liquids are not of infinite range. Some
gradient in these properties is therefore to be expected. Our longest chains
are 400 repeat units in the case of AAPS. Most segments in this system are not
within the first neighbor shell of an end; at this molecular weight, chain
ends are expected to be approximately 6 segmental diameters apart on average.
We nevertheless do not observe gradients remotely large enough to account for
$T_{g}(M)$.
Could a gradient emanating from chain ends simply be so long-ranged as to
appear flat even over this spacing? There are several reasons to conclude that
this is not the case. First, prior simulations probing the range of
correlation length scales in glass-forming liquids have suggested that they
are of range only a few segmental diameters on the timescales we access; they
should therefore be readily observable within Figure 2 were they present. As a
possibly even stronger argument, let us consider what it would imply regarding
the behavior of even longer chains if some dramatically longer range were in
fact concealing the gradient within our simulations. This would require that a
$T_{g}$ gradient of magnitude comparable to the overall $T_{g}(M)$ trend would
have to emerge in Figure 2 for PS chains considerably longer than 400 repeat
units (about 40,000 g/mol). Since it would be incoherent for the $T_{g}$ at
any location in the chain to drop with increasing molecular weight, this would
imply that the high-molecular weight mid-chain $T_{g}$ would have to grow by
many tens of K beyond that observed here. This is not plausible, since the
mean $T_{g}$ value here has already plateaued in accord with high-molecular
weight $T_{g}$ observed for PS in experiment.
Collectively, these findings suggest that chain end effects are not the
dominant origin of the $T_{g}(M)$ dependence. They are entirely absent in at
least one system exhibiting a substantial $T_{g}(M)$ dependence, and even when
present, their effects grow weaker on cooling and play little role in $T_{g}$
at experimental timescales. Indeed, we note that even the modest chain end
$T_{g}$ suppression experimentally inferred by Miwa et al. 17 was based on a
shorter-timescale spin transition that occurs at higher temperatures where
this analysis would expect chain end effects to be modestly more important in
some systems than they are at experimental timescales. Perhaps the most
remarkable conclusion is that for the fully flexible chain, chain end effects
for _both_ $\tau$ and $\langle u^{2}\rangle$ are negligible, from which one
must conclude that _at least_ one other mechanism must be responsible for the
observed 10% reduction in $T_{g}$ for this model. Consistent with prior work,
we report that $T_{g}$ in our freely-jointed and freely-rotating chain models
tracks linearly with the characteristic ratio as the molecular weight is
varied. This may be consistent with a scenario encoded within the ECNLE theory
wherein $T_{g}(M)$ is driven by stiffness variations with molecular weight, as
indicated by the growth in the characteristic ratio in longer chains. More
broadly, this may align with the proposition that intramolecular activation
barriers, which play a central role in polymer glass formation 65, may
qualitatively vary with molecular weight 15.
Overall, these findings indicate a need to reopen the study of $M$ effects on
$T_{g}$, with a focus on more recent theories wherein this trend is dominated
by whole-chain effects rather than end effects. Indeed, the finding that the
$T$-dependence of dynamical chain end effects can be understood based on a
two-barrier model of the glass transition suggests that a renewed focus on
this problem may have the potential to yield broader insights into the nature
of glass formation itself.
This material is based upon work supported by the National Science Foundation
under Grant No. DMR - 1849594. The authors acknowledge helpful discussions
with Dr. Kenneth Schweizer.
The supporting information contains curves for decay of the self-part of the
intermediate scattering function, the time evolution of the mean-square
displacement and the temperature dependence of the Debye-Waller factor for
representative chains of each model, along with temperature dependence of mean
relaxation time for each model and all chain lengths.
## References
* Debenedetti and Stillinger 2001 Debenedetti, P. G.; Stillinger, F. H. Supercooled liquids and the glass transition. _Nature_ 2001, _410_ , 259–267
* Cavagna 2009 Cavagna, A. Supercooled liquids for pedestrians. _Physics Reports-Review Section of Physics Letters_ 2009, _476_ , 51–124, WOS:000267351200001
* Novikov and Sokolov 2022 Novikov, V. N.; Sokolov, A. P. Temperature Dependence of Structural Relaxation in Glass-Forming Liquids and Polymers. _Entropy_ 2022, _24_ , 1101, Number: 8 Publisher: Multidisciplinary Digital Publishing Institute
* Novikov and Rössler 2013 Novikov, V.; Rössler, E. Correlation between glass transition temperature and molecular mass in non-polymeric and polymer glass formers. _Polymer_ 2013, _54_ , 6987–6991
* Hiemenz and Lodge 2007 Hiemenz, P. C.; Lodge, T. _Polymer Chemistry_ , 2nd ed.; CRC Press, 2007
* Rubinstein and Colby 2003 Rubinstein, M.; Colby, R. H. _Polymer Physics_ ; OUP Oxford, 2003
* Coleman and Painter 1998 Coleman, M. M.; Painter, P. C. _Fundamentals of Polymer Science: An Introductory Text_ , 2nd ed.; CRC Press: Lancaster, Pa, 1998
* Rudin and P.Eng 2012 Rudin, A.; P.Eng, P. C. P. D. _The Elements of Polymer Science and Engineering: An Introductory Text and Reference for Engineers and Chemists_ , 3rd ed.; Academic Press: San Diego, 2012
* Mathot and Benoist 1994 Mathot, V. B. F., Benoist, L., Eds. _Calorimetry and Thermal Analysis of Polymers_ ; Hanser Pub Inc: Munich ; New York : Cincinnati, 1994
* Rosen 1993 Rosen, S. L. _Fundamental Principles of Polymeric Materials_ , 2nd ed.; Wiley-Interscience: New York, 1993
* Fox and Flory 1950 Fox, T. G.; Flory, P. J. Second‐Order Transition Temperatures and Related Properties of Polystyrene. I. Influence of Molecular Weight. _Journal of Applied Physics_ 1950, _21_ , 581–591, Publisher: American Institute of Physics
* Ueberreiter and Kanig 1952 Ueberreiter, K.; Kanig, G. Self-plasticization of polymers. _Journal of Colloid Science_ 1952, _7_ , 569–583
* Cowie 1975 Cowie, J. M. G. Some general features of Tg-M relations for oligomers and amorphous polymers. _European Polymer Journal_ 1975, _11_ , 297–300
* Mirigian and Schweizer 2015 Mirigian, S.; Schweizer, K. S. Dynamical Theory of Segmental Relaxation and Emergent Elasticity in Supercooled Polymer Melts. _Macromolecules_ 2015, _48_ , 1901–1913
* Baker et al. 2022 Baker, D. L.; Reynolds, M.; Masurel, R.; Olmsted, P. D.; Mattsson, J. Cooperative Intramolecular Dynamics Control the Chain-Length-Dependent Glass Transition in Polymers. _Physical Review X_ 2022, _12_ , 021047
* Zaccone and Terentjev 2013-04-26 Zaccone, A.; Terentjev, E. M. Disorder-Assisted Melting and the Glass Transition in Amorphous Solids. _Physical Review Letters_ 2013-04-26, _110_ , 178002, Publisher: American Physical Society
* Miwa et al. 2003 Miwa, Y.; Tanase, T.; Yamamoto, K.; Sakaguchi, M.; Sakai, M.; Shimada, S. Influence of Chain End and Molecular Weight on Molecular Motion of Polystyrene, Revealed by the ESR Selective Spin-Label Method. _Macromolecules_ 2003, _36_ , 3235–3239, Publisher: American Chemical Society
* Kremer and Grest 1990 Kremer, K.; Grest, G. S. Dynamics of entangled linear polymer melts: A molecular-dynamics simulation. _The Journal of Chemical Physics_ 1990, _92_ , 5057–5086
* Bulacu and van der Giessen 2007 Bulacu, M.; van der Giessen, E. Molecular-dynamics simulation study of the glass transition in amorphous polymers with controlled chain stiffness. _Physical Review E_ 2007, _76_ , 011807
* Jorgensen et al. 1996 Jorgensen, W. L.; Maxwell, D. S.; Tirado-Rives, J. Development and Testing of the OPLS All-Atom Force Field on Conformational Energetics and Properties of Organic Liquids. _Journal of the American Chemical Society_ 1996, _118_ , 11225–11236
* Hung et al. 2019 Hung, J.-H.; Patra, T. K.; Meenakshisundaram, V.; Mangalara, J. H.; Simmons, D. S. Universal localization transition accompanying glass formation: insights from efficient molecular dynamics simulations of diverse supercooled liquids. _Soft Matter_ 2019, _15_ , 1223–1242
* Hung et al. 2020 Hung, J.-H.; Patra, T. K.; Simmons, D. S. Forecasting the experimental glass transition from short time relaxation data. _Journal of Non-Crystalline Solids_ 2020, _544_ , 120205
* Sokolov et al. 2007 Sokolov, A. P.; Novikov, V. N.; Ding, Y. Why many polymers are so fragile. _Journal of Physics: Condensed Matter_ 2007, _19_ , 205116
* Zhou et al. 2022 Zhou, Y.; Mei, B.; Schweizer, K. S. Activated relaxation in supercooled monodisperse atomic and polymeric WCA fluids: Simulation and ECNLE theory. _The Journal of Chemical Physics_ 2022, _156_ , 114901, Publisher: American Institute of Physics
* Thompson et al. 2022 Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in ’t Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. _Computer Physics Communications_ 2022, _271_ , 108171
* 26 Hanakata, P. Z.; Douglas, J. F.; Starr, F. W. Local variation of fragility and glass transition temperature of ultra-thin supported polymer films. _137_ , 244901
* 27 Lang, R. J.; Merling, W. L.; Simmons, D. S. Combined Dependence of Nanoconfined Tg on Interfacial Energy and Softness of Confinement. _3_ , 758–762
* 28 Hung, J.-H.; Mangalara, J. H.; Simmons, D. S. Heterogeneous Rouse Model Predicts Polymer Chain Translational Normal Mode Decoupling. _51_ , 2887–2898
* Mauro et al. 2009 Mauro, J. C.; Yue, Y.; Ellison, A. J.; Gupta, P. K.; Allan, D. C. Viscosity of glass-forming liquids. _Proceedings of the National Academy of Sciences_ 2009, _106_ , 19780–19784
* 30 Marvin, M. D.; Lang, R. J.; Simmons, D. S. Nanoconfinement effects on the fragility of glass formation of a model freestanding polymer film. _10_ , 3166–3170
* 31 Hsu, D. D.; Xia, W.; Song, J.; Keten, S. Glass-Transition and Side-Chain Dynamics in Thin Films: Explaining Dissimilar Free Surface Effects for Polystyrene vs Poly(methyl methacrylate). _5_ , 481–486, Publisher: American Chemical Society
* 32 Cheng, Y.; Yang, J.; Hung, J.-H.; Patra, T. K.; Simmons, D. S. Design Rules for Highly Conductive Polymeric Ionic Liquids from Molecular Dynamics Simulations. _51_ , 6630–6644
* 33 Scheidler, P.; Kob, W.; Binder, K. Cooperative motion and growing length scales in supercooled confined liquids. _59_ , 701, Publisher: IOP Publishing
* 34 Scheidler, P.; Kob, W.; Binder, K. The relaxation dynamics of a confined glassy simple liquid. _12_ , 5–9
* 35 Scheidler, P.; Kob, W.; Binder, K. The Relaxation Dynamics of a Supercooled Liquid Confined by Rough Walls. _108_ , 6673–6686, Publisher: American Chemical Society
* 36 Baschnagel, J.; Varnik, F. Computer simulations of supercooled polymer melts in the bulk and in confined geometry. _17_ , R851
* 37 Peter, S.; Meyer, H.; Baschnagel, J. Thickness-dependent reduction of the glass-transition temperature in thin polymer films with a free surface. _44_ , 2951–2967, _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/polb.20924
* 38 Kob, W.; Roldán-Vargas, S.; Berthier, L. Non-monotonic temperature evolution of dynamic correlations in glass-forming liquids. _8_ , 164–167, Number: 2 Publisher: Nature Publishing Group
* 39 Hocky, G. M.; Berthier, L.; Kob, W.; Reichman, D. R. Crossovers in the dynamics of supercooled liquids probed by an amorphous wall. _89_ , 052311, Publisher: American Physical Society
* 40 Kob, W.; Coslovich, D. Nonlinear dynamic response of glass-forming liquids to random pinning. _90_ , 052305, Publisher: American Physical Society
* 41 Hao, Z.; Ghanekarade, A.; Zhu, N.; Randazzo, K.; Kawaguchi, D.; Tanaka, K.; Wang, X.; Simmons, D. S.; Priestley, R. D.; Zuo, B. Mobility gradients yield rubbery surfaces on top of polymer glasses. _596_ , 372–376, Number: 7872 Publisher: Nature Publishing Group
* Christie et al. 2018 Christie, D.; Register, R. A.; Priestley, R. D. Direct Measurement of the Local Glass Transition in Self-Assembled Copolymers with Nanometer Resolution. _ACS Central Science_ 2018, _4_ , 504–511
* McKenzie-Smith et al. 2022 McKenzie-Smith, T.; Douglas, J. F.; Starr, F. W. Relating dynamic free volume to cooperative relaxation in a glass-forming polymer composite. _The Journal of Chemical Physics_ 2022, _157_ , 131101, Publisher: American Institute of Physics
* McKenzie-Smith et al. 2021 McKenzie-Smith, T. Q.; Douglas, J. F.; Starr, F. W. Explaining the Sensitivity of Polymer Segmental Relaxation to Additive Size Based on the Localization Model. _Physical Review Letters_ 2021, _127_ , 277802, Publisher: American Physical Society
* Puosi et al. 2019 Puosi, F.; Tripodo, A.; Leporini, D. Fast Vibrational Modes and Slow Heterogeneous Dynamics in Polymers and Viscous Liquids. _International Journal of Molecular Sciences_ 2019, _20_ , 5708, Number: 22 Publisher: Multidisciplinary Digital Publishing Institute
* Pazmiño Betancourt et al. 2015 Pazmiño Betancourt, B. A.; Hanakata, P. Z.; Starr, F. W.; Douglas, J. F. Quantitative relations between cooperative motion, emergent elasticity, and free volume in model glass-forming polymer materials. _Proceedings of the National Academy of Sciences_ 2015, _112_ , 2966–2971, Publisher: Proceedings of the National Academy of Sciences
* Ottochian and Leporini 2011 Ottochian, A.; Leporini, D. Universal scaling between structural relaxation and caged dynamics in glass-forming systems: Free volume and time scales. _Journal of Non-Crystalline Solids_ 2011, _357_ , 298–301
* Ngai 2004 Ngai, K. L. Why the fast relaxation in the picosecond to nanosecond time range can sense the glass transition. _Philosophical Magazine_ 2004, _84_ , 1341–1353, Publisher: Taylor & Francis _eprint: https://doi.org/10.1080/14786430310001644080
* Starr et al. 2002 Starr, F. W.; Sastry, S.; Douglas, J. F.; Glotzer, S. C. What Do We Learn from the Local Geometry of Glass-Forming Liquids? _Physical Review Letters_ 2002, _89_ , 125501
* Ngai et al. 2001 Ngai, K. L.; Bao, L.-R.; Yee, A. F.; Soles, C. L. Correlation of Positron Annihilation and Other Dynamic Properties in Small Molecule Glass-Forming Substances. _Physical Review Letters_ 2001, _87_ , 215901, Publisher: American Physical Society
* van Zanten and Rufener 2000 van Zanten, J. H.; Rufener, K. P. Brownian motion in a single relaxation time Maxwell fluid. _Physical Review E_ 2000, _62_ , 5389–5396
* Yang and Schweizer 2011 Yang, J.; Schweizer, K. S. Glassy dynamics and mechanical response in dense fluids of soft repulsive spheres. II. Shear modulus, relaxation-elasticity connections, and rheology. _Journal of Chemical Physics_ 2011, _134_ , 204909
* Dyre 2006 Dyre, J. C. Colloquium: The glass transition and elastic models of glass-forming liquids. _Reviews of Modern Physics_ 2006, _78_ , 953–972
* Mirigian and Schweizer 2014 Mirigian, S.; Schweizer, K. S. Elastically cooperative activated barrier hopping theory of relaxation in viscous fluids. I. General formulation and application to hard sphere fluids. _Journal of Chemical Physics_ 2014, _140_ , 194506, WOS:000336832700028
* Mirigian and Schweizer 2014 Mirigian, S.; Schweizer, K. S. Elastically cooperative activated barrier hopping theory of relaxation in viscous fluids. II. Thermal liquids. _The Journal of Chemical Physics_ 2014, _140_ , 194507, Publisher: American Institute of Physics
* Hall and Wolynes 1987 Hall, R. W.; Wolynes, P. G. The aperiodic crystal picture and free-energy barriers in glasses. _Journal of Chemical Physics_ 1987, _86_ , 2943–2948
* Buchenau and Zorn 1992 Buchenau, U.; Zorn, R. A RELATION BETWEEN FAST AND SLOW MOTIONS IN GLASSY AND LIQUID SELENIUM. _Europhysics Letters_ 1992, _18_ , 523–528
* Xia et al. 2015 Xia, W.; Hsu, D. D.; Keten, S. Molecular Weight Effects on the Glass Transition and Confinement Behavior of Polymer Thin Films. _Macromolecular Rapid Communications_ 2015, _36_ , 1422–1427, WOS:000359791700006
* Mangalara and Simmons 2015 Mangalara, J. H.; Simmons, D. S. Tuning Polymer Glass Formation Behavior and Mechanical Properties with Oligomeric Diluents of Varying Stiffness. _ACS Macro Letters_ 2015, _4_ , 1134–1138
* Ghanekarade et al. 2023 Ghanekarade, A.; Phan, A. D.; Schweizer, K. S.; Simmons, D. S. Signature of collective elastic glass physics in surface-induced long-range tails in dynamical gradients. _Nature Physics_ 2023, 1–7, Publisher: Nature Publishing Group
* Doolittle 1951 Doolittle, A. K. Studies in Newtonian Flow. II. The Dependence of the Viscosity of Liquids on Free‐Space. _Journal of Applied Physics_ 1951, _22_ , 1471–1475
* White and Lipson 2017 White, R. P.; Lipson, J. E. G. Explaining the T,V-dependent dynamics of glass forming liquids: The cooperative free volume model tested against new simulation results. _The Journal of Chemical Physics_ 2017, _147_ , 184503
* Adam and Gibbs 1965 Adam, G.; Gibbs, J. H. On the Temperature Dependence of Cooperative Relaxation Properties in Glass‐Forming Liquids. _The Journal of Chemical Physics_ 1965, _43_ , 139–146
* Schmidtke et al. 2015 Schmidtke, B.; Hofmann, M.; Lichtinger, A.; Rössler, E. A. Temperature Dependence of the Segmental Relaxation Time of Polymers Revisited. _Macromolecules_ 2015, _48_ , 3005–3013
* 65 Colmenero, J. Are polymers standard glass-forming systems? The role of intramolecular barriers on the glass-transition phenomena of glass-forming polymers. _27_ , 103101, Publisher: IOP Publishing
|
# Gating ferromagnetic resonance by superconductors via modulated reflection
of magnetization-radiated electric fields
Xi-Han Zhou and Tao Yu
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
###### Abstract
We predict that ferromagnetic resonance in insulating magnetic film with
inplane magnetization radiates electric fields polarized along the
magnetization with opposite amplitudes at two sides of the magnetic insulator,
which can be modulated strongly by adjacent superconductors. With a single
superconductor adjacent to the magnetic insulator this radiated electric field
is totally reflected with a $\pi$-phase shift, which thereby vanishes at the
superconductor side and causes no influence on the ferromagnetic resonance.
When the magnetic insulator is sandwiched by two superconductors, this
reflection becomes back and forth, so the electric field exists at both
superconductors that drives the Meissner supercurrent, which in turn shifts
efficiently the ferromagnetic resonance. We predict an ultrastrong coupling
between magnons in the yttrium iron garnet and Cooper pairs in NbN with the
frequency shift achieving tens of percent of the bare ferromagnetic resonance.
## I Introduction
“Magnonics” exploits magnetic excitations, i.e., spin waves or their quanta,
magnons, as potential information carriers for spin transport in insulators
with low-energy consumption [1, 2, 3, 4, 5, 6, 7]. Interaction between magnons
and Cooper pairs in heterostructures composed of magnets and superconductors
may modulate the transport of spin information [8, 11, 17, 13, 15, 16, 9, 10,
14, 12], strongly enhance the magnon-photon interaction [18, 22, 19, 20, 21,
23], and lead to the emergence of triplet Cooper pairing [26, 24, 25, 28, 27],
which has the potential to bring unprecedented functionalities in spintronics [26,
24, 25], quantum information [29, 30, 31, 32, 33], and topological quantum
computation [34]. In this heterostructure, the hybridized quantum states and
distribution of macroscopic electromagnetic fields govern its properties. For
example, the “ultrastrong coupling” [35] with the coupling strength close to
the ferromagnetic resonance (FMR) frequency unveils the importance of the
dipolar interaction in the superconductor(S)$|$metallic
ferromagnet(F)$|$superconductor(S) heterostructure [19, 20], where the photon
mode with a large mode density is localized in the nano-scale between two
superconductors [36].
The importance of the dipolar interaction also manifests in the superconductor
gating effect on magnons [37, 38, 39, 40, 12, 11, 17], in which the frequency
of magnons with finite wave number [41, 42, 43, 44] can be shifted up to tens
of GHz, as recently predicted [11, 12] and observed [17] in the
superconductor(S)$|$ferromagnet(F) insulator heterostructure. The stray
electric field of magnons drives the supercurrent in the adjacent
superconductor which in turn generates the Oersted magnetic field that affects
the low-frequency magnetization dynamics. This gating effect favors the spin
diode [7, 45] and magnon trap [46, 47, 48] in proper gating configurations.
The FMR frequency in this S$|$F bilayer is not affected, however.
Figure 1: Snapshots of magnetization-radiated electric fields in different
heterostructure configurations. The electric field changes linearly when
across the thickness of the ferromagnetic insulating film. (a) The electric-
field amplitude is opposite at two sides of the thin magnetic insulator. (b)
When fabricating a superconductor thin film on a ferromagnetic insulator, the
electric field is suppressed to vanish at the superconductor side but enhanced
at the other side of the magnet. (c) and (d) When the magnet is sandwiched by
two superconductors, the electric field exists but differs at both sides in
both symmetric and asymmetric configurations.
On the other hand, the FMR of a metallic ferromagnet sandwiched between two
superconductors was shifted by up to 50 mT in the resonant field when the
thickness of the two superconductor layers exceeds London’s penetration
depth, as observed in several recent experiments [49, 50, 51]. Above the
superconducting transition temperature, the FMR frequency recovers the
Kittel mode [52], which may be exploited to realize a magnetic logic gate
through the phase transition of the superconductor. This phenomenon may be
related to the frequency splitting induced by the spin-triplet superconducting
state [49], Meissner screening [51], or giant demagnetization effects [53,
13]. Such a modulation of the FMR in ferromagnetic insulators, however, appears
to be absent and has not been reported yet [49, 51, 53, 13]. The experiment [49]
showed that inserting a thin insulator layer in
the heterostructures composed of a metallic ferromagnet sandwiched by two
superconductors completely suppresses the shift of the FMR. This raises the issue
of whether the FMR in magnetic insulators can be gated by adjacent
superconductors in proper configurations.
In this work, we study this issue by going beyond the quasi-static
approximation for magnetostatic modes [54] and demonstrate that although the
stray magnetic field of Kittel magnon with uniform magnetization precession is
vanishingly small outside of the in-plane magnetized ferromagnetic insulating
film, the radiated electric field is significant with opposite amplitudes at
two sides of the magnetic film and polarization parallel to the magnetization
direction. This distribution of the radiated electric field is sensitive to
the adjacent superconductors due to the total reflection, as illustrated in
Fig. 1 for snapshots of the distribution of electric fields in different
heterostructure configurations. The electric field is opposite at two sides of
a single thin ferromagnetic insulator [Fig. 1(a)]; counter-intuitively, in the
S$|$F bilayer this electric field is suppressed to vanish at the
superconductor side [Fig. 1(b)]; nevertheless, when sandwiched between two
superconductors, the electric field is neither suppressed to vanish nor screened
completely, as plotted in Figs. 1(c) and (d) for symmetric and asymmetric
configurations. These features are well understood by our mechanism of
modulated reflection of magnetization-induced electric fields by
superconductors, which predicts the absence of FMR shift in ferromagnetic
insulator$|$superconductor heterostructure and the ultrastrong modulation of
FMR, shifted up to tens of percent of the bare frequency when the
ferromagnetic insulator is sandwiched by two thin superconductors.
This paper is organized as follows. We address the model and general formalism
in Sec. II. In Secs. III, IV, and V, we analyze the distribution of the
electric fields from FMR of a single ferromagnetic insulator, S$|$F bilayer,
and S$|$F$|$S heterostructure, respectively, and address the ultrastrong
interaction between the FMR and supercurrent. We conclude and discuss in Sec.
VI.
## II Model and general formalism
We consider a heterostructure composed of a ferromagnetic insulating film of
thickness $2d_{F}\sim O(50~{}{\rm nm})$ with in-plane magnetization sandwiched
by two thin superconductor layers with thickness $d_{S}\lesssim\lambda$ and
$d_{S}^{\prime}\lesssim\lambda$, respectively, as illustrated in Fig. 2. Here
$\lambda\sim O(100~{}{\rm nm})$ is London’s penetration depth of conventional
superconductors. In the ferromagnetic insulators, the dynamics of
magnetization ${\bf M}=M_{x}\hat{\bf x}+M_{y}\hat{\bf y}+M_{0}\hat{\bf z}$,
where $M_{0}$ is the saturated magnetization, is phenomenologically governed
by the Landau-Lifshitz equation [55]
$\displaystyle\partial{\bf M}/\partial t=-\mu_{0}\gamma{\bf M}\times{\bf
H}_{\text{eff}},$ (1)
where $\mu_{0}$ is the vacuum permeability and $-\gamma$ is the electron
gyromagnetic ratio. The magnetization precesses around the effective magnetic
field ${\bf H}_{\text{eff}}={\bf H}_{\text{app}}+{\bf H}_{d}+{\bf H}_{s}$ that
contains the external static field ${\bf H}_{\text{app}}=H_{0}\hat{\bf z}$,
the dipolar magnetic field ${\bf H}_{d}$ generated by the magnetic charge
$-\nabla\cdot{\bf M}$, and the Oersted magnetic field ${\bf H}_{s}$ from the
superconductor that needs a self-consistent treatment with Maxwell’s equation
[11]. At low frequencies, it is sufficient to express the stray magnetic field
as [55, 54]
$\displaystyle{\bf H}_{d,\beta}({\bf
M})=\dfrac{1}{4\pi}\partial_{\beta}\partial_{\alpha}\int d{\bf
r^{\prime}}\dfrac{M_{\alpha}(\bf r^{\prime})}{|\bf r-r^{\prime}|}.$ (2)
The exchange interaction plays no role in the FMR since the gradient of ${\bf
M}$ vanishes for the uniform precession.
Figure 2: S(1)$|$F$|$S(2) heterostructure. The thickness of superconductors
above and beneath the thin ferromagnetic insulator of thickness $2d_{F}$ is
$d_{S}$ and $d_{S}^{\prime}$, respectively. The supercurrents ${\bf J}_{s}$ and
${\bf J}_{s}^{\prime}$ driven by the FMR flow oppositely along the
magnetization direction.
On the other hand, the oscillating magnetic induction ${\bf B}=\mu_{0}({\bf
M}+{\bf H})$ of frequency $\omega$ governs the radiation of electric fields
inside and outside the ferromagnetic insulator according to [56]
$\displaystyle\nabla\times{\bf E}=i\omega{\bf B},$
$\displaystyle\nabla\times{\bf H}={\bf J}_{s}-i\omega\varepsilon_{0}{\bf E},$
(3)
where $\varepsilon_{0}$ is the vacuum permittivity. When coupled with
superconductors, this electric field drives the supercurrent ${\bf J}_{s}$ via
London’s equation [57]
$\displaystyle\dfrac{\partial{\bf J}_{s}}{\partial
t}=\dfrac{1}{\mu_{0}\lambda^{2}}{\bf E},$ $\displaystyle\nabla\times{\bf
J}_{s}=-\dfrac{1}{\mu_{0}\lambda^{2}}{\bf B}.$ (4)
Here London’s penetration depth at different temperatures $T<T_{c}$ follows
the relation [57]
$\displaystyle\lambda(T)=\sqrt{\frac{m_{e}}{\mu_{0}n_{e}e^{2}}}\left(1-\left(\frac{T}{T_{c}}\right)^{4}\right)^{-1/2},$
(5)
where $m_{e}$ is the electron mass and $n_{e}$ is the electron density.
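As an illustration, Eq. (5) can be evaluated with a minimal Python sketch; the electron constants are standard values and the NbN parameters ($n_{e}=1.65\times 10^{28}~{\rm m}^{-3}$, $T_{c}=6.5$ K) are those quoted in Sec. V, giving a zero-temperature penetration depth of roughly 42 nm.

```python
import numpy as np

MU_0 = 4e-7 * np.pi        # vacuum permeability (T m / A)
M_E = 9.109e-31            # electron mass (kg)
E_CHARGE = 1.602e-19       # elementary charge (C)

def london_depth(T, Tc, n_e):
    """Temperature-dependent London penetration depth, Eq. (5)."""
    lam0 = np.sqrt(M_E / (MU_0 * n_e * E_CHARGE**2))
    return lam0 / np.sqrt(1.0 - (T / Tc)**4)

# NbN parameters quoted in Sec. V: n_e = 1.65e28 m^-3, Tc = 6.5 K
n_e, Tc = 1.65e28, 6.5
print(london_depth(0.0, Tc, n_e))       # ≈ 4.2e-8 m, i.e. λ(0) ≈ 42 nm
print(london_depth(0.5 * Tc, Tc, n_e))  # ≈ 4.3e-8 m, i.e. λ(0.5 Tc) ≈ 43 nm
```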
At low frequencies, we may apply the quasi-static approximation
$\nabla\times{\bf B}=\mu_{0}{\bf J}_{s}$ in superconductors. Taking the curl
of Eq. (3) and substituting Eq. (4) into it, the electric field inside the
superconductor obeys
$\displaystyle\nabla^{2}{\bf E}-{\bf E}/\lambda^{2}=0.$ (6)
On the other hand, taking the curl of $\nabla\times{\bf B}=\mu_{0}{\bf J}_{s}$
and combining with Eq. (4), the magnetic induction inside the superconductor
obeys $\nabla^{2}{\bf B}-{\bf B}/\lambda^{2}=0$.
The driven supercurrent then affects the magnetization dynamics. From Eq. (4),
the electric field drives supercurrent inside the superconductor, which then
generates the vector potential. With the uniform magnetization precession, the
system is translationally invariant in the $y$-$z$ plane, so the supercurrent
depends only on $x$, as does the vector potential [56]
$\displaystyle{\bf A}(x)=\dfrac{\mu_{0}}{4\pi}\int d{\bf
r^{\prime}}\dfrac{{{\bf J}_{s}}(x^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}.$ (7)
Accordingly, the Oersted magnetic field ${\bf
H}_{s}=(1/\mu_{0})\nabla\times{\bf A}$ only contains the $y$-component
${H}_{y}=-\partial_{x}A_{z}(x)/\mu_{0}$, which drives the magnetization.
The boundary conditions describe the fields at the interfaces [56]. For the
magnetic induction and field, ${\bf B}_{\perp}$ and ${\bf H}_{\parallel}$ are
continuous at the boundaries. Since there is no surface current or charge
accumulation, the electric field $\bf E$ is continuous at interfaces.
## III Single thin ferromagnetic insulator
We start with a single insulating ferromagnetic film to address the
significant radiated electric fields from the uniform magnetization
precession. For a single ferromagnetic insulator of thickness $2d_{F}$ biased
by a static magnetic field ${\bf H}_{\rm app}=H_{0}\hat{\bf z}$, the
magnetization $\bf M$ for the FMR is uniform inside the ferromagnetic layer,
with the constant demagnetization factor $N_{xx}=-1$. Since the magnetic film is
sufficiently thin, we stick to the uniform precession throughout this work.
The opposite magnetic charges at the two surfaces of the film generate
opposite magnetic fields outside, which results in a vanishing stray magnetic
field ${\bf H}_{d}=0$ outside the ferromagnetic layer, as also calculated from
Eq. (2); inside the ferromagnet, ${\bf H}_{d}=\\{-M_{x},0,0\\}$ and ${\bf
B}=\\{0,\mu_{0}M_{y},\mu_{0}(H_{0}+M_{0})\\}$, in which only the $y$-component
of $\bf B$ oscillates with frequency $\omega$ that can radiate the electric
field.
### III.1 Full solution
Here we go beyond the quasi-static approximation and solve the radiated
electric field. According to Eq. (3), the oscillating electromagnetic field is
the source for radiating microwaves in space. Taking the curl of the first
equation in Eq. (3), the electric field obeys
$\displaystyle\nabla^{2}{\bf E}+\varepsilon_{0}\mu_{0}\omega^{2}{\bf
E}=-i\omega\mu_{0}\nabla\times{\bf M},$ (8)
which has the solution
$\displaystyle{\bf E}({\bf
r})=\dfrac{i\mu_{0}\omega}{4\pi}\int\dfrac{[\nabla^{\prime}\times{\bf M}({\bf
r^{\prime}})]e^{ik|{\bf r-r^{\prime}}|}}{|{\bf r-r^{\prime}}|}d{\bf
r^{\prime}},$ (9)
where $k=\omega\sqrt{\mu_{0}\varepsilon_{0}}$ is the wave number of
microwaves. Since only the $x$ and $y$ components of ${\bf M}$ oscillate with
frequency $\omega$ and $\bf M$ is uniform inside the ferromagnetic layer,
$(\nabla\times{\bf M})_{x,y}=0$ in all space, leading to $E_{x}=E_{y}=0$ and
$\displaystyle
E_{z}(x)=\dfrac{i\mu_{0}\omega}{4\pi}\int\dfrac{[\partial_{x^{\prime}}M_{y}({\bf
r^{\prime}})]e^{ik|{\bf r-r^{\prime}}|}}{|{\bf r-r^{\prime}}|}d{\bf
r^{\prime}}.$ (10)
Using the Weyl identity [7]
$\displaystyle\frac{e^{ik|{\bf r}-{\bf r^{\prime}}|}}{|{\bf r}-{\bf
r}^{\prime}|}=\int
dk_{z}^{\prime}dk_{y}^{\prime}\frac{ie^{ik_{z}^{\prime}(z-z^{\prime})+ik_{y}^{\prime}(y-y^{\prime})}e^{i\sqrt{k^{2}-k_{z}^{\prime
2}-k_{y}^{\prime 2}}|x-x^{\prime}|}}{2\pi\sqrt{k^{2}-k_{z}^{\prime
2}-k_{y}^{\prime 2}}},$ (11)
we obtain the electric field
$\displaystyle E_{z}=\dfrac{-\mu_{0}\omega
M_{y}}{2k}\begin{cases}e^{ik(x+d_{F})}-e^{-ik(x-d_{F})},&-d_{F}<x<d_{F}\\\
e^{ik(x+d_{F})}-e^{ik(x-d_{F})},&x>d_{F}\\\
e^{-ik(x+d_{F})}-e^{-ik(x-d_{F})},&x<-d_{F}\end{cases}.$ (12)
From Eq. (3), we find the magnetic induction $B_{x}=0$,
$B_{z}=\mu_{0}(H_{0}+M_{0})$ is static, and
$B_{y}=-\partial_{x}E_{z}/(i\omega)$ follows
$\displaystyle
B_{y}=\dfrac{\mu_{0}M_{y}}{2}\begin{cases}e^{ik(x+d_{F})}+e^{-ik(x-d_{F})},&-d_{F}<x<d_{F}\\\
e^{ik(x+d_{F})}-e^{ik(x-d_{F})},&x>d_{F}\\\
-e^{-ik(x+d_{F})}+e^{-ik(x-d_{F})},&x<-d_{F}\end{cases}.$ (13)
We are interested in the field near the ferromagnet, within a distance
$\sim\lambda$. In ferromagnetic insulators, $\omega\sim 100$ GHz [11], and
$\lambda\sim 100$ nm for conventional superconductors, so $k\lambda\sim
3\times 10^{-5}\ll 1$. When $kx\rightarrow 0$, we have
$\displaystyle E_{z}(x)=\begin{cases}-i\mu_{0}\omega M_{y}x,&-d_{F}<x<d_{F}\\\
-i\mu_{0}\omega M_{y}d_{F},&x>d_{F}\\\ i\mu_{0}\omega
M_{y}d_{F},&x<-d_{F}\end{cases},$ (14)
as plotted in Fig. 1(a) for a snapshot. The magnetic induction
$\displaystyle B_{y}(x)=\begin{cases}\mu_{0}M_{y},&-d_{F}<x<d_{F}\\\
0,&x>d_{F}\\\ 0,&x<-d_{F}\end{cases},$ (15)
recovers the result of the quasi-static approximation [54], with a vanishing
magnetic field $H_{y}$ outside of the ferromagnet.
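For concreteness, the following minimal Python sketch evaluates the full solution (12); the precession frequency, film thickness, and $M_{y}$ amplitude are illustrative values chosen only to check that the field outside the film reduces to the near-field form (14) when $kx\rightarrow 0$.

```python
import numpy as np

MU_0 = 4e-7 * np.pi
C_LIGHT = 2.998e8                     # k = omega*sqrt(mu0*eps0) = omega/c

def Ez_single_film(x, omega, My, dF):
    """Radiated E_z of a single ferromagnetic film, Eq. (12)."""
    k = omega / C_LIGHT
    pref = -MU_0 * omega * My / (2 * k)
    if -dF < x < dF:
        return pref * (np.exp(1j*k*(x + dF)) - np.exp(-1j*k*(x - dF)))
    if x >= dF:
        return pref * (np.exp(1j*k*(x + dF)) - np.exp(1j*k*(x - dF)))
    return pref * (np.exp(-1j*k*(x + dF)) - np.exp(-1j*k*(x - dF)))

# illustrative parameters: omega/2pi = 5 GHz, 2dF = 60 nm, My = 1 A/m
omega, My, dF = 2 * np.pi * 5e9, 1.0, 30e-9
print(Ez_single_film(50e-9, omega, My, dF))   # field just above the film
print(-1j * MU_0 * omega * My * dF)           # near-field limit of Eq. (14)
```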
### III.2 Quasi-static approximation
The above analysis implies that, in the near-field limit, we may apply the
quasi-static approximation that sets $\nabla\times{\bf H}=0$ in Eq. (3). For the
FMR case, $\bf E$ is translationally invariant in the $y$-$z$ plane, i.e.,
$\partial_{z}E_{x}=0$. Taking the $y$-component of the first equation in Eq.
(3), the oscillation of $B_{y}$ generates only $E_{z}$, parallel to the
magnetization:
$\displaystyle-\partial_{x}E_{z}=i\omega\mu_{0}M_{y}.$ (16)
Integrating along $x$ across the ferromagnet yields
$\displaystyle E_{z}(x)=-i\omega\mu_{0}M_{y}(x+d_{F})+E_{z}(x=-d_{F}).$ (17)
Thereby, $E_{z}$ depends linearly on $x$ inside the ferromagnet. Outside the
ferromagnet,
$\displaystyle E_{z}(x)=-2i\omega\mu_{0}M_{y}d_{F}+E_{z}(x=-d_{F})$ (18)
is uniform, which is consistent with the vanishing magnetic field
$H_{y}|_{\text{outside}}=0$ in the quasi-static approximation. By symmetry,
$E_{z}(x=0)=0$, so the electric field is exactly that of Eq. (14).
## IV S$|$F heterostructure
We consider the S$|$F heterostructure composed of a ferromagnetic film of
thickness $2d_{F}$ and a superconductor of thickness $d_{S}$, as shown in Fig.
3. We demonstrate that the adjacent superconductor strongly modulates the radiated
electric field, which explains the absence of the FMR shift in this
configuration [17, 49].
Figure 3: Radiated electric field of the F$|$S heterostructure.
### IV.1 Full solution
Inside the ferromagnet, since $\nabla\times{\bf M}=0$ for uniform ${\bf M}$,
Eq. (8) has the solution $E_{z}(x)=E_{1}e^{ikx}+E_{1}^{\prime}e^{-ikx}$.
Inside the superconductor, according to Eqs. (3) and (4), the electric field
obeys
$\displaystyle\partial_{x}^{2}E_{z}+(\varepsilon_{0}\mu_{0}\omega^{2}-1/\lambda^{2})E_{z}=0,$
(19)
which has the solution
$E_{z}(x)=E_{2}e^{ik^{\prime}x}+E_{2}^{\prime}e^{-ik^{\prime}x}$, where
$k^{\prime}=\sqrt{\varepsilon_{0}\mu_{0}\omega^{2}-1/\lambda^{2}}$. Out of the
heterostructure, the electric fields $E_{3}e^{ikx}$ and $E_{4}e^{-ikx}$ are
radiated. These radiated electric fields are illustrated in Fig. 3.
The amplitudes $\\{E_{1},E_{1}^{\prime},E_{2},E_{2}^{\prime},E_{3},E_{4}\\}$
are governed by the boundary conditions, i.e., $E_{z}$ and $H_{y}$ are
continuous at interfaces. The continuity of $E_{z}$ at the interfaces requires
$\displaystyle
E_{1}e^{ikd_{F}}+E_{1}^{\prime}e^{-ikd_{F}}=E_{2}e^{ik^{\prime}d_{F}}+E_{2}^{\prime}e^{-ik^{\prime}d_{F}},$
$\displaystyle
E_{2}e^{ik^{\prime}(d_{F}+d_{S})}+E_{2}^{\prime}e^{-ik^{\prime}(d_{F}+d_{S})}=E_{3}e^{ik(d_{F}+d_{S})},$
$\displaystyle E_{1}e^{-ikd_{F}}+E_{1}^{\prime}e^{ikd_{F}}=E_{4}e^{ikd_{F}}.$
(20)
In the superconductors, $H_{y}=-1/(i\omega\mu_{0})\partial_{x}E_{z}$, while in
the ferromagnet, $H_{y}=-1/(i\omega\mu_{0})\partial_{x}E_{z}-M_{y}$, so the
continuous $H_{y}$ at interfaces leads to
$\displaystyle{k}(E_{1}e^{ikd_{F}}-E_{1}^{\prime}e^{-ikd_{F}})+\omega\mu_{0}M_{y}$
$\displaystyle=k^{\prime}(E_{2}e^{ik^{\prime}d_{F}}-E_{2}^{\prime}e^{-ik^{\prime}d_{F}}),$
$\displaystyle{k^{\prime}}(E_{2}e^{ik^{\prime}(d_{F}+d_{S})}-E_{2}^{\prime}e^{-ik^{\prime}(d_{F}+d_{S})})={k}E_{3}e^{ik(d_{F}+d_{S})},$
$\displaystyle
k(E_{1}e^{-ikd_{F}}-E_{1}^{\prime}e^{ikd_{F}})+\omega\mu_{0}M_{y}=-kE_{4}e^{ikd_{F}}.$
(21)
Combining Eqs. (20) and (21), we obtain all the amplitudes. In the
ferromagnetic insulator,
$\displaystyle E_{z}(-d_{F}<x<d_{F})$ $\displaystyle={\cal
R}E_{0}e^{-ik(x-d_{F})}-\frac{\omega\mu_{0}M_{y}}{2k}\left(e^{ik(x+d_{F})}-e^{-ik(x-d_{F})}\right),$
(22)
where the amplitude
$\displaystyle
E_{0}=-\frac{\omega\mu_{0}M_{y}}{2k}\left(e^{2ikd_{F}}-1\right),$ (23)
and
$\displaystyle{\cal R}=\frac{e^{ik^{\prime}d_{S}}(k^{2}-k^{\prime
2})+e^{-ik^{\prime}d_{S}}(k^{\prime
2}-k^{2})}{e^{ik^{\prime}d_{S}}(k-k^{\prime})^{2}-e^{-ik^{\prime}d_{S}}(k+k^{\prime})^{2}}$
(24)
is the reflection coefficient of the electric field at the F$|$S interface.
When $d_{S}=0$, ${\cal R}=0$ and the solution (22) recovers the single-layer case
(12). On the other hand, even for a small $d_{S}\ll\lambda$, since
$|k|\ll|k^{\prime}|$ when $\omega\sim 100$ GHz, ${\cal R}\rightarrow-1$,
implying total reflection of the electric field at the F$|$S interface
even for an ultrathin conventional superconductor layer. As shown below, this
explains the absence of the FMR shift in all available experiments [17, 49].
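This behaviour of the reflection coefficient (24) is easily checked numerically; in the sketch below the microwave frequency is an illustrative GHz-range value, $\lambda=43.4$ nm is the NbN penetration depth used in Sec. V, and the output shows ${\cal R}=0$ at $d_{S}=0$ but ${\cal R}\approx-1$ already for nanometer-thick films.

```python
import numpy as np

C_LIGHT = 2.998e8

def reflection_R(omega, dS, lam):
    """Reflection coefficient of E_z at the F|S interface, Eq. (24)."""
    k = omega / C_LIGHT
    kp = np.sqrt(complex(omega**2 / C_LIGHT**2 - 1.0 / lam**2))  # k' ≈ i/λ
    num = np.exp(1j*kp*dS) * (k**2 - kp**2) + np.exp(-1j*kp*dS) * (kp**2 - k**2)
    den = np.exp(1j*kp*dS) * (k - kp)**2 - np.exp(-1j*kp*dS) * (k + kp)**2
    return num / den

lam = 43.4e-9                                  # NbN value used in Sec. V
for dS in (0.0, 1e-9, 10e-9, 30e-9):
    print(dS, reflection_R(2 * np.pi * 5e9, dS, lam))   # 0, then ≈ -1
```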
Inside the superconductor,
$\displaystyle
E_{z}(d_{F}<x<d_{F}+d_{S})=\frac{2kE_{0}}{e^{ik^{\prime}d_{S}}(k-k^{\prime})^{2}-e^{-ik^{\prime}d_{S}}(k+k^{\prime})^{2}}$
$\displaystyle\times\left((k-k^{\prime})e^{-ik^{\prime}(x-d_{F}+d_{S})}-(k+k^{\prime})e^{ik^{\prime}(x-d_{F}-d_{S})}\right),$
(25)
which is indeed very weak since $|k|\ll|k^{\prime}|$. Out of the
heterostructure,
$\displaystyle
E_{z}(x>d_{F}+d_{S})=\frac{-4kk^{\prime}E_{0}e^{ik(x-d_{F}-d_{S})}}{e^{ik^{\prime}d_{S}}(k-k^{\prime})^{2}-e^{-ik^{\prime}d_{S}}(k+k^{\prime})^{2}},$
$\displaystyle E_{z}(x<-d_{F})={\cal R}E_{0}e^{-ik(x-d_{F})}$
$\displaystyle-\frac{\omega\mu_{0}M_{y}}{2k}(e^{-ik(x+d_{F})}-e^{-ik(x-d_{F})}).$
(26)
With low frequencies and near the heterostructure, $kx\rightarrow 0$,
$kd_{F}\rightarrow 0$, and $kd_{S}\rightarrow 0$, so the electric fields
$\displaystyle E_{z}(x)=\begin{cases}0,&x>d_{F}\\\
-i\omega\mu_{0}M_{y}(x-d_{F}),&-d_{F}<x<d_{F}\\\
2i\omega\mu_{0}M_{y}d_{F},&x<-d_{F}\end{cases},$ (27)
which is illustrated in Fig. 1(b) for a snapshot. The electric field vanishes
in the superconductor due to the total reflection with a $\pi$-phase shift
(${\cal R}=-1$); it therefore generates no supercurrent and leads to no
modulation of the FMR.
### IV.2 Quasi-static approximation
The full solution clearly shows the absence of electric fields at the
superconductor side of the S$|$F heterostructure, which can be well understood
within the quasi-static approximation $\nabla\times{\bf H}={\bf J}_{s}$ (with
${\bf J}_{s}=0$ outside the superconductor). Assuming
$E_{z}(x=d_{F})=\tilde{E}_{0}$ at the F$|$S interface,
according to Eq. (6) the electric field in the adjacent superconductor
$\displaystyle
E_{z}(x)=\tilde{E}_{0}\dfrac{\cosh{((x-d_{S}-d_{F})/\lambda)}}{\cosh{(d_{S}/\lambda)}}$
(28)
drives the supercurrents. For a thin superconducting film of thickness
$O(\lambda)$, we are allowed to take an average of the supercurrent
$J_{s,z}=[J_{s,z}(x=d_{F})+J_{s,z}(x=d_{F}+d_{S})]/2$, and from the first
equation of Eq. (4)
$\displaystyle
J_{s,z}=\dfrac{i}{\mu_{0}\omega\lambda^{2}}\tilde{E}_{0}\dfrac{1+\cosh(d_{S}/\lambda)}{2\cosh(d_{S}/\lambda)}.$
(29)
The supercurrents generate the vector potential (7) and the Oersted magnetic
field according to ${H}_{y}=-\partial_{x}A_{z}/\mu_{0}$. Taking $k=0$ at low
frequencies in the Weyl identity (11), i.e., [7]
$\displaystyle\dfrac{1}{|{\bf r}-{\bf r}^{\prime}|}=\int
dk_{y}^{\prime}dk_{z}^{\prime}\dfrac{e^{ik_{y}^{\prime}(y-y^{\prime})+ik_{z}^{\prime}(z-z^{\prime})}e^{-\sqrt{k_{y}^{\prime
2}+k_{z}^{\prime 2}}|x-x^{\prime}|}}{2\pi\sqrt{k_{y}^{\prime 2}+k_{z}^{\prime
2}}},$ (30)
we obtain the Oersted magnetic field generated by the supercurrents
$\displaystyle
H_{s,y}(x)=\left\\{\begin{array}[]{cc}d_{S}{{J}}_{s,z}/2,&x>d_{F}+d_{S}\\\
-d_{S}{{J}}_{s,z}/2,&~{}x<d_{F}\end{array}\right..$ (33)
However, within the quasi-static approximation the constant, $x$-independent
$H_{s,y}$ must vanish outside the heterostructure, since a constant
magnetic field would render the radiated electric field divergent; this requires
$J_{s,z}=0$ when $d_{S}\neq 0$ and hence $E_{z}(x>d_{F})=0$. Since the electric
field is continuous at interfaces, $E_{z}(x=d_{F})=\tilde{E}_{0}=0$ and
according to Eq. (16) $E_{z}(x=-d_{F})=2id_{F}\omega\mu_{0}M_{y}$. These
simple calculations thereby capture precisely the key physics of the full
solution (27).
## V S$|$F$|$S heterostructure
Further, we consider the S$|$F$|$S heterostructure as illustrated in Fig. 2
composed of the ferromagnetic insulator of thickness $2d_{F}$ and two adjacent
superconductor films of thickness $d_{S}$ and $d_{S}^{\prime}$, respectively.
In comparison to the S$|$F bilayer, the distribution of the electric
field in the S$|$F$|$S heterostructure changes considerably due to its back-and-forth
reflection by the superconductors, as addressed in this section.
### V.1 Full solution
Similar to the S$|$F heterostructure, inside the ferromagnet,
$E_{z}(x)=E_{1}e^{ikx}+E_{1}^{\prime}e^{-ikx}$; in the superconductor “1”,
$E_{z}(x)=E_{2}e^{ik^{\prime}x}+E_{2}^{\prime}e^{-ik^{\prime}x}$; and in the
superconductor “2”,
$E_{z}(x)=E_{3}e^{ik^{\prime}x}+E_{3}^{\prime}e^{-ik^{\prime}x}$. Out of the
heterostructure, the electric fields $E_{4}e^{ikx}$ and $E_{5}e^{-ikx}$ are
radiated. These electric fields are illustrated in Fig. 4.
Figure 4: Radiated electric field of the S$|$F$|$S heterostructure.
The amplitudes
$\\{E_{1},E_{1}^{\prime},E_{2},E_{2}^{\prime},E_{3},E_{3}^{\prime},E_{4},E_{5}\\}$
are governed by the boundary conditions. The continuity of $E_{z}$ at the
interfaces requires
$\displaystyle
E_{1}e^{ikd_{F}}+E_{1}^{\prime}e^{-ikd_{F}}=E_{2}e^{ik^{\prime}d_{F}}+E_{2}^{\prime}e^{-ik^{\prime}d_{F}},$
$\displaystyle
E_{1}e^{-ikd_{F}}+E_{1}^{\prime}e^{ikd_{F}}=E_{3}e^{-ik^{\prime}d_{F}}+E_{3}^{\prime}e^{ik^{\prime}d_{F}},$
$\displaystyle
E_{2}e^{ik^{\prime}(d_{F}+d_{S})}+E_{2}^{\prime}e^{-ik^{\prime}(d_{F}+d_{S})}=E_{4}e^{ik(d_{F}+d_{S})},$
$\displaystyle
E_{3}e^{-ik^{\prime}(d_{F}+d_{S}^{\prime})}+E_{3}^{\prime}e^{ik^{\prime}(d_{F}+d_{S}^{\prime})}=E_{5}e^{ik(d_{F}+d_{S}^{\prime})},$
(34)
and the continuous $H_{y}$ at interfaces leads to
$\displaystyle
k(E_{1}e^{ikd_{F}}-E_{1}^{\prime}e^{-ikd_{F}})+\omega\mu_{0}M_{y}=k^{\prime}(E_{2}e^{ik^{\prime}d_{F}}-E_{2}^{\prime}e^{-ik^{\prime}d_{F}}),$
$\displaystyle
k(E_{1}e^{-ikd_{F}}-E_{1}^{\prime}e^{ikd_{F}})+\omega\mu_{0}M_{y}=k^{\prime}(E_{3}e^{-ik^{\prime}d_{F}}-E_{3}^{\prime}e^{ik^{\prime}d_{F}}),$
$\displaystyle
k^{\prime}(E_{2}e^{ik^{\prime}(d_{F}+d_{S})}-E_{2}^{\prime}e^{-ik^{\prime}(d_{F}+d_{S})})=kE_{4}e^{ik(d_{F}+d_{S})},$
$\displaystyle
k^{\prime}(E_{3}e^{-ik^{\prime}(d_{F}+d_{S}^{\prime})}-E_{3}^{\prime}e^{ik^{\prime}(d_{F}+d_{S}^{\prime})})=-kE_{5}e^{ik(d_{F}+d_{S}^{\prime})}.$
(35)
Combining Eqs. (34) and (35), we obtain the electric-field distribution,
referring to Appendix A for the general solution. In particular, when
$d_{S}=d_{S}^{\prime}$, in the ferromagnetic film,
$\displaystyle E_{z}(-d_{F}<x<d_{F})$
$\displaystyle=\frac{-\omega\mu_{0}M_{y}\sinh{(ikx)}}{k\cosh{(ikd_{F})}-k^{\prime}f(u)\sinh{(ikd_{F})}},$
(36)
where $u=-[(k+k^{\prime})/(k-k^{\prime})]\exp(-2ik^{\prime}d_{S})$ and
$\displaystyle
f(u)=\frac{u-1}{u+1}=\frac{k^{\prime}\sinh{(ik^{\prime}d_{S})}-k\cosh{(ik^{\prime}d_{S})}}{k\sinh{(ik^{\prime}d_{S})}-k^{\prime}\cosh{(ik^{\prime}d_{S})}}.$
(37)
In the superconductor “1”,
$\displaystyle E_{z}(d_{F}<x<d_{F}+d_{S})$
$\displaystyle=\frac{-\omega\mu_{0}M_{y}(ue^{ik^{\prime}(x-d_{F})}+e^{-ik^{\prime}(x-d_{F})})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)},$
(38)
and in the superconductor “2”,
$\displaystyle E_{z}(-d_{F}-d_{S}<x<-d_{F})$
$\displaystyle=\frac{\omega\mu_{0}M_{y}(ue^{-ik^{\prime}(x+d_{F})}+e^{ik^{\prime}(x+d_{F})})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}.$
(39)
Both fields are finite, and $E_{z}(x=-d_{F})$ and $E_{z}(x=d_{F})$ have opposite
signs. Out of the heterostructure,
$\displaystyle E_{z}(x>d_{F}+d_{S})$
$\displaystyle=\frac{-\omega\mu_{0}M_{y}(ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime}d_{S}})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}e^{ikx},$
$\displaystyle E_{z}(x<-d_{F}-d_{S})$
$\displaystyle=\frac{\omega\mu_{0}M_{y}(ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime}d_{S}})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}e^{-ikx}.$
(40)
We illustrate in Fig. 5 the distribution of the electric fields ${\rm
Re}(E_{z}/(i\omega\mu_{0}M_{y}d_{F}))$ in the symmetric
$d_{S}^{\prime}=d_{S}=30$ nm and asymmetric $d_{S}^{\prime}=2d_{S}=60$ nm
S$|$F$|$S heterostructure, respectively, in the near-field limit. For NbN with
electron density $n_{e}=1.65\times 10^{28}~{\rm m}^{-3}$ [58] and $T_{c}=6.5$ K, the
London penetration depth $\lambda(T=0)=42.0$ nm and $\lambda(T=0.5T_{c})=43.4$
nm. The fields are opposite at the two superconductors in the symmetric
heterostructure but are skewed when $d_{S}\neq d_{S}^{\prime}$. These fields
carry energy that is radiated into the far zone [56]. When the
superconductors are sufficiently thick, $\\{d_{S},d_{S}^{\prime}\\}\gg\lambda$,
the electric fields are confined between them, forming an excellent
small-sized waveguide [36].
Figure 5: Distribution of electric fields in symmetric
$d_{S}=d_{S}^{\prime}=30$ nm [(a)] and asymmetric $d_{S}^{\prime}=2d_{S}=60$
nm [(b)] S$|$F$|$S heterostructure. The thickness of the ferromagnetic film
2$d_{F}=60$ nm and London’s penetration depth $\lambda=43.4$ nm.
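A minimal sketch of Eqs. (36) and (37) for the symmetric stack is given below; the drive frequency is an illustrative value, and the printed quantity is the normalized field $E_{z}/(i\omega\mu_{0}M_{y}d_{F})$ at the two film surfaces, which comes out with opposite signs, consistent with Fig. 5(a).

```python
import numpy as np

MU_0 = 4e-7 * np.pi
C_LIGHT = 2.998e8

def Ez_in_F_symmetric(x, omega, My, dF, dS, lam):
    """E_z inside the ferromagnet of the symmetric S|F|S stack, Eqs. (36)-(37)."""
    k = omega / C_LIGHT
    kp = np.sqrt(complex(omega**2 / C_LIGHT**2 - 1.0 / lam**2))
    u = -((k + kp) / (k - kp)) * np.exp(-2j * kp * dS)
    f = (u - 1.0) / (u + 1.0)
    den = k * np.cosh(1j*k*dF) - kp * f * np.sinh(1j*k*dF)
    return -omega * MU_0 * My * np.sinh(1j*k*x) / den

# symmetric stack of Fig. 5(a): dS = dS' = 30 nm, 2dF = 60 nm, λ = 43.4 nm
omega, My, dF, dS, lam = 2 * np.pi * 5e9, 1.0, 30e-9, 30e-9, 43.4e-9
for x in (-dF, dF):
    Ez = Ez_in_F_symmetric(x, omega, My, dF, dS, lam)
    print(x, Ez / (1j * omega * MU_0 * My * dF))  # opposite values at the two surfaces
```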
### V.2 Quasi-static approximation
As justified above, the quasi-static approximation $\nabla\times{\bf H}={\bf
J}_{s}$ (with ${\bf J}_{s}=0$ outside the superconductors) is allowed when
solving for the electric fields near the heterostructure
[56]. In the FMR case, the radiated electric field is uniform in the $y$-$z$
plane, so from $\nabla\times{\bf E}=i\omega{\bf B}$, the $x$-component
$B_{x}=\mu_{0}(H_{d,x}+M_{x})=0$ generates no electric field outside the magnet. On the
other hand, in the linear response regime $B_{z}=\mu_{0}(H_{0}+M_{z})$ is
static, so only $B_{y}=\mu_{0}M_{y}$ in the magnet radiates the time-dependent
electric field according to
$-\partial_{x}E_{z}=i\omega\mu_{0}(M_{y}+H_{s,y})$. Integrating along $x$
across the ferromagnet yields the net electric field at the interfaces obeying
$\displaystyle
E_{z}(x=d_{F})-E_{z}(x=-d_{F})=-2d_{F}i\omega\mu_{0}(M_{y}+H_{s,y}).$ (41)
Out of the heterostructure, from the $z$-component of $\nabla\times{\bf H}=0$,
$H_{y}|_{\rm outside}$ is a constant, which can be proved to vanish as in Sec.
IV.2.
In the quasi-static approximation, the electric field in the superconductors
$``1"$ and $``2"$ obeys Eq. (6). From the boundary conditions with continuous
$E_{z}$ and $H_{y}$ at interfaces and $H_{y}|_{\text{outside}}=0$, the
electric field in the superconductors reads
$\displaystyle E_{z}(d_{F}<x<d_{F}+d_{S})$
$\displaystyle=E_{z}(x=d_{F})\dfrac{\cosh((x-d_{S}-d_{F})/\lambda)}{\cosh(d_{S}/\lambda)},$
$\displaystyle E_{z}(-d_{F}-d_{S}<x<-d_{F})$
$\displaystyle=E_{z}(x=-d_{F})\dfrac{\cosh((x+d_{S}^{\prime}+d_{F})/\lambda)}{\cosh(d_{S}^{\prime}/\lambda)},$
(42)
which drive the supercurrents in the adjacent superconductors. For thin
superconducting films of thickness $O({\lambda})$, we are allowed to take an
average of the supercurrents ${\bf J}^{(1)}_{s}=\left[{\bf
J}_{s}(x=d_{F})+{\bf J}_{s}(x=d_{F}+d_{S})\right]/2$ and ${\bf
J}^{(2)}_{s}=\left[{\bf J}_{s}(x=-d_{F})+{\bf
J}_{s}(x=-d_{F}-d_{S}^{\prime})\right]/2$, i.e.,
$\displaystyle{{J}}_{s,z}^{(1)}=\dfrac{i}{\omega\mu_{0}\lambda^{2}}E_{z}(x=d_{F})\dfrac{1+\cosh(d_{S}/\lambda)}{2\cosh(d_{S}/\lambda)},$
$\displaystyle{{J}}^{(2)}_{s,z}=\dfrac{i}{\omega\mu_{0}\lambda^{2}}E_{z}(x=-d_{F})\dfrac{1+\cosh(d_{S}^{\prime}/\lambda)}{2\cosh(d_{S}^{\prime}/\lambda)}.$
(43)
The supercurrents generate the vector potential (7) and the Oersted magnetic
field according to ${H}_{s,y}=-\partial_{x}A_{z}/\mu_{0}$. Using the Weyl
identity (30) we obtain
$\displaystyle{H_{s,y}}(x)=\left\\{\begin{array}[]{cc}\left(d_{S}{{J}}^{(1)}_{s,z}+d_{S}^{\prime}{{J}}^{(2)}_{s,z}\right)/2,&x>d_{F}+d_{S}\\\
\left(-d_{S}{{J}}^{(1)}_{s,z}+d_{S}^{\prime}{{J}}^{(2)}_{s,z}\right)/2,&~{}-d_{F}<x<d_{F}\\\
\left(-d_{S}{{J}}^{(1)}_{s,z}-d_{S}^{\prime}{{J}}^{(2)}_{s,z}\right)/2,&~{}~{}x<-d_{F}-d_{S}^{\prime}\end{array}\right..$
(47)
$H_{s,y}|_{\text{outside}}=0$ requires
$\displaystyle d_{S}{{J}}^{(1)}_{s,z}+d_{S}^{\prime}{{J}}^{(2)}_{s,z}=0,$ (48)
so the Oersted magnetic field inside the ferromagnetic slab is reduced to
$\displaystyle{H}_{s,y}(-d_{F}<x<d_{F})=d_{S}^{\prime}{J}_{s,z}^{(2)}=-d_{S}J_{s,z}^{(1)}.$
(49)
Thereby, when $d_{S}=d_{S}^{\prime}$, the supercurrents are opposite in the
two superconductors. When $d_{S}^{\prime}\rightarrow 0$, $H_{s,y}$ vanishes in
the magnet.
Substituting Eqs. (43) and (41) into (48), we obtain the electric field at the
surface of the ferromagnetic film:
$\displaystyle E_{z}(x=-d_{F})=i\mu_{0}\omega
d_{S}d_{F}(M_{y}+H_{s,y})\dfrac{\cosh(d_{S}/\lambda)+1}{\cosh(d_{S}/\lambda)}$
$\displaystyle\times\left(\dfrac{d_{S}(\cosh(d_{S}/\lambda)+1)}{2\cosh(d_{S}/\lambda)}+\dfrac{d_{S}^{\prime}(\cosh(d_{S}^{\prime}/\lambda)+1)}{2\cosh(d_{S}^{\prime}/\lambda)}\right)^{-1}.$
(50)
Substituting it into Eq. (49), the Oersted magnetic field in the ferromagnetic
film
$\displaystyle{H}_{s,y}(-d_{F}<x<d_{F})$
$\displaystyle=-M_{y}\dfrac{d_{F}d_{S}^{\prime}d_{S}G(d_{S},d_{S}^{\prime},\lambda)}{\lambda^{2}+d_{F}d_{S}^{\prime}d_{S}G(d_{S},d_{S}^{\prime},\lambda)},$
(51)
where
$\displaystyle
G(d_{S},d_{S}^{\prime},\lambda)=\dfrac{(\cosh(d_{S}/\lambda)+1)}{\cosh(d_{S}/\lambda)}\dfrac{(\cosh(d_{S}^{\prime}/\lambda)+1)}{\cosh(d_{S}^{\prime}/\lambda)}$
$\displaystyle\times\left(\dfrac{d_{S}(\cosh(d_{S}/\lambda)+1)}{\cosh(d_{S}/\lambda)}+\dfrac{d_{S}^{\prime}(\cosh(d_{S}^{\prime}/\lambda)+1)}{\cosh(d_{S}^{\prime}/\lambda)}\right)^{-1}.$
(52)
These results capture precisely the key physics of the full solution and are
convenient for the calculation of the interaction between Kittel magnon and
Cooper pairs.
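As a quick numerical check, the short sketch below evaluates the geometric factor $G$ of Eq. (52) and the ratio $H_{s,y}/M_{y}$ of Eq. (51) for the symmetric NbN(30 nm)$|$YIG(60 nm)$|$NbN(30 nm) stack considered in Sec. V.3, using the parameter values quoted there.

```python
import numpy as np

def G_factor(dS, dSp, lam):
    """Geometric factor G(d_S, d_S', lambda) of Eq. (52)."""
    gS  = (np.cosh(dS / lam) + 1.0) / np.cosh(dS / lam)
    gSp = (np.cosh(dSp / lam) + 1.0) / np.cosh(dSp / lam)
    return gS * gSp / (dS * gS + dSp * gSp)

def oersted_ratio(dF, dS, dSp, lam):
    """H_{s,y}/M_y inside the ferromagnet, Eq. (51)."""
    G = G_factor(dS, dSp, lam)
    return -dF * dS * dSp * G / (lam**2 + dF * dS * dSp * G)

# symmetric stack of Sec. V.3: 2dF = 60 nm, dS = dS' = 30 nm, λ = 43.4 nm
print(oersted_ratio(dF=30e-9, dS=30e-9, dSp=30e-9, lam=43.4e-9))  # ≈ -0.3
```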
### V.3 Ultrastrong interaction between Kittel magnon and Cooper pairs
We have shown above that the magnetization dynamics of ${\bf M}$ generates
$H_{s,y}$ via the backaction of the superconductors, which in turn drives ${\bf
M}$ in the ferromagnet. This imposes a self-consistent problem that we solve by
combining the Landau-Lifshitz and Maxwell equations.
In the linear regime, the Landau-Lifshitz equation reads
$\displaystyle-i\omega M_{x}+\mu_{0}\gamma M_{y}H_{0}=\mu_{0}\gamma
M_{0}H_{s,y},$ $\displaystyle i\omega M_{y}+\mu_{0}\gamma
M_{x}H_{0}=\mu_{0}\gamma M_{0}H_{d,x}.$ (53)
Substituting $B_{x}=M_{x}+H_{d,x}=0$ into Eq. (53), $M_{y}$ relates to
$H_{s,y}$ via
$\displaystyle
M_{y}=\dfrac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\mu_{0}^{2}\gamma^{2}H_{0}(H_{0}+M_{0})-\omega^{2}}H_{s,y}.$
(54)
When $d_{S}^{\prime}\rightarrow 0$, $H_{s,y}=0$ according to Eq. (51), and the
FMR frequency recovers the Kittel formula $\tilde{\omega}_{\rm
K}=\mu_{0}\gamma\sqrt{H_{0}(H_{0}+M_{0})}$ [52]. With finite $d_{S}$ and
$d_{S}^{\prime}$, the FMR frequency is self-consistently solved via combining
Eqs. (51) and (54), leading to the modified FMR frequency
$\displaystyle\omega_{\rm K}=\mu_{0}\gamma$
$\displaystyle\times\sqrt{\frac{\lambda^{2}H_{0}(H_{0}+M_{0})+d_{S}d_{S}^{\prime}d_{F}G(d_{S},d_{S}^{\prime},\lambda)(H_{0}+M_{0})^{2}}{d_{S}d_{S}^{\prime}d_{F}G(d_{S},d_{S}^{\prime},\lambda)+\lambda^{2}}}.$
(55)
In particular, when $d_{S}=d_{S}^{\prime}$,
$\displaystyle\omega_{\rm K}$
$\displaystyle={\mu_{0}\gamma}\left(\dfrac{2\lambda^{2}\cosh{(d_{S}/\lambda)}H_{0}(H_{0}+M_{0})}{d_{S}d_{F}\left(\cosh{(d_{S}/\lambda)}+1\right)+2\lambda^{2}\cosh{(d_{S}/\lambda)}}\right.$
$\displaystyle\left.+\dfrac{d_{S}d_{F}(\cosh{(d_{S}/\lambda)}+1)(H_{0}+M_{0})^{2}}{d_{S}d_{F}\left(\cosh{(d_{S}/\lambda)}+1\right)+2\lambda^{2}\cosh{(d_{S}/\lambda)}}\right)^{1/2}.$
(56)
Approaching $T_{c}$, $\lambda\rightarrow\infty$ and
$\cosh(d_{S}/\lambda)\rightarrow 1$, so the FMR frequency (56) recovers the
Kittel formula $\omega_{\rm K}\rightarrow\tilde{\omega}_{\rm K}$; for
$T<T_{c}$, it is shifted.
To show the FMR shift, we assume an oscillating magnetic field
$\tilde{H}e^{-i\omega_{0}t}\hat{\bf y}$ of frequency $\omega_{0}$ applied
along the $\bf\hat{y}$-direction (the associated microwave electric field is
along the normal $\hat{\bf x}$-direction). The wavelength of this microwave is
much larger than the thickness of the heterostructure, so it can be treated as
uniform across the heterostructure thickness. It can penetrate the
superconductor easily when $\\{d_{S},d_{S}^{\prime}\\}\sim\lambda$. With the
wave vector $\parallel\hat{\bf z}$, parallel to the film, it only excites $\bf
M$ in the ferromagnet but does not drive the superconductor.
Including the external pump field $\tilde{H}e^{-i\omega_{0}t}\hat{\bf y}$ and
$H_{s,y}\hat{\bf y}$ from the superconductor (51) into ${\bf H}_{\rm eff}$ and
incorporating the Gilbert damping $\alpha_{G}$, the linearized Landau-
Lifshitz-Gilbert equation reads
$\displaystyle-i\omega_{0}M_{x}+\mu_{0}\gamma M_{y}H_{0}$
$\displaystyle=\mu_{0}\gamma
M_{0}H_{\text{eff},y}+i\alpha_{G}\omega_{0}M_{y},$ $\displaystyle\mu_{0}\gamma
H_{0}M_{x}+i\omega_{0}M_{y}$ $\displaystyle=\mu_{0}\gamma
M_{0}H_{\text{eff},x}+i\alpha_{G}\omega_{0}M_{x},$ (57)
from which we solve with $\alpha_{G}\ll 1$
$\displaystyle M_{y}$
$\displaystyle=\dfrac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\rm
K}^{2}-\omega_{0}^{2}-i\Gamma}\tilde{H},$ $\displaystyle M_{x}$
$\displaystyle=-iM_{y}\left[\dfrac{\omega_{0}}{\mu_{0}\gamma(H_{0}+M_{0})}+\dfrac{i\alpha_{G}\omega_{0}^{2}}{(\mu_{0}\gamma(H_{0}+M_{0}))^{2}}\right],$
(58)
where
$\displaystyle\Gamma$
$\displaystyle=\dfrac{\alpha_{G}\omega_{0}(\mu_{0}^{2}\gamma^{2}(H_{0}+M_{0})^{2}+\omega_{0}^{2})}{\mu_{0}\gamma(H_{0}+M_{0})}.$
(59)
The Oersted field in the thin magnetic film induced by the supercurrent
$\displaystyle H_{s,y}=$
$\displaystyle-\tilde{H}\dfrac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\rm
K}^{2}-\omega_{0}^{2}-i\Gamma}\dfrac{d_{F}d_{S}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}{\lambda^{2}+d_{S}d_{F}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}.$
(60)
The average supercurrent density in (one of) the thin superconductors
$\displaystyle
J_{s}^{(1)}=\tilde{H}\dfrac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\rm
K}^{2}-\omega_{0}^{2}-i\Gamma}\dfrac{d_{F}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}{\lambda^{2}+d_{S}d_{F}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}.$
(61)
The average electric field $E_{z}$ in (one of) the superconductors
$\displaystyle
E^{(1)}_{z}=-i\tilde{H}\dfrac{\mu_{0}^{3}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\rm
K}^{2}-\omega_{0}^{2}-i\Gamma}\dfrac{\omega_{0}\lambda^{2}d_{F}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}{\lambda^{2}+d_{S}d_{F}d_{S}^{\prime}G(d_{S},d_{S}^{\prime},\lambda)}.$
(62)
Here we illustrate the numerical results considering a yttrium iron garnet
(YIG) film of thickness $2d_{F}=60$ nm sandwiched by two NbN superconductors
of thickness $d_{S}=d_{S}^{\prime}=30$ nm. Insulating EuS thin magnetic film
[59, 60] is also a possible candidate to test our prediction. For YIG,
$\mu_{0}M_{0}=0.2$ T and $\alpha_{G}=5\times 10^{-4}$ [61, 62]. We use
$\lambda(T=0.5T_{c})=43.4$ nm for NbN [58]. We take the bias field
$\mu_{0}H_{0}=0.05$ T and the excitation field $\mu_{0}\tilde{H}=0.01$ mT.
Figure 6 shows the radiated electric field in (one of) the superconductors and
the excited amplitudes of $\bf M$ as a function of the excitation frequency
$\omega_{0}$. The frequency shift is $2\pi\times 1.6$ GHz, comparable to half
of the bare FMR frequency $\tilde{\omega}_{\rm K}=2\pi\times 3.2$ GHz,
corresponding to a decrease of the resonant magnetic field by as much as 350
mT. This demonstrates the potential to achieve ultrastrong interaction between
magnons and Cooper pairs even with magnetic insulators.
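These numbers can be reproduced with the minimal sketch below, which evaluates Eqs. (52), (55), (59), and (62) at resonance; the gyromagnetic ratio $\gamma/2\pi\approx 28$ GHz/T is an assumed standard value, and the remaining parameters are those listed above.

```python
import numpy as np

MU_0 = 4e-7 * np.pi
GAMMA = 2 * np.pi * 28e9          # gyromagnetic ratio (rad s^-1 T^-1), assumed value

def G_factor(dS, dSp, lam):
    """Geometric factor of Eq. (52)."""
    gS  = (np.cosh(dS / lam) + 1.0) / np.cosh(dS / lam)
    gSp = (np.cosh(dSp / lam) + 1.0) / np.cosh(dSp / lam)
    return gS * gSp / (dS * gS + dSp * gSp)

def fmr_gated(mu0_H0, mu0_M0, dF, dS, dSp, lam):
    """Modified FMR angular frequency of the S|F|S stack, Eq. (55)."""
    a = dS * dSp * dF * G_factor(dS, dSp, lam)
    num = lam**2 * mu0_H0 * (mu0_H0 + mu0_M0) + a * (mu0_H0 + mu0_M0)**2
    return GAMMA * np.sqrt(num / (a + lam**2))

def Ez_resonant(mu0_Ht, mu0_H0, mu0_M0, dF, dS, dSp, lam, alpha_G):
    """|E_z| averaged over one superconductor at resonance (omega_0 = omega_K), Eq. (62)."""
    G = G_factor(dS, dSp, lam)
    wK = fmr_gated(mu0_H0, mu0_M0, dF, dS, dSp, lam)
    wM = GAMMA * (mu0_H0 + mu0_M0)                   # mu0*gamma*(H0 + M0)
    Gamma = alpha_G * wK * (wM**2 + wK**2) / wM      # linewidth of Eq. (59) at omega_0 = omega_K
    chi = MU_0 * GAMMA**2 * mu0_M0 * (mu0_H0 + mu0_M0) / Gamma
    geom = wK * lam**2 * dF * dSp * G / (lam**2 + dS * dF * dSp * G)
    return (mu0_Ht / MU_0) * chi * geom

# YIG (2dF = 60 nm, mu0*M0 = 0.2 T, alpha_G = 5e-4) between two 30 nm NbN films
pars = dict(dF=30e-9, dS=30e-9, dSp=30e-9, lam=43.4e-9)
f_bare  = GAMMA * np.sqrt(0.05 * (0.05 + 0.2)) / (2 * np.pi)
f_gated = fmr_gated(0.05, 0.2, **pars) / (2 * np.pi)
print(f_bare / 1e9, f_gated / 1e9)                               # ≈ 3.1 GHz vs ≈ 4.6 GHz
print(Ez_resonant(1e-5, 0.05, 0.2, alpha_G=5e-4, **pars))        # ≈ 9 V/m
```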
Figure 6: FMR spectra with the excitation field $\mu_{0}\tilde{H}=0.01$ mT.
(a) plots the excited electric field amplitude in (one of) the superconductors
in the symmetric S$|$F$|$S heterostructure. The amplitude of the resonance
electric field is $E_{z}\sim 9$ V/m. (b) shows the excited amplitudes of the
magnetization $M_{x,y}$ with and without the two adjacent superconductors. The
frequency shift is as large as $2\pi\times 1.6$ GHz$\sim\tilde{\omega}_{\rm
K}/2$.
## VI Conclusion and discussion
Magnetic insulators are ideal candidates for long-range spin transport [1, 2,
3, 4, 5, 6, 7], strong coupling between magnons and microwaves [32], and
quantum information processing [29, 30, 33]; gating them by superconductors
may bring new control dimensions. In comparison to metallic magnets, the
mutual proximity effect between magnetic insulators and superconductors may
differ, which may help to distinguish different competing mechanisms [26] in
future studies. Our model system differs from metallic ferromagnets in that no
electric currents flow in the insulator; such currents, if large, could affect
the field distribution via radiation. Our theory may apply to antiferromagnets
as well, in which case the characteristic frequency is replaced by terahertz
values.
In conclusion, we analyze the interaction between the Kittel magnons in
insulating magnetic film and Cooper pairs in superconductors mediated by the
radiated electric fields from the magnetization dynamics. By highlighting the
role of the total reflection of the electric fields at the ferromagnet-
superconductor interface, solved beyond the quasi-static approximation, we
provide a comprehensive understanding of the absence of the FMR shift in the
F$|$S heterostructure and predict its existence in the S$|$F$|$S
heterostructure with Meissner screening. The coupling between magnons and
Cooper pairs is ultrastrong, with the frequency shift reaching tens of percent
of the bare FMR frequency, which may bring significant advantages for
information processing in on-chip magnonics and quantum magnonics.
###### Acknowledgements.
We gratefully acknowledge Prof. Guang Yang and Prof. Lihui Bai for many
inspiring discussions. This work is financially supported by the National
Natural Science Foundation of China, and the startup grant of Huazhong
University of Science and Technology (Grants No. 3004012185 and 3004012198).
## Appendix A General solution of $E_{z}$ in S$|$F$|$S heterostructure
Here we list the general solution of $E_{z}(x)$ in the S$|$F$|$S
heterostructure when $d_{S}\neq d_{S}^{\prime}$ in Fig. 4. Inside the
ferromagnet,
$\displaystyle E_{z}(-d_{F}<x<d_{F})$
$\displaystyle=\frac{-\omega\mu_{0}M_{y}(Ge^{ikx}+e^{-ikx})}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})},$
(63)
where
$\displaystyle
G=-\frac{-2k\sinh(ikd_{F})+k^{\prime}(f(u)e^{-ikd_{F}}+f(u^{\prime})e^{ikd_{F}})}{-2k\sinh(ikd_{F})+k^{\prime}(f(u)e^{ikd_{F}}+f(u^{\prime})e^{-ikd_{F}})},$
(64)
and
$u^{\prime}=-[(k+k^{\prime})/(k-k^{\prime})]\exp(-2ik^{\prime}d_{S}^{\prime})$.
In the superconductor “1”,
$\displaystyle
E_{z}(d_{F}<x<d_{F}+d_{S})=\frac{ue^{ik^{\prime}(x-d_{F})}+e^{-ik^{\prime}(x-d_{F})}}{1+u}$
$\displaystyle\times\frac{-\omega\mu_{0}M_{y}(Ge^{ikd_{F}}+e^{-ikd_{F}})}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}.$
(65)
In the superconductor “2”,
$\displaystyle
E_{z}(-d_{F}-d_{S}^{\prime}<x<-d_{F})=\frac{e^{ik^{\prime}(x+d_{F})}+u^{\prime}e^{-ik^{\prime}(x+d_{F})}}{1+u^{\prime}}$
$\displaystyle\times\frac{-\omega\mu_{0}M_{y}(Ge^{-ikd_{F}}+e^{ikd_{F}})}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}.$
(66)
Out of the heterostructure,
$\displaystyle
E_{z}(x>d_{F}+d_{S})=\frac{ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime}d_{S}}}{1+u}$
$\displaystyle\times\frac{-\omega\mu_{0}M_{y}(Ge^{ikd_{F}}+e^{-ikd_{F}})e^{ik(x-d_{F}-d_{S})}}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})},$
$\displaystyle
E_{z}(x<-d_{F}-d_{S}^{\prime})=\frac{e^{-ik^{\prime}d_{S}^{\prime}}+u^{\prime}e^{ik^{\prime}d_{S}^{\prime}}}{1+u^{\prime}}$
$\displaystyle\times\frac{-\omega\mu_{0}M_{y}(Ge^{-ikd_{F}}+e^{ikd_{F}})e^{-ik(x+d_{F}+d_{S})}}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}.$
(67)
## References
* [1] B. Lenk, H. Ulrichs, F. Garbs, and M. Münzenberg, The building blocks of magnonics, Phys. Rep. 507, 107 (2011).
* [2] A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics, Nat. Phys. 11, 453 (2015).
* [3] D. Grundler, Nanomagnonics around the corner, Nat. Nanotechnol. 11, 407 (2016).
* [4] V. E. Demidov, S. Urazhdin, G. de Loubens, O. Klein, V. Cros, A. Anane, and S. O. Demokritov, Magnetization oscillations and waves driven by pure spin currents, Phys. Rep. 673, 1 (2017).
* [5] A. Brataas, B. van Wees, O. Klein, G. de Loubens, and M. Viret, Spin Insulatronics, Phys. Rep. 885, 1 (2020).
* [6] Barman et al., The 2021 Magnonics Roadmap, J. Phys. Condens. Matter 33, 413001 (2021).
* [7] T. Yu, Z. C. Luo, and G. E. W. Bauer, Chirality as generalized spin–orbit interaction in spintronics, Phys. Rep. 1009, 1 (2023).
* [8] O. V. Dobrovolskiy, R. Sachser, T. Brächer, T. Böttcher, V. V. Kruglyak, R. V. Vovk, V. A. Shklovskij, M. Huth, B. Hillebrands, and A. V. Chumak, Magnon-fluxon interaction in a ferromagnet/superconductor heterostructure, Nat. Phys. 15, 477 (2019).
* [9] Y. Yao, R. Cai, T. Yu, Y. Ma, W. Xing, Y. Ji, X.-C. Xie, S.-H. Yang, and W. Han, Giant oscillatory Gilbert damping in superconductor/ferromagnet/superconductor junctions, Sci. Adv. 7, eabh3686 (2021).
* [10] L. G. Johnsen, H. T. Simensen, A. Brataas, and J. Linder, Magnon Spin Current Induced by Triplet Cooper Pair Supercurrents, Phys. Rev. Lett. 127, 207001 (2021).
* [11] T. Yu and G. E. W. Bauer, Efficient Gating of Magnons by Proximity Superconductors, Phys. Rev. Lett. 129, 117201 (2022).
* [12] M. A. Kuznetsov and A. A. Fraerman, Temperature-sensitive spin-wave nonreciprocity induced by interlayer dipolar coupling in ferromagnet/paramagnet and ferromagnet/superconductor hybrid systems, Phys. Rev. B 105, 214401 (2022).
* [13] M. Silaev, Anderson-Higgs Mass of Magnons in Superconductor-Ferromagnet-Superconductor Systems, Phys. Rev. Appl. 18, L061004 (2022).
* [14] I. V. Bobkova, A. M. Bobkov, A. Kamra, and W. Belzig, Magnon-cooparons in magnet-superconductor hybrids, Commun. Mater. 3, 95 (2022).
* [15] A. M. Bobkov, S. A. Sorokin, and I. V. Bobkova, Phys. Rev. B 107, 174521 (2023).
* [16] A. S. Ianovskaia, A. M. Bobkov, and I. V. Bobkova, Magnon influence on the superconducting DOS in FI/S bilayers, arXiv:2307.03954.
* [17] M. Borst, P. H. Vree, A. Lowther, A. Teepe, S. Kurdi, I. Bertelli, B. G. Simon, Y. M. Blanter, and T. van der Sar, Observation and control of hybrid spin-wave–Meissner-current transport modes, arXiv:2307.07581.
* [18] A. T. G. Janssønn, H. T. Simensen, A. Kamra, A. Brataas, and S. H. Jacobsen, Macroscale nonlocal transfer of superconducting signatures to a ferromagnet in a cavity, Phys. Rev. B 102, 180506(R) (2020).
* [19] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, M. Weides, V. V. Ryazanov, A. A. Golubov, A. V. Ustinov, and M. Y. Kupriyanov, Ultrastrong photon-to-magnon coupling in multilayered heterostructures involving superconducting coherence via ferromagnetic layers, Sci. Adv. 7, eabe8638 (2021).
* [20] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, A. A. Golubov, M. Y. Kupriyanov, V. V. Ryazanov, and A. V. Ustinov, Approaching Deep-Strong On-Chip Photon-To-Magnon Coupling, Phys. Rev. Appl. 16, 034029 (2021).
* [21] M. Silaev, Ultrastrong magnon-photon coupling, squeezed vacuum, and entanglement in superconductor/ferromagnet nanostructures, Phys. Rev. B 107, L180503 (2023).
* [22] A. T. G. Janssønn, H. G. Hugdal, A. Brataas, and S. H. Jacobsen, Cavity-mediated superconductor–ferromagnetic-insulator coupling, Phys. Rev. B 107, 035147 (2023).
* [23] A. Ghirri, C. Bonizzoni, M. Maksutoglu, A. Mercurio, O. D. Stefano, S. Savasta, and M. Affronte, Ultrastrong magnon-photon coupling achieved by magnetic films in contact with superconducting resonators, Phys. Rev. Appl. (2023).
* [24] M. Eschrig, Spin-polarized supercurrents for spintronics: a review of current progress, Rep. Prog. Phys. 78, 104501 (2015).
* [25] J. Linder and J. W. A. Robinson, Superconducting spintronics, Nat. Phys. 11, 307 (2015).
* [26] F. S. Bergeret, M. Silaev, P. Virtanen, and T. T. Heikkilä, Colloquium: Nonequilibrium effects in superconductors with a spin-splitting field, Rev. Mod. Phys. 90, 041001 (2018).
* [27] J. Linder and A. V. Balatsky, Odd-frequency superconductivity, Rev. Mod. Phys. 91, 045005 (2019).
* [28] M. Amundsen, J. Linder, J. W. A. Robinson, I. Žutić, and N. Banerjee, Colloquium: Spin-orbit effects in superconducting hybrid structures, arXiv: 2210.03549.
* [29] Y. Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Coherent coupling between a ferromagnetic magnon and a superconducting qubit, Science 349, 405 (2015).
* [30] D. L. Quirion, S. P. Wolski, Y. Tabuchi, S. Kono, K. Usami, and Y. Nakamura, Entanglement-based single-shot detection of a single magnon with a superconducting qubit, Science 367, 425 (2020).
* [31] T. Yu, M. Claassen, D. M. Kennes, and M. A. Sentef, Optical manipulation of domains in chiral topological superconductors, Phys. Rev. Research 3, 013253 (2021).
* [32] B. Z. Rameshti, S. V. Kusminskiy, J. A. Haigh, K. Usami, D. Lachance-Quirion, Y. Nakamura, C. -M. Hu, H. X. Tang, G. E. W. Bauer, and Y. M. Blanter, Cavity magnonics, Phys. Rep. 979, 1 (2022).
* [33] D. Xu, X. -K. Gu, H. -K. Li, Y. -C. Weng, Y. -P. Wang, J. Li, H. Wang, S. -Y. Zhu, and J. Q. You, Quantum Control of a Single Magnon in a Macroscopic Spin System, Phys. Rev. Lett. 130, 193603 (2023).
* [34] K. M. D. Hals, M. Schecter, and M. S. Rudner, Composite Topological Excitations in Ferromagnet-Superconductor Heterostructures, Phys. Rev. Lett. 117, 017001 (2016).
* [35] A. F. Kockum, A. Miranowicz, S. D. Liberato, S. Savasta, and F. Nori, Ultrastrong coupling between light and matter, Nat. Rev. Phys. 1, 19 (2019).
* [36] J. C. Swihart, Field Solution for a Thin‐Film Superconducting Strip Transmission Line, J. Appl. Phys. 32, 461 (1961).
* [37] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, V. V. Bolginov, V. V. Ryazanov, A. A. Golubov, and A. V. Ustinov, Ferromagnet/Superconductor Hybridization for Magnonic Applications, Adv. Funct. Mater. 28, 1802375 (2018).
* [38] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, V. V. Ryazanov, A. A. Golubov, and A. V. Ustinov, Modified dispersion law for spin waves coupled to a superconductor, J. Appl. Phys. 124, 233903 (2018).
* [39] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, P. S. Dzhumaev, O. V. Emelyanova, A. A. Golubov, V. V. Ryazanov, and A. V. Ustinov, Ferromagnet/Superconductor Hybrid Magnonic Metamaterials, Adv. Sci. 6, 1900435 (2019).
* [40] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, A. A. Golubov, V. V. Ryazanov, and A. V. Ustinov, Nonlinear spin waves in ferromagnetic/superconductor hybrids, J. Appl. Phys. 127, 093903 (2020).
* [41] Seshadri, Surface magnetostatic modes of a ferrite slab, Proc. IEEE 58, 506 (1970).
* [42] C. Bayer, J. Jorzick, B. Hillebrands, S. O. Demokritov, R. Kouba, R. Bozinoski, A. N. Slavin, K. Y. Guslienko, D. V. Berkov, N. L. Gorn, and M. P. Kostylev, Spin-wave excitations in finite rectangular elements of ${\mathrm{Ni}}_{80}{\mathrm{Fe}}_{20}$, Phys. Rev. B 72, 064427 (2005).
* [43] T. Yu, C. P. Liu, H. M. Yu, Y. M. Blanter, and G. E. W. Bauer, Chiral excitation of spin waves in ferromagnetic films by magnetic nanowire gratings, Phys. Rev. B 99, 134424 (2019).
* [44] T. Yu, Y. M. Blanter, and G. E. W. Bauer, Chiral Pumping of Spin Waves, Phys. Rev. Lett. 123, 247202 (2019).
* [45] T. Yu, J. Zou, B. Zeng, J. W. Rao, and K. Xia, Non-Hermitian Topological Magnonics, arXiv:2306.04348.
* [46] A. V. Chumak, A. A. Serga, and B. Hillebrands, Nat. Commun. 5, 4700 (2014).
* [47] T. Yu, H. Wang, M. A. Sentef, H. Yu, and G. E. W. Bauer, Magnon trap by chiral spin pumping, Phys. Rev. B 102, 054429 (2020).
* [48] O. A. Santos and B. J. van Wees, Magnon confinement in an all-on-chip YIG cavity resonator using hybrid YIG/Py magnon barriers, arXiv:2306.14029.
* [49] L. -L. Li, Y. -L. Zhao, X. -X. Zhang, and Y. Sun, Possible evidence for spin-transfer torque induced by spin-triplet supercurrents, Chin. Phys. Lett. 35, 077401 (2018).
* [50] I. A. Golovchanskiy, N. N. Abramov, V. S. Stolyarov, V. I. Chichkov, M. Silaev, I. V. Shchetinin, A. A. Golubov, V. V. Ryazanov, A. V. Ustinov, and M. Y. Kupriyanov, Magnetization Dynamics in Proximity-Coupled Superconductor-Ferromagnet-Superconductor Multilayers, Phys. Rev. Appl. 14, 024086 (2020).
* [51] K. -R. Jeon, C. Ciccarelli, H. Kurebayashi, L. F. Cohen, X. Montiel, M. Eschrig, T. Wagner, S. Komori, A. Srivastava, J. W. A. Robinson, and M. G. Blamire, Effect of Meissner Screening and Trapped Magnetic Flux on Magnetization Dynamics in Thick $\mathrm{Nb}/{\mathrm{Ni}}_{80}{\mathrm{Fe}}_{20}/\mathrm{Nb}$ Trilayers, Phys. Rev. Appl. 11, 014061 (2019).
* [52] C. Kittel, On the Theory of Ferromagnetic Resonance Absorption, Phys. Rev. 73, 155 (1948).
* [53] S. V. Mironov and A. I. Buzdin, Giant demagnetization effects induced by superconducting films, Appl. Phys. Lett. 119, 102601 (2021).
* [54] S. M. Rezende, Fundamentals of Magnonics (Springer, Cham, 2020).
* [55] L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media, 2nd ed. (Butterworth-Heinenann, Oxford, U.K., 1984).
* [56] J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1998).
* [57] J. R. Schrieffer, Theory of Superconductivity (W. A. Benjamin, New York, 1964).
* [58] S. P. Chockalingam, M. Chand, J. Jesudasan, V. Tripathi, and P. Raychaudhuri, Superconducting properties and Hall effect of epitaxial NbN thin films, Phys. Rev. B 77, 214503 (2008).
* [59] O. W. Dietrich, A. J. Henderson, Jr., and H. Meyer, Spin-wave analysis of specific heat and magnetization in EuO and EuS, Phys. Rev. B 12, 2844 (1975).
* [60] Y. Hou, F. Nichele, H. Chi, A. Lodesani, Y. Wu, M. F. Ritter, D. Z. Haxell, M. Davydova, S. Ilić, O. Glezakou-Elbert, A. Varambally, F. S. Bergeret, A. Kamra, L. Fu, P. A. Lee, and J. S. Moodera, Ubiquitous Superconducting Diode Effect in Superconductor Thin Films, Phys. Rev. Lett. 131, 027001 (2023).
* [61] X. Y. Wei, O. A. Santos, C. H. S. Lusero, G. E. W. Bauer, J. B. Youssef, and B. J. van Wees, Giant magnon spin conductivity in ultrathin yttrium iron garnet films, Nat. Mater. 21, 1352 (2022).
* [62] S. Knauer, K. Davídková, D. Schmoll, R. O. Serha, A. Voronov, Q. Wang, R. Verba, O. V. Dobrovolskiy, M. Lindner, T. Reimann, C. Dubs, M. Urbánek, and A. V. Chumak, Propagating spin-wave spectroscopy in a liquid-phase epitaxial nanometer-thick YIG film at millikelvin temperatures, J. Appl. Phys. 133, 143905 (2023).
A closed parameterization of DNA–damage by charged particles, as a function of
energy — A geometrical approach
Frank Van den Heuvel PhD∗1,2,
1 CRUK/MRC Oxford Institute for Radiation Oncology, Department of Oncology,
University of Oxford, Oxford, UK
2 Laboratory for experimental radiotherapy, Department of Oncology, University
of Leuven, Leuven, Belgium
$\ast$ E-mail: <EMAIL_ADDRESS>
## Abstract
Purpose: To present a closed formalism for calculating charged-particle radiation
damage induced in DNA. The formalism is valid for all types of charged
particles and, due to its closed nature, is suited to fast conversion of dose to
DNA damage.
Methods: The induction of double strand breaks in DNA strands residing in
irradiated cells is quantified using a single-particle model. This leads to a
proposal to use the cumulative Cauchy distribution to express the mix of high-
and low-LET type damage probability generated by a single particle. A
microscopic phenomenological Monte Carlo code is used to fit the parameters of
the model, as a function of kinetic energy, to the damage to a DNA molecule
embedded in a cell. The model is applied to four particles: electrons, protons,
alpha particles, and carbon ions. A geometric interpretation using the
impact-ionization mean free path as a quantifier allows extension of the model
to very low energies.
Results: The mathematical expression describes the simulated data adequately
according to a chi-square test ($\chi^{2}/NDF<1$). This applies to all particle
types, with an almost perfect fit for protons, while the other particles show
some discrepancies at very low energies. An implementation calculating a strict
version of the RBE based on complex damage alone is corroborated by
experimentally measured RBE data. The geometric interpretation generates a
unique dimensionless parameter $k$ for each type of charged particle. In
addition, it predicts a distribution of DNA damage which differs from that of
current models.
## Introduction
The biological effect of ionizing radiation on human cells is believed to be
related to the generation of damage in the DNA–molecule located in the cell’s
nucleus[1]. The physical mechanism is the ionization of the DNA macro
molecule, generating lesions in the molecular structure, either by direct
ionization or by the generation of radicals in the vicinity of the DNA which
then indirectly damage it. These events (direct or indirect) can create
several types of damage to the DNA by combining a number of lesions into a
cluster, which can only happen if they occur in close proximity (typically
within one turn of the DNA–helix). The most prevalent of these damage types
are base damage (2 lesions), followed by single strand breaks (SSB, 3 lesions),
double strand breaks (DSB, 4 lesions), and locally multiple damage sites
(LMDS). The latter are clusters of different types of damage occurring close to
each other. It has been shown that base as well as SSB damage is not likely to
be a deciding factor in the destruction of cells, due to the efficient repair
mechanisms which exist in the cell[2]. The combination of double strand breaks
and LMDSs is likely to be the root cause of cell kill[3].
To quantify the amount of ionizing interactions in a medium, the physical
notion of dose can be used. Dose is defined as the amount of energy deposited
in a medium per unit mass and is expressed in Joule (J) per kg, or Gray (Gy). In
the case of dose deposition by charged particles the Bethe formalism is used.
This describes ionization events in a medium in terms of energy loss of the
charged particles in inelastic collisions with the electrons of the medium,
through the notion of mass stopping power ($dE/\rho dx$). In his seminal work
of 1930, Bethe showed that there is an intimate relationship between stopping
power on the one hand, and energy (i.e. speed), charge, and the medium in which
the interaction takes place on the other[4]. A further extension, taking into
account the possibility of the charged particle picking up electrons and
thereby changing the stopping power, was introduced by Barkas[5] using the
concept of an effective charge.
In radiation biology, linear energy transfer (LET) is used rather than
stopping power. LET is identical to stopping power with the energy delivered
to $\delta$–rays (i.e. highly energetic knock-on electrons) subtracted. This
quantity is called restricted stopping power. As such, LET is a measure of the
density of ionization taking place along the track of a charged particle
through a medium. Due to its close relationship with stopping power, it follows
that there is a close relationship between LET and the kinetic energy of the
depositing particle. Observations show that a dearth of DSBs and LMDSs is
related to low LET irradiations, while an increased number of both, for the
same dose, is seen in high LET irradiations[1]. Brenner and Ward[6] argued that
DSB and LMDS damage was related to multiple interactions by single particles,
rather than to the combination of single strand breaks each generated by a
single particle. In the field of microdosimetry, this is taken a step further
by defining the notion of lineal energy: the energy deposited along lines
confined in a convex geometric shape with a given distribution of chord
lengths, estimating the energy deposited in various shapes that can be used for
measurement (e.g. spheres, cylinders).
Extending this, it is natural to propose a model where distance between
ionizations along these lines plays a significant role in the generation of
DNA–damage. A full listing and treatment of these quantities can be found in
the ICRU reports 16, 19, and 36 [7, 8, 9].
To describe the damage impact of charged particles on the DNA–structure, the
scientific community has had recourse to Monte Carlo simulations to quantify
the damage introduced[10, 11]. A more fundamental analytical approach is
currently lacking, due to the underlying complexity of the DNA molecule and the
paucity of the available experimental data. The data which are available are
mainly provided in terms of the relative biological effectiveness (RBE), a
quantity combining physical, spectral, chemical, and biological factors, all of
which hamper ab-initio calculations.
Monte Carlo calculations are able to predict the induction of simple or
complex damage as well as induction of single and double strand breaks in
DNA–molecules. These findings are interpreted using the Bethe–Barkas formalism
in terms of LET and show that high LET particles indeed introduce more complex
damage.
In this paper we develop a parameterization, based on a simple geometrical
model, that describes the behavior calculated by the Monte Carlo codes. We also
show that this formalism describes the current knowledge well.
## Methods and Materials
### Theory
We use the single charged particle model as proposed by Brenner and Ward,
distinguishing three interaction modes: low LET, high LET, and intermediate
LET. The specifics of each mode are explained below.
1. Low LET: A single particle is generally not able to generate lesions close
enough together to induce double strand breaks at each interaction. DSBs can
still be generated, but only in a limited fashion; note that we use the word
lesion in a liberal fashion, to indicate an interaction event which has damage
as a consequence.
2. High LET: The particle has the possibility to generate multiple lesions
irrespective of any geometrical considerations. We implicitly assume that the
double strand break damage is the result of multiple interactions by one
particle. How exactly this damage is introduced (directly or indirectly) is
outside the scope of this article. An implicit assumption, however, is that
ionizing events need to be geometrically close to the DNA structure.
3. 3.
Intermediate: In given geometric circumstances it is possible for the charged
particle to generate DSB–damage, in a high–LET manner, depending on the angle
under which the particle hits the sensitive volume (Fig. 1).
As a surrogate to categorize the charged particle in one of the types defined
above, we use the mean path length between ionizing interactions in a medium
consistent with the atomic make up of a DNA–molecule for the type of particle
under consideration. In the remainder, we denote this with $\lambda(E)$, where
$E$ is the kinetic energy of the particle. If $\lambda(E)$ is large relative
to the sensitive volume, then the lesions on average are too far apart and
only damage types related to a few lesions can occur (i.e. SSB and base
damage). Charged particles with such energies will be part of the first
category. If on the other hand $\lambda(E)$ is small then the probability of
lesions creating more complex clusters of damage close together will be
higher. Charged particles with this property will be in the high LET category.
Finally, charged particles with intermediate distances between ionizing events
have the capability of generating DSB and LMDS damage depending on other
factors than $\lambda(E)$ alone. In this model we use the geometric direction
of the path of the charged particle as a parameter. In Figure 1 a schematic
model of this approach is shown. This implies that only a limited amount of
directions are available to contribute to the amount of complex damage in the
manner as outlined for the high–LET type interactions. This happens when for a
particle of a given energy the quantity $\lambda(E)$ is slightly larger than
the maximal distance between two DNA–damage lesions to be considered as being
in the same cluster (usually about 10 base pairs (bp)). Due to the finite
thickness of the sensitive volume it is possible to behave in a high–LET
fashion depending on the angle with which the particle’s path crosses the
volume. This occurs when the projection of the path is smaller than the
previously determined maximum.
Figure 1: A schematic model of a source of charged particles with a given
mean free path length (i.e. a given energy), which is comparable with the
diameter of the sensitive cell volume. As the angle ($\theta$) of the particle’s path with respect to the normal to the axis of the structure increases, so does the chance that more than a single event will occur in the volume.
This implies that if the angle ($\theta$) is larger than the one for which the
projection of the average length between interactions equals the diameter,
more high LET events will be registered.
### Equivalence Principle
In the case of irradiation with charged particles all directions of the
particle’s paths are possible as are all rotational positions of the
DNA–structure. A particle that interacts (i.e. that creates a lesion) at the
surface of a given sensitive volume has limited possibilities to interact
again given that on average, a specific distance (which depends on the
particle energy) has to be travelled before it interacts again. The next
interaction’s position is then limited by the constraints outlined above if it
is to fall within the sensitive volume. This first interaction can happen
anywhere along the volume, but the constraints are relative to the position of
that point. This implies that we can invoke an equivalence principle and
reduce the problem to that of an isotropic point source positioned at the
surface of a sensitive volume.
### Mathematical expression of the equivalence principle
We need to calculate what fraction of the paths starting in the given point
can interact with the sensitive volume given the fact that there is a length
within which this is not likely, provided by $\lambda(E)$, and that there is a
maximal distance ($H$) that disqualifies the generated lesion to be registered
in the same cluster. We have reduced this problem to that of the distribution
of projections of a point source on a line–piece, the solution to which is
known as the Cauchy distribution[12], and is described by the Lorentz function
$f(x)$ with $x\in\Re$ expressed as follows:
$f(x)~{}=~{}\frac{1}{1+(\frac{x}{r})^{2}}$ (1)
Figure 2: The abstracted version of Figure 1 describes the distribution of
horizontal distances at which a line segment tilted at a random angle $\theta$
cuts the x–axis. Only particles with large angles contribute to double strand
break events by combining damage generated by a single particle. The red line
indicates the “forbidden” area as (on average) this distance between the
ionization events is observed.
From Figure 2, it follows that the contribution $P$ for a given energy of
charged particles to high LET events is proportional to:
$P~{}\sim~{}2\int_{H-\lambda(E)}^{H}\frac{1}{1+(\frac{x}{r})^{2}}dx$ (2)
Performing the calculation, we obtain:
$P~{}\sim~{}\frac{2}{\pi}\big[\tan^{-1}(\frac{x}{r})\big|_{H-\lambda(E)}^{H}\big]+C$ (3)
$P~{}\sim~{}\frac{2}{\pi}[\tan^{-1}(\frac{H}{r})-\tan^{-1}(\frac{H-\lambda(E)}{r})]+C$ (4)
$P~{}\sim~{}\frac{2}{\pi}[\tan^{-1}(\frac{\lambda(E)-H}{r})+\tan^{-1}(\frac{H}{r})]+C$ (5)
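As a quick sanity check on the algebra above, the integral of Equation 2 can be evaluated numerically and compared with the bracketed arctangent terms of Equations 4 and 5. The Python sketch below is not part of the original analysis; the values of $H$, $r$ and $\lambda(E)$ are purely illustrative, and the factor $r$ from the antiderivative is absorbed in the proportionality.

```python
import numpy as np
from scipy.integrate import quad

H, r, lam = 3.4, 2.37, 5.0   # hypothetical cluster length, radius and mean free path

# Direct integration of the Lorentz kernel of Eq. 2 over [H - lambda(E), H]
numeric, _ = quad(lambda x: 1.0 / (1.0 + (x / r) ** 2), H - lam, H)

# Antiderivative r * arctan(x / r) evaluated at the bounds: the bracketed
# term of Eq. 4 (equivalently Eq. 5), times the factor r
closed = r * (np.arctan(H / r) - np.arctan((H - lam) / r))

print(numeric, closed)   # the two values agree to floating-point precision
```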
This implies that the amount of DSB–damage for a given dose and given energy
of the charged particle is governed by the following expression.
$F_{cd}(E)~{}=~{}(a-b)\frac{2}{\pi}[\tan^{-1}(\frac{\lambda(E)-H}{r})]+b$ (6)
The change from a low to a high LET regime occurs over a small energy
interval. In such a small interval the average distance dependence on the
energy of the particle can be approximated with a linear function. Therefore,
we expect the energy dependence of the contribution of complex damage to
follow the same form as in Equation 6 yielding the following expression, with
H=$\lambda(E_{0})$ and r=$\lambda(\Gamma/2)$, $E_{0}$ being the energy, where
the change in DSB is maximized and $\Gamma$ a measure for the width of the
slope (i.e. the full width at half maximum in differential energy space).
$F_{cd}(E)~{}=~{}(a-b)\frac{2}{\pi}[\tan^{-1}(\frac{E-E_{0}}{\Gamma/2})]+b$
(7)
With the parameters $a$ and $b$ related to the levels 1 and 2 as outlined
above. From boundary conditions we find that at very large energies (i.e.
$E>>E_{0}$) the expression is reduced to minimal number of double interactions
($D_{min}$) which is equal to $a$. The value of $b$ is related to the maximal
number of double interactions ($D_{max}$) as follows:
$D_{max}=D_{min}+(1-\tan^{-1}(-\frac{E_{0}}{\Gamma/2}))b.$ (8)
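For concreteness, the response of Equation 7 can be evaluated directly once the four parameters are known. The short Python sketch below is our reading of Eq. 7, using the proton parameters of Table 1 purely as an example rather than as a reference implementation.

```python
import numpy as np

def f_cd(E, a, b, E0, gamma):
    """Eq. 7: complex-damage yield (DSB per Gy per Gbp per cell) at kinetic energy E."""
    return (a - b) * (2.0 / np.pi) * np.arctan((E - E0) / (gamma / 2.0)) + b

# Proton parameters from Table 1 (energies in MeV)
E = np.logspace(-3, 2, 200)
yield_protons = f_cd(E, a=2.89068, b=21.4273, E0=0.1642, gamma=0.5575)
```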
The formalism using energy alone allows us to forego specific assumptions
regarding the dimensions of the DNA–molecule. Furthermore, it also allows us
to apply this technique to particles where the values for $\lambda(E)$ are
less well known. In addition, it allows us to test this formalism using
experimentally available data which is available as a function of energy.
### Validation using Monte Carlo Simulations
The use of microdosimetric calculations has provided important insight into
the mechanisms and effects of radiation deposition. In the past, Monte Carlo
simulations of charged particle deposition by various modalities were used to
quantify and typify the kinds of damage introduced by the different
modalities[10].
The Monte Carlo Damage Simulation code (MCDS) developed by Semenenko and
Stewart, generates spatial maps of the damaged nucleotides forming many types
of clustered DNA lesion, including single-strand breaks (SSB), double strand
breaks (DSB), and individual or clustered base damages[11]. This approach has
been shown to yield a linear relationship of the number of generated DSB’s up
to a high dosage. It follows that this parameterization also provides the
possibility to link dose to damage. In this paper, MCDS version 3.0 was used
with the parameters described below. The DNA length was chosen to be 1 Gbp (Giga base pairs) and the nucleus diameter 5$\mathrm{\mu m}$. In the
MCDS software, the geometry of the DNA–molecule is not an explicit parameter.
Here four parameters are used: 1) the DNA–segment length $n_{seg}$, an ad hoc parameter expressed in base pairs $Gy^{-1}cell^{-1}$, 2) the number of strand breaks generated $\sigma_{Sb}$, 3) the number of base pair damages generated $\sigma_{Bb}$, specified through the ratio $f=\sigma_{Bb}/\sigma_{Sb}$, and 4) a parameter $N_{min}$ (bp) describing the minimal separation beyond which two damage sites are no longer counted as part of the same cluster. The values of these parameters are determined on the basis of other simulations and measurements. For a more in–depth treatment of these parameters we refer to the work by Semenenko and Stewart[13]. The variable input parameters of MCDS were: the modality
(i.e. energy depositing particle (electron, proton,…)), the energy (in MeV),
and the oxygen concentration in %. In the implementation described here we
chose to omit any oxygen enhancement as this could be a confounding factor and
is the subject of another study. In this study it was found that oxygen only
changed the amount of damage in the low LET regime, leaving the formalism
unchanged (data not shown). Therefore, a concentration of 0% oxygen was used.
For every particle type at the relevant kinetic energies, all complex damage
was noted per Gy, per cell and per kinetic energy.
### Fitting procedure
The ultimate goal was to fit the complex damage function to the data obtained
by the Monte Carlo simulation. The parameters that need fitting are the energy
position ($E_{0}$) the width of the underlying Cauchy distribution ($\Gamma$)
and the parameters $a$ and $b$. If a regular fit (i.e. all parameters fit at
the same time) is performed we see strong co–variances between the parameters.
To come to meaningful results we opted for a two-step procedure: first, we eliminate the parameters $a$ and $b$ by fitting the differential, thereby reducing expression 6 to the Lorentz function.
$\frac{dF_{cd}}{dE}=\frac{\Gamma^{2}/4}{(E-E_{0})^{2}+\Gamma^{2}/4}$ (9)
This is also mathematically equivalent to the fit of a Breit–Wigner resonance
in high energy physics[14]. In a second fit–procedure, the remaining variables
$a$ and $b$ are fit using the cumulative Cauchy function. The fitting
procedures were performed with the gnuplot software (http://www.gnuplot.info)
using a Levenberg–Marquardt minimization routine.
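A minimal sketch of this two-step procedure, using Python/SciPy instead of gnuplot, is given below. The synthetic `energies` and `dsb_yield` arrays merely stand in for the MCDS output and are generated from Equation 7 with proton-like parameters; they are illustrative, not the data used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(E, E0, gamma):
    """Eq. 9: normalized differential of the complex-damage curve."""
    return (gamma ** 2 / 4.0) / ((E - E0) ** 2 + gamma ** 2 / 4.0)

def f_cd(E, a, b, E0, gamma):
    """Eq. 7: cumulative Cauchy form of the complex-damage yield."""
    return (a - b) * (2.0 / np.pi) * np.arctan((E - E0) / (gamma / 2.0)) + b

# Synthetic stand-in for the MCDS output (proton-like parameters, cf. Table 1)
energies = np.linspace(0.01, 5.0, 200)                     # MeV
dsb_yield = f_cd(energies, 2.89, 21.43, 0.1642, 0.5575)
dsb_yield += np.random.normal(0.0, 0.05, energies.size)    # mimic Monte Carlo noise

# Step 1: fit the normalized finite-difference derivative to the Lorentz shape.
# The sign is flipped so the peak is positive (the yield decreases with energy).
dF = -np.gradient(dsb_yield, energies)
dF /= dF.max()
(E0_fit, gamma_fit), _ = curve_fit(lorentz, energies, dF,
                                   p0=(energies[np.argmax(dF)], 1.0))

# Step 2: with E0 and Gamma fixed, fit the remaining level parameters a and b
(a_fit, b_fit), _ = curve_fit(lambda E, a, b: f_cd(E, a, b, E0_fit, gamma_fit),
                              energies, dsb_yield,
                              p0=(dsb_yield.min(), dsb_yield.max()))
```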
## Results
In Figure 3, the Lorentz expression as outlined in Equation 6, together with a
normalization factor, is used to fit the energy differential probability for
the generation of DSBs. The fit is performed to minimize the $\chi^{2}$–value.
In all cases, the resulting $\chi^{2}/NDF$ (NDF = Number of Degrees of
Freedom) are lower than 1. The values of the parameters are provided in Table
1.
(a) Electrons (b) Protons (c) $\alpha$–particles (d) Carbon Ions
Figure 3: Fitting the Cauchy expression to the energy differential probability of generating DSB’s, denoted $\frac{d\sigma}{dE}$.
Particle | $\Gamma$ | $E_{0}$ | $a$ | $b$
---|---|---|---|---
$e^{-}$ | (2.854$\pm$0.051)$10^{-04}$ MeV | (1.05736$\pm$0.036)$10^{-04}$ MeV | 2.9061 | 21.460
$p^{+}$ | 0.5575$\pm$0.0094 MeV | 0.1642 $\pm$ 0.0037 MeV | 2.89068 | 21.4273
$\alpha^{++}$ | 8.20$\pm$0.17 MeV | 3.1850$\pm$0.056 MeV | 3.0856 | 20.7933
C6+ | 201.7$\pm$8.4 MeV | 95.4$\pm$2.5 MeV | 3.01459 | 21.8489
Table 1: The different values for $\Gamma$ and $E_{0}$ as defined by Equation
9 and obtained from a fitting procedure together with the asymptotic standard
error of the fitted parameter. All fits exhibited minimal values of
$\chi^{2}/NDF$ (NDF = Number of Degrees of Freedom). The columns $a$ and $b$
are the parameters indicating the levels of DSB at low, resp. high LET. Note
that even in low LET the number of DSB’s is not zero as complex damage can
occur due to the combination of simple damage events.
All fits are completely satisfactory at energies higher than $E_{0}$. On the
lower energy side some discrepancies can be observed depending on the incoming
particles, particularly in the case of electrons and carbon ions. We refer the
reader to the discussion section. For protons we see a satisfactory fit over
the full energy range.
Figure 4 shows the final results with all parameters fit. Again, all fits have
$\chi^{2}$–values commensurate with a positive goodness of fit. The final
values and the standard errors for the fitted parameters are listed in Table
1. Note that the noise in the differential curves increases as the particles
become heavier. The random-seeming errors in the estimates of the derivative
arise in part from the Monte Carlo estimates of the mean number of DSB per Gy
per Gbp and from numerical instabilities associated with the calculation of
the derivative using finite difference methods.
(a) Electrons (b) Protons (c) $\alpha$–particles (d) Carbon Ions
Figure 4: The prediction of the number of double strand breaks or more
complex damage as a function of energy for 4 relevant charged particles. This
provides the number of Double Strand breaks (DSB) per Gy, Gbp and per cell.
The prediction for protons and alpha particles is almost perfect. For
electrons and carbon ions some discrepancies exist at lower energies.
### Geometric approach
We now investigate the geometric interpretation further. To quantify the function $\lambda(E)$ we can use the inelastic mean free path (IMFP) as a measure. Values of the IMFP for electrons are well known in the
literature, not in the least as they are important in solid state physics and
electron microscopy. They can be found in freely available databases for a
variety of elements and compounds, even for organic molecules like DNA[15].
Proton values can be found in a publication by Zhen–Yu and colleagues[16]. For
heavier particles such as $\alpha$–particles and carbon–ions, the data is more
difficult to find. We therefore opt not to use the data for these particles
and restrict ourselves to electrons and protons in this further treatment.
In all current microdosimetric codes, the Bethe formalism is used which is
valid for higher energies (i.e. above 500eV for electrons). This implies that
changes in IMFP, denoted by $\lambda$, which impact the damage calculated
using these codes, also reflect the limitations of the Bethe formalism. From
the theory the following expression is used:
$\lambda(E)=\frac{E}{A\log(E/E_{0})+B}$ (10)
Particle | A | B
---|---|---
Electrons | 69.200 eV/nm | -153.94 eV/nm
Protons | 115.231 keV/nm | -301.45 keV/nm
Table 2: Parameters obtained by fitting Eq. 10 to data obtained from NIST
(electrons) and Zhen Yu et al. (protons)
In this work the parameters $H$ and $r$ have thus far not been linked to any
physical property but were fit. An interesting proposition could be to link
these to dimensions of the target structure. Indeed, the choice of a cylinder
as a geometric representation is not an accident. It is natural to use the
diameter of a DNA–molecule as a measure of the cylinder’s diameter. The length of the cylinder is then related to the maximal distance at which two damage events are still classified as part of the same cluster of complex damage. Both
values can readily be found in the literature and text books[17]. For the most
prevalent form of cellular DNA (B–DNA), the values are $3.4$nm (i.e. the
height of a spiral of 10 base pairs), and $2.37$nm as the diameter. We now
define a dimensionless quantity $k$ which is specific to the type of charged
particle used. It is clear that this parameter acts as a scaling parameter but
also depends on the ratio of both fixed parameters. Equation 6 now reads as
follows:
$F_{d}(E)~{}=~{}(a-b)\frac{2}{\pi}[\tan^{-1}(\frac{k\lambda(E)-3.4}{2.37})]+b$
(11)
This reduces the impact of the charged particle’s energy on the induction of
complex damage in a DNA–molecule to three parameters $a$, $b$, and $k$. Figure
5 illustrates the use of these parameters and shows that comparable results to
the energy–based formalism can be obtained. It follows that we can repeat the
fitting procedure keeping $a$ and $b$ from the expression based on energy (Eq
7). We find values of k=5.18 for electrons and k=4.82 for protons.
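A short sketch of this geometric form is given below, combining Equation 10 (with the electron parameters of Table 2) and Equation 11 with k=5.18. The reference energy $E_{0}$ in Equation 10 is assumed here to be 1 eV, since Table 2 lists only A and B; treat the snippet as an illustration rather than a reference implementation.

```python
import numpy as np

def imfp_bethe(E_eV, A=69.200, B=-153.94, E0=1.0):
    """Eq. 10: inelastic mean free path (nm) for electrons of kinetic energy E (eV)."""
    return E_eV / (A * np.log(E_eV / E0) + B)

def f_d(E_eV, k=5.18, a=2.9061, b=21.460):
    """Eq. 11 with the B-DNA geometry (length 3.4 nm, diameter 2.37 nm) fixed."""
    lam = imfp_bethe(E_eV)
    return (a - b) * (2.0 / np.pi) * np.arctan((k * lam - 3.4) / 2.37) + b

E = np.linspace(50.0, 1.0e5, 500)   # electron kinetic energies in eV
complex_damage = f_d(E)             # decreases towards a as the energy (and IMFP) grows
```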
### Extending the model
In the work presented above, as well as in the Monte Carlo simulations used, the Bethe–Barkas formalism has been applied throughout, despite its known limitations at lower energies. It is well established that the IMFP does
not follow the expression outlined in Equation 10, where $\lambda(E)$ keeps
diminishing as the energy diminishes. Indeed, when the energy is lower than
200eV an increase in IMFP is observed due to plasmonic effects[18]. Ziaja et
al[19] showed that it is possible to describe this behavior analytically by
extending Equation 10 with a second term as follows:
$\lambda(E)_{Z}~{}=~{}\frac{\sqrt{E}}{A_{1}(E-E_{th})^{B_{1}}}~{}+~{}\frac{E-E_{0}\exp(-B/A)}{A\log(E/E_{0})+B}$
(12)
In this equation the parameter $E_{th}$ serves as a threshold separating the
behavior as described by Bethe from the plasmonic interactions. Using the data
provided in the work from Zhen–Yu and colleagues[16] it is straightforward to
obtain parameters for the behavior of protons. These are presented in Table 3.
Particle | $A_{1}$ | $B_{1}$ | $E_{th}$ | $A$ | $B$ | $E_{0}$
---|---|---|---|---|---|---
Electrons | 0.6560 | 1.0100 | 24.2838 | 65.898 | -128.23 | 1.0
Protons | 0.681 | 1.249 | 42.38 | 117.01 | -318.7 | $1.0\times 10^{3}$
Table 3: Parameters as in Table 2 with added lower energy factors. The
fitting was performed using Eq. 12
To extend our model to incorporate the behavior of very low energy particles
it is sufficient to replace the expression $\lambda(E)$ by $\lambda(E)_{Z}$ in
equation 11. In Figure 5, the modified curves show the difference with the
calculations based on the Bethe formalism only. This also shows that there is
an upper limit to the increase in DSB’s which depends on the type of particle.
It is conceivable that this approach also works for the heavier particles
which can be seen when using the IMFPs in water for these particles (not shown).
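For illustration, the extended mean free path of Equation 12 can be coded directly from the electron parameters in Table 3. The sketch below is our reading of that expression; it is only evaluated above the threshold $E_{th}$, where the first term is well defined.

```python
import numpy as np

def imfp_ziaja(E, A1=0.6560, B1=1.0100, Eth=24.2838, A=65.898, B=-128.23, E0=1.0):
    """Eq. 12: electron IMFP with the low-energy (plasmonic) term, parameters from Table 3 (eV, nm)."""
    plasmonic = np.sqrt(E) / (A1 * (E - Eth) ** B1)
    bethe = (E - E0 * np.exp(-B / A)) / (A * np.log(E / E0) + B)
    return plasmonic + bethe

E = np.linspace(30.0, 5.0e3, 400)   # eV, above the ~24.3 eV threshold
lam_Z = imfp_ziaja(E)               # rises towards low energies, approaches the Bethe form at high energies
```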
Figure 5: Using the quantities for $H$ and $r$, the dimensionless constant
$k$ for electrons (left) and protons (right) is determined. Using both the
limited expression for $\lambda(E)$ and the more accurate estimate
$\lambda_{Z}(E)$. The former provides a fit to the Monte Carlo data comparable
with the results obtained using the energy–based formalism. The second
approach provides a maximal complex damage yield which differs for electrons
and protons.
## Discussion
We developed an approach to predict damage in complicated situations where
fields of different charged particles and their respective energy spectra
impact on living cells. The approach, due to its analytical nature allows very
fast calculation of damage in otherwise long simulations. In the derivation of
this approach using energy alone there are no assumptions on the mechanics
with which DNA–damage is caused by the charged particles. The only assumption
is that there is a sensitive volume where, if ionizations take place, damage
is introduced in the DNA. How exactly this damage is caused is not specified.
In the remainder of the text a parameter is identified, the average distance
between ionizations for the given charged particle in the medium ($\lambda$).
We show that this approach adequately quantifies the results from Monte Carlo
simulations based on phenomenological data and reduces these to a closed
analytical expression whereby the type of charged particle is expressed by a
single parameter ($k$). On the other hand we should be aware that issues like
repair mechanisms and oxygen effects are not present in the model, making its
applicability limited. However, if all things are identical (i.e. the type of
cells, oxygenation, etc…) and the only thing different is the type of charged
particle and its energy, then the original damage introduced in the DNA
structure should correlate with the outcome. An underlying assumption here is
that the repair processes are somehow independent from the modality with which
the cell is irradiated.
The results of this approach can be applied to determine the biological impact
of radiation in mixed environments, as in the case of proton therapy, where
protons, electrons and heavier ions (due to neutrons), deposit energy. Other
approaches have been proposed to try to predict outcomes from mixed fields,
which are based on available clinical response data. Most notably, an approach
based on the local effect model (LEM), where macroscopic response data in the
form of dose–effect curves is used to quantify the relative effect of the dose
delivered[20]. The parameterization, however, of the latter approach is
extensive due to the fact that every effect curve has two parameters for a
given $\alpha/\beta$–value, making the model over–parameterized. As such, it
is possible to have this model reflect the current knowledge of dose and
modality response adequately, which forms an important, albeit controversial
tool[21, 22]. Its power to predict the behavior outside of the current
knowledge therefore seems to be limited.
Cucinotta et al. attempted to incorporate the volumetric properties of the
dose deposition[23] to account for differences in track structure. They
observed that: “LET is a poor descriptor of energy deposition in small volumes
because of the diffusion of secondary electrons out of the volume and
contribution of $\delta$–rays that pass outside of the volume”. To address
this problem a quantification of the energy distribution of generated
secondary particles, or $\delta$–rays was proposed.
Such a secondary charged particle indirectly changes the behavior with respect
to the DNA damage induced. Indeed, depending on the median energy of the
spectrum the DNA damage changes accordingly if the dose is kept constant. In
the paper presented here this behavior could be easily incorporated by
considering the DNA damage for all the particles (i.e. ions and $\delta$–rays)
separately using a methodology modeled on the use of the electronic
equilibrium concept in photon cavity theory. Currently this behavior is hidden
in the $k$ parameter and it would be interesting to see if such an approach
will lead to a convergence of all $k$–values for all particles.
To take these actions fully into account an approach to provide a more
detailed model of the biological effect directly in the Monte Carlo simulation
is proposed by Sato et al.[24] This would, in theory, allow a direct
calculation of the effect in terms of energy deposited. However, as outlined
by Cucinotta this is not without problems as the behavior of low energy
electrons needs to be adequately modelled. This work predicts that the current
knowledge using the Bethe formalism, might not be suitably extended.
The results from the geometric interpretation indicate that the overall
behavior of the DNA damage induction is identical for all types of charged
particles. The only difference is in the dimensionless parameter $k$. The
latter seems to change as the ion used is heavier. Preliminary calculations
using the IMFP in water indicate that the value of $k$ diminishes as the
charged particles used are heavier (or more charged, data not shown). A
possible reason for this is that the track structure can be quite different
for different charged particles. This fact could also be an explanation for
the discrepancy found at very low energies for carbon–ions. Indeed, allowing
the parameter $k$ to be covariant with the other parameters does provide a more adequate fit (data not shown).
The results for the electrons also show a discrepancy with regard to the generation of complex damage at lower energies. For very low energy electrons, data in terms of energy deposition are not available. Indeed, the model proposed here predicts a much lower incidence of complex damage in that region due to plasmonic effects.
In summary, the model proposed here allows extension to very low energies for
electrons and protons. The fact that there are indications that the induction
of DSB’s varies linearly with dose, provides an easy implementation to dose
planning systems, given the knowledge of dose deposition spectra in a
treatment beam. An example of such implementation is provided in the Appendix.
## Appendix: Implementation in dose deposition calculations
### Mono–energetic treatment
In dose calculations a dose matrix is obtained on a dose grid. Let $\mathbf{D}=D[i,j,k]$ be the dose matrix provided. Then we can write the amount of complex damage incurred by particles with an energy $E$ as a damage matrix, denoted $\mathbf{M_{cd}}$, as follows:
$\mathbf{M_{cd}}~{}=~{}M_{cd}[i,j,k]~{}=~{}\mathbf{D}\times F_{cd}(E)$ (13)
$F_{cd}(E)$ then denotes a response function converting dose to damage.
### Poly–energetic treatment
Dose deposition spectra rarely consist of a field of mono–energetic electrons.
For a photon source with a given photon spectrum, an energy depositing electron field exists, which is roughly constant throughout the target
volume. Using Monte Carlo simulations it is possible to calculate this field
and its spectrum $\Psi(E)$. It then becomes possible to include the spectrum
in the calculation of the damage matrices. This approach has been used already
by different authors [25, 26].
$\mathbf{M_{cd}}~{}=~{}\mathbf{D}\times\frac{\int_{0}^{E_{max}}\Psi(E)F_{cd}(E)dE}{\int_{0}^{E_{max}}\Psi(E)dE}$
(14)
In the case of charged particle treatment, the particles are moderated and the
energy spectrum changes depending on the position of the point where the dose
is being deposited. It is therefore necessary to apply Equation 14 to each
point separately with knowledge of the depositing energy spectrum in that
point. Due to the closed nature of the formalism developed in this paper, it
becomes feasible to use off the shelf computing equipment.
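A compact sketch of Equations 13 and 14 is shown below. The dose matrix, the local spectrum $\Psi(E)$ and the response function are placeholders (the proton parameters of Table 1 are reused for $F_{cd}$); in practice $\mathbf{D}$ and $\Psi(E)$ come from the treatment-planning Monte Carlo calculation.

```python
import numpy as np

def f_cd(E, a=2.89068, b=21.4273, E0=0.1642, gamma=0.5575):
    """Eq. 7 response function (proton parameters of Table 1), E in MeV."""
    return (a - b) * (2.0 / np.pi) * np.arctan((E - E0) / (gamma / 2.0)) + b

# Eq. 13: mono-energetic case -- the dose grid is simply scaled by F_cd(E)
D = np.random.rand(64, 64, 40)          # example dose matrix D[i, j, k] in Gy
M_cd_mono = D * f_cd(100.0)             # damage matrix for 100 MeV protons

# Eq. 14: poly-energetic case -- spectrum-weighted mean of the response function
E_grid = np.linspace(0.1, 200.0, 500)                # MeV
Psi = np.exp(-((E_grid - 150.0) / 20.0) ** 2)        # hypothetical local proton spectrum
response = np.trapz(Psi * f_cd(E_grid), E_grid) / np.trapz(Psi, E_grid)
M_cd_poly = D * response
```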
### Application: Proton treatment
Recently, the coupling of Monte Carlo simulations in dose deposition to micro-
dosimetric code has been proposed and applied by several groups[26, 25]. Here a two-step approach is followed: 1) a general purpose Monte Carlo code (MCNPX
2.7b)[27] is used to estimate the spectrum of all different dose contributing
particles, 2) a micro dosimetric code[13] is used to determine the biological
damage.
The framework for conversion of dose to biological effect is implemented on a
simulation of a pristine 200MeV proton beam, taking into account the changing
proton spectrum. The proton simulation is performed using MCNPX. Figure 6(a)
shows the variation of the number of complex damage events as a function of
energy of the proton. In addition, the spectrum of depositing protons is shown
at a position before the Bragg peak and at the Bragg peak. In Figure 6(b) the
effect on the dose deposition is shown together with the $RBE_{cd}$ calculated
as the complex damage yield generated by the protons at that particular
position, divided by the complex damage induced by a 6MV photon beam with the
same spatial characteristics. Note that the $RBE_{cd}$ is of the order of 1.1, with a larger value of 2 a few mm distal to the Bragg peak. This is commensurate with the cell data reported by Paganetti et al.[28] and Chaudhary et al.[29], who showed that the radiobiological effect at the distal end of a spread-out Bragg peak increases, a fact predicted by Goitein[30]. Currently,
data of direct measurement of DNA–damage in–vitro along a proton beam are
scarce. The advent of $\gamma$–H2AX measurements, as a marker for DSB–damage
is promising in this regard and has been used to investigate anti–protons[31].
(a) Impact of the spectrum (b) Proton dose deposition
Figure 6: In Fig 6(a) the spectrum at the beginning of the Bragg peak (scaled by 0.1) is completely within the low LET regime, while at the end of the Bragg peak a significant part of the dose depositing protons exhibits high LET
characteristics. Fig.6(b) shows the $\mathrm{RBE_{cd}}$ (red line) together
with the damage induced by a mono–energetic proton beam.
## Acknowledgements
I am indebted to Rob Stewart, not only for generously providing the MCDS–code
for everyone to use, but also for providing much needed input on the
biological aspects of the MCDS implementation. Both Ricardo Raabe and Mike
Partridge reviewed some of the physics and made it a more rigorous paper.
Sandra Nuyts and Dirk De Ruysscher’s clinical input was also highly
appreciated.
## References
* 1. Hall EJ (1978) LET and RBE. In: Radiobiology for the radiologist, Philadelphia: Harper & Row, Publishers.
* 2. Caldecott KW (2008) Single-strand break repair and genetic disease. Nat Rev Genet 9: 619–631.
* 3. Ward JF (1985) Biochemistry of dna lesions. Radiation Research Supplement 8: pp. S103-S111.
* 4. Bethe H (1930) Zur theorie des durchgangs schneller korpuskularstrahlen durch materie. Ann Phys 5.
* 5. Barkas W, Dyer J, Heckman H (1963) Resolution of the $\sigma$–mass anomaly. Phys Rev Lett 11: 26.
* 6. Brenner DJ, Ward J (1992) Constraints on energy deposition and target size of multiply damaged sites associated with dna double-strand breaks. International Journal of Radiation Biology 61: 737-748.
* 7. ICRU, International Commission on Radiation Units and Measurements (1970) Linear energy transfer. ICRU Report 16.
* 8. ICRU, International Commission on Radiation Units and Measurements (1971) Radiation quantities and units. ICRU Report 19.
* 9. ICRU, International Commission on Radiation Units and Measurements (1983) Microdosimetry. ICRU Report 36.
* 10. Nikjoo H, O’Neill P, Goodhead DT, Terrissol M (1997) Computational modelling of low-energy electron-induced DNA damage by early physical and chemical events. Int J Radiat Biol 71: 467-473.
* 11. Stewart RD, Yu VK, Georgakilas AG, Koumenis C, Park JH, et al. (2011) Effects of radiation quality and oxygen on clustered DNA lesions and cell death. Radiation Research 176: 587-602.
* 12. Cauchy A (1853) Sur les résultats moyens d’observations de même nature, et sur les résultats les plus probables. Comptes Rendus de l’Académie des Sciences : 198-206.
* 13. Semenenko V, Stewart R (2006) Fast Monte Carlo simulation of DNA damage formed by electrons and light ions. Phys Med Biol 51: 1693-1706.
* 14. Breit G (1959) Handbuch der Physik XLI/1. Berlin, Heidelberg: Springer.
* 15. Powell C, Jablonski A (2010) Nist standard reference database 71. In: NIST Electron Inelastic–Mean–Free–Path Database — Version1.2, NIST Gaithersburg, MD.
* 16. Zhen-Yu T, Yue-Yuan X, Ming-Wen Z, Xiang-Dong L (2010) Proton inelastic mean free path in a group of organic materials in 0.05-10mev range. Chinese Physics Letters 27: 113403.
* 17. Sinden R (1994) DNA structure and function. San Diego: Academic Press.
* 18. Tanuma S, Powell CJ, Penn DR (2005) Calculations of electron inelastic mean free paths. Surface and Interface Analysis 37: 1–14.
* 19. Ziaja B, London RA, Hajdu J (2006) Ionization by impact electrons in solids: Electron mean free path fitted over a wide energy range. Journal of Applied Physics 99: 033514.
* 20. Kraemer M, Scholz M (2006) Rapid calculation of biological effects in ion radiotherapy. Phys Med Biol 51: 1959-1970.
* 21. Katz R (2003) The parameter-free track structure model of Scholz and Kraft for heavy-ion cross sections. Radiation Research 160: 724–728.
* 22. Paganetti H, Goitein M (2001) Biophysical modelling of proton radiation effects based on amorphous track models. International Journal of Radiation Biology 77: 911-928.
* 23. Cucinotta FA, Nikjoo H, Goodhead DT (2000) Model for radial dependence of frequency distributions for energy imparted in nanometer volumes from hze particles. Radiation Research 153: 459–468.
* 24. Sato T, Kase Y, Watanabe R, Niita K, Sihver L (2009) Biological dose estimation for charged-particle therapy using an improved phits code coupled with a microdosimetric kinetic model. Radiation Research 171: 107–117.
* 25. Van den Heuvel F, Locquet JP, Nuyts S (2010) Beam energy considerations for gold nano-particle enhanced radiation treatment. Physics in Medicine and Biology 55: 4509.
* 26. Hsiao Y, Stewart RD (2008) Monte carlo simulation of DNA damage induction by X-rays and selected radioisotopes. Phys Med Biol 53: 233.
* 27. Waters LS, McKinney GW, Durkee JW, Fensin ML, Hendricks JS, et al. (2007) The MCNPX Monte Carlo radiation transport code. In: Albrow, M and Raja, R, editor, Hadronic Shower Simulation Workshop. Amer Inst Physics, volume 896 of _AIP conference proceedings_ , pp. 81-90. Hadronic Shower Simulation Workshop, Batavia, IL, SEP 06-08, 2006.
* 28. Paganetti H, Niemierko A, Ancukiewicz M, Gerweck LE, Goitein M, et al. (2002) Relative biological effectiveness (RBE) values for proton beam therapy. International journal of radiation oncology, biology, physics 53: 407–421.
* 29. Chaudhary P, Marshall TI, Perozziello FM, Manti L, Currell FJ, et al. (2014) Relative biological effectiveness variation along monoenergetic and modulated bragg peaks of a 62-mev therapeutic proton beam: A preclinical assessment. Int J Radiat Oncol Biol Phys 90: 27–35.
* 30. Goitein M (2008) Radiation Oncology: A Physicist’s-Eye View. Springer Verlag.
* 31. Kavanagh JN, Currell FJ, Timson DJ, Savage KI, Richard DJ, et al. (2013) Antiproton induced dna damage: proton like in flight, carbon-ion like near rest. Sci Rep 3: –.
1 University of British Columbia, Vancouver, BC, Canada
2 University of Victoria, Victoria, BC, Canada
3 University of Campinas, Campinas, Brazil
4 Simon Fraser University, Burnaby, BC, Canada
<EMAIL_ADDRESS>
# BiasPruner: Debiased Continual Learning for Medical Image Classification
Nourhan Bayasi1 (0000-0003-4653-6081), Jamil Fayyad2 (0000-0003-1553-8754), Alceu Bissoto3 (0000-0003-2293-6160), Ghassan Hamarneh4 (0000-0001-5040-7448), Rafeef Garbi1 (0000-0001-6224-0876)
###### Abstract
Continual Learning (CL) is crucial for enabling networks to dynamically adapt
as they learn new tasks sequentially, accommodating new data and classes
without catastrophic forgetting. Diverging from conventional perspectives on
CL, our paper introduces a new perspective wherein forgetting could actually
benefit the sequential learning paradigm. Specifically, we present BiasPruner,
a CL framework that intentionally forgets spurious correlations in the
training data that could lead to shortcut learning. Utilizing a new bias score
that measures the contribution of each unit in the network to learning
spurious features, BiasPruner prunes those units with the highest bias scores
to form a debiased subnetwork preserved for a given task. As BiasPruner learns
a new task, it constructs a new debiased subnetwork, potentially incorporating
units from previous subnetworks, which improves adaptation and performance on
the new task. During inference, BiasPruner employs a simple task-agnostic
approach to select the best debiased subnetwork for predictions. We conduct
experiments on three medical datasets for skin lesion classification and chest
X-Ray classification and demonstrate that BiasPruner consistently outperforms
SOTA CL methods in terms of classification performance and fairness. Our code
is available here.
###### Keywords:
Continual Learning Debias Pruning Shortcut Learning.
## 1 Introduction
Humans inherently learn in a continual manner, acquiring new concepts over
time without forgetting previous ones. In contrast, deep learning models
encounter the challenge of catastrophic forgetting [17], wherein learning new
data can override previously acquired knowledge. This issue becomes especially
pronounced within the medical domain, given the ever-evolving nature of
medical data, the variations in acquisition protocols, the utilization of
diverse devices for obtaining medical images, and other factors that
contribute to shifts in data distributions or the introduction of new disease
classes over time. As a result, continual learning (CL) [25, 27] has emerged
as a promising solution, allowing a network to learn continually over a
sequence of presented data while forgetting as little as possible about
previous knowledge. Several CL methods have emerged within the medical field
to address the challenge of forgetting. Replay-based methods [22, 13] store a
subset of data samples and replay them to retain old information,
regularization-based methods [15] impose restrictions on the network parameter
updates to preserve prior knowledge while learning new tasks, and
architecture-based methods, assign specialized architectural components for
each task within the network [1, 2] or expand them to accommodate new tasks
[9].
While previous CL methods achieved success, they have yet to consider a more
realistic setting in which dataset bias exists. In medical imaging, bias could
manifest through an imbalanced distribution of sensitive attributes (e.g.,
gender, age, ethnicity) [4]. Even slight imbalances induce spurious
correlations between attributes and the classification target (diagnosis) [3],
creating an illusion of predictive power that models can exploit. Leveraging
such information compromises the network’s generalization ability, amplifying
societal biases over misrepresented populations in data (e.g., detecting
melanoma in individuals with dark skin tones). In CL, learning spurious
correlations poses a significant challenge due to bias transfer, where biases
learned by a model can be transferred to a downstream task even if it has
unbiased data [23]. Since CL involves learning a sequence of tasks, the bias
transfer can potentially be amplified. Moreover, recent work [5]
mathematically proved that handling bias becomes substantially harder when
tasks are presented sequentially compared to joint training.
To address this gap and tackle bias in CL, we propose BiasPruner, a fixed-size
network capable of learning sequentially and fairly over time by dedicating a
unique debiased subnetwork for each task. BiasPruner leverages a newly
proposed bias score to measure the contribution of each unit in the network to
learning spurious features. Units with high bias scores are pruned to form a
task-specific debiased subnetwork, which is kept frozen to avoid forgetting,
whereas the remaining pruned units are subsequently offered for learning new
tasks. Fig. 1 presents an overview of our method. We evaluate our solution on
three medical imaging classification datasets, each with different bias
attributes. Our results demonstrate BiasPruner’s superior performance in both
classification accuracy and fairness. While a few recent methods have
addressed fairness in CL [14, 16, 6], BiasPruner, to the best of our
knowledge, is the first work in the medical field covering different
benchmarks and bias attributes in a class-incremental setup. Crucially,
BiasPruner does not require dataset biases to be explicitly annotated. This is
particularly relevant in healthcare, where identifying biases is complex and
costly, compounded by patient data privacy concerns [19].
Figure 1: (Left) BiasPruner learns sequentially, allocating a subnetwork for
each task. (Right) BiasPruner evaluates each network unit’s contribution to
learning spurious features from biased training data, assigning bias scores.
High-score units are pruned, and the subnetwork is finetuned on both easy and
hard samples.
## 2 Methodology
BiasPruner employs a fixed-size network, $f$, capable of learning $T$ tasks
sequentially, one at a time, where $T$ is not pre-determined, without
forgetting any of the previously learned tasks. During training the $t$-th
domain, where $t\in\\{1,2,\ldots,T\\}$, the network does not have access to
old data, i.e., it exclusively receives biased training data
$D_{t}={(x_{i},y_{i})}$ specific to the current task, where ${(x_{i},y_{i})}$
represent the training samples, consisting of a total of $N_{t}$ images and
$y_{i}\in\mathcal{C}_{t}$ classes (note the subscript $t$ emphasizing that the
set of classes may change, including adding new classes, for new tasks). For
clarity, we employ the symbol $c$ to denote any class within the set
$\mathcal{C}_{t}$. BiasPruner creates a debiased subnetwork for the $t$-th
task by pruning units in the network that are mostly correlated with unknown
bias(es) in $D_{t}$. Furthermore, BiasPruner transfers knowledge through
pruning of the original network, including units of previously created
subnetworks, for each new task. At inference, BiasPruner identifies the
optimal subnetwork for predictions on a given data in a task-agnostic setup;
i.e., information about the task origin of a test image is unknown or
unavailable.
2.1. Detecting Spurious Features through Bias Scoring. Given a biased dataset
$D_{t}$, one of the key causes of learning shortcut predictions occurs when
the model finds it easier to learn spurious features rather than the intended
ones [19]. Consequently, we propose to intentionally encourage the network $f$
to quickly fit on the easier features from the training data of $D_{t}$. To
achieve this, we adopt the generalized cross entropy (GCE) [29] loss,
$\mathcal{L}_{\mathrm{GCE}}$, which was originally proposed to address noisy
labels by fitting on the easier clean data and slowly memorizing the hard
noisy samples. The GCE loss is formulated as follows:
$\mathcal{L}_{\mathrm{GCE}}(p(x;\theta),y)=\frac{1-p_{y}(x;\theta)^{q}}{q}$
where $q\in(0,1]$ is a hyperparameter controlling the degree of bias
amplification, $p(x;\theta)$ and $p_{y}(x;\theta)$ are the softmax output of
the network and its probability assigned to the target label $y$,
respectively. Due to the GCE loss’s gradient, which up-weights samples with a
high probability of predicting the correct target, the network quickly becomes
biased to easier samples and learns shortcuts [21].
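A minimal PyTorch-style sketch of the GCE loss is given below; the classifier, batch size and class count are placeholders, and only the loss itself follows the expression above.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """L_GCE = (1 - p_y^q) / q, averaged over the batch (q = 0.7 as in Sec. 3)."""
    probs = F.softmax(logits, dim=1)                         # p(x; theta)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # probability of the true class
    return ((1.0 - p_y.pow(q)) / q).mean()

# Example: a batch of 8 samples over 5 classes
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = gce_loss(logits, targets)   # backpropagate with loss.backward() as usual
```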
Once the network is biased, it becomes logical to identify the units that have
contributed the most to learning the shortcut in each class. To achieve this,
we partition the training data $\\{x,y\\}$ into two groups per groundtruth
class $c$: The biased sample set, $\mathcal{E}_{c}^{t}$, consists of
$(x_{i},y_{i})$ pairs that are correctly classified by the biased network with
a probability $p_{y,i}\geq\tau$; i.e., samples that are $\mathcal{E}$asier for
the network to learn. Similarly, the unbiased sample set,
$\mathcal{H}_{c}^{t}$, comprises $(x_{i},y_{i})$ pairs that are misclassified
by the biased network; i.e., samples that are $\mathcal{H}$arder to learn, as
follows (Refer to supplementary material, Fig. 4, for visualizations):
$\mathcal{E}_{c}^{t}=\\{i\,|\,y_{i}=c_{i}\,~{}\&~{}\,p_{y,i}\geq\tau\\}~{}~{},~{}~{}\mathcal{H}_{c}^{t}=\\{i\,|\,y_{i}\neq
c_{i}\,~{}\&~{}\,p_{y,i}\geq\tau\\}.$
Next, we define a bias score, $\mathcal{S}^{t}_{c,n}$, for each unit $n$ in
the biased network relative to a given class $c$ by analyzing each unit’s ReLU
activation, $a_{i}^{n}$, as follows:
$\mathcal{S}^{t}_{c,n}=\frac{1}{\left|\mathcal{E}_{c}^{t}\right|}\sum_{i\in\mathcal{E}_{c}^{t}}\text{Var}\left(a_{i}^{n}\right)-\frac{1}{\left|\mathcal{H}_{c}^{t}\right|}\sum_{i\in\mathcal{H}_{c}^{t}}\text{Var}\left(a_{i}^{n}\right).$
$\text{Var}\left(a_{i}^{n}\right)$ represents the variance of the feature map
$a_{i}^{n}$ over its spatial dimensions $(w,h)$. The final unit-based bias
score $\bar{\mathcal{S}^{t}_{n}}$ is calculated by averaging the results over
all class-specific scores. Units that respond more strongly to biased samples
(c.f. $\mathcal{E}_{c}^{t}$) than to unbiased samples (c.f.
$\mathcal{H}_{c}^{t}$) are assigned higher bias scores, designating them as
the main contributors to learning shortcuts in the network.
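The bias score can be computed per layer from the stored feature maps. The sketch below illustrates one possible implementation for a single class and a single convolutional layer; the activation tensors and the pruning threshold are illustrative and not taken from the BiasPruner code.

```python
import torch

def bias_scores(acts_easy, acts_hard):
    """Per-unit score S_{c,n}: spatial variance averaged over the easy set minus the hard set."""
    var_easy = acts_easy.var(dim=(2, 3)).mean(dim=0)   # Var over (w, h), mean over E_c^t
    var_hard = acts_hard.var(dim=(2, 3)).mean(dim=0)   # Var over (w, h), mean over H_c^t
    return var_easy - var_hard                         # one score per channel (unit)

# Example with random activations: 40 easy / 25 hard samples, 256 units, 7x7 maps
scores = bias_scores(torch.rand(40, 256, 7, 7), torch.rand(25, 256, 7, 7))

# Flag the top gamma = 60% of units (highest bias scores) for pruning
prune_mask = scores >= scores.quantile(0.4)
```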
2.2. Forming Subnetworks by Bias-aware Pruning and Finetuning. To ensure
fairness in CL, we form a task-specific, debiased subnetwork, ${f}_{t}$, by
selectively removing the units responsible for learning the bias in $D_{t}$.
The pruning involves removing the top $\gamma\%$ of units, which includes the
output feature maps with the highest bias scores and their corresponding
filters, leaving ($1-\gamma\%$) for each $f_{t}$. To counteract potential
performance drop post-pruning while prioritizing improved performance on
harder-to-learn samples, we propose a new weighted cross entropy loss,
$\mathcal{L}_{\mathrm{WCE}}$, for fine-tuning $f_{t}$ on $D_{t}$ over a few
epochs:
$\mathcal{L}_{\mathrm{WCE}}(x)=\mathcal{W}(x)\cdot\mathcal{L}_{\mathrm{CE}}\left({f}(x),y\right),~{}~{}~{}\text{where}~{}~{}~{}\mathcal{W}(x)=\exp\left(\alpha\cdot\mathcal{L}_{\mathrm{GCE}}(x)\right).$
$\alpha\in(0,1)$ is a trainable parameter, and $\mathcal{L}_{\mathrm{GCE}}(x)$
is the sample’s GCE loss value determined as discussed in Sec. 2.1. With this
weighted function, the influence of training samples in the finetuning process
varies according to their bias alignment; i.e., easy samples (c.f.,
$\mathcal{L}_{\mathrm{GCE}}(x)$ is small) are down-weighted, whereas hard
samples (c.f., $\mathcal{L}_{\mathrm{GCE}}(x)$ is large) are up-weighted,
exponentially.
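A small sketch of this weighted loss is shown below; the per-sample GCE values are assumed to have been computed with the biased network of Sec. 2.1 and passed in, and $\alpha$ is treated here as a fixed scalar rather than a trainable parameter for simplicity.

```python
import torch
import torch.nn.functional as F

def wce_loss(logits, targets, gce_biased, alpha=0.5):
    """Per-sample CE weighted by W(x) = exp(alpha * L_GCE(x)) from the biased network."""
    ce = F.cross_entropy(logits, targets, reduction="none")   # per-sample cross entropy
    weights = torch.exp(alpha * gce_biased)                   # up-weights hard (bias-conflicting) samples
    return (weights * ce).mean()

# Example usage with placeholder tensors
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
gce_biased = torch.rand(8)          # per-sample GCE values stored from Sec. 2.1
loss = wce_loss(logits, targets, gce_biased)
```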
2.3. Debiased Knowledge Transfer for Enhanced Task Adaptation. When learning a
new task, BiasPruner facilitates knowledge transfer (KT), which is achieved by
pruning the entire original network $f$ to create the new task-specific
subnetwork, including both free units and pre-assigned debiased subnetworks of
previous tasks. To avoid forgetting the previously acquired knowledge, the
subnetworks associated with prior tasks are kept frozen and only the free
units are updated to learn the new task.
2.4. Task-agnostic Inference. BiasPruner addresses a practical scenario where
the task identity of a test image is unknown during inference. In other words,
the specific task to which an image belongs is not explicitly provided.
Given a test batch of size $s$ as $\mathbf{X}^{\text{test }}$, we employ a
‘maxoutput’ strategy for task prediction [7], which involves identifying the
task with the maximum output response:
$t^{*}=\underset{t=1,2,\ldots,T}{\arg\max}\sum_{i=1}^{s}\max\varphi_{t}\left(\theta_{t}\left(\mathbf{x}_{i}^{test}\right)\right)$,
where $\varphi_{t}$ is the fully connected layer of the $t$-th subnetwork.
Subsequently, we use the selected $t^{*}$ task to make the final prediction
$\hat{\mathbf{y}}$ based on the corresponding subnetwork;
$\hat{\mathbf{y}}={f}_{t^{*}}\left(\mathbf{X}^{\text{test }}\right)$.
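The following sketch illustrates the ‘maxoutput’ selection for a list of frozen task-specific subnetworks; it is a simplified reading of the rule above, not the authors’ implementation.

```python
import torch

@torch.no_grad()
def predict_task_agnostic(subnetworks, x_test):
    """Pick the subnetwork with the largest summed maximum response, then classify."""
    responses = []
    for f_t in subnetworks:                            # frozen task-specific models f_1 ... f_T
        logits = f_t(x_test)                           # phi_t(theta_t(x))
        responses.append(logits.max(dim=1).values.sum())
    t_star = int(torch.stack(responses).argmax())      # arg max over tasks
    return t_star, subnetworks[t_star](x_test).argmax(dim=1)
```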
## 3 Experiments and Results
Datasets. We selected/constructed datasets based on three primary
considerations: (a) the presence of a dataset bias that is spuriously
correlated with the disease classes; (b) the need for a variety of classes to facilitate the CL setup; and (c) public availability to ensure reproducibility.
Hence, we include Fitzpatrick17K (FITZ) [10], HAM10000 (HAM) [24] and NIH
ChestX-Ray14 (NIH) [26]. Each dataset has 114, 7 and 14 distinct classes,
respectively, that are split into 6, 3 and 3 tasks, respectively, with non-
overlapping classes, as shown in Table 1 (Refer to supplementary material for
dataset (Table 5) $\&$ bias (Fig. 5) details).
Table 1: Details on the multi-class disease datasets used in our experiments. Dataset | Number of images | Classes | Tasks | Classes per task | Dataset bias
---|---|---|---|---|---
FITZ | 16,012 | 114 | 6 | $[19,19,19,19,19,19]$ | Skin tone (I, II, III, IV, V, VI)
HAM | 8,678 | 7 | 3 | $[2,2,3]$ | Age (age$\geq$60, age$<$60)
NIH | 19,993 | 14 | 3 | $[4,5,5]$ | Gender (male, female)
Evaluation Metrics. We assess the performance of BiasPruner using both the
accuracy and fairness metrics. We use the commonly used F1-score (F) and
balanced accuracy (ACC) metrics. We report the accuracy per sensitive
attribute (e.g., male, female) as well as overall class performance (Overall).
For fairness, we use the demographic parity ratio (DPR) and equal opportunity
difference (EOD) metrics. Similar to other CL methods, we report all metrics
at the end of learning (i.e., after training the model on all $T$ tasks),
averaged across all tasks.
Implementation Details. We use ResNet-50 [11] as the backbone for feature
extraction and a unified classifier for all tasks during inference. We use the
Adam optimizer with a batch of 32 images for 200 epochs to train BiasPruner
with $\mathcal{L}_{\mathrm{GCE}}$, having early stopping in case of
overfitting. We set $q$ in $\mathcal{L}_{\mathrm{GCE}}$ to 0.7 (default) and
the confidence threshold to $\tau=0.70$. We set the pruning ratio to
$\gamma=0.6$ for all tasks. For the finetuning with
$\mathcal{L}_{\mathrm{WCE}}$, we train the debiased subnetwork for 20 epochs, and we save the weights with the highest ACC and lowest EOD on the validation set.
In all experiments, we report averaged results across three random task
orders, aiming to neutralize any potential impact of the order in which tasks
are processed during network training.
I. Quantitative Results on Skin-tone-biased Dataset (FITZ) are reported in
Table 2. First, we compare BiasPruner (Exp $\mathcal{D}$) against three common
CL baselines (Exp $\mathcal{A}$): JOINT, which consolidates data from all
tasks for joint model training; SINGLE, which trains separate models for each
task and deploys task-specific models during inference; and SeqFT, which
finetunes a single model on the current task without addressing forgetting. We
observe that SINGLE outperforms JOINT as each task is learned independently,
leading to improved classification and fairness results, and that SeqFT
exhibits a significant performance drop due to catastrophic forgetting.
Notably, BiasPruner outperforms baselines in terms of overall accuracy and
fairness, attributing this to its ability to reduce the training data bias and
transfer knowledge across the tasks.
Table 2: Classification performance and fairness on FITZ. Best results marked in bold (except upper-bound). Higher is better for all metrics except EOD. Exp | Method | F | ACC | DPR | EOD
---|---|---|---|---|---
Type-I | Type-II | Type-III | Type-IV | Type-V | Type-VI | Overall
Comparison against Baselines
$\mathcal{A}$ | JOINT | 0.256 | 0.269 | 0.304 | 0.335 | 0.309 | 0.365 | 0.245 | 0.324 | 0.137 | 0.298
SINGLE | 0.435 | 0.410 | 0.469 | 0.465 | 0.495 | 0.492 | 0.430 | 0.472 | 0.185 | 0.251
SeqFT | 0.188 | 0.187 | 0.261 | 0.299 | 0.254 | 0.214 | 0.192 | 0.221 | 0.051 | 0.721
Comparison against CL Methods
$\mathcal{B}$ | EWC | 0.325 | 0.254 | 0.356 | 0.355 | 0.401 | 0.412 | 0.244 | 0.324 | 0.212 | 0.342
PackNet | 0.433 | 0.366 | 0.402 | 0.445 | 0.447 | 0.479 | 0.319 | 0.414 | 0.154 | 0.425
SupSup | 0.451 | 0.254 | 0.298 | 0.441 | 0.452 | 0.436 | 0.410 | 0.425 | 0.162 | 0.431
Comparison against CL with Bias Mitigation Methods
$\mathcal{C}$ | EWC+S | 0.308 | 0.264 | 0.357 | 0.324 | 0.411 | 0.417 | 0.385 | 0.341 | 0.228 | 0.311
PackNet+S | 0.495 | 0.434 | 0.485 | 0.494 | 0.565 | 0.562 | 0.584 | 0.501 | 0.184 | 0.248
SupSup+S | 0.466 | 0.418 | 0.467 | 0.432 | 0.554 | 0.561 | 0.534 | 0.492 | 0.182 | 0.221
EWC+W | 0.321 | 0.251 | 0.356 | 0.334 | 0.392 | 0.401 | 0.398 | 0.346 | 0.216 | 0.298
PackNet+W | 0.527 | 0.405 | 0.477 | 0.480 | 0.529 | 0.546 | 0.524 | 0.472 | 0.144 | 0.246
SupSup+W | 0.457 | 0.425 | 0.451 | 0.448 | 0.530 | 0.561 | 0.544 | 0.508 | 0.178 | 0.254
Our Proposed Fair Continual Learning Method
$\mathcal{D}$ | BiasPruner | 0.540 | 0.457 | 0.502 | 0.435 | 0.551 | 0.563 | 0.584 | 0.512 | 0.331 | 0.202
[Upper-bound] Comparison against a Bias Mitigation Method
$\mathcal{E}$ | FairDisCo | 0.542 | 0.479 | 0.523 | 0.468 | 0.571 | 0.574 | 0.615 | 0.548 | 0.474 | 0.192
Secondly, we compare BiasPruner against three CL methods (Exp $\mathcal{B}$):
EWC [12], a regularization-based method, and PackNet [20] and SupSup [28],
both subnetwork-based like ours. We notice that these CL methods demonstrate
lower fairness compared to BiasPruner, which is expected as they overlook
dataset bias. Specifically, PackNet and SupSup exhibit higher accuracy but
lower fairness compared to EWC. This is mainly due to their subnetwork-based
nature, which can inadvertently worsen accuracy disparities, particularly
among specific subgroups, during the removal of unimportant parameters [18].
Thirdly, we enhance the competing CL methods by augmenting each of them with
pre-processing bias mitigation algorithms (Exp $\mathcal{C}$). Specifically,
we apply the Resampling Algorithm (S), which balances the dataset by
oversampling minorities and undersampling majorities within each pair of skin
label and tone. Additionally, we explore the Reweighting Algorithm (W) [8],
which assigns lower weights to images that have been disadvantaged or favored
to prevent the model from learning discriminatory features. While showing
improved accuracy and fairness compared to Exp $\mathcal{B}$, they fall short
of our BiasPruner’s performance.
Finally, we compare BiasPruner to FairDisCo [8], a (non-CL) bias mitigation
technique for medical applications, which uses bias annotations in training.
Therefore, it can set an upper bound on the performance. For a fair
comparison, we allow FairDisCo to learn each task independently and report the
average performance over all tasks (Exp $\mathcal{E}$). Despite not using bias
annotations, BiasPruner exhibits slightly lower but comparable performance to
FairDisCo.
II. Quantitative Results on Age- and Gender-biased Dataset (HAM & NIH,
respectively) are given in Table 3. BiasPruner (Exp $\mathcal{I}$) outperforms
other baselines (Exp $\mathcal{F}$), CL methods (Exp $\mathcal{G}$), and CL
methods with debiasing (Exp $\mathcal{H}$) in both overall task classification
and fairness.
Table 3: Classification performance and fairness on HAM and NIH. Best results marked in bold (except upper-bound). Exp | Method | HAM | NIH
---|---|---|---
F | ACC | DPR | EOD | F | ACC | DPR | EOD
$<$60 | $\geq$60 | Overall | M | F | Overall
Comparison against Baselines
$\mathcal{F}$ | JOINT | 0.755 | 0.781 | 0.665 | 0.738 | 0.239 | 0.320 | 0.282 | 0.306 | 0.259 | 0.285 | 0.706 | 0.325
SINGLE | 0.836 | 0.819 | 0.834 | 0.841 | 0.609 | 0.131 | 0.434 | 0.428 | 0.403 | 0.417 | 0.728 | 0.311
SeqFT | 0.431 | 0.372 | 0.404 | 0.416 | 0.201 | 0.558 | 0.219 | 0.251 | 0.217 | 0.231 | 0.246 | 0.544
Comparison against CL Methods
$\mathcal{G}$ | EWC | 0.788 | 0.773 | 0.804 | 0.772 | 0.561 | 0.360 | 0.398 | 0.428 | 0.405 | 0.417 | 0.562 | 0.264
PackNet | 0.824 | 0.807 | 0.799 | 0.808 | 0.620 | 0.302 | 0.434 | 0.47 | 0.444 | 0.458 | 0.588 | 0.284
SupSup | 0.831 | 0.788 | 0.845 | 0.822 | 0.625 | 0.296 | 0.448 | 0.451 | 0.441 | 0.445 | 0.571 | 0.293
Comparison against CL with Bias Mitigation Methods
$\mathcal{H}$ | EWC+S | 0.834 | 0.821 | 0.832 | 0.827 | 0.575 | 0.172 | 0.412 | 0.434 | 0.416 | 0.421 | 0.567 | 0.259
PackNet+S | 0.839 | 0.849 | 0.817 | 0.829 | 0.613 | 0.181 | 0.419 | 0.44 | 0.425 | 0.434 | 0.640 | 0.211
SupSup+S | 0.849 | 0.802 | 0.811 | 0.817 | 0.639 | 0.204 | 0.432 | 0.456 | 0.448 | 0.451 | 0.662 | 0.204
EWC+W | 0.791 | 0.778 | 0.784 | 0.781 | 0.544 | 0.168 | 0.418 | 0.441 | 0.423 | 0.432 | 0.569 | 0.251
PackNet+W | 0.814 | 0.877 | 0.819 | 0.842 | 0.549 | 0.189 | 0.443 | 0.462 | 0.456 | 0.459 | 0.704 | 0.192
SupSup+W | 0.846 | 0.797 | 0.809 | 0.803 | 0.536 | 0.213 | 0.458 | 0.481 | 0.463 | 0.474 | 0.731 | 0.184
Our Proposed Fair Continual Learning Method
$\mathcal{I}$ | BiasPruner | 0.860 | 0.851 | 0.852 | 0.858 | 0.642 | 0.127 | 0.488 | 0.525 | 0.484 | 0.507 | 0.821 | 0.188
[Upper-bound] Comparison against a Bias Mitigation Method
$\mathcal{J}$ | FairDisCo | 0.873 | 0.876 | 0.904 | 0.893 | 0.682 | 0.113 | 0.486 | 0.545 | 0.512 | 0.538 | 0.855 | 0.150
III. Ablation Studies analyze the impact of individual components in
BiasPruner (Table 4). In Exp $\mathcal{K}$, we train the model using CE loss
instead of GCE. In Exp $\mathcal{L}$, we randomly prune the network instead of
using our bias-based pruning. In Exp $\mathcal{M}$, we finetune the debiased
subnetworks in BiasPruner with CE loss without weighting it. In Exp
$\mathcal{N}$, we simulate the absence of knowledge transfer (KT) by
prohibiting any overlapping between the parameters $\theta_{t}$ and
$\theta_{t^{\prime}}$ for any two tasks $t$ and $t^{\prime}$. We observe that
the impact of $\mathcal{L}_{\mathrm{WCE}}$ (Exp $\mathcal{M}$) is
predominant, as fine-tuning with CE leads to the poorest performance in
accuracy and fairness, attributed to the risk of subnetworks potentially
relearning bias.
Table 4: Classification (Overall) $\&$ fairness (DPR) results of BiasPruner from ablation studies. Best results marked in bold. Exp | $\mathcal{L}_{\mathrm{GCE}}$ | Bias-aware Pruning | $\mathcal{L}_{\mathrm{WCE}}$ | KT | FITZ | HAM | NIH
---|---|---|---|---|---|---|---
Overall$\uparrow$ | DPR$\uparrow$ | Overall$\uparrow$ | DPR$\uparrow$ | Overall$\uparrow$ | DPR$\uparrow$
$\mathcal{D,I}$ | ✓ | ✓ | ✓ | ✓ | 0.512 | 0.331 | 0.858 | 0.642 | 0.507 | 0.821
$\mathcal{K}$ | $\times$ | ✓ | ✓ | ✓ | 0.498 | 0.254 | 0.834 | 0.579 | 0.501 | 0.779
$\mathcal{L}$ | ✓ | $\times$ | ✓ | ✓ | 0.508 | 0.328 | 0.842 | 0.637 | 0.498 | 0.814
$\mathcal{M}$ | ✓ | ✓ | $\times$ | ✓ | 0.481 | 0.247 | 0.792 | 0.576 | 0.468 | 0.754
$\mathcal{N}$ | ✓ | ✓ | ✓ | $\times$ | 0.504 | 0.324 | 0.851 | 0.630 | 0.496 | 0.803
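Exp $\mathcal{K}$ replaces the generalized cross-entropy (GCE) loss [29] with plain CE. A minimal PyTorch sketch of GCE is shown below; the exponent $q=0.7$ follows the original GCE paper [29] and is an assumption here rather than BiasPruner's reported setting.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits: torch.Tensor, target: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """L_GCE = (1 - p_y^q) / q, with p_y the softmax probability of the true class.
    As q -> 0 this recovers standard cross-entropy; q = 1 gives an MAE-like loss.
    target: (N,) long tensor of class indices."""
    p_y = F.softmax(logits, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()
```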
Figure 2: The overall (dashed) and DPR (dotted) performance of BiasPruner and
other methods over all the seen tasks after each training step in the
continual learning sequence, where $Ti$ refers to the $i$th task.
IV. A Sequential Analysis, as illustrated in Fig. 2, showcases the
consistently superior performance of BiasPruner over other methods in terms of
overall accuracy and DPR after each step in the continual learning sequence
across FITZ, HAM, and NIH datasets.
V. For Analysis of Model Biases (e.g., skin tone), we trained classifiers for
sensitive attribute detection on top of frozen feature extractors from three
networks: CE-based, GCE-based, and our BiasPruner, all pre-trained for
diagnosis (Fig. 3). The better-than-chance detection accuracy of sensitive
attributes ($\in$ [0.63, 0.828]) under CE- and GCE-based training reveals that
sensitive information is embedded in the originally learned features, i.e.,
the presence of bias. GCE, whose loss function promotes shortcut learning,
showed the most bias. The high accuracy achieved by CE shows that even naïvely
trained models are susceptible to bias. In contrast, BiasPruner shows minimal
bias, reflected by its near-random AUC values ($\in$ [0.49, 0.67]) when
detecting sensitive information.
Figure 3: Sensitive attribute detection from frozen models pre-trained for
diagnosis. BiasPruner's low AUCs indicate that bias is not encoded in its
resulting features.
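A hedged sketch of the probing protocol behind Fig. 3: features are extracted from a frozen backbone and a simple classifier is fit to predict the sensitive attribute, with AUC as the read-out. The linear probe, the loader interface, and the binary attribute are illustrative assumptions; the probe architecture is not specified in this section.

```python
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def extract_features(backbone, loader, device="cuda"):
    """Assumed loader yields (image, diagnosis label, sensitive attribute)."""
    backbone.eval()
    feats, attrs = [], []
    for images, _, sensitive in loader:
        feats.append(backbone(images.to(device)).cpu())
        attrs.append(sensitive)
    return torch.cat(feats).numpy(), torch.cat(attrs).numpy()

def sensitive_attribute_auc(backbone, train_loader, test_loader):
    X_tr, a_tr = extract_features(backbone, train_loader)
    X_te, a_te = extract_features(backbone, test_loader)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, a_tr)   # linear probe (assumption)
    return roc_auc_score(a_te, probe.predict_proba(X_te)[:, 1]) # binary attribute assumed
```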
## 4 Conclusion
In this paper, we presented BiasPruner, a new continual learning (CL)
framework that leverages intentional forgetting to improve fairness and
mitigate catastrophic forgetting. By quantifying the influence of each network
unit on learning spurious features, BiasPruner identifies and prunes highly
biased units to construct a debiased subnetwork for each task. Experimental
evaluations on three classification datasets demonstrate that BiasPruner
consistently outperforms baselines and CL methods in classification
performance and fairness. Our results highlight the need to address dataset
bias in future CL designs and evaluations, due to the frequent fairness
shortcomings of current CL methods.
### 4.0.1 Acknowledgements
We thank NVIDIA for their hardware grant and the Natural Sciences and
Engineering Research Council (NSERC) of Canada for the Vanier PhD Fellowship.
A. Bissoto is funded by FAPESP (2019/19619-7, 2022/09606-8).
### 4.0.2 Disclosure of Interests
The authors have no competing interests to declare that are relevant to the
content of this article.
## References
* [1] Bayasi, N., Du, S., Hamarneh, G., Garbi, R.: Continual-GEN: Continual group ensembling for domain-agnostic skin lesion classification. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 3–13. Springer (2023)
* [2] Bayasi, N., Hamarneh, G., Garbi, R.: Culprit-Prune-Net: Efficient continual sequential multi-domain learning with application to skin lesion classification. In: Medical Image Computing and Computer Assisted Intervention (MICCAI). pp. 165–175. Springer (2021)
* [3] Bissoto, A., Barata, C., Valle, E., Avila, S.: Even small correlation and diversity shifts pose dataset-bias issues. Pattern Recognition Letters (2024)
* [4] Brown, A., Tomasev, N., Freyberg, J., Liu, Y., Karthikesalingam, A., Schrouff, J.: Detecting shortcut learning for fair medical ai using shortcut testing. Nature Communications 14(1), 4314 (2023)
* [5] Busch, F.P., Kamath, R., Mitchell, R., Stammer, W., Kersting, K., Mundt, M.: Where is the truth? the risk of getting confounded in a continual world. arXiv preprint arXiv:2402.06434 (2024)
* [6] Chowdhury, S.B.R., Chaturvedi, S.: Sustaining fairness via incremental learning. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 37, pp. 6797–6805 (2023)
* [7] Dekhovich, A., Tax, D.M., Sluiter, M.H., Bessa, M.A.: Continual prune-and-select: class-incremental learning with specialized subnetworks. Applied Intelligence 53(14), 17849–17864 (2023)
* [8] Du, S., Hers, B., Bayasi, N., Hamarneh, G., Garbi, R.: FairDisCo: Fairer ai in dermatology via disentanglement contrastive learning. In: European Conference on Computer Vision. pp. 185–202 (2022)
* [9] González, C., Ranem, A., Othman, A., Mukhopadhyay, A.: Task-agnostic continual hippocampus segmentation for smooth population shifts. In: Domain Adaptation and Representation Transfer MICCAI Workshop. pp. 108–118 (2022)
* [10] Groh, M., Harris, C., Soenksen, L., Lau, F., Han, R., Kim, A., Koochek, A., Badri, O.: Evaluating deep neural networks trained on clinical images in dermatology with the fitzpatrick 17k dataset. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1820–1828 (2021)
* [11] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
* [12] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences 114(13), 3521–3526 (2017)
* [13] Kiyasseh, D., Zhu, T., Clifton, D.: A clinical deep learning framework for continually learning from cardiac signals across diseases, time, modalities, and institutions. Nature Communications 12(1), 4221 (2021)
* [14] Lee, D., Jung, S., Moon, T.: Continual learning in the presence of spurious correlation. arXiv preprint arXiv:2303.11863 (2023)
* [15] Lenga, M., Schulz, H., Saalbach, A.: Continual learning for domain adaptation in chest x-ray classification. In: Medical Imaging with Deep Learning. pp. 413–423 (2020)
* [16] Lesort, T.: Spurious features in continual learning. In: AAAI Bridge Program on Continual Causality. pp. 59–62 (2023)
* [17] Lewandowsky, S., Li, S.C.: Catastrophic interference in neural networks: Causes, solutions, and data. In: Interference and inhibition in cognition, pp. 329–361 (1995)
* [18] Lin, X., Kim, S., Joo, J.: FairGrape: Fairness-aware gradient pruning method for face attribute classification. In: European Conference on Computer Vision. pp. 414–432 (2022)
* [19] Luo, L., Xu, D., Chen, H., Wong, T.T., Heng, P.A.: Pseudo bias-balanced learning for debiased chest x-ray classification. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 621–631 (2022)
* [20] Mallya, A., Lazebnik, S.: Packnet: Adding multiple tasks to a single network by iterative pruning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7765–7773 (2018)
* [21] Nam, J., Cha, H., Ahn, S., Lee, J., Shin, J.: Learning from failure: De-biasing classifier from biased classifier. Advances in Neural Information Processing Systems 33, 20673–20684 (2020)
* [22] Perkonigg, M., Hofmanninger, J., Herold, C.J., Brink, J.A., Pianykh, O., Prosch, H., Langs, G.: Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging. Nature communications 12(1), 5678 (2021)
* [23] Salman, H., Jain, S., Ilyas, A., Engstrom, L., Wong, E., Madry, A.: When does bias transfer in transfer learning? arXiv preprint arXiv:2207.02842 (2022)
* [24] Tschandl, P., Rosendahl, C., Kittler, H.: The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data 5(1), 1–9 (2018)
* [25] Wang, L., Zhang, X., Su, H., Zhu, J.: A comprehensive survey of continual learning: Theory, method and application. arXiv preprint arXiv:2302.00487 (2023)
* [26] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2097–2106 (2017)
* [27] Wang, Z., Yang, E., Shen, L., Huang, H.: A comprehensive survey of forgetting in deep learning beyond continual learning. arXiv preprint arXiv:2307.09218 (2023)
* [28] Wortsman, M., Ramanujan, V., Liu, R., Kembhavi, A., Rastegari, M., Yosinski, J., Farhadi, A.: Supermasks in superposition. Advances in Neural Information Processing Systems 33, 15173–15184 (2020)
* [29] Zhang, Z., Sabuncu, M.: Generalized cross entropy loss for training deep neural networks with noisy labels. Advances in neural information processing systems 31 (2018)
BiasPruner: Debiased Continual Learning for Medical Image Classification
(Supplementary Material)
Figure 4: Visualization of easy (blue square) and hard (red square) samples
across the different benchmarks. 1st row: Images from FITZ (Task 4), where
each image is labeled by its Fitzpatrick skin tone, denoted as I, II, IV, V or
VI. 2nd row: Images from HAM (Task 3), where each image is labeled by age (age
$<$ or $\geq$ 60), denoted as $-$ or $+$, respectively. 3rd row: Images from
NIH (Task 1), where each image is labeled as male or female, denoted as M or
F, respectively. We notice that the hard samples represent the minority within
their respective tasks. Figure 5: Bias distribution across the different tasks
in FITZ, HAM and NIH. Table 5: Distribution of images across train,
validation, and test sets for each task in FITZ, HAM, and NIH. The $V$ value
between brackets next to Train represents the Cramer’s $V$ correlation between
the sensitive attribute (e.g., skin tone in FITZ) and disease classes.
| FITZ | HAM | NIH
---|---|---|---
| Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 1 | Task 2 | Task 3 | Task 1 | Task 2 | Task 3
Train ($V$) | 2,582 (0.311) | 2,519 (0.287) | 1,986 (0.277) | 1,609 (0.333) | 1,315 (0.293) | 1,198 (0.328) | 519 (0.187) | 731 (0.112) | 4,441 (0.199) | 6,003 (0.259) | 2,217 (0.242) | 2,481 (0.264)
Val | 361 | 358 | 286 | 230 | 180 | 171 | 85 | 123 | 809 | 1,711 | 646 | 710
Test | 747 | 721 | 568 | 462 | 375 | 344 | 167 | 240 | 1,563 | 3,472 | 1,311 | 1,442
Total | 3,690 | 3,598 | 2,840 | 2,301 | 1,870 | 1,713 | 771 | 1,094 | 6,813 | 11,186 | 4,174 | 4,633
Table 6: Continuation of Table 2 – standard deviation results.
Exp | Method | F | ACC (Type-I) | ACC (Type-II) | ACC (Type-III) | ACC (Type-IV) | ACC (Type-V) | ACC (Type-VI) | ACC (Overall) | DPR | EOD
---|---|---|---|---|---|---|---|---|---|---|---
Comparison against Baselines
$\mathcal{A}$ | JOINT | 0.81 | 0.28 | 0.06 | 0.85 | 0.28 | 0.35 | 0.89 | 0.69 | 0.66 | 0.51
SINGLE | 0.73 | 0.13 | 0.15 | 0.30 | 0.45 | 0.53 | 0.19 | 0.11 | 0.12 | 0.38
SeqFT | 2.24 | 1.68 | 2.11 | 2.48 | 2.34 | 2.58 | 2.64 | 2.33 | 2.56 | 2.84
Comparison against CL Methods
$\mathcal{B}$ | EWC | 1.55 | 1.03 | 1.19 | 1.15 | 1.83 | 1.78 | 1.21 | 1.11 | 1.22 | 1.05
PackNet | 0.21 | 0.57 | 0.41 | 0.93 | 0.25 | 0.81 | 0.35 | 0.69 | 0.76 | 0.49
SupSup | 0.98 | 0.66 | 0.52 | 0.63 | 0.55 | 0.82 | 0.54 | 0.44 | 0.38 | 0.79
Comparison against CL with Bias Mitigation Methods
$\mathcal{C}$ | EWC+S | 1.51 | 1.76 | 1.34 | 1.19 | 1.75 | 1.26 | 1.33 | 1.28 | 1.01 | 1.14
PackNet+S | 0.15 | 0.14 | 0.62 | 0.51 | 0.26 | 0.17 | 0.64 | 0.25 | 0.46 | 0.26
SupSup+S | 0.25 | 0.58 | 0.61 | 0.63 | 0.24 | 0.5 | 0.28 | 0.56 | 0.63 | 0.65
EWC+W | 1.37 | 1.27 | 1.44 | 1.68 | 1.34 | 1.25 | 1.42 | 1.62 | 2.53 | 1.48
PackNet+W | 0.28 | 0.34 | 0.58 | 0.35 | 0.22 | 0.23 | 0.31 | 0.37 | 0.34 | 0.71
SupSup+W | 0.63 | 0.94 | 0.41 | 0.37 | 0.82 | 0.21 | 0.52 | 0.16 | 0.35 | 0.81
Our Proposed Fair Continual Learning Method
$\mathcal{D}$ | BiasPruner | 0.54 | 0.38 | 0.15 | 0.84 | 0.44 | 0.39 | 0.71 | 0.24 | 0.33 | 0.41
[Upper-bound] Comparison against a Bias Mitigation Method
$\mathcal{E}$ | FairDisCo | 0.91 | 0.22 | 0.84 | 0.65 | 0.74 | 0.33 | 0.85 | 0.12 | 0.35 | 0.34
Table 7: Continuation of Table 3 – standard deviation results.
Exp | Method | HAM F | HAM ACC ($<$60) | HAM ACC ($\geq$60) | HAM ACC (Overall) | HAM DPR | HAM EOD | NIH F | NIH ACC (M) | NIH ACC (F) | NIH ACC (Overall) | NIH DPR | NIH EOD
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Comparison against Baselines
$\mathcal{F}$ | JOINT | 0.05 | 0.14 | 0.11 | 0.14 | 0.09 | 0.08 | 0.15 | 0.08 | 0.11 | 0.22 | 0.61 | 0.23
SINGLE | 0.53 | 0.72 | 0.23 | 0.91 | 0.34 | 0.82 | 0.91 | 0.79 | 0.44 | 0.27 | 0.22 | 0.42
SeqFT | 2.64 | 0.65 | 0.85 | 0.36 | 0.41 | 0.45 | 0.54 | 0.49 | 0.96 | 0.46 | 0.22 | 0.36
Comparison against CL Methods
$\mathcal{G}$ | EWC | 1.46 | 1.12 | 1.69 | 1.79 | 1.14 | 1.16 | 1.74 | 1.62 | 1.42 | 1.66 | 1.19 | 1.77
PackNet | 0.44 | 0.57 | 0.28 | 0.22 | 0.82 | 0.61 | 0.22 | 0.43 | 0.51 | 0.17 | 0.34 | 0.55
SupSup | 0.53 | 0.64 | 0.22 | 0.74 | 0.91 | 0.84 | 0.39 | 0.53 | 0.55 | 0.69 | 0.22 | 0.43
Comparison against CL with Bias Mitigation Methods
$\mathcal{H}$ | EWC+S | 1.64 | 1.83 | 1.72 | 1.11 | 1.96 | 1.66 | 1.36 | 1.19 | 1.57 | 1.21 | 1.44 | 1.36
PackNet+S | 0.43 | 0.37 | 0.64 | 0.87 | 0.74 | 0.22 | 0.42 | 0.33 | 0.35 | 0.19 | 0.43 | 0.35
SupSup+S | 0.18 | 0.44 | 0.61 | 0.86 | 0.34 | 0.63 | 0.91 | 0.53 | 0.28 | 0.39 | 0.38 | 0.19
EWC+W | 1.17 | 1.75 | 1.81 | 1.62 | 1.56 | 1.38 | 1.35 | 1.86 | 1.54 | 1.81 | 1.71 | 1.26
PackNet+W | 0.84 | 0.35 | 0.71 | 0.23 | 0.55 | 0.14 | 0.45 | 0.82 | 0.22 | 0.65 | 0.27 | 0.33
SupSup+W | 0.72 | 0.43 | 0.16 | 0.44 | 0.78 | 0.81 | 0.62 | 0.77 | 0.25 | 0.34 | 0.64 | 0.81
Our Proposed Fair Continual Learning Method
$\mathcal{I}$ | BiasPruner | 0.44 | 0.18 | 0.57 | 0.21 | 0.38 | 0.25 | 0.35 | 0.41 | 0.27 | 0.39 | 0.44 | 0.62
[Upper-bound] Comparison against a Bias Mitigation Method
$\mathcal{J}$ | FairDisCo | 0.92 | 0.64 | 0.38 | 0.39 | 0.22 | 0.37 | 0.72 | 0.21 | 0.34 | 0.61 | 0.66 | 0.74
All authors are with The Johns Hopkins University, Baltimore, MD.
# Multimodal and self-supervised representation learning for automatic gesture
recognition in surgical robotics
Aniruddha Tamhane Jie Ying Wu Mathias Unberath
###### Abstract
Self-supervised, multi-modal learning has been successful at producing
holistic representations of complex scenarios. It is useful for consolidating
information from multiple modalities into representations with many versatile
uses. Applied to surgical robotics, it can simultaneously develop a
generalized machine understanding of the surgical process and reduce the
dependency on high-quality expert annotations, which are generally difficult
to obtain. We develop a self-supervised, multi-modal representation learning
paradigm that learns representations for surgical gestures from video and
kinematics. We use an encoder-decoder network configuration that encodes
representations from surgical videos and decodes them to yield kinematics. We
quantitatively demonstrate the efficacy of our learnt representations for
gesture recognition (with accuracy between 69.6 % and 77.8 %), transfer
learning across multiple tasks (with accuracy between 44.6 % and 64.8 %) and
surgeon skill classification (with accuracy between 76.8 % and 81.2 %).
Further, we qualitatively demonstrate that our self-supervised representations
cluster according to semantically meaningful properties (surgeon skill and gestures).
## 1 Introduction
The use of robotic surgery has increased in recent years. Robotic surgical
consoles such as the da Vinci® Surgical System guthart2000intuitive ,
developed by Intuitive Surgical Inc. (Sunnyvale, CA), are being used in place
of traditional surgical tools such as laparoscopes, with the aim of improving
surgical quality and patient prognosis. These consoles change the fundamental
nature of the surgeon’s interaction with the patient. Studying human surgical
performance under such a setting is thus a very exciting field of research
that is crucial for improving our understanding of surgeon-robot interaction
as well as improving patient outcomes. The rich and accurate surgical data
available in multiple modes such as videos and kinematics can be used to gain
a holistic comprehension of the surgery as well as study human surgical
performance closely and accurately.
A lot of recent work in this domain has focused on task-specific, supervised
learning from a single modality. We argue that this approach has several
drawbacks. For one, a task-specific learning paradigm (such as supervised
skill detection, for example) enables a very narrow learning of the overall
surgical process, giving very little understanding beyond the specific
training task. Further, supervised learning enforces a dependence on expert
annotations that are expensive to obtain, subjective and possibly erroneous.
Finally, ignoring multiple modalities of information (such as video and
kinematics) can be detrimental to learning generalizeable, feature-rich
representations.
We argue that robotic surgery can be broken down into a sequence of gestures
or surgemes varadarajan2009data . Thus, it is possible to model a surgery
similar to language models mikolov2013distributed ; devlin2018bert over the
vocabulary of the surgemes. In this work, we propose a self-supervised
learning algorithm that learns task-agnostic surgical gesture representations
from two modalities (viz. video and kinematics) that can generalize well
across multiple tasks.
Our contributions in this paper are as follows: 1) we provide a deep-learning
based architecture to learn self-supervised surgical gesture representations
from video and kinematics, 2) we quantitatively demonstrate the utility of our
representations across multiple tasks by achieving state-of-the-art accuracy
in gesture recognition and high accuracy in skill recognition, 3) we
qualitatively demonstrate the rich information stored in our representations
by visualizing our representations forming semantically meaningful clusters.
## 2 Related Work
We review the literature closely related to our work. This section is
organised as follows: we review state-of-the-art supervised learning
approaches in surgical robotics in Section 2.1, unsupervised learning
approaches in Section 2.2, state-of-the-art approaches in video activity
recognition in Section 2.3, and landmark works in self-supervised learning
from multimodal sources in Section 2.4.
### 2.1 Supervised Surgical Gesture Recognition
A vast majority of research on surgical robotics learning tasks focuses on
supervised learning. The focus of such work is to learn a specific surgical
task, such as future prediction or gesture recognition, from annotated data.
In sarikaya2019surgical , Sarikaya and Jannin parse the optical flow
corresponding to the surgical videos using a Convolutional Neural Network
based backbone to perform supervised surgical gesture classification. They
highlight the important point that optical flow alone is a sufficient source
of information for learning to classify surgical gestures with high accuracy.
We use this insight and extract the optical flow as a source of
domain-independent visual information. In mazomenos2018gesture , Mazomenos et
al. use Recurrent Neural Networks (RNNs) to classify surgical gestures in a
supervised manner. Similarly, in dipietro2016recognizing , DiPietro et al.
applied unidirectional and bi-directional LSTMs for gesture recognition. RNNs,
though a natural and effective choice for parsing sequential data such as
video frames, are computationally expensive and harder to train. In our work,
we use CNNs to parse the sequential frames, with each CNN channel dedicated to
a particular frame. In sarikaya2018joint , Sarikaya et al. combine both
modalities, i.e. optical flow and video, for supervised surgeme recognition.
This work is also a prime example of integrating multimodal information to
improve supervised gesture recognition results.
### 2.2 Unsupervised Surgical Gesture Recognition
Unsupervised surgical gesture recognition is a research paradigm with a large
potential impact as it does not rely on large amounts of quality, expert
annotations like supervised learning. In dipietro2018unsupervised , diPietro
and Hager demonstrated that predicting the next surgical gesture using RNNs in
an unsupervised manner captures the latent information necessary for surgical
gesture recognition. This was demonstrated on the kinematics data in the
JIGSAWS dataset. Further, they showed that these embeddings naturally
clustered corresponding to distinct higher-level activities. In this work,
DiPietro and Hager demonstrated the possibility of learning unsupervised
representations from surgical data that can perform on par with state-of-the-
art supervised learning paradigms on surgical gesture recognition tasks.
Further, they demonstrated the versatility of these learnt representations by
demonstrating their utility in other tasks such as information retrieval. We
however envisage that a unimodal source of information (i.e. kinematics) could
potentially have implied a very narrow, task specific learning of
representations. In our work, we use a 2D-CNN based encoder-decoder structure
to learn representations that encode information from multiple modalities (viz
video and kinematics) and perform well across multiple tasks such as gesture
recognition, surgeon skill classification and cross-task gesture recognition.
The experiments have been explained in detail in Section 6.
### 2.3 Video activity recognition
Surgical gesture recognition from video data can benefit immensely by
borrowing from milestone techniques from video activity recognition. Thus,
reviewing key papers in this domain is fruitful to our efforts. In
karpathy2014large , Karpathy et al review several video classification models
(slow, early and late fusion) to provide empirical evidence for the
superiority of the slow fusion paradigm. Several works such as ji20123d treat
videos as 3D spatio-temporal volumes, and thus extend the notion of a 2D CNN
to a 3D CNN. This gives
us the necessary insights for parsing sequential visual data using CNNs.
Simonyan and Zisserman take a different approach in simonyan2014two . They
pair each video frame with the corresponding optical flow and parse both
through parallel 2D CNN based backbones. This approach has biological
justifications, since it has been indicated in simonyan2014two that human
vision processing occurs in a similar manner, with separate neural pathways
dedicated to processing static image and motion information. We have
experimented with a two-stream approach using parallel CNN-based video and
optical flow parsing streams. However, we finalized upon an encoder-decoder
based architecture since it was more suitable for our particular learning
task. The architecture is further described in Section 4. In girdhar2019video
, Girdhar et al. use the transformer-based attention described in
vaswani2017attention to learn high-attention regions in video frames prior to
action recognition. It is worth exploring visual-attention-based models for
surgical activity recognition. We did not explore this further because visual
attention is an unnecessary feature for the data provided in the JIGSAWS
dataset: only the surgical instruments (which are the only objects of
interest) move in the given video. Thus, we obtain all the information for
motion localization from the optical flow.
### 2.4 Multimodal self-supervised learning
Multimodal, self-supervised learning has shown great promise. In
arandjelovic2017look , Arandjelovic and Zisserman learnt embeddings for video
and audio inputs such that corresponding embeddings for audio and video
cluster close to each other. This was achieved by training a two-stream,
CNN-based network to binary-classify embedding pairs for correspondence.
Further, in arandjelovic2018objects , they demonstrate that the
self-supervised embeddings store information on the source of sound in the
video. This
information can be extracted by computing a simple correlation between the
audio embeddings and different regions of the video frame representation.
## 3 Problem Formulation
Our problem statement is to learn multi-modal representations for surgical
robotic activity from video and kinematics data using a self-supervised
learning. Formally, given a dataset
$\mathbb{D}=(\mathcal{V}_{i},\mathcal{K}_{i})_{i=1}^{i=n}$ which is a tuple of
surgical videos $\mathcal{V}$ and corresponding kinematics $\mathcal{K}$, we
seek a corresponding lower-dimensional representation
$r(T(\mathcal{V}_{i}),\mathcal{K}_{i})$, where $T$ is a transformation on
$\mathcal{V}$. In our case, $T$ is a function that extracts the optical flow.
Further, we wish $r(T(\mathcal{V}_{i}),\mathcal{K}_{i})_{i=1}^{i=n}$ retain
all the critical information pertaining to the surgery, such as the exact
surgical gesture, identity of the surgeon and the skill with which the segment
of surgery was performed. Self-supervised learning is a natural solution to
this problem, given its efficacy in learning holistic information. To this
end, we define our problem a maximization of critical information retention in
$r$. Thus, our loss criterion $\mathcal{L}$ is a function
$\mathcal{L}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$, where
$\mathcal{L}$ is some measure of information loss between the information
sources i.e. $(\mathcal{V},\mathcal{K})$ and $r$. Finally, we define our
objective as the following empirical loss minimization over the parameters
$\theta$:
$\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(r(T(\mathcal{V}_{i}),\mathcal{K}_{i};\theta),\mathcal{V}_{i},\mathcal{K}_{i})$
(1)
Learning the representation function $r$ that encodes information from the
surgical videos and kinematics can be achieved using an appropriate proxy
learning objective. On the basis of our experiments, we propose a 2-D
Convolutional Neural Network based encoder-decoder learning objective.
Further, to learn representations that are independent of video modalities
such as luminence, color scheme, contrast etc. we extract the optical flow
$\mathcal{O}_{i}$ from the video frames $\mathcal{V}_{i}$ through
transformation $T$. Finally, we demonstrate that this architecture is helpful
in learning representations that outperform the current state-of-the-art in
self-supervised surgical gesture recognition. Thus, our problem further
reduces to the following: encode information from optical flow in
representations $r(T(\mathcal{V}))$ and minimize the information loss
$\mathcal{L}$ between the decoded representations
$\mathcal{D}(r(T(\mathcal{V})))$ and kinematics $\mathcal{K}$. We empirically
choose the loss function $\mathcal{L}$ as the L2 norm of the difference,
denoted by $||\cdot||_{2}$.
Thus, under the above-mentioned assumptions, we rewrite the formulation
described in Equation 1 equivalently as:
$\min_{\theta,\phi}\frac{1}{n}\sum_{i=1}^{n}||\mathcal{D}({r(T(\mathcal{V}_{i});\theta);\phi)}-\mathcal{K}_{i}||_{2}^{2}$
(2)
given a parameterized encoder function $r$ with parameters $\theta$ and a
decoder function $\mathcal{D}$ with parameters $\phi$.
## 4 Training and Architecture
We observe that the choice of architecture is key to learning quality
representations. We initiated our inquiry with an architecture similar to AVE-
Net described in arandjelovic2018objects , replacing the optical flow parsing
stream with the one we describe in Section 4.1 and experimenting with an RNN-
based parser (LSTMs/GRUs) and a 1D CNN based parser for the kinematics stream.
We kept the training task identical to the original representations alignment
task. We observed that the 1D CNN worked better than the RNN for parsing the
kinematics. However, the results were not entirely satisfactory. We observe
that similarity in multiple surgical gestures makes the task of identifying a
one-to-one mapping from video to kinematics a hard task, thus making the
training objective unsuitable for our task. We transformed the training
objective from a alignment-based task to an encoder-decoder task, where the
objective is to extract the corresponding kinematics vectors from the optical
flows. Accordingly, we modified the kinematics decoder to the one described in
Section 4.2. We observe that this configuration worked very well. Finally, we
choose a simple 2D Convolutional Neural Network based architecture for our
Encoder and a fully connected layer based architecture for our decoder
$\mathcal{D}$. A visual schematic of the architecture has been provided in
Figure 1. A detailed description has been provided in the subsequent sections.
Figure 1: Multimodal encoder-decoder network architecture
### 4.1 Encoder
We choose a 2D Convolutional Neural Network based architecture to model our
encoder function, as shown in Figure 1. It has been argued in
sarikaya2019surgical that optical flow is crucial and sufficient for
classifying gestures. Further, we argue that optical flow can potentially
filter out domain-specific information such as the video quality, contrast,
details about the surgical instruments etc. that is not key in understanding
the surgical process. Therefore, we add an optical flow extraction as a pre-
processing step prior to encoding the representations. We use the Farneback
algorithm farneback2003two to extract the optical flow and a 2D CNN based
encoder backbone similar to the one used by simonyan2014two to encode it.
Finally, we provide the network with a context of roughly 1.67 seconds by
sub-sampling the optical flow fields from every alternate frame over a
50-frame window. Our objective here is to capture a sufficiently large
temporal context.
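A minimal sketch of this pre-processing step, assuming OpenCV's Farneback implementation; the flow parameters below are common default-style values and not the paper's exact settings.

```python
import cv2
import numpy as np

def optical_flow_window(frames: np.ndarray) -> np.ndarray:
    """frames: (50, H, W, 3) uint8 clip -> (25, H, W, 2) stack of flow fields."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    flows = []
    for t in range(0, len(gray) - 1, 2):      # every alternate frame
        # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(gray[t], gray[t + 1], None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)                    # (H, W, 2): horizontal and vertical displacement
    return np.stack(flows)
```

The 25 flow fields can then be stacked along the channel dimension before being fed to the encoder.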
### 4.2 Decoder
We choose a simple fully connected network (FCN) with ReLU activations as the
decoder, as shown in Figure 1. The decoder outputs the 25 kinematics vectors,
each corresponding to one of the
sampled optical flows. We keep the decoder network relatively shallow to
ensure maximum information retention in the representations yielded by the
encoder network.
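A hedged sketch of the encoder-decoder pair is given below. The layer widths are illustrative assumptions; only the input (25 stacked flow fields) and the output (25 kinematics vectors of dimension 76, cf. Section 5) follow the text.

```python
import torch
import torch.nn as nn

class FlowEncoder(nn.Module):
    """2D CNN over 25 stacked flow fields (2 channels each) -> representation r."""
    def __init__(self, n_flows=25, rep_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * n_flows, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(256, rep_dim)

    def forward(self, flow):                  # flow: (B, 2*n_flows, H, W)
        return self.fc(self.features(flow).flatten(1))

class KinematicsDecoder(nn.Module):
    """Shallow fully connected decoder: representation -> 25 x 76 kinematics."""
    def __init__(self, rep_dim=512, n_flows=25, kin_dim=76):
        super().__init__()
        self.n_flows, self.kin_dim = n_flows, kin_dim
        self.net = nn.Sequential(nn.Linear(rep_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, n_flows * kin_dim))

    def forward(self, r):                     # r: (B, rep_dim)
        return self.net(r).view(-1, self.n_flows, self.kin_dim)
```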
### 4.3 Training
We train our network for an encoder-decoder task wherein the encoder network
encodes information from the videos (optical flow) into the representation,
which is thereby parsed by the decoder network to extract the corresponding
kinematics. This has been elucidated in Section 3. While training, we
uniformly select every alternate frame, thus selecting 25 frames out of 50,
yielding a context of about 1.67 seconds given a frame rate of 30 Hz. We train
our network to encode this set of 25 video frames into representations. We use
a Mean Squared Error (MSE) between the kinematics vectors and the decoder
output as training loss, as given in Equation 2. Using this training
methodology, we train our network for 1000 epochs on the Knot Tying, Needle
Passing and Suturing datasets available in the JIGSAWS dataset. We observe
that the training loss decreases uniformly with each training epoch,
indicating a smooth learning process.
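A minimal training-loop sketch for the objective in Equation 2 follows, using encoder/decoder modules like those sketched in Section 4; the optimizer choice and learning rate are assumptions, as they are not reported here.

```python
import torch
import torch.nn.functional as F

def train(encoder, decoder, loader, epochs=1000, lr=1e-4, device="cuda"):
    encoder.to(device); decoder.to(device)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for epoch in range(epochs):
        for flow, kinematics in loader:       # flow: (B, 50, H, W) = 25 flow fields x 2 channels
            flow, kinematics = flow.to(device), kinematics.to(device)
            pred = decoder(encoder(flow))     # (B, 25, 76) decoded kinematics
            loss = F.mse_loss(pred, kinematics)   # MSE training loss of Equation 2
            opt.zero_grad(); loss.backward(); opt.step()
```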
## 5 Dataset
We test our learning approach on the JIGSAWS dataset, which contains a series
of annotated clips of surgical activity further sub-divided into three
datasets (viz knot-tying, needle-passing and suturing) performed using the da
Vinci® Surgical System console guthart2000intuitive . The frame-level
annotations include surgical gestures, anonymized user (surgeon)
identification and surgical skill metrics based on years of surgical
experience. The kinematics available on a frame-level consists of a
76-dimensional vector that includes the $x,y,z$ coordinates of the left and
right tool tips, the corresponding linear and angular velocities, the rotation
matrix and the gripper angle velocities. A detailed description of the JIGSAWS
dataset is given in gao2014jhu .
## 6 Experiments and results
We test the quality of our learned representations for a number of tasks as a
measure of the holistic understanding of the surgical process. We first divide
the frames in each video clip according to the corresponding gestures. We then
uniformly sample 25 frames from each set of 50 gesture-frames. Given the
sampling frequency of 30 Hz, we encode a gesture context of about 1.67
seconds. To extract domain-independent information (as explained in Section
4.1), we extract the optical flow from the sampled video frames. We further
sample the 25 corresponding kinematics vectors.
We train the end-to-end deep encoder-decoder network described in Section 4 to
encode the representations from the optical flow and decode the kinematics
vectors from the representations. To obtain a visual understanding of the
information encoded in these representations, we reduce their dimensionality
to a 2-dimensional plane using the U-MAP algorithm mcinnes2018umap . We choose
U-MAP in particular because of its computational efficiency and ability to
preserve global structure, which would be crucial to appreciating the
differences in surgical gestures. We then perform a gesture classification
experiment using a gradient boosting based classifier trained solely on the
representations. Further, we also perform a skill-classification experiment
using another gradient-boosting based classifier. Finally, we perform a
gesture classification experiment on a set of cross-task representations i.e.
representations generated for a particular surgical task (eg: suturing) using
an encoder trained on a different surgical task (eg: knot-tying). Further
details and results of the experiments have been discussed in the following
sub-sections.
### 6.1 Representation visualizations using U-MAP
We use the U-MAP algorithm to perform a dimensionality reduction on the learnt
representations. We choose U-MAP in particular because it preserves local as
well as global distances in the lower-dimensional latent space. We plot the
two-dimensional projections of the representations from the Knot Tying
dataset, color-coded for different gestures in Figure 2 and for different
skill levels in Figure 3. We generate similar plots for the Needle Passing
dataset in Figures 4, 5 and the Suturing dataset in Figures 6, 7. It is
interesting to observe that in each of the skill-based plots (Figures 3, 5,
7), the representations neatly cluster into two distinct skill-based clusters
corresponding to the “beginner” and “expert” skill levels, with the
representations from the “intermediate” skill level spread across both of
these clusters. A possible explanation for this phenomenon is that
“intermediate” is a vague category and encompasses people with varying skill
levels that may be closer to the “beginners” or to the “experts”. Another
interesting observation is that in the gesture-based plots (Figures 2, 4, 6),
each individual gesture (denoted by a different color) forms two distinct
clusters, each corresponding to a distinct skill category. Thus, it is evident
from this visualization that each gesture has a distinct representation
depending on whether it has been performed by an expert or a beginner.
Figure 2: Knot Tying representations (gesture) Figure 3: Knot Tying
representations (skill) Figure 4: Needle Passing representations (gesture)
Figure 5: Needle Passing representations (skill) Figure 6: Suturing
representations (gesture) Figure 7: Suturing representations (skill)
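A minimal sketch of how such projections can be produced with the umap-learn package is given below; the U-MAP hyperparameters are library defaults rather than values reported in the paper, and the function name is illustrative.

```python
import numpy as np
import umap
import matplotlib.pyplot as plt

def plot_representations(reps, labels, title):
    """reps: (N, d) learned representations; labels: gesture or skill label per clip."""
    emb = umap.UMAP(n_components=2, random_state=0).fit_transform(reps)
    labels = np.asarray(labels)
    for lab in np.unique(labels):
        m = labels == lab
        plt.scatter(emb[m, 0], emb[m, 1], s=5, label=str(lab))  # one color per label
    plt.legend(); plt.title(title); plt.show()
```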
### 6.2 Skill recognition
We test the efficacy of our representations in learning surgeon skill levels.
In particular, we train XGBoost, a gradient-boosting-based classifier
introduced in chen2016xgboost , on the representations for a 3-class
classification task on the self-reported user skill. We do not train the
encoder while training the classifier. We repeat our experiment for 5 randomly
chosen train-test splits. We report our findings as an aggregate of the five
random splits in Table 1 (in the format of mean $\pm$ standard deviation). We
observe that the results indicate a significant retention of information
related to surgeon skill. Further, a major portion of the classification error
is contributed by surgeons with “intermediate” skill. This reaffirms the
observation made in Section 6.1 that the embeddings in this particular skill
category are spread between “beginner” and “expert” and do not necessarily
cluster tightly, unlike the other two skill levels, which makes them
significantly harder to classify.
Table 1: Skill recognition results Dataset | Accuracy | Precision | Recall | F-1
---|---|---|---|---
Knot-Tying | 0.768 $\pm$ 0.0303 | 0.766 $\pm$ 0.0336 | 0.768 $\pm$ 0.0303 | 0.758 $\pm$ 0.0311
Needle-Passing | 0.808 $\pm$ 0.0335 | 0.808 $\pm$ 0.0335 | 0.808 $\pm$ 0.0335 | 0.800 $\pm$ 0.03535
Suturing | 0.812 $\pm$ 0.0228 | 0.810 $\pm$ 0.0254 | 0.812 $\pm$ 0.0228 | 0.804 $\pm$ 0.0267
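The probing protocol used here (and re-used for gesture recognition in Section 6.3 and transfer learning in Section 6.4) can be sketched as follows. The XGBoost hyperparameters and the 80/20 split proportion are assumptions, since only the number of random splits is stated; labels are assumed to be integer-encoded.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

def frozen_probe(reps, labels, n_splits=5, test_size=0.2):
    """Train an XGBoost probe on frozen representations over several random splits."""
    accs, f1s = [], []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            reps, labels, test_size=test_size, random_state=seed, stratify=labels)
        clf = XGBClassifier().fit(X_tr, y_tr)   # encoder stays frozen; only the probe is trained
        pred = clf.predict(X_te)
        accs.append(accuracy_score(y_te, pred))
        f1s.append(f1_score(y_te, pred, average="weighted"))
    return np.mean(accs), np.std(accs), np.mean(f1s), np.std(f1s)
```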
### 6.3 Gesture recognition
We test the efficacy of our representations in learning different gestures
from the videos. Similar to the skill recognition task described in Section
6.2, we train an XGBoost classifier on the representations for gesture
classification. We freeze the encoder weights while training the classifier.
We repeat our experiment for 5 randomly chosen train-test splits. We report
our findings as an aggregate of the five random splits in Table 2. We observe
that the results are promising, indicating good retention of gesture-related
information in the representations. Further, we observe that quantitatively,
our gesture classification results outperform those reported in
dipietro2018unsupervised . We attribute this to the fact that our
representations encode multimodal information from both kinematics and video
streams.
Table 2: Gesture recognition results Dataset | Accuracy | Precision | Recall | F-1
---|---|---|---|---
Knot-Tying | 0.762 $\pm$ 0.0487 | 0.774 $\pm$ 0.0531 | 0.762 $\pm$ 0.0487 | 0.758 $\pm$ 0.0496
Needle-Passing | 0.696 $\pm$ 0.0328 | 0.694 $\pm$ 0.0376 | 0.696 $\pm$ 0.0305 | 0.674 $\pm$ 0.0351
Suturing | 0.778 $\pm$ 0.0239 | 0.782 $\pm$ 0.0259 | 0.778 $\pm$ 0.0239 | 0.754 $\pm$ 0.0261
### 6.4 Transfer learning for gesture recognition
We measure the robustness of our learned algorithm through a transfer-learning
based gesture recognition task. We generate representations using an encoder
network trained on one dataset (e.g., Knot Tying) and use them for gesture
classification on the other two datasets (Needle Passing and Suturing in this
case) similar to the gesture recognition task described in Section 6.3. We
repeat the gesture classification experiment for 5 randomly chosen train-test
splits. As in the original gesture classification experiment, we train an
XGBoost classifier on the representations generated using the encoder
initialized with transferred weights. We do not retrain the encoder while
training the classifier. We report our findings as an aggregate of the results
of the 5 random split based experiments as follows: results obtained using an
encoder trained on the Needle Passing dataset are reported in Table 3, those
from the Knot Tying dataset in Table 4, and those from the Suturing dataset in
Table 5. We observe that there is a slight decrease in performance after
transfer learning. However, the results are comparable to the baseline results
and those obtained from other state-of-the-art unsupervised surgical learning
works. This indicates that promising results can be achieved by enabling end-
to-end training (including training the encoder) during the task-specific
learning.
Table 3: Transfer learning (from Needle Passing) for gesture recognition Dataset | Accuracy | Precision | Recall | F-1
---|---|---|---|---
Knot-Tying | 0.568 $\pm$ 0.03271 | 0.608 $\pm$ 0.05403 | 0.568 $\pm$ 0.03271 | 0.570 $\pm$ 0.03605
Suturing | 0.598 $\pm$ 0.01923 | 0.570 $\pm$ 0.02738 | 0.598 $\pm$ 0.01923 | 0.557 $\pm$ 0.02509
Table 4: Transfer learning (from Knot Tying) for gesture recognition Dataset | Accuracy | Precision | Recall | F-1
---|---|---|---|---
Needle-Passing | 0.446 $\pm$ 0.05319 | 0.464 $\pm$ 0.06188 | 0.444 $\pm$ 0.05176 | 0.430 $\pm$ 0.05148
Suturing | 0.624 $\pm$ 0.02302 | 0.634 $\pm$ 0.04560 | 0.624 $\pm$ 0.02302 | 0.590 $\pm$ 0.02345
Table 5: Transfer learning (from Suturing) for gesture recognition Dataset | Accuracy | Precision | Recall | F-1
---|---|---|---|---
Knot-Tying | 0.648 $\pm$ 0.04338 | 0.648 $\pm$ 0.04338 | 0.648 $\pm$ 0.04338 | 0.650 $\pm$ 0.04062
Needle-Passing | 0.448 $\pm$ 0.04658 | 0.458 $\pm$ 0.03898 | 0.446 $\pm$ 0.04505 | 0.434 $\pm$ 0.04605
## 7 Discussion and future work
Surgical robotics is an exciting area for applying multi-modal, self-
supervised learning techniques due to the complexity of the task, multiple
modes of information and lack of dependence on expert annotated data. The
choice of how to achieve the self-supervised learning can vary. Previous works
in the literature have used various proxy learning objectives (which we denote
by $\mathcal{L}$ in Equation 1). For example, the sequential nature
of the video/kinematics frames has been utilized to devise a future prediction
task in dipietro2018unsupervised , where the learned representations were used
in gesture recognition and information retrieval. In another example, the
audio-visual correspondence task as described in arandjelovic2018objects ;
arandjelovic2017look has been used to improve audio classification and enable
sound localization in videos. This can be transformed into a kinematics/visual
correspondence task to learn the surgical representations. We observe that
this choice is sensitive to the dataset and to the nature of the information
that is expected from the learnt representations. For example, we observe that the
video-kinematics alignment task using triplets loss as described in
arandjelovic2018objects is a bad learning objective for surgical robotics
data as provided in the JIGSAWS dataset because surgery involves a repetition
of similar gestures that have similar kinematics. This creates a problem where
there is no easy one-to-one correspondence between the video and kinematics
sets, rendering the triplet loss based training to be very difficult. A
possible way to circumvent this problem is to generate negative samples in
each triplet that are sufficiently distant from the base sample by some
distance metric. Further, we observe that training RNNs takes more
computational resources and cannot be parallelized as easily as training CNNs.
Hence, we focus on using an architecture primarily built using
CNNs and fully-connected networks.
We introduce an encoder-decoder based self-supervised learning system that
transforms the video stream into the corresponding kinematics, as explained in
detail in Sections 3 and 4. We further demonstrate empirically that the learnt
representations effectively encode information on surgical gestures and
surgeon skill. Also, the representations are sufficiently robust across
multiple tasks, thus facilitating transfer learning. This also raises the
possibility that using an end-to-end, task-specific training methodology can
further improve the accuracy in all the mentioned tasks. Finally, we visually
observe that the representations form clusters corresponding to a specific
gesture and skill-level.
There are a few limitations to our work. One is that the JIGSAWS dataset has a
single camera perspective in all the videos, thus making it impossible to test
the learning of the camera transformation matrix. Secondly, our encoder
network does not parse 3D visual information, which has been provided in the
JIGSAWS dataset through a left and right camera feed for each video. This
could possibly lead to learning information on new tasks such as depth
perception and also simultaneously improve accuracy on the currently existing
tasks. Finally, while our work does solve the surgeon skill classification
task, it does not address the specific differences in surgical behaviour
(e.g., differences in kinematics) that could potentially explain the gap in
surgical skill and be used as milestones in surgeon training.
There also are additional applications of the proposed representation that
future work could explore, such as future task prediction, information
retrieval, surgical activity segmentation etc. Training these representations
on a sufficiently large and diverse surgical dataset can potentially lead to a
standardized architecture for parsing surgical activity that is highly
transferable across datasets and tasks. This could in turn enable a universal
technology that tracks surgical progress in real-time, giving feedback
regarding possible mistakes, surgical scene depth, next gesture suggestion
etc. with a high accuracy.
## References
* (1) Arandjelovic, R., Zisserman, A.: Look, listen and learn. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 609–617 (2017)
* (2) Arandjelovic, R., Zisserman, A.: Objects that sound. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 435–451 (2018)
* (3) Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 785–794 (2016)
* (4) Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
* (5) DiPietro, R., Hager, G.D.: Unsupervised learning for surgical motion by learning to predict the future. In: International conference on medical image computing and computer-assisted intervention, pp. 281–288. Springer (2018)
* (6) DiPietro, R., Lea, C., Malpani, A., Ahmidi, N., Vedula, S.S., Lee, G.I., Lee, M.R., Hager, G.D.: Recognizing surgical activities with recurrent neural networks. In: International conference on medical image computing and computer-assisted intervention, pp. 551–558. Springer (2016)
* (7) Farnebäck, G.: Two-frame motion estimation based on polynomial expansion. In: Scandinavian conference on Image analysis, pp. 363–370. Springer (2003)
* (8) Gao, Y., Vedula, S.S., Reiley, C.E., Ahmidi, N., Varadarajan, B., Lin, H.C., Tao, L., Zappella, L., Béjar, B., Yuh, D.D., et al.: Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling. In: MICCAI Workshop: M2CAI, vol. 3, p. 3 (2014)
* (9) Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: Video action transformer network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 244–253 (2019)
* (10) Guthart, G.S., Salisbury, J.K.: The intuitive/sup tm/telesurgery system: overview and application. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), vol. 1, pp. 618–621. IEEE (2000)
* (11) Ji, S., Xu, W., Yang, M., Yu, K.: 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence 35(1), 221–231 (2012)
* (12) Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014)
* (13) Mazomenos, E., Watson, D., Kotorov, R., Stoyanov, D.: Gesture classification in robotic surgery using recurrent neural networks with kinematic information. CRAS (2018)
* (14) McInnes, L., Healy, J., Melville, J.: Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018)
* (15) Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems, pp. 3111–3119 (2013)
* (16) Sarikaya, D., Guru, K.A., Corso, J.J.: Joint surgical gesture and task classification with multi-task and multimodal learning. arXiv preprint arXiv:1805.00721 (2018)
* (17) Sarikaya, D., Jannin, P.: Surgical gesture recognition with optical flow only. arXiv preprint arXiv:1904.01143 (2019)
* (18) Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Advances in neural information processing systems, pp. 568–576 (2014)
* (19) Varadarajan, B., Reiley, C., Lin, H., Khudanpur, S., Hager, G.: Data-derived models for segmentation with application to surgical assessment and training. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 426–434. Springer (2009)
* (20) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in neural information processing systems, pp. 5998–6008 (2017)
$\displaystyle-\eta\Delta y_{s}+\nu_{0}y_{s}=f\text{ in }\Omega,\quad
y_{s}=0\text{ on }\partial\Omega.$ (A.2)
The weak formulation corresponding to (A.2) seeks $y_{s}\in H^{1}_{0}(\Omega)$
such that
$\displaystyle\eta\langle\nabla y_{s},\nabla\phi\rangle+\nu_{0}\langle
y_{s},\phi\rangle=\langle f,\phi\rangle\quad\text{ for all }\phi\in
H^{1}_{0}(\Omega).$
For $\nu_{0}\geq 0,$ using the Poincaré inequality, it can be observed that
$\eta\|\nabla\phi\|^{2}+\nu_{0}\|\phi\|^{2}\geq\eta\|\nabla\phi\|^{2}.$ Thus,
an application of the Lax-Milgram theorem [25, Theorem 3.1.4] leads to the
existence of a unique solution $y_{s}\in H^{1}_{0}(\Omega)$, and elliptic
regularity [23, Theorem 4, Chapter 5] implies $y_{s}\in H^{2}(\Omega)$ with
$\displaystyle\|y_{s}\|_{H^{2}(\Omega)}\leq C\|f\|,$
for some $C>0.$
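For completeness, the underlying energy estimate is obtained by taking $\phi=y_{s}$ in the weak formulation and using the Poincaré inequality (with constant $C_{p}$):
$\displaystyle\eta\|\nabla y_{s}\|^{2}\leq\eta\|\nabla y_{s}\|^{2}+\nu_{0}\|y_{s}\|^{2}=\langle f,y_{s}\rangle\leq\|f\|\,\|y_{s}\|\leq C_{p}\|f\|\,\|\nabla y_{s}\|,$
so that $\|\nabla y_{s}\|\leq\frac{C_{p}}{\eta}\|f\|$ and hence $\|y_{s}\|_{H^{1}(\Omega)}\leq C(\eta,C_{p})\|f\|;$ the $H^{2}$ bound then follows from elliptic regularity.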
Step 2. Now, for a given $\psi\in H^{2}(\Omega),$ consider
$\displaystyle-\eta\Delta y_{s}+\nu_{0}y_{s}=f+g(\psi)\text{ in }\Omega,\quad
y_{s}=0\text{ on }\partial\Omega,$ (A.3)
where $g(\psi)=-\psi\textbf{v}\cdot\nabla\psi.$ Our aim is to show that (A.3)
has a unique solution $y_{s}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ by showing
$g(\psi)\in L^{2}(\Omega)$ and using Step 1. Note that,
$g(\psi)=\displaystyle\sum_{i=1}^{d}\psi v_{i}\frac{\partial\psi}{\partial
x_{i}},$ where $\textbf{v}=(v_{1},v_{2},\ldots,v_{d})^{T}\in\mathbb{R}^{d}.$
Hölder's inequality leads to
$\displaystyle\left\|\psi v_{i}\frac{\partial\psi}{\partial
x_{i}}\right\|^{2}=\int_{\Omega}\left|\psi v_{i}\frac{\partial\psi}{\partial
x_{i}}\right|^{2}dx\leq\left(\int_{\Omega}|\psi|^{4}dx\right)^{1/2}\left(\int_{\Omega}\left|v_{i}\frac{\partial\psi}{\partial
x_{i}}\right|^{4}dx\right)^{1/2}=\|\psi\|_{L^{4}(\Omega)}^{2}\left\|v_{i}\frac{\partial\psi}{\partial
x_{i}}\right\|_{L^{4}(\Omega)}^{2}.$
for all $1\leq i\leq d.$ The Sobolev embedding (2.2) for
$\Omega\subset\mathbb{R}^{d},\,d\in\\{1,2,3\\}$ implies
$\displaystyle\|g(\psi)\|\leq\|\psi\|_{L^{4}(\Omega)}|\textbf{v}|\|\nabla\psi\|_{L^{4}(\Omega)}\leq
s_{0}^{2}|\textbf{v}|\|\psi\|_{H^{1}(\Omega)}\|\psi\|_{H^{2}(\Omega)}.$ (A.4)
Thus, for any given $\psi\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega),$ $g(\psi)\in
L^{2}(\Omega),$ and hence using Step 1, there exists a unique solution
$y_{s}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ satisfying
$\displaystyle\|y_{s}\|_{H^{2}(\Omega)}\leq
C\|f\|+s_{0}^{2}|\textbf{v}|\|\psi\|_{H^{1}(\Omega)}\|\psi\|_{H^{2}(\Omega)},$
(A.5)
for some $C>0$.
Step 3. For any $\rho>0,$ define $D_{\rho}=\\{y\in H^{2}(\Omega)\cap
H^{1}_{0}(\Omega)\,|\,\|y\|_{H^{2}(\Omega)}\leq\rho\\}.$ In this step, we find
a $\rho_{0}>0$ such that for all $0<\rho\leq\rho_{0},$ the map
$S:D_{\rho}\longrightarrow D_{\rho}$ defined by $S(\psi)=y_{s}^{\psi},$ where
$y_{s}^{\psi}$ is the solution of (A.3), is well defined and a contraction. For
$f\in L^{2}(\Omega)$ with $\|f\|_{L^{2}(\Omega)}\leq\frac{\rho}{2C}$ and
$\rho\leq\frac{1}{2|\textbf{v}|s_{0}^{2}},$ (A.5) implies
$\|S(\psi)\|_{H^{2}(\Omega)}=\|y_{s}^{\psi}\|_{H^{2}(\Omega)}\leq
C\|f\|_{L^{2}(\Omega)}+s_{0}^{2}|\textbf{v}|\|\psi\|_{H^{1}(\Omega)}\|\psi\|_{H^{2}(\Omega)}\leq\rho.$
Therefore, $S$ is a self map.
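Indeed, using $\|\psi\|_{H^{1}(\Omega)}\leq\|\psi\|_{H^{2}(\Omega)}\leq\rho,$ the assumptions $\|f\|_{L^{2}(\Omega)}\leq\frac{\rho}{2C}$ and $\rho\leq\frac{1}{2|\textbf{v}|s_{0}^{2}}$ give
$\displaystyle C\|f\|_{L^{2}(\Omega)}+s_{0}^{2}|\textbf{v}|\,\|\psi\|_{H^{1}(\Omega)}\|\psi\|_{H^{2}(\Omega)}\leq\frac{\rho}{2}+s_{0}^{2}|\textbf{v}|\,\rho^{2}\leq\frac{\rho}{2}+\frac{\rho}{2}=\rho.$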
Now, to show contraction, let $\psi^{1},\,\psi^{2}\in D_{\rho}$ be given and
let $y_{s}^{\psi^{1}}$ and $y_{s}^{\psi^{2}}$ be the corresponding solutions
of (A.3). Then, $y_{s}^{\psi^{1}}-y_{s}^{\psi^{2}}$ satisfies
$\displaystyle-\eta\Delta(y_{s}^{\psi^{1}}-y_{s}^{\psi^{2}})+\nu_{0}(y_{s}^{\psi^{1}}-y_{s}^{\psi^{2}})=g(\psi^{1})-g(\psi^{2})\text{
in }\Omega,\,y_{s}^{\psi^{1}}-y_{s}^{\psi^{2}}=0\text{ on }\partial\Omega.$
From Step 1, we have
$\displaystyle\|S(\psi^{1})-S(\psi^{2})\|_{H^{2}(\Omega)}=\|y_{s}^{\psi^{1}}-y_{s}^{\psi^{2}}\|_{H^{2}(\Omega)}\leq
C\|g(\psi^{1})-g(\psi^{2})\|_{L^{2}(\Omega)}.$
Note that
$\displaystyle
g(\psi^{1})-g(\psi^{2})=\psi^{1}\textbf{v}\cdot\nabla\psi^{1}-\psi^{2}\textbf{v}\cdot\nabla\psi^{2}=\displaystyle\sum_{i=1}^{d}\left(\psi^{1}v_{i}\left(\frac{\partial\psi^{1}}{\partial
x_{i}}-\frac{\partial\psi^{2}}{\partial
x_{i}}\right)+(\psi^{1}-\psi^{2})v_{i}\frac{\partial\psi^{2}}{\partial
x_{i}}\right).$
We estimate $g(\psi^{1})-g(\psi^{2})$ in the same way as $g(\psi)$ in (A.4) and
obtain
$\displaystyle\|g(\psi^{1})-g(\psi^{2})\|_{L^{2}(\Omega)}$ $\displaystyle\leq
s_{0}^{2}|\textbf{v}|\left(\|\psi^{1}\|_{H^{1}(\Omega)}\|\psi^{1}-\psi^{2}\|_{H^{2}(\Omega)}+\|\psi^{1}-\psi^{2}\|_{H^{1}(\Omega)}\|\psi^{2}\|_{H^{2}(\Omega)}\right)$
$\displaystyle\leq
2s_{0}^{2}|\textbf{v}|\rho\|\psi^{1}-\psi^{2}\|_{H^{2}(\Omega)}.$
Therefore, choosing $\rho_{0}=\frac{1}{4s_{0}^{2}|\textbf{v}|},$ we obtain for
all $0<\rho\leq\rho_{0},$
$\displaystyle\|S(\psi^{1})-S(\psi^{2})\|_{H^{2}(\Omega)}\leq
C\|g(\psi^{1})-g(\psi^{2})\|_{L^{2}(\Omega)}\leq\frac{1}{2}\|\psi^{1}-\psi^{2}\|_{H^{2}(\Omega)}.$
Hence, an application of the Banach fixed point theorem yields a unique fixed point of $S$ in $D_{\rho}$, which concludes the proposition. ∎
### A.2. Proof of Theorem 3.2
We first determine $\theta_{0}\in(\frac{\pi}{2},\pi)$ such that (a) holds. Set
$\theta_{0}=\pi-\tan^{-1}\left(\frac{\alpha_{1}}{\eta/2}\right),$ where
$\alpha_{1}>0$ is as in (3.3). Note that $\theta_{0}\in(\frac{\pi}{2},\pi).$
(a) In the next three steps, we show that
$\Sigma^{c}(-\widehat{\nu};\theta_{0})\subset\rho(\mathcal{A})$ and the
resolvent estimate holds.
Step 1. ($\mu\in\Sigma^{c}(-\widehat{\nu};\theta_{0})$ with
$\Re(\mu)\geq-\widehat{\nu}$). Let $\mu\in\mathbb{C}$ be arbitrary with
$\Re(\mu)\geq-\widehat{\nu}.$ First, we show that for any given $g\in
L^{2}(\Omega),$ there exists a unique $z\in D(\mathcal{A})$ such that $(\mu
I-\mathcal{A})z=g.$ That is, we want to solve
$\displaystyle a(z,\phi)+\mu\langle z,\phi\rangle=\langle g,\phi\rangle\text{
for all }\phi\in H^{1}_{0}(\Omega),$ (A.6)
for $z,$ where $a(\cdot,\cdot)$ is as defined in (3.2). As
$\Re(\mu)\geq-\widehat{\nu},$ (3.6) implies
$\Re\left(a(\phi,\phi)+\mu\langle\phi,\phi\rangle\right)$
$\geq\Re\left(a(\phi,\phi)\right)-\widehat{\nu}\|\phi\|^{2}\geq\frac{\eta}{2}\|\nabla\phi\|^{2}.$
Therefore, by the Lax-Milgram theorem ([21, 15, Theorem 1, Section 1, Chapter
VII]) for any given $g\in L^{2}(\Omega),$ there exists a unique solution $z\in
H^{1}_{0}(\Omega)$ of (A.6). Note that, $(\mu I-\mathcal{A})z=g$ implies
$\displaystyle-\eta\Delta z+y_{s}\textbf{v}\cdot\nabla z+\textbf{v}\cdot\nabla
y_{s}z+(\mu+\nu_{0})z=g\text{ in }\Omega,\,z=0\text{ on }\partial\Omega.$
(A.7)
Using regularity results for elliptic equations [25], we have $z\in
D(\mathcal{A}).$ Substituting $\phi=z$ in (A.6) and using (3.6) together with
the Cauchy-Schwarz inequality, we obtain
$\displaystyle\frac{\eta}{2}\|\nabla z\|^{2}\leq\Re\left(a(z,z)+\mu\langle
z,z\rangle\right)\leq\|g\|\,\|z\|.$
This along with the Poincaré inequality leads to
$\displaystyle\|z\|\leq C\|g\|,$ (A.8)
for some $C=C(\eta,C_{p})>0.$
Step 2. Resolvent estimate for
$\Re(\mu)\geq-\widehat{\nu},\,\mu\neq-\widehat{\nu}$. Let
$\mu=-\widehat{\nu}+\rho e^{i\theta},$ $\rho>0$ and
$-\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}.$ Substitute
$\phi=e^{i\frac{\theta}{2}}z$ in (A.6) to obtain
$\displaystyle a(z,e^{i\frac{\theta}{2}}z)+(-\widehat{\nu}+\rho e^{i\theta})\langle z,e^{i\frac{\theta}{2}}z\rangle=\langle g,e^{i\frac{\theta}{2}}z\rangle.$ (A.9)
From the definition of $a(\cdot,\cdot)$ in (3.2) and from (A.9), we obtain
$\displaystyle\cos(\theta/2)\eta\|\nabla
z\|^{2}+(\nu_{0}-\widehat{\nu}+\rho)\cos(\theta/2)\|z\|^{2}$
$\displaystyle\leq\left|\Re\left(a(z,e^{i\frac{\theta}{2}}z)+(-\widehat{\nu}+\rho
e^{i\theta})\langle z,e^{i\frac{\theta}{2}}z\rangle\right)\right|$
$\displaystyle\qquad+\left|\langle y_{s}\textbf{v}\cdot\nabla
z,e^{i\frac{\theta}{2}}z\rangle+\langle\textbf{v}\cdot\nabla
y_{s}z,e^{i\frac{\theta}{2}}z\rangle\right|$
$\displaystyle\leq\|g\|\,\|z\|+\left|\langle y_{s}\textbf{v}\cdot\nabla
z,e^{i\frac{\theta}{2}}z\rangle+\langle\textbf{v}\cdot\nabla
y_{s}z,e^{i\frac{\theta}{2}}z\rangle\right|.$
This inequality, together with an estimate similar to the one used to obtain (3.5), leads to
$\displaystyle\cos(\theta/2)\left(\eta\|\nabla
z\|^{2}+(\nu_{0}-\widehat{\nu}+\rho)\|z\|^{2}\right)$
$\displaystyle\leq\|g\|\,\|z\|+\frac{\eta}{\sqrt{2}}\|\nabla
z\|^{2}+\frac{|\textbf{v}|^{2}}{\sqrt{2}\eta}(C_{2}^{2}+s_{0}^{4})\|y_{s}\|_{H^{2}(\Omega)}^{2}\|z\|^{2}.$
In view of (3.4), and the fact that
$\cos(\theta/2)\geq\cos(\pi/4)\geq\frac{1}{\sqrt{2}}$ (as
$-\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}$), we have
$\displaystyle\rho\cos(\theta/2)\|z\|^{2}\leq C\|g\|\,\|z\|.$
Noting that $\rho=|\mu+\widehat{\nu}|$ and
$\cos(\theta/2)\geq\cos(\pi/4)\geq\frac{1}{\sqrt{2}},$ we have
$\displaystyle\|R(\mu,\mathcal{A})\|_{\mathcal{L}(L^{2}(\Omega))}\leq\frac{C}{|\mu+\widehat{\nu}|}\text{
for all }\mu(\neq-\widehat{\nu})\text{ with }\Re(\mu)\geq-\widehat{\nu},$
(A.10)
for some positive constant $C$ independent of $\mu.$
Step 3. Case of any $\mu\in\Sigma^{c}(-\widehat{\nu};\theta_{0})$ with
$\Re(\mu)<-\widehat{\nu}$. The proof is analogous to Step 3 of [3, proof of
Theorem 3.4(a) in Section A.1].
(b) - (c) The proofs of (b) and (c) follow by utilizing (a) and the proof of [3,
Theorem 3.4]. ∎
## Declaration
Conflict of interest. This work was done when I was a Ph.D. student at the
Department of Mathematics, IIT Bombay, and during that time, I was supported
by an institute TA fellowship.
Acknowledgment. I am thankful to Prof. Neela Nataraj, Prof. Debanjana Mitra,
and Prof. Mythily Ramaswamy for their valuable insights, feedback, and
encouragement during various stages of this research. Their dedication and
expertise played a pivotal role in the successful completion of this study.
## References
* [1] S. Agmon, _Lectures on elliptic boundary value problems_ , AMS Chelsea Publishing, Providence, RI, 2010, Prepared for publication by B. Frank Jones, Jr. with the assistance of George W. Batten, Jr., Revised edition of the 1965 original.
* [2] W. Akram and D. Mitra, _Local stabilization of viscous Burgers equation with memory_ , Evol. Equ. Control Theory 11 (2022), no. 3, 939–973.
* [3] W. Akram, D. Mitra, N. Nataraj, and M. Ramaswamy, _Feedback stabilization of a parabolic coupled system and its numerical study_ , Math. Control Relat. Fields 14 (2024), no. 2, 695–746.
  * [4] M. Badra, _Stabilisation par feedback et approximation des équations de Navier–Stokes_ , Ph.D. Thesis, Toulouse (2006), i–260.
* [5] M. Badra and T. Takahashi, _Stabilization of parabolic nonlinear systems with finite dimensional feedback or dynamical controllers: application to the Navier-Stokes system_ , SIAM J. Control Optim. 49 (2011), no. 2, 420–463.
* [6] by same author, _On the Fattorini criterion for approximate controllability and stabilizability of parabolic systems_ , ESAIM Control Optim. Calc. Var. 20 (2014), no. 3, 924–956.
* [7] A. Balogh and M. Krstić, _Burgers’ equation with nonlinear boundary feedback: $H^{1}$ stability, well-posedness and simulation_, vol. 6, 2000, Dedicated to Professor Robert E. Skelton, pp. 189–200.
* [8] V. Barbu, _Controllability and stabilization of parabolic equations_ , Progress in Nonlinear Differential Equations and their Applications, vol. 90, Birkhäuser/Springer, Cham, 2018, Subseries in Control.
* [9] V. Barbu, I. Lasiecka, and R. Triggiani, _Tangential boundary stabilization of Navier-Stokes equations_ , Mem. Amer. Math. Soc. 181 (2006), no. 852, x+128.
* [10] V. Barbu and R. Triggiani, _Internal stabilization of Navier-Stokes equations with finite-dimensional controllers_ , Indiana Univ. Math. J. 53 (2004), no. 5, 1443–1494.
* [11] A. Bensoussan, G. Da Prato, M. C. Delfour, and S. K. Mitter, _Representation and control of infinite dimensional systems_ , second ed., Systems & Control: Foundations & Applications, Birkhäuser Boston, Inc., Boston, MA, 2007.
* [12] T. Breiten and K. Kunisch, _Riccati-based feedback control of the monodomain equations with the FitzHugh-Nagumo model_ , SIAM J. Control Optim. 52 (2014), no. 6, 4057–4081.
* [13] by same author, _Compensator design for the monodomain equations with the FitzHugh-Nagumo model_ , ESAIM Control Optim. Calc. Var. 23 (2017), no. 1, 241–262.
* [14] J.-M. Buchot, J.-P. Raymond, and J. Tiago, _Coupling estimation and control for a two dimensional Burgers type equation_ , ESAIM Control Optim. Calc. Var. 21 (2015), no. 2, 535–560.
* [15] J. A. Burns and S. Kang, _A control problem for burgers’ equation with bounded input/output_ , Nonlinear Dyn. 2 (1991), 235–262.
* [16] Ch. I. Byrnes, D. S. Gilliam, and V. I. Shubov, _On the global dynamics of a controlled viscous Burgers’ equation_ , J. Dynam. Control Systems 4 (1998), no. 4, 457–519.
* [17] J. Caldwell, P. Wanless, and A. E. Cook, _A finite element approach to Burgers’ equation_ , Appl. Math. Modelling 5 (1981), no. 3, 189–193.
* [18] John R. Cannon, Richard E. Ewing, Yinnian He, and Yanping Lin, _A modified nonlinear Galerkin method for the viscoelastic fluid motion equations_ , Internat. J. Engrg. Sci. 37 (1999), no. 13, 1643–1662.
* [19] Y. Chen and T. Zhang, _A weak Galerkin finite element method for Burgers’ equation_ , J. Comput. Appl. Math. 348 (2019), 103–119.
* [20] Zhangxin Chen, _Finite element methods and their applications_ , Scientific Computation, Springer-Verlag, Berlin, 2005.
* [21] R. Dautray and J.L. Lions, _Mathematical analysis and numerical methods for science and technology. Vol. 2_ , Springer-Verlag, Berlin (1988), xvi+561, Functional and variational methods, With the collaboration of Michel Artola, Marc Authier, Philippe Bénilan, Michel Cessenat, Jean Michel Combes, Hélène Lanchon, Bertrand Mercier, Claude Wild and Claude Zuily, Translated from the French by Ian N. Sneddon.
* [22] A. Dogan, _A Galerkin finite element approach to Burgers’ equation_ , Appl. Math. Comput. 157 (2004), no. 2, 331–346.
* [23] L. C. Evans, _Partial differential equations_ , second ed., Graduate Studies in Mathematics, vol. 19, American Mathematical Society, Providence, RI, 2010.
* [24] T. Kato, _Perturbation theory of linear operators_ , Springer-Verlag, New York (1966).
* [25] S. Kesavan, _Topics in functional analysis and applications_ , John Wiley & Sons, Inc., New York, 1989.
* [26] M. Kroller and K. Kunisch, _Convergence rates for the feedback operators arising in the linear quadratic regulator problem governed by parabolic equations_ , SIAM J. Numer. Anal. 28 (1991), no. 5, 1350–1385.
* [27] M. Krstic, _On global stabilization of Burgers’ equation by boundary control_ , Systems Control Lett. 37 (1999), no. 3, 123–141.
* [28] S. Kundu and A. K. Pani, _Finite element approximation to global stabilization of the Burgers’ equation by Neumann boundary feedback control law_ , Adv. Comput. Math. 44 (2018), no. 2, 541–570.
* [29] by same author, _Global stabilization of BBM-Burgers’ type equations by nonlinear boundary feedback control laws: theory and finite element error analysis_ , J. Sci. Comput. 81 (2019), no. 2, 845–880.
  * [30] by same author, _Global stabilization of two dimensional viscous Burgers’ equation by nonlinear Neumann boundary feedback control and its finite element analysis_ , J. Sci. Comput. 84 (2020), no. 3, Paper No. 45, 29.
* [31] I. Lasiecka and R. Triggiani, _The regulator problem for parabolic equations with Dirichlet boundary control. I. Riccati’s feedback synthesis and regularity of optimal solution_ , Appl. Math. Optim. 16 (1987), no. 2, 147–168.
* [32] by same author, _The regulator problem for parabolic equations with Dirichlet boundary control. II. Galerkin approximation_ , Appl. Math. Optim. 16 (1987), no. 3, 187–216.
* [33] by same author, _Numerical approximations of algebraic Riccati equations for abstract systems modelled by analytic semigroups, and applications_ , Math. Comp. 57 (1991), no. 196, 639–662, S13–S37.
* [34] by same author, _Control theory for partial differential equations: continuous and approximation theories. I_ , 74 (2000), xxii+644+I4, Abstract parabolic systems.
* [35] by same author, _Control theory for partial differential equations: continuous and approximation theories. II_ , 75 (2000), i–xxii and 645–1067 and I1–I4, Abstract hyperbolic-like systems over a finite time horizon.
* [36] H. V. Ly, K. D. Mease, and E. S. Titi, _Distributed and boundary control of the viscous Burgers’ equation_ , Numer. Funct. Anal. Optim. 18 (1997), no. 1-2, 143–188.
* [37] P. S. Mantri, N. Nataraj, and A. K. Pani, _A qualocation method for Burgers’ equation_ , J. Comput. Appl. Math. 213 (2008), no. 1, 1–13.
* [38] D. Matthes and S. Plazotta, _A variational formulation of the BDF2 method for metric gradient flows_ , ESAIM Math. Model. Numer. Anal. 53 (2019), no. 1, 145–172.
* [39] A. K. Pany, N. Nataraj, and S. Singh, _A new mixed finite element method for Burgers’ equation_ , J. Appl. Math. Comput. 23 (2007), no. 1-2, 43–55.
* [40] J.-P. Raymond, _Feedback boundary stabilization of the two-dimensional Navier-Stokes equations_ , SIAM J. Control Optim. 45 (2006), no. 3, 790–828.
* [41] J.-P. Raymond and L. Thevenet, _Boundary feedback stabilization of the two dimensional Navier-Stokes equations with finite dimensional controllers_ , Discrete Contin. Dyn. Syst. 27 (2010), no. 3, 1159–1187.
* [42] M. Renardy and R. C. Rogers, _An introduction to partial differential equations_ , Springer-Verlag, New York 13 (2004), xiv+434.
* [43] L. Thevenet, J.-M. Buchot, and J.-P. Raymond, _Nonlinear feedback stabilization of a two-dimensional Burgers equation_ , ESAIM Control Optim. Calc. Var. 16 (2010), no. 4, 929–955.
* [44] V. Thomée, _Galerkin finite element methods for parabolic problems_ , 25 (2006), xii+370.
* [45] R. Triggiani, _On the stabilizability problem in Banach space_ , J. Math. Anal. Appl. 52 (1975), no. 3, 383–403.
* [46] M. Tucsnak and G. Weiss, _Observation and control for operator semigroups_ , (2009), xii+483.
* [47] S. Volkwein, _Distributed control problems for the Burgers equation_ , Comput. Optim. Appl. 18 (2001), no. 2, 115–140.
# Solving Inverse Problems for Spectral Energy Distributions with Deep
Generative Networks
Agapi Rissaki
National Technical University of Athens
<EMAIL_ADDRESS>
Orestis Pavlou
European University Cyprus
<EMAIL_ADDRESS>
Dimitris Fotakis
National Technical University of Athens
<EMAIL_ADDRESS>
Vicky Papadopoulou
European University Cyprus
<EMAIL_ADDRESS>
Andreas Efstathiou
European University Cyprus
<EMAIL_ADDRESS>
###### Abstract
We propose an end-to-end approach for solving inverse problems for a class of
complex astronomical signals, namely Spectral Energy Distributions (SEDs). Our
goal is to reconstruct such signals from scarce and/or unreliable
measurements. We achieve that by leveraging a learned structural prior in the
form of a Deep Generative Network. Similar methods have been tested almost
exclusively for images which display useful properties (e.g., locality,
periodicity) that are implicitly exploited. However, SEDs lack such properties,
which makes the problem more challenging. We manage to successfully extend the
methods to SEDs using a Generative Latent Optimization model trained with
significantly fewer and corrupted data.
## 1 Introduction
In astrophysics, distributions constructed by energy measurements in different
wavelengths, namely Spectral Energy Distributions (SEDs), are important tools
for studying the physical properties and evolution of astronomical objects.
SEDs can be used for example to determine the luminosity of astronomical
objects, the rate at which galaxies form new stars or the rate at which
supermassive black holes accrete mass to generate energy in quasars [1, 2, 3].
However, the measurement process is prone to statistical (random) as well as
systematic errors, such as background and foreground noise interference, e.g.,
atmospheric absorption and distortion, opaque/obscuring dust, etc. Due to
these factors, as well as technical limitations such as camera sensor
sensitivity, cooling, resolution, etc., SEDs are collected in scarce, often
incomplete datasets. SEDs are compared to physical models in order to find the
best-fit model(s), which provides an insight into the underlying physical
processes and properties of the target. This highlights the importance of
expanding the range and improving the accuracy of the available data points.
In the literature, computational methods have been widely used to enhance SEDs
and handle the experimental error [4].
In recent years, deep learning has proven to be an important tool for
enhancement of real data and in general for solving inverse problems, where
the goal is to reconstruct or correct a signal given an incomplete and/or
noisy version. Specifically for astronomical data, deep learning techniques
have been used mainly for astronomical imaging, such as deblending images of
galaxies [5] or image enhancement [6]. For SEDs, deep learning has been used
in forward problems such as feature extraction [7], but not inverse problems.
In this paper we use well-known deep learning techniques adjusted
appropriately in order to solve various inverse problems for SEDs.
The method we apply is data-driven and utilizes Deep Generative Models as
learned structural priors. More specifically, models like Variational
AutoEncoders (VAEs) [8] and Generative Adversarial Networks (GANs) [9],
trained on large datasets (most frequently of images) are able to extract
information about the underlying data distribution and generate realistic
samples. These models, once trained, can be used as structural priors for
solving inverse problems [10]. Thus, this method requires us to train a high-
quality generative network which can model realistic SEDs, with properties
such as high-frequency variation, irregularity, etc. In this paper, we use the Generative
Latent Optimization framework (GLO) [11] to train a deep generative network
suitable for our needs. The framework allows us to train a high-quality
generative network with more flexibility than a VAE and at the same time
offers training efficiency unlike GANs, which are notoriously hard to train.
In order to train a generative network any state-of-the-art method requires a
high-quality large dataset. However, for the case of SEDs these prerequisites
are unrealistic since the measurement procedure contains innate error,
incompleteness and is particularly expensive. To overcome the issue of
erroneous and/or incomplete samples we propose an end-to-end approach: ($1$) a
preprocessing step where we utilize classical computational methods for
enhancement, e.g., iterative PCA [4], ($2$) the deep learning method described
above. Our approach is useful for a variety of inverse problems and it can
mitigate the long-term cost of solving such problems for SEDs. Furthermore, it
is expected to improve overall performance on these problems even with
significant corruption and/or incompleteness by leveraging the powerful
generalization property as well as the robustness of a deep generative
network.
## 2 Method
Suppose, we collect measurements of the form:
$y=Ax^{*}+\eta\,,$ (1)
where $A$ is a measurement matrix and $\eta$ a noise vector. Our goal is to
reconstruct the signal $x^{*}$ given $A$ and the measurements $y$, thus solving the linear
inverse problem. This formulation usually refers to compressed sensing (where
we assume few measurements are taken) but can also be used to model several real-
world problems concerning SEDs. We tackle the problem for the case of SEDs
using a deep generative network as a structural prior [10], a method that has
been successfully applied to natural images.
### 2.1 Building the Generative Network
To build our generative network we use the Generative Latent Optimization
(GLO) framework [11], which allows us to train a relatively large generator
(sufficiently over-parametrized) in order to achieve good generalization [12].
The framework is based on the manipulation of the generator’s latent space as
well as its parameters using a simple reconstruction loss. We use the GLO
framework as an alternative to GANs which are trained via an adversarial
optimization scheme. Unlike GLO, which consists of a simple loss minimization
back-propagated to the latent space, GANs should ideally converge to an
(approximate) equilibrium which is not guaranteed and/or requires excessive
resources [13]. Thus, when training GANs in practice it is common to examine
the generated samples and stop the training when they are satisfactory. In the
case of images this technique can be easily applied, but for SEDs this is not
feasible. In fact, we use the ability to solve inverse problems as a proxy to
evaluate our trained generator.
Let us examine the training procedure more closely. We train the generator
$G\;:\;\mathcal{Z}\rightarrow\mathcal{X}$, where $\mathcal{Z}$ denotes the
latent space and $\mathcal{X}$ the underlying class of SEDs which is described
by the training set $\\{x_{i}\\}_{i=1}^{N}$. Prior to training, we randomly
initialize the latent codes $z_{i}\in\mathcal{Z}$ from a multi-dimensional
Normal distribution and pair them with each of the samples $x_{i}$. During
training, the generator’s parameters and the latent codes
$\\{z_{i}\\}_{i=1}^{N}$ are jointly optimized, as described by (2). The
optimization is driven by a simple reconstruction loss $\mathcal{L}(\cdot)$,
which in our case is Mean Squared Error (MSE).
$\min\limits_{G}\frac{1}{N}\sum_{i=1}^{N}\min\limits_{z_{i}\in\mathcal{Z}}\left[\mathcal{L}(G(z_{i}),x_{i})\right]$
(2)
More specifically, the gradient of the loss function with respect to the
parameters of the generator and the latent code is back-propagated all the way
through the network and to the latent space. This training procedure makes the
latent space more structurally meaningful and suitable for reconstruction. To
promote this feature, we project the latent codes onto the unit sphere during
training [11].
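A minimal PyTorch sketch of this joint optimization is given below. The depth ($7$ hidden layers), batch size ($64$), learning rates ($0.1$ / $0.01$), batch normalization and unit-sphere projection follow Sections 2.1 and 3.2; the hidden width, weight-decay value and the synthetic placeholder data are illustrative assumptions, not the exact configuration used here.

```python
# Sketch of GLO training (Sec. 2.1): generator parameters and per-sample latent
# codes are optimized jointly with an MSE loss (eq. (2)), and the latent codes
# are projected back onto the unit sphere after each update.
import torch
import torch.nn as nn

N, latent_dim, signal_dim = 3600, 50, 1000   # training SEDs, z-dim, wavelengths

class Generator(nn.Module):
    def __init__(self, latent_dim, signal_dim, hidden=512):
        super().__init__()
        layers, width = [], latent_dim
        for _ in range(7):                    # 7 hidden layers, LeakyReLU + 1d batch norm
            layers += [nn.Linear(width, hidden), nn.BatchNorm1d(hidden), nn.LeakyReLU()]
            width = hidden
        layers.append(nn.Linear(width, signal_dim))   # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

def project_to_sphere(z):
    """Project latent codes onto the unit sphere (GLO convention)."""
    return z / z.norm(dim=1, keepdim=True).clamp_min(1e-12)

x = torch.randn(N, signal_dim)                # placeholder for the preprocessed spectra
z = project_to_sphere(torch.randn(N, latent_dim)).requires_grad_(True)

G = Generator(latent_dim, signal_dim)
opt_G = torch.optim.Adam(G.parameters(), lr=0.1, weight_decay=1e-5)
opt_z = torch.optim.Adam([z], lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(10):                       # the paper trains for 10000 epochs
    perm = torch.randperm(N)
    for start in range(0, N, 64):             # batches of 64 spectra
        idx = perm[start:start + 64]
        opt_G.zero_grad(); opt_z.zero_grad()
        loss = loss_fn(G(z[idx]), x[idx])     # eq. (2): joint minimization
        loss.backward()
        opt_G.step(); opt_z.step()
        with torch.no_grad():
            z[idx] = project_to_sphere(z[idx])   # keep codes on the unit sphere
```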
### 2.2 Reconstruction
Given the generative network $G(\cdot)$, the estimated solution of an inverse
problem (1) could be $\hat{x}=G(\hat{z})$ where:
$\hat{z}=\operatorname*{arg\,min}\limits_{z\in\mathcal{Z}}\frac{1}{m}||AG(z)-y||^{2}_{2}$
(3)
In other words, we (approximately) optimize the latent code $\hat{z}$ such
that the corresponding signal $\hat{x}$ matches the measurements $y$. We
optimize $\hat{z}$ by back-propagating the gradient of the reconstruction loss
through $G(\cdot)$ [10]. Note that we have to project $z$ onto the unit
sphere, similarly to training. In a different approach [10], instead of
explicitly projecting $z$ onto the unit sphere, we can apply a regularization
to implicitly restrict $z$ as follows:
$\hat{z}=\operatorname*{arg\,min}\limits_{z\in\mathcal{Z}}\frac{1}{m}||AG(z)-y||^{2}_{2}+R(z)\,,$
(4)
where $R(z)=\lambda||z||^{2}_{2}$ is the regularization term and $\lambda$ a
balance hyperparameter.
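A corresponding reconstruction sketch, reusing the trained generator `G` and the `project_to_sphere` convention from the training sketch above, is shown below; `reconstruct` and its defaults are hypothetical names chosen for illustration, with `A` and `y` supplied as torch tensors of shapes $(m,1000)$ and $(m,)$.

```python
# Sketch of the reconstruction step (eqs. (3)-(4)): the generator is frozen and
# a latent code is optimized so that A G(z) matches the measurements y.
import torch

def reconstruct(G, A, y, latent_dim=50, steps=1000, lr=0.01, lam=0.0):
    """Return G(z_hat), where z_hat minimizes (1/m)||A G(z) - y||^2 (+ lam ||z||^2)."""
    G.eval()                                  # freeze batch-norm statistics
    m = y.numel()
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A @ G(z).squeeze(0) - y) ** 2).sum() / m
        if lam > 0.0:                         # eq. (4): implicit restriction via ||z||^2
            loss = loss + lam * (z ** 2).sum()
        loss.backward()
        opt.step()
        if lam == 0.0:                        # eq. (3): explicit projection onto the sphere
            with torch.no_grad():
                z /= z.norm().clamp_min(1e-12)
    return G(z).detach().squeeze(0)
```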
## 3 Experiments
### 3.1 Data
We apply our approach to Sloan Digital Sky Survey (SDSS) spectra
111https://www.sdss.org/dr12/spectro/. Specifically, we use the preprocessed
SDSS Corrected Spectra dataset offered by the astroML library [14]. The
dataset contains SEDs for $4000$ galaxies moved to the rest frame, preprocessed
with iterative PCA and resampled to $1000$ wavelengths
($3000-8000\mbox{\AA}$). Although the preprocessing is imperfect, leading to
outlier values, our deep learning approach still displays great performance
due to its robustness. Notice that the original SDSS dataset consists of
innately incomplete and/or corrupted SEDs, due to the nature of the
measurement process. Thus, the original SEDs cannot be used directly for
evaluation purposes because we would lack the ground truth. Instead, we
consider part of the corrected SEDs produced by the preprocessing step as test
data ($10\%$ of the preprocessed dataset), which we use for comparisons in
Section 3.3.
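A possible way to obtain and split this dataset is sketched below, assuming astroML's `fetch_sdss_corrected_spectra()` helper and its `spectra` field; the exact preprocessing pipeline used here may differ.

```python
# Sketch of the data preparation (Sec. 3.1): ~4000 corrected SDSS spectra,
# each resampled to 1000 wavelengths, split 90% / 10% into train and test sets.
import numpy as np
from astroML.datasets import fetch_sdss_corrected_spectra

data = fetch_sdss_corrected_spectra()
spectra = np.asarray(data["spectra"], dtype=np.float32)   # shape ~(4000, 1000)

rng = np.random.default_rng(0)
idx = rng.permutation(len(spectra))
n_train = int(0.9 * len(spectra))
train, test = spectra[idx[:n_train]], spectra[idx[n_train:]]
```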
### 3.2 Training
We train a feed-forward neural network with $7$ hidden layers and LeakyReLU
activations (except for the output layer). We use $90\%$ of the preprocessed
dataset as our training data and train our network for $10000$ epochs with
batches of $64$ spectra. We use Adam optimization [15] with learning rate
$0.1$ for the network’s parameters and $0.01$ for the latent codes as well as
1d-batch normalization to accelerate the training procedure [16]. We choose a
simple Mean Squared Error (MSE) as our loss function and we also apply weight
decay to avoid overfitting.
Our spectra consist of measurements for $1000$ wavelengths. We choose $50$
dimensions for the latent space, which are sufficient for the representation
and allow for efficient training and reconstruction. For the reconstruction,
we limit the optimization procedure to $1000$ epochs and choose a
configuration similar to training. The project is developed using PyTorch
[17].
### 3.3 Results
(a) Inpainting problem: measurements are missing in a continuous window.
(b) Super-resolution problem: randomly selected measurements are missing.
Figure 1: The original SED signals and their reconstruction for $40\%$ missing
information in inpainting (top) and super-resolution (bottom) settings.
Figure 2: MSE of reconstruction for $100$ randomly selected signals on: (a)
inpainting, for different levels of missing information (%). (b) denoising,
for different levels of added noise ($l_{2}$-norm).
We evaluate our approach, both qualitatively and quantitatively, for different
inverse problems, by artificially injecting realistic corruption and/or
incompleteness to our test data. For the qualitative evaluation (Figure 1), we
examine the performance of our algorithm on inverse problems with missing
information. More specifically, the missing information corresponds to either
a continuous window of missing values (inpainting) or randomly chosen values
throughout the entire signal (super-resolution). For each problem we randomly
select four SED signals from our test set, then apply the appropriate masking
and produce a reconstruction. The missing information in both cases is chosen
to be $40\%$ of the total number of measurements that compose each SED. We can
see that for both problems, the reconstruction closely follows the trajectory
of the original signal and in most cases predicts the high-frequency changes
and large spikes.
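The two masking schemes can be realized as row-selections of the identity acting as the measurement matrix $A$ in (1); the sketch below is an illustrative construction (the exact window positions used for the figures are not specified).

```python
# Sketch of the masks used in the qualitative tests: 40% of the 1000 samples of
# an SED are dropped either as one continuous window (inpainting) or at random
# positions (super-resolution).
import numpy as np

def inpainting_mask(n=1000, frac_missing=0.4, rng=np.random.default_rng(0)):
    width = int(frac_missing * n)
    start = int(rng.integers(0, n - width))
    keep = np.ones(n, dtype=bool)
    keep[start:start + width] = False
    return np.eye(n, dtype=np.float32)[keep]   # rows of I for retained samples

def superres_mask(n=1000, frac_missing=0.4, rng=np.random.default_rng(0)):
    keep = rng.permutation(n) >= int(frac_missing * n)
    return np.eye(n, dtype=np.float32)[keep]
```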
In Figure 2, we examine quantitatively the performance of our algorithm on the
problems of (a) inpainting and (b) denoising (that is removing added noise
drawn by a normal distribution). For each configuration, we show the
reconstruction MSE of $100$ randomly selected SEDs (excluding measurements
that fall outside $1.5$ times the interquartile range). In Figure 2(a), we
examine the performance for different levels of missing information and
compare between SEDs drawn from test and training data. In both cases the MSE
of the vast majority of signals is particularly low for up to $40\%$ of
missing information. For reasonable percentages of missing information the
performance on test data is on par with the training data. For a sufficient
proportion of the examined signals, this trend persists even for larger
percentages (see median values). Given that our generative network was
optimized to represent the training data, this shows a considerable
generalization capability, which is crucial for the effectiveness of our
approach. In Figure 2(b), we examine the performance for different levels of
added noise and compare between our reconstruction methods, the explicit
projection (eq. 3) and the regularization (eq. 4). We can see that for all
levels of added noise the MSE is particularly low, which indicates notable
performance on denoising. Furthermore, when regularization is utilized we
observe better error concentration, which can be attributed to the flexibility
it offers to the reconstruction process.
## 4 Conclusion and Future Work
We presented an end-to-end deep learning solution for various inverse problems
concerning Spectral Energy Distributions (SEDs). Our approach relies on a deep
generative network, tailored to the particular properties of SEDs, as a
structural prior leveraging its generalization capability. Our preliminary
results show promising performance on realistic inverse problems. We are
working to extend this project to diverse and more demanding SED families
e.g., for different parts of the spectrum. Another future direction involves
transfer learning techniques, as well as ensemble learning in order to extend
our approach to data that are even more incomplete. Finally, we could augment
our method using bi-directional training in order to simultaneously extract
information regarding the astrophysical objects we study. This idea draws from
recent research on invertible neural networks for inverse problems [18].
## Broader Impact
This project will have broad impact in the effort to interpret the SEDs which
will be made available with a number of current and future ground-based and
space missions such as LSST, Euclid, JWST and SPICA. Although the examples
used in this work concentrate on the optical part of the spectrum, the same
method can also be used on SEDs which cover the whole spectrum of galaxies
from the ultraviolet to the radio. Such studies of the complete SEDs of
galaxies are now recognized as essential for a complete understanding of the
processes that control galaxy formation and evolution (e.g. [3, 19]).
## References
  * [1] Andreas Efstathiou and Michael Rowan-Robinson. Dusty discs in active galactic nuclei. Monthly Notices of the Royal Astronomical Society, 273(3):649–661, 1995.
* [2] Andreas Efstathiou, Michael Rowan-Robinson, and Ralf Siebenmorgen. Massive star formation in galaxies: radiative transfer models of the uv to millimetre emission of starburst galaxies. Monthly Notices of the Royal Astronomical Society, 313(4):734–744, 2000.
* [3] Michael Rowan-Robinson and et al. Spectral energy distributions and luminosities of galaxies and active galactic nuclei in the spitzer wide-area infrared extragalactic (swire) legacy survey. The Astronomical Journal, 129:1183–1197, March 2005.
* [4] Jakob Walcher, Brent Groves, Tamás Budavári, and Daniel Dale. Fitting the integrated spectral energy distributions of galaxies. Astrophysics and Space Science, 331(1):1–51, Aug 2010.
* [5] Alexandre Boucaud, Marc Huertas-Company, Caroline Heneka, Emille E O Ishida, Nima Sedaghat, Rafael S de Souza, Ben Moews, Hervé Dole, Marco Castellano, Emiliano Merlin, and et al. Photometry of high-redshift blended galaxies using deep learning. Monthly Notices of the Royal Astronomical Society, 491(2):2481–2495, Dec 2019.
* [6] Francois Lanusse, Peter Melchior, and Fred Moolekamp. Hybrid physical-deep learning model for astronomical inverse problems, 2019.
  * [7] Frontera-Pons, J., Sureau, F., Bobin, J., and Le Floc’h, E. Unsupervised feature-learning for galaxy SEDs with denoising autoencoders. A&A, 603:A60, 2017.
* [8] Diederik P. Kingma and Max Welling. An introduction to variational autoencoders. CoRR, abs/1906.02691, 2019.
* [9] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
* [10] Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 537–546. JMLR.org, 2017.
  * [11] Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. Optimizing the latent space of generative networks. 2017.
* [12] Behnam Neyshabur, Srinadh Bhojanapalli, David Mcallester, and Nati Srebro. Exploring generalization in deep learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5947–5956. Curran Associates, Inc., 2017.
* [13] Frans A. Oliehoek, Rahul Savani, Jose Gallego-Posada, Elise van der Pol, and Roderich Groß. Beyond local nash equilibria for adversarial networks. CoRR, abs/1806.07268, 2018.
* [14] J.T. Vanderplas, A.J. Connolly, Ž. Ivezić, and A. Gray. Introduction to astroml: Machine learning for astrophysics. In Conference on Intelligent Data Understanding (CIDU), pages 47–54, Oct 2012.
* [15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014.
* [16] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, page 448–456. JMLR.org, 2015.
  * [17] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
* [18] Lynton Ardizzone, Jakob Kruse, Sebastian J. Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen, Lena Maier-Hein, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. CoRR, abs/1808.04730, 2018.
* [19] Raphael Shirley, Yannick Roehlly, Peter D. Hurley, Veronique Buat, María del Carmen Campos Varillas, Steven Duivenvoorden, Kenneth J. Duncan, Andreas Efstathiou, Duncan Farrah, Eduardo González Solares, Katarzyna Malek, Lucia Marchetti, Ian McCheyne, Andreas Papadopoulos, Estelle Pons, Roberto Scipioni, Mattia Vaccari, and Seb Oliver. HELP: a catalogue of 170 million objects, selected at 0.36-4.5 $\mu$m, from 1270 deg2 of prime extragalactic fields. Monthly Notices of the Royal Astronomical Society, 490(1):634–656, November 2019.
# Measurement based estimator scheme for continuous quantum error correction
Sangkha Borah<EMAIL_ADDRESS>Okinawa Institute of Science and
Technology Graduate University, Onna-son, Okinawa 904-0495, Japan Bijita
Sarma Okinawa Institute of Science and Technology Graduate University, Onna-
son, Okinawa 904-0495, Japan Michael Kewming Department of Physics, Trinity
College Dublin, Dublin 2, Ireland Fernando Quijandría Okinawa Institute of
Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan
Gerard J. Milburn Centre for Engineered Quantum Systems, School of
Mathematics and Physics, University of Queensland, QLD 4072 Australia Jason
Twamley Okinawa Institute of Science and Technology Graduate University,
Onna-son, Okinawa 904-0495, Japan
(August 27, 2024)
###### Abstract
Canonical discrete quantum error correction (DQEC) schemes use projective von
Neumann measurements on stabilizers to discretize the error syndromes into a
finite set, and fast unitary gates are applied to recover the corrupted
information. QEC based on continuous measurement (CQEC), in principle, can be
executed faster than DQEC and can also be resource efficient. However, CQEC
requires meticulous filtering of noisy continuous measurement data to reliably
extract error syndromes on the basis of which errors could be detected. In
this work, we show that by constructing a measurement-based estimator (MBE) of
the logical qubit to be protected, which is driven by the noisy continuous
measurement currents of the stabilizers, it is possible to accurately track
the errors occurring on the physical qubits in real-time. We use this MBE to
develop a novel continuous quantum error correction (MBE-CQEC) scheme that can
protect the logical qubit to a high degree, surpassing the performance of
DQEC, and also allows QEC to be conducted either immediately or in delayed
time with instantaneous feedback.
quantum error correction; QEC; CQEC; continuous measurement; weak measurement;
quantum computing; fault-tolerant
Generally speaking, quantum error correction (QEC) is a solution to preserve a
quantum state from environmental decoherence, and is essential for achieving
fault-tolerant quantum computation, cryptography and quantum communications
Nielsen and Chuang (2010); Shor (1995); Steane (1996); Djordjevic (2012);
Gertler _et al._ (2021). The essence of QEC is to redundantly encode the
quantum information of a qubit in several entangled qubits which collectively
form a so-called logical qubit that exhibits a longer lifetime than the individual
component physical qubits. The logical qubit lies in a two-dimensional
subspace of the Hilbert space of the physical qubits, and the interaction
between the qubits and their environment causes an orthogonal rotation of the
collective state of the physical qubits out of this subspace. By
simultaneously measuring a set of operators, this rotation can be detected and
corrected without changing the encoded logical qubit state. Such operators are
selected parity operators in the Pauli group, called the stabilizer
generators, the eigenvalues of which are known as the error syndromes Shor
(1995); Steane (1996); Gottesman (1997). In canonical QEC methods, which we
will refer to as discrete quantum error correction (DQEC), these operators are
measured projectively and reveal the discrete error syndromes and this
classical information is subsequently used to correct qubit errors via fast
unitary gates Devitt _et al._ (2013); Reed _et al._ (2012). To achieve fault
tolerant quantum computation, it is important that the probability of an
erroneous rotation of the logical qubit is below a critical threshold value
Knill _et al._ (1998); Girvin (2021). Over the past few years, DQEC has been
demonstrated experimentally in various platforms such as in ion traps
Schindler _et al._ (2011); Linke _et al._ (2017); Negnevitsky _et al._
(2018), diamond NV centers Cramer _et al._ (2016), and superconducting
circuits Kelly _et al._ (2015); Ristè _et al._ (2015); Ofek _et al._
(2016); Arute et al. (2019); Andersen _et al._ (2020); Stricker _et al._
(2020); Chen et al. (2021); de Neeve _et al._ (2022).
A less explored alternative to DQEC is to utilize continuous quantum error
correction (CQEC) methods, with the first few studies dating back to the early
2000s, co-authored by one of the authors of this article Ahn _et al._ (2002,
2003); Sarovar _et al._ (2004); Sarovar and Milburn (2005); Wiseman and
Milburn (2014). In CQEC, instead of discrete projective measurements of the
stabilizer generators, these generators are continuously and weakly measured
and a quantum feedback control Hamiltonian is used for continuous error
correction. One early seminal result demonstrated how single bit-flip errors
can be corrected using CQEC provided one knows the conditional moments of the
error syndromes, which, alas, is not practically feasible Ahn _et al._ (2002).
This is because when we perform a weak measurement on the stabilizers, we no
longer have direct access to the exact syndrome signals, since they are
now masked by the measurement noise that is necessarily added to the measured
signal. Accordingly, in previous research along these lines, the continuous
measurement records of the syndrome measurements were smoothed with various
filter kernels so that the exact signal of the error syndromes could be
extracted from the noisy measurement records. Expectedly, this performed
suboptimally given that it is not possible to isolate the signal from noise
for any realistic situation Sarovar _et al._ (2004); Mabuchi (2009); Cardona
_et al._ (2019); Mohseninia _et al._ (2020); Atalaya _et al._ (2021);
Livingston _et al._ (2022). Following a similar strategy for filtering noisy
data, CQEC was recently demonstrated experimentally for the first time on a
superconducting circuit platform Livingston _et al._ (2022).
Unlike DQEC, which relies on projective measurements, CQEC eliminates the need
to use ancilla qubits to measure the stabilizer operators by weakly measuring
the physical qubits, and allows faster measurements and error detection,
thereby greatly reducing the likelihood of undetected errors Wiseman and
Milburn (2014). Furthermore, CQEC can be advantageous when the control
resources are limited and the performance of the feedback can be improved by
optimizing the operational parameters Sarovar _et al._ (2004). However, as
mentioned above, previous methods to perform CQEC suffered from their
inability to correctly identify when errors occur as the continuous
measurement signals necessarily contain noise Sarovar _et al._ (2004);
Atalaya _et al._ (2021); Livingston _et al._ (2022). In this work we show
how to overcome this and push the capabilities of the CQEC far beyond the
abilities of the standard DQEC. To achieve this we equip the CQEC with a real-
time Measurement Based Estimator (MBE), which can detect and correct errors
rapidly without filtering or smoothing of the measured data. We call this
scheme of continuous error correction MBE-CQEC. This gives a practical
solution for realizing the theoretical proposal of Ahn et al. Ahn _et al._
(2002), without using filters for signal processing of the measurement records
Sarovar _et al._ (2004); Livingston _et al._ (2022). Finally, we show that
the corrective action need not be instantaneous, but can be delayed and
corrected whenever required, a feature we call delayed error correction (DEC).
Figure 1: The proposed protocol for CQEC using the measurement based $\cal
E$stimator (MBE) scheme. The $\cal R$eal system (left) consists of a logical
qubit comprising three physical qubits with an encoded unknown quantum state
$|\psi^{\cal R}\rangle_{L}=\alpha|000\rangle+\beta|111\rangle$, which we want
to protect from bit-flip errors. The $\cal E$stimator (right) is a simulation
[computer] of the stochastic dynamics of the $\cal R$eal system modelled
similarly but with a different initial quantum state $|\psi^{\cal
E}\rangle_{L}=\alpha^{\prime}|000\rangle+\beta^{\prime}|111\rangle$, where
$\alpha^{\prime}\neq\alpha$ and $\beta^{\prime}\neq\beta$ and where
$(\alpha,\beta)$ are unknown and $(\alpha^{\prime},\beta^{\prime})$ are known. For
generality, we will initialize the $\cal E$stimator state at: $|\psi^{\cal
E}\rangle_{L}=1|000\rangle$. One executes separate continuous measurements of
the three syndrome generators on the $\cal R$eal system and the resulting time
varying classical signals $(dQ_{ZZI}(t),dQ_{IZZ}(t),dQ_{ZIZ}(t))$ drive the
stochastic dynamics of the $\cal E$stimator. Although the
$\cal E$stimator cannot learn about the unknown encoded quantum state, any
errors appearing in the $\cal R$eal system are faithfully reproduced in the
$\cal E$stimator. By monitoring the appearance of bit-flips in the $\cal
E$stimator one applies a feedback Hamiltonian $F(t)$ which applies the
appropriate correction in a continuous manner with control strengths
$\lambda_{j}$ on the individual physical qubits in the $\cal R$eal system.
Figure 2: [(a) - (c)] The time evolution of the conditional means of the three
error syndromes for the $\cal R$eal system (in blue, linewidth slightly
increased for visibility) and the $\cal E$stimator (orange) under the MBE
scheme (without error correction) that explains the essence of the protocol of
the proposed CQEC scheme. Essentially, the MBE scheme allows us to use a
computer simulation driven by the continuous measurement currents of the $\cal
R$eal system to perform real-time quantum error tracking that permits real
time QEC. (d) Instead of using the conditional means of the syndromes we find
that the MBE has access to the full real-time effects of errors on each
simulated physical qubit and this information can permit us to perform error
correction. Information about the evolution of the Pauli-$Z_{q}$ operator for
the physical qubit $q$, $\langle Z_{q}(t)\rangle$, scaled by its initial
absolute value $|\langle Z_{q}(0)\rangle|$ for one of the physical qubits of
the $\cal E$stimator (orange) is compared, as an example, to the corresponding
evolution of the same of the $\cal R$eal system (blue). (e) The absolute
values of the instantaneous differences $\Delta\langle Z_{q}(t)\rangle=\langle
Z_{q}(t)\rangle-\langle Z_{q}(0)\rangle$ scaled by their initial values
$\langle Z_{q}(0)\rangle$ follow one another for the respective qubits of the
$\cal R$eal and the $\cal E$stimator. (f) The fidelity of the particular
physical qubit, ${\cal F}_{q}$ (red) as well as of the logical qubit, ${\cal
F}$ (green) with respect to the initial state to be preserved is shown for
the bit-flip errors demonstrated in [(a) - (e)]. Thus, the change in fidelity
of the physical qubits can be directly monitored by computing the values of
$|\langle Z_{q}(t)\rangle-\langle Z_{q}(0)\rangle|$ for each of the qubits of
the $\cal E$stimator, which sets the bit-flip error detection protocol of the
proposed MBE-CQEC scheme. (g) The performance of the MBE-CQEC scheme is
showcased in terms of the logical qubit fidelity for a single trajectory to
show how the errors are corrected once they are detected, based on the above
error detection protocol; also showing the fidelity of each physical qubit,
in (i), and the feedback applied on the individual qubits to correct the corrupted
information at appropriate times, in (j). For these analyses, we set the
initial logical qubit state as $|\psi^{\cal R}\rangle_{L}=|111\rangle$ to
maximize the contrast of fidelity drop under bit-flip error. For these plots,
we have used $\kappa/\gamma=800$ and $\lambda/\gamma=600$.
The generalized MBE is constructed as follows. Let us consider that the
internal dynamics of this $\cal R$eal system is described by the Hamiltonian
$H$, and its conditional density matrix to be, $\rho^{\cal R}_{c}(t)$, under
continuous measurement of the operator $A^{\mathcal{R}}=A$. This can be
described by the quantum stochastic master equation (SME) Wiseman and Milburn
(2014); Diósi _et al._ (2006); Borah _et al._ (2021),
$\displaystyle d\rho^{\cal R}_{c}(t)=$
$\displaystyle-i[H,\rho^{\mathcal{R}}_{c}(t)]dt+\gamma\mathcal{D}[c]\rho^{\cal
R}_{c}(t)dt+\kappa\mathcal{D}[A]\rho^{\cal R}_{c}(t)dt$ $\displaystyle+$
$\displaystyle\sqrt{\kappa\eta}\mathcal{H}[A]\rho^{\cal R}_{c}(t)\,dW^{\cal
R}(t).$ (1)
Here, $A=\mathcal{A}/\mathcal{A}_{0}$ a dimensionless operator corresponding
to the physical observable $\mathcal{A}$ scaled suitably by $\mathcal{A}_{0}$
to make it dimensionless, and is known as the measurement operator, which is
measured with a measurement rate of $\kappa$. The first term on the RHS is the
coherent evolution of the system. The second term on the RHS gives the
environmental decoherence at a rate $\gamma$ with the collapse operator $c$,
and the third term gives the measurement backaction due to the measurement of
$A$, where $\mathcal{D}[A]\rho=A\rho
A^{\dagger}-\frac{1}{2}(A^{\dagger}A\rho+\rho A^{\dagger}A)$ represents the
decoherence superoperator. The last term is the stochastic diffusion term with
$dW(t)$ being the Wiener noise increments. $\mathcal{H}$ is a superoperator
given by, $\mathcal{H}[A]\rho=A\rho+\rho
A^{\dagger}-\rho\mathrm{tr}[A\rho+\rho A^{\dagger}]$, and $\eta\in(0,1]$ is
the measurement efficiency. The measurement records, $dQ^{\mathcal{R}}(t)$ are
given by the summation of the conditional mean of the measurement operator and
the corresponding random noise component of the measurement,
$\displaystyle dQ^{\mathcal{R}}(t)=\langle
A^{\mathcal{R}}(t)\rangle_{c}dt+\frac{1}{\sqrt{4\kappa\eta}}{dW^{\mathcal{R}}(t)}.$
(2)
The dynamics of the $\cal E$stimator is modelled following the modified SME,
$\displaystyle d\rho^{\cal E}_{c}(t)=$ $\displaystyle-i[H,\rho^{\cal
E}_{c}(t)]dt+\gamma\mathcal{D}[c]\rho^{\cal
E}_{c}(t)dt+\kappa\mathcal{D}[A]\rho^{\cal E}_{c}(t)dt$ $\displaystyle+$
$\displaystyle{2\kappa\eta}\left[dQ^{\cal R}(t)-\langle A^{\cal
E}(t)\rangle_{c}dt\right]\mathcal{H}[A]\rho^{\cal E}_{c}(t).$ (3)
In essence, the noise of the $\cal E$stimator is modelled based on the noisy
measurement records of the $\cal R$eal system. In the context of the present
work for correcting bit-flip errors of the three-qubit code, we would have,
$c=\\{XII,IXI,IIX\\}$ and $A=\\{ZZI,IZZ,ZIZ\\}$.
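To make the construction concrete, the following is a minimal Euler–Maruyama sketch (not the authors' code) of eqs. (1)–(3) for the three-qubit bit-flip code with $H=0$ and no feedback: the noisy records $dQ$ generated from the $\cal R$eal state drive the $\cal E$stimator state. The step size, rates and initial states are illustrative assumptions.

```python
# Sketch of the MBE scheme, eqs. (1)-(3): the Real three-qubit state is evolved
# under continuous measurement of the stabilizers, the records dQ are produced
# via eq. (2), and a separately initialized Estimator state is driven by those
# same records via eq. (3).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

c_ops = [kron3(X, I2, I2), kron3(I2, X, I2), kron3(I2, I2, X)]   # bit-flip channels
A_ops = [kron3(Z, Z, I2), kron3(I2, Z, Z), kron3(Z, I2, Z)]      # stabilizers ZZI, IZZ, ZIZ

def D(L, rho):     # Lindblad dissipator D[L]rho
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def Hsup(L, rho):  # measurement superoperator H[L]rho
    m = L @ rho + rho @ L.conj().T
    return m - np.trace(m).real * rho

def ket_rho(bits):
    v = np.zeros(8); v[int(bits, 2)] = 1.0
    return np.outer(v, v).astype(complex)

gamma, kappa, eta, dt, nsteps = 1.0, 800.0, 1.0, 1e-5, 20000
rng = np.random.default_rng(1)
rho_R = ket_rho("111")   # logical state of the Real system (unknown to the Estimator)
rho_E = ket_rho("000")   # the Estimator always starts from |000>

for _ in range(nsteps):
    drho_R = sum(gamma * D(c, rho_R) for c in c_ops) * dt
    drho_E = sum(gamma * D(c, rho_E) for c in c_ops) * dt
    for A in A_ops:
        dW = rng.normal(0.0, np.sqrt(dt))
        drho_R += kappa * D(A, rho_R) * dt + np.sqrt(kappa * eta) * Hsup(A, rho_R) * dW
        dQ = np.trace(A @ rho_R).real * dt + dW / np.sqrt(4 * kappa * eta)   # eq. (2)
        innovation = dQ - np.trace(A @ rho_E).real * dt
        drho_E += kappa * D(A, rho_E) * dt + 2 * kappa * eta * innovation * Hsup(A, rho_E)
    rho_R, rho_E = rho_R + drho_R, rho_E + drho_E
    rho_R /= np.trace(rho_R).real    # renormalize against Euler discretization error
    rho_E /= np.trace(rho_E).real
```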
The proposed MBE-CQEC scheme is shown schematically in Fig. 1. The $\cal R$eal
system (left) with the initial state $\rho^{\cal R}$ consists of a logical
qubit comprising three physical qubits, which we take to be a system in the
laboratory, where one continuously measures the stabilizer operators $ZZI,IZZ$
and $ZIZ$, where the third stabilizer operator is redundant and can be omitted
in principle. We consider an $\cal E$stimator of the system with the initial
state $\rho^{\cal E}$, as a numerical simulator on a fast computer (on the
right in Fig. 1), whose purpose is to detect the errors on the qubits
occurring in the $\cal R$eal system based on the real time measurement
records. The $\cal E$stimator also acts as a controller to apply an
appropriate feedback Hamiltonian to the $\cal R$eal system:
$F(t)=\lambda_{1}(t)XII+\lambda_{2}(t)IXI+\lambda_{3}(t)IIX$, where $X$
denotes a Pauli-$X$ operator, and $\lambda_{q}(t)$’s are the feedback
strengths.
At the heart of the MBE-CQEC lies the fact that for the measurement of the
stabilizer operators, the $\cal E$stimator can perfectly follow the
conditional means of the stabilizers ($\langle ZZI\rangle_{c}(t),\langle
IZZ\rangle_{c}(t)$ and $\langle ZIZ\rangle_{c}(t)$) when it is fed the
continuous, albeit noisy syndrome measurement records
($dQ_{ZZI}(t),dQ_{IZZ}(t),dQ_{ZIZ}(t)$). In Fig. 2[(a) - (c)], we show these
for the $\cal R$eal (blue, with slightly thicker lines for visibility) and the
$\cal E$stimator (orange) for a single quantum trajectory, where the $\cal
E$stimator dynamics is driven by the syndrome measurement currents of the
$\cal R$eal system. The perfect match of these values entails the power of the
approach we are going to formulate for QEC, which offers a novel strategy to
extract the conditional means of the error syndromes using continuous
measurement instead of projective von Neumann measurement and without any
signal filtering. This would allow CQEC to operate at the optimum level of
performance using the error syndromes directly while outperforming the DQEC
protocols Ahn _et al._ (2002); Sarovar _et al._ (2004); Wiseman and Milburn
(2014); Nielsen and Chuang (2010), thus making it a perfect marriage between
DQEC and CQEC techniques.
Figure 3: (a) The net fidelity of the logical qubit averaged over an ensemble
of at least 100 trajectories under the proposed MBE-CQEC scheme for different
choices of measurement strength, $\kappa$ in the units of the qubit bit flip
error rate $\gamma$ are shown with the feedback strengths $\lambda=\kappa$,
for reasons demonstrated in (b). The time evolutions up to 10 lifetimes of the
qubit ($10/\gamma$) are corrected. Also shown are the fidelities for the DQEC
(cyan) and one (violet) physical qubit errors for comparison. On the left of
(a) the zoomed-in portion within $t=1/\gamma$ is shown. While DQEC fails
completely beyond a few lifetimes of the qubit (one qubit error), MBE-CQEC
protocol outperforms it significantly. (b) The performance of CQEC scheme for
different choices of the feedback strength, $\lambda$ for a fixed
$\kappa/\gamma=800$. It shows that $\lambda\sim\kappa$ is a decent choice for
overall high fidelity in long time limit. (c) The performance of the scheme
for non-ideal choices of measurement efficiencies, $\eta$, showing that the
drop in fidelity relative to the case of ideal measurement efficiency,
$\eta=1$ is not significantly large for reasonable values of $\eta$. Figure 4:
(a) and (b) demonstrate the delayed error correction (DEC) allowed by the MBE-
CQEC scheme, where the error correction can be deferred until a later time. In
these simulations, we do not apply any error correction till $t=0.9/\gamma$,
after which the errors are detected based on the proposed error detection
scheme and faithfully corrected. (a) An example trajectory shows how the error
is corrected at $t=0.9/\gamma$. (b) The same as (a) but averaged over many
trajectories, which shows how the overall fidelity drops until the errors are
corrected. [(c) - (f)] Explanation of the fidelity drop with the CQEC protocol
based on the MBE scheme: (c) the fidelity variation in time without CQEC
($\lambda=0$) is shown for a particular quantum trajectory for
$\kappa/\gamma=800$; (d) the same when used for CQEC using $\lambda=\kappa$,
that reveals that the error at $t\sim 0.2/\gamma$ could not be corrected
fully; (e) the same as (d) but for $\lambda=5\kappa/4$, for which the error
correction is even worse at that particular instance; (f) the errors getting
perfectly corrected for $\lambda=3\kappa/4$.
While the perfect computation of the real time conditional error syndromes
with the proposed MBE scheme provides optimal error correction with continuous
measurements, we will show in the following that we can use yet another error
detection protocol that depends on the time evolution of the Pauli-$Z$
operator of the $\cal E$stimator physical qubits $q$, conditioned on the
measurement records of the $\cal R$eal system. This protocol will have the
benefit of performing CQEC with a significant time delay solely based on
measurement data obtained from the real qubit measurements, and will have
additional benefits, which we will discuss in subsequent sections. In Fig.
2(d), we compare, the time evolution of the $\langle Z_{q}(t)\rangle$ (only
one of the physical qubits is shown as an example) of the instantaneous
density matrix of the $\cal R$eal, $\rho^{\cal R}(t)$ and $\cal E$stimator,
$\rho^{\cal E}(t)$ where the $\cal E$stimator is evolved according to the MBE
scheme discussed above. As expected, the expectation value $\langle
Z_{q}(t)\rangle$ for the $\cal R$eal system (in blue) and the $\cal E$stimator
(in orange) exhibit different values, as the initial states are different.
However, when normalized by their absolute values before measurement, $\langle
Z_{q}(0)\rangle$, they undergo similar changes. As shown in Fig. 2(e), the
instantaneous differences $\Delta\langle Z_{q}(t)\rangle=\langle
Z_{q}(t)\rangle-\langle Z_{q}(0)\rangle$ scaled by their initial values
$\langle Z_{q}(0)\rangle$ follow one another for the respective qubits of the
$\cal R$eal and the $\cal E$stimator. In Fig. 2(f), the fidelity of that
particular physical qubit $q$, ${\cal
F}_{q}(t)=\langle\psi(0),\mathrm{tr}_{q}(\rho(t))\psi(0)\rangle$ (red), where
$\mathrm{tr}_{q}(\rho(t))$ is the partial trace of the logical qubit density
matrix on the $q$th Hilbert space, along with the codespace fidelity of the
logical qubit, ${\cal F}(t)=\langle\psi^{L}(0),\rho(t)\psi^{L}(0)\rangle$
(green) are shown, from which, by comparing with Fig. 2(e), it is observed
that the drop in fidelity of the individual qubits is directly related in a
one-to-one fashion to $|\Delta\langle Z_{q}(t)\rangle|/|\langle
Z_{q}(0)\rangle|$ of the $\cal R$eal/$\cal E$stimator systems. This fact can
be utilized to detect flipping of the qubits deterministically, as an error in
a qubit would simply mean $|\Delta\langle Z_{q}(t)\rangle|>\varepsilon|\langle
Z_{q}(0)\rangle|$ on the $\cal E$stimator, where $\varepsilon=1.05$ is a
tolerance threshold for error detection. Naturally, $\varepsilon=2$ would signify a
complete flip, while $\varepsilon=0$ would signify a complete preservation of
the state. For the $\cal E$stimator, we can fix the initial logical state of
the qubit at the reset conditions, $|\psi\rangle_{L}^{\cal E}=|000\rangle$ for
simplicity and generality. For more details of the error detection protocol,
see Methods.
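As a rough illustration of this detection rule (not the authors' implementation), the sketch below evaluates $|\Delta\langle Z_{q}(t)\rangle|>\varepsilon|\langle Z_{q}(0)\rangle|$ on an $\cal E$stimator density matrix and switches the per-qubit feedback strengths between $0$ and $\lambda_{0}$; the function and variable names are hypothetical.

```python
# Sketch of the bit-flip detection rule and the on/off feedback switching.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

Z_ops = [kron3(Z, I2, I2), kron3(I2, Z, I2), kron3(I2, I2, Z)]   # Pauli-Z per qubit

def detect_flips(rho_E, z_ref, eps=1.05):
    """Flag qubit q when |<Z_q(t)> - <Z_q(0)>| > eps * |<Z_q(0)>| on the Estimator."""
    z_now = np.array([np.trace(Zq @ rho_E).real for Zq in Z_ops])
    return np.abs(z_now - z_ref) > eps * np.abs(z_ref)

# Example: Estimator reset to |000>, so all reference values are +1.
rho_E = np.zeros((8, 8), dtype=complex); rho_E[0, 0] = 1.0
z_ref = np.array([np.trace(Zq @ rho_E).real for Zq in Z_ops])

lambda_0 = 800.0                      # feedback strength lambda_0 ~ kappa
lams = np.where(detect_flips(rho_E, z_ref), lambda_0, 0.0)   # per-qubit lambda_q(t)
```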
Using the above approach of error detection, we next go on to the
implementation of the MBE-CQEC protocol. In Fig. 2(g), we demonstrate our CQEC
scheme by applying it to a quantum trajectory evolved over one lifetime of a
physical qubit ($t=1/\gamma$). It can be seen how well the scheme works in
correcting the bit-flip errors, quickly restoring the logical qubit after an
error is detected, as detected by the error syndromes in Fig. 2(a)-(c). In
Fig. 2(h), the individual fidelities of the physical qubits are shown, and the
times of the applied feedbacks on respective qubits are shown in (i). There is
hardly any drop in fidelity for this particular trajectory under MBE-CQEC
within this time span.
In order to evaluate the performance of the scheme correctly in a statistical
sense, we apply it to an ensemble of quantum trajectories and average over it,
the results of which are shown in Fig. 3 in terms of average codespace
fidelity, $\bar{\cal F}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{F}_{i}$ of the logical
qubit, where $N$ represents the number of trajectories for the ensemble
average, and $\mathcal{F}_{i}$ is the codespace fidelity of the $i$th
trajectory, as defined earlier. While the feedback strengths $\lambda$’s can
be tuned in principle within a trajectory, we have used a constant value,
$\lambda_{0}\sim\kappa$ for simplicity, which means that $\lambda$ can only
take the values of $0$ or $\lambda_{0}$. In Fig. 3(a), we show how higher
values of the continuous measurement rate $\kappa$ yields an overall higher
fidelity in the long time limit for the encoded state. The time limit
considered is 10 times the single qubit lifetime ($10/\gamma$). We have found
that choosing a feedback strength of $\lambda\sim\kappa$ is a good choice
that leads to an overall higher fidelity when averaged over hundreds of
trajectories. The performance of DQEC is also shown for comparison, which
shows that the MBE-CQEC scheme outperforms DQEC for $\kappa>10\gamma$. Another
useful measure that is typically checked for fault tolerance is the
so-called one-qubit error fidelity, shown as a solid line in violet, which
saturates at ${1/2}$ in the long time limit. DQEC fails completely beyond
about $t\sim\pi/\gamma$, whereas with the MBE-CQEC scheme, the infidelity is
maintained within $1-4\%$ for higher values of $\kappa$ over 10 lifetimes of a
physical qubit. The zoomed in view of the plot, focussing on the performance
within 1 lifetime of a physical qubit, is shown on the left of Fig. 3(a). The
final fidelities for $\kappa=1000\gamma$ and $800\gamma$ are respectively
$99.857\%$ and $99.578\%$ at $t=1/\gamma$. Thus, it is not necessarily useful
to keep pushing the value of $\kappa$ any further, as the above fidelities are
not significantly different in magnitude. Recent developments of quantum
technologies, particularly those based on superconducting hardware, allow
experimentalists to go beyond the conventional regimes of weak coupling Minev
_et al._ (2019); Blais _et al._ (2021); Livingston _et al._ (2022). In this
spirit, we will consider $\kappa/\gamma=800$ for further analyses in the rest
of the paper. In Fig. 3(b), we evaluate the proposed CQEC scheme for different
choices of $\lambda/\gamma$ while keeping $\kappa/\gamma=800$. It reveals that
$\lambda\sim\kappa$ is a relatively decent choice to preserve overall
fidelities in the long time limit. Next, we evaluate the performance of the
protocol for inefficient measurements ($\eta<1$), shown in Fig. 3(c) relative
to the ideal case, $\eta=1$. It is observed that the scheme is fairly robust
for $\eta>0.5$, and the drop in fidelity is modest.
Now, we will discuss a distinct feature of the proposed MBE-CQEC scheme
facilitated by the new error-detection/correction scheme discussed above. We
find that we can delay the correction until some later time when it is more
convenient. We call this feature delayed error correction (DEC). This is
facilitated by the proposed error detection scheme based on computation of the
Pauli-$Z$ operator on the physical qubit $q$ of the $\cal E$stimator model
under the MBE scheme. This allows to keep a track the changes of $|\langle
Z_{q}(t)\rangle-\langle Z_{q}(0)\rangle|/|\langle Z_{q}(0)\rangle|$ on the
$\cal R$eal system qubits indirectly by monitoring the same on the $\cal
E$stimator qubits (see discussions at the beginning, and Methods). For
instance, within a trajectory of total time of $1/\gamma$, we can abstain from
doing any error correction until a later time, say $t=0.9/\gamma$. Based on
the measurement records of the $\cal R$eal system, the $\cal E$stimator can
follow the errors that happened on the qubits, and the errors can be
reliably detected and corrected using the proposed MBE-CQEC protocol, shown
for an example trajectory in Fig. 4(a). The same for an ensemble of
trajectories is shown in Fig. 4(b). This shows how the fidelity drops
significantly to a very low value without error correction, but how the error
is corrected instantly at $t=0.9/\gamma$ by simply monitoring the $\cal E$stimator
$\langle Z(t)\rangle$ on individual qubits.
Finally, we will address the long time fidelity drop issue despite error
correction using our MBE-CQEC code, and possibilities of using fine-tuned
feedback controls. We have found that the choice of the feedback strength
$\lambda$ plays a key role in maintaining codespace fidelity for longer
durations. To demonstrate it, we first simulate a trajectory that undergoes
bit-flip errors as shown in Fig. 4(c) and save the noise signal, in order to
test the effect of different feedback strengths on exactly the same dynamics.
For this simulation, we use $\kappa/\gamma=800$ as before. Now, we use our
CQEC scheme with $\lambda=\kappa$, for which we see that the fidelity could
not be preserved beyond $t\sim 0.2/\gamma$, shown in Fig. 4(d). Applying the
same for $\lambda=5\kappa/4$ leads to a further drop in fidelity (Fig. 4(e)) at
that point. This can be corrected perfectly, however, with $\lambda=3\kappa/4$
(Fig. 4(f)). Hence, the reason for the drop in fidelity with time can be
attributed to the instantaneous choices of the feedback strengths. Although,
in the discussion for Fig. 3(c), we had found that $\lambda\sim\kappa$ serves
as a decent choice of feedback strength, in principle fine tuning it will
improve the achievable fidelity in the long time limit. Optimizing the values
of $\lambda_{j}(t)$ however, may not be a trivial task.
In this work, we have formulated an innovative approach to bit-flip QEC that
can be regarded as one of the most effective error-correcting methods in the
literature, which we name measurement-based-estimation-controlled continuous
quantum error correction (MBE-CQEC). While traditionally used methods of QEC
are based on projective measurements, the current proposal utilizes continuous
measurements at its core and thus falls under CQEC. While CQEC can, in
principle, be carried out at much shorter time intervals, its biggest problem
is that the true error syndromes are not directly accessible, as these signals
are masked by measurement noise. Our proposed method of CQEC, based on a
measurement-based estimation scheme, achieves the best of both projective QEC
and traditional CQEC techniques. In addition, a novel bit-flip error detection
scheme was formulated that can be operated in delayed time. In
practical scenarios, each gate carries intrinsic error, which albeit being
small, accumulates in time and poses a challenge to achieve fault-tolerant
quantum computation. The delayed CQEC method can be advantageous in this
particular context, since the intervals between successive error correction
steps can be made significantly longer. Also note that for the analysis of this work,
we assume that the encoding was done perfectly. However, MBE-CQEC is expected
to be resilient to small encoding errors thanks to the perfect emulation of
the individual qubit errors and the way the errors are detected based on
Pauli-$Z$ expectation value deviation.
Finally, while the method works optimally with the distinctive features of
delayed QEC, the bottleneck would come from the numerical expense when one
tries to extend it to more qubits, as the Hilbert space dimension grows as
$2^{N}$, where $N$ is the number of qubits. In addition, for best performance
the detector should exhibit a high response bandwidth. While phase-flip errors
can be corrected as bit-flip errors by moving to a rotated (Hadamard) basis of
the qubits, the inclusion of both errors in a single code, e.g., the 9-qubit Shor
code, will be limited drastically by the computational effort required to
solve the $\cal E$stimator dynamics in real time. In this context, the use of
a compact representation of the states, e.g. the matrix product states, could
be useful.
In conclusion, we have proposed a novel approach to bit-flip error correction
that performs optimally. It not only removes the limitations of canonical
projective QEC techniques, but can also be used to correct errors in delayed
time based on all the previous measurement records, which can be a welcome
factor for its experimental realization.
## Appendix
Quantum continuous measurement and feedback control. Contrary to von Neumann
measurements, where a measurement operator (observable) is projectively
measured collapsing the state to an eigenstate, continuous measurements are
weak measurements where the observable is monitored in real time without
perturbing the state of the system significantly, so that the collapse is
gradual. These types of measurements are useful to observe the process of
collapse of a state, and to engineer feedback to control its dynamics. In
fact, it can be shown that a projective measurement is equivalent to an
infinite number of continuous weak measurements carried out over an
infinitesimally small time interval. Such a continuous measurement protocol
leads to the conditional evolution of the density matrix based on noisy
measurement outcomes given by,
$\displaystyle d\rho_{c}(t)$
$\displaystyle=-i[H,\rho_{c}(t)]dt+\kappa\mathcal{D}[A]\rho_{c}(t)dt$
$\displaystyle+\sqrt{\kappa}\mathcal{H}[A]\rho_{c}(t)dW(t),$ (4)
where $\rho_{c}$ denotes the conditional density matrix of the system
described by the Hamiltonian $H$. $A=\mathcal{A}/\mathcal{A}_{0}$ is a
dimensionless operator corresponding to the physical observable $\mathcal{A}$
scaled suitably by $\mathcal{A}_{0}$, and is known as the measurement
operator, which is measured at a measurement rate $\kappa$ (the rate at which
information is extracted). The first term of the above
equation on the right-hand side represents the coherent evolution of the
system. The second term on the right-hand side gives the measurement
backaction due to the measurement of $A$, where $\mathcal{D}[A]\rho=A\rho
A^{\dagger}-\frac{1}{2}(A^{\dagger}A\rho+\rho A^{\dagger}A)$ represents the
decoherence superoperator. The last term is the stochastic diffusion term with
$dW(t)$ being the Wiener increments, which are Gaussian distributed random
variables with zero mean and represent memoryless white noise, $\langle
dW(t)dW(\tau)\rangle=\delta(t-\tau)$. $\mathcal{H}$ is a superoperator given
by, $\mathcal{H}[A]\rho=A\rho+\rho A^{\dagger}-\rho\mathrm{tr}[A\rho+\rho
A^{\dagger}]$. Eq. 4 is known as the stochastic master equation (SME). The
measurement records $dQ(t)$ are given by,
$\displaystyle dQ(t)=\langle A_{c}(t)\rangle
dt+\frac{1}{\sqrt{4\kappa}}{dW(t)},$ (5)
where $\langle A_{c}(t)\rangle$ denotes the conditional mean of the
measurement operator $A$ (dimensionless) at time $t$, which is nothing but the
signal, and the last term represents the measurement noise associated with it.
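As an illustration of how Eqs. 4 and 5 can be propagated numerically, the following is a minimal Euler-Maruyama sketch in Python (assuming NumPy) for a single qubit monitored via the Pauli-$Z$ operator; the Hamiltonian, rates and time step are placeholder values for illustration only and are not taken from this work.

```python
import numpy as np

# Pauli matrices and a placeholder single-qubit Hamiltonian (illustrative values)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sx            # placeholder coherent drive
A = sz                  # dimensionless measurement operator
kappa = 1.0             # measurement rate (placeholder)
dt = 1e-3               # integration time step
n_steps = 5000

def D(L, rho):
    """Lindblad dissipator D[L]rho."""
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def Hsup(L, rho):
    """Measurement superoperator H[L]rho."""
    term = L @ rho + rho @ L.conj().T
    return term - np.trace(term) * rho

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # start in |+>
rng = np.random.default_rng(0)
records = []
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))                     # Wiener increment
    # Euler-Maruyama update of the stochastic master equation (Eq. 4)
    drho = (-1j * (H @ rho - rho @ H) * dt
            + kappa * D(A, rho) * dt
            + np.sqrt(kappa) * Hsup(A, rho) * dW)
    rho = rho + drho
    rho = 0.5 * (rho + rho.conj().T)                      # keep Hermitian
    rho = rho / np.trace(rho).real                        # renormalize
    # Measurement record dQ (Eq. 5)
    records.append(np.trace(A @ rho).real * dt + dW / np.sqrt(4 * kappa))
```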
Each evolution of the density matrix, $\rho_{c}(t)$ in time, following the SME
in Eq. 4, represents a quantum trajectory, which can be manipulated and
controlled by using appropriate feedback to the Hamiltonian in real time. If
the feedback Hamiltonian, $F(t)$ is based on the conditional state,
$\rho_{c}(t)$ or the conditional mean $\langle A_{c}(t)\rangle$ of the
measurement operator, then the SME with the feedback Hamiltonian is given by,
$\displaystyle d\rho_{c}(t)=$
$\displaystyle-i[H,\rho_{c}(t)]dt+\kappa\mathcal{D}[A]\rho_{c}(t)dt$
$\displaystyle+$
$\displaystyle\sqrt{\kappa}\mathcal{H}[A]\rho_{c}(t)dW(t)-i[F(t),\rho_{c}(t)]dt.$
(6)
For non-ideal measurement efficiency $\eta$ and in the presence of
environmental decoherence, it becomes,
$\displaystyle d\rho_{c}(t)=$
$\displaystyle-i[H,\rho_{c}(t)]dt+\gamma\mathcal{D}[c]\rho_{c}(t)dt+\kappa\mathcal{D}[A]\rho_{c}(t)dt$
$\displaystyle+\sqrt{\kappa\eta}\mathcal{H}[A]\rho_{c}(t)dW(t)-i[F(t),\rho_{c}(t)]dt,$
(7)
where $\gamma$ is the environmental decoherence rate with collapse operator
$c$. The expression for the continuous measurement record in the presence of
$\eta$ gets modified to,
$\displaystyle dQ(t)=\langle A_{c}(t)\rangle
dt+\frac{1}{\sqrt{4\eta\kappa}}{dW(t)}.$ (8)
Discrete quantum error correction. Generally speaking, QEC is a method to
protect an unknown state of an open quantum system. However, in the context of
quantum computing, we will consider qubits interacting with environmental
decoherences. In contrast to classical bits, there are two sources of errors,
bit flips and phase flips. To correct bit-flip errors, stabilizer codes are
used, while phase errors can be corrected similarly to the bit-flip errors but
in a rotated basis (Hadamard basis) of the physical qubits Nielsen and Chuang
(2010). The stabilizer codes considered here are repetition codes, where the
unknown state of the qubit is mapped onto the larger Hilbert space of multiple
qubits as an entangled state. Such an entangled unit of qubits is called a
logical qubit. For example, in the three-qubit repetition code, the unknown
state of the qubit is mapped onto three physical qubits, on which single bit-
flip errors can be corrected,
$\displaystyle|0\rangle\to|000\rangle\equiv|0\rangle_{L},$ (9)
$\displaystyle|1\rangle\to|111\rangle\equiv|1\rangle_{L}.$ (10)
Here the states $|0\rangle_{L}$ and $|1\rangle_{L}$ are the basis states for
the QEC code and the space spanned by them is called the codespace. The
elements of the codespace are known as the codewords. If the state of a
physical qubit is $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$, it is encoded
using two more physical qubits as
$|\psi\rangle_{L}=\alpha|000\rangle+\beta|111\rangle$, with
$|\alpha|^{2}+|\beta|^{2}=1$. The time evolution of the density matrix of the
logical qubit under bit-flip errors caused by the environment decoherences, at
a characteristic rate of $\gamma$ is described by,
$\displaystyle
d\rho(t)=\gamma(\mathcal{D}[XII]+\mathcal{D}[IXI]+\mathcal{D}[IIX])\,\rho\,dt.$
(11)
This is equivalent to assuming that the environment causes independent bit-
flips of each physical qubit at Poisson distributed times with rate $\gamma$.
The essence of QEC is that the state of the logical qubit, $|\psi\rangle_{L}$,
is unknown to us except for the codespace it lives in, and we need to preserve
it without losing the initial fidelity and without any knowledge of the
coefficients $\alpha$ and $\beta$ of the state. In this situation, it is possible to
measure a few special observables that determine the parities of the
neighbouring qubits without giving any information about the state of the
qubits themselves. In the three qubit code, there are three possible such
operators, given by, $M_{1}=ZZI$, $M_{2}=IZZ$ and $M_{3}=ZIZ$, where the third
operator can be considered redundant. As $M_{j}^{2}=\mathbb{I}$, these
operators have two possible eigenvalues $\pm 1$. The pair of eigenvalues
$(m_{1},m_{2})$ for the simultaneous measurements of $M_{1}$ and $M_{2}$ gives
the bit-flip error happening on a given qubit, provided no two qubits are
flipped at the same time. Such one qubit flips can be corrected by applying
unitary $X$ gates to the qubit on which the flip happened. Typically, to
achieve this, the syndrome operators are projectively measured and errors are
corrected based on the following conditions of the outcomes ($m_{1},m_{2}$):
(i) $(-1,+1)\to XII$, (ii) $(-1,-1)\to IXI$, (iii) $(+1,-1)\to IIX$ and (iv)
$(+1,+1)\to$ None. We will refer to QEC based on projective measurements as
discrete quantum error correction (DQEC) from now on.
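For concreteness, the syndrome lookup described above can be sketched as a small hypothetical Python helper (not part of the original protocol):

```python
# Map the pair of syndrome eigenvalues (m1, m2) of M1 = ZZI and M2 = IZZ
# to the correction unitary for the three-qubit bit-flip code.
SYNDROME_TABLE = {
    (-1, +1): "XII",   # flip detected on qubit 1
    (-1, -1): "IXI",   # flip detected on qubit 2
    (+1, -1): "IIX",   # flip detected on qubit 3
    (+1, +1): None,    # no single-qubit flip detected
}

def correction_for(m1: int, m2: int):
    """Return the Pauli correction (or None) for the measured syndrome pair."""
    return SYNDROME_TABLE[(m1, m2)]
```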
In order for DQEC to work, it is important to make the assumption that there
are no multiple flips of the qubits happening simultaneously, and that no
single flip errors are missed. Given the fact that projective measurements
require significant time between each measurement, while the environment acts
to degrade the qubits continuously, DQEC can never be conducted perfectly, and
the error correction performance drops significantly over time. Theoretically
speaking, if we consider each error to be detected perfectly, the contribution
of simultaneous bit flips can be relatively small for a low environmental
decoherence rate $\gamma$, as the theoretical fidelity of the error-corrected
logical state with DQEC with respect to the initial state is given by Ahn _et
al._ (2002),
$\displaystyle F_{\rm DQEC}(t)=\frac{1}{4}(2+3e^{-2\gamma t}-e^{-6\gamma t}).$
(12)
The drop in fidelity due to the bit-flip errors in a single qubit without
error correction, is given by,
$\displaystyle F_{1}(t)=\frac{1}{2}(1+e^{-2\gamma t}),$ (13)
and that of three qubits is given by $F_{3}(t)=F_{1}(t)^{3}$. This essentially
means that $F_{\rm DQEC}(t)\sim F_{1}(t)$ when $\gamma t\gtrsim\pi$, which shows how
quickly the DQEC performance drops.
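As a quick numerical illustration of Eqs. 12 and 13, the following Python sketch evaluates $F_{\rm DQEC}(t)$, $F_{1}(t)$ and $F_{3}(t)$ on a time grid (illustrative values only):

```python
import numpy as np

gamma = 1.0                       # decoherence rate (times measured in 1/gamma)
t = np.linspace(0.0, 3.0, 301)

F_dqec = 0.25 * (2 + 3 * np.exp(-2 * gamma * t) - np.exp(-6 * gamma * t))  # Eq. 12
F_1 = 0.5 * (1 + np.exp(-2 * gamma * t))                                   # Eq. 13
F_3 = F_1 ** 3                                                             # three unprotected qubits

# Both F_dqec and F_1 approach 1/2 at long times, illustrating how the
# advantage of discrete QEC is lost once gamma*t becomes of order one or larger.
```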
Continuous quantum error correction. CQEC differs from DQEC in multiple
aspects: in the way the measurements are performed on the syndrome operators,
how the errors are detected, and how the errors are corrected. Instead of
projective measurements, CQEC utilizes continuous weak measurements of the
syndrome operators, discussed above. The conditional evolution of the state of
the logical qubit undergoing bit-flip errors, continuous measurements and
feedback is modelled using the SME as,
$\displaystyle d\rho_{c}(t)=$
$\displaystyle\gamma(\mathcal{D}[XII]+\mathcal{D}[IXI]+\mathcal{D}[IIX])\rho_{c}dt$
$\displaystyle+$
$\displaystyle\kappa(\mathcal{D}[ZZI]+\mathcal{D}[IZZ]+\mathcal{D}[ZIZ])\rho_{c}dt$
$\displaystyle+$
$\displaystyle\sqrt{\kappa}(\mathcal{H}[ZZI]dW_{1}+\mathcal{H}[IZZ]dW_{2}+\mathcal{H}[ZIZ]dW_{3})\rho_{c}$
$\displaystyle-$ $\displaystyle i[F(t),\rho_{c}]dt,$ (14)
where the stochastic time varying measurement records of the stabilizer
generators are given by,
$\displaystyle dQ_{1}(t)=$ $\displaystyle\langle
ZZI\rangle_{c}dt+\frac{1}{\sqrt{4\kappa}}{dW_{1}(t)},$ (15) $\displaystyle
dQ_{2}(t)=$ $\displaystyle\langle
IZZ\rangle_{c}dt+\frac{1}{\sqrt{4\kappa}}{dW_{2}(t)},$ (16) $\displaystyle
dQ_{3}(t)=$ $\displaystyle\langle
ZIZ\rangle_{c}dt+\frac{1}{\sqrt{4\kappa}}{dW_{3}(t)}.$ (17)
Here $F(t)$ is the feedback Hamiltonian given by,
$\displaystyle F(t)=\lambda_{1}(t)XII+\lambda_{2}(t)IXI+\lambda_{3}(t)IIX,$
(18)
where the $\lambda_{i}(t)$’s are, in principle, time-dependent control parameters
which depend on the conditional means of the error syndromes. For example,
the following feedback scheme was proposed by Ahn et al. Ahn _et al._ (2002)
for CQEC,
$\displaystyle\lambda_{1}(t)=$
$\displaystyle\lambda(1-\langle{ZZI}\rangle_{c})(1+\langle{IZZ}\rangle_{c})(1-\langle{ZIZ}\rangle_{c}),$
(19) $\displaystyle\lambda_{2}(t)=$
$\displaystyle\lambda(1-\langle{ZZI}\rangle_{c})(1-\langle{IZZ}\rangle_{c})(1+\langle{ZIZ}\rangle_{c}),$
(20) $\displaystyle\lambda_{3}(t)=$
$\displaystyle\lambda(1+\langle{ZZI}\rangle_{c})(1-\langle{IZZ}\rangle_{c})(1-\langle{ZIZ}\rangle_{c}),$
(21)
where $\lambda$ is a feedback strength of the order of the measurement rate,
$\kappa$. The feedback function $F(t)$ described above makes use of the
conditional means of the syndrome operators, and thus, to perform CQEC
ideally, we require detailed information about the time dependence of the
conditional means of the syndrome generators. However, the conditional means
are not available from the measurement records (Eq. 15-17) directly as these
quantities are masked by measurement noise that is a fundamental component of
all quantum measurements. The signal to noise ratio of such measurements can
be typically quite poor. For practical purposes, researchers have previously
used temporal filters to recover these conditional means from the noisy
measurement records, with some filters possessing non-uniform temporal
weights, biasing up the most recent records to avoid any lags or delays
Sarovar _et al._ (2004); Atalaya _et al._ (2021); Livingston _et al._
(2022); Mabuchi (2009). Of course real world devices already have limits on
their response bandwidths. The effects of additional software/hardware
filtering to smooth out the noisy measurement records will also degrade the
signal or conditional means. However, if somehow we happen to know the
conditional means in real time perfectly, the CQEC scheme would perform
optimally for bit-flip correction under the assumption that no two or more
qubits flip simultaneously in a three qubit stabilizer code. In the following,
we show how our proposed measurement based estimator (MBE) scheme allows us to
achieve this. We call this MBE method of CQEC the MBE-CQEC scheme.
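To make Eqs. 18-21 concrete, here is a minimal Python sketch (illustrative, not the authors' implementation) that builds the three-qubit stabilizers and flip operators and evaluates the Ahn feedback strengths from the conditional syndrome means:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Stabilizer generators and flip operators for the three-qubit code
ZZI, IZZ, ZIZ = kron3(Z, Z, I), kron3(I, Z, Z), kron3(Z, I, Z)
XII, IXI, IIX = kron3(X, I, I), kron3(I, X, I), kron3(I, I, X)

def ahn_feedback(rho, lam):
    """Feedback strengths of Eqs. 19-21 from the conditional syndrome means."""
    s1 = np.trace(ZZI @ rho).real
    s2 = np.trace(IZZ @ rho).real
    s3 = np.trace(ZIZ @ rho).real
    l1 = lam * (1 - s1) * (1 + s2) * (1 - s3)
    l2 = lam * (1 - s1) * (1 - s2) * (1 + s3)
    l3 = lam * (1 + s1) * (1 - s2) * (1 - s3)
    # Feedback Hamiltonian of Eq. 18
    F = l1 * XII + l2 * IXI + l3 * IIX
    return (l1, l2, l3), F
```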
The MBE-CQEC scheme. We now describe a scheme that can perform faithful real-
time estimation of any dynamical changes affecting the logical qubit. This MBE
scheme will play a crucial role in detecting the bit-flip errors perfectly and
therefore in applying the appropriate feedback $\lambda_{j}(t)$, in a manner,
as we will show, that achieves ultra-high levels of protection of the unknown
quantum state. Let us denote the laboratory-based quantum system that we wish
to protect as the $\cal R$eal system; our $\cal E$stimator system is a
numerical/computational model of the $\cal R$eal system, shown schematically
in Fig. 1 in the main text. We consider that the internal dynamics of this
$\cal R$eal system is described by the Hamiltonian $H^{\mathcal{R}}=H$, and
that its conditional density matrix, $\rho^{\cal R}_{c}$, evolves under
continuous measurement via the measurement operator $A^{\mathcal{R}}=A$
(dimensionless). This dynamics follows the SME introduced above,
$\displaystyle d\rho^{\cal R}_{c}(t)=$
$\displaystyle-i[H,\rho^{\mathcal{R}}_{c}(t)]dt+\gamma\mathcal{D}[c]\rho^{\cal
R}_{c}(t)dt$ $\displaystyle+$ $\displaystyle\kappa\mathcal{D}[A]\rho^{\cal
R}_{c}(t)dt+\sqrt{\kappa}\mathcal{H}[A]\rho^{\cal R}_{c}(t)\,dW^{\cal R}(t),$
(22)
where the superscript $\cal R$ is used to represent the $\cal R$eal lab
system. The measurement record, $dQ^{\mathcal{R}}(t)$ is given by the
summation of the conditional mean of the measurement operator and the
corresponding random noise component of the measurement,
$\displaystyle dQ^{\mathcal{R}}(t)=\langle
A^{\mathcal{R}}(t)\rangle_{c}dt+\frac{1}{\sqrt{4\kappa}}{dW^{\mathcal{R}}(t)}.$
(23)
Now, we make an $\cal E$stimator ($\cal E$) of the $\cal R$eal system on a
computer with the same physical model ($H^{\mathcal{E}}=H^{\mathcal{R}}=H$), and the
continuous measurement of the same observable
($A^{\mathcal{E}}=A^{\mathcal{R}}=A$), but start the $\cal E$stimator dynamics
with a known initial state, $\rho^{\mathcal{E}}(0)$, which might be different
from the initial state of the $\cal R$eal system. The dynamics of this $\cal
E$stimator can be modelled as,
$\displaystyle d\rho^{\cal E}_{c}(t)=$ $\displaystyle-i[H,\rho^{\cal
E}_{c}(t)]dt+\gamma\mathcal{D}[c]\rho^{\cal
E}_{c}(t)dt+\kappa\mathcal{D}[A]\rho^{\cal E}_{c}(t)dt$ $\displaystyle+$
$\displaystyle\sqrt{\kappa}\mathcal{H}[A]\rho^{\cal E}_{c}(t)\,dW^{\cal
E}(t).$ (24)
We now can slave the dynamics of this $\cal E$stimator model to the dynamics
of the $\cal R$eal system by setting the $\cal E$stimator noise
${dW^{\mathcal{E}}(t)}$ as,
$\displaystyle{dW^{\mathcal{E}}(t)}={\sqrt{4\kappa}}\left[dQ^{\mathcal{R}}(t)-\langle
A^{\mathcal{E}}(t)\rangle_{c}dt\right],$ (25)
where the conditional mean $\langle A^{\mathcal{E}}(t)\rangle_{c}$, is
obtained from the $\cal E$stimator, which is readily available without any
extraneous noise. Thus, the dynamics of the $\cal E$stimator follows the
measurement records of the $\cal R$eal system as,
$\displaystyle d\rho^{\cal E}_{c}(t)=$ $\displaystyle-i[H,\rho^{\cal
E}_{c}(t)]dt+\gamma\mathcal{D}[c]\rho^{\cal
E}_{c}(t)dt+\kappa\mathcal{D}[A]\rho^{\cal E}_{c}(t)dt$ $\displaystyle+$
$\displaystyle{2\kappa}\left[dQ^{R}(t)-\langle A^{\cal
E}(t)\rangle_{c}dt\right]\mathcal{H}[A]\rho^{\cal E}_{c}(t).$ (26)
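A minimal sketch of the slaving step of Eqs. 25 and 26 could look as follows (Python; the superoperators are defined inline, and all inputs are assumed to be supplied by the surrounding simulation):

```python
import numpy as np

def estimator_step(rho_E, dQ_R, H, A, c, gamma, kappa, dt):
    """One Euler-Maruyama update of the Estimator (Eq. 26), driven by the
    Real system's measurement record dQ_R instead of its own noise."""
    def D(L, r):
        return L @ r @ L.conj().T - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L)
    def Hs(L, r):
        term = L @ r + r @ L.conj().T
        return term - np.trace(term) * r

    mean_A = np.trace(A @ rho_E).real
    dW_E = np.sqrt(4 * kappa) * (dQ_R - mean_A * dt)       # slaved noise, Eq. 25
    drho = (-1j * (H @ rho_E - rho_E @ H) * dt
            + gamma * D(c, rho_E) * dt
            + kappa * D(A, rho_E) * dt
            + np.sqrt(kappa) * Hs(A, rho_E) * dW_E)
    rho_E = rho_E + drho
    rho_E = 0.5 * (rho_E + rho_E.conj().T)                 # keep Hermitian
    return rho_E / np.trace(rho_E).real                    # renormalize
```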
Let us now consider that the $\cal R$eal system consists of a logical qubit
comprising three physical qubits with an encoded unknown quantum state
$|\psi^{\cal R}\rangle_{L}=|\psi^{\cal
R}\rangle_{L}^{\alpha,\beta}=\alpha|000\rangle+\beta|111\rangle$, which we
want to protect from bit-flip errors. The $\cal E$stimator is modelled
similarly but with a different initial quantum state $|\psi^{\cal
E}\rangle_{L}=|\psi^{\cal
E}\rangle_{L}^{\alpha^{\prime},\beta^{\prime}}=\alpha^{\prime}|000\rangle+\beta^{\prime}|111\rangle$,
where $\alpha^{\prime}\neq\alpha$ and $\beta^{\prime}\neq\beta$. While we can
choose the values of $\alpha^{\prime}$ and $\beta^{\prime}$, the values of
$\alpha$ and $\beta$ of the $\cal R$eal system can be anything and are not known to us.
The conditional mean in the $\cal R$eal system is unknown, as it is masked by
the measurement noise as already stated. However, since the syndrome operators
are parity operators, the measurement signals (error syndromes) are
independent of the coefficients ($\alpha$ and $\beta$) of the logical state
$|\psi\rangle_{L}=\alpha|000\rangle+\beta|111\rangle$ and depend only on the
codespace ($|000\rangle$ and $|111\rangle$). The unperturbed syndrome values,
$\langle{\cal G}_{i}\rangle_{c}^{{\cal R}/{\cal E}}$ at $t=0$ satisfy,
$\displaystyle\langle{\cal G}_{i}\rangle_{c}^{\cal E}(0)=\langle{\cal
G}_{i}\rangle_{c}^{\cal R}(0)=1.$ (27)
Here ${\cal G}_{i}$ represents the $i$th stabilizer operator. Now, using Eq.
26, the $\cal E$stimator can be propagated to the next timestep after a
measurement time interval of $dt$ using the measurement record from the $\cal
R$eal system but the conditional means from the $\cal E$stimator. At the second
step, the syndrome signal is correctly recovered as $\langle{\cal
G}_{i}\rangle_{c}^{\cal E}(dt)=\langle{\cal G}_{i}\rangle_{c}^{\cal R}(dt)$,
which can be either $+1$ or $-1$ unlike Eq. 27, and the process is repeated in
timesteps of $dt$ for the $\cal E$stimator at subsequent times.
Such an $\cal E$stimator that is fed with real-time measurement records can
correctly emulate the dynamics of all the errors happening on the $\cal R$eal
system for each quantum trajectory. One can extract the error syndromes of the
$\cal R$eal system by merely looking at the $\cal E$stimator conditional
syndrome values, which are readily available. This solves the main problem of
CQEC codes, where it is otherwise not possible to isolate the error syndromes
from the measurement noise. The scheme is abbreviated as MBE-CQEC standing for
measurement based estimator scheme for continuous quantum error correction,
and is shown schematically in Fig. 1 in the main text of the article.
The MBE-CQEC scheme described above gives us a smart way of computing the
error syndromes within a continuous measurement process, which allows us to
correct bit-flip errors in real time at much shorter time intervals than DQEC
codes Nielsen and Chuang (2010) or than using Eqs. 19-21. In the following we show
how the estimator system has real time tomographic information about the
errors happening to individual qubits, and we can use this to devise a new
correction scheme. This new scheme is not based on the conditional means of
the stabilizer operators, but instead, on the deviation of $\langle
Z(t)\rangle$ of the qubits in the $\cal E$stimator relative to their original
values at $t=0$, which is described below.
In the main text of the article, we have shown how the absolute deviation of
the expectation value of the Pauli-$Z$ operator of the physical qubit $q$ of
the $\cal E$stimator, $\Delta\langle Z_{q}(t)\rangle=\langle
Z_{q}(t)\rangle-\langle Z_{q}(0)\rangle$, scaled by its initial unperturbed
value $|\langle Z_{q}(0)\rangle|$, i.e., $|\Delta\langle
Z_{q}(t)\rangle|/|\langle Z_{q}(0)\rangle|$, follows the same quantity in the
$\cal R$eal system. This constitutes the backbone of the error detection/correction
proposal presented in the article. To understand it better, let’s consider a
physical qubit $q$ with state given by
$|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. The initial expectation value
$Z_{q}$ without measurement is $\langle Z_{q}(0)\rangle=\beta^{2}-\alpha^{2}$.
A flip of the qubit at time $t>0$ will mean $\langle
Z_{q}(t)\rangle=\alpha^{2}-\beta^{2}$, such that the absolute deviation from
its initial state is given by $|\Delta\langle
Z_{q}(t)\rangle|=2|\beta^{2}-\alpha^{2}|$. The ratio $|\Delta\langle
Z_{q}(t)\rangle|/|\langle Z_{q}(0)\rangle|=2$, which means a complete flip.
Z_{q}(t)\rangle|/|\langle Z_{q}(0)\rangle|=2$, which means a complete flip.
Similarly, $|\Delta\langle Z_{q}(t)\rangle|/|\langle Z_{q}(0)\rangle|=0$ would
mean absolutely no flipping. For any other change $\varepsilon$ in between,
$|\Delta\langle Z_{q}(t)\rangle|/|\langle Z_{q}(0)\rangle|=\varepsilon$. The
$\cal E$stimator qubit can be modelled with $\alpha=1$ and $\beta=0$, i.e., at
the reset condition for convenience, which means $\langle Z_{q}(0)\rangle=1$
for the $\cal E$stimator qubit. Under the same measurement noise signals,
$dW_{s}^{\cal E}(t)=dW_{s}^{\cal R}(t)$, where $s=(ZZI,IZZ,ZIZ)$ denotes the
syndrome operators under measurement of the three physical qubits of the $\cal
R$eal system, a change in $|\Delta\langle
Z^{\mathcal{R}}_{q}(t)\rangle|/|\langle Z^{\mathcal{R}}_{q}(0)\rangle|$ on
qubit $q$ by an amount $\varepsilon$ will underpin a similar change in the
$\cal E$stimator qubits, $|\Delta\langle
Z^{\mathcal{E}}_{q}(t)\rangle|/|\langle Z^{\mathcal{E}}_{q}(0)\rangle|$,
i.e.,
$\displaystyle\frac{|\Delta\langle Z^{\mathcal{E}}_{q}(t)\rangle|}{|\langle
Z^{\mathcal{E}}_{q}(0)\rangle|}=\frac{|\Delta\langle
Z^{\mathcal{R}}_{q}(t)\rangle|}{|\langle
Z^{\mathcal{R}}_{q}(0)\rangle|}=\varepsilon(t).$ (28)
Thus, we can use the following condition on the $\cal E$stimator system to
detect a bit-flip error on qubit $q$ and correspondingly apply the feedback
Hamiltonian after a time $\delta t$,
$\lambda_{q}(t+\delta t)=\begin{cases}\lambda_{q},&\mathrm{if}|\langle
Z_{q}^{\cal E}(t)\rangle-\langle Z_{q}^{\cal E}(0)\rangle|>\varepsilon|\langle
Z_{q}^{\cal E}(0)\rangle|,\\\ 0,&\text{otherwise},\end{cases}$ (29)
where $\lambda_{q}\sim\kappa$ and $\varepsilon$ is a tolerance slightly higher
than 1, which we choose to be $\varepsilon=1.05$.
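For illustration, the threshold rule of Eq. 29 can be sketched as a small Python helper (the qubit values, tolerance and feedback strength shown are illustrative only):

```python
def feedback_strength(z_now, z_init, lam_q, eps=1.05):
    """Decide the feedback strength on qubit q from the Estimator's
    Pauli-Z deviation, following the threshold condition of Eq. 29."""
    if abs(z_now - z_init) > eps * abs(z_init):
        return lam_q      # a flip is inferred: switch the feedback on
    return 0.0            # otherwise leave the qubit untouched

# Example: Estimator qubit initialized with <Z(0)> = 1 (reset condition)
lam = feedback_strength(z_now=-0.97, z_init=1.0, lam_q=1.0)  # -> 1.0, flip detected
```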
## Acknowledgments
The authors thank the Okinawa Institute of Science and Technology (OIST)
Graduate University for the super-computing facilities and financial support.
GJM acknowledges the support of the Australian Research Council Centre of
Excellence for Engineered Quantum Systems CE170100009.
## References
* Nielsen and Chuang (2010) Michael A. Nielsen and Isaac L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, England, UK, 2010).
* Shor (1995) Peter W. Shor, “Scheme for reducing decoherence in quantum computer memory,” Phys. Rev. A 52, R2493–R2496(R) (1995).
* Steane (1996) A. M. Steane, “Error Correcting Codes in Quantum Theory,” Phys. Rev. Lett. 77, 793–797 (1996).
* Djordjevic (2012) Ivan Djordjevic, _Quantum Information Processing and Quantum Error Correction_ (Elsevier, Academic Press, 2012).
* Gertler _et al._ (2021) Jeffrey M. Gertler, Brian Baker, Juliang Li, Shruti Shirol, Jens Koch, and Chen Wang, “Protecting a bosonic qubit with autonomous quantum error correction,” Nature 590, 243–248 (2021).
* Gottesman (1997) Daniel Gottesman, “Stabilizer Codes and Quantum Error Correction,” arXiv (1997), quant-ph/9705052 .
* Devitt _et al._ (2013) Simon J. Devitt, William J. Munro, and Kae Nemoto, “Quantum error correction for beginners,” Rep. Prog. Phys. 76, 076001 (2013).
* Reed _et al._ (2012) M. D. Reed, L. DiCarlo, S. E. Nigg, L. Sun, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, “Realization of three-qubit quantum error correction with superconducting circuits,” Nature 482, 382–385 (2012).
* Knill _et al._ (1998) Emanuel Knill, Raymond Laflamme, and Wojciech H. Zurek, “Resilient quantum computation: error models and thresholds,” Proc. R. Soc. Lond. A. 454, 365–384 (1998).
* Girvin (2021) Steven M. Girvin, “Introduction to Quantum Error Correction and Fault Tolerance,” arXiv (2021), 2111.08894 .
* Schindler _et al._ (2011) Philipp Schindler, Julio T. Barreiro, Thomas Monz, Volckmar Nebendahl, Daniel Nigg, Michael Chwalla, Markus Hennrich, and Rainer Blatt, “Experimental Repetitive Quantum Error Correction,” Science 332, 1059–1061 (2011).
* Linke _et al._ (2017) Norbert M. Linke, Mauricio Gutierrez, Kevin A. Landsman, Caroline Figgatt, Shantanu Debnath, Kenneth R. Brown, and Christopher Monroe, “Fault-tolerant quantum error detection,” Sci. Adv. 3, e1701074 (2017).
* Negnevitsky _et al._ (2018) V. Negnevitsky, M. Marinelli, K. K. Mehta, H.-Y. Lo, C. Flühmann, and J. P. Home, “Repeated multi-qubit readout and feedback with a mixed-species trapped-ion register,” Nature 563, 527–531 (2018).
* Cramer _et al._ (2016) J. Cramer, N. Kalb, M. A. Rol, B. Hensen, M. S. Blok, M. Markham, D. J. Twitchen, R. Hanson, and T. H. Taminiau, “Repeated quantum error correction on a continuously encoded qubit by real-time feedback,” Nat. Commun. 7, 1–7 (2016).
* Kelly _et al._ (2015) J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Yu Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O’Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and John M. Martinis, “State preservation by repetitive error detection in a superconducting quantum circuit,” Nature 519, 66–69 (2015).
* Ristè _et al._ (2015) D. Ristè, S. Poletto, M.-Z. Huang, A. Bruno, V. Vesterinen, O.-P. Saira, and L. DiCarlo, “Detecting bit-flip errors in a logical qubit using stabilizer measurements,” Nat. Commun. 6, 1–6 (2015).
* Ofek _et al._ (2016) Nissim Ofek, Andrei Petrenko, Reinier Heeres, Philip Reinhold, Zaki Leghtas, Brian Vlastakis, Yehan Liu, Luigi Frunzio, S. M. Girvin, L. Jiang, Mazyar Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, “Extending the lifetime of a quantum bit with error correction in superconducting circuits,” Nature 536, 441–445 (2016).
* Arute et al. (2019) Frank Arute et al., “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505–510 (2019).
* Andersen _et al._ (2020) Christian Kraglund Andersen, Ants Remm, Stefania Lazar, Sebastian Krinner, Nathan Lacroix, Graham J. Norris, Mihai Gabureac, Christopher Eichler, and Andreas Wallraff, “Repeated quantum error detection in a surface code,” Nat. Phys. 16, 875–880 (2020).
* Stricker _et al._ (2020) Roman Stricker, Davide Vodola, Alexander Erhard, Lukas Postler, Michael Meth, Martin Ringbauer, Philipp Schindler, Thomas Monz, Markus Müller, and Rainer Blatt, “Experimental deterministic correction of qubit loss,” Nature 585, 207–210 (2020).
* Chen et al. (2021) Zijun Chen et al., “Exponential suppression of bit or phase errors with cyclic error correction,” Nature 595, 383–387 (2021).
* de Neeve _et al._ (2022) Brennan de Neeve, Thanh-Long Nguyen, Tanja Behrle, and Jonathan P. Home, “Error correction of a logical grid state qubit by dissipative pumping,” Nat. Phys. 18, 296–300 (2022).
* Ahn _et al._ (2002) Charlene Ahn, Andrew C. Doherty, and Andrew J. Landahl, “Continuous quantum error correction via quantum feedback control,” Phys. Rev. A 65, 042301 (2002).
* Ahn _et al._ (2003) Charlene Ahn, H. M. Wiseman, and G. J. Milburn, “Quantum error correction for continuously detected errors,” Phys. Rev. A 67, 052310 (2003).
* Sarovar _et al._ (2004) Mohan Sarovar, Charlene Ahn, Kurt Jacobs, and Gerard J. Milburn, “Practical scheme for error control using feedback,” Phys. Rev. A 69, 052324 (2004).
* Sarovar and Milburn (2005) Mohan Sarovar and G. J. Milburn, “Continuous quantum error correction by cooling,” Phys. Rev. A 72, 012306 (2005).
* Wiseman and Milburn (2014) Howard M. Wiseman and Gerard J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, Cambridge, England, UK, 2014).
* Mabuchi (2009) Hideo Mabuchi, “Continuous quantum error correction as classical hybrid control,” New J. Phys. 11, 105044 (2009).
* Cardona _et al._ (2019) Gerardo Cardona, Alain Sarlette, and Pierre Rouchon, “Continuous-time Quantum Error Correction with Noise-assisted Quantum Feedback,” IFAC-PapersOnLine 52, 198–203 (2019).
* Mohseninia _et al._ (2020) Razieh Mohseninia, Jing Yang, Irfan Siddiqi, Andrew N. Jordan, and Justin Dressel, “Always-On Quantum Error Tracking with Continuous Parity Measurements,” Quantum 4, 358 (2020), 1907.08882v2 .
* Atalaya _et al._ (2021) J. Atalaya, S. Zhang, M. Y. Niu, A. Babakhani, H. C. H. Chan, J. M. Epstein, and K. B. Whaley, “Continuous quantum error correction for evolution under time-dependent Hamiltonians,” Phys. Rev. A 103, 042406 (2021).
* Livingston _et al._ (2022) William P. Livingston, Machiel S. Blok, Emmanuel Flurin, Justin Dressel, Andrew N. Jordan, and Irfan Siddiqi, “Experimental demonstration of continuous quantum error correction,” Nat. Commun. 13, 1–7 (2022).
* Diósi _et al._ (2006) Lajos Diósi, Thomas Konrad, Artur Scherer, and Jürgen Audretsch, “Coupled Ito equations of continuous quantum state measurement and estimation,” J. Phys. A: Math. Gen. 39, L575–L581 (2006).
* Borah _et al._ (2021) Sangkha Borah, Bijita Sarma, Michael Kewming, Gerard J. Milburn, and Jason Twamley, “Measurement-Based Feedback Quantum Control with Deep Reinforcement Learning for a Double-Well Nonlinear Potential,” Phys. Rev. Lett. 127, 190403 (2021).
* Minev _et al._ (2019) Z. K. Minev, S. O. Mundhada, S. Shankar, P. Reinhold, R. Gutiérrez-Jáuregui, R. J. Schoelkopf, M. Mirrahimi, H. J. Carmichael, and M. H. Devoret, “To catch and reverse a quantum jump mid-flight,” Nature 570, 200–204 (2019).
* Blais _et al._ (2021) Alexandre Blais, Arne L. Grimsmo, S. M. Girvin, and Andreas Wallraff, “Circuit quantum electrodynamics,” Rev. Mod. Phys. 93, 025005 (2021).
# Unconventional charge density wave and photoinduced lattice symmetry change
in Kagome Metal CsV3Sb5 probed by time-resolved spectroscopy
Z. X. Wang International Center for Quantum Materials, School of Physics,
Peking University, Beijing 100871, China Q. Wu International Center for
Quantum Materials, School of Physics, Peking University, Beijing 100871, China
Q. W. Yin Department of Physics and Beijing Key Laboratory of Opto-electronic
Functional Materials and Micro-nano Devices, Renmin University of China,
Beijing 100872, China Z. J. Tu Department of Physics and Beijing Key
Laboratory of Opto-electronic Functional Materials and Micro-nano Devices,
Renmin University of China, Beijing 100872, China C. S. Gong Department of
Physics and Beijing Key Laboratory of Opto-electronic Functional Materials and
Micro-nano Devices, Renmin University of China, Beijing 100872, China T. Lin
International Center for Quantum Materials, School of Physics, Peking
University, Beijing 100871, China Q. M. Liu International Center for Quantum
Materials, School of Physics, Peking University, Beijing 100871, China L. Y.
Shi International Center for Quantum Materials, School of Physics, Peking
University, Beijing 100871, China S. J. Zhang International Center for
Quantum Materials, School of Physics, Peking University, Beijing 100871, China
D. Wu International Center for Quantum Materials, School of Physics, Peking
University, Beijing 100871, China H. C. Lei Department of Physics and
Beijing Key Laboratory of Opto-electronic Functional Materials and Micro-nano
Devices, Renmin University of China, Beijing 100872, China T. Dong
International Center for Quantum Materials, School of Physics, Peking
University, Beijing 100871, China N. L. Wang International Center for
Quantum Materials, School of Physics, Peking University, Beijing 100871, China
Beijing Academy of Quantum Information Sciences, Beijing 100913, China
###### Abstract
Recently, the kagome lattice metal family AV3Sb5 (A = K, Rb, Cs) has received
wide attention due to the presence of superconductivity, charge density wave
(CDW) order and peculiar properties arising from its topologically nontrivial
electronic structure. With time-resolved pump-probe spectroscopy, we show that
the excited quasiparticle relaxation dynamics can be explained by the formation
of an energy gap below the phase transition, similar to a usual second-order
CDW condensate; by contrast, the structural change is predominantly a
first-order phase transition. Furthermore, no CDW amplitude mode is identified
in the ordered phase. The results suggest that the CDW order is very different
from a traditional CDW condensate. We also find that a weak pump pulse can
non-thermally melt the CDW order and drive the sample into its high-temperature
phase, revealing that the difference in lattice potential between those phases is small.
Control over physical properties or electronic phases in quantum materials via
external tuning parameters is a central topic in condensed matter physics.
Traditional ways of tuning external parameters are to change the temperature,
pressure, electric/magnetic field, or doping level of the material systems. In
the last two decades, photo-excitation has emerged as a new way to probe and
manipulate the properties of quantum materials, enabling an ultra-fast control
over the material properties or quantum phases Cavalleri _et al._ (2001);
Tomimoto _et al._ (2003); Okamoto _et al._ (2004); Takubo _et al._ (2005);
Matsubara _et al._ (2007); Tomeljak _et al._ (2009); Wall _et al._ (2012);
Stojchevska _et al._ (2014); Giannetti _et al._ (2016); Zhang _et al._
(2019); Sie _et al._ (2019); Nova _et al._ (2019); Li _et al._ (2019);
McLeod _et al._ (2020); Kogar _et al._ (2020); Liu _et al._ (2021). The
ultrashort laser pulses can create a far from equilibrium distribution of the
energy among the different degrees of freedom, triggering the formation of
thermal or non-thermal metastable or even stable phases. Among different
systems, correlated electronic systems with rich phase diagrams have been the
most explored materials for practical manipulation, since the complex
interactions or competition among different degrees of freedom make them very
susceptible to external perturbation or stimuli Giannetti _et al._ (2016).
Recently, a new family of kagome metals AV3Sb5 (A = Cs, Rb, K), composed of
vanadium layers, antimony layers and alkali ions sandwiched between the two
layers, has attracted tremendous attention in the community Ortiz _et al._
(2019, 2020, 2021); Yin _et al._ (2021). These materials undergo a charge
density wave (CDW) phase transition at $T_{\rm{CDW}}=80-100$ K with a
2$\times$2$\times$2 superlattice formation and a superconducting transition at
$T_{\rm{c}}=0.9-3.5$ K. High-pressure measurements revealed the competition
between the charge order and superconductivity by observing a double-dome
superconductivity Zhao _et al._ (2021); Chen _et al._ (2021a); Du _et al._
(2021); Chen _et al._ (2021b); Zhang _et al._ (2021). More intriguingly, the
topological nontrivial electronic structure was found in AV3Sb5 family (even
in CDW state) Ortiz _et al._ (2019, 2020, 2021); Yin _et al._ (2021); Wang
_et al._ (2020); Tan _et al._ (2021); Li _et al._ (2021); Zhao _et al._
(2021); Chen _et al._ (2021c); Ni _et al._ (2021); Jiang _et al._ (2020). A
giant anomalous Hall effect Yang _et al._ (2020); Yu _et al._ (2021) was
observed even in the absence of magnetic ordering Ortiz _et al._ (2020);
Kenney _et al._ (2020). Furthermore, transport and scanning tunnelling
microscopy (STM) measurements revealed that the 6-fold rotation $C_{6}$
symmetry is further broken and reduced to $C_{2}$ symmetry at low temperature
below 60 K Xiang _et al._ (2021); Zhao _et al._ (2021); Chen _et al._
(2021c); Li _et al._ ; Ratcliff _et al._ (2021). A recent coherent phonon
spectroscopy measurement Ratcliff _et al._ (2021) revealed appearance of new
phonon modes below 94 K and 60 K, respectively, implying structural phase
transitions. Thus, the AV3Sb5 series provide a new opportunity to study the
competition between different phases and to understand the unconventional
correlated physics emerging from the itinerant kagome lattice electrons.
In this work, we perform ultrafast pump-probe reflectivity experiment on
CsV3Sb5, aiming to investigate the CDW order across the phase transitions and
possible photoexcitation control of different orders in the compound. Our
measurements show that, while the quasiparticle relaxation dynamics upon weak
pumping can be described by the formation of an energy gap below the phase
transition, the structural change is characterized by an abrupt change in the
number of coherent phonon modes without clear softening at the CDW phase
transition. Furthermore, no CDW amplitude mode can be identified in the ordered
phase. Those results suggest that the CDW transition is predominantly first
order and that the CDW order is very different from the traditional CDW
condensate. More intriguingly, we show that even a small pump fluence can
non-thermally melt the $C_{2}$ ground state order and the $C_{6}$ CDW order
successively and drive the compound into the high-temperature phase. We shall
discuss the implications of these results.
Single crystals of CsV3Sb5 were grown from Cs ingot (purity 99.9%), V powder
(purity 99.9%) and Sb grains (purity 99.999%) using the self-flux method,
similar to the growth of RbV3Sb5 Yin _et al._ (2021). We used an amplified
Ti:sapphire laser system with 800 nm wavelength and 35 fs pulse duration
operating at 1 kHz repetition frequency as the light source for pump-probe
measurement. The fluence of the probe beam is set below 3 $\mu$J cm-2, weaker
than that of the pump beam. To reduce the noise caused by stray light, the pump and
probe pulses were set to be cross polarized and an extra polarizer was mounted
just before the detector.
Figure 1: Temperature-dependent dynamics for CsV3Sb5. (a) $\Delta R/R$ in the
temperature range of 5-200 K. With increasing temperature, the absolute value
of $\Delta R/R$ increases first, then decreases and finally it changes the
sign at $T\approx 98$ K (above $T_{\rm{CDW}}=94$ K). The lines in high
temperature phase are scaled by a five-fold factor. The dashed lines are the
double-exponential fitting curves: $\Delta
R/R(t)=A_{1}\rm{e}^{-t/\tau_{1}}+A_{2}\rm{e}^{-t/\tau_{2}}$. (b), (c) The fast
($\tau_{1}\sim 0.5$ ps) and slow ($\tau_{2}\sim 10$ ps) relaxation processes,
respectively: the amplitudes $A_{1}$, $A_{2}$ (orange squares) and decay times
$\tau_{1}$, $\tau_{2}$ (blue circles) of the reflectivity transients extracted
from the double-exponential fits. Note that the amplitude of the fast
relaxation process, $A_{1}$, becomes zero in the high-temperature phase, which
means the fast relaxation process is absent there and the double-exponential
function reduces to a single exponential. Error bars represent the standard
deviation of the fit. The orange and blue lines are the R-T model fits.
Figure 1 presents the photoinduced reflectivity change of CsV3Sb5 as a
function of time delay at different temperatures under rather weak pumping
excitation $\sim$5 $\mu$J cm-2. Very prominently, $\Delta R/R$ changes sign,
from positive to negative when temperature decreases across the CDW transition
temperature $T_{\rm{CDW}}$. At low temperature, the pump pulse induces an
abrupt drop in reflectivity followed by a fast decay within a few picoseconds
and a slower recovery process in the order of 10 ps, while there is only a
slow decay dynamics at high temperature. We use a double-exponential function
to fit the decay process: $\Delta
R/R=A_{1}\rm{e}^{-t/\tau_{1}}+A_{2}\rm{e}^{-t/\tau_{2}}$, where
$A_{1}\rm{e}^{-t/\tau_{1}}$ is the fast decay process and
$A_{2}\rm{e}^{-t/\tau_{2}}$ represents the slower recovery process. The best
fitting results are shown as dashed lines in Fig. 1(a).
We note that the fast decay process becomes invisible above 95 K, which
indicates a strong correlation with the CDW energy gap. Fitting parameters of
the double-exponential function are shown in Fig. 1 (b) and (c). For the fast
decay dynamics, the decay lifetime $\tau_{1}$ diverges and the absolute value
of the amplitude $A_{1}$ drops to zero precipitously as the temperature rises
toward $T_{\rm{CDW}}$. These behaviors indicate that an energy gap is closing
toward the critical temperature, which can be roughly described by the
Rothwarf-Taylor (R-T) model, showing that the opening of a CDW energy gap
significantly impedes the relaxation of the photoinduced quasiparticles
Kabanov _et al._ (1999). Similar to superconductivity, which is a condensate of
electron-electron Cooper pairs, a CDW is a condensate of electron-hole (e-h)
pairs. The pump pulse serves as an excitation that breaks e-h pairs across the
energy gap of the CDW condensate, and the recombination of e-h pairs is
accompanied by the emission of phonons with energy higher than the energy gap
$\Delta(T)$. $\tau_{1}$ is the characteristic time for this recombination;
since these emitted phonons will in turn break additional e-h pairs, the decay
time $\tau_{1}$ is ultimately determined by the time required for these phonons
to anharmonically decay into phonons with energy less than $\Delta$. The
amplitude $A$ represents the population of quasiparticles excited by the pump
pulse. The change
of relaxation time and the amplitude of the photoinduced reflectivity signal
near the transition temperature in R-T model is given by following equations
Kabanov _et al._ (1999); Lin _et al._ (2020):
$\displaystyle\tau(T)$
$\displaystyle\propto\frac{\ln{\left[g+\textrm{e}^{-\Delta(T)/k_{B}T}\right]}}{\Delta(T)^{2}}$
(1) $\displaystyle A(T)$
$\displaystyle\propto\frac{\Phi/(\Delta(T)+k_{B}T/2)}{1+\Gamma\sqrt{2k_{B}T/\pi\Delta(T)}\textrm{e}^{-\Delta(T)/k_{B}T}}$
where a BCS-type energy gap is assumed here as
$\Delta(T)=\Delta_{0}\sqrt{1-T/T_{\rm{CDW}}}$ with $T_{\rm{CDW}}=94$ K and the
CDW gap at 0 K $\Delta_{0}=40$ meV Wang _et al._ (2021). Qualitatively, the
equations can reproduce the measurement results. The best fits yield the
phenomenological fitting parameters $g=0.18$ and $\Gamma=17$, as shown in Fig.
1 (b) and (c). Although there appears to be an increase of the decay time
$\tau_{1}$ below $T^{*}=60$ K, no significant change can be identified in the
relaxation process when the symmetry is further broken from $C_{6}$ to $C_{2}$.
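As an aid to the reader, a minimal Python sketch of the double-exponential fit used above (with synthetic data standing in for a measured $\Delta R/R$ transient; all parameter values are placeholders) could look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, A1, tau1, A2, tau2):
    """Double-exponential transient: fast (tau1) and slow (tau2) relaxation."""
    return A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

# Synthetic trace standing in for a measured dR/R transient (placeholder values)
t = np.linspace(0.05, 30.0, 600)                       # delay time in ps
true = double_exp(t, -1.0e-4, 0.5, -0.3e-4, 10.0)
rng = np.random.default_rng(1)
data = true + 2e-6 * rng.normal(size=t.size)

popt, pcov = curve_fit(double_exp, t, data, p0=(-1e-4, 0.5, -0.5e-4, 10.0))
A1, tau1, A2, tau2 = popt
perr = np.sqrt(np.diag(pcov))   # standard deviation of the fitted parameters
```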
Figure 2: Temperature-dependent coherent phonon spectroscopy for CsV3Sb5. (a)
Coherent phonon oscillations in the time domain, with the decay background
subtracted. (b) The fast Fourier transform of the data in (a). (c)
Temperature-dependent waterfall map extracted from (b). The 4.1 THz coherent
phonon is present at all temperatures through the phase transitions. The 1.3
THz phonon can only be detected below $T_{\rm{CDW}}$, while the 3.1 THz phonon
disappears at temperatures above $T^{*}=30\sim 60$ K.
Figure 3: Fluence-dependent pump-probe reflectivity signal of CsV3Sb5. (a)
Photoinduced reflectivity $\Delta R/R$ as a function of time delay versus pump
fluence in the range from 5 to 300 $\mu$J cm-2 for CsV3Sb5 at 20 K. The data
for the three highest fluences are scaled by a factor of 0.3. (b) Fluence
dependence of the decay time $\tau$ of the reflectivity transients extracted
from fits to the double-exponential decay function. (c) The fast Fourier
transform of the oscillatory parts obtained by subtracting the decay background
of $\Delta R/R$ in (a). Three resonance peaks at 1.3, 3.1 and 4.1 THz can be
seen at the lowest fluence of 5 $\mu$J cm-2. (d) Two-dimensional waterfall map
of these three resonance peaks. The 3.1 THz phonon survives only below a
fluence of 23 $\mu$J cm-2, the phonon at 1.3 THz is suppressed by 55 $\mu$J
cm-2 pulses, while the 4.1 THz phonon is coherently enhanced even at intense
fluences higher than 300 $\mu$J cm-2.
After subtracting the dynamics background, we obtained the coherent phonon
oscillations, as shown in Fig. 2 (a). The fast Fourier transform result is
presented in Fig. 2 (b). A two dimensional intensity map as a function of
temperature and frequency is displayed in Fig. 2 (c). The coherent phonon
spectroscopy is similar to previous study on CsV3Sb5 Ratcliff _et al._
(2021). Above $T_{\rm{CDW}}=94$ K, there is only one peak at 4.1 THz, which is
in good agreement with the density functional theory (DFT) calculations for
the $A_{1g}$ mode of phonon spectra Li _et al._ (2021); Ratcliff _et al._
(2021). Below $T_{\rm{CDW}}$, a new mode at 1.3 THz abruptly appears,
yielding evidence for the CDW structural modulation. Below 60 K, a weak and
broad feature emerges near 3.1 THz, which evolves into a sharp and strong peak
below 30 K. This mode was suggested to be linked to the uniaxial order
observed by STM experiments which breaks the $C_{6}$ rotational symmetry
Ratcliff _et al._ (2021). It is worth remarking that, for a conventional CDW
condensate, the CDW amplitude mode usually appears as the strongest
oscillation in the pump-probe measurement. As the temperature increases to the
transition temperature $T_{\rm{CDW}}$, the mode frequency softens and behaves
like an order parameter Chen _et al._ (2017). However, no such CDW amplitude
mode could be identified from the above spectra, suggesting that the CDW order
in CsV3Sb5 is very different from the conventional CDW condensate. The abrupt
change of the coherent mode near 1.3 THz further indicates that the phase
transition is predominantly first order. On the other hand, the symmetry change
from $C_{6}$ to $C_{2}$ is a crossover behavior. A short-range $C_{2}$ nematic
order, corresponding to the broad feature, is seen roughly below 60 K, which
gradually evolves into a static $C_{2}$ order below 30 K.
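A minimal sketch of how such coherent phonon spectra can be obtained numerically, by subtracting the fitted relaxation background and Fourier transforming the residual oscillation (illustrative Python, not the authors' analysis code):

```python
import numpy as np

def phonon_spectrum(t_ps, dR_over_R, background):
    """FFT of the oscillatory residual after subtracting the decay background.
    t_ps must be uniformly spaced (in ps); returns frequency (THz) and amplitude."""
    osc = dR_over_R - background                 # residual coherent oscillation
    window = np.hanning(osc.size)                # reduce spectral leakage
    spec = np.abs(np.fft.rfft(osc * window))
    dt = t_ps[1] - t_ps[0]                       # time step in ps
    freq_THz = np.fft.rfftfreq(osc.size, d=dt)   # 1/ps corresponds to THz
    return freq_THz, spec
```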
The above analysis reveals a rather peculiar situation: while the excited
electron relaxation dynamics upon weak pumping can be described by the
formation of an energy gap below the phase transition, similar to a usual
second-order CDW condensate, the structural change is primarily a first-order
phase transition, characterized by an abrupt change in the number of coherent
phonon modes without clear softening at the CDW phase transition. Additionally,
no CDW amplitude mode is present in the ordered phase. Those results suggest
that the CDW order is rather unconventional. We notice that nuclear magnetic
resonance (NMR) measurements on CsV3Sb5 indicated a similar situation Song _et
al._ (2021). A first-order structural phase transition associated with orbital
ordering is seen as the sudden splitting of the orbital shift in the 51V NMR
spectrum at $T_{\rm{CDW}}$; by contrast, a typical second-order transition
behavior is seen in the quadrupole splitting, which appears gradually below
$T_{\rm{CDW}}$. The NMR measurement also suggests that the CDW order is a
secondary electronic order. The seemingly weak coupling between the electron
and lattice degrees of freedom needs to be further explored.
In addition to varying temperature, we also performed measurement of the
photoinduced reflectivity change at different pump fluences in CDW phase. The
measurement results at 20 K are shown in Fig.3. We found that a surprisingly
small pump fluence can melt the $C_{2}$ nematic order and then the $C_{6}$ CDW
order. As shown in Fig. 3(a), at 5 $\mu$J cm-2 fluence, the fast decay process
shows a lifetime of hundreds of femtoseconds. With increasing pump fluence, the
absolute value of the negative $\Delta R/R$ further grows and reaches its
maximum value at around $F^{\rm{melt}}\sim$55 $\mu$J cm-2. It then turns back,
goes in the positive direction, passes through zero, and gradually grows
larger. This behavior is very similar to the warming process of the
temperature-dependent measurement. From the double-exponential decay fits, we
can extract the relaxation time of the fast decay dynamics, as shown in Fig.
3(b). As the fluence increases towards $F^{\rm{melt}}$, the relaxation time
sharply increases. Similar to the above analysis for the temperature variation,
we assume that a CDW energy gap is present up to the melting fluence as
$\Delta=\Delta_{0}(1-F/F^{\rm{melt}})$, and that the decay time is inversely
proportional to the energy gap near the critical melting fluence, $\tau\propto
1/\Delta$. Indeed, the fast decay time can be reasonably reproduced by this
relation, as shown in Fig. 3(b). The fluence-dependent relaxation dynamics
suggests that a pump fluence as small as 55 $\mu$J cm-2 can melt the CDW order
in CsV3Sb5.
The coherent phonon spectroscopy provides more direct evidence for ultrafast
optical melting of those orders in CsV3Sb5. We performed Fourier
transformation of the pump-probe measurement after subtracting the relaxation
background and obtained the coherent phonon spectra for different fluences, as
displayed in Fig. 3(c). The intensity plot as a function of both frequency and
pump fluence is shown in Fig. 3(d). At small fluence, for example at 5 $\mu$J
cm-2, three resonance peaks centered at 1.3, 3.1 and 4.1 THz can be observed.
As the pump fluence rises to 23 $\mu$J cm-2, the peak at 3.1 THz, which is
associated with $C_{2}$ symmetry Ratcliff _et al._ (2021), disappears. As the
fluence increases beyond 55 $\mu$J cm-2, the phonon peak at 1.3 THz, which is
related to the CDW order Ratcliff _et al._ (2021), is no longer visible. The
critical pump fluence is in agreement with the analysis from the relaxation
dynamics. The 4.1 THz resonance is present at all fluences. The strength of
this mode is further enhanced up to the highest fluence used, 300 $\mu$J cm-2.
This behavior is normally expected for coherent phonon generation in a
pump-probe experiment.
We emphasize that the melting of the low-temperature orders cannot be
attributed to a trivial thermal effect owing to the very small pump fluence.
The temperature rise $\Delta T$ can be phenomenologically estimated by energy
conservation law $S\delta_{0}\rho/M\int_{T_{0}}^{T_{0}+\Delta
T}C_{p}(T)\textrm{d}T=(1-R)FS$, where $S$ is the excitation area, the mass
density $\rho=5.2$ g/cm3, the molar mass $M=894.5$ g/mol, the penetration
depth $\delta_{0}\approx 64$ nm and the reflectivity $R=0.42$ at 800 nm for
CsV3Sb5 from our own reflectance measurement by Fourier transform infrared
spectrometer. With the initial temperature $T_{0}=20$ K and the
temperature-dependent heat capacity $C_{p}(T)$ extracted from previous data
Ortiz _et al._ (2020), the values of $\Delta T$ are calculated to be
approximately 10 and 19 K at fluences of 23 and 55 $\mu$J cm-2, respectively.
From this estimation, the possibility of a thermal effect can be unambiguously ruled out.
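To illustrate the energy-balance estimate, a minimal Python sketch that solves for $\Delta T$ is given below; the heat-capacity function is only a placeholder and must be replaced by the measured $C_{p}(T)$ data of Ortiz _et al._ (2020):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rho_m = 5.2          # g/cm^3, mass density
M = 894.5            # g/mol, molar mass
delta0 = 64e-7       # cm, penetration depth (64 nm)
R = 0.42             # reflectivity at 800 nm
T0 = 20.0            # K, initial temperature

def Cp(T):
    """Placeholder heat capacity in J/(mol K); replace with measured data."""
    return 1e-3 * T**3          # illustrative low-temperature form only

def delta_T(F_uJ_cm2):
    """Temperature rise from the energy-balance relation in the text."""
    F = F_uJ_cm2 * 1e-6                                          # J/cm^2
    absorbed_per_mol = (1 - R) * F * M / (delta0 * rho_m)        # J/mol
    g = lambda dT: quad(Cp, T0, T0 + dT)[0] - absorbed_per_mol
    return brentq(g, 1e-6, 500.0)
```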
The nonthermal melting at such small pump fluence is the key finding in this
work. Although photoinduced lattice symmetry change or phase transitions have
been observed in many different systems, the threshold fluences for completely
changing the lattice potentials are usually much higher than in the present
measurement. For example, VO2 undergoes a first-order structural phase
transition on cooling from a rutile R-phase to a monoclinic M1-phase at 343 K.
Photoexcitation at room temperature can induce the lattice symmetry change
from the M1-phase to rutile R-phase based on similar coherent phonon
measurement Wall _et al._ (2012). However, the required pump threshold
fluence is $\sim$7 mJ cm-2 for 800 nm pump-probe measurement with similar
pulse duration $<$40 fs. Our measurement suggests that the lattice potential
difference between those different phases is relatively small. The weak pump
pulses can excite a sufficient number of electrons and their perturbation to
the lattice potential is large enough to modify the symmetry. As a result, it
drives the structural phase transitions non-thermally. We noticed that a
recent X-ray scattering measurement Li _et al._ (2021) on CsV3Sb5 revealed
that, while the CDW is long-range ordered, the integrated CDW superlattice
peak intensity that is proportional to the CDW order parameter is extremely
small. Compared with the fundamental Bragg peaks, the CDW peak intensity is
3$\sim$5 orders of magnitude weaker, demonstrating small lattice distortions.
This observation appears to be correlated with our result.
To summarize, laser pulses serve as a tool not only to probe the dynamic
behavior but also to drive phase transitions in kagome metal compound CsV3Sb5.
Our measurement reveals a peculiar CDW phase transition: the quasiparticle
relaxation dynamics can be explained by the formation of an energy gap below
the phase transition, similar to a usual second-order CDW condensate, whereas
the structural change is predominantly a first-order phase transition.
Furthermore, no CDW amplitude mode can be identified in the ordered phase. We
also show that even a small pump fluence can non-thermally melt the $C_{2}$
ground state order and then the $C_{6}$ CDW order successively and drive the
compound into the high-temperature phase, suggesting that the lattice potential
difference between those different phases is relatively small.
ACKNOWLEDGMENTS
This work was supported by National Natural Science Foundation of China (No.
11888101, 11822412 and 11774423), the National Key Research and Development
Program of China (No. 2017YFA0302904, 2018YFE0202600) and Beijing Natural
Science Foundation (Grant No. Z200005).
## References
* Cavalleri _et al._ (2001) A. Cavalleri, C. Tóth, C. W. Siders, J. A. Squier, F. Ráksi, P. Forget, and J. C. Kieffer, Phys. Rev. Lett. 87, 237401 (2001).
* Tomimoto _et al._ (2003) S. Tomimoto, S. Miyasaka, T. Ogasawara, H. Okamoto, and Y. Tokura, Phys. Rev. B 68, 035106 (2003).
* Okamoto _et al._ (2004) H. Okamoto, Y. Ishige, S. Tanaka, H. Kishida, S. Iwai, and Y. Tokura, Phys. Rev. B 70, 165202 (2004).
* Takubo _et al._ (2005) N. Takubo, Y. Ogimoto, M. Nakamura, H. Tamaru, M. Izumi, and K. Miyano, Phys. Rev. Lett. 95, 017404 (2005).
* Matsubara _et al._ (2007) M. Matsubara, Y. Okimoto, T. Ogasawara, Y. Tomioka, H. Okamoto, and Y. Tokura, Phys. Rev. Lett. 99, 207401 (2007).
* Tomeljak _et al._ (2009) A. Tomeljak, H. Schäfer, D. Städter, M. Beyer, K. Biljakovic, and J. Demsar, Physical Review Letters 102, 066404 (2009), arXiv:0901.1951 .
* Wall _et al._ (2012) S. Wall, D. Wegkamp, L. Foglia, K. Appavoo, J. Nag, R. F. Haglund, J. Stähler, and M. Wolf, Nature Communications 3 (2012), 10.1038/ncomms1719, arXiv:1012.1468 .
* Stojchevska _et al._ (2014) L. Stojchevska, I. Vaskivskyi, T. Mertelj, P. Kusar, D. Svetin, S. Brazovskii, and D. Mihailovic, Science 344, 177 (2014).
* Giannetti _et al._ (2016) C. Giannetti, M. Capone, D. Fausti, M. Fabrizio, F. Parmigiani, and D. Mihailovic, Advances in Physics 65, 58 (2016), arXiv:1601.07204 .
* Zhang _et al._ (2019) M. Y. Zhang, Z. X. Wang, Y. N. Li, L. Y. Shi, D. Wu, T. Lin, S. J. Zhang, Y. Q. Liu, Q. M. Liu, J. Wang, T. Dong, and N. L. Wang, Phys. Rev. X 9, 021036 (2019).
* Sie _et al._ (2019) E. J. Sie, C. M. Nyby, C. D. Pemmaraju, S. J. Park, X. Shen, J. Yang, M. C. Hoffmann, B. K. Ofori-Okai, R. Li, A. H. Reid, S. Weathersby, E. Mannebach, N. Finney, D. Rhodes, D. Chenet, A. Antony, L. Balicas, J. Hone, T. P. Devereaux, T. F. Heinz, X. Wang, and A. M. Lindenberg, Nature 565, 61 (2019).
* Nova _et al._ (2019) T. F. Nova, A. S. Disa, M. Fechner, and A. Cavalleri, Science 364, 1075 (2019), https://science.sciencemag.org/content/364/6445/1075.full.pdf .
* Li _et al._ (2019) X. Li, T. Qiu, J. Zhang, E. Baldini, J. Lu, A. M. Rappe, and K. A. Nelson, Science 364, 1079 (2019), https://science.sciencemag.org/content/364/6445/1079.full.pdf .
* McLeod _et al._ (2020) A. S. McLeod, J. Zhang, M. Q. Gu, F. Jin, G. Zhang, K. W. Post, X. G. Zhao, A. J. Millis, W. B. Wu, J. M. Rondinelli, R. D. Averitt, and D. N. Basov, Nature Materials 19, 397 (2020), arXiv:1910.10361 .
* Kogar _et al._ (2020) A. Kogar, A. Zong, P. E. Dolgirev, X. Shen, J. Straquadine, Y.-Q. Bie, X. Wang, T. Rohwer, I.-C. Tung, Y. Yang, R. Li, J. Yang, S. Weathersby, S. Park, M. E. Kozina, E. J. Sie, H. Wen, P. Jarillo-Herrero, I. R. Fisher, X. Wang, and N. Gedik, Nature Physics 16, 159 (2020).
* Liu _et al._ (2021) Q. M. Liu, D. Wu, Z. A. Li, L. Y. Shi, Z. X. Wang, S. J. Zhang, T. Lin, T. C. Hu, H. F. Tian, J. Q. Li, T. Dong, and N. L. Wang, Nature Communications 12, 1 (2021).
* Ortiz _et al._ (2019) B. R. Ortiz, L. C. Gomes, J. R. Morey, M. Winiarski, M. Bordelon, J. S. Mangum, I. W. Oswald, J. A. Rodriguez-Rivera, J. R. Neilson, S. D. Wilson, E. Ertekin, T. M. McQueen, and E. S. Toberer, Physical Review Materials 3, 94407 (2019).
* Ortiz _et al._ (2020) B. R. Ortiz, S. M. L. Teicher, Y. Hu, J. L. Zuo, P. M. Sarte, E. C. Schueller, A. M. M. Abeykoon, M. J. Krogstad, S. Rosenkranz, R. Osborn, R. Seshadri, L. Balents, J. He, and S. D. Wilson, Phys. Rev. Lett. 125, 247002 (2020).
* Ortiz _et al._ (2021) B. R. Ortiz, P. M. Sarte, E. M. Kenney, M. J. Graf, S. M. L. Teicher, R. Seshadri, and S. D. Wilson, Physical Review Materials 5, 1 (2021).
* Yin _et al._ (2021) Q. Yin, Z. Tu, C. Gong, Y. Fu, S. Yan, and H. Lei, Chinese Physics Letters 38, 37403 (2021).
* Zhao _et al._ (2021) H. Zhao, H. Li, B. R. Ortiz, S. M. L. Teicher, T. Park, M. Ye, Z. Wang, L. Balents, S. D. Wilson, and I. Zeljkovic, (2021), arXiv:2103.03118 .
* Chen _et al._ (2021a) K. Y. Chen, N. N. Wang, Q. W. Yin, Z. J. Tu, C. S. Gong, J. P. Sun, H. C. Lei, Y. Uwatoko, and J. G. Cheng, arXiv e-prints , arXiv:2102.09328 (2021a), arXiv:2102.09328 [cond-mat.supr-con] .
* Du _et al._ (2021) F. Du, S. Luo, B. R. Ortiz, Y. Chen, W. Duan, D. Zhang, X. Lu, S. D. Wilson, Y. Song, and H. Yuan, arXiv e-prints , arXiv:2102.10959 (2021), arXiv:2102.10959 [cond-mat.supr-con] .
* Chen _et al._ (2021b) X. Chen, X. Zhan, X. Wang, J. Deng, X.-b. Liu, X. Chen, J.-g. Guo, and X. Chen, arXiv e-prints , arXiv:2103.13759 (2021b), arXiv:2103.13759 [cond-mat.supr-con] .
* Zhang _et al._ (2021) Z. Zhang, Z. Chen, Y. Zhou, Y. Yuan, S. Wang, L. Zhang, X. Zhu, Y. Zhou, X. Chen, J. Zhou, and Z. Yang, arXiv e-prints , arXiv:2103.12507 (2021), arXiv:2103.12507 [cond-mat.supr-con] .
* Wang _et al._ (2020) Y. Wang, S. Yang, P. K. Sivakumar, B. R. Ortiz, S. M. L. Teicher, H. Wu, A. K. Srivastava, C. Garg, D. Liu, S. S. P. Parkin, E. S. Toberer, T. McQueen, S. D. Wilson, and M. N. Ali, arXiv e-prints , arXiv:2012.05898 (2020), arXiv:2012.05898 [cond-mat.supr-con] .
* Tan _et al._ (2021) H. Tan, Y. Liu, Z. Wang, and B. Yan, arXiv e-prints , arXiv:2103.06325 (2021), arXiv:2103.06325 [cond-mat.supr-con] .
* Li _et al._ (2021) H. X. Li, T. T. Zhang, Y. Y. Pai, C. Marvinney, A. Said, T. Yilmaz, Q. Yin, C. Gong, Z. Tu, E. Vescovo, R. G. Moore, S. Murakami, H. C. Lei, H. N. Lee, B. Lawrie, and H. Miao, arXiv e-prints , arXiv:2103.09769 (2021), arXiv:2103.09769 [cond-mat.supr-con] .
* Zhao _et al._ (2021) H. Zhao, H. Li, B. R. Ortiz, S. M. L. Teicher, T. Park, M. Ye, Z. Wang, L. Balents, S. D. Wilson, and I. Zeljkovic, arXiv e-prints , arXiv:2103.03118 (2021), arXiv:2103.03118 [cond-mat.supr-con] .
* Chen _et al._ (2021c) H. Chen, H. Yang, B. Hu, Z. Zhao, J. Yuan, Y. Xing, G. Qian, Z. Huang, G. Li, Y. Ye, Q. Yin, C. Gong, Z. Tu, H. Lei, S. Ma, H. Zhang, S. Ni, H. Tan, C. Shen, X. Dong, B. Yan, Z. Wang, and H.-J. Gao, arXiv e-prints , arXiv:2103.09188 (2021c), arXiv:2103.09188 [cond-mat.supr-con] .
* Ni _et al._ (2021) S. Ni, S. Ma, Y. Zhang, J. Yuan, H. Yang, Z. Lu, N. Wang, J. Sun, Z. Zhao, D. Li, S. Liu, H. Zhang, H. Chen, K. Jin, J. Cheng, L. Yu, F. Zhou, X. Dong, J. Hu, H.-J. Gao, and Z. Zhao, arXiv e-prints , arXiv:2104.00374 (2021), arXiv:2104.00374 [cond-mat.supr-con] .
* Jiang _et al._ (2020) Y.-X. Jiang, J.-X. Yin, M. M. Denner, N. Shumiya, B. R. Ortiz, G. Xu, Z. Guguchia, J. He, M. Shafayat Hossain, X. Liu, J. Ruff, L. Kautzsch, S. S. Zhang, G. Chang, I. Belopolski, Q. Zhang, T. A. Cochran, D. Multer, M. Litskevich, Z.-J. Cheng, X. P. Yang, Z. Wang, R. Thomale, T. Neupert, S. D. Wilson, and M. Zahid Hasan, arXiv e-prints , arXiv:2012.15709 (2020), arXiv:2012.15709 [cond-mat.supr-con] .
* Yang _et al._ (2020) S. Y. Yang, Y. Wang, B. R. Ortiz, D. Liu, J. Gayles, E. Derunova, R. Gonzalez-Hernandez, L. Šmejkal, Y. Chen, S. S. Parkin, S. D. Wilson, E. S. Toberer, T. McQueen, and M. N. Ali, Science Advances 6, 1 (2020).
* Yu _et al._ (2021) F. H. Yu, T. Wu, Z. Y. Wang, B. Lei, W. Z. Zhuo, J. J. Ying, and X. H. Chen, arXiv e-prints , arXiv:2102.10987 (2021), arXiv:2102.10987 [cond-mat.str-el] .
* Kenney _et al._ (2020) E. M. Kenney, B. R. Ortiz, C. Wang, S. D. Wilson, and M. J. Graf, arXiv (2020), 10.1088/1361-648x/abe8f9, arXiv:2012.04737 .
* Xiang _et al._ (2021) Y. Xiang, Q. Li, Y. Li, W. Xie, H. Yang, Z. Wang, Y. Yao, and H.-H. Wen, arXiv e-prints , arXiv:2104.06909 (2021), arXiv:2104.06909 [cond-mat.supr-con] .
* (37) H. Li, H. Zhao, B. R. Ortiz, T. Park, M. Ye, L. Balents, Z. Wang, S. D. Wilson, and I. Zeljkovic, 17.
* Ratcliff _et al._ (2021) N. Ratcliff, L. Hallett, B. R. Ortiz, S. D. Wilson, and J. W. Harter, arXiv e-prints , arXiv:2104.10138 (2021), arXiv:2104.10138 [cond-mat.str-el] .
* Kabanov _et al._ (1999) V. V. Kabanov, J. Demsar, B. Podobnik, and D. Mihailovic, Phys. Rev. B 59, 1497 (1999).
* Lin _et al._ (2020) T. Lin, L. Y. Shi, Z. X. Wang, S. J. Zhang, Q. M. Liu, T. C. Hu, T. Dong, D. Wu, and N. L. Wang, Phys. Rev. B 101, 205112 (2020).
* Wang _et al._ (2021) Z. Wang, S. Ma, Y. Zhang, H. Yang, Z. Zhao, Y. Ou, Y. Zhu, S. Ni, Z. Lu, H. Chen, K. Jiang, L. Yu, Y. Zhang, X. Dong, J. Hu, H.-J. Gao, and Z. Zhao, (2021), arXiv:2104.05556 .
* Chen _et al._ (2017) R. Y. Chen, S. J. Zhang, M. Y. Zhang, T. Dong, and N. L. Wang, Phys. Rev. Lett. 118, 107402 (2017).
* Song _et al._ (2021) D. W. Song, L. X. Zheng, F. H. Yu, J. Li, L. P. Nie, M. Shan, D. Zhao, S. J. Li, B. L. Kang, Z. M. Wu, Y. B. Zhou, K. L. Sun, K. Liu, X. G. Luo, Z. Y. Wang, J. J. Ying, X. G. Wan, T. Wu, and X. H. Chen, arXiv:2104.09173 (2021), arXiv:2104.09173 .
# Real-time Lane-wise Traffic Monitoring in Optimal ROIs
Mei Qiu, Wei Lin, Lauren Ann Christopher, Stanley Chien∗, Yaobin Chen, Shu Hu
Purdue University, Indianapolis, IN, USA. ∗Corresponding Author
###### Abstract
In the US, thousands of Pan, Tilt, and Zoom (PTZ) traffic cameras monitor
highway conditions. There is a great interest in using these highway cameras
to gather valuable road traffic data to support traffic analysis and decision-
making for highway safety and efficient traffic management. However, there are
too many cameras for a few human traffic operators to effectively monitor, so
a fully automated solution is desired. This paper introduces a novel system
that learns the locations of highway lanes and traffic directions from these
camera feeds automatically. It collects real-time, lane-specific traffic data
continuously, even adjusting for changes in camera angle or zoom. This
facilitates efficient traffic analysis, decision-making, and improved highway
safety.
## I Introduction
Numerous Pan, Tilt, and Zoom (PTZ) traffic cameras are installed along
highways in the USA, allowing operators to monitor traffic conditions.
However, human operators oversee hundreds of cameras, making it impossible to
watch them all simultaneously. Thus, there is interest in using Artificial
Intelligence (AI) to analyze footage in real time, providing valuable traffic
data and alerting operators to issues.
Traffic monitoring systems come in various types, broadly classified into two
groups: traditional systems and intelligent management systems [1, 2]. AI and
big data analytics have revolutionized traffic monitoring and management.
Intelligent Transportation Systems (ITS), equipped with advanced sensors,
radars, and license plate recognition cameras play a vital role in detecting
and deterring traffic rule violations [3, 4]. These systems also aid in
alleviating traffic congestion by managing traffic scenarios through real-time
data visualization [5, 6].
However, these existing traffic monitoring systems require annotating images
from large amounts of video surveillance data. Most of them can monitor the
traffic status of both directions but are not lane-specific. Some work addresses
lane-wise counting, but it relies on road lane-marking detection, which fails
under various lighting, weather, and ground conditions [7]. Vehicle motion
trajectories can be used to learn lanes [8], but the performance depends on
vehicle detection and long-range tracking accuracy. The innovation in this work is
to perform detection and tracking only in optimal regions of the road. Our new
result also removes the limitation of existing traffic flow estimation methods
that require fixed zoom and viewing angles [9, 7, 10].
Figure 1: (Top) Traditional System: Vehicles are detected, tracked, and
counted across the entire frame, with manually defined counting lines, leading
to suboptimal performance. (Bottom) Our system: Vehicle detection is performed
across the entire frame. Vehicle tracking is concentrated in several
automatically learned Regions of Interest (ROIs), optimizing detection and
tracking results. Counting is lane-specific, with each vehicle assigned a
unique LaneID, moving away from bi-directional counting. Best viewed in color.
This paper introduces a Real-Time Automatic Lane Learning and Lane-Specific
Traffic Status Monitoring System, building upon our prior research [11, 12].
In our previous work, we developed a method for automatically identifying
highway locations, lane boundaries, and traffic flow directions from video
footage captured by cameras. Leveraging this acquired knowledge, the current
study focuses on the real-time collection and reporting of traffic data for
each lane, including metrics such as vehicle counts (illustrated in Fig. 1),
flow rates, and congestion estimates derived from vehicle count statistics.
Furthermore, this system is equipped to autonomously detect changes in the
camera’s angle or zoom level, subsequently reinitiating the road and lane
learning process to maintain continuous and accurate traffic status
monitoring.
The contributions of our work can be summarized as follows:
* •
Building on our earlier contributions in LCD[11] and MRLL[12], this paper
introduces an advanced, adaptive system capable of lane-wise vehicle counting,
flow rate calculation, and traffic status detection that operates
continuously, around the clock. This system is designed to accommodate cameras
of varying resolutions and frame rates and is robust against a wide range of
weather and traffic conditions, utilizing real-time video streams as its
primary input source.
* •
We have developed a novel, standalone module named “Camera View Checking” that
operates continuously to monitor for any changes in the camera’s angle or
view. It accomplishes this by comparing the current camera view with an
initial, well-defined reference view. Should any deviation in the camera angle
or view be detected, this module automatically triggers the system to re-
initiate the learning process for road and lane parameters.
* •
We devised a novel approach, termed “Video Rate-Computer Speed
Synchronization” to dynamically adjust the input frame rate in accordance with
the system’s processing capabilities, thereby ensuring the maintenance of
real-time performance.
* •
We have developed a region-based tracking system leveraging an enhanced
DeepSort architecture, specifically designed for real-time tracking in the
most effective Regions of Interest (ROIs) to ensure optimal vehicle detection.
Key improvements include the integration of our “Video Rate-Computer Speed
Synchronization” method, which dynamically adjusts input frame rates based on
processing speed to sustain real-time operations. We have also transitioned
from using the traditional Intersection Over Union (IOU) metric to the more
advanced Complete IOU (CIOU) distance, significantly improving the precision
of detection and tracking data association. Furthermore, the introduction of
an adaptive matching threshold has been instrumental in optimizing the
system’s tracking accuracy.
* •
We have thoroughly evaluated the effectiveness of our proposed lane-wise
vehicle counting system by comparing it with manual counting in 9 videos.
Figure 2: The system we designed is running on every local computer with a
single GPU. This system gets real-time stream input from a single camera.
Since cameras are not fixed on highways, the Environment Learning module will
learn the road and lane information based on vehicle motion trajectories (the
details can be found in our previous work: [11, 13, 12]). When these
parameters about road and lanes are learned for a particular view (can be
defined by customer), the system will go to the next stage, to do Road
Condition and Traffic Status Detection via each lane. An independent module
(i.e., Camera View Checking) keeps running all the time to check camera angle
changes or not by comparing with the first, well-defined view. Once the system
detects the camera angle/view is changed, the Road Condition and Traffic
Status Detection module stops working immediately and the system goes to the
Environment Learning to learn a new set of road and lane information
parameters. More details are explained in Section III.
## II Related Works
### II-A Object Detection
Traditional object detection methods like background subtraction [14] and
image difference [15] have been replaced by deep learning, due to superior
detection accuracy and speed. Deep learning methods fall into two categories:
two-stage detectors (e.g., Faster R-CNN, Mask R-CNN [16, 17]) and one-stage
detectors (e.g., YOLO, YOLOv4 [18, 19]). Two-stage detectors offer superior
accuracy, while one-stage detectors provide faster inference without pre-
generated region proposals. These deep learning-based object detection
algorithms are extensively employed in traffic surveillance videos, delivering
satisfactory performance [20, 21].
### II-B Multiple Object Tracking
Most object-tracking algorithms fall into two categories: detection-based
tracking (tracking-by-detection) and detection-free tracking. In recent years,
tracking-by-detection methods have been predominant. These methods involve
performing object detection on each frame to obtain detection results, which
are then associated with adjacent frames to form trajectories. Popular data
association methods include the Hungarian algorithm and the Kuhn-Munkres
algorithm [22]. Post-processing techniques like soft non-maximum suppression
(NMS) [23] are often applied to smooth and refine the trajectories. The
Intersection Over Union (IOU) [24] metric is commonly used to associate
detection results based on bounding box overlaps between frames. SORT [25] and
Deep SORT [26] are two widely used tracking methods in traffic surveillance
videos [27, 28]. SORT combines Kalman Filter with the Hungarian algorithm and
relies on bounding box size and position for motion estimation and data
association. Deep SORT utilizes a CNN to extract appearance information,
enhancing the association metric with motion information. This allows Deep
SORT to track objects more effectively during occlusions, reducing ID switches
but also increasing computational costs.
## III Method
This section summarizes our system, described in [11, 12], that analyzes
highway surveillance camera footage in real-time to detect changes in camera
angles or zoom levels. It then learns the number of lanes and their locations,
providing lane-wise vehicle counting, flow rate, and congestion estimation in
optimal ROIs with best vehicle detection. The system, depicted in Fig. 2, runs
on a local computer with a single GPU and comprises two main components:
Environment Learning for adapting to unpredictable camera settings and Road
Condition and Traffic Status Detection for analyzing lane-based traffic
information.
### III-A Environment Learning
The goal of this component is to find the locations of the lanes and the
optimal locations to detect and count vehicles in each lane. Instead of
relying on non-robust lane markings on highways, we adopted a method that uses
the vehicle motion information in the video to find the lane locations, because
most vehicles on the highway are driven within their lanes [11, 12].
Camera View Checking The program monitors for changes in camera angle or zoom
level which would invalidate learned lane information. The efficient ImageHash
algorithm [29] is used to assess significant and rapid background changes in
the top fifth of the image frame (the sky and far side of the road), to signal
potential camera adjustments. If the Hash Distance value exceeds a certain
threshold, it is considered a camera angle or zoom level change, prompting a
relearning process. This check occurs every 50 frames.
The algorithm proves very robust across various camera motions and lighting
conditions. Example test cases are shown in Fig. LABEL:fig4: when the cloud
background changed significantly, when the two input videos were taken a long
time apart, or when the camera zoomed out or zoomed in. Under all of these
difficult PTZ camera situations, our algorithm gives the correct detection with
the default hash threshold. The algorithm detects large camera angle shifts as
well as less apparent camera parameter changes. There are some cases that our
algorithm cannot handle well, as shown in Fig. LABEL:fig5. In summary, our
camera view checking algorithm works well in the majority of our ITS
environments. The system monitors the camera’s viewing angle; if it detects
changes in angle or zoom level, it halts and restarts the environment learning
stage to relearn the road and lane parameters from the video.
### III-B Road Condition and Traffic Status Detection
Learning the ROI and lane boundaries makes lane-based traffic status detection
possible. The issues faced and solutions in lane-based traffic status
detection are described in the following subsections.
#### III-B1 Video Rate-Computer Speed Synchronization
The goal of this system development is that it can be used with all highway
surveillance cameras. Since the frame rate (frames per second, $fps$) and
resolution of these cameras can be quite different, the image processing time
can vary significantly. We reduce the number of frames processed per second
based on the computer's computation speed (assuming the processing time of each
frame is $\delta t$ seconds) by adaptively skipping $K$ frames for every frame
processed, following Equation (1):
$K=\begin{cases}fps-\frac{1}{\delta t}&\text{if $\frac{1}{\delta t}-fps<0$}\\\
0&\text{otherwise}\end{cases}$ (1)
We designed a Video Rate-Computer Speed Synchronization method that observes
the computer processing time for each input frame and the incoming frames per
second. Then we use this information to determine how many frames should be
skipped for each frame processed so there will be no backlogged frames. Since
the processing time for each frame is affected by the vehicles captured and
processed in the frame, the number of frames skipped fluctuates over time.
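A minimal sketch of Equation (1) follows; rounding the (possibly fractional) difference to an integer number of skipped frames is an assumption, since the paper does not state how non-integer values are handled.

```python
def frames_to_skip(fps, delta_t):
    """Number of frames K to skip per processed frame, per Equation (1).

    fps     : frame rate of the incoming video stream (frames per second)
    delta_t : measured processing time of the last frame, in seconds
    """
    processing_rate = 1.0 / delta_t        # frames the computer can process per second
    if processing_rate < fps:              # the computer is slower than the stream
        return int(round(fps - processing_rate))
    return 0
```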
#### III-B2 Vehicle Detection
The pre-trained YOLOv4 is used in this work, and the input to vehicle detection
is no longer continuous: $K$ frames are skipped between processed frames.
#### III-B3 Online Multi-target Tracking with Adaptive Kalman Filter
Before tracking, we filter out invalid detected vehicles, removing those with
bounding box sizes more than twice the median truck size and those outside any
ROI or roads. NMS is applied to eliminate redundant bounding box overlaps.
DeepSort [26] is then employed for online multiple vehicle tracking on the
remaining valid detection in each frame. However, the classic tracking
pipeline has limitations in our application. Firstly, real-time performance on
a single GPU is essential, but feature extraction from a deep learning model
at each frame is time-consuming. Secondly, our proposed skipping frame
strategy renders the IOU distance and fixed threshold in DeepSort inadequate,
as detection and tracklets may not overlap. Therefore, we modify DeepSort to
meet our application’s requirements.
We modified DeepSort by removing object appearance feature matching and by
using the CIOU distance instead of the IOU distance in the cascade matching step.
By adopting the frame-skipping strategy, the whole system achieves real-time
processing without much loss of tracking accuracy. However, in the default
DeepSort, the IOU distance used in the last matching step between detected
vehicles and tracks fails when the newly detected vehicles do not overlap the
predicted tracks. The distance between the bounding-box centers and the
consistency of the aspect ratio have the potential to solve this problem. Based
on the CIOU loss [30], we replace the IOU distance with the CIOU distance. The
original IOU distance is defined in Equation (2), where IOU is the
intersection-over-union of the bounding boxes in detections and tracks. The
modified CIOU distance is defined in Equation (3).
$d_{IOU}=1-IOU$ (2) $d_{CIOU}=1-IOU+D+\alpha*V$ (3)
In CIOU, the normalized central point distance $D$ is designed to measure the
distance of two boxes as calculated in Equation (4),
$D=\frac{\rho^{2}(p^{d},p^{t})}{c^{2}}$ (4)
where $p^{d}=[x_{d},y_{d}]^{T}$ and $p^{t}=[x_{t},y_{t}]^{T}$ are the central
points of the boxes in detections and tracks, $c$ is the diagonal length of each
box in the track, and $\rho$ is the Euclidean distance function.
$V$ is defined as the consistency of the aspect ratio and calculated as
$\frac{4}{\pi^{2}}(\arctan\frac{w^{d}}{h^{d}}-\arctan\frac{w^{t}}{h^{t}})^{2}$. The
trade-off parameter $\alpha$ is defined the same as in the default CIOU. The
CIOU distance mitigates missed matches when detections do not overlap the
predicted tracks and is more robust to the scale of the bounding boxes.
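The sketch below evaluates the CIOU distance of Equation (3) for a pair of axis-aligned boxes. It follows the standard CIOU definitions of Zheng et al. [30] (enclosing-box diagonal for the normalization term $c$ and $\alpha=V/(1-IOU+V)$); it is an illustrative example, not the authors' implementation.

```python
import math

def ciou_distance(det_box, trk_box):
    """CIOU distance between a detection box and a track box (Equations (2)-(4)).

    Boxes are given as (x1, y1, x2, y2).
    """
    # intersection-over-union
    ix1, iy1 = max(det_box[0], trk_box[0]), max(det_box[1], trk_box[1])
    ix2, iy2 = min(det_box[2], trk_box[2]), min(det_box[3], trk_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_d = (det_box[2] - det_box[0]) * (det_box[3] - det_box[1])
    area_t = (trk_box[2] - trk_box[0]) * (trk_box[3] - trk_box[1])
    iou = inter / (area_d + area_t - inter + 1e-9)

    # normalized central-point distance D
    cxd, cyd = (det_box[0] + det_box[2]) / 2, (det_box[1] + det_box[3]) / 2
    cxt, cyt = (trk_box[0] + trk_box[2]) / 2, (trk_box[1] + trk_box[3]) / 2
    ex1, ey1 = min(det_box[0], trk_box[0]), min(det_box[1], trk_box[1])
    ex2, ey2 = max(det_box[2], trk_box[2]), max(det_box[3], trk_box[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9   # squared enclosing-box diagonal
    d = ((cxd - cxt) ** 2 + (cyd - cyt) ** 2) / c2

    # aspect-ratio consistency V and trade-off parameter alpha
    wd, hd = det_box[2] - det_box[0], det_box[3] - det_box[1]
    wt, ht = trk_box[2] - trk_box[0], trk_box[3] - trk_box[1]
    v = (4 / math.pi ** 2) * (math.atan(wd / (hd + 1e-9)) - math.atan(wt / (ht + 1e-9))) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - iou + d + alpha * v
```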
We also adjusted the matching distance threshold with the skipped frame number
$K$ in the cascade matching step of DeepSort.
Based on the above analysis, a fixed IOU distance threshold does not work well
because a different number of frames is skipped in each iteration. We therefore
adjust the CIOU distance threshold to $pre_{thre}*(K+1)$, where $K$ denotes the
number of skipped frames and $pre_{thre}$ is the default fixed matching
threshold in DeepSort; if no frame is skipped, $K$ equals 0.
After tracking, each vehicle passing through the ROI is assigned a unique
$Fid$.
#### III-B4 Lane-wise Vehicle Counting
The vehicle counts, together with time information, can be converted
proportionally into flow rate and valuable traffic status information. To
realize lane-wise vehicle counting, we associate every tracked vehicle $i$ in
the ROI, identified by $Fid_{i}$, with a unique lane ID $Lid_{i}$ using the lane
boundary information obtained in the environment learning stage. When assigning
lane IDs to the vehicles in the current frame, only the previous frame is
compared. This strategy not only reduces the time cost but also reduces lane-ID
switches caused by $Fid$ switches over a longer trace. If the center of a
tracked vehicle is within the boundary of a lane and within the average car
length of the baseline, the vehicle is counted for that lane. A tracked vehicle
is counted only once.
#### III-B5 Traffic Status Detection
The system generates traffic status and incident reports for the users in
three steps: (i) periodically determine the flow rate of each lane based on the
vehicle counts over a specific time interval, (ii) estimate the percentage of
pixels occupied by all vehicles on each lane within the ROI as the occupancy
rate, as shown in Equation (5), and (iii) check the traffic status based on
the combination of the flow rate and occupancy.
$Occp_{l}=\frac{\sum(h_{l})}{H}$ (5)
where $l$ is the lane ID and the numerator sums the bounding-box heights of all
vehicles in lane $l$. For every $T$ minutes, the instantaneous flow rate
$Fr_{l}$ (vehicles per hour) of lane $l$ is calculated from the vehicle count
$C_{l}$ of lane $l$ during $T$: $Fr_{l}=\frac{C_{l}*60}{T}$. The time interval
for the flow rate calculation cannot be too long, since it would not capture
short-term traffic status changes; on the other hand, it cannot be too short,
since the flow rate would fluctuate too much to be meaningful.
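The per-lane occupancy of Equation (5) and the instantaneous flow rate translate directly into the following sketch; variable names are illustrative.

```python
def lane_occupancy(vehicle_box_heights, roi_height):
    """Occupancy rate of a lane (Equation (5)): sum of the bounding-box heights
    of the vehicles currently in the lane, normalized by the ROI height H."""
    return sum(vehicle_box_heights) / roi_height

def lane_flow_rate(count_in_interval, interval_minutes):
    """Instantaneous flow rate Fr_l (vehicles per hour) from the count C_l
    observed over an interval of T minutes."""
    return count_in_interval * 60.0 / interval_minutes
```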
Figure 4: 9 ITS traffic scenes recorded in sunny, rainy, snowy, nighttime, and
congested traffic conditions.
## IV Experiments
### IV-A Experimental Settings
Datasets. For vehicle count testing purposes, we created another 9 videos, each
lasting about 2 minutes. These testing videos have unique camera views and
various weather, traffic density, and visibility conditions, as shown in Fig. 4.
We assume each video has completed the ROI and lane learning, including lane
center locations, lane directions, and lane boundaries in all the ROIs.
Evaluation Metrics. To estimate our system’s lane-wise vehicle counting
performance, we manually counted 9 videos from four scene types (sunny, rainy,
night, and congested traffic), each lasting 2 minutes. We counted vehicles in
each lane separately as each vehicle passed the baseline. The ground-truth flow
rate is also estimated from the ground-truth counts. The total counting accuracy
of one video is defined as the corrected counting percentage
$\frac{total\quad system\quad count}{total\quad ground\quad truth\quad count}\times 100\%$;
we place the smaller of the two counts in the numerator, so a higher value means
higher counting accuracy. We design two other metrics to estimate the road-level
flow rate accuracy compared with the flow rate derived from the ground-truth
counting: MEA (mean estimation accuracy), $\frac{\sum(Fr_{l})}{\sum(FrGT_{l})}$,
and RMSE (root mean square error),
$\sqrt{\frac{\sum{(Fr_{l}-FrGT_{l})^{2}}}{m}}$, where $FrGT_{l}$ is the
ground-truth flow rate of lane $l$ and $m$ is the total number of lanes. The MEA
follows the same convention as the counting accuracy: we place the smaller value
in the numerator, so a higher value means the estimation is closer to the ground
truth. The RMSE measures the overall estimation error, and a lower value means
better counting and flow rate estimation. To check the traffic status, we
defined an estimation rule based on the flow rate and occupancy rate of each
lane, as shown in Equation (6). Based on these rules, our system produces 100%
accurate traffic status (normal, slow, and jam) reports for all the lanes of all
9 videos.
$status\\_{l}=\begin{cases}Jam&\begin{subarray}{c}\text{if $Fr_{l}<600$ and
$Occp_{l}>0.6$}\end{subarray}\\\ Slow&\begin{subarray}{c}\text{if
$600<Fr_{l}<900$ and $0.4<Occp_{l}<0.6$}\end{subarray}\\\
Normal&\text{otherwise}\end{cases}$ (6)
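The rule of Equation (6) can be written as a small classification function, sketched below with the thresholds stated above.

```python
def lane_status(flow_rate, occupancy):
    """Traffic status of a lane from its flow rate (veh/h) and occupancy rate,
    following Equation (6)."""
    if flow_rate < 600 and occupancy > 0.6:
        return "Jam"
    if 600 < flow_rate < 900 and 0.4 < occupancy < 0.6:
        return "Slow"
    return "Normal"
```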
Manual Count. We hired two master’s students to manually count the actual
vehicles crossing the counting line. For each video, they counted the vehicles
at a regular interval of 30 seconds, for four intervals in total. We also
recorded the flow rate and occupancy of each lane during the same intervals,
calculated automatically by our system. Finally, we analyzed how these two
indicators change over time and compared them.
Implementation Details. We utilized YOLOv4 [19], a popular deep learning
object detector, to detect vehicles in camera frame streams. During the
inference phase, we set the detection confidence score threshold to 0.25 and the
IOU threshold to 0.45. For tracking, we employed the Deep SORT framework, excluding
the CNN feature extractor. Cosine distance metric facilitated track
association in each frame, with an initial IOU distance threshold of 0.35.
Parameters like $max\\_age$ (30) and $n\\_init$ (3) were set to control track
deletion and initialization phases, respectively. Coupled with our frame-
skipping strategy, the entire system ran in real-time on a single NVIDIA
Quadro RTX 5000 GPU.
### IV-B Results
Figure 5: The overall count percentage for all vehicles from the 9 cases by our
counting system. Lower than 100% means undercounting, larger than 100% means
overcounting, and 100% means the system count exactly matches the ground truth.
Lane-wise Real-time Vehicle Counting. The lane detection in this system
performs with acceptable accuracy except in totally dark conditions.
Experimental results on our own collected highway data demonstrate the
effectiveness of the proposed framework in lane-wise vehicle counting when
compared with the manual ground truth. The system count in 7 of the 9 videos is
close to the ground truth, and the lowest counting error is 2%, as shown in
Fig. 5. However, the performance decreases under low lighting conditions, poor
viewing angles, or occlusions.
Lane-wise Flow Rate Estimation. When estimating our system’s lane-wise flow
rate accuracy, we only consider the cases in which all the lanes are correctly
learned. The flow rate estimation accuracy and error results of the 8 videos are
shown in TABLE I.
These results show that our system achieves good flow rate estimation accuracy:
the highest MEA reaches 0.95, and it reaches 0.86 even in night cases. The
average RMSE is lowest in the sunny daytime cases, which means that lighting is
still the major factor affecting detection and counting.
Road Condition and Traffic Status Detection. Based on the rules we defined,
our system gets 100% accurate traffic status (normal, slow, and jam) reports
of all the lanes of all 9 videos.
TABLE I: Accuracy of lane-wise flow rate estimation in various scenarios.
Video Name | MEA $\uparrow$ | RMSE $\downarrow$
---|---|---
Sunny 1 | 0.95 | 76.49
Sunny 2 | 0.93 | 54.38
Rainy 1 | 0.93 | 47.43
Rainy 2 | 0.77 | 228.47
Night 1 | 0.86 | 118.59
Night 2 | 0.93 | 36.74
Congestion 1 | 0.86 | 354.96
Congestion 2 | 0.9 | 129.03
Average | 0.89 | 130.63
## V Conclusion
Highway surveillance cameras face challenges due to unpredictable changes in
viewing direction and zoom level. To address this, a lane-based automatic
traffic monitoring system with a lane learning component has been developed and
demonstrated its success using real-time Indiana highway data. The paper
outlines the system’s architecture and highlights the importance of vehicle
motion for lane detection, vehicle counting, and traffic status estimation.
Limitations. The performance of our system depends critically on the accuracy
of its various sub-modules, which include lane learning, vehicle detection,
tracking, LaneID assignment, and counting. Specifically, lane learning, which
is primarily informed by video input and vehicle motion, tends to be less
effective in situations such as traffic jams or when vehicles remain
stationary. The task of vehicle detection faces significant challenges in the
Intelligent Transportation System (ITS) environment, where adverse weather
conditions and low-light conditions at night can significantly impair
detection capabilities.
In Multi-Object Tracking (MOT), one of the major hurdles is dealing with
occlusions, particularly in dense traffic scenarios where bounding boxes of
different vehicles overlap, making it difficult to track individual vehicles
accurately. Moreover, the use of heavy-weight computational architectures for
tracking hampers the ability to perform real-time tracking, necessitating the
development of lighter-weight alternatives that do not sacrifice accuracy.
To accurately assess the overall performance of our system, it is essential to
conduct tests across a broader range of scenarios to validate the system’s
robustness. This entails incorporating more diverse test cases that can
effectively simulate the wide variety of real-world conditions under which the
system must operate.
Future Work. To improve the overall accuracy of the system, it is crucial to
make incremental improvements to each individual module. Additionally,
employing a more diverse collection of test videos will be instrumental in
accurately estimating the system’s performance.
Acknowledgment. This work was supported by the Joint Transportation Research
Program (JTRP), administered by the Indiana Department of Transportation and
Purdue University, Grant SPR-4436. The authors would like to thank all INDOT
Study Advisory Committee members, including Jim Sturdevant, Ed Cox, Tim Wells,
for their guidance and advice throughout the project. The authors would also
like to thank Zhengming Ding, Stephan Gerve, Upadhyay, Aniket Pankaj, Prathamesh
Somkant Panat, Anup Atul Mulay, Baudouin Ramsey, Kunal Mandil, Arjun
Narukkanchira Anilkumar, and Kavya Prasad for their contributions to various
parts of this project.
## References
* [1] Tamer Nadeem et al. Trafficview: A scalable traffic monitoring system. In IEEE International Conference on Mobile Data Management, 2004. Proceedings. 2004, pages 13–26. IEEE, 2004.
* [2] Neeraj Kumar Jain et al. A review on traffic monitoring system techniques. Soft computing: Theories and applications: Proceedings of SoCTA 2017, pages 569–577, 2019.
* [3] Mahmoud Abbasi et al. Deep learning for network traffic monitoring and analysis (ntma): A survey. Computer Communications, 170:19–41, 2021.
* [4] Igor Bisio et al. A systematic review of drone based road traffic monitoring system. IEEE Access, 2022.
* [5] Pallavi A Mandhare et al. Intelligent road traffic control system for traffic congestion: a perspective. International Journal of Computer Sciences and Engineering, 6(07):2018, 2018.
* [6] Vishal Mandal et al. Artificial intelligence-enabled traffic monitoring system. Sustainability, 12(21):9177, 2020.
* [7] Xingchen Zhang et al. Monocular visual traffic surveillance: a review. IEEE TITS, 2022.
* [8] Adriel Isaiah Valeroso Amoguis et al. Road lane segmentation using vehicle trajectory tracking and lane demarcation lines. In Proceedings of the 2023 6th International Conference on Machine Vision and Applications, pages 64–71, 2023.
* [9] Aleksandr Fedorov et al. Traffic flow estimation with data from a video surveillance camera. Journal of Big Data, 6:1–15, 2019.
* [10] Li Li et al. Trajectory data-based traffic flow studies: A revisit. Transportation Research Part C: Emerging Technologies, 114:225–240, 2020.
* [11] Mei Qiu et al. Intelligent highway lane center identification from surveillance camera video. In ITSC, pages 2506–2511. IEEE, 2021.
* [12] Mei Qiu et al. Intelligent highway adaptive lane learning system in multiple rois of surveillance camera video. IEEE TITS, 2024.
* [13] Mei Qiu et al. Attention mechanism improves yolov5x for detecting vehicles on surveillance videos. In 2022 IEEE AIPR, pages 1–8. IEEE, 2022.
* [14] Quin Cai et al. Tracking human motion in an indoor environment. In ICIP, volume 1, pages 215–218. IEEE, 1995.
* [15] Daniel J Dailey et al. An algorithm to estimate mean traffic speed using uncalibrated cameras. TITS, 1(2):98–107, 2000.
* [16] Shaoqing Ren et al. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015.
* [17] Kaiming He et al. Mask r-cnn. In ICCV, pages 2961–2969, 2017.
* [18] Joseph Redmon et al. You only look once: Unified, real-time object detection. In CVPR, pages 779–788, 2016.
* [19] Alexey Bochkovskiy et al. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020.
* [20] Hang Shi et al. Anomalous driving detection for traffic surveillance video analysis. In ICIST, pages 1–6. IEEE, 2021.
* [21] Soumi Mitra et al. Towards an optimised vehicle detection algorithm for multi-object tracking in traffic surveillance. In AICS, pages 200–211, 2021.
* [22] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83–97, 1955.
* [23] Navaneeth Bodla et al. Soft-nms–improving object detection with one line of code. In ICCV, pages 5561–5569, 2017.
* [24] Erik Bochinski, Volker Eiselein, and Thomas Sikora. High-speed tracking-by-detection without using image information. In 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS), pages 1–6. IEEE, 2017.
* [25] Alex Bewley et al. Simple online and realtime tracking. In 2016 IEEE ICIP, pages 3464–3468. IEEE, 2016.
* [26] Nicolai Wojke et al. Simple online and realtime tracking with a deep association metric. In 2017 IEEE ICIP, pages 3645–3649. IEEE, 2017.
* [27] Chong Liu et al. City-scale multi-camera vehicle tracking guided by crossroad zones. In CVPR, pages 4129–4137, 2021.
* [28] Lijun Yu et al. Traffic danger recognition with surveillance cameras without training data. In AVSS, pages 1–6. IEEE, 2018.
* [29] David Marr and Ellen Hildreth. Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207(1167):187–217, 1980.
* [30] Zhaohui Zheng et al. Enhancing geometric factors in model learning and inference for object detection and instance segmentation. IEEE Trans. Cybern., 52(8):8574–8586, 2021.
# Room Transfer Function Reconstruction Using Complex-valued Neural Networks
and Irregularly Distributed Microphones
###### Abstract
Reconstructing the room transfer functions needed to calculate the complex
sound field in a room has several important real-world applications. However,
an impractical number of microphones is often required. Recently, in addition
to classical signal processing methods, deep learning techniques have been
applied to reconstruct the room transfer function starting from a very limited
set of room transfer functions measured at scattered points in the room. In
this study, we employ complex-valued neural networks to estimate room transfer
functions in the frequency range of the first room resonances, using a few
irregularly distributed microphones. To the best of our knowledge, this is the
first time complex-valued neural networks are used to estimate room transfer
functions. To analyze the benefits of applying complex-valued optimization to
the considered task, we compare the proposed technique with a state-of-the-art
real-valued neural network method and a state-of-the-art kernel-based signal
processing approach for sound field reconstruction, showing that the proposed
technique exhibits relevant advantages in terms of phase accuracy and overall
quality of the reconstructed sound field.
Index Terms— sound field reconstruction, RIR interpolation, complex-valued
neural network, room acoustics
## 1 Introduction
Immersive audio plays a crucial role in virtual and augmented reality
applications, leading to a growing interest in the navigation of the acoustic
scene [1, 2], often referred to as 6 Degrees of Freedom (6DOF). To develop
such solutions in an effective way, it is necessary to reconstruct the sound
field in a large area, with the availability of room transfer functions
measured in just a few points in the room.
Solutions for sound field reconstruction incorporate both model-based and
data-driven approaches. Model-based strategies leverage existing knowledge of
acoustic principles to address the task of sound field reconstruction. These
can be categorized into parametric [3, 4, 5, 6, 7], and non-parametric models
[8, 9, 10, 11]. Parametric models represent the sound scene using a limited
set of parameters, such as source position[12] and directivity, [3], to
effectively convey spatial audio information. Non-parametric models, on the
other hand, exploit combinations of plane waves [10] or spherical waves [13,
11] to accurately reconstruct the acoustic field. Typically, this class of
techniques relies on compressed sensing [14], which is often applied in
combination with the equivalent source model [9, 15]. Some non-parametric
solutions adopt the Helmholtz equation as a prior to derive the interpolation
kernel. Caviedes et al. [16] explore the use of Gaussian Process (GP)
regression for sound field interpolation, while kernel-based interpolation
approaches have been proposed by Ueno et al. [17, 18] and Ribeiro et al. [19,
20].
Mainly motivated by the encouraging outcomes observed within the acoustic
domain [21, 22, 23, 24], data-driven approaches have found application also in
addressing the challenge of sound field reconstruction. The method proposed in
Pezzoli et al. [25] exploits the regularization of a Convolutional Neural
Network (CNN) to estimate the Room Impulse Responses (RIRs) without relying on
any assumptions about the acoustic model. Fernandez et al. [26] and Pezzoli
et al. [27] propose approaches that incorporate physics-informed neural
networks leveraging prior information from the wave equation to guide the
reconstruction of RIRs according to the governing equation of the system.
While these techniques operate in the time domain, their primary drawback lies
in the necessity to retrain the model for every set of measurements.
Following the paradigm of image inpainting [28, 29], Lluis et al. [30]
propose a CNN trained on an extensive dataset of Room Transfer Functions
(RTFs). This approach forces the network to learn features from RTFs under
various acoustic conditions, enabling its applicability to unseen rooms with
similar acoustic conditions. However, the majority of learning-based
approaches primarily handle real-valued features [31], leaving the inference
of the phase, if needed, to other processing blocks such as Griffin-Lim [32],
which leads to suboptimal results in applications where the phase is key. In this
context, the adoption of Complex-Valued Neural Networks (CVNNs) [33] is a
convenient choice, due to their ability to directly handle complex-valued data
and optimization. This makes them well-suited for addressing various audio
signal processing problems such as echo cancellation [34], speech enhancement
[35], speech separation [36], beamforming [37], and sound field reproduction
[38].
Inspired by the complex-valued representation of RTFs, this paper presents a
novel approach based on CVNNs and irregularly distributed microphones for
addressing the RTF reconstruction problem. Similarly to what is done in [30], the
proposed method approaches the problem as an image inpainting task. To the
best of our knowledge, this is the first application of CVNNs in the context
of RTF reconstruction. Additionally, the approach showcases its efficacy by
successfully handling the reconstruction task with a minimal number of
irregularly distributed microphones. We consider a 2D grid of positions
deployed on a plane where we aim to estimate the RTFs. The input of the CVNN
is a sparse set of RTFs measured at a few points on the grid, while the
outputs are the RTFs estimated at all the grid points. Through a simulation
study, we compare the proposed model with the approaches proposed by Lluis et
al. [30] and by Ueno et al. [18], demonstrating the benefits of incorporating
CVNNs into the RTF reconstruction problem. The code used to train the proposed
model and compute the results is publicly available on GitHub
111https://github.com/RonFrancesca/complex-sound-field.
Fig. 1: Schematic representation of the proposed CVNN.
## 2 Data Model and Problem formulation
In the scope of this study, we consider an acoustic source positioned at
$\mathbf{s}\in\mathbb{R}^{3}$ and an array of microphones deployed on a two-
dimensional $W\times H$ grid in a shoebox room of dimension $L_{x}\times
L_{y}\times L_{z}$. We define the coordinates of a point $\mathbf{r}_{w,h}$ on
the grid as:
$\mathbf{r}_{w,h}=[w[L_{x}/(W-1)],h[L_{y}/(H-1)],\overline{z}]^{T},$ (1)
where $w=0,\ldots,W-1$ and $h=0,\ldots,H-1$ are the indexes of the microphones
on the grid and $\overline{z}$ is a fixed value on the z-axis. The RTF from
source $\mathbf{s}$ to microphone $\mathbf{r}_{w,h}$ in a lightly damped
shoebox room can be computed using an infinite summation of room modes,
expressed as:
$G(\mathbf{r}_{w,h}|\mathbf{s},\omega)=-\frac{1}{V}\sum_{n}^{\infty}\frac{\Psi_{n}(\mathbf{r}_{wh})\Psi_{n}(\mathbf{s})}{(\omega/c)^{2}-(\omega_{n}/c)^{2}-j\omega/\tau_{n}},$
(2)
where $\omega$ denotes the angular frequency, $c$ the speed of sound,
$\tau_{n}$ the decay time of the considered mode and $\Psi(\cdot)$ the
corresponding mode shape. For notational simplicity, we denote the mode,
identified through the 3-dimensional index $[n_{x},n_{y},n_{z}]$, by the index
$n$, which describes its position on the frequency axis. The RTFs from source
$\mathbf{s}$ to points $\mathbf{r}_{w,h}$ at frequency $\omega$ are collected
in $\mathbf{G}\in\mathbb{C}^{W\times H}$ such that:
$[\mathbf{G}]_{w,h}(\mathbf{s},\omega)=G(\mathbf{r}_{w,h}|\mathbf{s},\omega).$
(3)
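For concreteness, the modal sum of Equation (2) can be evaluated numerically as in the sketch below. The closed-form rigid-wall mode shapes, the eigenfrequency expression, and the conversion of a single reverberation time into a common decay time $\tau_{n}$ are assumptions made for illustration only; they are not specified in this form in the text.

```python
import numpy as np

def rtf_modal_sum(r, s, room_dims, omega, c=343.0, t60=0.6, n_max=(20, 20, 20)):
    """Room transfer function G(r|s, omega) via a truncated modal sum (Equation (2)).

    Assumes rigid-wall shoebox mode shapes and eigenfrequencies and a common
    modal decay time derived from the reverberation time t60.
    """
    Lx, Ly, Lz = room_dims
    V = Lx * Ly * Lz
    tau = t60 / np.log(1000.0)              # pressure decays by 60 dB over t60
    G = 0.0 + 0.0j
    for nx in range(n_max[0]):
        for ny in range(n_max[1]):
            for nz in range(n_max[2]):
                eps = np.prod([1.0 if n == 0 else 2.0 for n in (nx, ny, nz)])
                def shape(p):
                    # rigid-wall cosine mode shape, scaled so that its
                    # mean-square value over the room volume is 1
                    return np.sqrt(eps) * (np.cos(nx * np.pi * p[0] / Lx)
                                           * np.cos(ny * np.pi * p[1] / Ly)
                                           * np.cos(nz * np.pi * p[2] / Lz))
                omega_n = c * np.pi * np.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
                denom = (omega / c) ** 2 - (omega_n / c) ** 2 - 1j * omega / tau
                G += shape(r) * shape(s) / denom
    return -G / V
```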
We define the set
$\mathcal{I}_{M}=\\{(\tilde{w},\tilde{h})|0<\tilde{w}<W-1,0<\tilde{h}<H-1\\},$
(4)
corresponding to positions on the 2D grid where no microphones are deployed.
The corresponding complex-valued RTFs measured on such incomplete grid can be
then defined as:
$[\tilde{\mathbf{G}}]_{w,h}(\mathbf{s},\omega)=\begin{cases}0,&\text{if}\
(w,h)\in\mathcal{I}_{M}\\\
[\mathbf{G}]_{w,h}(\mathbf{s},\omega),&\text{otherwise}\end{cases}.$ (5)
The RTF reconstruction problem is thus formulated as finding the function
$\mathcal{U}(\cdot)$ that provides an estimate $\hat{\mathbf{G}}$ of the RTF
matrix imposing that
$\hat{\mathbf{G}}=\arg\min_{\mathcal{U(\cdot)}}\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}|[\mathbf{G}]_{w,h}-[\mathcal{U(\tilde{\mathbf{G}})}]_{w,h}|^{2},$
(6)
omitting the frequency dependence for simplicity.
## 3 Complex-valued Neural Network for room transfer function reconstruction
### 3.1 Complex-valued networks prerequisites
CVNNs operate with tensors containing complex values, necessitating a
redefinition of primary operations and activation functions.
Given a matrix of complex-valued weights $\mathbf{W}$ and a complex-valued
input tensor $\mathbf{X}$, we can define the convolution operation in CVNN as
$\mathbf{W}*\mathbf{X}=(\Re(\mathbf{W})+j\Im(\mathbf{W}))*(\Re(\mathbf{X})+j\Im(\mathbf{X})),$
(7)
which, by applying the distributive property of convolution, can be
conveniently expressed in matrix form [33].
Similarly, most real-valued activation functions have a complex-valued
equivalent [39]. In this study, we will use the CPReLU activation [40], which
can be defined as:
$\mathrm{CPReLU}(\mathbf{X})=\mathrm{PReLU}(\Re(\mathbf{X}))+j\mathrm{PReLU}(\Im(\mathbf{X})).$
(8)
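A minimal PyTorch sketch of Equations (7) and (8) is given below, realizing the complex convolution through two real convolutions on the real and imaginary parts; it is a didactic example rather than the architecture used in this work.

```python
import torch
import torch.nn.functional as F

def complex_conv2d(x, weight, stride=1, padding=1):
    """Complex 2-D convolution (Equation (7)) via four real convolutions."""
    xr, xi = x.real, x.imag
    wr, wi = weight.real, weight.imag
    real = F.conv2d(xr, wr, stride=stride, padding=padding) - F.conv2d(xi, wi, stride=stride, padding=padding)
    imag = F.conv2d(xr, wi, stride=stride, padding=padding) + F.conv2d(xi, wr, stride=stride, padding=padding)
    return torch.complex(real, imag)

def cprelu(x, prelu_re, prelu_im):
    """CPReLU (Equation (8)): a separate PReLU on the real and imaginary parts."""
    return torch.complex(prelu_re(x.real), prelu_im(x.imag))

# minimal usage example
x = torch.randn(1, 2, 16, 16, dtype=torch.cfloat)   # batch of complex feature maps
w = torch.randn(4, 2, 3, 3, dtype=torch.cfloat)     # complex 3x3 kernels
y = cprelu(complex_conv2d(x, w), torch.nn.PReLU(), torch.nn.PReLU())
```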
During the training of a CVNN, weight updates involve complex-valued
gradients. The condition for a function to be differentiable (i.e.
holomorphic) in the complex domain is more stringent compared to the real
domain, since it is needed to satisfy also the Cauchy–Riemann equations. To
extend complex-valued calculus to non-holomorphic functions, Wirtinger
calculus [41] is applied to extract the gradient. Given a complex-valued
function $f(z)$, where $z=a+jb$ and $a,b\in\mathbb{R}$, the partial
derivatives w.r.t. $z$ and its conjugate $z^{*}$ can be computed as:
$\frac{\partial{f}}{\partial{z}}=1/2\left(\frac{\partial{f}}{\partial{a}}-j\frac{\partial{f}}{\partial{b}}\right),~{}\frac{\partial{f}}{\partial{z}^{*}}=1/2\left(\frac{\partial{f}}{\partial{a}}+j\frac{\partial{f}}{\partial{b}}\right).$
(9)
Given a real-valued loss function
$\mathcal{L}:\mathbb{C}\rightarrow\mathbb{R}$, the complex-gradient used to
perform backpropagation is calculated as introduced in [42, 43]:
$\nabla_{z^{*}}\mathcal{L}=2\frac{\partial{\mathcal{L}}}{\partial{z^{*}}}.$
(10)
[Figure 2 plots: (a) $\mathrm{NMSE_{abs}}$ [dB] and (b) $\mathrm{NMSE_{complex}}$ [dB] versus frequency [Hz] for $m=5$, $15$, $35$, and $55$ microphones.]
Fig. 2: Normalized Mean Squared Error (NMSE) for different numbers of
microphones, measured over the reconstructed magnitude (a) and the complex
pressure field (b). Thick lines correspond to results obtained with the
proposed method, dashed lines to the technique proposed by
Lluis et al. [30], and dash-dotted lines to the technique proposed
by Ueno et al. [18].
### 3.2 Input representation
To properly condition the proposed network model on reliable measurement
positions, we concatenate the incomplete RTF matrix $\tilde{\mathbf{G}}$ with
a binary mask $\mathbf{M}\in\mathbb{Z}_{2}^{W\times H}$ to create the input.
Given the index matrix $\mathcal{I}_{M}$, the binary mask $[M]_{w,h}$ is
obtained as
$[M]_{w,h}(\mathbf{s},\omega)=\begin{cases}0,&\text{if}\
(w,h)\in\mathcal{I}_{M}\\\ 1,&\text{otherwise}\end{cases}.$ (11)
Subsequently, denoting by $K$ the number of frequencies considered, the
network’s input is represented by the matrix
$\tilde{\mathbf{G}}_{\text{input}}\in\mathbb{C}^{W\times H\times 2K}$ obtained as:
$\tilde{\mathbf{G}}_{\text{input}}=[\tilde{\mathbf{G}}~{}~{}\mathbf{M}].$ (12)
In this context, the notation $[\cdot]$ denotes tensor concatenation along the
frequency axis.
### 3.3 Network Architecture
The CVNN proposed in this study adopts a U-Net-like [44] network architecture.
Specifically, the encoder is composed of four complex-valued convolutional
layers with filter counts of i) 128, ii) 256, iii) 512, and iv) 1024. The
decoder consists of five complex-valued convolutional layers with filter
counts of v) 512, vi) 256, vii) 128, viii) 80, and ix) 80. All encoder layers
employ a stride of $2\times 2$. In contrast, all decoder layers, except layer
ix), are preceded by a complex-valued upsampling operation with a factor of
$2\times 2$. Each complex-valued convolutional layer but layer ix) is followed
by CPReLU and complex-batch normalization [33], except layers i), viii), and
ix). The kernel size for all convolutional layers is $3\times 3$, except layer
ix), which has a kernel size of $1\times 1$. Four skip connections are
implemented by concatenating the inputs of layers v), vi), vii), and viii)
with the inputs of layers iv), iii), ii), and i), respectively. A schematic
representation of the proposed architecture is shown in Fig. 1.
### 3.4 Training Procedure
In the training phase, the matrix $\tilde{\mathbf{G}}_{\text{input}}$ serves
as the input for the CVNN $\mathcal{U}(\cdot)$. This network produces an
estimate $\hat{\mathbf{G}}$ of the ground truth complex-valued RTFs
$\mathbf{G}$.
The loss $\mathcal{L}:\mathbb{C}\rightarrow\mathbb{R}$ employed for
backpropagation is computed as the $\ell_{1}$ norm, expressed as follows:
$\mathcal{L}(\mathbf{G},\hat{\mathbf{G}})=\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}|[\mathbf{G}]_{w,h}-[\hat{\mathbf{G}}]_{w,h}|,$
(13)
where, for simplicity, the batch index, frequency dependence, and active
source $\mathbf{s}$ have been omitted.
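The loss of Equation (13) is simply the sum of the magnitudes of the complex reconstruction errors, e.g.:

```python
import torch

def complex_l1_loss(G, G_hat):
    """L1 loss of Equation (13): sum of the magnitudes of the complex errors."""
    return torch.sum(torch.abs(G - G_hat))
```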
## 4 Experimental Validation
Fig. 3: Magnitude of the sound field, obtained using the proposed method (b),
Lluis et al. [30] (c), and Ueno et al. [18] (d), using the $m=15$ microphone
configuration depicted in (e). The ground truth magnitude is shown in (a). Top
row: $[4.8~{}\mathrm{m}\times 5.4~{}\mathrm{m}\times 26.4~{}\mathrm{m}]$ room
with a $100~{}\mathrm{Hz}$ active source $\mathbf{s}$ positioned at
$[2.1484~{}\mathrm{m},2.0132~{}\mathrm{m},2.4~{}\mathrm{m}]^{T}$. Bottom row:
$[4.4~{}\mathrm{m}\times 9.3~{}\mathrm{m}\times 41.4~{}\mathrm{m}]$ room with
a $200~{}\mathrm{Hz}$ active source $\mathbf{s}$ positioned at
$[3.3481~{}\mathrm{m},5.1397~{}\mathrm{m},2.4~{}\mathrm{m}]^{T}$.
Fig. 4: Phase of the sound field obtained using the same configuration
considered in Fig. 3, using the proposed method (a) and Ueno et al. [18] (b);
the ground truth is shown in (c).
In this section, we present results from simulations designed to demonstrate
the effectiveness of employing complex-valued networks for RTF reconstruction.
In particular, we compare the proposed method with a data-driven approach
proposed by LLuis et. al. [30], as well as a signal processing kernel-based
interpolation technique proposed by Ueno et.al. [18].
### 4.1 Setup
To train the network-based models, we generated a dataset of $5000$ rooms,
split 75%/25% between the training and validation sets, respectively. The
dataset has been generated following the procedure proposed by Lluis et al.
[30]. Room dimensions were chosen based on the ITU-R BS.1116-3 standard for
listening rooms, and the reverberation time is fixed to $0.6~{}\mathrm{s}$. All
methods have been tested on a separate dataset of 30000 rooms.
For this study, a very small set of irregularly placed microphones has been
considered. During the training of the network, the number of microphones was
randomly selected from a set of $5$, $10$, $15$, $35$, and $55$ microphones, and
the selected microphones were irregularly distributed in the considered room.
The same was done for the architecture of Lluis et al. [30].
The proposed CVNN has been trained for $400$ epochs using a mini-batch of size
$16$ using the Adam optimizer [45] with the default configuration and a
learning rate of $0.001$. The model of Lluis et al. [30] was trained for $400$
epochs using a mini-batch size of $32$ and the Adam optimizer with a learning
rate of $0.0001$. The technique of Ueno et al. [18] was used with default
parameters and a regularization factor of $0.1$. We considered $K=40$
frequencies in the range between $30~{}\mathrm{Hz}$ and $300~{}\mathrm{Hz}$, as
proposed by Lluis et al. [30].
### 4.2 Evaluation Metric
The models have been evaluated using two Normalized Mean Squared Error (NMSE)
metrics. When considering only the magnitude of the RTFs, the metric is
defined as:
$\mathrm{NMSE}_{\mathrm{abs}}=10\log_{10}\frac{\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}||[\hat{\mathbf{G}}]_{w,h}|-|[\mathbf{G}]_{w,h}||^{2}}{\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}|[\mathbf{G}]_{w,h}|^{2}},$
(14)
The second NMSE, which considers the entire complex field, is defined as:
$\mathrm{NMSE}_{\mathrm{complex}}=10\log_{10}\frac{\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}|[\hat{\mathbf{G}}]_{w,h}-[\mathbf{G}]_{w,h}|^{2}}{\sum_{w=0}^{W-1}\sum_{h=0}^{H-1}|[\mathbf{G}]_{w,h}|^{2}}.$
(15)
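Both metrics can be computed directly from the ground-truth and estimated RTF matrices, as in the following sketch:

```python
import numpy as np

def nmse_abs(G_hat, G):
    """Magnitude-only NMSE in dB (Equation (14))."""
    num = np.sum((np.abs(G_hat) - np.abs(G)) ** 2)
    den = np.sum(np.abs(G) ** 2)
    return 10 * np.log10(num / den)

def nmse_complex(G_hat, G):
    """Complex-field NMSE in dB (Equation (15))."""
    num = np.sum(np.abs(G_hat - G) ** 2)
    den = np.sum(np.abs(G) ** 2)
    return 10 * np.log10(num / den)
```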
### 4.3 Results
Fig. 2(a) illustrates the performance in terms of $\mathrm{NMSE}_{\text{abs}}$,
while Fig. 2(b) showcases the outcomes for $\mathrm{NMSE}_{\text{complex}}$.
Fig. 2(a) shows that, when comparing the methods for each selected number of
microphones, the proposed method excels at lower frequencies, whereas Lluis et
al. [30] reaches better performance at higher frequencies. However, the
advantage at high frequencies decreases as the number of microphones increases.
For $m=55$, the proposed method outperforms the alternatives across almost the
entire frequency range. This behavior can be attributed both to the selected
loss function, which tends to underperform at higher frequencies, and to the
observation that the proposed model must extract more information than Lluis et
al. [30] while being evaluated on a smaller set of measurements in this case.
More specifically, the proposed method undergoes a more intricate optimization
process, wherein the entire complex sound field must be reconstructed, while
Lluis et al. [30] concentrates on the magnitude. When compared with Ueno et al.
[18], the proposed CVNN consistently performs better.
Fig. 2(b) reports the results related to the $\mathrm{NMSE}_{\text{complex}}$.
In this case, it is not possible to compare the proposed method with Lluis et
al. [30], since the latter method only reconstructs the magnitude of the
pressure field. From the figure, it is possible to observe that the error
increases with frequency, but it never reaches $0~{}\mathrm{dB}$. As expected,
also in this case, the error decreases as the number of microphones increases,
likely due to the greater amount of input information.
In Fig. 3, an example of the reconstructed magnitude is displayed, obtained
using the considered neural networks with a configuration of $15$ microphones
at a frequency of $100~{}\mathrm{Hz}$. As can be observed, while both methods
give a rough idea of the modal behavior present in the room at the considered
frequency, the proposed CVNN provides a more precise reconstruction.
Fig. 4 shows the reconstructed phase using the same setup. Through visual
examination, it is noticeable that the RTF obtained using the proposed method
is nearly indistinguishable from the ground truth.
## 5 Conclusion
This paper proposed a novel application of complex-valued neural networks to
the problem of room transfer function reconstruction, considering a
small set of irregularly placed microphones and thus overcoming the need
for an impractical number of microphones. We compared the magnitude
reconstruction performance of the proposed model with a state-of-the-art data-
driven technique, showing better performance at low frequencies while
underperforming at higher frequencies, although the gap becomes smaller as the
number of microphones increases. The proposed CVNN was also compared with a
kernel interpolation method, outperforming the latter both at low and high
frequencies. Differently from most learning-based approaches, the proposed
architecture is also able to reconstruct the phase of the RTFs. The results
obtained motivate deeper investigation into incorporating CVNNs in addressing
RTF reconstruction challenges. Future work aims to enhance the performance of
the proposed model by considering more suitable loss functions, including
direct handling of the phase. Additionally, future work will focus
on analyzing the behavior of the proposed model across frequency ranges beyond
the modal range.
## References
* [1] J. G. Tylka and E. Y. Choueiri, “Fundamentals of a parametric method for virtual navigation within an array of ambisonics microphones,” JAES, vol. 68, no. 3, pp. 120–137, 2020.
* [2] M. Cobos, J. Ahrens, K. Kowalczyk, and A. Politis, “An overview of machine learning and other data-based methods for spatial audio capture, processing, and reproduction,” Eurasip J. Audio Speech Music Process., vol. 2022, no. 1, pp. 1–21, 2022.
* [3] M. Pezzoli, F. Borra, F. Antonacci, S. Tubaro, and A. Sarti, “A parametric approach to virtual miking for sources of arbitrary directivity,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 28, pp. 2333–2348, 2020.
* [4] M. Pezzoli, F. Borra, F. Antonacci, A. Sarti, and S. Tubaro, “Reconstruction of the virtual microphone signal based on the distributed ray space transform,” in 26th Eur. Signal Process. Conf., pp. 1537–1541, IEEE, 2018.
* [5] L. McCormack, A. Politis, R. Gonzalez, T. Lokki, and V. Pulkki, “Parametric ambisonic encoding of arbitrary microphone arrays,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 30, pp. 2062–2075, 2022.
* [6] L. McCormack, A. Politis, T. McKenzie, C. Hold, and V. Pulkki, “Object-based six-degrees-of-freedom rendering of sound scenes captured with multiple ambisonic receivers,” JAES, vol. 70, no. 5, pp. 355–372, 2022.
* [7] V. Pulkki, S. Delikaris-Manias, and A. Politis, Parametric time-frequency domain spatial audio. Wiley Online Library, 2018.
* [8] S. Koyama and L. Daudet, “Sparse representation of a spatial sound field in a reverberant environment,” IEEE J. Sel. Top. Signal Process., vol. 13, no. 1, pp. 172–184, 2019.
* [9] N. Antonello, E. De Sena, M. Moonen, P. A. Naylor, and T. Van Waterschoot, “Room impulse response interpolation using a sparse spatio-temporal representation of the sound field,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 25, no. 10, pp. 1929–1941, 2017.
* [10] W. Jin and W. B. Kleijn, “Theory and design of multizone soundfield reproduction using sparse methods,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 23, no. 12, pp. 2343–2355, 2015.
* [11] M. Pezzoli, M. Cobos, F. Antonacci, and A. Sarti, “Sparsity-based sound field separation in the spherical harmonics domain,” in Int. Conf. Acoust. Speech Signal Process, IEEE, 2022.
* [12] O. Thiergart, G. Del Galdo, M. Taseska, and E. A. P. Habets, “Geometry-based spatial sound acquisition using distributed microphone arrays,” IEEE Trans. Acoust., Speech, Signal Process., vol. 21, no. 12, pp. 2583–2594, 2013.
* [13] A. Fahim, P. N. Samarasinghe, and T. D. Abhayapala, “Sound field separation in a mixed acoustic environment using a sparse array of higher order spherical microphones,” in 2017 HSCMA, pp. 151–155, IEEE, 2017.
* [14] D. L. Donoho, “Compressed sensing,” IEEE Trans. on information theory, vol. 52, no. 4, pp. 1289–1306, 2006.
* [15] I. Tsunokuni, K. Kurokawa, H. Matsuhashi, Y. Ikeda, and N. Osaka, “Spatial extrapolation of early room impulse responses in local area using sparse equivalent sources and image source method,” Applied Acoustics, vol. 179, p. 108027, 2021.
* [16] D. Caviedes-Nozal, N. A. Riis, F. M. Heuchel, J. Brunskog, P. Gerstoft, and E. Fernandez-Grande, “Gaussian processes for sound field reconstruction,” The Journal of the Acoustical Society of America, vol. 149, no. 2, pp. 1107–1119, 2021.
* [17] N. Ueno, S. Koyama, and H. Saruwatari, “Sound field recording using distributed microphones based on harmonic analysis of infinite order,” Signal Process. Letters, vol. 25, no. 1, pp. 135–139, 2017.
* [18] N. Ueno, S. Koyama, and H. Saruwatari, “Kernel ridge regression with constraint of helmholtz equation for sound field interpolation,” in Int. Workshop Acoust. Signal Enhanc., pp. 1–440, IEEE, 2018.
* [19] J. G. Ribeiro, N. Ueno, S. Koyama, and H. Saruwatari, “Region-to-region kernel interpolation of acoustic transfer functions constrained by physical properties,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 30, pp. 2944–2954, 2022.
* [20] J. G. Ribeiro, S. Koyama, and H. Saruwatari, “Kernel interpolation of acoustic transfer functions with adaptive kernel for directed and residual reverberations,” arXiv preprint arXiv:2303.03869, 2023.
* [21] M. Olivieri, R. Malvermi, M. Pezzoli, M. Zanoni, S. Gonzalez, F. Antonacci, and A. Sarti, “Audio information retrieval and musical acoustics,” IEEE Instrum. Meas. Mag, vol. 24, no. 7, pp. 10–20, 2021.
* [22] L. Comanducci, F. Borra, P. Bestagini, F. Antonacci, S. Tubaro, and A. Sarti, “Source localization using distributed microphones in reverberant environments based on deep learning and ray space transform,” IEEE/ACM Trans. Acoust., Speech, Signal Process., vol. 28, pp. 2238–2251, 2020.
* [23] M. Olivieri, M. Pezzoli, F. Antonacci, and A. Sarti, “A physics-informed neural network approach for nearfield acoustic holography,” Sensors, vol. 21, no. 23, 2021.
* [24] M. J. Bianco, P. Gerstoft, J. Traer, E. Ozanich, M. A. Roch, S. Gannot, and C.-A. Deledalle, “Machine learning in acoustics: Theory and applications,” JASA, vol. 146, no. 5, pp. 3590–3628, 2019.
* [25] M. Pezzoli, D. Perini, A. Bernardini, F. Borra, F. Antonacci, and A. Sarti, “Deep prior approach for room impulse response reconstruction,” Sensors, vol. 22, no. 7, p. 2710, 2022.
* [26] E. Fernandez-Grande, X. Karakonstantis, D. Caviedes-Nozal, and P. Gerstoft, “Generative models for sound field reconstruction,” JASA, vol. 153, no. 2, pp. 1179–1190, 2023.
* [27] M. Pezzoli, F. Antonacci, and A. Sarti, “Implicit neural representation with physics-informed neural networks for the reconstruction of the early part of room impulse responses,” in Forum Acusticum 2023, EAA, 2023.
* [28] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conf. on computer vision and pattern recognition, pp. 9446–9454, 2018.
* [29] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, “Image inpainting for irregular holes using partial convolutions,” in Proceedings of the European conference on computer vision (ECCV), pp. 85–100, 2018.
* [30] F. Lluís, P. Martínez-Nuevo, M. Bo Møller, and S. Ewan Shepstone, “Sound field reconstruction in rooms: Inpainting meets super-resolution,” JASA, vol. 148, no. 2, pp. 649–659, 2020.
* [31] J. A. Barrachina, C. Ren, C. Morisseau, G. Vieillard, and J.-P. Ovarlez, “Complex-valued vs. real-valued neural networks for classification perspectives: An example on non-circular data,” in ICASSP 2021-2021 IEEE Int. Conf. on Acoustic., Speech and Signal Process. (ICASSP), pp. 2990–2994, IEEE, 2021.
* [32] D. Griffin and J. Lim, “Signal estimation from modified short-time fourier transform,” IEEE Transactions on acoustics, speech, and signal processing, vol. 32(2), pp. 236–243, 1984.
* [33] C. Trabelsi, O. Bilaniuk, Y. Zhang, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal, “Deep complex networks,” in ICLR, 2018.
* [34] M. M. Halimeh, T. Haubner, A. Briegleb, A. Schmidt, and W. Kellermann, “Combining adaptive filtering and complex-valued deep postfiltering for acoustic echo cancellation,” in Int. Conf. Acoust. Speech Signal Process, pp. 121–125, IEEE, 2021.
* [35] S. Zhao, T. H. Nguyen, and B. Ma, “Monaural speech enhancement with complex convolutional block attention module and joint time frequency losses,” in Int. Conf. Acoust. Speech Signal Process., pp. 6648–6652, IEEE, 2021.
* [36] Y.-S. Lee, C.-Y. Wang, S.-F. Wang, J.-C. Wang, and C.-H. Wu, “Fully complex deep neural network for phase-incorporating monaural source separation,” in Int. Conf. Acoust. Speech Signal Process, pp. 281–285, IEEE, 2017.
* [37] K. N. Watcharasupat, T. N. T. Nguyen, W.-S. Gan, S. Zhao, and B. Ma, “End-to-end complex-valued multidilated convolutional neural network for joint acoustic echo cancellation and noise suppression,” in Int. Conf. Acoust. Speech Signal Process, pp. 656–660, IEEE, 2022.
* [38] L. Comanducci, F. Antonacci, and A. Sarti, “Synthesis of soundfields through irregular loudspeaker arrays based on convolutional neural networks,” arXiv preprint arXiv:2205.12872, 2022.
* [39] Y. Kuroe, M. Yoshid, and T. Mori, “On activation functions for complex-valued neural networks—existence of energy functions—,” in Int.Conf. on Artificial Neural Networks, pp. 985–992, Springer, 2003.
* [40] A. Pandey and D. Wang, “Exploring deep complex networks for complex spectrogram enhancement,” in ICASSP 2019-2019 IEEE Int.Conf. on Acoust., Speech and Signal Process. (ICASSP), pp. 6885–6889, IEEE, 2019.
* [41] W. Wirtinger, “Zur formalen theorie der funktionen von mehr komplexen veränderlichen,” Mathematische Annalen, vol. 97, no. 1, pp. 357–375, 1927.
* [42] H. Leung and S. Haykin, “The complex backpropagation algorithm,” IEEE Trans. on signal process., vol. 39, no. 9, pp. 2101–2104, 1991.
* [43] M. F. Amin, M. I. Amin, A. Y. H. Al-Nuaimi, and K. Murase, “Wirtinger calculus based gradient descent and levenberg-marquardt learning algorithms in complex-valued neural networks,” in Int.Conf. on Neural Information Processing, pp. 550–559, Springer, 2011.
* [44] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th Int. Conf., Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241, Springer, 2015.
* [45] D. Kingma and J. Ba, “Adam: A method for stochastic optimization, 3rd int. conf. on learning representations,” arXiv preprint arXiv:1412.6980, 2014.
# Exact Counts of $C_{4}$s in Blow-Up Graphs
S. Y. Chan, K. Morgan, J. Ugon
Deakin University, School of Information Technology, Faculty of Science Engineering & Built Environment, Geelong, Australia
###### Abstract
Cycles have many interesting properties and are widely studied in many
disciplines. In some areas, maximising the counts of $k$-cycles is of
particular interest. A natural candidate for the construction method used to
maximise the number of subgraphs $H$ in a graph $G$ is the _blow-up_ method.
Given a graph $G$ on $n$ vertices and a pattern graph $H$ on $k$ vertices, such
that $n\geq k$, the blow-up method is an iterative process of replacing
vertices in $G$ with copies of the $k$-vertex graph $H$. In this paper, we
apply the blow-up method to the family of cycles. We then present the exact
counts of cycles of length 4 obtained by applying this blow-up method to cycles and
generalised theta graphs.
## 1 Introduction
In graph theory, the family of cycles is considered to have very rich
structure. Cycles have many interesting properties and are of interest in
many disciplines. Some of the many applications of cycles include periodic
scheduling, the identification of weak interdependence in ecosystems and the
determination of chemical pathways in chemical networks [1, 9].
In network analysis, cycles are used to model and study communication
systems, to improve consensus network performance, to investigate faults and
reliability, and to study the topological features of such networks [16]. In
some cases, counting cycles is used as part of network analysis or message-
passing algorithms [8, 11, 15]. Some other interesting problems arise in
relation to counting $k$-cycles in graphs, for example, counting the number of
4-cycles in a tournament [10] and enumerating cycles of a given length [2].
Further, some problems look into maximising the number of cycles in graphs.
For instance, kidney exchange programs (KEPs) involve maximising the
number of (directed) cycles in order to maximise the expected number of transplants [3,
4, 13].
In extremal graph theory, a construction method used to maximise
counts of graphs is the _blow-up_ method. This method has been applied to
study graph spectra [12] and the maximum induced density of
graphs [6]. It was also applied as an approach to the Caccetta-Häggkvist
conjecture [5] and the Johansson conjecture [7].
Suppose we have graphs $G$ and $H$ on $n$ and $k$ vertices respectively, such
that $n\geq k$. The blow-up method is an iterative process of replacing each
vertex in $G$ with a copy of $H$. If _all_ vertices in $G$ have been
replaced with a copy of $H$, this is also known as a _balanced_ blow-up of
$G$.
In this paper, we are interested in determining the exact number of induced
$C_{4}$s in nested blow-up graphs. We find the exact counts of
$C_{4}$s in two different blow-up graphs: the nested blow-up of
$C_{4}$ and the nested blow-up of the theta graph $\Theta_{2,2,2}$. We give the
associated formulas with respect to the level of blow-up $N$, which is defined
in a later section.
## 2 Notations and Definitions
In this section, we present some basic definitions and notations that are used
in this paper. All graphs in this paper are simple.
A graph $G=(V,E)$ consists of the (finite) vertex set $V$ and edge set $E$,
which is a subset of all unordered pairs of vertices. The _order_ of a graph is
the number of vertices, whereas the _size_ of a graph is the number of edges.
Let $u,v\in V(G)$. We say that $u$ is _adjacent_ to $v$ if there exists an
edge $\\{u,v\\}\in E(G)$, and that the edge $\\{u,v\\}$ is incident to the
vertices $u$ and $v$.
###### Definition 1 (Graph composition).
Let $G$ and $H$ be simple graphs. The composition of graph $G$ and $H$,
denoted $G[H]$, is the graph obtained by replacing each vertex $v\in V(G)$ by
the graph $H_{v}\cong H$ and adding an edge between every vertex of $H_{u}$
and every vertex of $H_{v}$ where $\\{u,v\\}\in E(G).$
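As an illustration of Definition 1, the following short Python sketch (a minimal illustration, not part of the paper; it assumes the `networkx` package, whose lexicographic product coincides with the composition $G[H]$) constructs $G[H]$ explicitly:

```python
import itertools
import networkx as nx

def compose(G: nx.Graph, H: nx.Graph) -> nx.Graph:
    """Graph composition G[H] of Definition 1: each vertex of G is replaced
    by a copy of H, and all edges are added between adjacent copies."""
    GH = nx.Graph()
    GH.add_nodes_from(itertools.product(G.nodes, H.nodes))
    # edges inside each copy H_v
    for v in G.nodes:
        GH.add_edges_from(((v, a), (v, b)) for a, b in H.edges)
    # all edges between copies H_u and H_v whenever {u, v} is an edge of G
    for u, v in G.edges:
        GH.add_edges_from(((u, a), (v, b)) for a in H.nodes for b in H.nodes)
    return GH

K2 = nx.complete_graph(2)
print(compose(K2, K2).number_of_edges())                    # 6, as in Figure 1
print(nx.is_isomorphic(compose(K2, K2),
                       nx.lexicographic_product(K2, K2)))   # True: same notion
```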
###### Definition 2 (Nested blow-up graph).
If $H$ is isomorphic to $G$ and each vertex $v\in V(G)$ is labelled
$v\in\\{0,1,2,\ldots,n-1\\}$, then $G_{0}=G$, $G_{1}=G[G]$,$\ldots$,
$G_{N}=G[\underbrace{G[G[G[\ldots[G]]]]}_{N-1}]=G[G_{N-1}]$, where at each
level $N$, the nested blow-up graph $G_{N}$ can be obtained from $G_{N-1}$ by
replacing each vertex in $G$ by a copy of $G_{N-1}$ and replacing each edge by
all possible edges between adjacent copies of $G_{N-1}$. We refer to each
such copy of $G_{N-1}$ as a _blob_ in our proofs, in order to differentiate it from
other subgraphs of $G_{N}$ that may be isomorphic to $G_{N-1}$.
Figure 1: An example of the nested blow-up of $K_{2}$. In $K_{2}[K_{2}]$ there
are two blobs shown in blue and red, but 6 subgraphs that are isomorphic to
$K_{2}$. Figure 2: Example of a nested blow-up graph of $C_{4}$.
We say that blob $i$ and blob $j$ in $G_{N}$ are adjacent, denoted
$i\circledast j$, if $\\{v_{i},v_{j}\\}\in E(G)$, for $i\in[0,n),j\in[0,n)$.
The vertices of $G_{N}$ are $\\{0,1,\ldots,n|V(G_{N-1})|-1\\}$, where
$|V(G)|=n$, so that blob $i$ consists of the vertices $\\{u+i\times
n^{N}:u\in V(G_{N-1})\\}$. The edges are $\\{\\{u+i\times n^{N},v+i\times
n^{N}\\}:i\in[0,n),\\{u,v\\}\in E(G_{N-1})\\}\cup\\{\\{u+i\times
n^{N},v+j\times n^{N}\\}:i\in[0,n),j\in[0,n),u,v\in
V(G_{N-1}),i\circledast j\\}$. There exist copies of subgraphs isomorphic
to $G$ within each blob and also between adjacent blobs.
## 3 Exact Counts
In this section, we show the exact counts of $C_{4}$ for the nested blow-up
graphs of $G=C_{4}$ and $G=\Theta_{2,2,2}$. Here, we denote by $T_{N}$
the number of induced subgraphs isomorphic to $C_{4}$ and by $n_{N}$ the
number of vertices at level $N$ of the nested blow-up $G_{N}$
(i.e., $T_{0}=1$ and $n_{0}=4$), where $G_{0}=C_{4}$. We also denote the
number of non-edges of $G_{N}$ by $m_{N}^{c}$, where $m_{0}^{c}=2$. Recall
that a _blob_ is the copy of $G_{N-1}$ in $G_{N}$ arising from the blow-up of
a vertex $v$ of $G=G_{0}$. We show the following lemma:
###### Lemma 1.
The number of non-edges $m^{c}_{N}$ in the nested blow-up graph $G_{N}$ is
$m_{N}^{c}=\binom{4^{N+1}}{2}-4^{N+1}{\sum}_{i=0}^{N}4^{i}=\dfrac{4^{N+1}(4^{N+1}-1)}{6},N\in\mathbb{N}\cup\\{0\\}.$
###### Proof.
We prove the equation from Lemma 1 by induction.
Base case: For $N=0$, the number of non-edges is
$m^{c}_{0}=\binom{4}{2}-4{\sum}^{0}_{i=0}4^{0}=6-4=\dfrac{4^{1}(4^{1}-1)}{6}=2$,
which is precisely the number of non-edges in $C_{4}$.
Assume, as the induction hypothesis, that the formula holds for a particular
$N$, that is:
$\displaystyle m^{c}_{N}$
$\displaystyle=\binom{4^{N+1}}{2}-4^{N+1}{\sum}^{N}_{i=0}4^{i}=\dfrac{4^{N+1}(4^{N+1}-1)}{6}.$
The number of non-edges in $G_{N+1}$ is: $\displaystyle m^{c}_{N+1}$
$\displaystyle=\binom{4^{N+2}}{2}-4\times|E(G_{N})|-4\times\text{edges between
pairs of blobs}$ We obtain $|E(G_{N})|$ using $m_{N}^{c}$ such that
$|E(G_{N})|=\binom{4^{N+1}}{2}-m_{N}^{c}$. Thus, $\displaystyle m_{N+1}^{c}$
$\displaystyle=\binom{4^{N+2}}{2}-4\cdot
4^{N+1}{\sum}^{N}_{i=0}4^{i}-4\cdot(4^{N+1})^{2}$
$\displaystyle=\binom{4^{N+2}}{2}-4^{N+2}{\sum}_{i=0}^{N}4^{i}-4^{N+2}4^{N+1}$
$\displaystyle=\binom{4^{N+2}}{2}-4^{N+2}{\sum}_{i=0}^{N+1}4^{i}=\dfrac{4^{N+2}(4^{N+2}-1)}{6},$
which is the claimed formula with $N$ replaced by $N+1$.
Since both the base case and the inductive step hold, the formula for
$m^{c}_{N}$ follows for all $N$ by mathematical induction. ∎
Thus, we state the following theorem.
###### Theorem 1.
The nested blow-up graph $G_{N}$, $N\geq 0$, of a $C_{4}$ has precisely
$T_{N}=\frac{4^{N}}{5670}\times(10240\times 4^{3N}-5376\times
4^{2N}+840\times 4^{N}-34)$ induced subgraphs isomorphic to $C_{4}$.
###### Proof.
First, we will show that,
$T_{N}=\begin{cases}4(T_{N-1})+(4^{N})^{4}+4\times
m_{N-1}^{c}\times(4^{N})^{2}+4\times(m_{N-1}^{c})^{2},&N>0\\\
1,&N=0.\end{cases}$
Since $G_{0}=C_{4}$, we have $T_{0}=1$. We show that we can obtain $T_{N}$
from $T_{N-1}$ and prove each term from $T_{N}$ respectively.
Figure 3: Illustration of $G_{N}$ with four blobs
labelled $B_{1},B_{2},B_{3}$ and $B_{4}$.
Figure 4: [Left to right] Illustration of ways to obtain $C_{4}$ from either
2, 3, or all 4 blobs respectively. The selected vertices are shown in red.
We know that $G_{N}$ has four blobs (see Figure 3) isomorphic to $G_{N-1}$.
Each of these blobs contains $T_{N-1}$ induced copies of $C_{4}$, which contributes
$4T_{N-1}$ induced copies of $C_{4}$ and gives the first term $4T_{N-1}$.
For simplicity, in the remainder of the proof we use Figure 4 to show the
multiple ways of obtaining an induced $C_{4}$ from either 2, 3 or all 4 blobs.
Examples of how vertices can be chosen are shown in red.
We select a vertex from each of the 4 blobs that are isomorphic to $G_{N-1}$
in $G_{N}$ which results in an induced $C_{4}$. There are $4^{N}$ vertices in
each $G_{N-1}$ which gives $(4^{N})^{4}$ choices. This results in the second
term $(4^{N})^{4}$.
For the third term $4\times m^{c}_{N-1}\times(4^{N})^{2}$, we choose one blob
isomorphic to $G_{N-1}$ and a non-edge within it; there are $4$ choices for the
blob and $m^{c}_{N-1}$ choices for the non-edge. We then select one vertex from
each of the two blobs adjacent to the chosen blob. Since there are $4^{N}$
vertices in a blob, this gives $(4^{N})^{2}$ choices of vertices, and hence
$4\times m^{c}_{N-1}\times(4^{N})^{2}$ induced copies in total.
Lastly, we choose 2 adjacent blobs and a non-edge from each. There are 4
choices of adjacent blob pairs in the blow-up of $C_{4}$, resulting in
$4\times(m^{c}_{N-1})^{2}$ choices.
We now expand and simplify $T_{N}$ to find the recurrence relation.
Expanding the first few terms, we obtain:
$\displaystyle T_{0}={}$ $\displaystyle 1$ $\displaystyle T_{1}={}$
$\displaystyle 4(T_{0})+4^{4}+4\times m_{0}^{c}\times
4^{2}+4\times(m_{0}^{c})^{2}={}4+4^{4}+4^{3}\times
m_{0}^{c}+4\times(m_{0}^{c})^{2}$ $\displaystyle={}$ $\displaystyle
4^{1}\times\sum_{i=0}^{1}4^{3i}+{\sum}_{i=1}^{1}(4^{1+i+1}\times
m_{i-1}^{c})+{\sum}_{i=1}^{1}(4^{i}\times(m_{1-i}^{c})^{2})$ $\displaystyle
T_{2}={}$ $\displaystyle 4(T_{1})+4^{8}+4\times m_{1}^{c}\times
4^{4}+4\times(m_{1}^{c})^{2}$ $\displaystyle={}$ $\displaystyle
4\left(4^{1}\times\sum_{i=0}^{1}4^{3i}+{\sum}_{i=1}^{1}(4^{1+i+1}\times
m_{i-1}^{c})+{\sum}_{i=1}^{1}(4^{i}\times(m_{1-i}^{c})^{2})\right)+4^{8}+4\times
m_{1}^{c}\times 4^{4}+4\times(m_{1}^{c})^{2}$ $\displaystyle={}$
$\displaystyle\left(4^{2}\times\sum_{i=0}^{1}4^{3i}+4^{8}\right)+\left(4{\sum}_{i=1}^{1}(4^{1+i+1}\times
m_{i-1}^{c})+4\times m_{1}^{c}\times
4^{4}\right)+\left(4{\sum}_{i=1}^{1}(4^{i}\times(m_{1-i}^{c})^{2})+4\times(m_{1}^{c})^{2}\right)$
$\displaystyle={}$ $\displaystyle
4^{2}\times\sum_{i=0}^{2}4^{3i}+{\sum}_{i=1}^{2}(4^{2+i+1}\times
m_{i-1}^{c})+{\sum}_{i=1}^{2}(4^{i}\times(m_{2-i}^{c})^{2})$ $\displaystyle
T_{3}={}$ $\displaystyle 4(T_{2})+4^{12}+4\times m_{2}^{c}\times
4^{6}+4\times(m_{2}^{c})^{2}$ $\displaystyle={}$ $\displaystyle
4\left(4^{2}\times\sum_{i=0}^{2}4^{3i}+{\sum}_{i=1}^{2}(4^{2+i+1}\times
m_{i-1}^{c})+{\sum}_{i=1}^{2}(4^{i}\times(m_{2-i}^{c})^{2})\right)+4^{12}+4\times
m_{2}^{c}\times 4^{6}+4\times(m_{2}^{c})^{2}$ $\displaystyle={}$
$\displaystyle\left(4^{3}\times\sum_{i=0}^{2}4^{3i}+4^{12}\right)+\left(4{\sum}_{i=1}^{2}(4^{2+i+1}\times
m_{i-1}^{c})+4\times m_{2}^{c}\times
4^{6}\right)+\left(4{\sum}_{i=1}^{2}(4^{i}\times(m_{2-i}^{c})^{2})+4\times(m_{2}^{c})^{2}\right)$
$\displaystyle={}$ $\displaystyle
4^{3}\times\sum_{i=0}^{3}4^{3i}+{\sum}_{i=1}^{3}(4^{3+i+1}\times
m_{i-1}^{c})+{\sum}_{i=1}^{3}(4^{i}\times(m_{3-i}^{c})^{2})$
$\displaystyle\vdots$ $\displaystyle T_{N}={}$
$\displaystyle\underbrace{4^{N}\times\sum_{i=0}^{N}4^{3i}}_{Q_{N}}+\underbrace{{\sum}_{i=1}^{N}(4^{N+i+1}\times
m_{i-1}^{c})}_{R_{N}}+\underbrace{{\sum}_{i=1}^{N}(4^{i}\times(m_{N-i}^{c})^{2})}_{S_{N}}$
We simplify for each $Q_{N},R_{N}$ and $S_{N}$.
Simplifying using geometric sum,
$\displaystyle Q_{N}$
$\displaystyle=4^{N}\times{\sum}_{i=0}^{N}4^{3i}=4^{N}\times\dfrac{(4^{3(N+1)}-1)}{4^{3}-1}=\dfrac{4^{N}}{63}(64\times
4^{3N}-1).$ (1) Using Lemma 1, $\displaystyle R_{N}$
$\displaystyle={\sum}_{i=1}^{N}(4^{N+i+1}\times m_{i-1}^{c})$
$\displaystyle={\sum}_{i=1}^{N}\left(4^{N+i+1}\times\dfrac{4^{i}(4^{i}-1)}{6}\right)=4^{N+1}\times{\sum}_{i=1}^{N}\dfrac{4^{3i}-4^{2i}}{6}=\dfrac{4^{N+1}}{6}\times\left({\sum}_{i=1}^{N}4^{3i}-{\sum}_{i=1}^{N}4^{2i}\right).$
Again, simplify using geometric sum, $\displaystyle R_{N}$
$\displaystyle=\dfrac{4^{N+1}}{6}\left(\dfrac{4^{3}(4^{3N}-1)}{4^{3}-1}-\dfrac{4^{2}(4^{2N}-1)}{4^{2}-1}\right)$
$\displaystyle=\dfrac{4^{N+1}}{1890}(320\times(4^{3N}-1)-336\times(4^{2N}-1))$
$\displaystyle=\dfrac{4^{N+1}}{1890}(320\times 4^{3N}-336\times 4^{2N}+16).$
(2) Lastly, $\displaystyle S_{N}$
$\displaystyle={\sum}_{i=1}^{N}(4^{i}\times(m_{N-i}^{c})^{2})$ By Lemma 1,
$\displaystyle S_{N}$
$\displaystyle={\sum}_{i=1}^{N}\left(4^{i}\times\left(\frac{4^{N-i+1}(4^{N-i+1}-1)}{6}\right)^{2}\right)$
$\displaystyle={\sum}_{i=1}^{N}\left(4^{i}\times\frac{4^{2N-2i+2}\left(4^{N-i+1}-1\right)^{2}}{36}\right)$
$\displaystyle=\dfrac{4^{N+1}}{9}{\sum}_{i=1}^{N}4^{N-i}\left(4^{2N-2i+2}-2\times
4^{N-i+1}+1\right)$
$\displaystyle=\dfrac{4^{N+1}}{9}\left({\sum}_{i=1}^{N}4^{3N-3i+2}-2\times{\sum}_{i=1}^{N}4^{2N-2i+1}+{\sum}_{i=1}^{N}4^{N-i}\right)$
$\displaystyle=\dfrac{4^{N+1}}{9}\left(16\times{\sum}_{i=1}^{N}4^{3(N-i)}-8\times{\sum}_{i=1}^{N}4^{2(N-i)}+{\sum}_{i=1}^{N}4^{N-i}\right).$
Simplify using geometric sum, $\displaystyle S_{N}$
$\displaystyle=\dfrac{4^{N+1}}{9}\left(16\times\frac{(4^{3N}-1)}{63}-8\times\frac{(4^{2N}-1)}{15}+\frac{(4^{N}-1)}{3}\right)$
$\displaystyle=\frac{4^{N+1}}{2835}\left(80\times(4^{3N}-1)-168\times(4^{2N}-1)+105\times(4^{N}-1)\right)$
$\displaystyle=\frac{4^{N+1}}{2835}\left(80\times 4^{3N}-168\times
4^{2N}+105\times 4^{N}-17\right).$ (3)
Thus,
$\displaystyle T_{N}={}$ $\displaystyle\dfrac{4^{N}}{63}(64\times
4^{3N}-1)+\dfrac{4^{N+1}}{1890}(320\times 4^{3N}-336\times 4^{2N}+16)$
$\displaystyle+\frac{4^{N+1}}{2835}\left(80\times 4^{3N}-168\times
4^{2N}+105\times 4^{N}-17\right)$ $\displaystyle={}$
$\displaystyle\dfrac{4^{N}}{5670}(10240\times 4^{3N}-5376\times
4^{2N}+840\times 4^{N}-34).$
∎
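The closed form can be checked directly for small $N$ by brute force. The following Python sketch (an illustrative check, not part of the paper; it assumes the `networkx` package and uses the lexicographic product as the blow-up of Definition 1) counts induced $C_{4}$s and non-edges in $G_{0}$ and $G_{1}$ and compares them with Lemma 1 and Theorem 1; it should reproduce $T_{0}=1$, $T_{1}=404$ and $m^{c}_{0}=2$, $m^{c}_{1}=40$:

```python
import itertools
import networkx as nx

def count_induced_c4(G: nx.Graph) -> int:
    """Brute-force count of induced 4-cycles in G."""
    C4 = nx.cycle_graph(4)
    return sum(nx.is_isomorphic(G.subgraph(S), C4)
               for S in itertools.combinations(G.nodes, 4))

def T(N: int) -> int:
    """Closed form of Theorem 1."""
    return 4**N * (10240 * 4**(3 * N) - 5376 * 4**(2 * N) + 840 * 4**N - 34) // 5670

def m_c(N: int) -> int:
    """Closed form of Lemma 1 for the number of non-edges."""
    return 4**(N + 1) * (4**(N + 1) - 1) // 6

G = nx.cycle_graph(4)                   # G_0 = C_4
for N in range(2):                      # check G_0 and G_1 = C_4[C_4]
    n = G.number_of_nodes()
    non_edges = n * (n - 1) // 2 - G.number_of_edges()
    print(N, count_induced_c4(G), T(N), non_edges, m_c(N))
    G = nx.lexicographic_product(nx.cycle_graph(4), G)   # next level blow-up
```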
### 3.1 Counting $C_{4}$ in the theta graph $\Theta_{2,2,2}$
In this section, we construct the nested blow-up graph of the theta graph
$\Theta_{2,2,2}$ by replacing each vertex $v_{i}$ in $\Theta_{2,2,2}$ with a
$\Theta_{2,2,2}$.
Recall that the level-$N$ nested blow-up of a graph $G$ is denoted $G_{N}$. We
compute the number of induced $C_{4}$s in the nested blow-up of
$\Theta_{2,2,2}$. We denote by $T_{N}$ the number of induced $C_{4}$s and by
$n_{N}$ the number of vertices in $G_{N}$, respectively. Figure 5 shows the
theta graph $\Theta_{2,2,2}$.
Figure 5: Illustration of theta graph $\Theta_{2,2,2}$.
###### Lemma 2.
Let $G=\Theta_{2,2,2}$. The number of non-edges $m^{c}_{N}$ in each level of
the blow-up graph $G_{N}$ is given by
$m_{N}^{c}=4\cdot 5^{N}\sum^{N}_{i=0}5^{i}=5^{N}(5^{N+1}-1).$
###### Proof.
We prove the equation by induction.
Base case: When $N=0$, there are $m^{c}_{0}=5^{0}(5^{1}-1)=4$ non-edges which
is precisely the number of non-edges in a $\Theta_{2,2,2}$.
Assume, as the induction hypothesis, that the formula holds for a particular
$N$, that is,
$\displaystyle m_{N}^{c}$ $\displaystyle=4\cdot
5^{N}\sum^{N}_{i=0}5^{i}=5^{N}(5^{N+1}-1).$ The number of non-edges in
$G_{N+1}$ is $\displaystyle m_{N+1}^{c}$
$\displaystyle=\binom{5^{N+2}}{2}-|E(G_{N+1})|$ The number of edges in $G_{N}$
can be obtained recursively: $\Theta_{2,2,2}$ has 5 vertices and 6 edges, so
each of the 5 blobs contributes $|E(G_{N-1})|$ edges and each of the 6 adjacent
pairs of blobs contributes $(5^{N})^{2}$ edges, giving
$|E(G_{N})|=5|E(G_{N-1})|+6\cdot 5^{2N}$ and hence
$|E(G_{N})|=6\cdot 5^{N}{\sum}_{i=0}^{N}5^{i}$. Thus, $\displaystyle
m_{N+1}^{c}$ $\displaystyle=\binom{5^{N+2}}{2}-6\cdot
5^{N+1}{\sum}_{i=0}^{N+1}5^{i}$
$\displaystyle=\dfrac{5^{N+2}(5^{N+2}-1)}{2}-\dfrac{(6\cdot
5^{N+1})(5^{N+2}-1)}{4}$ $\displaystyle=5^{N+1}(5^{N+2}-1).$
Since both the base case and the inductive step hold, the formula for
$m^{c}_{N}$ follows for all $N$ by mathematical induction. ∎
We state the following theorem.
###### Theorem 2.
The nested blow-up graph of a $\Theta_{2,2,2}$ has precisely
$T_{N}=\frac{5^{N}}{1240}(6300\times 5^{3N}-2945\times 5^{2N}+372\times
5^{N}-7)$ induced subgraphs isomorphic to $C_{4}$.
###### Proof.
First, we will show that,
$T_{N}=\begin{cases}5\times(T_{N-1})+3\times(5^{N})^{4}+6\times(m^{c}_{N-1})^{2}+9\times
m^{c}_{N-1}\times(n_{N-1})^{2},&N>0\\\ 3,&N=0.\end{cases}$
Since $G_{0}=\Theta_{2,2,2}$, we have $T_{0}=3$. We show that we can obtain
$T_{N}$ from $T_{N-1}$ and prove each term from $T_{N}$ respectively.
Note that $G_{N}$ has five blobs, each isomorphic to $G_{N-1}$ and each
containing $T_{N-1}$ induced copies of $C_{4}$; these contribute $5\times
T_{N-1}$ induced copies of $C_{4}$ to $G_{N}$, giving the first term $5\times T_{N-1}$.
Figure 6: The $\Theta_{2,2,2}$ graph
containing 5 blobs, with each blob labelled respectively.
We know that $G_{N}$ has five blobs isomorphic to $G_{N-1}$. For simplicity of
the rest of the proof, we will refer to the blobs as labelled
$B_{1},B_{2},B_{3},B_{4}$ and $B_{5}$, see Figure 6.
We select 4 of these 5 blobs and one vertex from each selected blob. There are
five ways to select 4 blobs, but only 3 of these selections induce copies of
$C_{4}$. Since there are $5^{N}$ vertices in each copy of $G_{N-1}$, each of
these three combinations of blobs contributes $(5^{N})^{4}$ induced copies of
$C_{4}$ in $G_{N}$. This results in the second term $3\times(5^{N})^{4}$.
For the third term $6\times(m^{c}_{N-1})^{2}$, we choose a pair of adjacent
blobs and a non-edge from each. There are six pairs of adjacent blobs, which
results in the term $6\times(m^{c}_{N-1})^{2}$.
Finally, we pick a non-edge in a blob $B$ and two vertices, one from each of
two different blobs each adjacent to $B$. There are nine ways that we can
select these (refer to Figure 6), namely, $\\{B_{1},B_{2},B_{5}\\}$,
$\\{B_{1},B_{3},B_{5}\\}$, $\\{B_{1},B_{4},B_{5}\\}$,
$\\{B_{1},B_{2},B_{3}\\}$, $\\{B_{1},B_{2},B_{4}\\}$,
$\\{B_{1},B_{3},B_{4}\\}$, $\\{B_{2},B_{3},B_{5}\\}$,
$\\{B_{2},B_{4},B_{5}\\}$, and $\\{B_{3},B_{4},B_{5}\\}$.
Each combination has $m^{c}_{N-1}$ choices of a non-edge from one blob and
then a choice of a vertex from the $n_{N-1}$ vertices in each of the other two
blobs. This results in the term $9\times m^{c}_{N-1}\times(n_{N-1})^{2}$. We
now expand and simplify $T_{N}$ to find the recurrence relation for $T_{N}$.
Expanding the first few terms, we obtain:
$\displaystyle T_{0}={}$ $\displaystyle 3$ $\displaystyle T_{1}={}$
$\displaystyle 5(T_{0})+3\times 5^{4}+6\times(m_{0}^{c})^{2}+9\times
m_{0}^{c}\times 5^{2}={}3\cdot(5+5^{4})+6\times(m_{0}^{c})^{2}+9\times
5^{2}\times m_{0}^{c}$ $\displaystyle={}$ $\displaystyle 3\cdot
5^{1}\times{\sum}_{i=0}^{1}5^{3i}+6\times{\sum}_{i=1}^{1}(5^{i-1}\times(m_{1-i}^{c})^{2})+9\times{\sum}_{i=1}^{1}(5^{1+i}\times
m_{i-1}^{c})$ $\displaystyle T_{2}={}$ $\displaystyle 5(T_{1})+3\times
5^{8}+6\times(m_{1}^{c})^{2}+9\times m_{1}^{c}\times 5^{4}$ $\displaystyle={}$
$\displaystyle 5\left(3\cdot
5^{1}\times{\sum}_{i=0}^{1}5^{3i}+6\times{\sum}_{i=1}^{1}(5^{i-1}\times(m_{1-i}^{c})^{2})+9\times{\sum}_{i=1}^{1}(5^{1+i}\times
m_{i-1}^{c})\right)+3\times 5^{8}+6\times(m_{1}^{c})^{2}+9\times
m_{1}^{c}\times 5^{4}$ $\displaystyle={}$ $\displaystyle\left(3\cdot
5^{2}{\sum}_{i=0}^{1}5^{3i}+3\cdot 5^{8}\right)+\left(6\cdot
5{\sum}_{i=1}^{1}(5^{i-1}\times(m_{1-i}^{c})^{2})+6\cdot(m_{1}^{c})^{2}\right)$
$\displaystyle+\left(9\cdot 5{\sum}_{i=1}^{1}(5^{1+i}\times
m_{i-1}^{c})+9\cdot m_{1}^{c}\cdot 5^{4})\right)$ $\displaystyle={}$
$\displaystyle 3\cdot
5^{2}\times{\sum}_{i=0}^{2}5^{3i}+6\times{\sum}_{i=1}^{2}(5^{i-1}\times(m_{2-i}^{c})^{2})+9\times{\sum}_{i=1}^{2}(5^{2+i}\times
m_{i-1}^{c})$ $\displaystyle T_{3}={}$ $\displaystyle 5(T_{2})+3\times
5^{12}+6\times(m_{2}^{c})^{2}+9\times m_{2}^{c}\times 5^{6}$
$\displaystyle={}$ $\displaystyle 5\left(3\cdot
5^{2}\times{\sum}_{i=0}^{2}5^{3i}+6\times{\sum}_{i=1}^{2}(5^{i-1}\times(m_{2-i}^{c})^{2})+9\times{\sum}_{i=1}^{2}(5^{2+i}\times
m_{i-1}^{c})\right)+3\times 5^{12}+6\times(m_{2}^{c})^{2}+9\times
m_{2}^{c}\times 5^{6}$ $\displaystyle={}$ $\displaystyle\left(3\cdot
5^{3}{\sum}_{i=0}^{2}5^{3i}+3\cdot 5^{12}\right)+\left(6\cdot
5{\sum}_{i=1}^{2}(5^{i-1}\times(m_{2-i}^{c})^{2})+6\cdot(m_{2}^{c})^{2}\right)$
$\displaystyle+\left(9\cdot 5{\sum}_{i=1}^{2}(5^{2+i}\times
m_{i-1}^{c})+9\cdot m_{2}^{c}\cdot 5^{6}\right)$ $\displaystyle={}$
$\displaystyle 3\cdot
5^{3}\times{\sum}_{i=0}^{3}5^{3i}+6\times{\sum}_{i=1}^{3}(5^{i-1}\times(m_{3-i}^{c})^{2})+9\times{\sum}_{i=1}^{3}(5^{3+i}\times
m_{i-1}^{c})$ $\displaystyle\vdots$ $\displaystyle T_{N}={}$
$\displaystyle\underbrace{3\cdot
5^{N}\times{\sum}_{i=0}^{N}5^{3i}}_{Q_{N}}+\underbrace{6\times{\sum}_{i=1}^{N}(5^{i-1}\times(m_{N-i}^{c})^{2})}_{R_{N}}+\underbrace{9\times{\sum}_{i=1}^{N}(5^{N+i}\times
m_{i-1}^{c})}_{S_{N}}.$
We simplify for each $Q_{N},R_{N}$ and $S_{N}$. Simplifying using geometric
sum,
$\displaystyle Q_{N}$ $\displaystyle=3\cdot
5^{N}\times{\sum}_{i=0}^{N}5^{3i}=3\cdot
5^{N}\times\dfrac{5^{3(N+1)}-1}{5^{3}-1}=\dfrac{3\cdot
5^{N}}{124}(125\times 5^{3N}-1).$ (4) Using Lemma 2, $\displaystyle R_{N}$
$\displaystyle=6\times{\sum}_{i=1}^{N}(5^{i-1}\times(m_{N-i}^{c})^{2})$
$\displaystyle=6\times{\sum}_{i=1}^{N}(5^{i-1}\times(5^{N-i}(5^{N-i+1}-1))^{2})$
$\displaystyle=6\cdot
5^{N}\left(5\cdot{\sum}_{i=1}^{N}5^{3(N-i)}-2\cdot{\sum}_{i=1}^{N}5^{2(N-i)}+\frac{1}{5}\cdot{\sum}_{i=1}^{N}5^{N-i}\right).$
Again, simplify using geometric sum, $\displaystyle R_{N}$
$\displaystyle=6\cdot
5^{N}\left(5\times\frac{5^{3N}-1}{5^{3}-1}-2\times\frac{5^{2N}-1}{5^{2}-1}+\frac{1}{5}\times\frac{5^{N}-1}{5-1}\right)$
$\displaystyle=\dfrac{5^{N}}{620}(150\times 5^{3N}-310\times 5^{2N}+186\times
5^{N}-26).$ (5) Lastly, $\displaystyle S_{N}$
$\displaystyle=9\times{\sum}_{i=1}^{N}(5^{N+i}\times m_{i-1}^{c}).$ By Lemma
2, $\displaystyle S_{N}$
$\displaystyle=9\times{\sum}_{i=1}^{N}(5^{N+i}\times(5^{i-1}(5^{i}-1)))=9\cdot
5^{N}\times{\sum}_{i=1}^{N}(5^{3i-1}-5^{2i-1})=\dfrac{9\cdot
5^{N}}{5}\times\left({\sum}_{i=1}^{N}5^{3i}-{\sum}_{i=1}^{N}5^{2i}\right).$
Simplify using geometric sum, $\displaystyle S_{N}$
$\displaystyle=\dfrac{9\cdot
5^{N}}{5}\times\left(\dfrac{5^{3}(5^{3N}-1)}{5^{3}-1}-\dfrac{5^{2}(5^{2N}-1)}{5^{2}-1}\right)=\dfrac{3\cdot
5^{N}}{1240}(750\times 5^{3N}-775\times 5^{2N}+25).$ (6)
Thus,
$\displaystyle T_{N}={}$ $\displaystyle\dfrac{3\cdot 5^{N}}{124}(125\times
5^{3N}-1)+\dfrac{5^{N}}{620}(150\times 5^{3N}-310\times 5^{2N}+186\times
5^{N}-26)$ $\displaystyle+\dfrac{3\cdot 5^{N}}{1240}(750\times
5^{3N}-775\times 5^{2N}+25)$ $\displaystyle={}$
$\displaystyle\frac{5^{N}}{1240}(6300\times 5^{3N}-2945\times 5^{2N}+372\times
5^{N}-7).$ (7)
∎
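As with Theorem 1, the closed form can be checked by brute force for small $N$. The following Python sketch (an illustrative check, not part of the paper; it assumes `networkx`) builds $\Theta_{2,2,2}$, forms its level-1 blow-up via the lexicographic product, and counts induced $C_{4}$s; it should reproduce $T_{0}=3$ and $T_{1}=2886$:

```python
import itertools
import networkx as nx

def count_induced_c4(G: nx.Graph) -> int:
    C4 = nx.cycle_graph(4)
    return sum(nx.is_isomorphic(G.subgraph(S), C4)
               for S in itertools.combinations(G.nodes, 4))

# Theta_{2,2,2}: two hub vertices joined by three internally disjoint paths of length 2
theta = nx.Graph([(0, 2), (2, 1), (0, 3), (3, 1), (0, 4), (4, 1)])

def T(N: int) -> int:
    """Closed form of Theorem 2."""
    return 5**N * (6300 * 5**(3 * N) - 2945 * 5**(2 * N) + 372 * 5**N - 7) // 1240

G = theta
for N in range(2):                                   # G_0 and G_1
    print(N, count_induced_c4(G), T(N))              # counts should match
    G = nx.lexicographic_product(theta, G)           # next level blow-up
```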
## 4 Conclusion
In this paper, we gave exact counts of $C_{4}$s in two different graph
structures: (i) the nested blow-up graph of $C_{4}$ and (ii) the nested blow-up
graph of the theta graph $\Theta_{2,2,2}$. Previously, only bounds were known
for blow-ups of graphs [14]. We improved these bounds by giving exact counts
for such graphs. More generally, a similar approach can be adapted to construct
the corresponding equations for blow-up graphs of higher order $k$, to find the
exact counts of cycles of length $k$. A future direction of this work could be
finding a generalised formula for any type of blow-up graph with _some_ cyclic
property.
## References
* Adriaens et al. [2019] F. Adriaens, C. Aslay, T. De Bie, A. Gionis, and J. Lijffijt. Discovering interesting cycles in directed graphs. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ , page 1191–1200. Association for Computing Machinery, 2019.
* Alon et al. [1997] N. Alon, R. Yuster, and U. Zwick. Finding and counting given length cycles. _Algorithmica_ , 17:209–223, 1997.
* Alvelos et al. [2016] F. Alvelos, X. Klimentova, A. Rais, and A. Viana. Maximizing expected number of transplants in kidney exchange programs. _Electron. Notes Discrete Math._ , 52:269–276, 2016.
* Biró et al. [2009] P. Biró, D.F. Manlove, and R. Rizzi. Maximum weight cycle packing in directed graphs, with application to kidney exchange programs. _Discrete Math. Algorithms Appl._ , 1(4):499–517, 2009.
* Bondy [1997] J.A. Bondy. Counting subgraphs a new approach to the caccetta-häggkvist conjecture. _Discrete Math._ , 165-166:71–80, 1997.
* Hatami et al. [2014] H. Hatami, J. Hirst, and S. Norine. The inducibility of blow-up graphs. _J. Combin. Theory Ser. B_ , 109:196–212, 2014.
* Johansson [2000] R. Johansson. Triangle-factors in a balanced blown-up triangle. _Discrete Math._ , 211(1-3):249–254, 2000.
* Karimi and Banihashemi [2013] M. Karimi and A.H. Banihashemi. Message-passing algorithms for counting short cycles in a graph. _IEEE Trans. Commun._ , 61(2):485–495, 2013.
* Kavitha et al. [2009] T. Kavitha, C. Liebchen, K. Mehlhorn, D. Michail, R. Rizzi, T. Ueckerdt, and K.A. Zweig. Cycle bases in graphs characterization, algorithms, complexity, and applications. _Comput. Sci. Rev._ , 3(4):199–243, 2009.
* Linial and Morgenstern [2016] N. Linial and A. Morgenstern. On the number of 4-cycles in a tournament. _J. Graph Theory_ , 83(3):266–276, 2016.
* Liu and Wang [2006] H. Liu and J. Wang. A new way to enumerate cycles in graph. In _Advanced Int’l Conference on Telecommunications and Int’l Conference on Internet and Web Applications and Services (AICT-ICIW’06)_ , pages 57–57, 2006.
* Oliveira et al. [2014] C.S. Oliveira, L. Silva de Lima, and V. Nikiforov. Spectra of blow-up graphs. _arXiv_ , 2014.
* Pedroso [2014] J.P. Pedroso. Maximizing expectation on vertex-disjoint cycle packing. In _Computational Science and Its Applications (ICCSA 2014)_ , page 32–46, 2014.
* Pippenger and Golumbic [1975] N. Pippenger and M.C. Golumbic. The inducibility of graphs. _J. Combin. Theory Ser. B_ , 19(3):189–283, 1975.
* Safar et al. [2011] M.H. Safar, I.Y. Sorkhoh, H.M. Faraht, and K.A. Mahdi. On maximizing the entropy of complex networks. _Procedia Computer Science_ , 5:480–488, 2011.
* Zelazo et al. [2013] D. Zelazo, S. Schuler, and F. Allgöwer. Performance and design of cycles in consensus networks. _Systems & Control Letters_, 62(1):85–96, 2013.
# Open charm phenomenology with a multi-stage approach to relativistic heavy-
ion collisions
Mayank Singh (School of Physics & Astronomy, University of Minnesota, Minneapolis, MN 55455, USA),
Manu Kurian (Department of Physics, McGill University, 3600 University Street, Montreal, QC, H3A 2T8, Canada; RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973, USA),
Sangyong Jeon (Department of Physics, McGill University, 3600 University Street, Montreal, QC, H3A 2T8, Canada),
Charles Gale (Department of Physics, McGill University, 3600 University Street, Montreal, QC, H3A 2T8, Canada)
###### Abstract
We study open charm flavor observables in Pb+Pb collisions at
$\sqrt{s_{NN}}=2.76$ TeV within the MARTINI framework. The space-time
expansion of the quark-gluon plasma is described using the hydrodynamical
approach MUSIC with IP-Glasma initial conditions. The model parameters,
including the viscous coefficients, were obtained from a recent Bayesian
model-to-data comparison. We evolve heavy quarks in this background using
Langevin dynamics while incorporating their collisional and radiative
processes in the medium. The sensitivity of charm observables to the IP-Glasma
initial state, bulk evolution, and centrality of the collision is studied. We
find that the elliptic flow of open charm flavor has a strong dependence on
the fluctuating initial conditions in addition to the strength of the
interaction of heavy quarks with the medium constituents. Within this
framework, the nuclear suppression factor and elliptic flow of D-mesons act as
efficient probes to study the initial stages of heavy-ion collisions,
transport coefficients associated with the QGP medium, as well as heavy quark
interactions.
_Introduction_ \- Heavy-ion collision experiments at the Relativistic Heavy
Ion Collider (RHIC) and Large Hadron Collider (LHC) provide an access to
strongly interacting matter: the Quark-Gluon Plasma (QGP). Relativistic
viscous hydrodynamics serves as an important framework to study the dynamical
evolution of the QGPGale _et al._ (2013). The transport properties of QGP are
extracted from model to light flavor hadron data comparisons, and significant
effort has been devoted in this directionRomatschke and Romatschke (2007);
Heinz and Snellings (2013); Ryu _et al._ (2015); Jaiswal and Roy (2016).
Recently, Bayesian analysis has been used to do a systematic extraction of
shear and bulk viscosities of the medium and their uncertainty quantification
Bernhard _et al._ (2019); Everett _et al._ (2021); Nijs _et al._ (2021);
Heffernan _et al._ (2023).
Heavy flavor quarks provide additional probes to study the properties of QGP
van Hees _et al._ (2006); Das _et al._ (2009); He _et al._ (2013a);
Andronic _et al._ (2016); Aarts _et al._ (2017); Song _et al._ (2020a);
Mustafa (2005); Sun _et al._ (2019). They are largely created in the initial
stages of the collision, and pass through the QGP while interacting with the
light flavor quarks and gluons. Thermal production of heavy quarks in the QGP
medium is expected to be negligible because of their larger mass compared to
the temperature scale of the medium Dong and Greco (2019). Generally, heavy
flavor particles are not treated as part of the medium as the thermalization
time is longer than the QGP lifetime. Their dynamics can be described as
Brownian motion in the medium and can be studied within the Langevin or the
Fokker-Planck frameworks Golam Mustafa _et al._ (1998); Moore and Teaney
(2005); Uphoff _et al._ (2011); Das _et al._ (2014); Cao _et al._ (2016);
Li _et al._ (2019). Nuclear suppression factor $R_{AA}$ and elliptic flow
$v_{2}$ are the key observables associated with the heavy flavor particles at
the RHIC and LHC energies. Several studies have been done to estimate $R_{AA}$
and $v_{2}$ by modeling the Brownian motion of the heavy quarks in the medium
Akamatsu _et al._ (2009); He _et al._ (2012); Cao _et al._ (2013); Alberico
_et al._ (2013); Song _et al._ (2015); Li _et al._ (2018); van Hees _et
al._ (2008) and to extract the momentum and temperature behavior of heavy
quark transport coefficients from these observables in heavy-ion collision
experiments Xu _et al._ (2018); Scardina _et al._ (2017); Beraudo _et al._
(2018); Cao _et al._ (2019). Notably, the inclusion of inelastic interactions
of heavy quarks in the medium on top of the elastic collisions reduces the gap
between experimental and theoretical observations Wicks _et al._ (2007).
The evolution history of the collision event is essential to model the heavy
quark dynamics in the expanding medium. Heavy quark production and transport
have been widely studied in the static limit and in expanding fireball models.
Some efforts have been made to study the heavy flavor dynamics in evolving
medium within 1+1 Bjorken hydrodynamics as well as higher dimension
hydrodynamics Young _et al._ (2012); He _et al._ (2013b); Sarkar _et al._
(2018); Thakur _et al._ (2020); Prakash _et al._ (2021). Most analyses
utilized smooth initial conditions for the hydrodynamical evolution of the QGP
medium through which heavy quarks traverse. Advances in hydrodynamical models
to describe medium expansion seem to have a significant impact on the heavy
quark observables Gossiaux _et al._ (2011); Song _et al._ (2020b); Fan _et
al._ (2023). It is also known that the initial state fluctuation has a
significant influence on the light flavor jet energy loss Rodriguez _et al._
(2010); Zhang _et al._ (2013).
In this work, we employ the state-of-the-art hydrodynamical model of QGP to
study the charm observables. We consider the evolution history of a Pb+Pb
collision event at 2.76 TeV energy with the IP-Glasma initial state Schenke
_et al._ (2012a, b); McDonald _et al._ (2017). IP-Glasma is a very successful
dynamical model which describes a variety of observables in heavy-ion
collisions. The initial fluctuating color configuration in the heavy-ion
nuclei that are approaching at high velocity can be determined within the IP-Sat
approach Bartels _et al._ (2002); Kowalski and Teaney (2003) combined with
the Yang-Mills equations Krasnitz and Venugopalan (1999). Its evolution also
follows from solving the Classical Yang-Mills equations. The Glasma
distributions, which are obtained event-by-event, act as the input for the
hydrodynamical evolution. We used the recently updated shear and bulk viscous
coefficients obtained from the Bayesian model-to-data analysis Heffernan _et
al._ (2023).
The charm dynamics are encoded in the drag and diffusion coefficients in the
Langevin equation. The Langevin equation is solved within MARTINI Schenke _et
al._ (2009); Young _et al._ (2012). We studied open charm observables within
three different setups of drag and diffusion coefficients. This analysis is an
up-to-date study of the heavy flavor nuclear suppression factor and elliptic
flow using the recent developments in the hydrodynamical description of QGP
and drag and diffusion coefficients of the charm quarks.
_A hybrid model for heavy quarks_ \- Modelling of heavy flavor evolution in
collisions can be divided into three distinct stages: heavy quark initial
production, its evolution in the hydrodynamized expanding medium, and the
hadronization process. In the current analysis, we employ PYTHIA8.2 Sjöstrand
_et al._ (2015) for perturbative production of charm quarks by sampling
6-dimensional momentum distributions of $Q\bar{Q}$ systems. We allow the
gluons to split into $c\bar{c}$ pairs, but the medium-induced modification of
the gluon splitting rate is not accounted for. As the nucleons are bound in a
heavy nucleus, the parton distribution functions (nPDFs) will be modified. To
take into account the nuclear shadowing effect, EPS09 nuclear parton
distribution functions are used Eskola _et al._ (2009). Isospin effects are
accounted for by sampling $p+p$, $p+n$ and $n+n$ collisions. A finite
thermalization time $\tau_{0}$ in the heavy-ion collision is considered, and
the evolution of charm quarks during this time is governed by the equation of
motion with the zero-temperature Cornell potential Young _et al._ (2012).
The QGP medium is initialized using the IP-Glasma model Schenke _et al._
(2012a, b); McDonald _et al._ (2017) and evolved using the viscous
hydrodynamical approach MUSIC Schenke _et al._ (2010, 2011). The lattice QCD
equation of state (EoS) from the hotQCD collaboration Bazavov _et al._ (2014)
at high temperature is smoothly matched to the hadron resonance gas EoS at low
temperatures and is incorporated in the framework. The first thorough Bayesian
model-to-data comparison of relativistic heavy-ion collision measurements
using a hybrid model combining viscous expansion (MUSIC), evolution to
particlization (iS3D) McNelis _et al._ (2021), and particle transport (SMASH)
Weil _et al._ (2016) with IP-Glasma initial conditions has been recently
presented for four different model choices in Heffernan _et al._ (2023). We
chose the model with Grad’s 14-moment viscous correction with constant shear
viscosity to entropy density ratio and a temperature-dependent bulk viscosity
profile. In the present analysis, we have used the maximum a posteriori
estimates of all model parameters, including the viscous coefficients, from
this study.
The dynamics of heavy quarks in the QGP medium depend upon its radiative and
elastic interactions with the constituents in the medium. The strength of
interactions of heavy quarks in the medium can be quantified in terms of drag
and diffusion coefficients. In the local rest frame of the medium, the heavy
quark motion can be studied numerically using the discrete version of the
Langevin equations Moore and Teaney (2005); Das _et al._ (2014),
$\displaystyle dp_{i}=-A_{i}\,dt+C_{ij}\rho_{j}\sqrt{dt},$ (1)
where $dp_{i}$ denotes the change in momentum in time interval $dt$. The drag
force $A_{i}$ and covariance matrix $C_{ij}$ are defined as follows,
$\displaystyle A_{i}=p_{i}A(|{\bf p}|^{2},T),$ (2) $\displaystyle
C_{ij}=\sqrt{2B_{0}}\left(\delta_{ij}-\frac{p_{i}p_{j}}{|{\bf
p}|^{2}}\right)+\sqrt{2B_{1}}\frac{p_{i}p_{j}}{|{\bf p}|^{2}},$ (3)
where $A$ denotes the drag coefficient and the quantities $B_{0}$ and $B_{1}$
represent the transverse and longitudinal momentum diffusion coefficients,
respectively. The drag force describes the heavy quark average momentum
transfer due to the interactions, whereas the matrix $C_{ij}$ quantifies the
stochastic force by using Gaussian-normal distributed random variable
$\rho_{j}$ Das _et al._ (2014). The dependence of drag and diffusion on heavy
quark momentum and temperature of the medium can be studied within
relativistic transport theory. The Langevin dynamics of the heavy quarks is
coupled with the expanding QGP medium and solved within MARTINI as follows:
* $-$
Find the fluid four-velocity and temperature at the space-time location of the
charm quark from MUSIC
* $-$
Boost the charm momentum to the fluid local rest frame and evolve the charm
three momenta to the next time step using Langevin equations
* $-$
Boost the charm momentum back to the lab frame and update the charm position
after the time step
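To make the update rule of eq. (1) concrete, the following Python sketch (a schematic illustration only, assuming coefficients supplied by hypothetical functions `drag_A`, `diff_B0`, `diff_B1`; it is not the MARTINI implementation) performs one Langevin step in the fluid local rest frame:

```python
import numpy as np

def langevin_step(p, dt, T, drag_A, diff_B0, diff_B1, rng):
    """One discrete Langevin update, eq. (1), in the fluid local rest frame.

    p        : heavy quark 3-momentum (GeV)
    dt       : time step (unit conversions assumed handled elsewhere)
    T        : local temperature (GeV)
    drag_A   : A(|p|^2, T), drag coefficient
    diff_B0  : B0(|p|^2, T), transverse momentum diffusion coefficient
    diff_B1  : B1(|p|^2, T), longitudinal momentum diffusion coefficient
    """
    p = np.asarray(p, dtype=float)
    p2 = np.dot(p, p)
    phat = p / np.sqrt(p2)
    # projectors longitudinal and transverse to the quark momentum
    P_long = np.outer(phat, phat)
    P_trans = np.eye(3) - P_long
    # covariance matrix C_ij of eq. (3)
    C = np.sqrt(2.0 * diff_B0(p2, T)) * P_trans + np.sqrt(2.0 * diff_B1(p2, T)) * P_long
    rho = rng.standard_normal(3)          # Gaussian-normal random kicks rho_j
    # dp_i = -A_i dt + C_ij rho_j sqrt(dt), with A_i = p_i A(|p|^2, T), eq. (2)
    return p - p * drag_A(p2, T) * dt + C @ rho * np.sqrt(dt)

# toy constant coefficients, purely for illustration
rng = np.random.default_rng(1)
p_new = langevin_step([2.0, 0.0, 0.0], dt=0.01, T=0.3,
                      drag_A=lambda p2, T: 0.5,
                      diff_B0=lambda p2, T: 0.05,
                      diff_B1=lambda p2, T: 0.05,
                      rng=rng)
print(p_new)
```

In the actual multi-stage workflow, this update is preceded and followed by the boosts between the lab frame and the fluid local rest frame listed above.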
The heavy quark transport coefficients are the key input parameters that
quantify the interaction strength of heavy quarks with the medium. The
transport coefficients can be derived in perturbative QCD by including
scattering and radiative processes Svetitsky (1988); Mustafa (2005). The
specific interactions are encoded in the matrix elements. It has been seen
that using Debye mass as infrared (IR) regulator ($\mu_{IR}$) in the gluon
propagator for t-channel interaction and a fixed coupling constant, pQCD
matrix elements are not able to describe the experimental data associated with
the heavy flavor particles Rapp and van Hees (2010); Beraudo _et al._ (2018).
As these parameters are the sources of uncertainty in the estimation of heavy
quark transport coefficients, the IR regulator and coupling constant are
determined in the analysis by physical arguments as described in Gossiaux and
Aichelin (2008). In the present analysis, the conventional choice of IR
regulator, the Debye mass, is replaced with a realistic hard thermal loop
(HTL) parameterization of the IR regulator. Further, an effective coupling
constant ($\alpha_{\text{eff}}$) that embeds non-perturbative dynamics is
employed in the study. The behaviour of $\alpha_{\text{eff}}$ is obtained from
the analysis of $e^{+}e^{-}$ annihilation Mattingly and Stevenson (1994) and
decay of $\tau$ leptons Brodsky _et al._ (2003), and has the following form
Gossiaux and Aichelin (2008),
$\alpha_{\text{eff}}(R^{2})=\dfrac{4\pi}{\beta_{0}}\Bigg{\\{}\begin{array}[]{lr}L_{-}^{-1},&R^{2}<0\\\
\frac{1}{2}-\pi^{-1}\text{arctan}(L_{+}/\pi),&R^{2}>0\end{array}$ (4)
where $R$ is the relevant energy scale, $\beta_{0}=11-\frac{2}{3}N_{f}$ with
$N_{f}$ as the number of flavors and $L_{\pm}=\ln{(\pm R^{2}/\Lambda^{2})}$
with QCD parameter chosen as $\Lambda=0.2$ GeV. With these parameterizations,
the propagator with bare coupling $\alpha_{s}$ is modified for $t-$channel
gluon exchange processes (heavy quark-thermal medium interaction) as,
$\displaystyle\frac{\alpha}{t}\rightarrow\frac{\alpha_{\text{eff}}\,(t)}{t-\mu^{2}_{IR}(t,T)},$
(5)
where $t$ is the Mandelstam variable and
$\mu^{2}_{IR}(t,T)=\kappa\,4\pi\Big{(}1+\frac{N_{f}}{6}\Big{)}\alpha_{\text{eff}}\,(t)T^{2}$
with $\kappa=0.2$. An IR regulator is not required for the other channels, and
the coupling constant is fixed as
$\alpha\rightarrow\alpha_{\text{eff}}\,(s-m^{2}_{HQ})$ and
$\alpha\rightarrow\alpha_{\text{eff}}\,(u-m^{2}_{HQ})$ for the $s$- and $u$-
channels, respectively, such that $s=m^{2}_{HQ}$ and $u=m^{2}_{HQ}$ denote
maximal softness in those channels Gossiaux and Aichelin (2008). Here, $m_{HQ}$
is the mass of the heavy quark. For the charm quark, we took $m_{HQ}=1.25$
GeV.
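The parameterization of eq. (4) and the modified $t$-channel propagator of eq. (5) can be sketched as follows (a minimal illustration under the stated parameter choices $\Lambda=0.2$ GeV and $\kappa=0.2$; not the code used in this analysis):

```python
import numpy as np

LAMBDA = 0.2   # QCD parameter Lambda (GeV)
KAPPA = 0.2    # kappa entering the IR regulator

def alpha_eff(R2: float, Nf: int = 3) -> float:
    """Effective running coupling of eq. (4) at energy scale squared R2 (GeV^2)."""
    beta0 = 11.0 - 2.0 * Nf / 3.0
    if R2 < 0.0:
        L_minus = np.log(-R2 / LAMBDA**2)
        return (4.0 * np.pi / beta0) / L_minus
    L_plus = np.log(R2 / LAMBDA**2)
    return (4.0 * np.pi / beta0) * (0.5 - np.arctan(L_plus / np.pi) / np.pi)

def mu2_IR(t: float, T: float, Nf: int = 3) -> float:
    """IR regulator mu_IR^2(t, T) entering the t-channel gluon propagator."""
    return KAPPA * 4.0 * np.pi * (1.0 + Nf / 6.0) * alpha_eff(t, Nf) * T**2

def t_channel_factor(t: float, T: float, Nf: int = 3) -> float:
    """Replacement alpha/t -> alpha_eff(t) / (t - mu_IR^2(t, T)) of eq. (5)."""
    return alpha_eff(t, Nf) / (t - mu2_IR(t, T, Nf))

print(alpha_eff(-1.0), t_channel_factor(-1.0, 0.3))   # spacelike t = -1 GeV^2
```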
Figure 1: Temperature dependence of spatial diffusion coefficient of charm
quark and comparison of the results with other approaches.
In this study, we consider three different setups to characterize the coupling
strength of heavy quarks in the QGP medium. (i) Setup I-Constant value of
$2\pi D_{s}T$: In the static limit of heavy quarks, the coupling of heavy
quarks in the QGP can be described with one physical parameter-the spatial
diffusion coefficient $D_{s}$, which is often treated as a phenomenological
parameter. For $\sqrt{s_{NN}}=2.76$ TeV energy collisions, we take $2\pi
D_{s}T=3.0$. (ii) Setup II-Temperature dependent heavy quark transport
coefficients: Employing the fluctuation-dissipation theorem, $D_{s}$ can be
expressed in terms of a drag coefficient in the limit $p\rightarrow 0$ as,
$D_{s}=\frac{T}{m_{HQ}A(p\rightarrow 0,T)}$. The momentum dependence of
$D_{s}$ is neglected in this scenario. The temperature dependence of $D_{s}$
for the collisional and radiative processes with the effective coupling and
modified IR regulator is depicted in fig. 1. For the pQCD elastic collisional
process, the value of $(2\pi T)D_{s}$ lies in the range of $30-40$ van Hees
and Rapp (2005); Kurian _et al._ (2020), which is an order of magnitude
larger than that obtained from $N_{f}=0$ lattice result Banerjee _et al._
(2012), $N_{f}=2+1$ lattice data Altenkort _et al._ (2023) and quasiparticle
model (QPM) Scardina _et al._ (2017) estimation. It is seen that the gluon
radiation of heavy quarks in the medium suppresses the $D_{s}$. This can be
understood from the fact that the heavy quark is experiencing more drag as
they lose energy through collisional and radiative processes while traveling
through the QGP medium. The effective coupling that incorporates the non-
perturbative effects and the HTL IR regulator seems to have significant
impacts on the temperature behavior of $D_{s}$. With the above choice of
parameters and the inclusion of the radiative process in the analysis, it is
observed that $(2\pi T)D_{s}\approx 2-7$. (iii) Setup III-Temperature and
momentum dependent heavy quark transport coefficients: For a heavy quark with
a finite momentum, its dynamics in the QGP medium are described with
parameters, $A(p,T)$, $B_{0}(p,T)$, $B_{1}(p,T)$ where, in general, $B_{0}\neq
B_{1}$ (see eqs. 2 and 3). The momentum and temperature dependence of the
heavy quark drag and diffusion coefficients is described in eqs. 6-8. It is
seen that the drag coefficient decreases with an increase in heavy quark
momentum, whereas the trend is quite the opposite for momentum diffusion
coefficients. The transverse diffusion coefficient $B_{0}$ increases with
momentum and saturates at higher momentum. However, the coefficient $B_{1}$
has a sharp rise with an increase in heavy quark momentum, which indicates
large random kicks to the heavy quark. The fluctuation-dissipation relation is
enforced for the longitudinal diffusion coefficient to ensure the
equilibration of heavy quarks in the medium. The details of the derivation of
drag and diffusion coefficient are given in appendix A. We have also analyzed
the viscous effects to the heavy quark transport coefficients. It is seen that
viscous effects have no visible impact on the temperature behavior of $D_{s}$,
especially in the high-temperature regime Kurian _et al._ (2020).
In MARTINI, Peterson fragmentation is employed to describe the heavy quark
hadronization process. For the open heavy flavor mesons, heavy quark fragments
into a meson, and the fragmentation function estimates the fractional momentum
of the resulting hadron. Quarkonium is another possible final state for the
heavy quarks, and MARTINI separates out this final state from open heavy
flavor mesons using a three-step algorithm as described in detail in Young
_et al._ (2012).
_Results and discussions_ -
Figure 2: Nuclear suppression factor and elliptic flow of D mesons in three
different setups. Experimental data of D meson $R_{AA}$ and $v_{2}$ are from
the ALICE collaboration, Refs. Abelev _et al._ (2013) and Adam _et al._
(2016), respectively. The nuclear suppression factor for setups I and II are
almost identical.
We evaluate the $R_{AA}$ and $v_{2}$ of D mesons. The elliptic flow harmonic
$v_{2}$ was evaluated using the event plane method where the azimuthal angle
of D meson $\phi_{D}$ is correlated with the second order event-plane angle
$\Psi_{2}$. In experiments, the event plane angle is determined by the
azimuthal distribution of all charged hadrons. We used the initial state
spatial anisotropy to determine the event-plane angle. In the studied
centrality class, the spatial anisotropy angle and the event plane angle are
strongly correlated.
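As a concrete illustration of the event-plane method used here, the following short Python sketch (an assumption about the bookkeeping, not the analysis code; the event-plane angle `Psi2` is taken as given) computes $v_{2}=\langle\cos 2(\phi_{D}-\Psi_{2})\rangle$ for a sample of D mesons; in experimental analyses this estimate is additionally corrected for the finite event-plane resolution:

```python
import numpy as np

def v2_event_plane(phi_D: np.ndarray, Psi2: float) -> float:
    """Elliptic flow of D mesons with respect to a given event-plane angle."""
    return float(np.mean(np.cos(2.0 * (phi_D - Psi2))))

# toy sample: azimuthal angles drawn with a built-in v2 of 0.1 and Psi2 = 0
rng = np.random.default_rng(2)
phi = rng.uniform(0.0, 2.0 * np.pi, 200000)
weights = 1.0 + 2.0 * 0.1 * np.cos(2.0 * phi)            # dN/dphi ~ 1 + 2 v2 cos(2 phi)
keep = rng.uniform(0.0, weights.max(), phi.size) < weights
print(v2_event_plane(phi[keep], Psi2=0.0))                # close to 0.1
```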
We compared the results with the available experimental data at $30-50\%$. The
nuclear suppression factor $R_{AA}$ and elliptic flow $v_{2}$ of the D-mesons
are shown in fig. 2. We observe that the estimation with momentum and
temperature-dependent charm quark transport coefficients has a better
agreement with the available data for $R_{AA}$ in comparison with the other
two setups. In contrast, the setup underestimates the D meson $v_{2}$. A
recent study Das _et al._ (2015) has predicted that the temperature
dependence of heavy quark interaction strength plays a vital role in the
simultaneous description of both $R_{AA}$ and $v_{2}$ of D meson at the RHIC
energy. However, there are still mismatches between the calculations and
measurements, especially at the LHC energies. Notably, none of the models
could explain the enhancement in $v_{2}$ around $p_{T}=10$ GeV. This could be
due to the uncertainties of heavy quark interaction in the medium and heavy
flavor hadronization process, especially in the low $p_{T}$ regime where the
coalescence mechanism may have an impact on the observables Fries _et al._
(2008).
Figure 3: Nuclear suppression factor and elliptic flow of D mesons in setup-II
in two different centralities with event-by-event initialized and smooth hydro
backgrounds.
To quantify the impact of fluctuating IP-Glasma initial state on heavy quark
observables, we have compared the results from event-by-event initialized
calculations with those from smooth initial conditions. The smooth initial
profiles are obtained from the optical Glauber model for impact parameters 3.5
fm and 10 fm, which roughly correspond to the 0-10% and 30-50% centrality
bins. All other parameters were held fixed.
The impact of fluctuating initial conditions on D meson $v_{2}$ is illustrated
in fig. 3. At low $p_{T}$, the fluctuating initial condition seems to have a
significant influence on the D-meson $v_{2}$ for $30-50\%$ centrality. The
effect of fluctuating initial conditions can be understood as a convolution of
two distinct effects. Fluctuations increase local pressure gradients and enhance flow, which leads to an increase in $v_{2}$. However, fluctuations
also increase decorrelation between the event planes of light flavor and heavy
flavor mesons. As the two are produced by different mechanisms in different
stages of evolution, their event plane angles are generally not identical.
Heavy flavor meson $v_{2}$ is measured by taking its projection on the event
plane determined by charged hadrons, which is dominated by the light flavor
mesons. This increased decorrelation suppresses $v_{2}$. The net effect is a combination of these two competing factors. As the charm quarks are much heavier than the
background medium, the enhancement in $v_{2}$ from the increased flow is more
than compensated by the decorrelation. This effect is opposite to that
observed in jets Noronha-Hostler _et al._ (2016).
_Summary_ \- In this paper, a hybrid framework is developed to study the evolution of heavy flavor by incorporating recent developments in initial state dynamics and viscous QGP evolution, applied to Pb+Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV. We introduce
fluctuating IP-Glasma initial states and viscous hydrodynamics tuned to a
global Bayesian analysis for the first time in a phenomenological study of the
charm quark. The heavy quark dynamics is described within the Langevin
approach in the expanding medium in which heavy quark coupling strength with
the medium is quantified in terms of its transport coefficients. We explored
the momentum and temperature dependence of the charm quark transport
coefficients due to the collisional and radiative energy loss of the heavy
quarks in the QGP medium, as well as its impact on the nuclear modification
factor $R_{AA}$ and the elliptic flow $v_{2}$ of D-mesons. Our results with
improved heavy quark dynamics with the latest developments in multistage
hybrid frameworks for the dynamical evolution of collisions demonstrate that
heavy flavor observables are influenced by the IP-Glasma initial state and
bulk evolution of the medium. We see that fluctuating initial conditions have
a significant effect on charm elliptic flow at low $p_{T}$. These effects are
the result of an increase in both flow and decorrelations. While enhanced flow
is the dominant effect for light jets, the event-plane decorrelation is more
important for charm quarks. This indicates that the heavier charm quark is
less susceptible to becoming part of the background flow than light quarks.
Further, we observe that the energy loss profiles of a charm quark and non-
perturbative effects in the QGP medium have a significant role in both
$R_{AA}$ and $v_{2}$ of D-mesons.
Looking ahead, it will be interesting to explore the influence of pre-equilibrium interactions on heavy quark energy loss; these effects are essential for a coherent theoretical description of heavy flavor dynamics in heavy-ion collisions. It will also be important to take into account the uncertainties associated with the momentum and temperature dependence of the heavy quark transport coefficients in order to simultaneously describe $R_{AA}$ and $v_{2}$ in Pb+Pb collisions at $2.76$ TeV. This tuning can be achieved through a systematic model-to-data comparison. We leave these aspects to future work.
_Acknowledgements_ \- We thank Björn Schenke and Gojko Vujanovic for helpful
discussions and feedback. We acknowledge Matthew Heffernan and Nicolas Fortier
for their help with the IP-Glasma initial state files and with MUSIC
parameters. Numerical computations were done on the resources provided by the
Minnesota Supercomputing Institute (MSI) at the University of Minnesota and on
Beluga supercomputer at McGill University managed by Calcul Québec and Compute
Canada. M.S. is supported by the U.S. DOE Grant No. DE-FG02-87ER40328. M.K.
acknowledges a fellowship from the Fonds de recherche du Québec - Nature et
technologies (FRQNT), support from the Natural Sciences and Engineering
Research Council of Canada, and the Special Postdoctoral Researchers Program
of RIKEN. S.J. and C.G. are supported by the Natural Sciences and Engineering
Research Council of Canada.
## References
* Gale _et al._ (2013) C. Gale, S. Jeon, and B. Schenke, Int. J. Mod. Phys. A 28, 1340011 (2013), arXiv:1301.5893 [nucl-th] .
* Romatschke and Romatschke (2007) P. Romatschke and U. Romatschke, Phys. Rev. Lett. 99, 172301 (2007), arXiv:0706.1522 [nucl-th] .
* Heinz and Snellings (2013) U. Heinz and R. Snellings, Ann. Rev. Nucl. Part. Sci. 63, 123 (2013), arXiv:1301.2826 [nucl-th] .
* Ryu _et al._ (2015) S. Ryu, J. F. Paquet, C. Shen, G. S. Denicol, B. Schenke, S. Jeon, and C. Gale, Phys. Rev. Lett. 115, 132301 (2015), arXiv:1502.01675 [nucl-th] .
* Jaiswal and Roy (2016) A. Jaiswal and V. Roy, Adv. High Energy Phys. 2016, 9623034 (2016), arXiv:1605.08694 [nucl-th] .
* Bernhard _et al._ (2019) J. E. Bernhard, J. S. Moreland, and S. A. Bass, Nature Phys. 15, 1113 (2019).
* Everett _et al._ (2021) D. Everett _et al._ (JETSCAPE), Phys. Rev. Lett. 126, 242301 (2021), arXiv:2010.03928 [hep-ph] .
* Nijs _et al._ (2021) G. Nijs, W. van der Schee, U. Gürsoy, and R. Snellings, Phys. Rev. C 103, 054909 (2021), arXiv:2010.15134 [nucl-th] .
* Heffernan _et al._ (2023) M. R. Heffernan, C. Gale, S. Jeon, and J.-F. Paquet, (2023), arXiv:2302.09478 [nucl-th] .
* van Hees _et al._ (2006) H. van Hees, V. Greco, and R. Rapp, Phys. Rev. C 73, 034913 (2006), arXiv:nucl-th/0508055 .
* Das _et al._ (2009) S. K. Das, J.-e. Alam, and P. Mohanty, Phys. Rev. C 80, 054916 (2009), arXiv:0908.4194 [nucl-th] .
* He _et al._ (2013a) M. He, R. J. Fries, and R. Rapp, Phys. Rev. Lett. 110, 112301 (2013a), arXiv:1204.4442 [nucl-th] .
* Andronic _et al._ (2016) A. Andronic _et al._ , Eur. Phys. J. C 76, 107 (2016), arXiv:1506.03981 [nucl-ex] .
* Aarts _et al._ (2017) G. Aarts _et al._ , Eur. Phys. J. A 53, 93 (2017), arXiv:1612.08032 [nucl-th] .
* Song _et al._ (2020a) T. Song, P. Moreau, J. Aichelin, and E. Bratkovskaya, Phys. Rev. C 101, 044901 (2020a), arXiv:1910.09889 [nucl-th] .
* Mustafa (2005) M. G. Mustafa, Phys. Rev. C 72, 014905 (2005), arXiv:hep-ph/0412402 .
* Sun _et al._ (2019) Y. Sun, G. Coci, S. K. Das, S. Plumari, M. Ruggieri, and V. Greco, Phys. Lett. B 798, 134933 (2019), arXiv:1902.06254 [nucl-th] .
* Dong and Greco (2019) X. Dong and V. Greco, Prog. Part. Nucl. Phys. 104, 97 (2019).
* Golam Mustafa _et al._ (1998) M. Golam Mustafa, D. Pal, and D. Kumar Srivastava, Phys. Rev. C 57, 889 (1998), [Erratum: Phys.Rev.C 57, 3499–3499 (1998)], arXiv:nucl-th/9706001 .
* Moore and Teaney (2005) G. D. Moore and D. Teaney, Phys. Rev. C 71, 064904 (2005), arXiv:hep-ph/0412346 .
* Uphoff _et al._ (2011) J. Uphoff, O. Fochler, Z. Xu, and C. Greiner, Phys. Rev. C 84, 024908 (2011), arXiv:1104.2295 [hep-ph] .
* Das _et al._ (2014) S. K. Das, F. Scardina, S. Plumari, and V. Greco, Phys. Rev. C 90, 044901 (2014), arXiv:1312.6857 [nucl-th] .
* Cao _et al._ (2016) S. Cao, T. Luo, G.-Y. Qin, and X.-N. Wang, Phys. Rev. C 94, 014909 (2016), arXiv:1605.06447 [nucl-th] .
* Li _et al._ (2019) S. Li, C. Wang, R. Wan, and J. Liao, Phys. Rev. C 99, 054909 (2019), arXiv:1901.04600 [hep-ph] .
* Akamatsu _et al._ (2009) Y. Akamatsu, T. Hatsuda, and T. Hirano, Phys. Rev. C 79, 054907 (2009), arXiv:0809.1499 [hep-ph] .
* He _et al._ (2012) M. He, R. J. Fries, and R. Rapp, Phys. Rev. C 86, 014903 (2012), arXiv:1106.6006 [nucl-th] .
* Cao _et al._ (2013) S. Cao, G.-Y. Qin, and S. A. Bass, Phys. Rev. C 88, 044907 (2013), arXiv:1308.0617 [nucl-th] .
* Alberico _et al._ (2013) W. M. Alberico, A. Beraudo, A. De Pace, A. Molinari, M. Monteno, M. Nardi, F. Prino, and M. Sitta, Eur. Phys. J. C 73, 2481 (2013), arXiv:1305.7421 [hep-ph] .
* Song _et al._ (2015) T. Song, H. Berrehrah, D. Cabrera, J. M. Torres-Rincon, L. Tolos, W. Cassing, and E. Bratkovskaya, Phys. Rev. C 92, 014910 (2015), arXiv:1503.03039 [nucl-th] .
* Li _et al._ (2018) S. Li, C. Wang, X. Yuan, and S. Feng, Phys. Rev. C 98, 014909 (2018), arXiv:1803.01508 [hep-ph] .
* van Hees _et al._ (2008) H. van Hees, M. Mannarelli, V. Greco, and R. Rapp, Phys. Rev. Lett. 100, 192301 (2008), arXiv:0709.2884 [hep-ph] .
* Xu _et al._ (2018) Y. Xu, J. E. Bernhard, S. A. Bass, M. Nahrgang, and S. Cao, Phys. Rev. C 97, 014907 (2018), arXiv:1710.00807 [nucl-th] .
* Scardina _et al._ (2017) F. Scardina, S. K. Das, V. Minissale, S. Plumari, and V. Greco, Phys. Rev. C 96, 044905 (2017), arXiv:1707.05452 [nucl-th] .
* Beraudo _et al._ (2018) A. Beraudo _et al._ , Nucl. Phys. A 979, 21 (2018), arXiv:1803.03824 [nucl-th] .
* Cao _et al._ (2019) S. Cao _et al._ , Phys. Rev. C 99, 054907 (2019), arXiv:1809.07894 [nucl-th] .
* Wicks _et al._ (2007) S. Wicks, W. Horowitz, M. Djordjevic, and M. Gyulassy, Nucl. Phys. A 784, 426 (2007), arXiv:nucl-th/0512076 .
* Young _et al._ (2012) C. Young, B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 86, 034905 (2012), arXiv:1111.0647 [nucl-th] .
* He _et al._ (2013b) M. He, H. van Hees, P. B. Gossiaux, R. J. Fries, and R. Rapp, Phys. Rev. E 88, 032138 (2013b), arXiv:1305.1425 [nucl-th] .
* Sarkar _et al._ (2018) S. Sarkar, C. Chattopadhyay, and S. Pal, Phys. Rev. C 97, 064916 (2018), arXiv:1801.00637 [nucl-th] .
* Thakur _et al._ (2020) L. Thakur, N. Haque, and Y. Hirono, JHEP 06, 071 (2020), arXiv:2004.03426 [hep-ph] .
* Prakash _et al._ (2021) J. Prakash, M. Kurian, S. K. Das, and V. Chandra, Phys. Rev. D 103, 094009 (2021), arXiv:2102.07082 [hep-ph] .
* Gossiaux _et al._ (2011) P. B. Gossiaux, S. Vogel, H. van Hees, J. Aichelin, R. Rapp, M. He, and M. Bluhm, (2011), arXiv:1102.1114 [hep-ph] .
* Song _et al._ (2020b) T. Song, P. Moreau, Y. Xu, V. Ozvenchuk, E. Bratkovskaya, J. Aichelin, S. A. Bass, P. B. Gossiaux, and M. Nahrgang, Phys. Rev. C 101, 044903 (2020b), arXiv:2001.07951 [nucl-th] .
* Fan _et al._ (2023) W. Fan _et al._ (JETSCAPE), Phys. Rev. C 107, 054901 (2023), arXiv:2208.00983 [nucl-th] .
* Rodriguez _et al._ (2010) R. Rodriguez, R. J. Fries, and E. Ramirez, Phys. Lett. B 693, 108 (2010), arXiv:1005.3567 [nucl-th] .
* Zhang _et al._ (2013) H. Zhang, T. Song, and C. M. Ko, Phys. Rev. C 87, 054902 (2013), arXiv:1208.2980 [hep-ph] .
* Schenke _et al._ (2012a) B. Schenke, P. Tribedy, and R. Venugopalan, Phys. Rev. Lett. 108, 252301 (2012a), arXiv:1202.6646 [nucl-th] .
* Schenke _et al._ (2012b) B. Schenke, P. Tribedy, and R. Venugopalan, Phys. Rev. C 86, 034908 (2012b), arXiv:1206.6805 [hep-ph] .
* McDonald _et al._ (2017) S. McDonald, C. Shen, F. Fillion-Gourdeau, S. Jeon, and C. Gale, Phys. Rev. C 95, 064913 (2017), arXiv:1609.02958 [hep-ph] .
* Bartels _et al._ (2002) J. Bartels, K. J. Golec-Biernat, and H. Kowalski, Phys. Rev. D 66, 014001 (2002), arXiv:hep-ph/0203258 .
* Kowalski and Teaney (2003) H. Kowalski and D. Teaney, Phys. Rev. D 68, 114005 (2003), arXiv:hep-ph/0304189 .
* Krasnitz and Venugopalan (1999) A. Krasnitz and R. Venugopalan, Nucl. Phys. B 557, 237 (1999), arXiv:hep-ph/9809433 .
* Schenke _et al._ (2009) B. Schenke, C. Gale, and S. Jeon, Phys. Rev. C 80, 054913 (2009), arXiv:0909.2037 [hep-ph] .
* Sjöstrand _et al._ (2015) T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, Comput. Phys. Commun. 191, 159 (2015), arXiv:1410.3012 [hep-ph] .
* Eskola _et al._ (2009) K. J. Eskola, H. Paukkunen, and C. A. Salgado, JHEP 04, 065 (2009), arXiv:0902.4154 [hep-ph] .
* Schenke _et al._ (2010) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 82, 014903 (2010), arXiv:1004.1408 [hep-ph] .
* Schenke _et al._ (2011) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. Lett. 106, 042301 (2011), arXiv:1009.3244 [hep-ph] .
* Bazavov _et al._ (2014) A. Bazavov _et al._ (HotQCD), Phys. Rev. D 90, 094503 (2014), arXiv:1407.6387 [hep-lat] .
* McNelis _et al._ (2021) M. McNelis, D. Everett, and U. Heinz, Comput. Phys. Commun. 258, 107604 (2021), arXiv:1912.08271 [nucl-th] .
* Weil _et al._ (2016) J. Weil _et al._ , Phys. Rev. C 94, 054905 (2016), arXiv:1606.06642 [nucl-th] .
* Svetitsky (1988) B. Svetitsky, Phys. Rev. D 37, 2484 (1988).
* Rapp and van Hees (2010) R. Rapp and H. van Hees (2010) pp. 111–206, arXiv:0903.1096 [hep-ph] .
* Gossiaux and Aichelin (2008) P. B. Gossiaux and J. Aichelin, Phys. Rev. C 78, 014904 (2008), arXiv:0802.2525 [hep-ph] .
* Mattingly and Stevenson (1994) A. C. Mattingly and P. M. Stevenson, Phys. Rev. D 49, 437 (1994), arXiv:hep-ph/9307266 .
* Brodsky _et al._ (2003) S. J. Brodsky, S. Menke, C. Merino, and J. Rathsman, Phys. Rev. D 67, 055008 (2003), arXiv:hep-ph/0212078 .
* van Hees and Rapp (2005) H. van Hees and R. Rapp, Phys. Rev. C 71, 034907 (2005), arXiv:nucl-th/0412015 .
* Kurian _et al._ (2020) M. Kurian, M. Singh, V. Chandra, S. Jeon, and C. Gale, Phys. Rev. C 102, 044907 (2020), arXiv:2007.07705 [hep-ph] .
* Banerjee _et al._ (2012) D. Banerjee, S. Datta, R. Gavai, and P. Majumdar, Phys. Rev. D 85, 014510 (2012), arXiv:1109.5738 [hep-lat] .
* Altenkort _et al._ (2023) L. Altenkort, O. Kaczmarek, R. Larsen, S. Mukherjee, P. Petreczky, H.-T. Shu, and S. Stendebach (HotQCD), Phys. Rev. Lett. 130, 231902 (2023), arXiv:2302.08501 [hep-lat] .
* Abelev _et al._ (2013) B. Abelev _et al._ (ALICE), Phys. Rev. Lett. 111, 102301 (2013), arXiv:1305.2707 [nucl-ex] .
* Adam _et al._ (2016) J. Adam _et al._ (ALICE), JHEP 03, 081 (2016), arXiv:1509.06888 [nucl-ex] .
* Das _et al._ (2015) S. K. Das, F. Scardina, S. Plumari, and V. Greco, Phys. Lett. B 747, 260 (2015), arXiv:1502.03757 [nucl-th] .
* Fries _et al._ (2008) R. J. Fries, V. Greco, and P. Sorensen, Ann. Rev. Nucl. Part. Sci. 58, 177 (2008), arXiv:0807.4939 [nucl-th] .
* Noronha-Hostler _et al._ (2016) J. Noronha-Hostler, B. Betz, J. Noronha, and M. Gyulassy, Phys. Rev. Lett. 116, 252301 (2016), arXiv:1602.03788 [nucl-th] .
  * Landau (1937) L. D. Landau, Zh. Eksp. Teor. Fiz. 7, 203 (1937), translated in Collected Papers of L. D. Landau, D. ter Haar, ed. (Pergamon, New York, 1981).
* Mazumder _et al._ (2014) S. Mazumder, T. Bhattacharyya, and J.-e. Alam, Phys. Rev. D 89, 014002 (2014), arXiv:1305.6445 [nucl-th] .
* Abir _et al._ (2012) R. Abir, C. Greiner, M. Martinez, M. G. Mustafa, and J. Uphoff, Phys. Rev. D 85, 054012 (2012), arXiv:1109.5539 [hep-ph] .
* Gyulassy and Wang (1994) M. Gyulassy and X.-n. Wang, Nucl. Phys. B 420, 583 (1994), arXiv:nucl-th/9306003 .
* Klein (1999) S. Klein, Rev. Mod. Phys. 71, 1501 (1999), arXiv:hep-ph/9802442 .
## Appendix A: Temperature and momentum dependence of charm quark transport coefficients
Figure 4: Drag coefficient A (top) and diffusion coefficient $B_{0}$ (bottom)
as a function of charm quark momentum in the local rest frame.
By employing the Landau approximation Landau , the collision integral in the
Boltzmann equation can be simplified and the transport coefficients can be
defined as,
$\displaystyle A=~{}\langle\langle 1\rangle\rangle-\dfrac{\langle\langle{\bf
p}\cdot{\bf p}^{\prime}\rangle\rangle}{|{\bf p}|^{2}},$ (6) $\displaystyle
B_{0}=\dfrac{1}{4}\bigg{[}\langle\langle{|{\bf
p}^{\prime}|}^{2}\rangle\rangle-\dfrac{\langle\langle({\bf p}\cdot{\bf
p}^{\prime})^{2}\rangle\rangle}{|{\bf p}|^{2}}\bigg{]},$ (7) $\displaystyle
B_{1}=\dfrac{1}{2}\bigg{[}\dfrac{\langle\langle({\bf p}\cdot{\bf
p}^{\prime})^{2}\rangle\rangle}{|{\bf p}|^{2}}-2\langle\langle{\bf p}\cdot{\bf
p}^{\prime}\rangle\rangle+|{\bf p}|^{2}\langle\langle
1\rangle\rangle\bigg{]},$ (8)
where $\langle\langle F(|{\bf p}^{\prime}|)\rangle\rangle$ denotes the thermal average of a function $F(|{\bf p}^{\prime}|)$ and depends upon the heavy quark interaction process in the medium.
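For orientation, a common discretization of the Langevin dynamics referenced in the main text (its eqs. 2 and 3, which are not shown in this excerpt) uses $A$, $B_{0}$ and $B_{1}$ directly; the sketch below is a hedged, illustrative pre-point update with placeholder coefficient values, not the simulation code of this work. In the actual calculation $B_{1}$ is fixed by the fluctuation-dissipation relation rather than chosen freely.

```python
import numpy as np

def langevin_step(p, A, B0, B1, dt, rng):
    """One pre-point Langevin update of the heavy-quark momentum (local rest frame).

    p : 3-momentum [GeV]; A : drag [1/fm]; B0, B1 : transverse/longitudinal
    diffusion [GeV^2/fm]; dt : time step [fm/c]. All values here are placeholders.
    """
    p = np.asarray(p, dtype=float)
    phat = p / np.linalg.norm(p)
    xi = rng.normal(size=3)                       # Gaussian noise
    xi_long = phat * (phat @ xi)                  # component along p
    xi_tran = xi - xi_long                        # components transverse to p
    kick = np.sqrt(2.0 * B0 * dt) * xi_tran + np.sqrt(2.0 * B1 * dt) * xi_long
    return p - A * p * dt + kick                  # drag term plus stochastic kick

rng = np.random.default_rng(1)
p = np.array([5.0, 0.0, 0.0])                     # illustrative charm momentum [GeV]
for _ in range(1000):
    p = langevin_step(p, A=0.05, B0=0.04, B1=0.06, dt=0.01, rng=rng)
print(p)
```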
For the elastic collisional process, $HQ(p)+l(q)\rightarrow
HQ(p^{\prime})+l(q^{\prime})$, where $l$ denotes light quarks or gluons,
$\langle\langle F(|{\bf p}^{\prime}|)\rangle\rangle$ is defined as,
$\displaystyle\langle\langle F(|{\bf p}^{\prime}|)$
$\displaystyle\rangle\rangle=\dfrac{1}{\gamma_{HQ}}\dfrac{1}{2E_{p}}\int{\dfrac{d^{3}{\bf
q}}{(2\pi)^{3}2E_{q}}}\int\frac{d^{3}{\bf
p^{\prime}}}{(2\pi)^{3}2E_{p^{\prime}}}$
$\displaystyle\times\int\frac{d^{3}{\bf
q^{\prime}}}{(2\pi)^{3}2E_{q^{\prime}}}(2\pi)^{4}\delta^{4}(p+q-p^{\prime}-q^{\prime})$
$\displaystyle\times\sum|{\mathcal{M}}_{2\rightarrow
2}|^{2}f_{g/q}({E_{q}})\Big{(}1\pm f_{g/q}(E_{q^{\prime}})\Big{)}F(|{\bf
p}^{\prime}|),$ (9)
where $\gamma_{HQ}$ is the statistical degeneracy factor of the heavy quark and $|{\mathcal{M}}_{2\rightarrow 2}|$ is the interaction amplitude of the elastic scattering process between the heavy quark and thermal particles. Here, $f_{g/q}$ is the distribution function of thermal particles in the evolving medium.
For the inelastic ($2\rightarrow 3$) process, $HQ(p)+l(q)\rightarrow
HQ(p^{\prime})+l(q^{\prime})+g(k^{\prime})$, with
$k^{\prime}\equiv(E_{k^{\prime}},{\bf k^{\prime}_{\perp}},k^{\prime}_{z})$ as
the four-momentum of the emitted soft gluon by the heavy quark in the final
state, the thermal averaged $F(|{\bf p}^{\prime}|)$ takes the form as Mazumder
_et al._ (2014),
$\displaystyle\langle\langle F(|{\bf p}^{\prime}|)$
$\displaystyle\rangle\rangle=\dfrac{1}{\gamma_{HQ}}\frac{1}{2E_{p}}\int\frac{d^{3}{\bf
q}}{(2\pi)^{3}2E_{q}}\int\frac{d^{3}{\bf
p^{\prime}}}{(2\pi)^{3}2E_{p^{\prime}}}$
$\displaystyle\times\int\frac{d^{3}{\bf
q^{\prime}}}{(2\pi)^{3}2E_{q^{\prime}}}\int\frac{d^{3}{\bf
k^{\prime}}}{(2\pi)^{3}2E_{k^{\prime}}}\ (2\pi)^{4}$
$\displaystyle\times\delta^{(4)}(p+q-p^{\prime}-q^{\prime}-k^{\prime})\sum{|{\mathcal{M}}_{2\rightarrow
3}|^{2}}$ $\displaystyle\times f_{g/q}(E_{q})(1\pm f_{g/q}(E_{q^{\prime}}))\
(1+f_{g}(E_{k^{\prime}}))\ $
$\displaystyle\times\theta_{1}(E_{p}-E_{k^{\prime}})\
\theta_{2}(\tau-\tau_{F})\ F(|{\bf p}^{\prime}|),$ (10)
where $|{\mathcal{M}}_{2\rightarrow 3}|^{2}$ is the matrix element squared for the radiative process Abir _et al._ (2012). The theta function $\theta_{1}(E_{p}-E_{k^{\prime}})$ imposes constraints on the heavy quark initial energy, and $\theta_{2}(\tau-\tau_{F})$ enforces that the scattering time $\tau$ is larger than the gluon formation time $\tau_{F}$ (Landau-Pomeranchuk-Migdal effect) Gyulassy and Wang (1994); Klein (1999).
Fig. 4 shows the drag and diffusion coefficients of charm quark as a function
of momentum while including the collisional and radiative processes with the
modified IR regulator and effective coupling.
# Constraints on the Galactic Centre environment from Gaia hypervelocity stars
III: Insights on a possible companion to Sgr A*
F. A. Evans1,2, A. Rasskazov3, A. Remmelzwaal4, T. Marchetti5, A. Castro-
Ginard4, E. M. Rossi4, J. Bovy1,2
1David A. Dunlap Department of Astronomy and Astrophysics, University of
Toronto, 50 St. George Street, Toronto, ON, M5S 3H4, Canada
2Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St.
George Street, Toronto, ON, M5S 3H4, Canada
3DAMTP, University of Cambridge, CMS, Wilberforce Road, Cambridge CB3 0WA, UK
4Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The
Netherlands
5European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching
bei München, Germany
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We consider a scenario in which Sgr A* is in a massive black hole binary
(MBHB) with an as-of-yet undetected supermassive or intermediate-mass black
hole companion. Dynamical encounters between this MBHB and single stars in its
immediate vicinity would eject hypervelocity stars (HVSs) with velocities beyond the escape velocity of the Galaxy. In this work, we use
existing HVS observations to constrain for the first time the existence of a
companion to Sgr A*. We simulate the ejection of HVSs via the ‘MBHB slingshot’
scenario and show that the population of HVSs detectable today depends
strongly on the companion mass and the separation of the MBHB. We demonstrate
that the lack of uncontroversial HVS candidates in Gaia Data Release 3 places
a firm upper limit on the mass of a possible Sgr A* companion. Within one
milliparsec of Sgr A*, our results exclude a companion more massive than
$1000\,\mathrm{M_{\odot}}$. If Sgr A* recently merged with a companion black
hole, our findings indicate that unless this companion was less massive than
$500\,\mathrm{M_{\odot}}$, this merger must have occurred at least $10$ Myr
ago. These results complement and improve upon existing independent
constraints on a companion to Sgr A* and show that large regions of its
parameter space can now be ruled out.
###### keywords:
Galaxy: centre, nucleus – stars: kinematics and dynamics
## 1 Introduction
In the centre of our Galaxy lurks Sagittarius A* (Sgr A*), a supermassive
black hole (SMBH) with a mass of $\sim$4$\times 10^{6}\,\mathrm{M_{\odot}}$
(Ghez et al., 2008; Genzel et al., 2010; Akiyama et al., 2022). Such SMBHs
seem to be omnipresent in the nuclei of external galaxies as well (Magorrian
et al., 1998; Kormendy & Ho, 2013), at least among galaxies above a particular
mass (see Merritt, 2013). Dynamical friction (Chandrasekhar, 1943) in dense
stellar systems conspires to bring massive bodies towards the centres of
gravitational potentials. Other SMBHs delivered via major galaxy mergers or
$\sim 10^{2}-10^{4}\,\mathrm{M_{\odot}}$ intermediate mass black holes (IMBHs)
formed originally in stellar clusters (Miller & Hamilton, 2002; Portegies
Zwart & McMillan, 2002) or directly in a SMBH accretion disc (Goodman & Tan,
2004; McKernan et al., 2012) can thereby form a bound massive black hole
binary (MBHB) consisting of either an SMBH-SMBH pair (e.g. Begelman et al.,
1980; Volonteri et al., 2003; Di Matteo et al., 2005) or an SMBH-IMBH pair
(Levin et al., 2005; Portegies Zwart et al., 2006; Arca-Sedda & Gualandris,
2018). MBHBs have received considerable attention in recent years owing to the
fact that colliding MBHBs are loud gravitational wave sources (Peters, 1964)
in a frequency range accessible to the upcoming Laser Interferometer Space
Antenna (LISA; Amaro-Seoane et al., 2017).
There exists the possibility that Sgr A* has a SMBH or IMBH companion which,
owing to the fact that the Galactic Centre (GC) makes for a challenging
observational environment (see Schödel et al., 2014, for a review), has not
yet been detected. The possible parameter space for such a companion has been
steadily reduced over the past two decades from astrometric observations of
Sgr A* itself (Hansen, 2003; Reid & Brunthaler, 2004, 2020), orbital modelling
of the S-star cluster (Gualandris & Merritt, 2009) and of the S-star cluster
star S2 in particular (Gualandris et al., 2010; Naoz et al., 2020; GRAVITY
Collaboration et al., 2020). Taken together, current constraints allow for a
companion of at most a few hundred $\mathrm{M_{\odot}}$ just within or outside
the orbit of S2 (c.f. GRAVITY Collaboration et al. 2020, Appendix D).
An independent method of constraining a possible companion to Sgr A* is by
considering its direct interactions with stars that approach it. Dynamical
interactions between single stars and MBHBs tend to ‘fling’ the stars out at
roughly the orbital velocity of the less massive member of the MBHB, draining
energy from the system and hardening the MBHB (Quinlan, 1996; Zier & Biermann,
2001). Ejection velocities therefore increase as the MBHB hardens, to the
point where the hard MBHB is able to eject so-called ‘hypervelocity stars’
(HVSs) with velocities in excess of the Galactic escape speed (Yu & Tremaine,
2003). It should be noted that a companion to Sgr A* is not required to
produce HVSs in the first place – the term HVS was coined by Hills (1988), who
first theorized that the disruption of a stellar binary by a single SMBH could
eject one member of the binary with an extreme velocity (see also Yu &
Tremaine, 2003; Kenyon et al., 2008; Sari et al., 2010; Kobayashi et al.,
2012; Rossi et al., 2014; Zhang et al., 2013; Generozov & Madigan, 2020;
Generozov, 2021). While this so-called Hills mechanism remains the most
promising scenario for the ejection of fast stars from the centre of the
Galaxy, the ‘MBHB slingshot’ mechanism described above is a well-researched
alternative (Yu & Tremaine, 2003; Levin & Beloborodov, 2003; Baumgardt et al.,
2006; Sesana et al., 2006, 2007a; Löckmann et al., 2008; Marchetti et al.,
2018; Rasskazov et al., 2019; Darbha et al., 2019; Zheng et al., 2021;
Mastrobuono-Battisti et al., 2023).
After the predictions of Hills (1988), nearly two decades would elapse until
the first detection of a promising HVS candidate by Brown et al. (2005), who
discovered a B-dwarf in the outer Galactic stellar halo with a heliocentric
radial velocity of $853\pm 12\,\mathrm{km\ s^{-1}}$. Following this and other
serendipitous HVS discoveries (Edelmann et al., 2005; Hirsch et al., 2005),
other candidate stars potentially unbound to the Galaxy have trickled in
courtesy of targeted surveys (Brown et al., 2006; Brown et al., 2009, 2012,
2014), follow-up observations of previously identified stars (e.g Heber et
al., 2008; Tillich et al., 2009; Irrgang et al., 2010, 2019) and queries of
large Galactic surveys (e.g. Palladino et al., 2014; Zhong et al., 2014; Huang
et al., 2017; Koposov et al., 2020). See Brown (2015) for a review of these
objects. While the current roster of proposed HVS candidates stands at
$\sim$two dozen, only S5-HVS1 (Koposov et al., 2020) can be uncontroversially
associated with an origin in the Galactic Centre.
Our understanding of the structure and kinematics of the Milky Way as a whole
has been revolutionized by the European Space Agency satellite Gaia (Gaia
Collaboration et al., 2016). In its third and most recent data release (DR3;
Gaia Collaboration et al., 2022), Gaia provides five-parameter astrometry
(parallax, position, proper motion) for $\sim$1.5 billion Galactic sources
(Gaia Collaboration et al., 2021a) and validated heliocentric radial
velocities for $\sim$34 million sources (Katz et al., 2022; Sartoretti et al.,
2022). While Gaia observations have been invaluable in the identification of
new potential HVS candidates (Bromley et al., 2018; Shen et al., 2018; Hattori
et al., 2018; Du et al., 2019; Li et al., 2018; Luna et al., 2019; Huang et
al., 2021; Li et al., 2021, 2022; Igoshev et al., 2023; Prudil et al., 2022),
currently missing from all data releases of Gaia radial velocity catalogues
are high-confidence HVS candidates, that is, candidates with i) precise
astrometric measurements (relative parallax error $<$20%), ii) a velocity
which suggests the star is unbound to the Galaxy, and iii) a trajectory which
is consistent with an origin in the Galactic Centre (Marchetti et al., 2019;
Marchetti, 2021; Marchetti et al., 2022).
This presents a tantalizing opportunity. If Sgr A* has a hidden companion,
HVSs ejected via the MBHB slingshot mechanism may be detectable by Gaia. Given
that the selection functions of both the Gaia source catalogue and radial
velocity catalogue are relatively well-modelled (Everall & Boubert, 2022;
Sartoretti et al., 2022; Cantat-Gaudin et al., 2022, Castro-Ginard et al.
submitted), it is possible to determine with reasonable confidence which stars
should appear in the catalogue and which stars should not. An absence of
confident HVS candidates in Gaia DR3 can therefore rule out areas of the MBHB
parameter space (namely the hidden companion’s mass and separation from Sgr
A*) which predict an abundance of MBHB slingshot-ejected HVSs.
This work is a companion to Evans et al. (2022a, b) and Marchetti et al.
(2022), in which we exploited a similar possibility, focusing on the Hills mechanism. In Evans et al. (2022a) we used the lack of main sequence HVSs in Gaia Early Data Release 3 (Gaia Collaboration et al., 2021a) to impose an upper limit on the HVS ejection rate. We expanded upon this in Evans et al. (2022b), showing that considering the lack of evolved HVS candidates improves constraints. By considering as well the existence of S5-HVS1 (Koposov et al., 2020), we strengthened constraints further and additionally constrained the shape of the initial mass function (IMF) among HVS progenitor binaries in the GC. In
Marchetti et al. (2022) we mined the radial velocity catalogue of the then-
newly released Gaia DR3 to search for new high-confidence HVS candidates.
While no new HVS candidates were unearthed with precise astrometry, with a
Galactocentric velocity in excess of $700\ \mathrm{km\ s^{-1}}$ and with a
trajectory pointing away from the GC, we showed how the lack of HVS candidates
in this data release improved constraints even further. In those works we
considered the Hills mechanism as the only mechanism which ejects HVSs. In
this work we follow a similar philosophy considering the MBHB slingshot
mechanism – by exploring how different present-day MBHB binary configurations
would result in HVS populations of different size, we show that some
configurations are incompatible with an absence of detected HVSs in the Gaia
DR3 radial velocity sample.
This paper is organized as follows. In Sec. 2 we outline on a step-by-step
basis our model for generating mock populations of HVSs ejected via the MBHB
slingshot mechanism. In Sec. 3 we present our analyses of these populations –
we show how the population of HVSs in Gaia depends on the characteristics of
the MBHB binary and how the absence of high-confidence HVS candidates in Gaia
DR3 constrains the mass of an as-of-yet unseen companion to Sgr A* and its
separation from it. We discuss these results in Sec. 4 before offering a
summary and conclusions in Sec. 5.
## 2 Massive black hole binary slingshot ejection model
In this section we describe how we model the MBHB orbital evolution, how we
generate ejected HVSs, how we propagate these HVSs through the Galaxy and how
we obtain mock observations of them to determine which would be detectable by
Gaia. Overall our procedure resembles the approach of Marchetti et al. (2018),
who also explore Gaia-detectable HVSs ejected from an MBHB in the GC. We
expand upon their modelling by investigating a larger set of MBHB
configurations and by using updated prescriptions for propagating HVSs through
the Galaxy, performing mock photometry, and modelling the Gaia selection
function. The code we use to implement the model we describe in this section
is included as the MBHB module in the publicly available PYTHON package speedystar (https://github.com/fraserevans/speedystar).
### 2.1 Orbital decay of the MBHB
An MBHB embedded in a collisionless, fixed stellar background loses orbital
energy via i) the ejection of stars via the MBHB slingshot mechanism, and ii)
the emission of gravitational waves. We model the hardening of the MBHB during
phase i) following Quinlan (1996):
$\frac{da}{dt}\bigg{|}_{\rm HVS}=-\frac{G\rho H}{\sigma}a^{2}\,\text{,}$ (1)
where $G$ is the gravitational constant, $\rho$ and $\sigma$ are the mass
density and one-dimensional velocity dispersion of the stellar background,
respectively, assumed here to be $\rho=7\times 10^{4}\,\mathrm{M_{\odot}\
pc^{-3}}$ (Schödel et al., 2007; Feldmeier et al., 2014; Schödel et al., 2014)
and $\sigma=100\,\mathrm{km\ s^{-1}}$ (Figer et al., 2003; Schödel et al.,
2009; Feldmeier et al., 2014; Do et al., 2020) and $H$ is a dimensionless
hardening rate:
$H(\sigma,\rho,a)=\frac{\sigma}{G\rho}\frac{d}{dt}\left(\frac{1}{a}\right)\,\text{.}$
(2)
As noted by Quinlan (1996), $H$ is approximately constant when the binary
separation is below the ‘hardening separation’ $a_{h}$:
$a_{h}\equiv\frac{GM_{\rm c}}{4\sigma^{2}}\;\text{,}$ (3)
where $M_{\rm c}$ is the mass of the less-massive companion in the MBHB. We
approximate the orbital decay of the MBHB due to gravitational wave emission
following Peters (1964):
$\frac{da}{dt}\bigg{|}_{\rm GW}=-\frac{64}{5}\frac{G^{3}}{c^{5}}\frac{M_{\rm SgrA^{*}}M_{\rm c}M_{\rm total}}{a^{3}}\,\text{,}$ (4)
where $c$ is the speed of light, $M_{\rm SgrA^{*}}$ is the mass of Sgr A*,
taken here as $4\times 10^{6}\,\mathrm{M_{\odot}}$ (Eisenhauer et al., 2005;
Ghez et al., 2008), and $M_{\rm total}\equiv M_{\rm SgrA^{*}}+M_{\rm c}$ is
the total mass of the MBHB. The evolution of the MBHB semi-major axis with
time is then
$a(t)=a_{0}+\int_{t_{0}}^{t}\left(\frac{da}{dt}\bigg{|}_{\rm
HVS}+\frac{da}{dt}\bigg{|}_{\rm GW}\right)dt\,\text{,}$ (5)
where at an initial time $t_{0}$ the binary starts at separation $a=a_{0}$.
Following as well from Quinlan (1996), the total mass $\Delta M_{\rm ej}$ in
stars ejected by the MBHB as it shrinks from separation $a$ to $a-\Delta a$ is
modelled using the dimensionless mass ejection rate $J$:
$\Delta M_{\rm ej}=JM_{\rm total}\Delta\mathrm{ln}(1/a)\,\text{.}$ (6)
Note that this ejection rate above assumes the MBHB orbital decay is driven
entirely by HVS ejections. To account for the contribution of GW emission to
the binary hardening, we must include a correction to Eq. 6:
$\Delta M_{\rm ej}=\frac{\Delta t_{\rm HVS}^{-1}}{\Delta t_{\rm
total}^{-1}}JM_{\rm total}\Delta\text{ln}(1/a)\;\text{,}$ (7)
where $\Delta t_{\rm total}$ is the total time required for the MBHB to shrink
from separation $a$ to $a-\Delta a$ and $\Delta t_{\rm HVS}>\Delta t_{\rm
total}$ is the time required to shrink from $a$ to $a-\Delta a$ if the decay
is entirely by HVS ejections, calculated by integrating and inverting Eq. 1.
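As a rough numerical illustration of Eq. 5 (not the calculation used in this paper), the sketch below integrates the combined hardening and GW decay on a log-spaced grid of separations, holding the dimensionless hardening rate fixed at a hardened-binary value $H\approx 16$ instead of using the full $H(a)$ fits; the companion mass and all other numerical choices are placeholders.

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, c

M_sgra, M_c = 4e6 * u.Msun, 4e3 * u.Msun          # primary and (assumed) companion mass
rho = 7e4 * u.Msun / u.pc**3                      # stellar background mass density
sigma = 100.0 * u.km / u.s                        # 1D velocity dispersion
H = 16.0                                          # hardened-binary value (cf. Fig. 1)

a = np.logspace(2, -6, 2000) * u.pc               # 100 pc down to the 1e-6 pc 'merger'
dadt_hvs = -(G * rho * H / sigma) * a**2          # Eq. 1
dadt_gw = -(64.0 / 5.0) * G**3 / c**5 * M_sgra * M_c * (M_sgra + M_c) / a**3  # Eq. 4

# elapsed time to reach each separation: t(a) = integral of da / (da/dt), cf. Eq. 5
dadt = dadt_hvs + dadt_gw
dt = (np.diff(a) / (0.5 * (dadt[1:] + dadt[:-1]))).to(u.Myr)
t = np.concatenate(([0.0], np.cumsum(dt.value))) * u.Myr
print(f"time to merge: {t[-1]:.0f}")
```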
Figure 1: The dependence of the dimensionless hardening rate $H$ (left) and
mass ejection rate $J$ (right) on the MBHB hardness, assuming a circular MBHB
orbit. MBHB separations are scaled to the hardening separation (Eq. 3). Relationships for MBHBs with differing mass ratios ($q$) are shown with
different line colours.
Over a grid of MBHB mass ratios $q$ in the range $10^{-4}\leq q\leq 10^{0}$,
we determine the dimensionless hardening rate $H$ and the mass ejection rate $J$ by performing scattering experiments. The methodology of these experiments closely follows Rasskazov et al. (2019) and we refer the reader to that work for more details. Stars, assumed to be massless, approach an MBHB of separation $a$ from infinity with velocity $v$ and impact parameter $b$, with directions randomized with respect to the binary phase and orbital plane. $v$ is drawn from a Maxwellian distribution in the range [$3\times 10^{-3}v_{0}\sqrt{q/(1+q)},30v_{0}\sqrt{q/(1+q)}$] (Sesana et al., 2006), where
$v_{0}\equiv\sqrt{\frac{G(M_{\rm SgrA^{*}}+M_{\rm c})}{a}}$ (8)
is the orbital velocity of the MBHB. $b^{2}$ is drawn uniformly such that
pericenter distances are constrained to the range $[0,5a]$. We assume the
scattering interactions never increase the eccentricity of the binary – its
orbit remains circular throughout (see discussion on this point in Sec. 4).
Each star is scattered off the binary and the simulation ends when i) the star
reaches a distance of 50$a$ from the MBHB with a positive total energy, or ii)
the timescale for the scattering interaction exceeds 10 Gyr, or iii) when the
star spends longer than $1.6\times 10^{4}$ binary orbital periods within a
distance of 50$a$ from the MBHB. For each MBHB mass ratio $q$ we run a total
of four million such simulations using the ARCHAIN (Mikkola & Merritt, 2008)
algorithm, specifically developed to simulate small-$N$ systems.
The results of these simulations are shown in Fig. 1, where we plot how $H$
and $J$ depend on the MBHB separation for various mass ratios. In our
calculation of $J$, a star counts as ‘ejected’ if and only if its velocity at
a large distance from the MBHB is larger than $5.5\sigma=550\;\mathrm{km\
s^{-1}}$, which is approximately the escape velocity from the bulge if it is
modelled as a single isothermal sphere profile. In general $H$ at fixed
$a/a_{\rm h}$ increases with decreasing $q$ though the relationship is not
strictly monotonic. Note as well that more massive binaries still merge more quickly, since $a_{\rm h}$ increases linearly with $q$ (Eq. 3). In agreement with
Sesana et al. (2006), we find that each can be approximated as
$\displaystyle H(a)$ $\displaystyle=A_{H}(1+a/a_{0,H})^{\gamma_{H}}\;\text{,}$
(9) $\displaystyle J(a)$
$\displaystyle=A_{J}(a/a_{0,J})^{\alpha_{J}}(1+(a/a_{0,J})^{\beta_{J}})^{\gamma_{J}}\;\text{,}$
(10)
where $A$, $\alpha$, $\beta$, $\gamma$ are fitting parameters. In Table 1 we
share best-fit parameters for a selection of mass ratios.
Table 1: Best-fit parameters for the hardening rate $H$ and stellar mass ejection rate $J$ (see Eqs. 9 and 10).
mass ratio | $A_{H}$ | $a_{0,H}/a_{h}$ | $\gamma_{H}$ | $A_{J}$ | $a_{0,J}/a_{h}$ | $\alpha_{J}$ | $\beta_{J}$ | $\gamma_{J}$
---|---|---|---|---|---|---|---|---
1.0 | 14.58 | 1.98 | -0.62 | 0.21 | 0.14 | -0.36 | 11.17 | -0.20
0.3 | 15.98 | 3.42 | -0.77 | 0.15 | 0.48 | -0.35 | 2.77 | -1.08
0.1 | 17.33 | 3.43 | -0.77 | 0.26 | 0.41 | -0.18 | 1.32 | -1.57
0.03 | 18.34 | 3.86 | -0.84 | 0.22 | 0.24 | -0.27 | 2.37 | -0.54
0.01 | 19.14 | 3.24 | -0.77 | 0.20 | 0.21 | -0.29 | 3.75 | -0.29
0.003 | 18.27 | 4.00 | -0.81 | 0.21 | 0.20 | -0.30 | 4.31 | -0.26
0.001 | 17.29 | 3.29 | -0.75 | 0.22 | 0.21 | -0.30 | 3.93 | -0.30
0.0003 | 17.36 | 2.92 | -0.73 | 0.22 | 0.22 | -0.29 | 3.46 | -0.36
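For convenience, the fitting forms of Eqs. 9 and 10 can be evaluated directly from the tabulated coefficients; the short sketch below does so for the $q=0.01$ row of Table 1 (separations in units of $a_{h}$), purely as an illustration.

```python
import numpy as np

def H_fit(a_over_ah, A_H=19.14, a0_H=3.24, gamma_H=-0.77):
    """Dimensionless hardening rate, Eq. 9, with the q = 0.01 parameters of Table 1."""
    return A_H * (1.0 + a_over_ah / a0_H) ** gamma_H

def J_fit(a_over_ah, A_J=0.20, a0_J=0.21, alpha_J=-0.29, beta_J=3.75, gamma_J=-0.29):
    """Dimensionless mass ejection rate, Eq. 10, with the q = 0.01 parameters of Table 1."""
    x = a_over_ah / a0_J
    return A_J * x**alpha_J * (1.0 + x**beta_J) ** gamma_J

a_scaled = np.logspace(-2, 1, 5)
print(H_fit(a_scaled))   # roughly constant for a << a_h
print(J_fit(a_scaled))
```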
Another result of our scattering experiments, echoing the results of Rasskazov
et al. (2019), is that the high-velocity tail of the ejection velocity
distribution ($v_{\rm ej}\geq 400\,\mathrm{km\ s^{-1}}$) is well-fit by a
broken power-law distribution of the form
$\frac{\mathrm{dN}}{\mathrm{dlog}v_{\rm ej}}=\begin{cases}C_{\rm
1},&\text{if}\ v_{\rm ej}<v_{\rm break}\\\ C_{\rm 2}v_{\rm
ej}^{-3.1},&\text{if}\ v_{\rm ej}\geq v_{\rm break}\end{cases}$ (11)
where $C_{1}$ and $C_{2}$ are constants, $v_{\rm
break}=1.2v_{0}\sqrt{2q}/(1+q)$ and $v_{0}$ is the circular velocity of the
MBHB (Eq. 8). When assigning initial velocities to ejected stars, we draw
velocities at random from this distribution in the range [5.5$\sigma$,
5$v_{0}$].
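A hedged sketch of this velocity assignment is given below: ejection velocities are drawn from the broken power law of Eq. 11 over [$5.5\sigma$, $5v_{0}$] by rejection sampling in $\log v$, assuming continuity at $v_{\rm break}$ so that the normalization constants drop out; the mass ratio and $v_{0}$ in the usage example are illustrative.

```python
import numpy as np

def sample_v_ej(n, v0, q, sigma=100.0, rng=None):
    """Draw n ejection velocities [km/s] from the broken power law of Eq. 11."""
    rng = rng or np.random.default_rng()
    v_min, v_max = 5.5 * sigma, 5.0 * v0
    v_break = 1.2 * v0 * np.sqrt(2.0 * q) / (1.0 + q)
    out = np.empty(0)
    while out.size < n:
        v = np.exp(rng.uniform(np.log(v_min), np.log(v_max), size=2 * n))
        # acceptance probability proportional to dN/dlog v, normalized to 1 below the break
        p_accept = np.where(v < v_break, 1.0, (v / v_break) ** -3.1)
        out = np.concatenate([out, v[rng.uniform(size=v.size) < p_accept]])
    return out[:n]

# e.g. a q = 1e-3 companion at a = 1 mpc around a 4e6 Msun primary gives v0 ~ 4150 km/s
v_ej = sample_v_ej(10_000, v0=4150.0, q=1e-3)
print(v_ej.min(), np.median(v_ej), v_ej.max())
```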
Figure 2: The MBHB orbital decay and rate of mass in ejected stars as a
function of binary semi-major axis, for three values of the mass ratio $q$.
The coloured vertical lines show the hardening separation (Eq. 3) for the
corresponding mass ratio. The primary MBH mass in each scenario is assumed to
be $4\times 10^{6}\,\mathrm{M_{\odot}}$ and each starts at $t=0$ at a
separation $a_{0}=100\,\mathrm{pc}$. The top panel shows how the separation
shrinks with time, with the dotted horizontal lines showing when each binary
merges. The middle panel shows how the stellar mass ejection rate evolves
throughout the inspiral. The bottom panel shows how the decay rate changes
with decreasing separation, with the contributions from HVS ejections and from
GW emissions plotted separately.
In Fig. 2 we summarize our modelling of the MBHB orbit, showing the orbital
decay and ejected stellar mass for MBHBs with $q=10^{-1}$, $q=10^{-2}$ and
$q=10^{-3}$. Each binary has an initial separation of $a_{0}=100\,\mathrm{pc}$
at $t=0$, and for the purposes of this study it is sufficient to say the MBHB
has ‘merged’ when its separation reaches $10^{-6}\,\mathrm{pc}$, as after this
the separation will drop to zero within a year. The top panel shows the
elapsed time required for the MBHB to shrink to a separation $a$. The total
time to merge is quite sensitive to the companion mass – the $q=10^{-1}$,
$q=10^{-2}$ and $q=10^{-3}$ binaries merge within 34, 60 and 120 Myr,
respectively. The vertical lines show the hardening separations $a_{\rm h}$
for the corresponding MBHB. The MBHB reaches $a=a_{\rm h}$ at earlier stages
of the inspiral for progressively more massive companions of Sgr A*. The middle panel shows that the HVS mass ejection rate profile peaks shortly after
$a=a_{\rm h}$. For small $q$ this peak occurs after $da/dt|_{\rm
HVS}=da/dt|_{\rm GW}$ (see bottom panel) and as $q$ increases this peak occurs
while the slingshot mechanism is still the dominant hardening mechanism. The
total ejected HVS mass also depends strongly and nonlinearly on the MBHB
companion mass – a binary with a mass ratio of $10^{-2}$ ejects 330 times more
HVS mass in total than a binary with a mass ratio of $10^{-4}$, but only eight
times more mass than a binary with a mass ratio of $10^{-3}$.
### 2.2 Generating the HVS sample
The previous subsection described how the mass ejected in HVSs by a MBHB is
related to the characteristics and evolution of the binary. Here we describe
how we apply this to generate individual mock HVSs. Our approach is as
follows.
We assume an MBHB composed of Sgr A* and a companion of mass $M_{\rm c}$ started initially at separation $a_{0}=100\,\mathrm{pc}$ at time $t_{0}$ and is now at separation $a_{\rm now}$. (This is a very large initial separation, chosen mostly to ensure we capture all HVS ejections; in practice, we find that only stars ejected when the MBHB is at a separation of $\lesssim 0.1$ pc are detectable in any Gaia data release.) We
create a grid of 1000 semi-major axes from $a_{\rm now}$ to $a_{0}$ uniformly
spaced in log-space. We use Eq. 7 to determine the mass $\Delta
M_{\mathrm{ej},i}$ ejected while the binary was in the $i$’th separation bin,
$i$ = 1, 2, …, 1000. The corresponding number of ejected HVSs is
$\Delta N_{i}=\frac{\Delta M_{\mathrm{ej},i}}{\int_{M_{\rm min}}^{M_{\rm
max}}Mf(M)dM}\;\text{,}$ (12)
where $f(M)$ is the assumed IMF, defined between minimum and maximum stellar
masses $M_{\rm min}$ and $M_{\rm max}$. We adopt a single power-law IMF with
slope $\kappa$, i.e. $f(m)\propto m^{-\kappa}$ and minimum and maximum masses
of $0.1\,\mathrm{M_{\odot}}$ and $100\,\mathrm{M_{\odot}}$, respectively.
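A minimal sketch of this step (not the speedystar implementation) is given below: Eq. 12 converts the mass ejected in one separation bin into a number of stars with masses drawn from a single power-law IMF on $[0.1,100]\,\mathrm{M_{\odot}}$. The bin mass is a toy value, and the exponent convention used here is $dN/dm\propto m^{-1.7}$ (the Lu et al. 2013 slope); the paper's sign convention for $\kappa$ may differ.

```python
import numpy as np

def mean_stellar_mass(kappa, m_min=0.1, m_max=100.0):
    """<m> for f(m) proportional to m^-kappa on [m_min, m_max] (valid for kappa != 1, 2)."""
    num = (m_max**(2.0 - kappa) - m_min**(2.0 - kappa)) / (2.0 - kappa)
    den = (m_max**(1.0 - kappa) - m_min**(1.0 - kappa)) / (1.0 - kappa)
    return num / den

def sample_imf(n, kappa, m_min=0.1, m_max=100.0, rng=None):
    """Inverse-transform sampling of the single power-law IMF f(m) proportional to m^-kappa."""
    rng = rng or np.random.default_rng()
    g = 1.0 - kappa
    u = rng.uniform(size=n)
    return (m_min**g + u * (m_max**g - m_min**g)) ** (1.0 / g)

kappa = 1.7           # dN/dm proportional to m^-1.7; see the caveat on the sign convention above
dM_ej = 50.0          # Msun ejected in one separation bin (toy value)
dN = int(dM_ej / mean_stellar_mass(kappa))          # Eq. 12
masses = sample_imf(dN, kappa)
print(dN, masses.mean())
```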
If the MBHB is currently at separation $a_{\rm now}$, all stars ejected while the binary was in the $i$’th separation bin have the same flight time $t_{\rm flight,\textit{i}}$:
$t_{\rm flight,\textit{i}}=\int_{a_{i}}^{a_{\rm
now}}\left(\frac{da}{dt}\bigg{|}_{\rm HVS}+\frac{da}{dt}\bigg{|}_{\rm
GW}\right)^{-1}da\,\text{,}$ (13)
where $a_{i}$ is the midpoint of the $i$’th separation bin. We assume a star
is equally likely to be ejected at any point in its lifetime; the age of each star at ejection, $t_{\rm age,ej}$, is therefore a random fraction of its maximum lifetime $t_{\rm life}$:
$t_{\rm age,ej}=\epsilon\cdot t_{\rm life}\;,$ (14)
where $\epsilon$ is a random number uniformly distributed in [0,1]. $t_{\rm
life}$ for each ejected star is determined using the single stellar evolution
(SSE; Hurley et al., 2000) algorithms within the AMUSE environment (https://amuse.readthedocs.io/en/latest/index.html; Portegies Zwart et al., 2009; Portegies Zwart et al., 2013; Pelupessy et al.,
2013; Portegies Zwart & McMillan, 2018), taking the start of the asymptotic
giant branch phase as the ‘end’ of a star’s life. We cap $t_{\rm life}$ to a
maximum of 13.8 Gyr to ensure the star is not older than the Universe.
Today, the age of each star is
$t_{\rm age}=t_{\rm age,ej}+t_{\rm flight}\;.$ (15)
We remove stars for which $t_{\rm age}>t_{\rm life}$, i.e. stars which were
ejected as main sequence or evolved stars but are stellar remnants in the
present day.
After drawing an ejection velocity for each surviving star following Eq. 11,
stars are initialized on a sphere $3\,\mathrm{pc}$ in radius centred on Sgr A*
with a velocity pointing radially away from the GC. We assume stars are
ejected isotropically – we comment further on this assumption in Sec. 4.
### 2.3 Orbital integration and mock photometry
After initializing our mock HVSs, our scheme for propagating them forward in
time through the Galactic potential and obtaining synthetic photometry of them
remains essentially unchanged from the method described in Evans et al.
(2022b). We refer the reader to that work for more detailed explanations and
briefly summarize the approaches here.
We propagate each star forward in time for its flight time assuming the
Galactic potentials of McMillan (2017), who use a Markov chain Monte Carlo (MCMC) method to fit a many-component potential to various kinematic data. For each realization, we draw a potential at random from the McMillan (2017) MCMC chain (P. McMillan, private communication). All ejected stars are integrated through this potential with the PYTHON package GALPY (https://github.com/jobovy/galpy; Bovy, 2015) using a fifth-order
Dormand-Prince integrator (Dormand & Prince, 1980) and a timestep of
$0.1\,\mathrm{Myr}$.
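A hedged sketch of this propagation step is shown below: a single mock star launched radially from 3 pc is integrated through the McMillan (2017) potential with GALPY, assuming a galpy version that ships the McMillan17 potential; the initial conditions are illustrative placeholders rather than draws from the ejection model.

```python
import numpy as np
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential.mwpotentials import McMillan17

# [R, vR, vT, z, vz, phi]: star launched 3 pc from the GC with a purely radial 1500 km/s velocity
o = Orbit([3e-3 * u.kpc, 1500.0 * u.km / u.s, 0.0 * u.km / u.s,
           0.0 * u.kpc, 0.0 * u.km / u.s, 0.0 * u.rad])
ts = np.arange(0.0, 30.0, 0.1) * u.Myr          # 30 Myr flight time, 0.1 Myr steps
o.integrate(ts, McMillan17, method="dopr54_c")  # fifth-order Dormand-Prince integrator
print(o.R(ts[-1]), o.vR(ts[-1]))                # Galactocentric R and v_R at the present day
```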
We estimate the visual dust extinction at each star’s distance and sky
position using the combined15 dust map of Bovy et al. (2016) (https://github.com/jobovy/mwdust), itself a combination of the Galactic dust maps of Drimmel et al. (2003); Marshall et al. (2006) and Green et al. (2015). We then determine mock apparent magnitudes for each mock HVS in the photometric bands using the MESA Isochrones and Stellar Tracks, or MIST (Dotter, 2016; Choi et al., 2016), models (https://waps.cfa.harvard.edu/MIST/).
With each star’s luminosity, surface gravity and effective temperature
(determined with AMUSE depending on each star’s mass, age and metallicity), as
well as its visual extinction, we interpolate the MIST bolometric correction
tables to determine each star’s apparent magnitude in the Johnson-Cousins $V$ and $I_{\rm c}$ bands (Bessell, 1990) and the Gaia $G$ and $G_{\rm RP}$ bands (see https://www.cosmos.esa.int/web/gaia/edr3-passbands; Riello et al., 2021). We estimate each star’s magnitude in the Gaia $G_{\rm RVS}$ band from
its $V$, $I_{\rm c}$ and $G$-band magnitudes using fits from Jordi et al.
(2010).
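Illustratively, the final photometric step amounts to combining an absolute magnitude (from the interpolated bolometric corrections) with the star's distance and line-of-sight extinction; the MIST interpolation itself is not reproduced in the sketch below and the input values are placeholders.

```python
import numpy as np

def apparent_mag(abs_mag, dist_kpc, extinction_mag=0.0):
    """m = M + 5 log10(d / 10 pc) + A, with the distance d given in kpc."""
    return abs_mag + 5.0 * np.log10(dist_kpc * 1000.0 / 10.0) + extinction_mag

# e.g. a G-band absolute magnitude of 4 at 8 kpc behind 2 mag of extinction
print(apparent_mag(4.0, 8.0, 2.0))   # about 20.5
```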
### 2.4 HVSs identifiable by Gaia
With mock magnitudes and stellar parameters for our synthetic HVS populations,
we now determine which HVS candidates should be detectable in the different
Gaia data releases with measured radial velocities. While we use an updated
selection function, this approach is similar in philosophy to Evans et al.
(2022b).
For Gaia DR3, we start by discarding all HVSs with effective temperatures
outside the range $3500\,\mathrm{K}\leq T_{\rm eff}\leq 6900\,\mathrm{K}$,
since the Gaia spectroscopic pipeline does not assign validated radial
velocities to stars outside this temperature range (Katz et al., 2022;
Sartoretti et al., 2022). Next, we use the selection functions made available
by the GaiaUnlimited project (https://github.com/gaia-unlimited/gaiaunlimited; Cantat-Gaudin et al., 2022; Castro-Ginard et al., 2023) to identify stars
which would be detectable in the Gaia DR3 radial velocity catalogue. In short,
the DR3 empirical selection function is calibrated against the Dark Energy
Camera Plane Survey Data Release 1 (Schlafly et al., 2018), and querying it
yields the probability $p_{\rm source}$ that a star at a given sky position
with a given Gaia G-band magnitude would appear in the DR3 source catalogue,
i.e. it would have at least a measured position, magnitude and colour. Since
it is calibrated against a deeper survey, this approach only characterizes the faint-end selection function of Gaia. At the bright end, saturation
effects lead to incompleteness for sources brighter than $G\lesssim 3$ (see
Fabricius et al., 2021; Gaia Collaboration et al., 2021b). We remove all mock
HVSs brighter than $G=3$; however, we note that it is extraordinarily rare for
our model to produce HVSs this bright. Likewise, querying the spectroscopic
selection function yields the probability $p_{\rm vrad}$ that a star in the
DR3 source catalogue at a given sky position, $G$-band magnitude and $G-G_{\rm
RP}$ colour is in the radial velocity catalogue as well. We manually set
$p_{\rm vrad}$ to zero for mock HVSs in sky position/magnitude/colour bins
entirely unpopulated in the DR3 radial velocity catalogue, since in the
GaiaUnlimited statistical model $p_{\rm vrad}$ will be non-zero but small and
highly uncertain. The total probability $p$ that a mock HVS would appear in
the Gaia DR3 radial velocity subsample is then the product of $p_{\rm source}$
and $p_{\rm vrad}$. For each mock HVS in our sample we draw a random number
$0<\epsilon<1$ and designate the HVS as DR3-detectable if $\epsilon<p$.
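Schematically, the detectability draw described above reduces to a Bernoulli trial on the product of the two selection-function probabilities; in the sketch below the $p_{\rm source}$ and $p_{\rm vrad}$ values are placeholder arrays standing in for the GaiaUnlimited queries.

```python
import numpy as np

rng = np.random.default_rng(42)
p_source = np.array([0.99, 0.80, 0.10])       # placeholder source-catalogue probabilities
p_vrad = np.array([0.90, 0.40, 0.05])         # placeholder RV-catalogue probabilities
empty_bin = np.array([False, False, True])    # bins unpopulated in the DR3 RV catalogue
p_vrad = np.where(empty_bin, 0.0, p_vrad)     # manually zeroed, as described above

p_detect = p_source * p_vrad                  # total detection probability
detected = rng.uniform(size=p_detect.size) < p_detect
print(p_detect, detected)
```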
After deciding which mock HVSs would have radial velocities in DR3, we
determine each one’s $5\times5$ astrometric covariance matrix (position, parallax and proper motion uncertainties and correlations amongst them) by querying the Gaia DR3 astrometric spread function of Everall et al. (2021) (see https://github.com/gaiaverse/scanninglaw), which computes uncertainties based on the sky position and $G$-band magnitude of the source. Finally, we determine the DR3 radial velocity uncertainty for each star using the PYTHON package PyGaia (https://github.com/agabrown/PyGaia).
We also determine which stars will be detectable in the fourth Gaia data
release, DR4. Radial velocity measurements in this survey would be available
for all mock HVSs cooler than $6900\,\mathrm{K}$ down to the $G_{\rm
RVS}=16.2$ mag faint-end magnitude limit of the Gaia radial velocity
spectrometer (Cropper et al., 2018; Katz et al., 2019). For hotter HVSs,
radial velocities would be available for stars brighter than $G_{\rm RVS}=14$.
We estimate DR4 astrometric errors using the Everall et al. (2021) DR3 astrometric spread function, reducing the errors according to the predicted Gaia performance (https://www.cosmos.esa.int/web/gaia/science-performance; see also Brown, 2019).
To be labelled as a ‘detectable’ high-confidence HVS in a particular Gaia data release (DR3 or DR4), an ejected star must:
* •
be brighter than the faint-end apparent magnitude limit of the radial velocity
catalogue of the data release.
* •
have an effective temperature within the bounds imposed by each data
release (see above).
* •
have a relative parallax uncertainty below 20%. For larger uncertainties,
estimating distances (and therefore total velocities) becomes problematic (see
Bailer-Jones, 2015).
* •
have a total velocity in the Galactocentric rest frame in excess of
$700\,\mathrm{km\ s^{-1}}$. We sample over the astrometric and radial velocity
uncertainties of each mock star and compute its total velocity assuming a
distance from the Sun to the GC of $8.122$ kpc (GRAVITY Collaboration et al.,
2018), a height of the Sun above the Galactic disc of $20.8$ pc (Bennett &
Bovy, 2019) and a rest-frame velocity of the Sun of
$\textbf{v}_{\odot}\equiv[U_{\odot},V_{\odot},W_{\odot}]=[12.9,245.6,7.78]\,\mathrm{km\
s^{-1}}$ (Reid & Brunthaler, 2004; Drimmel & Poggio, 2018). Upon sampling, in
at least 80% of realizations the total velocity of the star must be above
$700\,\mathrm{km\ s^{-1}}$.
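For reference, a hedged sketch of the Galactocentric total-velocity computation entering the last cut is given below, using the solar parameters quoted above; the observables of the example star are illustrative placeholders, not a real candidate.

```python
import astropy.units as u
import astropy.coordinates as coord

galcen = coord.Galactocentric(
    galcen_distance=8.122 * u.kpc,                       # GRAVITY Collaboration et al. (2018)
    z_sun=20.8 * u.pc,                                   # Bennett & Bovy (2019)
    galcen_v_sun=coord.CartesianDifferential([12.9, 245.6, 7.78] * u.km / u.s),
)
star = coord.SkyCoord(                                   # placeholder observables
    ra=260.0 * u.deg, dec=-30.0 * u.deg, distance=5.0 * u.kpc,
    pm_ra_cosdec=-3.0 * u.mas / u.yr, pm_dec=-5.0 * u.mas / u.yr,
    radial_velocity=650.0 * u.km / u.s,
)
gc = star.transform_to(galcen)
v_tot = (gc.v_x**2 + gc.v_y**2 + gc.v_z**2) ** 0.5
print(v_tot.to(u.km / u.s))    # compare against the 700 km/s threshold
```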
These above cuts resemble those used to search for HVS candidates in Gaia Data
Release 2 (Marchetti et al., 2019), Early Data Release 3 (Marchetti, 2021) and
DR3 (Marchetti et al., 2022). We remark that Marchetti et al. (2019),
Marchetti (2021) and Evans et al. (2022a, b) selected HVSs by searching for
stars likely to be moving faster than the Galactic escape speed at their
position. In Marchetti et al. (2022) and in this work, however, we focus
instead on stars moving faster than $700\,\mathrm{km\ s^{-1}}$. Determining
the boundedness of a star requires assuming a Galactic potential, whereas a
flat velocity cut instead allows agnosticism towards the potential. This cut
at $700\,\mathrm{km\ s^{-1}}$ is conservative – in reasonable models of the
Milky Way potential the escape velocity is <$700\,\mathrm{km\ s^{-1}}$
everywhere except perhaps the within the innermost kpc of the Galaxy in
heavier models (e.g. McMillan, 2017).
For brevity, in the remainder of this work we use the terms ‘Gaia DR3/DR4’ to
refer exclusively to the radial velocity subsamples, and by the term ‘HVS’ we
refer only to those stars which satisfy the criteria listed above.
## 3 Results
### 3.1 An existing companion to Sgr A*
Figure 3: The population of high-confidence HVSs (see Sec. 2.4) expected to
appear in Gaia DR3 and DR4. Panels show how $N_{\rm HVS}$ depends on the mass of the companion (left), the current separation $a_{\rm current}$ between Sgr A* and its companion (middle) and the slope $\kappa$ of the initial mass function among HVS progenitors (right). Parameters are fixed to their fiducial values
when not being varied. Shaded regions span the 16th to 84th quantiles over 40
iterations. The vertical dashed line in the middle panel shows the hardening
separation for this binary. Figure 4: Contour lines show how the population of
HVSs detectable in Gaia DR3 depends on the Sgr A* companion mass $M_{\rm c}$
and the current separation $a_{\rm current}$ between Sgr A* and its companion,
averaged over 40 realizations and smoothed over the grid. The colourbar shows the 1$\sigma$ scatter of $N_{\rm HVS}$. The white dashed line shows where the
1$\sigma$ lower bound of $N_{\rm HVS}$ reaches one – the parameter space above
this line is consistent with zero HVS detections in Gaia DR3.
Having outlined our model for generating mock HVS populations from the MBHB
slingshot mechanism, we start in Fig. 3 by illustrating how the HVS population
detectable in Gaia DR3 and DR4 depends on our model assumptions. In the left
panel we show how the Gaia HVS population depends on the IMBH companion mass
when the MBHB separation is fixed at $1\,\mathrm{mpc}$ and the IMF of HVS
progenitors has a power-law slope of -1.7 (Lu et al., 2013). Below a companion
mass of $\simeq 800\,\mathrm{M_{\odot}}$, we expect $\leq 1$ HVS in both DR3
and DR4. This expected DR3 population can rise to several tens of thousands if
Sgr A* is in a near-equal mass MBHB. The steep dependence on $M_{\rm c}$ is fairly
intuitive – as the companion mass increases, the total ejected stellar mass
increases and the ejection velocity distribution shifts towards higher
velocities (Fig. 2, Eq. 11). A strong dependence on the current separation
$a_{\rm current}$ of the MBHB (middle panel) is digestible as well – a smaller
current separation means the MBHB has ejected more HVSs in the recent past,
and HVS ejection velocities are larger since the MBHB circular velocity is
larger. $N_{\rm HVS}$ plateaus for separations smaller than the binary
hardening separation (vertical line) because the binary merges shortly
thereafter (see Fig. 2). For an HVS IMF slope of -1.7 and a companion mass of
$4000\,\mathrm{M_{\odot}}$, >1 HVS is expected in Gaia DR3 as long as the MBHB
separation is less than a few milliparsecs. This number should reach $\sim$60
in DR3 or several hundred in DR4 if the MBHB is just about to merge within the next decade. (A small point of possible confusion – for simplicity, we assume the MBHB separation does not shrink throughout the Gaia mission lifetime. Given typical MBHB coalescence times (see Fig. 2), this assumption is valid. The analysis presented in this work assumes a scenario in which Gaia DR3 and DR4 are ‘snapshots’ of the Milky Way at a time when the MBHB separation is $a_{\rm current}$, rather than observations collected over the span of 34 or 66 months, respectively.)
In the right panel of Fig. 3 we show the dependence of $N_{\rm HVS}$ on the IMF power
law slope $\kappa$. The turnover at $\kappa\simeq-1.7$ can be explained by two
competing factors. Firstly, as $\kappa$ increases, ejected stars are on
average more massive, and therefore more luminous and more likely to be
brighter than Gaia’s faint-end magnitude limit of $\sim 14$ for the DR3 radial
velocity sample. Secondly, however, the total number of ejected HVSs decreases
as $\kappa$ increases, since the ejected stellar mass is locked up in fewer, more massive stars (see Eq. 12). The turnover represents the point where the
latter effect overcomes the former.
Since the impact of $\kappa$ on $N_{\rm HVS}$ in DR3 is comparatively weak
(particularly in the vicinity of $\kappa\approx-1.7$), for the remainder of
this work we explore only constraints on $a_{\rm current}$ and $M_{\rm c}$. We
run simulations over a $M_{\rm c}/a_{\rm current}$ grid, marginalizing over
$\kappa$ by sampling it at random in each iteration from a Gaussian
distribution centred on $\kappa=-1.7$ with a standard deviation of 0.2 (Lu et
al., 2013). In Fig. 4 we show the results of this simulation suite. The
contours show lines of constant $N_{\rm HVS}$ throughout $M_{\rm c}-a_{\rm
current}$ space and the colourbar indicates the 1$\sigma$ scatter. For our
fiducial choices of $M_{\rm c}=4000\,\mathrm{M_{\odot}}$ and $a_{\rm
current}=1$ mpc, we determine that $8\pm 4$ high-confidence HVSs should have
been uncovered in the radial velocity catalogue of Gaia DR3. Following from
Fig. 3, the expected DR3 HVS population ranges from <1 (for low-mass
companions at large separations) to $\sim$thousands (for a near-equal-mass,
hardened MBHB). The relative scatter is $\sigma(N_{\rm HVS})/N_{\rm HVS}\simeq
0.2$ for $N_{\rm HVS}\gtrsim 100$, rising to $\simeq 0.5$ and $\simeq 1$ if
$\sim$tens or only a few HVSs were uncovered, respectively. For $M_{\rm
c}\lesssim 10^{5}\,\mathrm{M_{\odot}}$, $N_{\rm HVS}$ increases monotonically
with decreasing $a_{\rm current}$. For larger companion masses, however,
$N_{\rm HVS}$ decreases for $a_{\rm current}$ less than $\sim$ a few $\times
10^{-1}$ mpc since the MBHB slingshot mass ejection rate peaks and begins to
decline before GW emission takes over and the binary quickly merges (see Fig.
2 middle panel). The dashed white line indicates where the 1$\sigma$ lower
limit of $N_{\rm HVS}$ is equal to one, i.e. where at least one HVS should
appear in the survey. Since no such HVSs have been detected in Gaia DR3, MBHB
binaries below this line can be excluded. A companion to Sgr A* cannot exist
within 1 mpc ($\sim$200 AU) of the GC unless it is quite low-mass ($M_{\rm
c}\lesssim 2000\,\mathrm{M_{\odot}}$). A near-equal-mass MBHB remains possible
only for separations $\gtrsim 0.05\,\mathrm{pc}$.
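To make the simulation setup above concrete, the following is a minimal sketch of the $M_{\rm c}$–$a_{\rm current}$ grid loop with the Gaussian $\kappa$ marginalization. The `simulate_hvs_population` stand-in, the grid ranges and the toy rate inside it are illustrative placeholders of our own and do not reproduce the actual ejection-and-detection pipeline (implemented in the speedystar package).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grid in companion mass [Msun] and current MBHB separation [pc].
Mc_grid = np.logspace(3.0, 6.6, 20)      # ~10^3 ... ~4x10^6 Msun
a_grid = np.logspace(-4.3, -1.3, 20)     # ~0.05 mpc ... ~50 mpc

def simulate_hvs_population(Mc, a_current, kappa):
    """Placeholder for the full ejection + Gaia DR3 detection pipeline.
    The toy rate used here has NO physical content; it only keeps the
    sketch runnable."""
    toy_rate = 1e-4 * Mc / (a_current / 1e-3)
    return rng.poisson(max(toy_rate, 0.0))

n_real = 40                               # realizations per grid cell (cf. Fig. 4)
N_mean = np.zeros((Mc_grid.size, a_grid.size))
N_std = np.zeros_like(N_mean)

for i, Mc in enumerate(Mc_grid):
    for j, a in enumerate(a_grid):
        # Marginalize over the IMF slope by drawing kappa ~ N(-1.7, 0.2)
        # anew in every realization (Lu et al. 2013).
        counts = [simulate_hvs_population(Mc, a, rng.normal(-1.7, 0.2))
                  for _ in range(n_real)]
        N_mean[i, j] = np.mean(counts)
        N_std[i, j] = np.std(counts)

# Cells whose 1-sigma lower bound exceeds one predicted HVS are in tension
# with the absence of HVS detections in Gaia DR3 (white dashed line in Fig. 4).
excluded = (N_mean - N_std) > 1.0
```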
Figure 5: Constraints on the mass of a black hole companion to Sgr A* and its
separation from Sgr A* in angle (left axis) and physical space (right axis).
Adapted from GRAVITY Collaboration et al. (2020), adapted in turn from
Gualandris & Merritt (2009). The lack of confident HVS candidates in Gaia DR3
excludes the region below the black line. The dashed horizontal line
highlights the semimajor axis of S2 at 125 mas ($\approx 4.7\,\mathrm{mpc}$;
GRAVITY Collaboration et al., 2020). Yu & Tremaine (2003) previously excluded
the orange region by remarking that the barycentre of a presumed MBHB in the
GC cannot be significantly displaced from Sgr A*, otherwise the cusp of
stellar density in the GC would not be coincident with Sgr A*. The cyan, blue
and dark blue regions are excluded from astrometric observations of Sgr A*
(Hansen & Milosavljević, 2003; Reid & Brunthaler, 2004, 2020). The violet
region is excluded by the orbit of S2 (Gillessen et al., 2009). The green
region is excluded by Naoz et al. (2020) by requiring that the orbit of S2 is
stable against perturbations by a Sgr A* companion (a fading colour denotes
weakening constraints). The solid gold region is excluded by Gualandris &
Merritt (2009) using the orbital eccentricity distribution of the S-star
cluster. Companion masses to the right of the dotted red line are excluded by
Gualandris et al. (2010) from the orbit of S2. Following the 2018 pericentric
passage of S2, these constraints were improved by GRAVITY Collaboration et al.
(2020). The diagonal dotted lines show lines of constant gravitational wave
inspiral time for the MBHB.
Prior works have constrained a companion to Sgr A*. In Fig. 5 we place our
constraints in context with these. In all, large regions of the parameter
space we exclude in this work have previously been excluded, particularly
cases where Sgr A* has a fairly massive companion just inside or outside the
orbit of S2. These configurations, if real, would result in an astrometric
‘wobble’ of Sgr A* (Hansen & Milosavljević, 2003; Reid & Brunthaler, 2004,
2020), would impact the orbit of S2 (Gillessen et al., 2009), its stability
against perturbation (Gualandris et al., 2010; Naoz et al., 2020; GRAVITY
Collaboration et al., 2020) and the stability of the S-star cluster in general
(Gualandris & Merritt, 2009). HVS observations are an independent tool to
probe the Galactic Centre, and the constraints they impose on a possible MBHB
in the GC reinforce and somewhat sharpen these prior constraints.
Of the parameter space that has not already been excluded by previous works,
we mainly rule out companions separated from Sgr A* by less than a
milliparsec. Such configurations, depending on the mass of the companion,
could only persist for a short amount of time before gravitational wave
emission drives the MBHB to coalescence. We annotate Fig. 5 with lines of
constant gravitational wave coalescence times. Some prior works have
disregarded configurations below these lines outright, as it would be
fortuitous if, in the present day, Sgr A* were about to merge with a companion
in the very near future. Regardless, the lack of HVS candidates in Gaia DR3
observations excludes such fortuitous scenarios and demonstrates the advantage
of using HVS observations alongside traditional probes. Combining prior
constraints with constraints determined in this work, the allowed
configurations of an MBHB in the GC include i) a $\lesssim 5\times
10^{4}\,\mathrm{M_{\odot}}$ companion at a separation greater than a few tens of
milliparsecs from Sgr A*, ii) a $M=100-200\,\mathrm{M_{\odot}}$ companion just
within or outside the orbit of S2 (970 AU, or 4.7 mpc), or iii) a $M\lesssim
500\,\mathrm{M_{\odot}}$ companion at a separation of less than a few hundred
microparsecs from Sgr A*.
### 3.2 A former companion to Sgr A*
Figure 6: Left: The dependence of $N_{\rm HVS}$ on the time $t_{\rm since}$
since the MBHB has merged with a companion. A negative $t_{\rm since}$
indicates that the MBHB has not yet merged – when this is the case, the top
horizontal axis indicates the MBHB separation. Shaded regions span the 16th to
84th quantiles over 50 iterations. Right: contour lines show how the
population of HVSs detectable in Gaia DR3 depends on the former Sgr A* companion
mass $M_{\rm c}$ and time elapsed since the merger $t_{\rm since}$, averaged
over 20 realizations and smoothed over the grid. The colourbar shows the
1$\sigma$ scatter of $N_{\rm HVS}$. The dashed line shows where the 1$\sigma$ lower
bound of $N_{\rm HVS}$ equals one.
As the MBHB spirals in, the HVS mass ejection rate peaks at a small separation
(Fig. 2, see also Gualandris et al., 2005; Baumgardt et al., 2006; Levin,
2006). Post-merger, this final ‘gasp’ of HVSs propagates outwards through the
Galaxy. Over the next $\sim$tens of Myr, these HVSs can still be detected from
Earth. This means HVS observations can probe not only an existing companion to
Sgr A*, but a former companion as well. We illustrate this in Fig. 6. In the
left panel we show how $N_{\rm HVS}$ depends on the time $t_{\rm since}$ since
the MBHB in the GC merged, holding $M_{\rm c}$ and the IMF index $\kappa$
fixed. A negative $t_{\rm since}$ indicates the binary has not yet merged.
$N_{\rm HVS}$ is maximized just before the MBHB coalesces. Otherwise, from the
HVS population size alone one cannot determine whether the MBHB exists in the
present day or whether it already merged in the recent past. Given a
substantial population of HVS candidates, the distribution of flight times
could discriminate between these two scenarios (see Sec. 4).
In the right panel of Fig. 6 we show how $N_{\rm HVS}$ depends on both $M_{\rm
c}$ and $t_{\rm since}$ when we sample over $\kappa$. The dashed line
indicates where the 1$\sigma$ lower limit of $N_{\rm HVS}$ reaches one. Any
configuration below this line can be ruled out, since at least one HVS ejected
before the MBHB merged should still be detectable in Gaia DR3. If Sgr A* ever
had a companion more massive than $1000\,\mathrm{M_{\odot}}$, it must have
merged with Sgr A* more than 12 Myr ago. This lower limit on $t_{\rm since}$
increases with increasing companion mass. To our knowledge this is the first
direct observational constraint on the specific merger history of Sgr A*
within the last $\sim$tens of Myr. If Sgr A* merged with a companion more than
30 Myr ago, we cannot offer constraints on how massive that companion could
have been. Similarly, if Sgr A* recently accreted a companion less massive
than $\sim$200 $\mathrm{M_{\odot}}$, current HVS observations cannot constrain
how recently this merger occurred.
### 3.3 Prospects for DR4
Figure 7: The coloured bands show configurations of $M_{\rm c}$ and $a_{\rm
current}$ (left) or $M_{\rm c}$ and $t_{\rm since}$ (right) space consistent
within 1$\sigma$ with finding the labelled number of HVSs in Gaia DR4. The
shaded grey regions are already excluded by the lack of HVSs in Gaia DR3.
Having established constraints on a possible existing or former Sgr A*
companion from Gaia DR3 HVS observations (or lack thereof), we can project
forward and explore how these constraints may update with the improved
astrometric precision and deeper faint-end magnitude limit of Gaia DR4,
expected $\sim$2026. In Fig. 7 we show how constraints on the presumed
(existing or former) Sgr A* companion mass and either its current separation
from Sgr A* (if the MBHB still exists) or the time since coalescence (if the
MBHB has already merged) will change depending on the number of high-
confidence HVSs uncovered in Gaia DR4. The coloured bands show the parameter
space allowed if $1$, $10\pm\sqrt{10}$, $100\pm 10$ or $1000\pm\sqrt{1000}$
HVSs are found in DR4. Given the lack of HVSs in Gaia DR3, our simulations
suggest we should expect no more than 9, 17, or 40 HVSs in DR4 if the
companion mass is $10^{3}$, $10^{4}$, or $10^{5}\,\mathrm{M_{\odot}}$
respectively. If zero high-confidence HVSs are discovered in Gaia DR4, regions
of parameter space below the lower edge of the ‘1’ band will be excluded.
Otherwise, since $N_{\rm HVS}$ depends so intimately on $M_{\rm c}$ and
$a_{\rm current}$/$t_{\rm since}$, strict (albeit degenerate) constraints can
be placed on these parameters if a non-zero number of HVSs are uncovered in
DR4.
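The quoted bands follow from simple Poisson counting statistics. As a small worked example, one can test whether a model prediction falls inside the 1$\sigma$ band around a hypothetical DR4 yield; the helper below is our own illustration and not part of any released code.

```python
import math

def within_poisson_band(n_predicted, n_observed):
    """True if a predicted HVS count lies inside the 1-sigma Poisson band
    N_obs +/- sqrt(N_obs) around a hypothetical DR4 yield (cf. Fig. 7)."""
    half_width = math.sqrt(n_observed)
    return (n_observed - half_width) <= n_predicted <= (n_observed + half_width)

# The bands quoted in the text: 1, 10 +/- sqrt(10), 100 +/- 10, 1000 +/- sqrt(1000).
# A model predicting 95 HVSs is consistent only with the '100' band.
for n_obs in (1, 10, 100, 1000):
    print(n_obs, within_poisson_band(95.0, n_obs))
```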
## 4 Discussion
### 4.1 Contribution from the Hills mechanism
The results of the previous section assume that all detectable HVSs are
ejected via the MBHB slingshot mechanism. This is not necessarily (nor is it
expected to be) true, as multiple possible HVS ejection mechanisms exist and
given a sizeable HVS population it may be difficult to disentangle which was
ejected via which mechanism. In practice, the ejection of HVSs from the GC may
to due to a myriad of mechanisms, each occurring in tandem and coupling to
each other. These other possible avenues include the dynamical encounters
between single stars and a ‘swarm’ of stellar-mass black holes in the GC
(O’Leary & Loeb, 2007) or dynamical interactions between a single SMBH or MBHB
and a globular cluster which has sunk toward the GC (Capuzzo-Dolcetta &
Fragione, 2015; Fragione & Capuzzo-Dolcetta, 2016). Chief among these
alternative mechanisms, however, is the Hills mechanism (Hills, 1988; Gould &
Quillen, 2003; Yu & Tremaine, 2003), which involves the tidal separation of a
stellar binary following a close encounter with a single SMBH. Indeed, it is
difficult to contrive a scenario wherein ejections via the MBHB slingshot
mechanism occur but ejections via the Hills mechanism do not. While there are
intrinsic differences in the kinematics and properties of stars ejected via
the two mechanisms (see Brown, 2015; Rasskazov et al., 2019, and references
therein), in practice these differences are difficult to discern directly, given that
current observations can detect only a biased subsample of these populations,
i.e. those which are bright and relatively nearby. When considering null HVS
detections this is not a problem, as a complete dearth of HVSs can constrain
all ejection mechanisms simultaneously. For non-null detections, however, as
may be the case for Gaia DR4, this becomes worthy of discussion. Confident
constraints on an HVS ejection mechanism cannot be imposed using a particular
HVS candidate without a notion of which mechanism(s) may be responsible for
ejecting it. The HVS candidate S5-HVS1 is an example of this conundrum, and
the next subsection below will be devoted to it. While it is beyond the scope
of this work to develop a fully self-consistent ejection model including both
the Hills mechanism and the MBHB mechanism simultaneously, in this subsection
we consider whether HVS populations ejected via these different mechanisms can
be meaningfully disentangled.
To compare the two populations, we use a population of Hills mechanism-ejected
HVSs generated by Marchetti et al. (2022). We refer the reader to that work
for more details concerning the generation of this catalogue, but it assumes
an IMF index among HVS progenitors of $\kappa=-1.7$ and generates HVS
progenitor binaries assuming power-law distributions of the binary mass ratio
and log-orbital period. In Fig. 8 we compare the properties of the Gaia
DR4-detectable HVS population ejected via the Hills mechanism (black) to the
population ejected via the MBHB slingshot mechanism (orange/yellow/blue).
Shown are the distributions of the detectable stars’ stellar masses, Gaia
G-band magnitudes, heliocentric distances, heliocentric radial velocities and
stellar ages, stacked over 40 iterations. When we vary $M_{\rm c}$ or $a_{\rm
current}$ (top and middle rows), differences between the populations are
subtle. Hills-ejected HVSs are on average slightly more massive and younger
than MBHB slingshot-ejected ones, and if the Sgr A* companion is particularly
massive and/or its separation from Sgr A* is small, slingshot-ejected HVSs
will on average exhibit a narrower range of radial velocities and will span a
wider range of distances.
If Sgr A* and its companion have already merged, differentiating between the
mechanisms becomes easier. The bottom row of Fig. 8 shows that as the time
$t_{\rm since}$ since coalescence increases, the range of stellar masses among
detectable MBHB slingshot-ejected HVSs narrows considerably (massive HVSs will
have already left the main sequence, low-mass HVSs will be too far away to
detect), the typical distance increases (the final ‘gasp’ of HVSs will be
further away), the range of radial velocities narrows (they are less impacted
by the Galactic potential and farther away, so their velocity is more in the
radial direction) and their typical ages increase (they can’t be younger than
$t_{\rm since}$). However, given that we expect $5_{-4}^{+11}$ Hills
mechanism-ejected HVSs in the radial velocity catalogue of Gaia DR4 (Evans et
al., 2022b) and at most a few tens of MBHB slingshot-ejected HVSs (Fig. 7),
uncontroversially assigning each HVS to a corresponding ejection mechanism
will be difficult in any case.
Figure 8: Among HVSs detectable in Gaia DR4, distributions of HVS stellar mass
$m$, Gaia $G$-band magnitude, heliocentric distance $d_{\rm hel}$, line-of-
sight velocity $v_{\rm rad}$ and HVS stellar age $t_{\rm age}$. Black curves
show distributions for HVSs ejected via the Hills mechanism and coloured
curves show distributions for HVSs ejected via the MBHB slingshot mechanism
when the companion mass (top row), MBHB separation (middle row) and time since
the MBHB merger (bottom row) are varied.
### 4.2 The case of S5-HVS1
Thus far in this work, the only HVS observational data we have considered is
the lack of high-quality HVS candidates in Gaia DR3. Both null and non-null
HVS detections exist in other searches and surveys, however, which can in
principle strengthen constraints. For example, the MMT HVS Survey (Brown et
al., 2009, 2012, 2014) identified several dozen HVS candidates. We choose not
to include these in our analysis since proper motion measurements for these
candidates are not sufficiently precise to conclusively associate any with an
ejection from the Galactic Centre. (Footnote: note, however, that unless one
invokes an interaction with a massive black hole, it becomes quite difficult to
explain the extreme velocities of young HVS candidates with well-constrained
total velocities such as HVS1 (Brown et al., 2005).)
Worth mentioning as well is the HVS candidate S5-HVS1, a $\sim
2.35\,M_{\odot}$ star first identified by Koposov et al. (2020) in the S5
survey (Li et al., 2019). S5-HVS1 is notable in that it is the first HVS
candidate for which an origin in the GC is uncontroversial – its trajectory
points directly away from the GC and implies an ejection $4.8\,\mathrm{Myr}$
ago with a velocity of $v_{\rm ej}\simeq 1800\,\mathrm{km\ s^{-1}}$. As an
unambiguous HVS detection, we showed in Evans et al. (2022b) that penalizing
models which predict zero or $\gg$1 HVSs similar to S5-HVS1 in the S5 survey
significantly improved constraints on the IMF slope in the GC and the ejection
rate of HVSs via the Hills mechanism. In this subsection we comment on how and
whether the inclusion of this star can improve constraints on a current or
former companion to Sgr A*.
To identify mock HVSs which would have appeared as promising HVSs in S5 by the
analysis of Koposov et al. (2020), we first compute each star’s apparent
magnitude in the Dark Energy Camera (DECam) $g$ and $r$ bands (Abbott et al.,
2018) using the MIST models as done in Sec. 2.3. We then roughly reproduce the
S5 selection function, selecting HVSs within the S5 sky footprint (see Li et
al., 2019, table 2) which have Gaia parallaxes satisfying
$\varpi<3\sigma_{\varpi}+0.2$, and DECam photometry satisfying $15<g<19.5$ and
$-0.4<(g-r)<+0.1$. We then take only those mock HVSs with heliocentric radial
velocities larger than $800\,\mathrm{km\ s^{-1}}$, since these were the ones
selected for further inspection by Koposov et al. (2020).
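For illustration, the selection just described can be written as a single boolean mask. The sketch below assumes a hypothetical record array `stars` whose field names are our own and do not correspond to any released catalogue schema.

```python
import numpy as np

def s5_candidate_mask(stars):
    """Approximate reproduction of the S5 HVS candidate selection (Sec. 4.2).

    `stars` is assumed to be a structured array with fields:
      'in_s5_footprint' (bool), 'parallax', 'parallax_error' [mas],
      'g', 'r' (DECam magnitudes), 'v_rad' [km/s].
    """
    in_footprint = stars['in_s5_footprint']
    # Parallax consistent with a distant star: varpi < 3*sigma_varpi + 0.2 mas
    parallax_cut = stars['parallax'] < 3.0 * stars['parallax_error'] + 0.2
    # DECam photometry cuts: 15 < g < 19.5 and -0.4 < (g - r) < +0.1
    colour = stars['g'] - stars['r']
    phot_cut = (stars['g'] > 15.0) & (stars['g'] < 19.5) \
        & (colour > -0.4) & (colour < 0.1)
    # Radial velocity cut applied by Koposov et al. (2020)
    rv_cut = stars['v_rad'] > 800.0
    return in_footprint & parallax_cut & phot_cut & rv_cut

# Example with a tiny synthetic table (values are arbitrary):
stars = np.array([(True, 0.05, 0.03, 17.2, 17.4, 1020.0)],
                 dtype=[('in_s5_footprint', bool), ('parallax', float),
                        ('parallax_error', float), ('g', float),
                        ('r', float), ('v_rad', float)])
print(s5_candidate_mask(stars))   # -> [ True ]
```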
The results of these selections are shown in Fig. 9. Assuming the MBHB
slingshot mechanism is the only mechanism ejecting HVSs, the left panel shows
the number of S5-HVS1 analogues predicted to appear in S5 depending on the Sgr
A* companion mass and the current separation of the MBHB. Configurations
between the white lines are consistent within 1$\sigma$ with finding exactly
one star similar to S5-HVS1 in S5, and herein lies the issue: a majority of
these configurations exist within the parameter space already excluded by the
lack of HVSs in the radial velocity catalogue of Gaia DR3. Assuming the MBHB
slingshot mechanism is the only mechanism responsible for the ejection of
HVSs, explaining both the lack of HVSs in Gaia DR3 and the existence of
S5-HVS1 is only possible if Sgr A* has a $200\,\mathrm{M_{\odot}}\lesssim
M_{\rm c}\lesssim 500\,\mathrm{M_{\odot}}$ companion at a separation of
$\lesssim 0.3$ pc.
In the right-hand panel we show similar results in $M_{\rm c}-t_{\rm since}$
space. Sgr A* having merged within the last few tens of Myr with a $\sim
200-1000\,\mathrm{M_{\odot}}$ companion is consistent with both the existence
of S5-HVS1 and the lack of HVSs in DR3. Note, however, that S5-HVS1 has a
well-constrained flight time of 4.8 Myr. If it were ejected via the MBHB
slingshot mechanism shortly before a merger, this merger must have occurred at
most 4.8 Myr ago. Within this timeframe, only a
$200\,\mathrm{M_{\odot}}\lesssim M_{\rm c}\lesssim 500\,\mathrm{M_{\odot}}$
companion can reconcile the existence of an HVS in S5 with the non-detection
of HVSs in Gaia DR3.
In conclusion, our simulations only support a MBHB slingshot origin for
S5-HVS1 in specific circumstances. Additionally, our criteria to select
S5-HVS1 analogues above makes no consideration for its extreme Galactocentric
velocity of $1750\,\mathrm{km\ s^{-1}}$. A velocity this large is quite hard
to achieve in the MBHB slingshot mechanism for a companion mass of $\lesssim
500\,\mathrm{M_{\odot}}$ – only $\sim$20 per cent of our S5-HVS1 analogues
in this mass range are as fast as S5-HVS1. Typical ejection velocities via the
Hills mechanism, on the other hand, are comparatively larger and can more
easily accommodate this star. A more rigorous investigation of whether
S5-HVS1 could be produced by this mechanism is warranted and does not yet
exist. Generozov & Madigan (2020) used S5-HVS1’s flight time as an
observational constraint and found that a $1000\,\mathrm{M_{\odot}}$ companion
separated from Sgr A* by 0.01 pc could efficiently reproduce the observed
eccentricity distribution of the S-star cluster in the GC. This, however, does
not necessarily imply S5-HVS1 was ejected via the MBHB slingshot mechanism.
Figure 9: The colourbar shows the population of HVSs predicted to appear in the
S5 survey (see Sec. 4.2 for details) in both $M_{\rm c}$-vs-$a_{\rm current}$
space (left) and $M_{\rm c}$-vs-$t_{\rm since}$ space (right). Configurations
below the red dashed lines are excluded by the results of this work. The
white, hashed areas show regions of parameter space consistent with finding
exactly one HVS in S5. The gold horizontal line in the right plot shows the
inferred S5-HVS1 flight time of $4.8\,\mathrm{Myr}$ (Koposov et al.,
2020).
### 4.3 The impact of model simplifications
Our model for the ejection of HVSs from the MBHB slingshot mechanism includes
a number of implicit and explicit simplifying assumptions. Works in recent
years have shown that some of these assumptions i) impact the evolution of
MBHBs, and ii) may not hold strictly true in the centre of our own Galaxy. In
this subsection we comment on the degree to which our results could be
impacted by these simplifications.
We assume that our MBHB is non-accreting and in a gas-free environment.
Dynamical friction driven by a dense gaseous disc and accretion onto the MBHB
impact the separation, eccentricity and inclination evolution of the MBHB (see
Escala et al., 2005; Dotti et al., 2006; Dotti et al., 2007; Muñoz et al.,
2019) and would influence the stellar evolution of HVS progenitors (see
Cantiello et al., 2021). Accretion onto the MBHB in the GC would not be strong
enough to impact its evolution – X-ray observations of the Galactic Centre
indicate an accretion rate of $10^{-6}-10^{-5}\,\mathrm{M_{\odot}\ yr^{-1}}$
(Baganoff et al., 2003; Quataert, 2004) at the Bondi radius
($0.04\,\mathrm{pc}$) and polarization measurements limit the accretion rate
to $\sim 10^{-8}\,\mathrm{M_{\odot}\ yr^{-1}}$ near the Schwarzschild radius
(Quataert & Gruzinov, 2000; Bower et al., 2003; Marrone et al., 2007). We can
check the impact of this accretion with a back-of-the-envelope calculation.
Assuming that the inspiral of a MBHB is driven entirely by gas accretion,
differentiating the angular momentum of the MBHB with respect to time yields
$\frac{\dot{a}}{a}=2\left(\frac{\dot{\ell}}{\ell}-\frac{3}{2}\right)\frac{\dot{M}}{M}\;\text{,}$
(16)
where $\dot{a}$ is the hardening rate, $\dot{M}$ is the rate of accretion onto
the MBHB, $M$ is the total MBHB mass and $\ell=L/M$ is the specific angular
momentum of the binary. $2(\dot{\ell}/\ell-3/2)$ is a constant of order unity
depending on gas accretion physics (see D’Orazio & Duffell, 2021). Plugging in
characteristic numbers for the accretion rate and MBHB mass, the accretion-
driven hardening rate never exceeds a $\sim 10^{-10}$ fraction of the total
slingshot + gravitational wave hardening rate. While evidence exists
for an episode of nuclear activity in the GC in the last few Myr (see Bland-
Hawthorn et al., 2019, and references therein), this flare was short-lived. On
the scale of detectable HVS flight times, it is valid to assume a putative
MBHB in the GC would be non-accreting and in a gas-poor environment. See Naoz
et al. (2020) for further discussion on the impact of accretion on a presumed
black hole companion to Sgr A*.
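As a concrete version of this back-of-the-envelope estimate, the sketch below evaluates the accretion-driven hardening timescale implied by Eq. 16, $t_{\rm acc}=|a/\dot{a}|\simeq M/(2|\dot{\ell}/\ell-3/2|\,\dot{M})$, for the observationally allowed range of accretion rates. The order-unity prefactor is set to one and the fiducial numbers are purely illustrative.

```python
M_mbhb = 4.0e6              # total MBHB mass [Msun], dominated by Sgr A*
mdot_range = (1e-8, 1e-6)   # allowed accretion rates [Msun/yr] (see text)
prefactor = 1.0             # |2*(dl/l - 3/2)|, of order unity (D'Orazio & Duffell 2021)

for mdot in mdot_range:
    # Eq. 16: |a_dot/a| = prefactor * mdot / M  ->  t_acc = M / (prefactor * mdot)
    t_acc = M_mbhb / (prefactor * mdot)   # [yr]
    print(f"mdot = {mdot:.0e} Msun/yr  ->  t_acc ~ {t_acc:.1e} yr")

# Output: ~4e12 - 4e14 yr, i.e. hundreds to tens of thousands of Hubble times,
# so gas accretion alone cannot meaningfully harden the binary.
```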
We have assumed for simplicity that the presumed MBHB in the GC is on an
initially circular orbit and never deviates from it until coalescence. This
assumption is reasonable: while non-zero eccentricity
increases the binary hardening rate at small separations, it leads to only
minor variations in the total stellar mass ejected via the slingshot mechanism
(Quinlan, 1996; Sesana et al., 2006; Rasskazov et al., 2019). In any event,
the impact of eccentricity is only relevant for MBHB mass ratios larger than
$10^{-3}$ – smaller mass ratio MBHBs tend to circularize with time (Rasskazov et
al., 2019; Bonetti et al., 2020).
Our model of the GC assumes the MBHB is embedded within a spherical,
nonrotating nuclear star cluster (NSC). In actuality, the NSC is slightly
flattened along the Galactic vertical axis (Schödel et al., 2014) and rotates
more or less parallel with the Galactic disc (Trippe et al., 2008; Schödel et
al., 2009; Chatzopoulos et al., 2015; Fritz et al., 2016). A nuclear cluster
which co-(counter-)rotates with the MBHB enhances (suppresses) eccentricity
growth of the MBHB (Sesana et al., 2011; Rasskazov & Merritt, 2017; Rasskazov
et al., 2019; Bonetti et al., 2020) while the impact of counter- and co-
rotation of the NSC on the MBHB hardening rate is less clear (see
contradictory results in Holley-Bockelmann & Khan, 2015; Mirza et al., 2017;
Rasskazov & Merritt, 2017; Rasskazov et al., 2019; Varisco et al., 2021). The
MBHB slingshot mechanism mass ejection rate is decreased slightly at small
separations ($a\lesssim 0.3a_{\rm h}$) if the NSC is maximally corotating
(Rasskazov et al., 2019), in which case our assumption of a non-rotating NSC
may be overestimating $N_{\rm HVS}$. If the NSC is maximally counter-rotating,
the mass ejection rate at small separations is increased by a factor of up to
$\sim$two (Rasskazov et al., 2019). This is less of a concern, given that
MBHBs are expected to align their rotation with the angular momentum of the
NSC (see below).
The density $\rho$ and velocity dispersion $\sigma$ of the NSC impact the MBHB
hardening rate (Eqs. 1-3); we assume single values for these, when in
fact they both vary with radial distance from the GC (Schödel et al., 2009,
2014; Feldmeier et al., 2014). We have confirmed that variations of $\sigma$
and $\rho$ within observational uncertainties do not meaningfully affect the
results of this work. Along with this, we neglect mass segregation
within the NSC. A mass-segregated stellar environment accelerates
hardening timescales among highly unequal-mass MBHBs but has the opposite
impact on $\sim$equal mass MBHBs (Mukherjee et al., 2023). We assume as well
that the MBHB centre of mass is always centred on the exact geometric centre
of the Galaxy. In reality, the MBHB will ‘drift’ randomly in a nonrotating
stellar nucleus (Merritt, 2001; Chatterjee et al., 2003) or on a closed orbit
if the nucleus is rotating (Holley-Bockelmann & Khan, 2015; Mirza et al.,
2017; Khan et al., 2020; Varisco et al., 2021). The centre-of-mass
displacement from the Galactic barycentre is typically on the order of the
MBHB radius of influence or smaller, i.e. not large enough to meaningfully
affect the detectable HVS population. There are very few cases where shifting
the positions of a Gaia-detectable mock HVS by $\sim$a few pc in any direction
renders it undetectable.
When generating mock HVS populations, we take no consideration of the
orientation and phase of the MBHB. If stars are ejected isotropically in the
MBHB slingshot mechanism, the orientation of the MBHB is irrelevant. While our
model assumes isotropic ejection, theoretically the fastest-ejected stars are
ejected preferentially in the plane of the binary, with a complex polar angle
distribution depending on its separation, eccentricity and mass ratio (Sesana
et al., 2006; Rasskazov et al., 2019; Darbha et al., 2019). Ejections are
likely non-axisymmetric as well, though the azimuthal distribution of fast
ejections remains unclear (see discussion in Rasskazov et al., 2019). Without
independent constraints on the inclination/phase of the MBHB, we would be
compelled to sample them at random, and therefore any angular dependence would
get washed out in our results after averaging over many iterations. There is
reason to believe, however, that a putative MBHB in the GC would be aligned
with the Galactic disc. For a spherical, nonrotating NSC, the orientation of
the MBHB orbit drifts on the order of $\sqrt{m_{*}/M_{\rm MBHB}}\sim 10^{-4}$
rad relative to its initial orientation, where $m_{*}$ is the typical mass of
a star in the NSC and $M_{\rm MBHB}$ is the total mass of the MBHB (Merritt,
2001). In a rotating NSC, however, the angular momentum of the MBHB aligns
with the angular momentum of the nucleus (Gualandris et al., 2012; Wang et
al., 2014; Cui & Yu, 2014; Rasskazov & Merritt, 2017). Having established
above that the Milky Way NSC rotates parallel to the Galactic disc, it is not
unreasonable to assume a putative MBHB would rotate parallel with the disc as
well. Another point in favour of this is the existence of the nuclear stellar
disc embedded within the NSC itself (Bartko et al., 2009; Lu et al., 2009;
Yelda et al., 2014). This stellar disc rotates in the direction of the overall
Galactic disc as well (Schönrich et al., 2015; Schultheis et al., 2021;
Sormani et al., 2022) and resonant relaxation processes would align any IMBH-
SMBH binary with such a disc (Szölgyén et al., 2021; Magnan et al., 2022). If
the MBHB orbit were indeed aligned with the Galactic midplane, our assumption
of isotropic ejections would slightly underestimate $N_{\rm HVS}$, since in
actuality a greater proportion of HVSs would be ejected along directions close
to the Galactic plane, and hence towards the Earth.
Finally, our model assumes the phase space of low-angular momentum orbits
which bring GC stars on a close ($\lesssim a$) approach to the MBHB is
efficiently repopulated over time. Only stars within this ‘loss cone’
experience strong dynamical interaction with the MBHB. This loss cone is
refilled over time, primarily via two-body relaxation in the NSC surrounding Sgr A* (see
Lightman & Shapiro, 1977; Merritt, 2013) or resonant relaxation processes in
the young clockwise disc (Rauch & Tremaine, 1996; Madigan et al., 2009, 2011;
Madigan et al., 2014). In principle, if these scattering mechanisms are not
efficient, the loss cone can empty and the MBHB hardening (and therefore the
ejections of HVSs) can slow down or cease entirely (Milosavljević & Merritt,
2003). This problem was investigated in the context of HVS ejections by Sesana
et al. (2007b), who found that without loss cone refilling the MBHB stalls
after ejecting only half the MBHB reduced mass in stars, $\sim
10^{2}-10^{6}\,\mathrm{M_{\odot}}$ in our case depending on the assumed mass
ratio. This is smaller than the total mass ejected in the full loss cone
regime by a factor of a few ($\sim$10) if the mass ratio is small (large) and
the ejected stars would have smaller typical velocities. Concerns about loss
cone depletion among MBHBs have largely been alleviated in recent years, as
simulations have shown that relaxation-driven refilling of the loss cone is
sufficient to avoid loss cone depletion in triaxial potentials (Berczik et
al., 2006; Vasiliev et al., 2015; Gualandris et al., 2017), or even biaxial
potentials (Khan et al., 2013). Regardless, our models in this work assume a
full loss cone, and this can lead to an overestimation of $N_{\rm HVS}$ in
cases of partial depletion. This assumption is likely the largest source of
systematic uncertainty in our modelling, given the strong dependence of
$N_{\rm HVS}$ on the loss cone refilling rate and the large range of predicted
HVS population sizes. As noted by Vasiliev et al. (2015), it is relatively
straightforward to adjust $H$ and $J$ (Eq. 9) to account for loss cone
depletion (see also Rasskazov et al., 2019).
## 5 Conclusions
Massive black hole binaries (MBHBs) are a natural consequence of galaxy
evolution. Dynamical interactions between stars and an MBHB in the centre of
the Milky Way could eject hypervelocity stars (HVSs) detectable by the Gaia
space satellite. In this work, we use existing HVS observations for the first
time as a probe of a possible supermassive or intermediate mass companion
black hole to Sgr A*, the supermassive black hole located in the Galactic
Centre (GC). Building upon previous work, we realistically simulate the
ejection of HVSs from an MBHB assuming a variety of MBHB mass ratios and
separations. We focus in particular on HVSs which would have appeared in the
radial velocity catalogue of the third data release from Gaia. Considering
that zero HVSs with precise astrometry were unearthed in the radial velocity
catalogue of Gaia DR3 (Marchetti et al., 2022), MBHB configurations which
predict too many HVSs in this data release can be excluded. Our conclusions
are as follows:
* •
The number of HVSs detectable by Gaia depends strongly on the MBHB separation
and companion mass. It is comparatively less sensitive to the shape of the
assumed initial mass function (IMF) (Fig. 3).
* •
For a fiducial Sgr A* companion mass of $4000\,\mathrm{M_{\odot}}$ and MBHB
separation of $0.001\,\mathrm{pc}$, $8\pm 4$ HVSs should have been detected in
the Gaia DR3 radial velocity catalogue with precise astrometry (Fig. 4).
* •
The lack of such HVSs in DR3 excludes a companion within $1\,\mathrm{mpc}$ of
Sgr A* unless it has a mass of $\lesssim 1000\,\mathrm{M_{\odot}}$,
complementing and extending prior constraints on a Sgr A* companion (Fig. 5).
* •
The lack of confident HVS detections in Gaia DR3 also allows us to constrain a
former companion to Sgr A* for the first time. If Sgr A* merged with a
companion in the recent past, either of the following must be true: i) the
former companion had a mass of $\lesssim 500\,\mathrm{M_{\odot}}$, or ii) the
merger must have happened more than $\sim$10-30 Myr ago (Fig. 6).
* •
If Sgr A* has an existing companion or had a former companion, the forthcoming
fourth Gaia data release will contain at most a few tens of HVSs ejected via
the MBHB slingshot mechanism in its radial velocity catalogue with precise
astrometry (Fig. 7).
The constraints we place in this work on a possible companion to Sgr A*,
especially when combined with constraints obtained over the last few years
(Naoz et al., 2020; Reid & Brunthaler, 2020; GRAVITY Collaboration et al.,
2020), appear to be closing the door on the existence of such a companion.
While a massive companion up to $\sim 10^{5}\,\mathrm{M_{\odot}}$ is still
allowed at separations larger than $\sim 0.1\,\mathrm{pc}$, a hardened MBHB in
the GC appears unlikely unless the MBHB mass ratio is extreme. With the future
Gaia data releases and their synergy with both forthcoming and currently
operational spectroscopic facilities and surveys (e.g. WEAVE, Dalton et al.
2012; 4MOST, de Jong et al. 2019; SDSS-V MWM, Kollmeier et al. 2017), more and
more HVSs will be detected. If Sgr A* does not have a companion of significant
mass and did not have one in the recent past, the MBHB slingshot mechanism can
be definitively ruled out as an avenue for HVS ejections and future research can
focus on more realistic mechanisms.
## Acknowledgements
The authors thank J.R. Westernacher-Schneider for helpful discussion. FAE
acknowledges support from the University of Toronto Arts & Science
Postdoctoral Fellowship program and the Dunlap Institute. TM acknowledges an
ESO fellowship. EMR acknowledges that this project has received funding from
the European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement No. 101002511 - VEGA P). JB
acknowledges financial support from NSERC (funding reference number
RGPIN-2020-04712).
## Data Availability
The simulation outputs underpinning this work can be shared upon reasonable
request to the corresponding author. These simulations were produced using the
speedystar package, publicly available at
https://github.com/fraserevans/speedystar.
## References
* Abbott et al. (2018) Abbott T. M. C., et al., 2018, ApJS, 239, 18
* Akiyama et al. (2022) Akiyama K., et al., 2022, ApJ, 930, L15
* Amaro-Seoane et al. (2017) Amaro-Seoane P., et al., 2017, arXiv e-prints, p. arXiv:1702.00786
* Arca-Sedda & Gualandris (2018) Arca-Sedda M., Gualandris A., 2018, MNRAS, 477, 4423
* Baganoff et al. (2003) Baganoff F. K., et al., 2003, ApJ, 591, 891
* Bailer-Jones (2015) Bailer-Jones C. A. L., 2015, PASP, 127, 994
* Bartko et al. (2009) Bartko H., et al., 2009, ApJ, 697, 1741
* Baumgardt et al. (2006) Baumgardt H., Gualandris A., Portegies Zwart S., 2006, MNRAS, 372, 174
* Begelman et al. (1980) Begelman M. C., Blandford R. D., Rees M. J., 1980, Nature, 287, 307
* Bennett & Bovy (2019) Bennett M., Bovy J., 2019, MNRAS, 482, 1417
* Berczik et al. (2006) Berczik P., Merritt D., Spurzem R., Bischof H.-P., 2006, ApJ, 642, L21
* Bessell (1990) Bessell M. S., 1990, PASP, 102, 1181
* Bland-Hawthorn et al. (2019) Bland-Hawthorn J., et al., 2019, ApJ, 886, 45
* Bonetti et al. (2020) Bonetti M., et al., 2020, MNRAS, 493, L114
* Bovy (2015) Bovy J., 2015, ApJS, 216, 29
* Bovy et al. (2016) Bovy J., Rix H.-W., Green G. M., Schlafly E. F., Finkbeiner D. P., 2016, ApJ, 818, 130
* Bower et al. (2003) Bower G. C., Wright M. C. H., Falcke H., Backer D. C., 2003, ApJ, 588, 331
* Bromley et al. (2018) Bromley B. C., Kenyon S. J., Brown W. R., Geller M. J., 2018, ApJ, 868, 25
* Brown (2015) Brown W. R., 2015, ARA&A, 53, 15
* Brown (2019) Brown A. G. A., 2019, in The Gaia Universe. p. 18, doi:10.5281/zenodo.2637972
* Brown et al. (2005) Brown W. R., Geller M. J., Kenyon S. J., Kurtz M. J., 2005, ApJ, 622, L33
* Brown et al. (2006) Brown W. R., Geller M. J., Kenyon S. J., Kurtz M. J., 2006, ApJ, 640, L35
* Brown et al. (2009) Brown W. R., Geller M. J., Kenyon S. J., 2009, ApJ, 690, 1639
* Brown et al. (2012) Brown W. R., Geller M. J., Kenyon S. J., 2012, ApJ, 751, 55
* Brown et al. (2014) Brown W. R., Geller M. J., Kenyon S. J., 2014, ApJ, 787, 89
* Cantat-Gaudin et al. (2022) Cantat-Gaudin T., et al., 2022, arXiv e-prints, p. arXiv:2208.09335
* Cantiello et al. (2021) Cantiello M., Jermyn A. S., Lin D. N. C., 2021, ApJ, 910, 94
* Capuzzo-Dolcetta & Fragione (2015) Capuzzo-Dolcetta R., Fragione G., 2015, MNRAS, 454, 2677
* Castro-Ginard et al. (2023) Castro-Ginard A., et al., 2023, arXiv e-prints, p. arXiv:2303.17738
* Chandrasekhar (1943) Chandrasekhar S., 1943, ApJ, 97, 255
* Chatterjee et al. (2003) Chatterjee P., Hernquist L., Loeb A., 2003, ApJ, 592, 32
* Chatzopoulos et al. (2015) Chatzopoulos S., Fritz T. K., Gerhard O., Gillessen S., Wegg C., Genzel R., Pfuhl O., 2015, MNRAS, 447, 948
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D., 2016, ApJ, 823, 102
* Cropper et al. (2018) Cropper M., et al., 2018, A&A, 616, A5
* Cui & Yu (2014) Cui X., Yu Q., 2014, MNRAS, 437, 777
* D’Orazio & Duffell (2021) D’Orazio D. J., Duffell P. C., 2021, ApJ, 914, L21
* Dalton et al. (2012) Dalton G., et al., 2012, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV. p. 84460P, doi:10.1117/12.925950
* Darbha et al. (2019) Darbha S., Coughlin E. R., Kasen D., Quataert E., 2019, MNRAS, 482, 2132
* Di Matteo et al. (2005) Di Matteo T., Springel V., Hernquist L., 2005, Nature, 433, 604
* Do et al. (2020) Do T., David Martinez G., Kerzendorf W., Feldmeier-Krause A., Arca Sedda M., Neumayer N., Gualandris A., 2020, ApJ, 901, L28
* Dormand & Prince (1980) Dormand J., Prince P., 1980, Journal of Computational and Applied Mathematics, 6, 19
* Dotter (2016) Dotter A., 2016, ApJS, 222, 8
* Dotti et al. (2006) Dotti M., Colpi M., Haardt F., 2006, MNRAS, 367, 103
* Dotti et al. (2007) Dotti M., Colpi M., Haardt F., Mayer L., 2007, MNRAS, 379, 956
* Drimmel & Poggio (2018) Drimmel R., Poggio E., 2018, Research Notes of the American Astronomical Society, 2, 210
* Drimmel et al. (2003) Drimmel R., Cabrera-Lavers A., López-Corredoira M., 2003, A&A, 409, 205
* Du et al. (2019) Du C., Li H., Yan Y., Newberg H. J., Shi J., Ma J., Chen Y., Wu Z., 2019, ApJS, 244, 4
* Edelmann et al. (2005) Edelmann H., Napiwotzki R., Heber U., Christlieb N., Reimers D., 2005, ApJ, 634, L181
* Eisenhauer et al. (2005) Eisenhauer F., et al., 2005, ApJ, 628, 246
* Escala et al. (2005) Escala A., Larson R. B., Coppi P. S., Mardones D., 2005, ApJ, 630, 152
* Evans et al. (2022a) Evans F. A., Marchetti T., Rossi E. M., 2022a, MNRAS, 512, 2350
* Evans et al. (2022b) Evans F. A., Marchetti T., Rossi E. M., 2022b, MNRAS, 517, 3469
* Everall & Boubert (2022) Everall A., Boubert D., 2022, MNRAS, 509, 6205
* Everall et al. (2021) Everall A., Boubert D., Koposov S. E., Smith L., Holl B., 2021, MNRAS, 502, 1908
* Fabricius et al. (2021) Fabricius C., et al., 2021, A&A, 649, A5
* Feldmeier et al. (2014) Feldmeier A., et al., 2014, A&A, 570, A2
* Figer et al. (2003) Figer D. F., et al., 2003, ApJ, 599, 1139
* Fragione & Capuzzo-Dolcetta (2016) Fragione G., Capuzzo-Dolcetta R., 2016, MNRAS, 458, 2596
* Fritz et al. (2016) Fritz T. K., et al., 2016, ApJ, 821, 44
* GRAVITY Collaboration et al. (2018) GRAVITY Collaboration et al., 2018, A&A, 615, L15
* GRAVITY Collaboration et al. (2020) GRAVITY Collaboration et al., 2020, A&A, 636, L5
* Gaia Collaboration et al. (2016) Gaia Collaboration et al., 2016, A&A, 595, A2
* Gaia Collaboration et al. (2021a) Gaia Collaboration et al., 2021a, A&A, 649, A1
* Gaia Collaboration et al. (2021b) Gaia Collaboration et al., 2021b, A&A, 649, A1
* Gaia Collaboration et al. (2022) Gaia Collaboration et al., 2022, arXiv e-prints, p. arXiv:2208.00211
* Generozov (2021) Generozov A., 2021, MNRAS, 501, 3088
* Generozov & Madigan (2020) Generozov A., Madigan A.-M., 2020, ApJ, 896, 137
* Genzel et al. (2010) Genzel R., Eisenhauer F., Gillessen S., 2010, Reviews of Modern Physics, 82, 3121
* Ghez et al. (2008) Ghez A. M., et al., 2008, ApJ, 689, 1044
* Gillessen et al. (2009) Gillessen S., Eisenhauer F., Trippe S., Alexander T., Genzel R., Martins F., Ott T., 2009, ApJ, 692, 1075
* Goodman & Tan (2004) Goodman J., Tan J. C., 2004, ApJ, 608, 108
* Gould & Quillen (2003) Gould A., Quillen A. C., 2003, ApJ, 592, 935
* Green et al. (2015) Green G. M., et al., 2015, ApJ, 810, 25
* Gualandris & Merritt (2009) Gualandris A., Merritt D., 2009, ApJ, 705, 361
* Gualandris et al. (2005) Gualandris A., Portegies Zwart S., Sipior M. S., 2005, MNRAS, 363, 223
* Gualandris et al. (2010) Gualandris A., Gillessen S., Merritt D., 2010, MNRAS, 409, 1146
* Gualandris et al. (2012) Gualandris A., Dotti M., Sesana A., 2012, MNRAS, 420, L38
* Gualandris et al. (2017) Gualandris A., Read J. I., Dehnen W., Bortolas E., 2017, MNRAS, 464, 2301
* Hansen (2003) Hansen B. M. S., 2003, ApJ, 582, 915
* Hansen & Milosavljević (2003) Hansen B. M. S., Milosavljević M., 2003, ApJ, 593, L77
* Hattori et al. (2018) Hattori K., Valluri M., Bell E. F., Roederer I. U., 2018, ApJ, 866, 121
* Heber et al. (2008) Heber U., Edelmann H., Napiwotzki R., Altmann M., Scholz R. D., 2008, A&A, 483, L21
* Hills (1988) Hills J. G., 1988, Nature, 331, 687
* Hirsch et al. (2005) Hirsch H. A., Heber U., O’Toole S. J., Bresolin F., 2005, A&A, 444, L61
* Holley-Bockelmann & Khan (2015) Holley-Bockelmann K., Khan F. M., 2015, ApJ, 810, 139
* Huang et al. (2017) Huang Y., et al., 2017, ApJ, 847, L9
* Huang et al. (2021) Huang Y., Li Q., Zhang H., Li X., Sun W., Chang J., Dong X., Liu X., 2021, ApJ, 907, L42
* Hurley et al. (2000) Hurley J. R., Pols O. R., Tout C. A., 2000, MNRAS, 315, 543
* Igoshev et al. (2023) Igoshev A. P., Perets H., Hallakoun N., 2023, MNRAS, 518, 6223
* Irrgang et al. (2010) Irrgang A., Przybilla N., Heber U., Nieva M. F., Schuh S., 2010, ApJ, 711, 138
* Irrgang et al. (2019) Irrgang A., Geier S., Heber U., Kupfer T., Fürst F., 2019, A&A, 628, L5
* Jordi et al. (2010) Jordi C., et al., 2010, A&A, 523, A48
* Katz et al. (2019) Katz D., et al., 2019, A&A, 622, A205
* Katz et al. (2022) Katz D., et al., 2022, arXiv e-prints, p. arXiv:2206.05902
* Kenyon et al. (2008) Kenyon S. J., Bromley B. C., Geller M. J., Brown W. R., 2008, ApJ, 680, 312
* Khan et al. (2013) Khan F. M., Holley-Bockelmann K., Berczik P., Just A., 2013, ApJ, 773, 100
* Khan et al. (2020) Khan F. M., Mirza M. A., Holley-Bockelmann K., 2020, MNRAS, 492, 256
* Kobayashi et al. (2012) Kobayashi S., Hainick Y., Sari R., Rossi E. M., 2012, ApJ, 748, 105
* Kollmeier et al. (2017) Kollmeier J. A., et al., 2017, arXiv e-prints, p. arXiv:1711.03234
* Koposov et al. (2020) Koposov S. E., et al., 2020, MNRAS, 491, 2465
* Kormendy & Ho (2013) Kormendy J., Ho L. C., 2013, ARA&A, 51, 511
* Levin (2006) Levin Y., 2006, ApJ, 653, 1203
* Levin & Beloborodov (2003) Levin Y., Beloborodov A. M., 2003, ApJ, 590, L33
* Levin et al. (2005) Levin Y., Wu A., Thommes E., 2005, ApJ, 635, 341
* Li et al. (2018) Li Y.-B., et al., 2018, AJ, 156, 87
* Li et al. (2019) Li T. S., et al., 2019, MNRAS, 490, 3508
* Li et al. (2021) Li Y.-B., et al., 2021, ApJS, 252, 3
* Li et al. (2022) Li H., Du C., Ma J., Shi J., Newberg H. J., Piao Y., 2022, ApJ, 933, L13
* Lightman & Shapiro (1977) Lightman A. P., Shapiro S. L., 1977, ApJ, 211, 244
* Löckmann et al. (2008) Löckmann U., Baumgardt H., Kroupa P., 2008, ApJ, 683, L151
* Lu et al. (2009) Lu J. R., Ghez A. M., Hornstein S. D., Morris M. R., Becklin E. E., Matthews K., 2009, ApJ, 690, 1463
* Lu et al. (2013) Lu J. R., Do T., Ghez A. M., Morris M. R., Yelda S., Matthews K., 2013, ApJ, 764, 155
* Luna et al. (2019) Luna A., Minniti D., Alonso-García J., 2019, ApJ, 887, L39
* Madigan et al. (2009) Madigan A.-M., Levin Y., Hopman C., 2009, ApJ, 697, L44
* Madigan et al. (2011) Madigan A.-M., Hopman C., Levin Y., 2011, ApJ, 738, 99
* Madigan et al. (2014) Madigan A.-M., Pfuhl O., Levin Y., Gillessen S., Genzel R., Perets H. B., 2014, ApJ, 784, 23
* Magnan et al. (2022) Magnan N., Fouvry J.-B., Pichon C., Chavanis P.-H., 2022, MNRAS, 514, 3452
* Magorrian et al. (1998) Magorrian J., et al., 1998, AJ, 115, 2285
* Marchetti (2021) Marchetti T., 2021, MNRAS, 503, 1374
* Marchetti et al. (2018) Marchetti T., Contigiani O., Rossi E. M., Albert J. G., Brown A. G. A., Sesana A., 2018, MNRAS, 476, 4697
* Marchetti et al. (2019) Marchetti T., Rossi E. M., Brown A. G. A., 2019, MNRAS, 490, 157
* Marchetti et al. (2022) Marchetti T., Evans F. A., Rossi E. M., 2022, MNRAS, 515, 767
* Marrone et al. (2007) Marrone D. P., Moran J. M., Zhao J.-H., Rao R., 2007, ApJ, 654, L57
* Marshall et al. (2006) Marshall D. J., Robin A. C., Reylé C., Schultheis M., Picaud S., 2006, A&A, 453, 635
* Mastrobuono-Battisti et al. (2023) Mastrobuono-Battisti A., Ogiya G., Hahn O., Schultheis M., 2023, arXiv e-prints, p. arXiv:2303.12826
* McKernan et al. (2012) McKernan B., Ford K. E. S., Lyra W., Perets H. B., 2012, MNRAS, 425, 460
* McMillan (2017) McMillan P. J., 2017, MNRAS, 465, 76
* Merritt (2001) Merritt D., 2001, ApJ, 556, 245
* Merritt (2013) Merritt D., 2013, Dynamics and Evolution of Galactic Nuclei
* Mikkola & Merritt (2008) Mikkola S., Merritt D., 2008, AJ, 135, 2398
* Miller & Hamilton (2002) Miller M. C., Hamilton D. P., 2002, MNRAS, 330, 232
* Milosavljević & Merritt (2003) Milosavljević M., Merritt D., 2003, ApJ, 596, 860
* Mirza et al. (2017) Mirza M. A., Tahir A., Khan F. M., Holley-Bockelmann H., Baig A. M., Berczik P., Chishtie F., 2017, MNRAS, 470, 940
* Muñoz et al. (2019) Muñoz D. J., Miranda R., Lai D., 2019, ApJ, 871, 84
* Mukherjee et al. (2023) Mukherjee D., Zhu Q., Ogiya G., Rodriguez C. L., Trac H., 2023, MNRAS, 518, 4801
* Naoz et al. (2020) Naoz S., Will C. M., Ramirez-Ruiz E., Hees A., Ghez A. M., Do T., 2020, ApJ, 888, L8
* O’Leary & Loeb (2007) O’Leary R. M., Loeb A., 2007, MNRAS, 383, 86
* Palladino et al. (2014) Palladino L. E., Schlesinger K. J., Holley-Bockelmann K., Allende Prieto C., Beers T. C., Lee Y. S., Schneider D. P., 2014, ApJ, 780, 7
* Pelupessy et al. (2013) Pelupessy F. I., van Elteren A., de Vries N., McMillan S. L. W., Drost N., Portegies Zwart S. F., 2013, A&A, 557, A84
* Peters (1964) Peters P. C., 1964, Physical Review, 136, 1224
* Portegies Zwart & McMillan (2002) Portegies Zwart S. F., McMillan S. L. W., 2002, ApJ, 576, 899
* Portegies Zwart & McMillan (2018) Portegies Zwart S., McMillan S., 2018, Astrophysical Recipes; The art of AMUSE, doi:10.1088/978-0-7503-1320-9.
* Portegies Zwart et al. (2006) Portegies Zwart S. F., Baumgardt H., McMillan S. L. W., Makino J., Hut P., Ebisuzaki T., 2006, ApJ, 641, 319
* Portegies Zwart et al. (2009) Portegies Zwart S., et al., 2009, New Astron., 14, 369
* Portegies Zwart et al. (2013) Portegies Zwart S., McMillan S. L. W., van Elteren E., Pelupessy I., de Vries N., 2013, Computer Physics Communications, 184, 456
* Prudil et al. (2022) Prudil Z., et al., 2022, A&A, 664, A148
* Quataert (2004) Quataert E., 2004, ApJ, 613, 322
* Quataert & Gruzinov (2000) Quataert E., Gruzinov A., 2000, ApJ, 545, 842
* Quinlan (1996) Quinlan G. D., 1996, New Astron., 1, 35
* Rasskazov & Merritt (2017) Rasskazov A., Merritt D., 2017, ApJ, 837, 135
* Rasskazov et al. (2019) Rasskazov A., Fragione G., Leigh N. W. C., Tagawa H., Sesana A., Price-Whelan A., Rossi E. M., 2019, ApJ, 878, 17
* Rauch & Tremaine (1996) Rauch K. P., Tremaine S., 1996, New Astron., 1, 149
* Reid & Brunthaler (2004) Reid M. J., Brunthaler A., 2004, ApJ, 616, 872
* Reid & Brunthaler (2020) Reid M. J., Brunthaler A., 2020, ApJ, 892, 39
* Riello et al. (2021) Riello M., et al., 2021, A&A, 649, A3
* Rossi et al. (2014) Rossi E. M., Kobayashi S., Sari R., 2014, ApJ, 795, 125
* Sari et al. (2010) Sari R., Kobayashi S., Rossi E. M., 2010, ApJ, 708, 605
* Sartoretti et al. (2022) Sartoretti P., et al., 2022, arXiv e-prints, p. arXiv:2206.05725
* Schlafly et al. (2018) Schlafly E. F., et al., 2018, ApJS, 234, 39
* Schödel et al. (2007) Schödel R., et al., 2007, A&A, 469, 125
* Schödel et al. (2009) Schödel R., Merritt D., Eckart A., 2009, A&A, 502, 91
* Schödel et al. (2014) Schödel R., Feldmeier A., Neumayer N., Meyer L., Yelda S., 2014, Classical and Quantum Gravity, 31, 244007
* Schönrich et al. (2015) Schönrich R., Aumer M., Sale S. E., 2015, ApJ, 812, L21
* Schultheis et al. (2021) Schultheis M., et al., 2021, A&A, 650, A191
* Sesana et al. (2006) Sesana A., Haardt F., Madau P., 2006, ApJ, 651, 392
* Sesana et al. (2007a) Sesana A., Haardt F., Madau P., 2007a, MNRAS, 379, L45
* Sesana et al. (2007b) Sesana A., Haardt F., Madau P., 2007b, ApJ, 660, 546
* Sesana et al. (2011) Sesana A., Gualandris A., Dotti M., 2011, MNRAS, 415, L35
* Shen et al. (2018) Shen K. J., et al., 2018, ApJ, 865, 15
* Sormani et al. (2022) Sormani M. C., et al., 2022, MNRAS, 512, 1857
* Szölgyén et al. (2021) Szölgyén Á., Máthé G., Kocsis B., 2021, ApJ, 919, 140
* Tillich et al. (2009) Tillich A., Przybilla N., Scholz R.-D., Heber U., 2009, A&A, 507, L37
* Trippe et al. (2008) Trippe S., et al., 2008, A&A, 492, 419
* Varisco et al. (2021) Varisco L., Bortolas E., Dotti M., Sesana A., 2021, MNRAS, 508, 1533
* Vasiliev et al. (2015) Vasiliev E., Antonini F., Merritt D., 2015, ApJ, 810, 49
* Volonteri et al. (2003) Volonteri M., Haardt F., Madau P., 2003, ApJ, 582, 559
* Wang et al. (2014) Wang L., Berczik P., Spurzem R., Kouwenhoven M. B. N., 2014, ApJ, 780, 164
* Yelda et al. (2014) Yelda S., Ghez A. M., Lu J. R., Do T., Meyer L., Morris M. R., Matthews K., 2014, ApJ, 783, 131
* Yu & Tremaine (2003) Yu Q., Tremaine S., 2003, ApJ, 599, 1129
* Zhang et al. (2013) Zhang F., Lu Y., Yu Q., 2013, ApJ, 768, 153
* Zheng et al. (2021) Zheng X., Lin D. N. C., Mao S., 2021, ApJ, 914, 33
* Zhong et al. (2014) Zhong J., et al., 2014, ApJ, 789, L2
* Zier & Biermann (2001) Zier C., Biermann P. L., 2001, A&A, 377, 23
* de Jong et al. (2019) de Jong R. S., et al., 2019, The Messenger, 175, 3
# Simulation of the spatial shift in detector response for polarized protons
within a calorimeter
A. Blitstein, B. Wojtsekhowski
Department of Physics and Astronomy, University of North Carolina, Chapel Hill,
North Carolina, 27599, USA
Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA
###### Abstract
Measurement of the helicity dependent elastic electron-proton scattering cross
section provides a key means of investigating parity violation within the
proton. However, such measurements exhibit potential instrumental effects
associated with the detection of polarized recoiled protons. In particular,
spin-orbit interactions within a massive detector induce a systematic spatial
shift in the detector signal. In this study, we determine the size of this
shift using the Geant4 simulation toolkit. For a typical hadron calorimeter,
we found a polarization dependent shift on the order of 0.01-0.1 mm, multiple
orders of magnitude smaller than the typical spatial resolution seen in
hadronic calorimeters. Additionally, we provide the custom modifications
required of the Geant4 source code to implement the quasi-elastic scattering
of polarized protons incident on nuclei in the detector. The modifications are
readily extendable to generic matter sources, and can be used for the study of
additional spin dependent observables in Geant4.
###### keywords:
Polarized Protons , Spin-Orbit Interaction , Geant4
###### PACS:
25.30.Bf , 13.40.Gp , 14.20.Dh
Journal: NIM A
## 1 Introduction
Much of what is currently known about the structure of the proton has been
ascertained from electron scattering experiments [1, 2, 3, 4]. In such
experiments, a beam of electrons is incident on a proton target. By measuring
the distribution of kinematic variables associated with the scattered
particles, such as their momentum and energy, one obtains data on various
proton form factors. These form factors are then used to infer the spatial
distribution of quarks and gluons within the proton. In this way, the use of
polarized beams and polarized targets allows one to probe the origin of the
spin of the nucleon.
The observation of the small value of the quark polarization in the nucleon
(“spin crisis”) [5] raises a question about the strange quark contribution to
the spin of the nucleon. The parity violation present in elastic electron-
proton scattering allows for the determination of the proton form factors
related to the strange quark [6]. In such experiments the cross section beam
helicity asymmetry is measured. The detected particles are also polarized,
which could require systematic corrections to the measurement results on
account of spin-dependent interactions within the detector. These instrumental
effects become significant for the detection of polarized recoiled protons.
Consequently, it is necessary to ensure that these spin-dependent interactions
within the detector do not bias the coordinate measurement of the polarized
protons. It is
known that polarized protons are affected by a spin-orbit interaction when
scattering off nuclei in matter [7, 8]. This spin-dependent, quasi-elastic
scattering can be accounted for via the introduction of an empirically known
asymmetry in the azimuthal distribution of scattered protons. In many cases,
this data is already available. Here, we make use of data [9] for hydrocarbon
based targets obtained with a standard hadron sampling calorimeter.
The specific motivation for the current study is a possible experiment
proposed in Ref. [10]. In what follows, we present a polarization-dependent
prediction for the expected size of the spatial shift in detector signal
induced by spin-orbit interactions within the proposed calorimeter. To do so,
simulations were run in Geant4, a Monte-Carlo based toolkit for simulating
nuclear physics that is sourced in C++ [11]. Currently, generic Geant4 has no
default implementation for many of the spin-dependent interactions that are
necessary to accurately simulate parity violating effects in nuclear physics.
Consequently, it was necessary to introduce custom modifications to the source
code of Geant4 associated with the modeling of the elastic scattering of
protons incident on nuclei in matter. This implementation is readily
generalizable to account for alternative modifications to the azimuthal part
of the scattering statistics, and can be used in further Geant4 studies
involving spin dependent observables (such as those seen in Ref. [12]).
## 2 The Propagation of Polarized Protons in Matter
Through their interaction with an incident beam of polarized particles,
protons that undergo elastic scattering with the beam become polarized as
well. In particular, for elastic $e-p$ scattering in the single-photon
exchange approximation, the transverse polarization $P_{t}$ of the recoiled
protons for a longitudinally polarized electron beam is:
$P_{t}=-2h\,P_{e}\sqrt{\tau(1+\tau)}\,\,\tan(\frac{\theta_{e}}{2})\frac{G_{{}_{E}}\cdot\,G_{{}_{M}}}{G_{{}_{E}}^{2}+(\tau/\epsilon)G_{{}_{M}}^{2}},$
(1)
where $P_{e}$ is the electron beam polarization, $h=\pm 1$ the beam helicity,
$\theta_{e}$ the electron scattering angle, $\tau=Q^{2}/4m_{p}^{2}$ ($Q^{2}$
the negative four-momentum transfer squared, $m_{p}$ the proton mass),
$\epsilon$ the virtual photon polarization parameter, and $G_{E}$ and $G_{M}$ the electric and magnetic form factors [4].
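As a concrete illustration, a minimal C++ sketch of Eq. 1 is given below; the function simply evaluates the expression, and the numbers in the example call are illustrative placeholders rather than measured kinematics or form factor values.

```cpp
// Minimal sketch: evaluate Eq. 1 for the transverse polarization of the
// recoil proton. All example numbers below are illustrative placeholders.
#include <cmath>
#include <cstdio>

double recoilTransversePolarization(double h, double Pe, double tau,
                                    double epsilon, double thetaE,
                                    double GE, double GM) {
  // Eq. 1: P_t = -2 h P_e sqrt(tau(1+tau)) tan(theta_e/2)
  //              * G_E G_M / (G_E^2 + (tau/epsilon) G_M^2)
  return -2.0 * h * Pe * std::sqrt(tau * (1.0 + tau)) *
         std::tan(0.5 * thetaE) * GE * GM /
         (GE * GE + (tau / epsilon) * GM * GM);
}

int main() {
  const double pi = 3.14159265358979323846;
  // Hypothetical inputs: h = +1, 85% beam polarization, tau = 0.85,
  // epsilon = 0.8, theta_e = 30 degrees, rough dipole-like form factors.
  double Pt = recoilTransversePolarization(+1.0, 0.85, 0.85, 0.8,
                                           30.0 * pi / 180.0, 0.04, 0.11);
  std::printf("P_t = %f\n", Pt);
  return 0;
}
```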
To first order, when polarized protons elastically scatter off nuclei in
matter, they experience a spin-orbit interaction [7, 8]. Depending on the path
they take around a given nucleus, the direction of their angular momentum
relative to the nucleus differs, as seen in Fig. 1. If the spin-orbit
interaction is such that it prefers anti-alignment of proton spin with its
angular momentum, it will preferentially scatter off the nucleus in the
direction that fulfills that requirement (the top/red trajectory in Fig. 1).
For polarized protons in matter, it is empirically known that protons do
preferentially scatter in the direction that promotes anti-alignment of their
spin and angular momentum, the same direction as their spin axis crossed with
their incident direction (see Ref. [7] for a summary of the spin-orbit
interaction for nucleon-nucleus scattering).
Figure 1: Diagram illustrating the spin-orbit interaction felt by polarized
protons elastically scattering off nuclei in matter. It is known empirically
that protons prefer to take the path with $\bm{L}\parallel-\bm{P}$, indicated
by the top/red trajectory. Color online.
The elastic scattering of polarized protons incident on spinless nuclei
depicted in Fig. 1 has a few key features which must be considered when
modeling the interaction. First, the spin-orbit piece of the interaction
should not modify the distribution of the scattering angle, $\theta$, for
scattered protons. This piece of the scattering statistics reflects the
typical treatment of protons elastically scattering off nuclei, providing an
important benchmark that our modified interaction should reproduce.
Accordingly, all that will be changed is the distribution associated with the
azimuthal angle, $\phi$, which sets the scattering direction in the plane
normal to the incident direction.
For standard proton-nucleus elastic scattering, the azimuthal distribution is
taken to be uniform, with no preference in direction for scattering. To add in
such a preference, we will need to introduce an asymmetry in the azimuthal
distribution for scattered protons. Taking the z-axis to be along the incident
direction and proton spin to be measured along the y-axis, the scattering
angular distribution takes the form
$N(\theta,\phi)=N_{0}(\theta)\left(1+A_{y}P_{y}\cos\phi\right),$ (2)
where $N_{0}(\theta)$ is the angular distribution in the absence of proton
polarization [9]. The side (transverse) polarization, $P_{y}$, is defined as
$P_{y}=\frac{N_{y}^{\uparrow}-N_{y}^{\downarrow}}{N_{y}^{\uparrow}+N_{y}^{\downarrow}},$
(3)
where $N_{y}^{\uparrow/\downarrow}$ is the total number of spin-up/down
protons with respect to the y-axis. One can also view the side polarization as
twice the average y-component of the incident proton spin. The other variable that appears in Eq. 2, the analyzing power $A_{y}$, parameterizes any additional azimuthal dependence seen in experiment or theory.
For our purposes, we are primarily interested in the elastic scattering of
protons incident on nuclei within a plastic scintillator. For protons incident
with lab frame momentum $\vec{p}_{\text{lab}}=p_{\text{lab}}\,\,\hat{z}$, it
is known from experiment that the analyzing power is well approximated by
$A_{y}(\theta)=\frac{\sum_{n=1}^{4}c_{n}\left(p_{\text{lab}}\sin\theta\right)^{n}}{p_{\text{lab}}},$
(4)
where $c_{1}=3.02\pm 0.13$, $c_{2}=-7.33\pm 0.66$, $c_{3}=6.17\pm 1.11$, and
$c_{4}=-1.74\pm 0.59$, with $p_{\text{lab}}$ in GeV/c (see Ref. [9]).
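A short sketch of this parameterization, using the central values of the coefficients quoted above (uncertainties neglected), is:

```cpp
// Sketch of the empirical analyzing-power parameterization in Eq. 4,
// with p_lab in GeV/c; returns A_y(theta).
#include <cmath>

double analyzingPower(double pLab, double theta) {
  static const double c[4] = {3.02, -7.33, 6.17, -1.74};  // central values
  const double x = pLab * std::sin(theta);  // transverse momentum, GeV/c
  double sum = 0.0, xn = 1.0;
  for (int n = 0; n < 4; ++n) {
    xn *= x;            // x^(n+1)
    sum += c[n] * xn;   // c_{n+1} * (p_lab sin(theta))^(n+1)
  }
  return sum / pLab;
}
```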
Looking at Eq. 2, we see that the effect of the added asymmetry in the
azimuthal distribution is to preferentially scatter protons in the direction
corresponding to $\phi=0$ (where the extra term has a maximum) while making it
less likely to scatter in the direction with $\phi=\pm\pi$ (where the extra
term has a minimum). Further, the additional factor averages to 1 over $\phi\in(-\pi,\pi)$, leaving the scattering-angle part of the angular distribution unchanged.
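Explicitly, integrating Eq. 2 over the azimuth confirms this,
$\int_{-\pi}^{\pi}N(\theta,\phi)\,\differential{\phi}=N_{0}(\theta)\int_{-\pi}^{\pi}\left(1+A_{y}P_{y}\cos\phi\right)\differential{\phi}=2\pi N_{0}(\theta),$
so the $\theta$ dependence remains governed entirely by $N_{0}(\theta)$.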
## 3 Geant4 Custom Modifications
To model the elastic scattering of polarized protons in matter, simulations
were run in Geant4. Geant4 is a toolkit composed of C++ source files intended
for simulating nuclear physics via Monte Carlo techniques (see Ref. [11] for
more than what is presented here). Interactions between particles are handled
in a generic way by associating with each an interaction length that goes as
the inverse of the interaction cross section, thereby allowing for modular
adjustments to its code.
A generic Geant4 simulation is called a run, which is composed of a specified
number of events, or repetitions of the user defined initial conditions. Each
event is then split up into multiple tracks for each of the primary particles
and any secondaries that are generated. Then, each track is updated in small
increments called steps, implementing dynamic changes to each of the particles
with a mix of at rest (e.g. radioactive decay), continuous (e.g. ionization),
and discrete (e.g. pair production) processes.
The typical Geant4 user need only modify a few concrete instances of the core
base classes which guide the workflow of a typical simulation. Changes to
detector geometry and materials are handled in a concrete instance of the
G4VUserDetectorConstruction class. Particle definitions and their interactions
are set in an instance of the G4VUserPhysicsList class. Finally, event
initialization, such as setting the initial particles and their kinematic
properties, is specified in an instance of the G4VUserActionInitialization
class. Though the aforementioned classes are the only mandatory classes the
user should create, additional classes can be defined, which allow one to
interface with the particles at the beginning and end of each run, event,
track, and step. These are set in concrete instances of the G4UserRunAction,
G4UserEventAction, G4UserTrackingAction, and G4UserSteppingAction base classes
respectively.
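For orientation, a minimal sketch of how these classes are typically registered with the run manager is shown below; DetectorConstruction and ActionInitialization are placeholder names standing in for the user's concrete subclasses.

```cpp
// Minimal sketch of a Geant4 main() wiring together the mandatory user
// classes described above. DetectorConstruction and ActionInitialization
// are hypothetical concrete subclasses of G4VUserDetectorConstruction and
// G4VUserActionInitialization, respectively.
#include "G4RunManager.hh"
#include "FTFP_BERT.hh"
#include "DetectorConstruction.hh"   // hypothetical user class
#include "ActionInitialization.hh"   // hypothetical user class

int main() {
  auto* runManager = new G4RunManager;

  runManager->SetUserInitialization(new DetectorConstruction);  // geometry and materials
  runManager->SetUserInitialization(new FTFP_BERT);             // physics list
  runManager->SetUserInitialization(new ActionInitialization);  // primaries and user actions

  runManager->Initialize();
  runManager->BeamOn(1000000);  // one run of 10^6 events

  delete runManager;
  return 0;
}
```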
While users usually only need to associate predefined physics processes with
the particles they involve, this fails when Geant4 does not have a default
implementation for the interaction of interest. As it turns out, this is the
case for the elastic scattering of polarized protons incident on nuclei. To
implement the interaction, it is necessary to directly modify the source code
containing the default implementation of hadron elastic scattering,
G4HadronElasticProcess.cc.
Within the default code, the azimuthal angle is sampled uniformly from $-\pi$ to
$\pi$. In order to account for the asymmetry present in Eq. 2, we instead need
to sample $\phi$ from the following normalized probability density function
$F(\phi)=\frac{1}{2\pi}\left(1+A_{y}P_{y}\cos\phi\right).$ (5)
In the modified code, this is accomplished by first integrating $F(\phi)$ to
get the cumulative distribution function,
$Q(\phi)=\int_{-\pi}^{\phi}F(\phi^{\prime})\differential{\phi^{\prime}}=\frac{1}{2}+\frac{\phi}{2\pi}+\frac{A_{y}P_{y}}{2\pi}\sin\phi.$
(6)
Then, a random sample $\phi_{s}$ from $F(\phi)$ can be obtained by first
taking a random sample $x_{s}$ from the uniform distribution on $[0,1]$,
$\mathcal{U}(0,1)$, followed by solving the implicit equation
$x_{s}=Q(\phi_{s})=\frac{1}{2}+\frac{\phi_{s}}{2\pi}+\frac{A_{y}P_{y}}{2\pi}\sin\phi_{s},$
(7)
for $\phi_{s}$ such that $\phi_{s}=Q^{-1}(x_{s})$. In the modified code, Eq. 7
is solved using the bisection method with an accuracy of $\pi/2^{11}$. We find
the sampling of $\phi_{s}$ from $F(\phi)$ via the bisection method induces a
10% increase in computation time relative to the default code. As we only
require a few days of computation time to produce sufficiently small
systematic uncertainty, we choose not to employ a more efficient sampler such
as the standard rejection method.
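A minimal sketch of this inverse-CDF sampling step, written with standard C++ facilities rather than the actual Geant4 source, is:

```cpp
// Sketch of the sampling described above: draw x_s ~ U(0,1), then solve
// x_s = Q(phi_s) (Eq. 7) for phi_s by bisection on [-pi, pi].
#include <cmath>
#include <random>

double samplePhi(double AyPy, std::mt19937& rng) {
  const double pi = 3.14159265358979323846;
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  const double xs = uniform(rng);

  // Q(phi) - x_s is monotonically increasing in phi for |A_y P_y| <= 1,
  // so bisection is guaranteed to converge.
  auto residual = [&](double phi) {
    return 0.5 + phi / (2.0 * pi) + AyPy * std::sin(phi) / (2.0 * pi) - xs;
  };

  double lo = -pi, hi = pi;
  while (hi - lo > pi / 2048.0) {   // accuracy of pi / 2^11, as in the text
    const double mid = 0.5 * (lo + hi);
    if (residual(mid) < 0.0) lo = mid; else hi = mid;
  }
  return 0.5 * (lo + hi);
}
```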
Though we now have a procedure to sample $\phi$ from an asymmetrical
distribution, we need to be careful about what azimuth $\phi=0$ corresponds
to. In Sec. 2, this was the azimuth of the cross product of the incident
proton’s spin axis with its incident direction. Geant4, however, assigns this azimuth in a coordinate system that is rotated differently for each scattering event.
This analysis frame, in which the scattering angles are set, is defined to be
the right handed coordinate system with the z-axis along the direction of
propagation of the proton immediately before the scattering event, and the
x-axis chosen such that the proton’s direction at the start of the simulation
is contained within the xz plane of this new coordinate system. This still
leaves us with two possible choices for the direction of the x-axis, so the
unique analysis frame is specified as the one in which the inner product of
the unit vector along the x-axis with the proton direction at the start of the
simulation is positive. We note that the proton direction at the start of the
simulation and the proton direction immediately before the scattering event
will generally differ due to continuous Geant4 processes. Refer to Fig. 2 for
an example of this choice of analysis frame.
Figure 2: Depiction of the analysis frame used to set scattering angles in
Geant4. Color online.
In Geant4, three vectors are defined as instances of the G4ThreeVector class.
To rotate vectors from the analysis frame into the default lab frame, the
public member function rotateUz(indir) is used, where indir is a normalized
G4ThreeVector pointing in the direction of the proton immediately before
elastic scattering. To determine the azimuth corresponding to $\phi=0$ in the
analysis frame, we will need to transform the direction along which the spin
of the proton is measured from the lab frame into the analysis frame. This is
encoded by the inverse of the rotation accomplished by rotateUz(indir), which
had to be added into the code as it was not one of the default G4ThreeVector
member functions. More specifically, if
$\bm{v}=(v_{1},v_{2},v_{3})^{\text{T}}$ is a vector and indir =
$(u_{1},u_{2},u_{3})^{\text{T}}$ a unit vector, we can represent the action of
rotateUz(indir) on $\bm{v}$, v.rotateUz(indir), as
$\matrixquantity(\frac{u_{1}u_{3}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&-\frac{u_{2}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&u_{1}\\ \frac{u_{2}u_{3}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&\frac{u_{1}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&u_{2}\\ -\sqrt{u_{1}^{2}+u_{2}^{2}}&0&u_{3})\matrixquantity(v_{1}\\ v_{2}\\ v_{3})=\bm{R}\bm{v}$ (8)
We then added the inverse rotation, rotateUzInv(indir), which applies $\bm{R}^{-1}=\bm{R}^{\text{T}}$:
$\matrixquantity(\frac{u_{1}u_{3}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&\frac{u_{2}u_{3}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&-\sqrt{u_{1}^{2}+u_{2}^{2}}\\ -\frac{u_{2}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&\frac{u_{1}}{\sqrt{u_{1}^{2}+u_{2}^{2}}}&0\\ u_{1}&u_{2}&u_{3})\matrixquantity(v_{1}\\ v_{2}\\ v_{3})=\bm{R}^{-1}\bm{v}.$
(9)
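A minimal sketch of the added inverse rotation, written for a plain 3-vector rather than as a G4ThreeVector member function, is shown below; since $\bm{R}$ is orthogonal, applying the transpose of Eq. 8 suffices.

```cpp
// Sketch of the inverse rotation of Eq. 9 for a plain 3-vector. Since R in
// Eq. 8 is orthogonal, the inverse is simply the transpose.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 rotateUzInv(const Vec3& v, const Vec3& u) {  // u must be a unit vector
  const double up = std::sqrt(u[0] * u[0] + u[1] * u[1]);
  if (up == 0.0) return v;  // u along +/- z: degenerate case, handled trivially here
  return {
      (u[0] * u[2] * v[0] + u[1] * u[2] * v[1]) / up - up * v[2],
      (-u[1] * v[0] + u[0] * v[1]) / up,
      u[0] * v[0] + u[1] * v[1] + u[2] * v[2]};
}
```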
Since we take proton polarization to be measured along the y-axis of the lab
frame, we first get the components of the $\hat{y}$ unit vector in the
analysis frame using Eq. 9. Taking a cross product of this vector with a unit
vector along the z-axis in the analysis frame then gives a vector whose
azimuth corresponds to $\phi=0$ when sampling from Eq. 5. To take this into
account, we find the current azimuthal angle of this new vector in the
analysis frame, add it to our sampled $\phi_{s}$ from before, and set that as
the new azimuthal angle in the analysis frame. After some (default) post
processing, the scattered proton momentum is rotated back into the lab frame
through Eq. 8 and changes to the kinematic properties of the particles
involved are set by the Geant4 code.
In order to actually carry out these changes, one must ensure they are
changing the G4ThreeVector outdir that is being passed to the
ProposeMomentumDirection() member function associated with the variable
theTotalResult, which gets returned by the
G4HadronElasticProcess::PostStepDoIt() process. Accordingly, the changes to
outdir can be made anywhere between its initial definition and setting of the
proposed momentum direction in the default G4HadronElasticProcess.cc source
code.
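The sequence of operations described in the two preceding paragraphs can be summarized in the following sketch, which reuses the rotateUzInv helper from the sketch after Eq. 9 and plain 3-vectors in place of G4ThreeVector; all other names (indir, spinLab, phiS) are illustrative and do not correspond to identifiers in the Geant4 source.

```cpp
// Schematic of the azimuth handling described above, with plain 3-vectors.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
Vec3 rotateUzInv(const Vec3& v, const Vec3& u);  // see the sketch after Eq. 9

// Forward rotation of Eq. 8 (analysis frame -> lab frame); u a unit vector.
Vec3 rotateUz(const Vec3& v, const Vec3& u) {
  const double up = std::sqrt(u[0] * u[0] + u[1] * u[1]);
  if (up == 0.0) return v;
  return {(u[0] * u[2] * v[0] - u[1] * v[1]) / up + u[0] * v[2],
          (u[1] * u[2] * v[0] + u[0] * v[1]) / up + u[1] * v[2],
          -up * v[0] + u[2] * v[2]};
}

Vec3 cross(const Vec3& a, const Vec3& b) {
  return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]};
}

// Given the pre-scattering direction indir, sampled polar angle theta,
// sampled azimuth phiS (from Eq. 5), and the lab-frame spin axis, return
// the scattered direction in the lab frame.
Vec3 scatteredDirection(const Vec3& indir, double theta, double phiS,
                        const Vec3& spinLab) {
  Vec3 spinA = rotateUzInv(spinLab, indir);          // spin axis, analysis frame
  Vec3 phiZero = cross(spinA, Vec3{0.0, 0.0, 1.0});  // phi = 0 direction
  double phi = std::atan2(phiZero[1], phiZero[0]) + phiS;
  Vec3 outA{std::sin(theta) * std::cos(phi),
            std::sin(theta) * std::sin(phi), std::cos(theta)};
  return rotateUz(outA, indir);  // back to the lab frame before proposing
}                                // the new momentum direction
```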
Besides the modified hadron elastic scattering code, the other standard
interactions and particles needed to simulate protons incident on calorimeters
are included. In particular, we use the physics lists defined within the
default FTFP_BERT.cc/.hh files. Doing so accounts for electromagnetic and
synchrotron physics, particle decays, hadronic physics, stopping physics, ion
physics, and cuts for tracking neutrons.
## 4 Benchmarks with Constant Analyzing Power
With the modifications to the code for elastic proton scattering made, it is
important to benchmark our code against experimental results to verify that the implementation reproduces the measured scattering behavior.
First, the scattering angle part of the differential scattering cross section
should be unchanged. To check this, we send in a beam of protons with initial
momentum 3.8 GeV/c and side polarization 1 onto a 51.6 g/cm2 block of
ethylene, a common plastic scintillator. We then record the distribution of
the component of momentum transverse to the incident proton direction,
$p_{t}=p_{\text{lab}}\sin\theta$, where $p_{\text{lab}}$ is the magnitude of
the lab frame proton momentum and $\theta$ the scattering angle. Carrying out
a run with one million events, we arrive at the following distribution in Fig.
3, which agrees qualitatively with empirical data recorded by Azhgirey et al.
in Ref. [9]. We omit a graphical comparison with Ref. [9] due to the inability
of Geant4 to reproduce the exact experimental conditions used in the
reference.
Figure 3: Histograms depicting the number of outgoing protons observed with
transverse momentum $p_{t}$. Both were normalized by the total number of
particles and bin size to produce the differential efficiency on the y-axis.
These histograms are the result of simulating one million protons with initial
momentum 3.8 GeV/c and side polarization 1 scattering off of a 51.6 g/cm2
block of ethylene, which agree qualitatively with empirical results in Ref.
[9].
In Fig. 3, it should be noted that the scattering peak at low $p_{t}$ is
primarily due to multiple-scattering within the block of ethylene, whereas the
gradual drop off afterwards is due to elastic scattering [9]. To analyze only
the elastic scattering statistics, we need to enforce a low $p_{t}$ cut in the
data. For protons incident with a few GeV of energy, it suffices to cut out
data with $p_{t}<0.1$ GeV/c.
To check that our code produces an asymmetry of the correct size in the
azimuthal distribution of protons that are elastically scattered off nuclei in
the block of ethylene, we test the code with three representative values of
the analyzing power. In particular, we generate histograms for the azimuthal
angle of scattered protons with $A_{y}$ set to 0.5, 0.15, and 0.05, which are
shown in Fig. 4. To determine the best fit analyzing power, we perform a least
squares fit to the data by choosing $C$ and $A_{y}$ such that
$\sum_{i=1}^{N_{\text{bins}}}\left[N(\phi_{i})-C\left(1+A_{y}P_{y}\cos\phi_{i}\right)\right]^{2}=\text{minimum}.$
(10)
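Because the model $C\left(1+A_{y}P_{y}\cos\phi_{i}\right)$ is linear in the parameters $a=C$ and $b=CA_{y}P_{y}$, the minimization in Eq. 10 reduces to solving the $2\times 2$ normal equations; a minimal sketch (with illustrative names) is:

```cpp
// Sketch of the least-squares fit in Eq. 10, rewritten as N(phi) = a + b cos(phi)
// with a = C and b = C * A_y * P_y, so the fit is linear in (a, b).
#include <cmath>
#include <vector>

struct FitResult { double C, Ay; };

FitResult fitAzimuthalAsymmetry(const std::vector<double>& phi,
                                const std::vector<double>& counts,
                                double Py) {
  double S = 0, Sc = 0, Scc = 0, Sy = 0, Scy = 0;
  for (size_t i = 0; i < phi.size(); ++i) {
    const double c = std::cos(phi[i]);
    S += 1.0; Sc += c; Scc += c * c;
    Sy += counts[i]; Scy += c * counts[i];
  }
  // Solve [S Sc; Sc Scc] [a b]^T = [Sy Scy]^T.
  const double det = S * Scc - Sc * Sc;
  const double a = (Scc * Sy - Sc * Scy) / det;  // a = C
  const double b = (S * Scy - Sc * Sy) / det;    // b = C * A_y * P_y
  return {a, b / (a * Py)};
}
```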
The resulting fits (shown in Fig. 4) yielded analyzing powers of $0.54\pm
0.02$, $0.16\pm 0.02$, and $0.06\pm 0.02$ respectively for the simulations run
with $A_{y}=$ 0.5, 0.15, and 0.05. Since all of the fitted values agree with the input analyzing powers within 1-2 standard deviations, we conclude that the azimuthal asymmetry has been properly
implemented.
Figure 4: Histograms depicting the number of outgoing protons observed with
azimuthal angle $\phi$ overlaid with least-squares fits, in red, to determine
the analyzing power. For each plot, $10^{5}$ protons were simulated with
initial momentum 3.8 GeV/c and side polarization 1 with constant analyzing
power set to 0.5, 0.15, and 0.05 respectively from left to right. Color
online.
As these checks were made for analyzing powers over the full range of values
predicted by the empirical parameterization in Eq. 4, we proceed with a direct
substitution of $A_{y}(\theta)$ where before we had $A_{y}$ equal to a
constant.
## 5 Results
With the code properly checked and the analyzing power parameterized by Eq. 4,
the shift in detector signal can now be quantified. To mimic as closely as
possible the calorimeter that would be used for the proposed measurement of
the strange form factor of the proton in Ref. [10], we model the longitudinal
structure of our test detector on the HCAL-J detector at JLab [13], which
is similar to the one developed in Ref. [14]. In particular, the test detector
consists of a 15 by 15 array of $5\times 5\times 100$ cm3 modules. Each module
is a sampling calorimeter with 40 layers composed of 1.5 cm of Fe and 1 cm
plastic scintillator (vinyltoluene). Energy deposits are collected only within
the plastic scintillator layers, and position is resolved with a spatial
segmentation of 5 cm (see Fig. 5).
Figure 5: Schematic of the detector used for simulations with a single
detection event depicted. The plastic scintillator layers are shaded orange,
and iron layers shaded silver. The incident proton is colored in blue, and
generated photons are colored in green. Color online.
To estimate the shift in detector signal, 10 million 2.5 GeV protons with side
polarization 1 were incident on the center of the detector. For each event,
the first moment, or center, of energy deposit was computed, with the results
accumulated in histograms shown in Fig. 6.
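A minimal sketch of the per-event centroid calculation, assuming the energy deposits and transverse positions of the hit modules have already been collected, is:

```cpp
// Sketch of the per-event "center of detector signal": the energy-weighted
// first moment of the deposits recorded in the scintillator layers.
#include <utility>
#include <vector>

struct ModuleHit { double x, y, edep; };  // transverse module position, deposited energy

// Returns the energy-weighted centroid (x, y) of a single event.
std::pair<double, double> signalCenter(const std::vector<ModuleHit>& hits) {
  double sumE = 0.0, sumEx = 0.0, sumEy = 0.0;
  for (const auto& h : hits) {
    sumE  += h.edep;
    sumEx += h.edep * h.x;
    sumEy += h.edep * h.y;
  }
  if (sumE == 0.0) return {0.0, 0.0};
  return {sumEx / sumE, sumEy / sumE};
}
```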
Figure 6: Histograms depicting the center of detector signal for 10 million
2.5 GeV protons with side polarization 1 incident on the described sampling
calorimeter. Both distributions were fit to Gaussians shown in red. Color
online.
The histograms in Fig. 6 were fit to Gaussians, yielding a mean center of
detector signal of $\bar{x}=0.021\pm 0.001$ cm and $\bar{y}=0.000\pm 0.001$
cm. The reported uncertainties only account for statistical uncertainty,
neglecting all systematic effects. As expected, the shift is in the direction
of the polarization axis crossed with the incident proton direction, or the
positive x-axis.
When $A_{y}\,P_{y}\ll 1$, the shift in detector signal is approximately
proportional to the side polarization. Thus, for an arbitrary side
polarization $P_{y}$, the expected shift is approximately
$\Delta x\approx P_{y}\times(0.21\pm 0.01)\text{ mm}.$ (11)
## 6 Discussion and Conclusions
Ultimately, we found a polarization-dependent shift in detector signal on the
order of 0.01-0.1 mm given by Eq. 11 for polarized protons incident on a
calorimeter. This result is three orders of magnitude smaller than the typical
spatial resolution of 2-5 cm in hadronic calorimeters. This small value of the
polarization-dependent shift is vital for parity-violation scattering
measurements, as it implies that cuts made in data analysis neglecting this
shift in detector signal are still consistent at the level of precision
required, for instance, in the proposed experiment in Ref. [10].
Among other things, these results allow us to begin work on a proposal to the
JLab PAC and work on experimental design for a measurement of the strange form
factor of the proton at a momentum transfer $Q^{2}$ of 3 (GeV/c)2. If shown to
be large, this would provide yet another clue about the internal dynamics of the quark-gluon sea residing within the proton. Such information
is useful for ab initio treatments of the proton and could pave the way for
future advances in proton physics.
Additionally, the process by which changes were made to the Geant4 source code
in this study is readily generalizable to similar changes that would be
needed for empirical corrections to other processes. This would allow for
similar spin-dependent effects with known azimuthal dependence to be added to
Geant4. Such changes are especially useful for the simulation of other parity-violating effects, a growing sub-branch of nuclear physics studies [15].
## 7 Acknowledgments
This work was made possible by the US DOE SULI program at Thomas Jefferson
National Accelerator Facility. We would like to extend gratitude towards Dr.
John Annand, Dr. Xinzhan Bai, and Dr. Vardan Khachatryan for assistance with
Geant4. This work was supported in part by the US DOE Office of Science and
Office of Nuclear Physics under the contracts DE-AC02-05CH11231, DE-
AC02-06CH11357, and DE-SC0016577, as well as DOE contract DE-AC05-06OR23177,
under which JSA, LLC operates JLab.
## References
* [1] R. Hofstadter, Electron scattering and nuclear structure, Reviews of Modern Physics 28 (3) (1956) 214–254. doi:10.1103/RevModPhys.28.214.
* [2] J. I. Friedman, H. W. Kendall, Deep inelastic electron scattering, Ann. Rev. Nucl. Part. Sci. 22 (1972) 203–254. doi:10.1146/annurev.ns.22.120172.001223.
* [3] X. Ji, F. Yuan, Y. Zhao, What we know and what we don’t know about the proton spin after 30 years, Nature Reviews Physics 3 (7) (2021) 27–38. doi:10.1038/s42254-020-00248-4.
* [4] V. Punjabi, C. F. Perdrisat, M. K. Jones, E. J. Brash, C. E. Carlson, The structure of the nucleon: Elastic electromagnetic form factors, The European Physical Journal A 51 (7) (2015) 79. doi:10.1140/epja/i2015-15079-x.
* [5] D. Adams, et al., Measurement of the spin-dependent structure function g1(x) of the proton, Physics Letters B 329 (2) (1994) 399–406. doi:10.1016/0370-2693(94)90793-5.
* [6] D. H. Beck, R. D. McKeown, Parity-violating electron scattering and nucleon structure, Annual Review of Nuclear and Particle Science 51 (1) (2001) 189–217. doi:10.1146/annurev.nucl.51.101701.132312.
* [7] R. Walter, Analyzing power measurements and the nucleon-nucleus optical potential; a focus on the spin-orbit potential (1997). URL https://www.semanticscholar.org/paper/ANALYZING-POWER-MEASUREMENTS-AND-THE-OPTICAL-A-ON-Walter/20cd3717f3483e7ae0a55851dd71d88b2bb008b5
* [8] D. F. Jackson, I. Abdul-Jalil, The spin-orbit term in the optical potential for proton scattering from light nuclei, Journal of Physics G: Nuclear Physics 6 (1980) 481–499. doi:10.1088/0305-4616/6/4/017.
* [9] L. S. Azhgirey, et al., Measurement of analyzing powers for the reaction $\vec{p}$ + CH2 at $p_{p}$=1.75-5.3 GeV/c, Nuclear Instruments and Methods in Physics Research Section A 538 (1) (2005) 431–441. doi:10.1016/j.nima.2004.08.111.
* [10] B. Wojtsekhowski, Flavor decomposition of nucleon form factors, arXiv:2001.02190 [nucl-ex] (Jan. 2020).
URL http://arxiv.org/abs/2001.02190
* [11] S. Agostinelli, et al., Geant4 — a simulation toolkit, Nuclear Instruments and Methods in Physics Research Section A 506 (3) (2003) 250–303. doi:10.1016/S0168-9002(03)01368-8.
* [12] C. Perdrisat, V. Punjabi, M. Vanderhaeghen, Nucleon electromagnetic form factors, Progress in Particle and Nuclear Physics 59 (2) (2007) 694–764. doi:10.1016/j.ppnp.2007.05.001.
* [13] G. Franklin, HCAL-J status, report at SBS collaboration meeting, July, 2014 (2014).
URL https://slidetodoc.com/hcalj-status-sbs-collaboration-meeting-july-2014-g/
* [14] N. V. Vlasov, et al., A calorimeter for detecting hadrons with energies of 10–100 GeV, Instruments and Experimental Techniques 49 (1) (2006) 41–55. doi:10.1134/S0020441206010040.
* [15] J. deVries, et al., Parity- and time-reversal-violating nuclear forces, Frontiers in Physics 8 (2020) 218. doi:10.3389/fphy.2020.00218.
# Laminar and Turbulent Plasmoid Ejection in a Laboratory Parker Spiral
Current Sheet
Ethan E. Peterson1, Douglass A. Endrizzi2, Michael Clark2, Jan Egedal2, Kenneth Flanagan2, Nuno F. Loureiro1, Jason Milhone2, Joseph Olson2, Carl R. Sovinec3, John Wallace2, and Cary B. Forest2
1Plasma Science and Fusion Center, MIT, Cambridge, MA 02139, USA
2Department of Physics, University of Wisconsin–Madison, Madison, WI 53706, USA
3Engineering Physics Department, University of Wisconsin–Madison, Madison, WI 53706, USA
###### Abstract
Quasi-periodic plasmoid formation at the tip of magnetic streamer structures
is observed to occur in experiments on the Big Red Ball as well as in
simulations of these experiments performed with the extended-MHD code, NIMROD.
This plasmoid formation is found to occur on a characteristic timescale
dependent on pressure gradients and magnetic curvature in both experiment and
simulation. Single mode, or laminar, plasmoids exist when the pressure
gradient is modest, but give way to turbulent plasmoid ejection when the
system drive is higher, producing plasmoids of many sizes. However, a critical
pressure gradient is also observed, below which plasmoids are never formed. A
simple heuristic model of this plasmoid formation process is presented and
suggested to be a consequence of a dynamic loss of equilibrium in the
high-$\beta$ region of the helmet streamer. This model is capable of
explaining the periodicity of plasmoids observed in the experiment and
simulations and produces plasmoid periods of 90 minutes when applied to 2D
models of solar streamers with a height of $3R_{\odot}$. This is consistent
with the location and frequency at which periodic plasma blobs have been
observed to form by LASCO and SECCHI instruments.
## 1 Introduction
Over the past few decades, in-situ measurements of the solar wind have
produced an enormous amount of data that can be used to characterize
properties of the solar wind and to discover its source regions on the Sun.
One of the earliest characterizations was the observation of “fast” and “slow”
streams of wind as measured by Mariner II (Neugebauer & Snyder, 1962).
However, later on it was discovered that the slow wind was better
characterized by the charge state ratios of oxygen — indicating a much higher
electron temperature in the source region (Neugebauer et al., 2016; Fu et al.,
2017; Cranmer et al., 2017), consistent with the temperatures and charge
states in well confined coronal loops. This led to the understanding that the
slow solar wind likely originates from transport between closed flux and open
flux in the equatorial streamer belt and that this transport can only occur via magnetic
reconnection. This formation process is drastically more complex than the fast
wind acceleration in coronal holes, which agrees with the original Parker
solution (Parker, 1958) and produces a relatively quiescent, supersonic flow
with photospheric abundances. Invoking magnetic reconnection in the formation
mechanism of the slow wind inherently leads to a dynamic process capable of
explaining its high degree of variability. However, spontaneous magnetic
reconnection is difficult to achieve in high Lundquist number plasmas and so a
mechanism with enough free energy for facilitating or driving the reconnection
must be included in the theory. To this end, a number of theories have been
postulated including “interchange reconnection” (Crooker et al., 2000; Fisk et
al., 1998; Fisk, 2003), the S-Web theory (Antiochos et al., 2011; Higginson &
Lynch, 2018; Antiochos et al., 2007), and streamer top reconnection (Einaudi
et al., 2001, 1999; Lapenta & Knoll, 2005; Endeve et al., 2004; Wu et al.,
2000).
Specifically with regards to streamer top reconnection, a number of
computational studies have attempted to recreate these periodic density
structures (PDSs). This process can be driven by instabilities in the coronal
loop or by converging flows at the streamer cusp (Einaudi et al., 2001, 1999;
Lapenta & Knoll, 2005; Wu et al., 2000) and has also revealed that two-fluid
effects can alter plasmoid characteristics emanating from helmet streamers
(Endeve et al., 2003, 2004). These multi-fluid simulations prescribe a fixed
amount of coronal heating at the base of the helmet streamer, which results in
a dynamic system with no equilibrium that oscillates periodically. However,
the periodicity of the plasmoids in these simulations, $\sim$15-20 hours, is longer than that observed.
With the improvements to imaging diagnostics on many of the current satellite
missions (SOHO, STEREO, Parker Solar Probe), as well as novel data analysis
techniques, increasingly smaller and more dynamic features are constantly
being revealed in the solar wind (DeForest et al., 2018; Bale et al., 2019).
One example of this pertains specifically to the slow solar wind and the
observation of PDSs, also known as plasma blobs or plasmoids, that are
released into the solar wind at the tips of helmet streamers (Wang et al.,
1997; Sheeley, Jr. et al., 1997; Lavraud et al., 2020). Running difference
calculations of white light images produced by SOHO’s Large Angle and
Spectrometric Coronograph (LASCO) (Brueckner et al., 1995) reveal bipolar
signatures indicative of small scale structures propagating outwards into the
solar wind (Wang et al., 1998). Recently these PDSs have also been identified
by the SECCHI instrument suite onboard STEREO (Viall & Vourlidas, 2015), in
old Helios data (Di Matteo et al., 2019), and during Parker Solar Probe’s
first orbit (Lavraud et al., 2020). They have also been observed to have
magnetic signatures (Rouillard et al., 2011) and be the product of magnetic
reconnection at the open-closed flux boundary of helmet streamers (Kepko et
al., 2016; Sanchez-Diaz et al., 2019).
While the work of Viall & Vourlidas (2015) shows that blobs form at or below
$2.5R_{\odot}$ and have a typical period of 90 minutes with a range of 65-100
minutes, other work suggests that blobs can also form at larger radii
($2-6R_{\odot}$) and have longer periods of a few hours (Wang & Hess, 2018;
Wang et al., 1998). The implied correlation between these observations is that
when helmet streamer tips are closer to the Sun they release higher frequency
PDSs, and lower frequency PDSs when they are further away. This hypothesis is
consistent with the observations of a wide number of variable discrete
frequencies that are observed in the slow solar wind at 1AU over the course of
the solar cycle (Viall et al., 2008).
Aside from the presence of multiple coherent frequencies of observed PDSs, it
is also well understood that the heliospheric current sheet (HCS) — and the
solar wind as a whole — is extremely turbulent (Bavassano et al., 1997;
Coleman et al., 1968; Bavassano & Bruno, 1989a, b; Luttrell & Richter, 1987;
Belcher & Davis, 1971; Marsch & Tu, 1990a, b; Bale et al., 2019) and evolves
as a function of distance from the Sun (Bavassano et al., 1982). Even though
fully developed turbulence is typically observed to exist by $0.3$ AU in
the slow wind near the HCS, it is often not enough to completely decorrelate
the coherent PDS fluctuations generated in the corona as they are routinely
observed to drive magnetospheric fluctuations at 1AU (Viall et al., 2008;
Kepko et al., 2002; Stephenson & Walker, 2002; Kepko & Spence, 2003).
The conclusion from this plethora of observational insight is that any
mesoscale model that wishes to accurately describe the acceleration of the
solar wind near the magnetic equator must be able to produce these coherent
fluctuations embedded in a turbulent background that evolves as it travels
away from the source. It is also necessary that the frequencies be on the
proper time scales and that the fluctuations are consistent with density and
magnetic signatures indicative of plasmoids.
In this article we report on experimental observations of axisymmetric
plasmoid ejection via helmet streamer tip reconnection in a laboratory Parker
Spiral. We discuss a potential plasmoid formation mechanism and observations
of plasmoid scaling properties by comparisons between experimental
measurements and extended-MHD simulations performed with the two-fluid
modeling capabilities of the NIMROD code (Sovinec et al., 2004; Sovinec &
King, 2010). These scalings indicate higher frequency plasmoids for more
elongated streamers with thinner current sheets, as well as an evolution
toward turbulence further downstream as plasmoids of many sizes interact.
Lastly, a simple heuristic model is presented suggesting that the periodicity
of plasmoids in the experiments and simulations and those produced in the
solar wind is set by a dynamic transition from a state of quasi-equilibrium to
one where no such equilibrium exists — a process we will refer to as
equilibrium loss. It is important to note that the helmet streamers produced
in the lab and those in the solar wind represent drastically different
systems; the former is governed by Hall-MHD and exists on sub-ion scales,
whereas the latter is much better described by ideal MHD and exists on scales
much, much larger than any kinetic scale. This article does not make any
claims about the relationship of the dominant transport mechanisms between the
two systems, nor does it claim that the underlying physics of the subsequent
reconnection events are the same. The simple association is made that both
systems exhibit sonic outflows that result in periodic plasmoid ejection from
the tip of the helmet streamers and that the loss of equilibrium responsible
for this phenomenon can be driven by pressure gradients and magnetic curvature
both in the experiments and in the solar wind.
## 2 Experimental and Simulation Methodologies
The Big Red Ball (BRB) at the Wisconsin Plasma Physics Laboratory is a
versatile plasma confinement device well-suited to the study of high-$\beta$
and flow-dominated systems. The capabilities of the BRB are discussed in more
detail in other works (Forest et al., 2015; Olson et al., 2016; Flanagan et
al., 2020; Endrizzi et al., 2021) and the experimental setup of the Parker
Spiral solar wind experiments and initial findings are detailed in Peterson et
al. (2019). A depiction of the experimental configuration of the BRB for these
experiments as well as the computational model of the experiment are shown in
Fig. 1. The summary of the experimental methodology for generating the Parker
Spiral in the BRB is as follows: Thermally emissive Lanthanum-Hexaboride
cathodes are used to generate a warm, dense, unmagnetized plasma atmosphere
($T_{e}\sim 7$ eV, $n_{e}\sim 4\times 10^{17}$ m${}^{-3}$). A permanent dipole magnet
is placed inside this background plasma with two ring electrodes located near
its north and south poles. Current is driven from molybdenum anodes in the
plasma atmosphere into the dipole magnetosphere and collected on the polar
electrodes. This current path is visualized in Fig. 1(b) in the context of the
NIMROD simulation domain. These cross field currents generate a torque on the
plasma which causes the magnetosphere to rotate, thereby twisting the magnetic
field into a Parker Spiral.
Figure 1: Experimental (a) and computational (b, c, d) configurations. The Big
Red Ball is shown in (a) along with the local cylindrical coordinate system,
dipole magnet, and diagnostics used for mapping the magnetosphere. The 2D
finite element mesh with current injection boundary conditions and initial
vacuum field configuration are shown in (b). Panels (c) and (d) show the
location of probes within the NIMROD simulation for outputting high time
resolution field measurements. Both the initial magnetic field configuration
(c) and time-averaged magnetic field configuration after the current injection
has reached steady state (d) are shown to demonstrate the probe positions
relative to where the plasmoid formation process occurs.
At the interface between the closed field lines of the magnetosphere and the
open field lines of the Parker Spiral, periodic reconnection occurs, ejecting
axisymmetric plasmoids, much like the density structures observed in the
heliospheric current sheet which emanate from streamer top reconnection. A
number of diagnostics were employed to map the 2D time dynamics of the Parker
Spiral including two arrays of 3 axis Hall sensors and an array of triple
probes and 2D Mach probes for measuring density, temperature, and flows in the
plasma. One of the Hall sensor arrays was kept stationary and displaced
azimuthally from the 2D scanning plane to provide phase reference measurements
critical for the reconstruction of the plasmoid dynamics.
The computational domain and vacuum magnetic field for the accompanying NIMROD
simulations are shown in Fig. 1. Figure 1(b) represents the hemispherical mesh
in cylindrical coordinates, which extends down to $R=5$ cm rather than the
experiment’s support rod at $R=2$ cm. This is to allow for a small dipole
magnet to be placed outside the computational domain and to facilitate the
boundary condition manipulation to model current injection/extraction in a
manner representative of the experiment. By prescribing $B_{\varphi}$ along
the boundary as a function of time, we can set the normal component of $J$, or
the current into and out of the vessel. In all the presented simulation work
that follows, the current injection linearly ramps from zero up to a
prescribed steady state value. The ramp duration in some of the initial
simulations was 1 ms, but was shortened to 100 $\mu$s to reduce the required
simulation time for some of the higher current injection cases that have
smaller time steps. For all simulations discussed in this work, there are zero
heat flux and zero particle flux conditions applied to the boundary as well as
a no-slip boundary condition on the velocity. All simulations were performed
with experimental parameters of $n_{\mathrm{e}}=4\times 10^{17}$ m${}^{-3}$,
$T_{\mathrm{e}}=7$ eV, $T_{\mathrm{i}}=0.5$ eV, which give viscous and
resistive diffusivities of $\nu=50$ m${}^{2}/$s and $\eta=35$ m${}^{2}/$s
respectively. We also note that in the experiments and simulations the ion
skin depth is set to $d_{i}=70$ cm. In terms of resolution, the simulations
were performed with 2400 bicubic poloidal elements, which provide centimeter-
scale resolution in the current sheet.
In order to better compare results from simulation to the experimental
measurements, the NIMROD code was modified in order to take a list of R, Z
coordinates for placing history nodes, or probes. For the simulations in this
study, four probes were placed in the current sheet at $R=25,40,55,70$ cm and
$Z=0$ cm as shown in Fig. 1. The probe locations relative to the vacuum
magnetic flux configuration and the flux distribution near the end of a
simulation can be seen in Fig. 1(c) and Fig. 1(d) respectively. These probes
provide outputs for the solution fields after every time step to allow for
higher frequency phenomena to still be captured at a few select locations in
the simulation. All NIMROD simulations presented in this work are
axisymmetric.
## 3 Observation of Streamer Top Reconnection and Plasmoid Formation in
Experiment and Two-Fluid Simulations
While the basic operating parameters, mean magnetic field, and plasma flows
that develop in both the experiment and simulations are presented in Peterson
et al. (2019), this article sheds more light on the details of the fluctuation
measurements as well as their scaling properties. Previous work demonstrated
that, during the generation of the Parker Spiral, two-fluid effects are
critical as the system size is small compared to the ion skin depth, the
electrons are the only magnetized species, and $T_{e}\gg T_{i}$. The
consequence of this is a radially outward flow of electrons that advects the
dipolar magnetic field into a Parker Spiral and simultaneously generates an
inward electric field via the Hall effect. This Hall electric field drives
accretion of ions into the magnetosphere where a density and pressure gradient
begin to build until a loss of equilibrium occurs. In both experiment and
simulation, this loss of equilibrium occurs in the current sheet associated
with the Parker Spiral where $\beta>10$ (Peterson et al., 2019) and increases
up to $\beta\sim 50$ further downstream.
Figure 2: Time histories and frequency content of magnetic signals measured in
the Parker Spiral current sheet by 3-axis Hall sensors in the experiment. The
difference between the turbulent, non-axisymmetric Phase I and axisymmetric,
single mode Phase II can be seen in (a) where two probes at the same radius,
but different azimuthal angles have very different mean field values as well
as high frequency characteristics in (a.i.), but are nearly identical in Phase
II (a.ii). A spectrogram of one of these time histories (b) shows broadband
fluctuations in Phase I, followed by a coherent downward chirping spectrum of
laminar plasmoid ejection.
Measurements of the magnetic field as well as the plasma density show
interesting fluctuations whose frequency depends on the amount of current
injected. As shown in Fig. 2(a), the vertical component of the magnetic field,
$B_{Z}$ (the poloidal component perpendicular to the current sheet) is non-
axisymmetric and highly uncorrelated in the initial, high current phase of the
experiment (Phase I). However, as shown in Fig. 2(a.ii), these magnetic
fluctuations become coherent as the current injection falls with the
characteristic timescale set by the RC circuit of the current injection system
($\sim 100$ ms). In addition, the $B_{Z}$ fluctuations are bipolar and thus
suggestive of closed loops of magnetic flux disconnected from the inner
magnetosphere of the rotating plasma. A spectrogram typical of these magnetic
fluctuations in the current sheet can be seen in Fig. 2(b) which shows the
turbulent nature of the high current Phase I and coherent mode Phase II with a
high degree of correlation between the plasmoid frequency and current
injection of the system. This scaling relationship will be discussed in
subsequent sections.
Figure 3: Plasmoids are ejected from the helmet streamer cusp at a frequency
of 20 kHz when the current injection is 150 A in the experiment. Panel (a)
shows the magnetic field ($B_{Z}$) and density fluctuation ($\tilde{n}_{i}$)
signals as measured by the probe in the current sheet at $R=42$ cm and denoted
by the teal dot in panels (b), (c), and (d). Panels (b), (c), and (d) show the
flux map and density perturbation map at three successive 10 $\mu$s time steps
corresponding to the magenta, cyan, and yellow vertical lines in (a). These
panels show how the magnetic field at the streamer cusp expands outwards with
higher density plasma until reconnection occurs, releasing a plasmoid into the
current sheet.
Performing phase correlation measurements between the stationary Hall array
and the scanning array over many discharges allows us to reconstruct the
plasmoid dynamics — both magnetic field and density — through conditional
averaging. This reconstruction method first identifies 500 $\mu$s time windows
for every shot that exhibit the highest degree of shot-to-shot similarity as
measured by a single stationary Hall probe. A center frequency of 20 kHz was
used for this process as it showed the best correlation shot-to-shot between
1.1 and 1.15 seconds as shown in Fig. 2(b). Once the best time window from
each shot is selected, a phase shift is calculated for each shot using the
stationary reference probe which allows us to align in time the many different
shots at different locations and build up a poloidal map of the 2D plasma
dynamics. This process reveals periodic reconnection occurring near the top of
the streamer structures resulting in plasmoids being ejected into the Parker
spiral current sheet as shown in Fig. 3. Figure 3(a) shows the $B_{Z}$
measurement as well as ion density fluctuations $\tilde{n}_{i}$ as measured by
the probe located at ($R=42$cm, $Z=-8$cm) and shown as a cyan dot in Fig.
3(b-d). We believe the up-down asymmetry of the current sheet, or “droop”, is
likely due to preferential current draw to an anode in the southern hemisphere
that was larger than the others, which is not captured in simulations since
the current injection is uniform over the range of $\pm 15^{\circ}$ latitude.
Figure 3(b-d) show subsequent time steps of 10$\mu$s through one half period
of the reconnection process, which exhibits a periodic build up of plasma
density (and pressure) inside the streamer leading to field line stretching,
reconnection, and plasmoid ejection.
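A minimal sketch of the phase-alignment step, assuming uniformly sampled probe signals and taking the 20 kHz Fourier phase of the stationary reference probe as the alignment reference, is:

```cpp
// Sketch of the phase alignment used in the conditional averaging: extract
// the phase of the 20 kHz component of the reference probe for each shot,
// then convert the phase difference to a time shift applied before averaging.
// Sampling rate, window choice, and names are illustrative.
#include <cmath>
#include <complex>
#include <vector>

// Phase (radians) of frequency f0 in a uniformly sampled signal (rate fs).
double fourierPhase(const std::vector<double>& signal, double fs, double f0) {
  const double pi = 3.14159265358979323846;
  std::complex<double> sum(0.0, 0.0);
  for (size_t n = 0; n < signal.size(); ++n) {
    const double arg = -2.0 * pi * f0 * static_cast<double>(n) / fs;
    sum += signal[n] * std::complex<double>(std::cos(arg), std::sin(arg));
  }
  return std::arg(sum);
}

// Time shift (seconds) aligning this shot's reference phase to a chosen
// common phase phiRef at frequency f0.
double alignmentShift(const std::vector<double>& refSignal, double fs,
                      double f0, double phiRef) {
  const double pi = 3.14159265358979323846;
  const double dphi = phiRef - fourierPhase(refSignal, fs, f0);
  return dphi / (2.0 * pi * f0);
}
```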
In addition to being observed experimentally, plasmoids also manifest in extended-MHD simulations when the Hall term and electron pressure gradient
term are used in Ohm’s law. 2D cross sections of the magnetic flux and density
fluctuations from the NIMROD simulations as well as the time resolved signals
in the current sheet as would be measured by probes in the experiment or
satellites in space are shown in Fig. 4 and Fig. 5. The time histories of both
experimental and laminar plasmoids can be seen in movies published in previous
work (Peterson et al., 2019), whereas the evolution of the turbulent plasmoids
in Fig. 5 can be viewed in the movie turbulent_plasmoids.mp4. Figure 4
represents a moderate current injection level of 400 A in the simulation which
is slightly above the threshold necessary to observe plasmoids. As a result,
we observe relatively large single plasmoids emitted with a very regular
frequency. We refer to these plasmoids as laminar since the magnetic flux
evolves very smoothly with regular ejection of similarly sized plasmoids.
Figure 4: Laminar plasmoid formation in an axisymmetric NIMROD simulation with
400A of injected current. The time history of $B_{Z}$ and ion density
fluctuations as measured by a probe at $R=70$cm (top panel) shows the periodic
ejection of high-density plasmoids that are of roughly similar size over time.
On the other hand, Fig. 5 represents a high current drive case of 1000 A where
we see plasmoids of many sizes interacting in a thinner current sheet
resulting in a more turbulent medium downstream.
Figure 5: Turbulent plasmoid formation in an axisymmetric NIMROD simulation
with 1000A of injected current. The time history of $B_{Z}$ and ion density
fluctuations as measured by a probe at $R=70$cm (top panel) shows the high
variability of plasmoids in both frequency and size.
The observation of plasmoids in the experiment as well as in the NIMROD
simulations raises the question of whether this occurrence is coincidental or whether
the same physical processes are driving plasmoid formation in both cases.
Importantly, theoretical and computational studies (Bhat & Loureiro, 2018), as
well as experiments (Hare et al., 2017) have demonstrated that plasmoid
formation can occur at values of the Lundquist number much below the usual
$10^{4}$ required in resistive MHD when ion-scale kinetic effects are not
negligible, as is the case here. We now turn our attention to a discussion of
the properties and scaling relationships for these observed plasmoids.
## 4 Discussion of Laminar and Turbulent Plasmoid Formation, Properties, and
Scalings
We begin by discussing the characteristics of these plasmoids, particularly
with respect to the amount of current injected into the experiment (or
simulation). Shown in Fig. 6 are the time-averaged magnetic field
configurations in the simulations (a-c) and experiment (d-f) as a function of
increasing current injection. As more flux is advected outwards with the
electron flow, it results in a current sheet thinning effect as shown in Fig.
6, where the thinner current sheets display larger magnetic curvature near the
streamer cusp. As shown in both Fig. 7 and Fig. 8, these elongated streamers
associated with higher current injection values result in higher frequency
plasmoid ejection. However, plasmoids are not present for every current
injection value in the simulation and experiment. This is shown in Fig. 7
which shows $B_{Z}$ power spectral densities from four different simulations
with increasing amounts of current injection: 200A, 400A, 600A, and 1000A for
panels (a), (b), (c), and (d), respectively. We can see that no plasmoids are
present in the 200A simulation, likely because the accretion caused by the
Hall effect is not strong enough to build up pressure above the critical
gradient necessary for loss of equilibrium.
Figure 6: Mean-field evolution of the magnetic field as a function of current
injection is shown for axisymmetric NIMROD simulations (a-c) as well as the
experiment (d-f). The top row depicts both the time-averaged poloidal and
toroidal magnetic field at three different current injection values: 250 A,
600 A, and 1000 A, from left to right, respectively. In this progression, it
is clear that the current sheet becomes thinner and the toroidal magnetic
field increases as the poloidally injected current is increased. The same is
true in the experiment as shown in the bottom row, where the current injection
values are 150 A, 250 A and 350 A, respectively. For currents larger than 350
A in the experiment, the magnetic field is highly non-axisymmetric; as a
result, axisymmetric flux surface reconstruction is not possible.
Another characteristic that trends with increased system drive (or current
injection) is a decrease in the coherence of the fluctuations and an increase in
turbulence. In Fig. 7(b) we can see a well defined fundamental frequency at 15
kHz as well as multiple resolved harmonics. However, as the current is
increased in panels (c) and (d), the spectra become more broadband and the
fundamental mode increases in frequency, which is consistent with the
experimental observations reported in Peterson et al. (2019).
Figure 7: $B_{Z}$ fluctuation power spectra from four probes in four different
two-fluid NIMROD simulations with current values of 200A, 400A, 600A, and
1000A shown in panels (a), (b), (c), and (d), respectively. Probes 1-4 are
located at increasing radii in the current sheet as according to Fig. 1.
Increasing the injected current increases both the fundamental plasmoid
frequency as well as the amplitudes of higher frequency components.
Fluctuations are increasingly more broadband at larger radial distances as
well.
One more important observation from Fig. 7 is that fluctuations at inner radii (closer to probe 1) are more coherent — that is, the dominant mode is much
stronger relative to the other higher frequencies. The physical interpretation
of this phenomenon is that the pressure inside the magnetosphere drives a
periodic loss of equilibrium that manifests as magnetospheric oscillations
that drive larger fluctuations further out in the current sheet where the
field is weaker, ultimately resulting in a turbulent current sheet that still
has a quasi-periodic nature.
A comparison of the plasmoid frequencies in simulation to those observed in
the experiment is shown in Fig. 8(a) as a function of current (orange and
green triangles). In Fig. 8(a) each experimental data point represents the
peak value in the $B_{Z}$ frequency spectrum from a single Hall probe in 10 ms
windows between 0.9 and 1.4 seconds. This generates 50 data points per shot
and is plotted for roughly 100 shots. The simulation results (blue circles)
show the dominant frequency for the duration of the simulation once the
current injection has reached steady state. Each data point for the NIMROD
simulations therefore represents a single simulation at a single current
injection value. It is important to note the strong linear scaling at modest
current injection up to $\sim 400$ A, as well as the abrupt disappearance of
plasmoids below 100 A and $\sim 10$ kHz. The high density of data points at
very low frequency for currents above $\sim 300$ A is not particularly germane to this discussion; it simply indicates that the frequency range with the highest power spectral density was found at low frequencies during the non-axisymmetric phase of the experiment, which can also be seen in the Phase I portion of the spectrogram in Fig. 2(b). Also plotted is the fundamental
plasmoid frequency from the Hall-MHD NIMROD simulations as a function of
current. Both experimental and simulation plasmoid frequencies scale linearly
with the current, but with different slopes.
Figure 8: $B_{z}$ fluctuation peak frequencies in the current sheet at $R=30$
cm for Helium and Argon discharges as well as the frequencies of plasmoids
present in the NIMROD simulations (a). Plasmoids in both experiment and
simulation scale linearly with the injected current and exhibit some
stabilizing effect such that no plasmoids exist below roughly 10 kHz in
either the experiment or simulations. When the current injection value is
translated into a characteristic timescale dependent on the magnetic curvature
and pressure gradient at that current value, the plasmoid frequencies in both
experiment and simulation are found to scale similarly (b).
Since the plasmoid frequencies scale more strongly with current injection in
the experiment than in simulation, the drive mechanism is likely correlated
with some quantity other than the current, but likely influenced by it. One
possible explanation is that these plasmoids are pressure driven and that the
current drive in the experiment produces larger densities in the magnetosphere
as a result of ionization, which is not modeled in the simulations. Therefore,
calculating a pressure-curvature driven time scale for the loss of equilibrium
in experiment and simulations may provide a unifying scaling, as evidenced by
Fig. 8(b). Each data point for the experiment in Fig. 8(b) is derived from a
2D flux surface reconstruction and maps of pressure and temperature averaged
over 10 ms windows. Where the flux surface reconstruction is possible (roughly
from $t=1.0$ s to $t=1.2$ s), we obtain an estimate of the characteristic
frequency $f=\frac{1}{2\pi}\sqrt{c_{s}^{2}{\bf\it\kappa}\cdot\gradient{p}/p}$
calculated near the helmet streamer tip at $R\sim 30$ cm. This results in
$\sim 20$ data points whose error in the x-direction is calculated from the
error in the magnetic field, temperature, and pressure measurements and whose
error in the y-direction represents the full width at half maximum of the peak
in the $B_{z}$ power spectral density. The theoretical model and justification
for this characteristic time scale are explained in the following section.
## 5 Heuristic Model of Plasmoid Evolution and Extrapolation to Solar
Streamers
A relatively simple heuristic model can be constructed for the plasma
expansion in the high-$\beta$ transition region between the hydrostatic
equilibrium of the closed flux corona and the hydrodynamic equilibrium of open
field lines outside the heliospheric current sheet (HCS). The geometry of such
a system is shown in Fig. 9. Writing the momentum equation in terms of the
magnetic curvature vector, ${\bf\it\kappa}$, we obtain the dynamic equation
for the plasma expansion:
$\rho\derivative{\mathbf{v_{\perp}}}{t}=-\gradient_{\perp}{p}+\frac{B^{2}}{\mu_{0}}{\bf\it\kappa}-\gradient_{\perp}\frac{B^{2}}{2\mu_{0}}.$
(1)
Figure 9: A cartoon drawing of helmet streamer topology with indications of
the zones where equilibrium exists; namely, deep in the streamer where there
is a hydrostatic solution, and outside the current sheet where Parker’s
original hydrodynamic solution is valid. The interface between the two
exhibits no equilibrium due to the particle and heat sources present in the
corona. This loss of equilibrium is caused by the extremely high values of
$\beta$ in this region and results in quasi-periodic plasma blobs which are
driven by strong pressure gradients. The length scale $L_{c}$ shown above is
the critical length scale of the current sheet that develops from field line
stretching before reconnection occurs.
Using the equation of state $p=\rho c_{s}^{2}$, rewriting the right hand side
of Eq. 1 in terms of the plasma $\beta=2\mu_{0}p/B^{2}$, and dotting both
sides with the curvature vector, ${\bf\it\kappa}$, gives:
${\bf\it\kappa}\cdot\derivative{\mathbf{v_{\perp}}}{t}=-\frac{c_{s}^{2}}{p}{\bf\it\kappa}\cdot\gradient_{\perp}\left[p\left(1+1/\beta\right)\right]+\frac{2c_{s}^{2}}{\beta}\kappa^{2}.$
(2)
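Spelling out the intermediate step: dividing Eq. 1 by $\rho=p/c_{s}^{2}$ and using $B^{2}/2\mu_{0}=p/\beta$ gives
$\derivative{\mathbf{v_{\perp}}}{t}=-\frac{c_{s}^{2}}{p}\gradient_{\perp}\left[p\left(1+1/\beta\right)\right]+\frac{2c_{s}^{2}}{\beta}{\bf\it\kappa},$
and projecting onto ${\bf\it\kappa}$ yields Eq. 2.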
The left-hand side of this equation can be taken to define a characteristic
timescale for the driven loss of equilibrium:
${\bf\it\kappa}\cdot\derivative{\mathbf{v_{\perp}}}{t}\sim-\gamma_{dr}^{2}$.
This can be understood as the time required to form a plasmoid of radius
$R_{c}=\kappa^{-1}$ due to the acceleration provided by the net force on the
right-hand side of Eq. 2. As discussed before, the values of $\beta$ in the
current sheet are very large for both the experiment and simulations and range
from $10<\beta<50$ throughout the region of interest, denoted as the
high-$\beta$ dynamic region in Fig. 9. Therefore, ignoring terms of order
$\beta^{-1}$ results in
$\gamma_{dr}^{2}\sim\frac{c_{s}^{2}}{p}{\bf\it\kappa}\cdot\gradient{p}.$ (3)
This characteristic time scale can be understood as the time for a sound wave
to traverse a distance equivalent to the geometric mean of the pressure
gradient scale length and radius of curvature, or equivalently the free-fall
time of a plasma parcel under the action of a pressure gradient and adverse
magnetic field curvature.
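Assuming ${\bf\it\kappa}$ and $\gradient{p}$ are aligned in the region of bad curvature, Eq. 3 can be written as
$\gamma_{dr}\sim\sqrt{\frac{c_{s}^{2}}{p}\,\kappa\,|\gradient{p}|}=\frac{c_{s}}{\sqrt{R_{c}L_{p}}},\qquad R_{c}=\kappa^{-1},\quad L_{p}=p/|\gradient{p}|,$
so $\gamma_{dr}^{-1}$ is the sound-crossing time of the length $\sqrt{R_{c}L_{p}}$.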
The next step in the derivation of the plasmoid frequency is to show that it
is essentially this drive frequency which sets the frequency of reconnection
in the current sheet. It will be shown that in this situation the reconnection
rate is relatively insensitive to the details of the tearing mode growth rate.
We can assume that the current sheet is lengthening at the rate given by the
drive timescale $\gamma_{dr}$ because the plasma is frozen to the magnetic
field. We can also assume that the lengthening of the current sheet is
exponential in time: a result of a transition from quasi-equilibrium to a
dynamic system. Incompressibility then requires that the forming sheet is
likewise thinning exponentially, such that its thickness $a(t)$ can be
described by
$a(t)=a_{0}e^{-\gamma_{dr}t},$ (4)
where $a_{0}$ is the thickness at the beginning of the expansion.
As the aspect ratio $L/a$ of this forming current sheet increases, it becomes
unstable to a progressively broader spectrum of tearing modes. As argued by
Uzdensky & Loureiro (2016), one of those modes — the one whose growth rate,
$\gamma_{tear}(t)$, first satisfies $\gamma_{tear}(t_{crit})\sim\gamma_{dr}$ —
will eventually grow to become as wide as the forming sheet, thereby
disrupting it and leading to plasmoid ejection. This happens at a so-called
critical time, $t_{crit}$, whereupon the sheet thickness is
$a_{crit}\equiv a(t_{crit})=a_{0}e^{-\gamma_{dr}t_{crit}}.$ (5)
This can be inverted to yield
$t_{crit}=\gamma_{dr}^{-1}\ln{\left(\frac{a_{0}}{a_{crit}}\right)}.$ (6)
As we can see, the critical time for reconnection to occur is essentially the
same as the drive timescale, $\gamma_{dr}^{-1}$, since the ratio of the
initial to the critical current sheet thickness represents only a logarithmic
correction. That is, while reconnection is essential to the formation and
ejection of plasmoids, the physics of the reconnection onset are such that
details of the tearing instability that underlies it (such as the functional
form of $\gamma_{tear}(t)$) are not essential to the prediction of the
timescale associated with the plasmoid ejection.
Casting the drive rate $\gamma_{dr}$ into a characteristic frequency in Hz, we obtain
$f=\frac{1}{2\pi}\sqrt{c_{s}^{2}{\bf\it\kappa}\cdot\gradient{p}/p}.$ (7)
This time scale is associated with a high-$\beta$ pressure-curvature driven
loss of equilibrium. The characteristics of these oscillations — namely that
they are electromagnetic, axisymmetric perturbations localized to the region
of bad magnetic curvature with a frequency dependent on the pressure gradient
— support the notion that these plasmoids are driven by a mechanism similar in
nature to flute modes, but likely represent a loss of equilibrium due to
particle and heat transport into the streamer rather than a linear
instability. It is the frequency in Eq. 7 that empirically provides the
unifying scaling between the experimental results and the extended-MHD
simulations presented in Fig. 8. Computing this frequency along field lines
from the density, temperature and magnetic flux measured by diagnostics in the
experiment as well as the simulation shows a local maximum located at the
outboard midplane where the curvature is largest. Plotting the measured
plasmoid frequencies against this calculated pressure-curvature frequency
gives the results in Fig. 8(b) which shows much better agreement between the
experiment and simulations and is consistent with the idea that these
plasmoids are pressure driven.
If we normalize both the pressure gradient scale length and the magnetic
curvature by the critical current sheet length, $L_{c}$, we obtain two
dimensionless quantities: $\ell_{b}^{-1}=L_{c}\kappa$ and
$\ell_{p}^{-1}=L_{c}\gradient{p}/p$. We can assume, in the laminar plasmoid
case, that this critical length, $L_{c}$, is simply the plasmoid length just
as reconnection occurs. Substituting these parameters into Eq. 7 provides us
with the dimensional scaling below:
$f=\frac{1}{2\pi}\frac{c_{s}}{L_{c}\sqrt{\ell_{b}\ell_{p}}}.$ (8)
Given similar normalized length scales, $\ell_{b}$ and $\ell_{p}$, the scaling
between experimental and solar wind frequencies is simply the ratio of
critical length scales, or plasmoid size, and sound speeds; i.e.:
$f_{sw}\sim\frac{L_{c,exp}}{L_{c,sw}}\frac{c_{s,sw}}{c_{s,exp}}f_{exp}.$ (9)
As mentioned before, we will use the plasmoid length as the proxy for $L_{c}$
as it is reasonable to assume for single plasmoids that the associated
plasmoid is roughly the size of the current sheet just before it reconnects.
For plasmoids in the simulations and experiment, we will take this length
scale to be $L_{c,exp}=0.25$ m and in the solar wind we will take this scale
to be $L_{c,sw}=1R_{\odot}=7\times 10^{8}$ m which is consistent with
observations of plasmoids appearing around $2-4R_{\odot}$ and having a length
of $1R_{\odot}$ and width of $0.1R_{\odot}$ (Sheeley et al., 2009). Combining
these plasmoid length scales with the sound speed typical of the experiment
($c_{s,exp}\sim 13$ km/s) and solar corona ($c_{s,sw}\sim 200$ km/s) gives us
a simple scaling relationship between frequencies observed in the lab and in
the solar wind as shown in Eq. 10
$f_{sw}\sim 5.5\times 10^{-9}f_{exp}.$ (10)
Therefore, the $20-40$ kHz plasmoids observed in both the experiment and
simulations correspond to a plasmoid frequency in the solar wind of
$110-220~{}\mu$Hz, or periods of $75-150$ minutes — in remarkable agreement
with observations (Viall & Vourlidas, 2015).
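This scaling can be checked directly from the quantities quoted above; the short sketch below reproduces the $\sim 5.5\times10^{-9}$ factor of Eq. 10 and the resulting solar wind frequencies and periods.

```python
# Check of the Eq. 9 / Eq. 10 scaling using only quantities quoted in the text.
R_sun = 7e8                                  # [m]
L_c_exp, L_c_sw = 0.25, 1.0 * R_sun          # plasmoid / current-sheet lengths [m]
c_s_exp, c_s_sw = 13e3, 200e3                # sound speeds [m/s]

scale = (L_c_exp / L_c_sw) * (c_s_sw / c_s_exp)   # Eq. 9
print(f"scaling factor ~ {scale:.2g}")            # ~5.5e-9, cf. Eq. 10

for f_exp in (20e3, 40e3):                        # observed lab frequencies [Hz]
    f_sw = scale * f_exp
    print(f"f_exp = {f_exp / 1e3:.0f} kHz -> f_sw = {f_sw * 1e6:.0f} uHz, "
          f"period = {1.0 / f_sw / 60:.0f} min")
```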
We note that this model also offers a natural explanation for the existence of
laminar and turbulent plasmoid regimes. Increasing the drive (i.e., increasing
the injected current in experiments and simulations) leads to a larger
$\gamma_{dr}$. When the drive is strong enough such that $\gamma_{dr}\gtrsim
v_{A}/L_{c}$, an ejected plasmoid has insufficient time to advect downstream a
distance comparable to its length before the subsequent plasmoid is ejected.
This leads to plasmoids of different sizes interacting downstream of the
reconnection region and more stochastic behavior. For the high drive cases in
the simulation, many of the plasmoids are small ($L_{c}\sim 5$ cm), and
$v_{A}\sim 2$ km/s, which results in the condition $\gamma_{dr}\gtrsim 40$ kHz
which is in good agreement with the onset of the turbulent plasmoids shown in
Fig. 5 and Fig. 7(d).
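The onset condition is a one-line estimate; the following sketch simply evaluates $v_{A}/L_{c}$ for the values stated above.

```python
# Arithmetic check of the turbulence-onset condition gamma_dr >~ v_A / L_c
# using the values quoted above for the high-drive simulation cases.
v_A = 2e3   # [m/s]
L_c = 0.05  # [m]
print(f"v_A / L_c ~ {v_A / L_c / 1e3:.0f} kHz")  # ~40 kHz, cf. Fig. 5 and Fig. 7(d)
```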
Another mechanism that may be contributing to the turbulent dynamics and to
the non-axisymmetric nature of Phase I in the experiment is the transition
from single plasmoid formation to multiple simultaneous plasmoid formation. To
support this hypothesis, the calculation in Appendix A computes the threshold
drive frequency necessary to destabilize shorter wavelength tearing modes in
the current sheet. If this threshold is reached we may conclude the subsequent
stochastic dynamics are those of a plasmoid chain (Uzdensky et al., 2010;
Loureiro et al., 2012) and are responsible for the turbulence in the high-
drive cases. Remarkably, as outlined in Appendix A, this threshold frequency
is computed to be $f=\gamma_{dr}/(2\pi)\gtrsim 60\,$kHz and is precisely in
the range we would expect based on experimental evidence ($\sim 60-100$ kHz).
These two mechanisms allow us to infer a hierarchy of stochasticity associated
with the drive strength and therefore the frequency of plasmoid formation. As
the drive strength increases, the system experiences a loss of equilibrium and
laminar ‘single’ plasmoid ejection. This phase is followed by plasmoids of
different ‘generations’ catching up with each other to generate a turbulent
medium downstream, but still in the ‘single’ plasmoid regime. This phase
applies particularly to the NIMROD simulations which are constrained to be
axisymmetric and are below the threshold for multiple plasmoids, yet still
exhibit increased turbulence at higher drive. Finally, at the highest drives
obtained in the experiment, the threshold to multiple simultaneous plasmoid
formation is reached as discussed in Appendix A.
This analysis can be made more quantitative by formally taking into account
the geometric effects of the magnetic field and pressure gradient for helmet
streamer structures in the solar wind. This process likewise results in blob
periodicities in the 1-2 hour range consistent with observations (Viall &
Vourlidas, 2015). This model is constructed by taking the streamer-like
poloidal field geometry generated during NIMROD simulations and scaling the
radius where reconnection occurs to coincide with $3R_{\odot}$ as a
characteristic location for PDS formation (Viall & Vourlidas, 2015; Wang et
al., 1998; Wang & Hess, 2018). Scaling the magnetic field strength using fits
to solar wind data in the ecliptic plane according to Köhnlein (1996)
produces a plausible helmet streamer geometry at the proper scale with
realistic magnetic field strength. Using fits for plasma temperature and
density in the ecliptic (likewise from Köhnlein (1996)) and mapping them to
the respective flux surfaces produces a mock helmet streamer with plausible
temperature and density profiles.
Figure 10: The work of Köhnlein (1996) provides doubly logarithmic fits to
Helios data for density, temperature, and magnetic field as a function of
heliocentric distance in the ecliptic plane. Mapping these quantities onto the
magnetic streamer structure shown in (a) allows us to compute
$f=\frac{1}{2\pi}\sqrt{c_{s}^{2}{\bf\it\kappa}\cdot\gradient{p}/p}$ along field lines
just inside and just outside the reconnection radius in the same fashion as
shown in Fig. 8(b). This results in plasmoid frequencies that are peaked at
the streamer cusp as expected and produce periodicities in close agreement
with in situ observations for plausible solar wind parameters.
A diagram of this 2D magnetic geometry for this heuristic model is shown in
Fig. 10(a). This model enables us to calculate the same characteristic
frequency used to unify the scaling between experiment and simulation (Eq. 7)
along the field lines in the vicinity of the reconnection site shown as the
cyan dashed lines in Fig. 10(a). The result of calculating this frequency
along flux surfaces between the cyan dashed flux surfaces in Fig. 10(a) is
shown in Fig. 10(b) and plotted as a function of field line distance away from
the outboard midplane. In this context, a field line distance of 0 corresponds
to the streamer top or the point of highest magnetic curvature where we might
expect the highest growth rate of any pressure-curvature driven loss of
equilibrium. We can see from this model that the same characteristic timescale
used to unify experimental and simulation plasmoid frequencies results in a
blob periodicity in the solar wind of $\sim 90$ minutes if formed at
$3R_{\odot}$ (Fig. 10b). If the PDS origin radius is increased or decreased,
the PDS periodicity likewise increases or decreases respectively in accordance
with observations (Wang & Hess, 2018).
## 6 Conclusions
To accompany the experimental measurements of streamer top reconnection and
plasmoid formation in the Parker Spiral current sheet, a wide range of
extended-MHD simulations were performed with the NIMROD code. Through
measurements and comparisons between experiment and simulation, we showed that
this high-$\beta$ loss of equilibrium is related to the pressure gradient and
magnetic curvature in the streamer. Specifically, the frequency of expelled
plasmoids scales with the pressure gradient and becomes more turbulent as the
pressure gradient increases and as one moves further downstream in the current
sheet.
A heuristic model for this loss of equilibrium is presented and demonstrates
that pressure-curvature driven outflows in the high-$\beta$ transition region
of the streamer belt may be responsible for the streamer top reconnection that
fuels a portion of the slow solar wind near the HCS. Although the parameters
and scale lengths in the experiment are considerably different from those in
the solar wind, the pressure driven loss of equilibrium allows both systems to
expand outwards at their respective sound speed, advecting the magnetic flux
with the ions in the case of the solar wind and with the electrons in the
experiment and simulations. While the dynamics of magnetic reconnection are
likewise vastly different between the two systems — occurring on sub-ion
scales in the experiment and macroscopic (MHD) scales in the solar wind — it
is likely that the plasmoid formation rate is governed more by the drive
timescale, $\gamma_{dr}$, than by the specifics of magnetic reconnection. As a
result, the streamer top may be reconnecting at a rate governed by the
particle and heat sourcing on these outer field lines which results in loss of
equilibrium rather than a linear instability. This allows for a unified theory
to connect observations in drastically different regimes of plasma physics
based on empirical evidence. While the underlying reconnection dynamics which
set the critical length scale of the current sheet are certainly different,
the resulting phenomenon was found to be remarkably similar between experiment
and simulation and was also reminiscent of observations of the solar corona
performed by the LASCO and SECCHI instrument suites as well as recently by
Parker Solar Probe (Lavraud et al., 2020).
The present work was supported by the NASA Earth and Space Sciences -
Heliophysics Division Fellowship award no. NNX14AO16H. The BRB facility was
constructed with support from the National Science Foundation and is now
operated as a Department of Energy National User Facility under DOE fund DE-
SC0018266. In addition, this work was supported by the NSF-DOE Partnership in
Basic Plasma Science and Engineering award no. PHY-2010136.
## Appendix A Reconnection onset in a forming current sheet in the
collisional Hall-MHD regime
In Section 5 we alluded to the onset of the tearing mode in the forming
current sheet driven by the equilibrium loss. In this Appendix, we present a
quantitative, though simplified, derivation aimed at capturing what we think
are the key features of this process in our experiments and simulations.
The plasma regime of relevance here can be described by the resistive Hall-MHD
framework; namely, we take the ions to be cold, the reconnection dynamics to
be happening at sub-ion-skin-depth scales, and the frozen-flux condition to be
broken by resistivity. Under these constraints, the expressions for the growth
rate of the tearing instability for small and large values of the tearing
instability parameter $\Delta^{\prime}$ can be obtained from Attico et al.
(2000). (Attico et al. (2000) consider the case where resistivity is negligible
and the frozen-flux constraint is instead broken by electron inertia; the
resistive scalings that we use here are directly retrievable from theirs upon
the substitution $d_{e}^{2}\rightarrow\eta/\gamma$, where $d_{e}$ is the
electron skin depth.) They are
$\gamma\tau_{w}=0.47\,\Delta^{\prime}(\tau_{w}\eta)^{1/2},$ (11)
in the low $\Delta^{\prime}$ case; and
$\gamma\tau_{w}=0.69\,(ka)^{3/4}a^{-1/2}(\tau_{w}\eta)^{1/4},$ (12)
in the large $\Delta^{\prime}$ case. The normalizing timescale that appears in
these expressions is sometimes called the whistler time,
$\tau_{w}=a^{2}/(d_{i}v_{A})$, with $v_{A}$ the Alfvén speed based on the
upstream (reconnecting) magnetic field.
Given a spectrum of unstable wavenumbers, the fastest growing tearing mode is
given by the intersection of these two scalings:
$k_{max}a\approx 1.2\,(\tau_{w}\eta)^{1/7}a^{-2/7},$ (13)
where we have assumed that the upstream magnetic field is well represented by
a $\tanh(x/a)$ profile, whose instability parameter is
$\Delta^{\prime}a\approx 2/(ka)$ for $ka\ll 1$. The corresponding growth rate
is
$\gamma_{max}\tau_{w}\approx 0.8(\tau_{w}\eta)^{5/14}a^{-5/7}.$ (14)
This mode is the fastest-growing mode provided that the current sheet is long
enough for this mode to fit inside it; i.e., if $k_{max}L\geq 1$.
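For readers who want to experiment with these scalings, the sketch below implements Eqs. 11–14 as plain functions and checks that the two branches meet at $k_{max}$ with the growth rate of Eq. 14. The numerical values used in the check are illustrative assumptions only, not measured parameters.

```python
# Sketch of the tearing-mode scalings of Eqs. 11-14 (SI units throughout).
# The example values at the bottom are illustrative assumptions, not measurements.
import math

def whistler_time(a, d_i, v_A):
    """tau_w = a^2 / (d_i * v_A)."""
    return a**2 / (d_i * v_A)

def gamma_small_delta(k, a, tau_w, eta):
    """Eq. 11, with Delta' a ~ 2/(k a) for a tanh(x/a) profile, i.e. Delta' = 2/(k a^2)."""
    delta_prime = 2.0 / (k * a**2)
    return 0.47 * delta_prime * math.sqrt(tau_w * eta) / tau_w

def gamma_large_delta(k, a, tau_w, eta):
    """Eq. 12."""
    return 0.69 * (k * a) ** 0.75 * a ** -0.5 * (tau_w * eta) ** 0.25 / tau_w

def k_max(a, tau_w, eta):
    """Eq. 13: wavenumber where the two branches intersect."""
    return 1.2 * (tau_w * eta) ** (1.0 / 7.0) * a ** (-2.0 / 7.0) / a

def gamma_max(a, tau_w, eta):
    """Eq. 14: growth rate of the fastest-growing mode."""
    return 0.8 * (tau_w * eta) ** (5.0 / 14.0) * a ** (-5.0 / 7.0) / tau_w

# Consistency check at assumed values: the two branches and Eq. 14 agree at k_max.
a, d_i, v_A, eta = 0.05, 0.7, 4.0e3, 100.0   # all illustrative assumptions
tau_w = whistler_time(a, d_i, v_A)
k = k_max(a, tau_w, eta)
print(gamma_small_delta(k, a, tau_w, eta),
      gamma_large_delta(k, a, tau_w, eta),
      gamma_max(a, tau_w, eta))
```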
From here, the calculation proceeds exactly as prescribed in Uzdensky &
Loureiro (2016). We assume, as in Section 5, that the length and the thickness
of the forming current sheet expand, or contract, exponentially, with the
drive rate $\gamma_{dr}$. Then we find that the $N=1$ mode transitions from
the low to the large $\Delta^{\prime}$ regime at the time
$t_{tr}=\frac{1}{2\gamma_{dr}}\ln\left(\frac{2\pi}{1.2}\frac{a_{0}}{L_{0}}S_{H}^{1/7}\right),$
(15)
where $S_{H}\equiv d_{i}v_{A}/\eta$ is the Hall Lundquist number, and $a_{0}$
and $L_{0}$ are the initial thickness and length of the current sheet.
On the other hand, one can compute the time $t_{cr}$ at which the growth rate
of the $N=1$ mode matches the current sheet formation rate; i.e., solve
$\gamma(t)=\gamma_{dr}$ for the $N=1$ mode. This yields
$t_{cr}\approx\frac{1}{4\gamma_{dr}}\ln\left(\frac{\pi}{0.47}\frac{a_{0}}{L_{0}}\frac{a_{0}^{2}\gamma_{dr}}{d_{i}v_{A}}S_{H}^{1/2}\right).$
(16)
Finally, one can ask if the $N=1$ mode has the time to transition from the low
to the large $\Delta^{\prime}$ regime before reaching its critical time. This
occurs when
$\gamma_{dr}>4.1\frac{a_{0}}{L_{0}}\frac{d_{i}v_{A}}{a_{0}^{2}}S_{H}^{-3/14}.$
(17)
In other words, if this condition is satisfied, one would expect the forming
current sheet to be disrupted by multiple plasmoids (large tearing mode
number, $N$); if it is not, then it is the $N=1$ tearing mode that disrupts
the forming sheet.
Inserting the values measured or inferred from the experiments into this
expression (namely, $a_{0}=0.05\,$m, $L_{0}=0.25\,$m, $d_{i}=0.7\,$m,
$c_{s}=13000\,$m\,s$^{-1}$ and $\beta_{e}=20$) yields $f=\gamma_{dr}/(2\pi)\gtrsim
60\,$kHz. This frequency is in remarkable agreement with our experimental
results which indicate a transition to the multiple plasmoid regime in the
$60-100$ kHz range.
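The threshold of Eq. 17 can be evaluated in a few lines. The sketch below uses the geometry quoted above ($a_{0}=0.05$ m, $L_{0}=0.25$ m, $d_{i}=0.7$ m), but the Alfvén speed and the magnetic diffusivity $\eta$ are not quoted in the text and are assumed here purely for illustration, so the printed number should not be read as a reproduction of the quoted $\sim 60$ kHz figure.

```python
# Hypothetical evaluation of the Eq. 17 threshold.  The geometry matches the
# values quoted above; the Alfven speed v_A and the magnetic diffusivity eta
# are NOT given in the text and are assumed here purely for illustration.
import math

def gamma_dr_threshold(a0, L0, d_i, v_A, eta):
    """Right-hand side of Eq. 17; S_H = d_i * v_A / eta is the Hall Lundquist number."""
    S_H = d_i * v_A / eta
    return 4.1 * (a0 / L0) * (d_i * v_A / a0**2) * S_H ** (-3.0 / 14.0)

gdr = gamma_dr_threshold(a0=0.05, L0=0.25, d_i=0.7, v_A=4.0e3, eta=100.0)
print(f"f_threshold = gamma_dr / (2*pi) ~ {gdr / (2 * math.pi) / 1e3:.0f} kHz")
```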
## References
* Antiochos et al. (2007) Antiochos, S. K., DeVore, C. R., Karpen, J. T. & Mikić, Z. 2007 Structure and Dynamics of the Sun’s Open Magnetic Field. The Astrophysical Journal 671 (1), 936–946.
* Antiochos et al. (2011) Antiochos, S. K., Mikić, Z., Titov, V. S., Lionello, R. & Linker, J. A. 2011 A MODEL FOR THE SOURCES OF THE SLOW SOLAR WIND. The Astrophysical Journal 731 (2), 112.
* Attico et al. (2000) Attico, N., Califano, F. & Pegoraro, F. 2000 Fast collisionless reconnection in the whistler frequency range. Physics of Plasmas 7 (6), 2381–2387.
* Bale et al. (2019) Bale, S. D., Badman, S. T., Bonnell, J. W., Bowen, T. A., Burgess, D., Case, A. W., Cattell, C. A., Chandran, B. D.G. G., Chaston, C. C., Chen, C. H.K. K., Drake, J. F., de Wit, T. Dudok, Eastwood, J. P., Ergun, R. E., Farrell, W. M., Fong, C., Goetz, K., Goldstein, M., Goodrich, K. A., Harvey, P. R., Horbury, T. S., Howes, G. G., Kasper, J. C., Kellogg, P. J., Klimchuk, J. A., Korreck, K. E., Krasnoselskikh, V. V., Krucker, S., Laker, R., Larson, D. E., MacDowall, R. J., Maksimovic, M., Malaspina, D. M., Martinez-Oliveros, J., McComas, D. J., Meyer-Vernet, N., Moncuquet, M., Mozer, F. S., Phan, T. D., Pulupa, M., Raouafi, N. E., Salem, C., Stansby, D., Stevens, M., Szabo, A., Velli, M., Woolley, T. & Wygant, J. R. 2019 Highly structured slow solar wind emerging from an equatorial coronal hole. Nature 576 (7786), 237–242.
* Bavassano & Bruno (1989a) Bavassano, B. & Bruno, R. 1989a Evidence of local generation of Alfvénic turbulence in the solar wind. Journal of Geophysical Research 94 (A9), 11977.
* Bavassano & Bruno (1989b) Bavassano, B. & Bruno, R. 1989b Large-scale solar wind fluctuations in the inner heliosphere at low solar activity. Journal of Geophysical Research 94 (A1), 168.
* Bavassano et al. (1982) Bavassano, B., Dobrowolny, M., Mariani, F. & Ness, N. F. 1982 Radial evolution of power spectra of interplanetary Alfvénic turbulence. Journal of Geophysical Research 87 (A5), 3617.
* Bavassano et al. (1997) Bavassano, Bruno, Woo, Richard & Bruno, Roberto 1997 Heliospheric plasma sheet and coronal streamers. Geophysical Research Letters 24 (13), 1655–1658.
* Belcher & Davis (1971) Belcher, J. W. & Davis, Leverett 1971 Large-amplitude Alfvén waves in the interplanetary medium, 2. Journal of Geophysical Research 76 (16), 3534–3563.
* Bhat & Loureiro (2018) Bhat, Pallavi & Loureiro, Nuno F. 2018 Plasmoid instability in the semi-collisional regime. Journal of Plasma Physics 84 (6), arXiv: 1804.05145.
* Brueckner et al. (1995) Brueckner, G. E., Howard, R. A., Koomen, M. J., Korendyke, C. M., Michels, D. J., Moses, J. D., Socker, D. G., Dere, K. P., Lamy, P. L., Llebaria, A., Bout, M. V., Schwenn, R., Simnett, G. M., Bedford, D. K. & Eyles, C. J. 1995 The Large Angle Spectroscopic Coronagraph (LASCO). Solar Physics 162 (1-2), 357–402.
* Coleman et al. (1968) Coleman, Paul J., Jr. 1968 Turbulence, Viscosity, and Dissipation in the Solar-Wind Plasma. The Astrophysical Journal 153, 371.
* Cranmer et al. (2017) Cranmer, Steven R., Gibson, Sarah E. & Riley, Pete 2017 Origins of the Ambient Solar Wind: Implications for Space Weather. Space Science Reviews 212 (3-4), 1345–1384.
* Crooker et al. (2000) Crooker, N. U., Shodhan, S., Gosling, J. T., Simmerer, J., Lepping, R. P., Steinberg, J. T. & Kahler, S. W. 2000 Density extremes in the solar wind. Geophysical Research Letters 27 (23), 3769–3772.
* DeForest et al. (2018) DeForest, C. E., Howard, R. A., Velli, M., Viall, N. & Vourlidas, A. 2018 The Highly Structured Outer Solar Corona. The Astrophysical Journal 862 (1), 18.
* Di Matteo et al. (2019) Di Matteo, S., Viall, N. M., Kepko, L., Wallace, S., Arge, C. N. & MacNeice, P. 2019 Helios Observations of Quasiperiodic Density Structures in the Slow Solar Wind at 0.3, 0.4, and 0.6 AU. Journal of Geophysical Research: Space Physics 124 (2), 837–860.
* Einaudi et al. (1999) Einaudi, Giorgio, Boncinelli, Paolo, Dahlburg, Russell B. & Karpen, Judith T. 1999 Formation of the slow solar wind in a coronal streamer. Journal of Geophysical Research: Space Physics 104 (A1), 521–534.
* Einaudi et al. (2001) Einaudi, Giorgio, Chibbaro, Sergio, Dahlburg, Russell B. & Velli, Marco 2001 Plasmoid Formation and Acceleration in the Solar Streamer Belt. The Astrophysical Journal 547 (2), 1167–1177.
* Endeve et al. (2004) Endeve, Eirik, Holzer, Thomas E. & Leer, Egil 2004 Helmet Streamers Gone Unstable: Two-Fluid Magnetohydrodynamic Models of the Solar Corona. The Astrophysical Journal 603 (1), 307–321.
* Endeve et al. (2003) Endeve, Eirik, Leer, Egil & Holzer, Thomas E. 2003 Two-dimensional Magnetohydrodynamic Models of the Solar Corona: Mass Loss from the Streamer Belt. The Astrophysical Journal 589 (2), 1040–1053.
* Endrizzi et al. (2021) Endrizzi, Douglass, Egedal, J., Clark, M., Flanagan, K., Greess, S., Milhone, J., Millet-Ayala, A., Olson, J., Peterson, E. E., Wallace, J. & Forest, C. B. 2021 Laboratory resolved structure of supercritical perpendicular shocks. Phys. Rev. Lett. 126, 145001.
* Fisk et al. (1998) Fisk, L.A., Schwadron, N.A. & Zurbuchen, T.H. 1998 On the Slow Solar Wind. Space Science Reviews 86 (1/4), 51–60.
* Fisk (2003) Fisk, L. A. 2003 Acceleration of the solar wind as a result of the reconnection of open magnetic flux with coronal loops. Journal of Geophysical Research 108 (A4), 1157.
* Flanagan et al. (2020) Flanagan, K., Milhone, J., Egedal, J., Endrizzi, D., Olson, J., Peterson, E. E., Sassella, R. & Forest, C. B. 2020 Weakly magnetized, hall dominated plasma couette flow. Phys. Rev. Lett. 125, 135001.
* Forest et al. (2015) Forest, C. B., Flanagan, K., Brookhart, M., Clark, M., Cooper, C. M., Désangles, V., Egedal, J., Endrizzi, D., Khalzov, I. V., Li, H., Miesch, M., Milhone, J., Nornberg, M., Olson, J., Peterson, E., Roesler, F., Schekochihin, A., Schmitz, O., Siller, R., Spitkovsky, A., Stemo, A., Wallace, J., Weisberg, D. & Zweibel, E. 2015 The Wisconsin Plasma Astrophysics Laboratory. Journal of Plasma Physics 81 (5), 345810501.
* Fu et al. (2017) Fu, Hui, Madjarska, Maria S., Xia, LiDong, Li, Bo, Huang, ZhengHua & Wangguan, Zhipeng 2017 Charge States and FIP Bias of the Solar Wind from Coronal Holes, Active Regions, and Quiet Sun. The Astrophysical Journal 836 (2), 169.
* Hare et al. (2017) Hare, J. D., Suttle, L., Lebedev, S. V., Loureiro, N. F., Ciardi, A., Burdiak, G. C., Chittenden, J. P., Clayson, T., Garcia, C., Niasse, N., Robinson, T., Smith, R. A., Stuart, N., Suzuki-Vidal, F., Swadling, G. F., Ma, J., Wu, J. & Yang, Q. 2017 Anomalous heating and plasmoid formation in a driven magnetic reconnection experiment. Physical Review Letters 118, 085001.
* Higginson & Lynch (2018) Higginson, A. K. & Lynch, B. J. 2018 Structured Slow Solar Wind Variability: Streamer-blob Flux Ropes and Torsional Alfvén Waves. The Astrophysical Journal 859 (1), 6.
* Kepko & Spence (2003) Kepko, L. & Spence, H. E. 2003 Observations of discrete, global magnetospheric oscillations directly driven by solar wind density variations. Journal of Geophysical Research 108 (A6), 1257.
* Kepko et al. (2002) Kepko, L., Spence, H. E. & Singer, H. J. 2002 ULF waves in the solar wind as direct drivers of magnetospheric pulsations. Geophysical Research Letters 29 (8), 39–1–39–4.
* Kepko et al. (2016) Kepko, L., Viall, N. M., Antiochos, S. K., Lepri, S. T., Kasper, J. C. & Weberg, M. 2016 Implications of L1 observations for slow solar wind formation by solar reconnection. Geophysical Research Letters 43 (9), 4089–4097.
* Köhnlein (1996) Köhnlein, W. 1996 Radial dependence of solar wind parameters in the ecliptic (1.1 $R_{\odot}$ – 61 AU). Solar Physics 169 (1), 209–213.
* Lapenta & Knoll (2005) Lapenta, Giovanni & Knoll, D. A. 2005 Effect of a Converging Flow at the Streamer Cusp on the Genesis of the Slow Solar Wind. The Astrophysical Journal 624 (2), 1049–1056.
* Lavraud et al. (2020) Lavraud, B., Fargette, N., Réville, V., Szabo, A., Huang, J., Rouillard, A. P., Viall, N., Phan, T. D., Kasper, J. C., Bale, S. D., Berthomier, M., Bonnell, J. W., Case, A. W., de Wit, T. Dudok, Eastwood, J. P., Génot, V., Goetz, K., Griton, L. S., Halekas, J. S, Harvey, P., Kieokaew, R., Klein, K. G., Korreck, K. E., Kouloumvakos, A., Larson, D. E., Lavarra, M., Livi, R., Louarn, P., MacDowall, R. J., Maksimovic, M., Malaspina, D., Nieves-Chinchilla, T., Pinto, R. F., Poirier, N., Pulupa, M., Raouafi, N. E., Stevens, M. L., Toledo-Redondo, S. & Whittlesey, P. L. 2020 The heliospheric current sheet and plasma sheet during parker solar probe’s first orbit. The Astrophysical Journal 894 (2), L19.
* Loureiro et al. (2012) Loureiro, N. F., Samtaney, R., Schekochihin, A. A. & Uzdensky, D. A. 2012 Magnetic reconnection and stochastic plasmoid chains in high-Lundquist-number plasmas. Physics of Plasmas 19 (4), 042303–042303, arXiv: 1108.4040.
* Luttrell & Richter (1987) Luttrell, A. H. & Richter, A. K. 1987 The Role of Alfvenic Fluctuations in MHD Turbulence Evolution between 0.3 and 1.0 AU. Sixth International Solar Wind Conference, Proceedings of the conference held 23-28 August, 1987 at YMCA of the Rockies, Estes Park, Colorado. Edited by V.J. Pizzo, T. Holzer, and D.G. Sime. NCAR Technical Note NCAR/TN-306+Proc, Volume 2, p. 335.
* Marsch & Tu (1990a) Marsch, E. & Tu, C.-Y. 1990a On the radial evolution of MHD turbulence in the inner heliosphere. Journal of Geophysical Research 95 (A6), 8211.
* Marsch & Tu (1990b) Marsch, E. & Tu, C.-Y. 1990b Spectral and spatial evolution of compressible turbulence in the inner solar wind. Journal of Geophysical Research 95 (A8), 11945.
* Neugebauer et al. (2016) Neugebauer, Marcia, Reisenfeld, Daniel & Richardson, Ian G. 2016 Comparison of algorithms for determination of solar wind regimes. Journal of Geophysical Research: Space Physics 121 (9), 8215–8227.
* Neugebauer & Snyder (1962) Neugebauer, Marcia & Snyder, Conway W 1962 Solar Plasma Experiment. Science 138 (3545), 1095–1097.
* Olson et al. (2016) Olson, J., Egedal, J., Greess, S., Myers, R., Clark, M., Endrizzi, D., Flanagan, K., Milhone, J., Peterson, E., Wallace, J., Weisberg, D. & Forest, C.B. 2016 Experimental Demonstration of the Collisionless Plasmoid Instability below the Ion Kinetic Scale during Magnetic Reconnection. Physical Review Letters 116 (25), 1–5.
* Parker (1958) Parker, Eugene N. 1958 Dynamics of the Interplanetary Gas and Magnetic Fields. The Astrophysical Journal 128, 664–676.
* Peterson et al. (2019) Peterson, Ethan E., Endrizzi, Douglass A., Beidler, Matthew, Bunkers, Kyle J., Clark, Michael, Egedal, Jan, Flanagan, Ken, McCollam, Karsten J., Milhone, Jason, Olson, Joseph, Sovinec, Carl R., Waleffe, Roger, Wallace, John & Forest, Cary B. 2019 A laboratory model for the Parker spiral and magnetized stellar winds. Nature Physics p. 1.
* Rouillard et al. (2011) Rouillard, A. P., Sheeley, N. R., Cooper, T. J., Davies, J. A., Lavraud, B., Kilpua, E. K. J., Skoug, R. M., Steinberg, J. T., Szabo, A., Opitz, A. & Sauvaud, J.-A. 2011 THE SOLAR ORIGIN OF SMALL INTERPLANETARY TRANSIENTS. The Astrophysical Journal 734 (1), 7.
* Sanchez-Diaz et al. (2019) Sanchez-Diaz, E., Rouillard, A. P., Lavraud, B., Kilpua, E. & Davies, J. A. 2019 In Situ Measurements of the Variable Slow Solar Wind near Sector Boundaries. The Astrophysical Journal 882 (1), 51, arXiv: 1911.09683.
* Sheeley et al. (2009) Sheeley, N. R., Lee, D. D.-H., Casto, K. P., Wang, Y.-M. & Rich, N. B. 2009 THE STRUCTURE OF STREAMER BLOBS. The Astrophysical Journal 694 (2), 1471–1480.
* Sheeley, Jr. et al. (1997) Sheeley, Jr., N. R., Wang, Y.-M., Hawley, S. H., Brueckner, G. E., Dere, K. P., Howard, R. A., Koomen, M. J., Korendyke, C. M., Michels, D. J., Paswaters, S. E., Socker, D. G., St. Cyr, O. C., Wang, D., Lamy, P. L., Llebaria, A., Schwenn, R., Simnett, G. M., Plunkett, S. & Biesecker, D. A. 1997 Measurements of Flow Speeds in the Corona Between 2 and 30$R_{\odot}$ . The Astrophysical Journal 484 (1), 472–478.
* Sovinec et al. (2004) Sovinec, C.R., Glasser, A.H., Gianakon, T.A., Barnes, D.C., Nebel, R.A., Kruger, S.E., Schnack, D.D., Plimpton, S.J., Tarditi, A. & Chu, M.S. 2004 Nonlinear magnetohydrodynamics simulation using high-order finite elements. Journal of Computational Physics 195 (1), 355–386.
* Sovinec & King (2010) Sovinec, C.R. & King, J.R. 2010 Analysis of a mixed semi-implicit/implicit algorithm for low-frequency two-fluid plasma modeling. Journal of Computational Physics 229 (16), 5803–5819.
* Stephenson & Walker (2002) Stephenson, J. A. E. & Walker, A. D. M. 2002 HF radar observations of Pc5 ULF pulsations driven by the solar wind. Geophysical Research Letters 29 (9), 8–1–8–4.
* Uzdensky & Loureiro (2016) Uzdensky, D. A. & Loureiro, N. F. 2016 Magnetic Reconnection Onset via Disruption of a Forming Current Sheet by the Tearing Instability. Physical Review Letters 116 (10), arXiv: 1411.4295.
* Uzdensky et al. (2010) Uzdensky, D. A., Loureiro, N. F. & Schekochihin, A. A. 2010 Fast magnetic reconnection in the plasmoid-dominated regime. Physical Review Letters 105, 235002.
* Viall et al. (2008) Viall, N. M., Kepko, L. & Spence, H. E. 2008 Inherent length-scales of periodic solar wind number density structures. Journal of Geophysical Research: Space Physics 113 (A7), n/a–n/a.
* Viall & Vourlidas (2015) Viall, Nicholeen M. & Vourlidas, Angelos 2015 PERIODIC DENSITY STRUCTURES AND THE ORIGIN OF THE SLOW SOLAR WIND. The Astrophysical Journal 807 (2), 176.
* Wang et al. (1997) Wang, Y.-M., Sheeley, Jr., N. R., Howard, R. A., Kraemer, J. R., Rich, N. B., Andrews, M. D., Brueckner, G. E., Dere, K. P., Koomen, M. J., Korendyke, C. M., Michels, D. J., Moses, J. D., Paswaters, S. E., Socker, D. G., Wang, D., Lamy, P. L., Llebaria, A., Vibert, D., Schwenn, R. & Simnett, G. M. 1997 Origin and Evolution of Coronal Streamer Structure During the 1996 Minimum Activity Phase. The Astrophysical Journal 485 (2), 875–889.
* Wang & Hess (2018) Wang, Y.-M. & Hess, P. 2018 Gradual Streamer Expansions and the Relationship between Blobs and Inflows. The Astrophysical Journal 859 (2), 135.
* Wang et al. (1998) Wang, Y.-M., Sheeley, Jr., N. R., Walters, J. H., Brueckner, G. E., Howard, R. A., Michels, D. J., Lamy, P. L., Schwenn, R. & Simnett, G. M. 1998 Origin of Streamer Material in the Outer Corona. The Astrophysical Journal 498 (2), L165–L168.
* Wu et al. (2000) Wu, S. T., Wang, A. H., Plunkett, S. P. & Michels, D. J. 2000 Evolution of Global-Scale Coronal Magnetic Field due to Magnetic Reconnection: The Formation of the Observed Blob Motion in the Coronal Streamer Belt. The Astrophysical Journal 545 (2), 1101–1115.
The University of Queensland, St. Lucia, Australia. Email: <EMAIL_ADDRESS>
# Seed-driven Document Ranking for Systematic Reviews: A Reproducibility Study
Shuai Wang (ORCID 0000-0002-0726-5250), Harrisen Scells (ORCID 0000-0001-9578-7157), Ahmed Mourad (ORCID 0000-0002-9423-9404), Guido Zuccon (ORCID 0000-0003-0271-5563)
###### Abstract
Screening or assessing studies is critical to the quality and outcomes of a
systematic review. Typically, a Boolean query retrieves the set of studies to
screen. As the set of studies retrieved is unordered, screening all retrieved
studies is usually required for high-quality systematic reviews. Screening
prioritisation, or in other words, ranking the set of studies, enables
downstream activities of a systematic review to begin in parallel. We
investigate a method that exploits seed studies – potentially relevant studies
used to seed the query formulation process – for screening prioritisation. Our
investigation aims to reproduce this method to determine if it is
generalisable on recently published datasets and determine the impact of using
multiple seed studies on effectiveness. We show that while we could reproduce
the original methods, we could not replicate their results exactly. However,
we believe this is due to minor differences in document pre-processing, not
deficiencies with the original methodology. Our results also indicate that our
reproduced screening prioritisation method, (1) is generalisable across
datasets of similar and different topicality compared to the original
implementation, (2) that when using multiple seed studies, the effectiveness
of the method increases using our techniques to enable this, (3) and that the
use of multiple seed studies produces more stable rankings compared to single
seed studies. Finally, we make our implementation and results publicly
available at the following URL: https://github.com/ielab/sdr.
###### Keywords:
Systematic Reviews, Document Ranking, Re-ranking.
## 1 Introduction
A systematic review is a focused literature review that synthesises all
relevant literature for a specific research topic. Identifying relevant
publications for medical systematic reviews is a highly tedious and costly
exercise, often involving multiple reviewers to screen (i.e., assess) upwards
of tens of thousands of studies. It is a standard practice to screen each
study retrieved for a systematic review by a Boolean query. However, in recent
years, there has been a dramatic rise in Information Retrieval methods that
attempt to re-rank this set of studies for a variety of reasons, such as
stopping the screening early (once a sufficient number of studies have been
found) or beginning downstream phases of the systematic review process earlier
(such as the acquisition of the full-text of studies). However, a known
problem with many of these methods is that they use a different query from the
Boolean query used to perform the initial literature search. Instead, most
methods typically resort to less informationally representative sources for
queries that can be used for ranking, such as the title of the systematic
review, e.g., [18] (containing narrow information about the retrieval topic),
or concatenating the clauses of the Boolean query together, e.g., [3]
(negating the structural information in Boolean clauses). We instead turn our
attention to methods that use more informative sources of information to
perform re-ranking.
Indeed, we focus this reproducibility study on one such method: seed-driven
document ranking (SDR) from Lee and Sun [16]. SDR exploits studies that are
known a priori to develop the research focus and search strategy for the
systematic review. These studies are often referred to as ‘seed studies’ and
are commonplace in the initial phases of the systematic review creation
process. This method and others such as CLF [21] (which directly uses the
Boolean query for ranking) have been shown to significantly outperform other
methods that use a naïve query representation. Despite this, the SDR method
was published when there was little data for those seeking to research this
topic, and there have been methods published since that did not include SDR as
a comparison. To this end, we devise the following research questions (RQs) to
guide our investigation into reproducing the SDR method:
RQ1
Does the effectiveness of SDR generalise beyond the CLEF TAR 2017 dataset? The
original study was only able to be investigated on a single dataset of
systematic review topics. In this study, we plan to use our replicated
implementation of SDR to examine the effectiveness of this method across more
recent datasets, and datasets that are more topically varied (CLEF TAR 2017
only contains systematic reviews about diagnostic test accuracy).
RQ2
What is the impact of using multiple seed studies collectively on the
effectiveness of SDR? The original study focused on two aspects of their
method: an initial ranking using a single seed study and an iterative ranking
which further uses the remaining seed studies one at a time. We focus on
investigating the first aspect concerning the impact of multiple seed studies
(multi-SDR) used collectively for input to produce an initial ranking.
RQ3
To what extent do seed studies impact the ranking stability of single- and
multi-SDR? In a recent study by Scells et al. [23] to generate Boolean queries
from seed studies, it was found that seed studies can have a considerable and
significant effect on the effectiveness of resulting queries. We perform a
similar study that aims to measure the variance in effectiveness of SDR in
single- and multi- seed study settings.
With the investigation into the above research questions, we will (1)
demonstrate the novelty of the method by performing experiments on more
datasets (RQ1), and experiments that reveal more about the effectiveness of
the method (RQ2, RQ3), (2) assess the impact of SDR towards the Information
Retrieval community and the wider systematic review community, (3) investigate
the reliability of SDR by comparing it to several baselines on publicly
available datasets, and (4) make our complete reproduced implementation of SDR
publicly available for others to use as a baseline in future work on re-
ranking for systematic reviews.
## 2 Replicating SDR
In the original paper of Lee and Sun, they devise two experimental settings
for SDR: an initial ranking of retrieved studies using a seed study and
iteratively re-ranking by updating the query used for SDR with one seed study
at a time to simulate the manual screening process. We focus on the initial
ranking stage for two reasons: (1) screening prioritisation is an accepted
practice in the systematic review creation process as all studies must still
be screened [4]; and (2) an effective initial ranking will naturally result in
a more effective and efficient re-ranking of studies, as more studies that are
relevant will be identified faster. The intuition for SDR is that relevant
studies are similar to each other. The original paper makes two important
observations about seed studies to support this intuition: (1) that relevant
studies are more similar to each other than they are to non-relevant studies;
and (2) that relevant studies share many clinical terms. These two
observations are used to inform the representation and scoring of studies,
given a seed study. We attempt to replicate these observations below to verify
both that our implementation follows the same steps to make similar
observations and whether the assumptions derived from them hold.
###### Observation 2.1.
For a given systematic review, its relevant documents share higher pair-wise
similarity than that of irrelevant documents.
We find that this observation is valid in our reproduction, as demonstrated by
Figure 1. In order to produce this plot, irrelevant studies were randomly
under-sampled ten times. The number of non-relevant studies is always the same
as the number of relevant studies for each topic. This means it is unlikely
that we will produce the exact result initially found for this observation by
Lee and Sun. Furthermore, one reason that the average pairwise similarity for
the relevant studies may not match the original results is that the textual
content of studies on PubMed may have changed or been updated. Rather than
using a dump of PubMed from 2017, we used the latest version of studies on
PubMed, as it is unknown the exact date that studies were extracted from
PubMed in the original paper, and the CLEF TAR dataset does not give an exact
date.
###### Observation 2.2.
Relevant documents for a given systematic review share high commonality in
terms of clinical terms.
We found that this observation is also valid in our reproduction, as
demonstrated in Figure 2. It can be seen that the commonality of terms for the
bag of words (BOW) and bag of clinical words (BOC) representations closely
match those reported by Lee and Sun. However, we also found that with some
minor modifications to the pre-processing of studies, we achieved a similar
(yet still lower) commonality for terms using the BOW representation. We
believe that the BOC representation shares a higher commonality of terms
because the vocabulary size is smaller than the BOW representation. Naturally,
with a smaller vocabulary, it is more likely for studies to share common
terms. When pre-processing studies using the method described in the original
paper, we find that BOC terms account for 4.6% of the vocabulary, while they
account for 31.2% using our pre-processing. In fact, our BOW vocabulary is
only 14.8% of their BOW vocabulary. Note that the BOC vocabulary is a subset of the BOW vocabulary.
Figure 1: Intra-similarity between relevant studies and irrelevant studies.
Figure 2: Distribution of terms in relevant studies.
### 2.1 Document Representation
Given Observation 1 about relevant studies for this task, Lee and Sun chose to
represent studies as a ‘bag of clinical words’ (BOC). They chose to use the
Unified Medical Language System (UMLS) as their ontology of clinical terms.
UMLS is an umbrella ontology that combines many common medical ontologies such
as SNOMED-CT and MeSH. In order to identify UMLS concepts (and therefore the
clinical terms) within the studies, Lee and Sun combine the outputs of the
NCBO Bioportal [20] API (http://data.bioontology.org/documentation) and
QuickUMLS [24]. We follow their process as described; however, we are not aware
of a way to set a specific version for the NCBO API. We use
QuickUMLS version 1.4.0 with UMLS 2016AB.
### 2.2 Term Weighting
SDR weights terms based on the intuition that terms in relevant studies are
more similar to each other (or occur with each other more frequently) than
non-relevant studies. The weight of an individual term in a seed study is
estimated by measuring to what extent it separates similar (pseudo-relevant)
and dissimilar (pseudo-non-relevant) studies. Formally, each term $t_{i}$ in a
seed document $d_{s}$ ($t_{i}\in d_{s}$) is weighted using the function
$\varphi(t_{i},d_{s})=\text{ln}\left(1+\frac{\gamma(D_{t_{i}},d_{s})}{\gamma(D_{\bar{t_{i}}},d_{s})}\right)$,
where $D_{t_{i}}$ represents the subset of candidate studies to be ranked
where $t_{i}$ appears, and $D_{\bar{t_{i}}}$ represents the subset of
candidate studies to be ranked where $t_{i}$ does not appear. The average
similarity between studies is computed as
$\gamma(D,d_{s})=\frac{1}{|D|}\sum_{d_{j}\in D}sim(d_{j},d_{s})$, where $sim$
is the cosine similarity between the vector representations of the candidate
study $d_{j}$ and the seed study $d_{s}$. We follow the original
implementation and represent studies as tf-idf vectors.
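A minimal sketch of this term-weighting scheme (our own illustration, not the authors' released implementation) is given below, using scikit-learn tf-idf vectors and cosine similarity; edge cases where a term appears in all or none of the candidate studies are simply skipped.

```python
# Sketch of the SDR term weight phi(t, d_s) = ln(1 + gamma(D_t, d_s) / gamma(D_not_t, d_s)),
# where gamma is the mean cosine similarity between the seed study and a subset
# of the candidate studies, all represented as tf-idf vectors.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def sdr_term_weights(seed_text, candidate_texts):
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(candidate_texts)      # tf-idf vectors of candidate studies
    seed_vec = vectorizer.transform([seed_text])       # tf-idf vector of the seed study
    sims = cosine_similarity(X, seed_vec).ravel()      # sim(d_j, d_s) for every candidate

    analyze = vectorizer.build_analyzer()
    weights = {}
    for term in set(analyze(seed_text)):
        if term not in vectorizer.vocabulary_:
            continue
        col = vectorizer.vocabulary_[term]
        has_term = np.asarray((X[:, col] > 0).todense()).ravel()
        if has_term.all() or not has_term.any():
            continue                                   # weight undefined in these edge cases
        gamma_with = sims[has_term].mean()             # gamma(D_t, d_s)
        gamma_without = sims[~has_term].mean()         # gamma(D_not_t, d_s)
        weights[term] = float(np.log(1.0 + gamma_with / max(gamma_without, 1e-12)))
    return weights
```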
### 2.3 Document Scoring
The original SDR implementation uses the query likelihood language model with
Jelinek-Mercer smoothing for scoring studies. Typically, this ranking function
is derived as indicated by QLM shown in Equation 1, where $c(t_{i},d_{s})$
represents the count of a term in a seed study, $c(t_{i},d)$ represents the
count of a term in a candidate study, $L_{d}$ represents the number of terms
in a study, $p(t_{i}|\mathbb{C})$ represents the probability of a term in a
background collection, and $\lambda$ is the Jelinek-Mercer smoothing
parameter. To incorporate the term weights as described in Subsection 2.2, the
original paper includes the $\varphi$ function in the document scoring function
as shown in Equation 1:
$score(d,d_{s})=\sum_{t_{i}\in
d,d_{s}}\overbrace{\varphi(t_{i},d_{s})}^{\text{Term
Weight}}\cdot\overbrace{c(t_{i},d_{s})\cdot\text{log}\left(1+\frac{1-\lambda}{\lambda}\cdot\frac{c(t_{i},d)}{L_{d}\cdot
p(t_{i}|\mathbb{C})}\right)}^{\text{QLM}}$ (1)
where $p(t_{i}|\mathbb{C})$ is estimated using maximum likelihood estimation
over the entire candidate set of studies $C$. In the original paper, when
additional seed studies were ranked in the top-$k$ set of candidate seed
studies (denoted as $d_{s^{\prime}}$), a re-ranking was initiated by expanding
each $t_{i}$ in $d_{s}$ with the new terms from $d_{s^{\prime}}$. For our
replication study, we only consider the initial ranking of candidate studies,
as an abundance of baseline methods can be used as a comparison for this task.
It is also arguably the most important step as a poor initial ranking will
naturally result in a less effective and less efficient re-ranking.
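A sketch of Eq. 1 over simple term-count dictionaries is shown below. It is our own illustration rather than the released code, and the smoothing value $\lambda$ is an assumed default since the tuned value is not restated here.

```python
# Sketch of the scoring function of Eq. 1 (SDR term weight times QLM with
# Jelinek-Mercer smoothing); lambda is an assumed default, not a tuned value.
import math
from collections import Counter

def collection_probabilities(candidate_counts):
    """Maximum-likelihood p(t|C) over the whole candidate set."""
    total = Counter()
    for counts in candidate_counts:
        total.update(counts)
    n = sum(total.values())
    return {t: c / n for t, c in total.items()}

def qlm_sdr_score(cand_counts, seed_counts, phi, p_coll, lam=0.7):
    """Eq. 1: the sum runs over terms that occur in both the candidate and the seed."""
    L_d = sum(cand_counts.values())
    score = 0.0
    for t, c_seed in seed_counts.items():
        c_doc = cand_counts.get(t, 0)
        if c_doc == 0 or t not in p_coll:
            continue
        smoothed = 1.0 + (1.0 - lam) / lam * c_doc / (L_d * p_coll[t])
        score += phi.get(t, 1.0) * c_seed * math.log(smoothed)  # default weight 1 if phi undefined
    return score

# Ranking sketch: score each candidate against the seed and sort descending, e.g.
#   scores = [qlm_sdr_score(c, seed_counts, phi, p_coll) for c in candidate_counts]
```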
### 2.4 Multi-SDR
One assumption in the original paper is that only a single seed study can be
used at a time for ranking candidate studies. We propose a modification by
studying the impact of using multiple seed studies collectively. In practice,
it is common for Boolean queries (i.e., the search strategies used to retrieve
the set of candidate studies we use for ranking) to be developed with a
handful of seed studies, not just a single seed study. We hypothesise that the
effectiveness of SDR will increase when multiple seed studies are used. Each
relevant study must be used as a seed study for ranking, as the seed studies
are not known in any of the collections we used. Therefore the average
performance across topics was recorded (i.e., leave-one-out cross-validation).
This study follows the methodology for the single-SDR method described in the
subsections above. How we adapt single-SDR for a multi-SDR setting, and how we
make this comparable to single-SDR is described as follows.
#### 2.4.1 Grouping Seed Studies
To study multi-SDR, we adopt a similar approach to the original paper;
however, we instead randomly group multiple seed studies together and perform
leave-one-out cross-validation over these groups. To account for any topic
differences that may impact performance, we use a sliding window across the
list of seed studies so that a seed study can appear in multiple groups. The
number of seed studies to fill each group was chosen to be 20% of the total
seed studies. Rather than use a fixed number of seed studies, choosing
different proportions simulates the use of seed studies in practice, i.e.,
different amounts of seed studies may be known before conducting a review.
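Our sliding-window grouping can be sketched as follows; the group size of 20% of the available seed studies follows the description above, while the unit window step is an implementation choice.

```python
# Sketch of the sliding-window grouping of seed studies described above.
import math

def seed_study_groups(seed_studies):
    k = max(1, math.ceil(0.2 * len(seed_studies)))   # group size: 20% of the seed studies
    return [seed_studies[i:i + k] for i in range(len(seed_studies) - k + 1)]

# e.g. 10 seed studies -> 9 overlapping groups of 2
print(seed_study_groups([f"s{i}" for i in range(10)]))
```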
#### 2.4.2 Combining Seed Studies for Multi-SDR
The way we exploit multiple seed studies for SDR is, we believe, similar to
how Lee and Sun used multiple seed studies in their relevance feedback
approach to SDR. We concatenate seed studies together such that the resulting
representation can be used directly with the existing single-SDR framework. We
acknowledge that there may be more sophisticated approaches to exploit multi-
SDR. However, we leave this as future work as it is out of the scope for this
reproducibility study.
When computing term weights for multi-SDR, we also encountered computational
infeasibility for large groups of seed studies. To this end, we randomly
under-sampled the number of irrelevant studies to 50 each time we compute
$\varphi$.
#### 2.4.3 Comparing Single-SDR to Multi-SDR
Directly comparing the results of multi-SDR to single-SDR is not possible due
to the leave-one-out cross-validation style of evaluation used for single-SDR.
To address this, we apply an oracle to identify the most effective single-SDR
run out of all the seed studies used for a given multi-SDR run in terms of
MAP. We then remove the other seed studies used in the multi-SDR run from the
oracle-selected single-SDR run so that both runs share the same number of
candidate studies for ranking.
## 3 Experimental Setup
### 3.1 Datasets
When the original SDR paper was published, only a single collection with
results of baseline method implementations was available. We intend to assess
the generalisability of their SDR method on several new collections which have
been released since. The collections we consider are:
CLEF TAR 2017 [9]
This is the original dataset that was used to study SDR. We include this
dataset to confirm that we achieve the same or similar results as the original
paper. This collection includes 50 systematic review topics on diagnostic test
accuracy – a type of systematic review that is challenging to create. The 50
topics are split into 20 training topics and 30 testing topics. In our
evaluation, we removed topics CD010653, CD010771, CD010386, CD012019, CD011549
as they contained only a single or no relevant studies to use as seed studies.
For our experiments using multiple seed studies, we further removed topics
CD010860, CD010775, CD010896, CD008643, CD011548, CD010438, CD010633, CD008686
due to low numbers of relevant studies.
CLEF TAR 2018 [11]
This collection adds 30 diagnostic test accuracy systematic reviews as topics
to the existing 2017 collection; however, it also removes eight because they
are not ‘reliable for training or testing purposes’. In total, this collection
contains 72 topics. Our evaluation only used 30 additional reviews of the 2018
dataset and removed topics CD012216, CD009263, CD011515, CD011602, and
CD010680 as they contained only a single or no relevant studies to use as seed
studies. We also removed topic CD009263 because we ran into memory issues when
running experiments on this topic due to many candidate documents (approx.
80,000). For our experiments using multiple seed studies, we removed topics
CD012083, CD012009, CD010864, CD011686, CD011420 due to low numbers of
relevant studies.
CLEF TAR 2019 [10]
This collection further develops on the previous years’ by also including
systematic reviews of different types. From this collection, we use the 38
systematic reviews of interventions (i.e., a different review type from
diagnostic test accuracy). (Although the overview paper claims there are 40
intervention topics, there are two topics that appear in both the training and testing splits.)
However, like the previous datasets, we ignore these splits and combine the
training and testing splits. We use this collection to study the
generalisability of SDR on other kinds of systematic reviews. In our
evaluation, we removed topics CD010019, CD012342, CD011140, CD012120, CD012521
as they contained only a single or no relevant studies to use as seed studies.
For our experiments using multiple seed studies, we further removed topics
CD011380, CD012521, CD009069, CD012164, CD007868, CD005253, CD012455 due to
low numbers of relevant studies.
### 3.2 Baselines
The baselines in the original paper included the best performing method from
the CLEF TAR 2017 participants, several seed-study-based methods, and
variations of the scoring function used by SDR. For our experiments, we
compare our reproduction of SDR to all of the original baselines that we have
also reproduced from the original paper. The baselines in the original paper
include: BM25-{BOW,BOC}, QLM-{BOW,BOC}, SDR-{BOW,BOC}, and AES-{BOW,BOC}. The
last method, AES, is an embedding-based method that averages the embeddings
for all terms in the seed studies. The AES method uses pre-trained word2vec
embeddings using PubMed and Wikipedia (as specified in the original paper). We
also include a variation that uses only PubMed embeddings (AES-P). Finally, we
also include the linear interpolation between SDR and AES, using the same
parameter as the original paper ($\alpha=0.3$). We use the same versions of
the pre-trained embeddings as the original paper.
### 3.3 Evaluation Measures
For comparison to the original paper, we use the same evaluation measures.
These include MAP, precision@k, recall@k, LastRel%, and Work Saved over
Sampling (WSS). LastRel is a measure introduced at CLEF TAR’17 [9]. It is
calculated as the rank position of the last relevant document. LastRel% is the
normalised percentage of studies that must be screened in order to obtain all
relevant studies. Work Saved Over Sampling; a measure initially proposed to
measure classification effectiveness [7], is calculated instead here, by
computing the fraction of studies that can be removed from screening to obtain
all relevant documents; i.e., $\text{WSS}=\frac{|C|-\text{LastRel}}{|C|}$.
Where $C$ is the number of studies originally retrieved (i.e., the candidate
set for re-ranking). For precision@k and recall@k, we report much deeper
levels of $k$: the original paper reported $k=\\{10,20,30\\}$; where we report
$k=\\{10,100,1000\\}$. Furthermore, we also report nDCG at these k-values, as
it provides additional information about relevant study rank positions. We
compute LastRel% and WSS using the scripts used in CLEF TAR 2017. For all
other evaluation measures we use trec_eval (version 9.0.7).
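For reference, LastRel% and WSS as defined above can be computed from a ranked run as in the sketch below; note that the reported results use the official CLEF TAR 2017 scripts rather than this snippet.

```python
# Sketch of LastRel% and WSS as defined above.
def lastrel_and_wss(ranking, relevant):
    """ranking: study ids in ranked order; relevant: set of relevant ids.
    Assumes at least one relevant study is retrieved."""
    last_rel = max(i + 1 for i, doc in enumerate(ranking) if doc in relevant)
    n = len(ranking)
    return last_rel / n, (n - last_rel) / n   # (LastRel%, WSS)

print(lastrel_and_wss(["d1", "d2", "d3", "d4"], {"d1", "d3"}))  # (0.75, 0.25)
```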
### 3.4 Document Pre-Processing
It is widely known that document pre-processing (e.g., tokenisation,
stopwords, or stemming) can have a profound effect on ranking performance [8].
Although the original paper provides information about the versions of the
libraries it uses for ranking, there were fewer details, such as how documents
were tokenised or which stopword list was used. We reached out to the original
authors to confirm the exact experimental settings. From the original paper,
documents were split on whitespace, and then stopwords were removed using nltk.
The modifications we made to the document pre-processing pipeline were that
documents were first pre-processed to remove punctuation marks and then
tokenised using gensim version 3.2.0 tokeniser. For stopwords, as the original
authors have not specified the nltk version, we used the latest version at the
time of publishing, version 3.6.3. Terms are then lowercased in all methods
except for AES. No stemming has been applied in either pre-processing
pipeline.
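Our modified pre-processing pipeline can be sketched as follows; the snippet is a simplified illustration and does not pin the exact library versions listed above.

```python
# Sketch of the modified pre-processing pipeline described above (simplified;
# the nltk stopword corpus is assumed to have been downloaded already).
import string
from gensim.utils import tokenize
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words("english"))

def preprocess(text, lowercase=True):
    """Strip punctuation, tokenise with gensim, drop nltk stopwords, lowercase."""
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = tokenize(text, lowercase=lowercase)     # lowercase=False for AES
    return [t for t in tokens if t.lower() not in STOPWORDS]
```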
## 4 Results
Before we investigate the three research questions of our reproducibility
study, we first examine the extent to which we were able to replicate the
results of Lee and Sun. In this study, we were unable to exactly replicate the
results due to what we believe to be minor differences in document pre-
processing and evaluation setup. Despite these differences, the results in
Table 1 show a similar performance across the baselines and evaluation
measures compared to what Lee and Sun originally reported in their paper for
our pre-processing pipeline.
The results comparing the document pre-processing pipeline for the BOW
representation as described by Lee and Sun (*-LEE) to our document pre-processing
pipeline show that the BOW baselines may not have been as strong as they would
have been had the original authors used a pipeline similar to ours. We find
that, although their baseline is statistically significantly worse than our best
performing method, our baseline is not significantly different from it.
Finally, we find that the SDR-BOW-AES-LEE method, which corresponds to their
most effective method, is significantly worse than our most effective method
for 2017, SDR-BOW-AES-P.
In terms of the BOC representation we were unable to identify a more effective
pipeline for extracting clinical terms. Here, we applied the clinical term
extraction tools over individual terms in the document (following the pre-
processing of Lee and Sun), and not the entire document. Although we find this
to be counter-intuitive, as tools like QuickUMLS and the NCBO API use text
semantics to match n-grams, the result of applying the tools to individual
terms has the effect of reducing the vocabulary of a seed study to the key
concepts.
Finally, comparing our evaluation setup to Lee and Sun, we find that there
were a number of topics in the CLEF TAR 2017 dataset that were incompatible
with SDR. Rather than attempting to replicate their results, we simply do not
compare their original results with ours, since we do not have access to their
run files or precise evaluation setup. Furthermore, when we compare the
results we report to those of the best performing participant at CLEF TAR 2017
that did not use relevance feedback [3], we remove the same topics from the
run file of this participant for fairness. Although this method cannot be
directly compared to, we can see that even relatively unsophisticated methods
that use seed studies such as BM25-BOW are able to outperform the method by
this participant.
### 4.1 Generalisability of SDR
We next investigate the first research question: Does the effectiveness of SDR
generalise beyond the CLEF TAR 2017 dataset? In Table 2, we can see that the
term weighting of SDR almost always increases effectiveness compared to using
only QLM, and that interpolation with AES can have further benefits to
effectiveness. However, we note that few of these results are statistically
significant.
While we are unable to include all of the results for space reasons, we find
that SDR-BOC-AES-P was not always the most effective SDR method. Indeed, on the
2019 dataset, SDR-BOW was the most effective. The reason for this may be the
difference in topicality of the 2019 dataset. This suggests not only that the
method of identifying clinical terms is unsuitable for these intervention
systematic review topics, but also that the interpolation between SDR and AES
may require dataset-specific tuning.
Method | MAP | Prec. | Prec. | Prec. | Recall | Recall | Recall | nDCG | nDCG | nDCG | LR% | WSS
---|---|---|---|---|---|---|---|---|---|---|---|---
| | 10 | 100 | 1000 | 10 | 100 | 1000 | 10 | 100 | 1000 | |
Sheffield-run-2[3] | 0.1706 | 0.1367 | 0.0703 | 0.0156 | 0.1759 | 0.5133 | 0.8353 | 0.2089 | 0.3342 | 0.4465 | 0.4660 | 0.5340
BM25-BOW-LEE | 0.1710† | 0.2027† | 0.0867† | 0.0195† | 0.1543 | 0.5118† | 0.8798† | 0.2439† | 0.3419† | 0.4770† | 0.4902† | 0.5098†
BM25-BOW | 0.1810 | 0.2128† | 0.0898† | 0.0200 | 0.1646 | 0.5232† | 0.8928 | 0.2560 | 0.3534† | 0.4899† | 0.4427† | 0.5573†
BM25-BOC | 0.1764† | 0.2145† | 0.0895† | 0.0200 | 0.1562 | 0.5215† | 0.8944 | 0.2539 | 0.3496† | 0.4871† | 0.4401† | 0.5599†
QLM-BOW-LEE | 0.1539† | 0.1846† | 0.0778† | 0.0184† | 0.1367† | 0.4664† | 0.8508† | 0.2198† | 0.3091† | 0.4454† | 0.4662† | 0.5338†
QLM-BOW | 0.1973 | 0.2360 | 0.0964 | 0.0203 | 0.1855 | 0.5464 | 0.9081 | 0.2827 | 0.3772 | 0.5100 | 0.3851 | 0.6149
QLM-BOC | 0.1894 | 0.2330 | 0.0951 | 0.0202 | 0.1809 | 0.5376 | 0.9032 | 0.2771 | 0.3684 | 0.5018 | 0.3936 | 0.6064
SDR-BOW-LEE | 0.1533† | 0.1777† | 0.0780† | 0.0185† | 0.1304† | 0.4710† | 0.8576† | 0.2142† | 0.3088† | 0.4460† | 0.4660† | 0.5340†
SDR-BOW | 0.1972 | 0.2264 | 0.0952 | 0.0204 | 0.1718 | 0.5398 | 0.9083 | 0.2739 | 0.3728 | 0.5081 | 0.3742 | 0.6258
SDR-BOC | 0.1953 | 0.2329 | 0.0974 | 0.0206 | 0.1751 | 0.5530 | 0.9151 | 0.2756 | 0.3751 | 0.5086 | 0.3689 | 0.6311
AES-BOW | 0.1516† | 0.1768† | 0.0785† | 0.0190† | 0.1369† | 0.4611† | 0.8794† | 0.2163† | 0.3106† | 0.4552† | 0.4549† | 0.5451†
AES-BOW-P | 0.1604† | 0.1872† | 0.0809† | 0.0193† | 0.1480† | 0.4954† | 0.8895† | 0.2274† | 0.3255† | 0.4669† | 0.4088† | 0.5912†
SDR-BOW-LEE-AES | 0.1716† | 0.2008† | 0.0870† | 0.0197 | 0.1484† | 0.5250† | 0.8988† | 0.2389† | 0.3429† | 0.4792† | 0.4148† | 0.5852†
SDR-BOW-AES | 0.1958 | 0.2309 | 0.0957 | 0.0203 | 0.1750 | 0.5568 | 0.9163 | 0.2756 | 0.3764 | 0.5090 | 0.3880† | 0.6120†
SDR-BOC-AES | 0.1964 | 0.2364 | 0.0972 | 0.0204 | 0.1770 | 0.5699 | 0.9195 | 0.2800 | 0.3813 | 0.5117 | 0.3830† | 0.6170†
SDR-BOW-LEE-AES-P | 0.1764† | 0.2058† | 0.0883† | 0.0199 | 0.1570 | 0.5349† | 0.9081† | 0.2448† | 0.3500† | 0.4865† | 0.3796† | 0.6204†
SDR-BOW-AES-P | 0.1983 | 0.2322 | 0.0961 | 0.0204 | 0.1740 | 0.5673 | 0.9206 | 0.2768 | 0.3812 | 0.5128 | 0.3608 | 0.6392
SDR-BOC-AES-P | 0.1984 | 0.2369 | 0.0970 | 0.0205 | 0.1788 | 0.5737 | 0.9241 | 0.2807 | 0.3837 | 0.5147 | 0.3566 | 0.6434
Table 1: Reproduction results of baselines and SDR methods on the CLEF TAR 2017 dataset. For BOW methods, the pre-processing pipeline used by Lee and Sun is denoted by ‘-LEE’. BOW methods that do not have this demarcation correspond to our pipeline. For AES methods, word2vec PubMed embeddings are denoted by ‘-P’. AES methods that do not have this demarcation correspond to word2vec embeddings that include PubMed and Wikipedia. Statistical significance (Student’s two-tailed paired t-test with Bonferroni correction, $p<0.05$) between the most effective method (SDR-BOC-AES-P) and all other methods is indicated by $\dagger$.
| Method | MAP | Prec. | Prec. | Prec. | Recall | Recall | Recall | nDCG | nDCG | nDCG | LR% | WSS
---|---|---|---|---|---|---|---|---|---|---|---|---|---
| | 10 | 100 | 1000 | 10 | 100 | 1000 | 10 | 100 | 1000 | | |
2017 | QLM | 0.1894 | 0.2330 | 0.0951 | 0.0202 | 0.1809 | 0.5376 | 0.9032 | 0.2771 | 0.3684 | 0.5018 | 0.3936 | 0.6064
SDR | 0.1953 | 0.2329 | 0.0974 | 0.0206 | 0.1751 | 0.5530 | 0.9151 | 0.2756 | 0.3751 | 0.5086 | 0.3689 | 0.6311
SDR-AES-P | 0.1984 | 0.2369 | 0.0970 | 0.0205 | 0.1788 | 0.5737 | 0.9241 | 0.2807 | 0.3837 | 0.5147 | 0.3566 | 0.6434
2018 | QLM-BOC | 0.2344 | 0.2594 | 0.1130 | 0.0219 | 0.1821 | 0.6214 | 0.9104 | 0.3141 | 0.4156 | 0.5312 | 0.3317† | 0.6683†
SDR | 0.2374 | 0.2549 | 0.1136 | 0.0221 | 0.1798 | 0.6176 | 0.9174 | 0.3117 | 0.4163 | 0.5351 | 0.3024 | 0.6976
SDR-AES-P | 0.2503 | 0.2688 | 0.1161 | 0.0222 | 0.1957 | 0.6036 | 0.9234 | 0.3259 | 0.4243 | 0.5445 | 0.2695 | 0.7305
2019 | QLM | 0.2614 | 0.2599 | 0.0881 | 0.0169 | 0.2748 | 0.7032 | 0.9297 | 0.3458 | 0.4700 | 0.5482 | 0.4085 | 0.5915
SDR | 0.2790 | 0.2663 | 0.0899 | 0.0169 | 0.3048 | 0.7151 | 0.9337 | 0.3594 | 0.4846 | 0.5602 | 0.3819 | 0.6181
SDR-AES-P | 0.2827 | 0.2667 | 0.0898 | 0.0168 | 0.2973 | 0.7174 | 0.9378 | 0.3649 | 0.4913 | 0.5672 | 0.3876 | 0.6124
Table 2: Generalisability of results on the CLEF TAR 2017, 2018 and 2019
datasets. Representations used in this table are all BOC. Statistical
significance (Student’s two-tailed paired t-test with Bonferroni correction,
$p<0.05$) between the most effective method (SDR-AES-P) and other methods is
indicated by $\dagger$.
### 4.2 Effect of Multiple Seed Studies
| Method | MAP | Prec. | Prec. | Prec. | Recall | Recall | Recall | nDCG | nDCG | nDCG | LR% | WSS
---|---|---|---|---|---|---|---|---|---|---|---|---|---
| | | 10 | 100 | 1000 | 10 | 100 | 1000 | 10 | 100 | 1000 | |
2017 | Single-BOC | 0.3116 | 0.4235 | 0.1463 | 0.0255 | 0.2219 | 0.6344 | 0.9469 | 0.4830 | 0.5330 | 0.6595 | 0.3699 | 0.6301
Single-BOW | 0.3098 | 0.4076 | 0.1465 | 0.0255 | 0.2158 | 0.6366 | 0.9472 | 0.4679 | 0.5312 | 0.6566 | 0.3687 | 0.6313
Multi-BOC | 0.4554† | 0.5804† | 0.1752† | 0.0272† | 0.2917† | 0.7151† | 0.9661† | 0.6817† | 0.6765† | 0.7835† | 0.3427 | 0.6573
Multi-BOW | 0.4610† | 0.5910† | 0.1762† | 0.0272† | 0.2951† | 0.7155† | 0.9659† | 0.6924† | 0.6805† | 0.7866† | 0.3450 | 0.6550
% Change | 47.4801 | 41.0234 | 20.0132 | 6.6705 | 34.1131 | 12.5557 | 2.0029 | 44.5398 | 27.5202 | 19.3035 | -6.8792 | 4.0283
2018 | Single-BOC | 0.3345 | 0.4443 | 0.1671 | 0.0285 | 0.2041 | 0.6181 | 0.9280 | 0.5011 | 0.5296 | 0.6551 | 0.2641 | 0.7359
Single-BOW | 0.3384 | 0.4433 | 0.1678 | 0.0286 | 0.2062 | 0.6197 | 0.9383 | 0.4955 | 0.5301 | 0.6579 | 0.2577 | 0.7423
Multi-BOC | 0.4779† | 0.6130† | 0.1979† | 0.0307† | 0.2821† | 0.6997† | 0.9592† | 0.7199† | 0.6823† | 0.7908† | 0.2394† | 0.7606†
Multi-BOW | 0.4809† | 0.6109† | 0.1978† | 0.0306† | 0.2813† | 0.6968† | 0.9585† | 0.7218† | 0.6835† | 0.7924† | 0.2396 | 0.7604
% Change | 42.5011 | 37.8814 | 18.1509 | 7.2657 | 37.3377 | 12.8217 | 2.7561 | 44.6754 | 28.8870 | 20.5797 | -8.1919 | 2.8990
2019 | Single-BOC | 0.3900 | 0.4249 | 0.1285 | 0.0221 | 0.3196 | 0.7261 | 0.9368 | 0.5365 | 0.6164 | 0.6897 | 0.4304 | 0.5696
Single-BOW | 0.3925 | 0.4418 | 0.1272 | 0.0222 | 0.3366 | 0.7243 | 0.9386 | 0.5516 | 0.6164 | 0.6916 | 0.4285 | 0.5715
Multi-BOC | 0.5341† | 0.5746† | 0.1533† | 0.0243† | 0.3962† | 0.7896† | 0.9622† | 0.7105† | 0.7458† | 0.8091† | 0.3852† | 0.6148†
Multi-BOW | 0.5374† | 0.5864† | 0.1521† | 0.0244† | 0.4031† | 0.7853† | 0.9616† | 0.7223† | 0.7466† | 0.8114† | 0.3877† | 0.6123†
% Change | 36.9305 | 33.9958 | 19.3948 | 9.9327 | 21.8599 | 8.5825 | 2.5819 | 31.6927 | 21.0510 | 17.3213 | -10.0189 | 7.5424
Table 3: Results comparing single-SDR and multi-SDR on the CLEF TAR 2017,
2018, and 2019 datasets. Note that the results for single-SDR are not directly
comparable to the above tables as explained in Section 2.4.3. Statistical
differences (Student’s paired two-tailed t-test, $p<0.05$) are indicated
pairwise between the single- and multi- SDR BOC and BOW methods for each year
(e.g., single-SDR-BOC-AES-P vs. multi-SDR-BOC-AES-P for 2017). % Change
indicates the average difference between single- and multi-{BOW+BOC}.
Next, we investigate the second research question: What is the impact of using
multiple seed studies collectively on the effectiveness of SDR? Firstly,
several topics were further removed for these experiments. Therefore, the
results of single-SDR in Table 3 are not directly comparable to the results in
Tables 1 and 2. In order to measure the effect multiple seed studies have on SDR
compared to single seed studies, we also remove the same topics for single-SDR.
We find that across all three datasets, compared to single-SDR, multi-SDR can
significantly increase the effectiveness. We also find that the largest
increases in effectiveness are seen on shallow metrics across all three CLEF
TAR datasets. This has implications for the use of SDR in practice, as
typically, multiple seed studies are available before conducting the screening
process. Therefore, when multiple seed studies are used for the initial
ranking process, active learning methods that iteratively rank unjudged
studies will naturally be more effective (as more relevant studies are
retrieved in the early rankings). However, we argue that the assumption, made by
Lee and Sun [16] and in other work such as Scells et al. [23], that relevant
studies are a good surrogate for seed studies may be weak, and that methods that
utilise relevant studies for this purpose overestimate effectiveness. In
reality, seed studies may not be relevant studies: they may be discarded once a
Boolean query has been formulated (e.g., they may not be randomised controlled
trials or may be unsuitable for inclusion in the review).
### 4.3 Variability of Seed Studies on Effectiveness
Figure 3: Topic-by-topic distribution of effectiveness (MAP) for the oracle-
selected single-SDR-BOC-AES-P method (panels (a)–(c): 2017, 2018, 2019) versus
multi-SDR-BOC-AES-P (panels (d)–(f): 2017, 2018, 2019).
Finally, we investigate the last research question: To what extent do seed
studies impact the ranking stability of single- and multi-SDR? We investigate
this research question by comparing the topic-by-topic distribution of
performance for the same results present in Table 3. These results are
visualised in Figure 3. That is, we compare the multi-SDR results to the
oracle single-SDR results, described in Section 2.4.3 so that we can fairly
compare the variance of one to the other. We find that the variance obtained
by multi-SDR is generally higher than that of single-SDR on the DTA systematic
review topics (Figure 3(a) vs. Figure 3(d) and Figure 3(b) vs. Figure 3(e)).
We compute the mean variance across all topics and find that the variance of
multi-SDR (4.49e-2) is 10.89% higher than the single-SDR result (4.44e-2) for
the 2017 dataset, and 11.76% higher for the 2018 dataset (single: 3.43e-2;
multi: 4.17e-2). For the 2019 dataset, we find that the variance of multi-SDR
(7.93e-2) is 6.51% lower than that of single-SDR (8.48e-2).
However, when we randomly sample seed studies from each group for single-SDR,
we find that the variance of multi-SDR is significantly lower: 53.2% average
decrease across 2017, 2018, and 2019. For space reasons, we do not include the
full results. This suggests that the choice of seed study is considerably more
important for single-SDR than for multi-SDR and that multi-SDR produces much
more stable rankings, regardless of the seed studies chosen for re-ranking.
## 5 Related Work
Currently, it is a requirement for most high-quality systematic reviews to
retrieve literature using a Boolean query [4, 6]. Given that a Boolean query
retrieves studies in an unordered set, it is also a requirement that all of
the studies must be screened (assessed) for inclusion in the systematic review
[4]. It is currently becoming more common for a ranking to be induced over
this set of studies in order to begin downstream processes of the systematic
review earlier [19], e.g., acquiring the full-text of studies or results
extraction. This ranking of studies has come to be known as ‘screening
prioritisation’, as popularised by the CLEF TAR tasks which aimed to automate
these early stages of the systematic review creation pipeline [9, 11, 10]. As
a result, in recent years there has been an uptake in Information Retrieval
approaches to enable screening prioritisation [18, 5, 3, 25, 2, 22, 16, 15, 1,
27, 21]. The vast majority of screening prioritisation approaches use a
representation different from the original Boolean query for ranking. Often, a
separate query must be used to perform ranking, which may not represent the same
information need as the Boolean query. Instead, the SDR method by Lee and Sun
[16] forgoes the query altogether and uses studies that have a high
likelihood of relevance, seed studies [6], to rank the remaining studies.
These are studies that are known prior to the query formulation step. The
use of documents for ranking is similar to the task of query-by-document [26,
17] which has also been used extensively in domain-specific applications [12,
14, 13]. However, as Lee and Sun note, the majority of these methods try to
extract key phrases or concepts from these documents to use for searching. SDR
differentiates itself from these as the intuition is that the entire document
is a relevance signal, rather than certain meaningful sections. Given the
relatively short length of the documents considered here (i.e., abstracts of
studies), this intuition is more meaningful than in settings where documents
may be much longer.
## 6 Conclusions
We reproduced the SDR for systematic reviews method by Lee and Sun [16] on all
the available CLEF TAR datasets [9, 11, 10]. Across all three of these
datasets, we found that the 2017 and 2018 datasets share a similar trend in
results, in contrast to the 2019 dataset. We believe that this is due to topical
differences between the datasets and that proper tuning of SDR would produce
results on the 2019 dataset that better align with those seen in 2017 and 2018.
We also performed
several pre-processing steps that revealed that the BOW representation of
relevant studies could also share a relatively high commonality of terms
compared to the BOC representation. Furthermore, we found that the BOC
representation for SDR is generally beneficial and that term weighting
generally improves the effectiveness of SDR. We also found that multi-SDR was
able to outperform single-SDR consistently. Our results also used an oracle to
select the most effective seed studies to compare multi-SDR to single-SDR.
This means that the actual gap in effectiveness between single-SDR and multi-
SDR may be considerably larger. Finally, in terms of the impact of seed
studies on ranking stability, we found that although multi-SDR was able to
achieve higher performance than single-SDR, multi-SDR generally had a higher
variance in effectiveness.
For future work, we believe that deep learning approaches such as BERT and
other transformer-based architectures will provide richer document
representations that may better discriminate relevant from non-relevant
studies. Finally, we believe that the technique used to sample seed studies in
the original paper and this reproduction paper may overestimate the actual
effectiveness. This is because a seed study is not necessarily a relevant
study, and seed studies may be discarded after the query has been
formulated. For this reason, we suggest that a new collection is required that
includes the seed studies that were originally used to formulate the Boolean
query, in addition to the studies included in the analysis portion of the
systematic review.
Further investigation into SDR will continue to accelerate systematic review
creation, thus increasing and improving evidence-based medicine as a whole.
## References
* [1] Abualsaud, M., Ghelani, N., Zhang, H., Smucker, M.D., Cormack, G.V., Grossman, M.R.: A system for efficient high-recall retrieval. In: Proceedings of the 41st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1317–1320 (2018)
* [2] Alharbi, A., Briggs, W., Stevenson, M.: Retrieving and ranking studies for systematic reviews: University of Sheffield’s approach to CLEF eHealth 2018 Task 2. In: CEUR Workshop Proceedings: Working Notes of CLEF 2018: Conference and Labs of the Evaluation Forum. vol. 2125. CEUR Workshop Proceedings (2018)
* [3] Alharbi, A., Stevenson, M.: Ranking abstracts to identify relevant evidence for systematic reviews: The university of sheffield’s approach to CLEF eHealth 2017 task 2. In: CEUR Workshop Proceedings: Working Notes of CLEF 2017: Conference and Labs of the Evaluation Forum (2017)
* [4] Chandler, J., Cumpston, M., Li, T., Page, M.J., Welch, V.A.: Cochrane Handbook for Systematic Reviews of Interventions. John Wiley & Sons (2019)
* [5] Chen, J., Chen, S., Song, Y., Liu, H., Wang, Y., Hu, Q., He, L., Yang, Y.: ECNU at 2017 eHealth task 2: Technologically assisted reviews in empirical medicine. In: CEUR Workshop Proceedings: Working Notes of CLEF 2017: Conference and Labs of the Evaluation Forum (2017)
* [6] Clark, J.: Systematic reviewing. In: Suhail A. R. Doi, G.M.W. (ed.) Methods of Clinical Epidemiology (2013)
* [7] Cohen, A., Hersh, W., Peterson, K., Yen, P.: Reducing workload in systematic review preparation using automated citation classification. Journal of the American Medical Informatics Association 13(2), 206–219 (2006)
* [8] Croft, W.B.: Combining approaches to information retrieval. In: Advances in Information Retrieval, pp. 1–36. Springer (2002)
* [9] Kanoulas, E., Li, D., Azzopardi, L., Spijker, R.: CLEF 2017 technologically assisted reviews in empirical medicine overview. In: CEUR Workshop Proceedings: Working Notes of CLEF 2017: Conference and Labs of the Evaluation Forum (2017)
* [10] Kanoulas, E., Li, D., Azzopardi, L., Spijker, R.: CLEF 2019 technology assisted reviews in empirical medicine overview. In: CEUR Workshop Proceedings: Working Notes of CLEF 2018: Conference and Labs of the Evaluation Forum. vol. 2380 (2019)
* [11] Kanoulas, E., Spijker, R., Li, D., Azzopardi, L.: CLEF 2018 technology assisted reviews in empirical medicine overview. In: CEUR Workshop Proceedings: Working Notes of CLEF 2018: Conference and Labs of the Evaluation Forum (2018)
* [12] Kim, Y., Croft, W.B.: Diversifying query suggestions based on query documents. In: Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval. pp. 891–894 (2014)
* [13] Kim, Y., Croft, W.B.: Improving patent search by search result diversification. In: Proceedings of the 2015 International Conference on The Theory of Information Retrieval. pp. 201–210 (2015)
* [14] Kim, Y., Seo, J., Croft, W.B., Smith, D.A.: Automatic suggestion of phrasal-concept queries for literature search. Information Processing & Management 50(4), 568–583 (2014)
* [15] Lagopoulos, A., Anagnostou, A., Minas, A., Tsoumakas, G.: Learning-to-rank and relevance feedback for literature appraisal in empirical medicine. In: CEUR Workshop Proceedings: Working Notes of CLEF 2018: Conference and Labs of the Evaluation Forum. pp. 52–63. Springer (2018)
* [16] Lee, G.E., Sun, A.: Seed-driven document ranking for systematic reviews in evidence-based medicine. In: Proceedings of the 41st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 455–464 (2018)
* [17] Lv, Y., Moon, T., Kolari, P., Zheng, Z., Wang, X., Chang, Y.: Learning to model relatedness for news recommendation. In: Proceedings of the 20th international conference on World wide web. pp. 57–66 (2011)
* [18] Miwa, M., Thomas, J., O’Mara-Eves, A., Ananiadou, S.: Reducing systematic review workload through certainty-based screening. Journal of Biomedical Informatics 51, 242–253 (2014)
* [19] Norman, C.R., Leeflang, M.M., Porcher, R., Névéol, A.: Measuring the impact of screening automation on meta-analyses of diagnostic test accuracy. Systematic reviews 8(1), 243 (2019)
* [20] Noy, N.F., Shah, N.H., Whetzel, P.L., Dai, B., Dorf, M., Griffith, N., Jonquet, C., Rubin, D.L., Storey, M.A., Chute, C.G., et al.: Bioportal: ontologies and integrated data resources at the click of a mouse. Nucleic acids research 37(suppl_2), W170–W173 (2009)
* [21] Scells, H., Zuccon, G.: You can teach an old dog new tricks: Rank fusion applied to coordination level matching for ranking in systematic reviews. In: Proceedings of the 42nd European Conference on Information Retrieval. pp. 399–414 (2020)
* [22] Scells, H., Zuccon, G., Deacon, A., Koopman, B.: QUT ielab at CLEF eHealth 2017 technology assisted reviews track: Initial experiments with learning to rank. In: CEUR Workshop Proceedings: Working Notes of CLEF 2017: Conference and Labs of the Evaluation Forum (2017)
* [23] Scells, H., Zuccon, G., Koopman, B.: A comparison of automatic boolean query formulation for systematic reviews. Information Retrieval Journal pp. 1–26 (2020)
* [24] Soldaini, L., Goharian, N.: Quickumls: A fast, unsupervised approach for medical concept extraction. In: Medical Information Retrieval Workshop (2016)
* [25] Wu, H., Wang, T., Chen, J., Chen, S., Hu, Q., He, L.: Ecnu at 2018 ehealth task 2: Technologically assisted reviews in empirical medicine. Methods-a Companion to Methods in Enzymology 4(5), 7 (2018)
* [26] Yang, Y., Bansal, N., Dakka, W., Ipeirotis, P., Koudas, N., Papadias, D.: Query by document. In: Proceedings of the Second ACM International Conference on Web Search and Data Mining. pp. 34–43 (2009)
* [27] Zou, J., Li, D., Kanoulas, E.: Technology assisted reviews: Finding the last few relevant documents by asking Yes/No questions to reviewers. In: Proceedings of the 41st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 949–952 (2018)
# A Polynomial Kernel for Funnel Arc Deletion Set
Marcelo Garlet Milani
We thank the referees for their numerous helpful comments.
###### Abstract
In Directed Feedback Arc Set (DFAS) we search for a set of at most $k$ arcs
which intersect every cycle in the input digraph. It is a well-known open
problem in parameterized complexity to decide if DFAS admits a kernel of
polynomial size. We consider $\mathcal{C}$-Arc Deletion Set
($\mathcal{C}$-ADS), a variant of DFAS where we want to remove at most $k$
arcs from the input digraph in order to turn it into a digraph of a class
$\mathcal{C}$. In this work, we choose $\mathcal{C}$ to be the class of
_funnels_. Funnel-ADS is NP-hard even if the input is a DAG, but is fixed-
parameter tractable with respect to $k$. So far, no polynomial kernel was known
for this problem. Our main result is a kernel for Funnel-ADS with
$\mathcal{O}(k^{6})$ many vertices and $\mathcal{O}(k^{7})$ many arcs,
computable in $\mathcal{O}(nm)$ time, where $n$ is the number of vertices and
$m$ the number of arcs in the input digraph.
## 1 Introduction
In graph editing problems, we are given a (directed or undirected) graph $G$
and a number $k$, and we search for a set of at most $k$ vertices, edges or
arcs whose removal or addition produces a graph with a desired property. There
are several variants of these problems, and in this paper we consider the
problem of removing arcs from a digraph in order to obtain a digraph in a
given class $\mathcal{C}$. When $\mathcal{C}$ is the class of all directed
acyclic graphs (DAGs), the problem is called Directed Feedback Arc Set (DFAS).
If we remove vertices instead of arcs, the problem is called Directed Feedback
Vertex Set (DFVS).
There are simple reductions between DFAS and DFVS. We can reduce DFAS to DFVS
by taking the line digraph of the input. Removing a vertex from the reduced
instance corresponds to removing an arc from the input instance and vice
versa. For a reduction in the other direction, we split each vertex $v$ into
two vertices, say, $v_{o}$ and $v_{i}$, connect them with an arc
$(v_{i},v_{o})$ and shift all outgoing arcs of $v$ to $v_{o}$ and all incoming
arcs to $v_{i}$. In the context of _parameterized complexity_ , such
reductions are called _parameterized_ as the parameter $k$ is preserved.
Hence, parameterized results are often stated for DFVS.
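The vertex-splitting reduction from DFVS to DFAS can be made concrete with a short sketch. The arc-set representation of digraphs and the function name below are our own choices for illustration.

```python
def split_vertices(arcs):
    """DFVS -> DFAS reduction: split every vertex v into v_i and v_o joined by
    the arc (v_i, v_o); deleting that arc in the reduced digraph corresponds to
    deleting v in the original digraph.  `arcs` is an iterable of (u, v) pairs."""
    vertices = {x for arc in arcs for x in arc}
    reduced = {((v, "i"), (v, "o")) for v in vertices}    # one internal arc per vertex
    reduced |= {((u, "o"), (v, "i")) for (u, v) in arcs}  # original arcs, shifted
    return reduced


# Example: the 2-cycle u <-> v has a DFVS of size one (delete u), matching a
# DFAS of size one in the reduced digraph (delete the arc ((u, 'i'), (u, 'o'))).
print(sorted(split_vertices({("u", "v"), ("v", "u")})))
```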
In a breakthrough paper it was proven that there is an algorithm for DFVS with
running time $4^{k}k!\cdot n^{\mathcal{O}(1)}$ [4], showing that the problem
is _fixed-parameter tractable_ (FPT) with respect to $k$. After obtaining an
FPT result, it is natural to ask if the problem also admits a polynomial
_kernel_ , that is, if there is a polynomial-time algorithm which reduces the
input instance to an instance of size at most $\mathcal{O}(k^{c})$ for some
constant $c$. Such an algorithm is called a _kernelization_ algorithm.
The existence of a polynomial kernel for DFVS is a fundamental open question
in the field of parameterized complexity. One approach towards solving this
question is to consider different parametrizations or restrictions of the
input digraph. By considering progressively smaller parameters or more general
digraph classes, one can hope to eventually close the gap between the
restricted cases and the general case of DFVS.
On tournaments, DFVS admits a polynomial kernel [1]; this was extended to
generalizations of tournaments as well [3]. When parameterized by solution
size $k$ and the size $\ell$ of a treewidth $\eta$-modulator, DFVS admits a
kernel of size $(k\cdot\ell)^{\mathcal{O}(\eta^{2})}$ [10].
One can also restrict the output instead, that is, we can consider
$\mathcal{C}$-Vertex Deletion Set ($\mathcal{C}$-VDS) or $\mathcal{C}$-Arc
Deletion Set ($\mathcal{C}$-ADS), where, for a fixed digraph class
$\mathcal{C}$, we search for a set of at most $k$ vertices (arcs) whose
removal turns the input into a digraph in $\mathcal{C}$. Unlike DFVS and DFAS,
$\mathcal{C}$-VDS and $\mathcal{C}$-ADS can belong to different complexity
classes depending on $\mathcal{C}$: While Out-Forest-ADS can be solved in
polynomial time, Out-Forest-VDS is NP-hard [12]. Further, note that even if
$\mathcal{C^{\prime}}\subseteq\mathcal{C}$, a polynomial kernel for
$\mathcal{C}$-ADS does not immediately imply a polynomial kernel for
$\mathcal{C^{\prime}}$-ADS, and the implication also does not work in the
other direction. Indeed, while the problem is trivial when $\mathcal{C}$ is
the class of all independent sets or the class of all digraphs, it is NP-hard
if $\mathcal{C}$ is the class of DAGs, which contains all independent sets and
is a subclass of all digraphs. In a sense, the complexity landscapes of
$\mathcal{C}$-ADS and $\mathcal{C}$-VDS are much more fine-grained than the
landscape of DFVS, and may allow for smaller steps towards more general
results.
Out-Forest-ADS and Pumpkin-ADS can be solved in polynomial time [12], while
Out-Forest-VDS and Pumpkin-VDS are NP-hard and admit polynomial kernels [2,
12] of size $\mathcal{O}(k^{2})$ and $\mathcal{O}(k^{3})$, respectively [2].
$\mathcal{F}_{\eta}$-VDS admits a polynomial kernel for constant $\eta$, where
$\mathcal{F}_{\eta}$ is the class of all digraphs with (undirected) treewidth
at most $\eta$ [10].
In this work we consider Funnel-ADS and provide a polynomial kernel with
$\mathcal{O}(k^{6})$ vertices and $\mathcal{O}(k^{7})$ arcs. A digraph is a
funnel if it is a DAG and every source to sink path has an arc which is not in
any other source to sink path. Funnel-ADS is NP-hard even if the input is a DAG,
but it can be solved in $\mathcal{O}(3^{k}\cdot(n+m))$ time [11], where $k$ is
the solution size. Out-forests and pumpkins are also funnels, but there are
also dense funnels like complete bipartite digraphs (where all arcs go from
the first partition to the second but not back).
Our results rely on characterizations for funnels based on forbidden subgraphs
and on a “labeling” of the vertices [11]. We believe the techniques used here
can be generalized to other digraph classes which are also similarly
characterized, and hope they provide further insight about the classes
$\mathcal{C}$ for which $\mathcal{C}$-ADS admits a polynomial kernel.
## 2 Preliminaries
A (partial) function $f:A\rightarrow B$ is a set of tuples $(a,f(a))\in
A\times B$ where for every $a\in A$ there is at most one $b\in B$ with
$(a,b)\in f$ (that is, $f(a)=b$). We write $\operatorname{Dom}(f)$ for the set
of values $a\in A$ for which $f$ is defined. Hence, $\emptyset$ is the
undefined function, and $f^{\prime}\supseteq f$ if $f^{\prime}(x)=f(x)$ for
every $x\in\operatorname{Dom}(f)$. All our functions are _partial_ , that is,
$\operatorname{Dom}(f)$ is not necessarily $A$.
A _parameterized language_ $L$ is _fixed-parameter tractable_ with respect to
the parameter $k$ if there is some algorithm with running time $f(k)\cdot
n^{\mathcal{O}(1)}$ deciding whether $(x,k)\in L$, where $f$ is some
computable function, $n=\left|{x}\right|$ and $k$ is the parameter (refer to
[5, 6] for an introduction to parameterized complexity). We say that $L$
_admits a problem kernel_ if there is a polynomial-time algorithm which
transforms an instance $(x,k)$ into an instance $(x^{\prime},k^{\prime})$ such
that $(x,k)\in L$ if and only if $(x^{\prime},k^{\prime})\in L$,
$k^{\prime}\leq k$ and $\left|{x^{\prime}}\right|\leq f(k)$ for some
computable function $f$. If $f$ is a polynomial, we say that $L$ _admits a
polynomial kernel_ with respect to $k$.
When describing a kernelization algorithm, it is common to define _reduction
rules_. These rules have a _condition_ and an _effect_ , and we say that a
reduction rule is _applicable_ if the condition is true. The effect of the
reduction rule produces a new instance $(x^{\prime},k^{\prime})$ of the
problem, and a rule is said to be _safe_ if $(x^{\prime},k^{\prime})\in L$ if
and only if the original instance is in $L$. We refer the reader to [8, 9] for
surveys on kernelization and to [7] for a book on the topic.
We only consider directed graphs (digraphs) without loops or parallel arcs
(but we allow arcs in opposite directions) in this paper. Let $D$ be a
digraph. The set of arcs of $D$ is denoted by $A(D)$, and its set of vertices
is $V(D)$. The set of outneighbors (inneighbors) in $D$ of a vertex $v\in
V(D)$ is denoted by $\text{{{out}}}_{D}(v)$ ($\text{{{in}}}_{D}(v)$); the
outdegree (indegree) of $v$ is
$\text{{{outdeg}}}_{D}(v)=\left|{\text{{{out}}}_{D}(v)}\right|$
($\text{{{indeg}}}_{D}(v)=\left|{\text{{{in}}}_{D}(v)}\right|$). If the
digraph $D$ is clear from context, we omit it from the index. For a set
$U\subseteq V(D)$ we write $\text{{{out}}}(U)$ for the set
$\bigcup_{u\in U}\text{{{out}}}(u)\setminus U$ (and analogously for
$\text{{{in}}}(U)$). A vertex $v$ is a _source_ if $\text{{{indeg}}}(v)=0$,
and it is a _sink_ if $\text{{{outdeg}}}(v)=0$. We write $H\subseteq D$ if $H$ is a
_subgraph_ of $D$; the subgraph of $D$ _induced_ by $U$ is given by $D[U]$. We
write $D-X$ for the operation of deleting a set of vertices or arcs $X$ from
$D$. Similarly, we add a set of arcs or vertices to $D$ with $D+X$.
A _directed acyclic graph_ (DAG) is a digraph which does not contain any
directed cycle. A digraph $D$ is a funnel if $D$ is a DAG and for every path
$P$ from a source to a sink of $D$ of length at least one there is some arc
$a\in A(P)$ such that for any different path $Q$ from a (possibly different)
source to a sink we have $a\not\in A(Q)$.
Figure 1: $D_{1}$, a forbidden subgraph for funnels.
We repeat below several known characterizations for funnels, as they are
particularly useful for our results.
###### Theorem 2.1 ([11], Theorem 1).
Let $D$ be a DAG. The following statements are equivalent.
1. a.
$D$ is a funnel.
2. b.
$V(D)$ can be partitioned into two sets $F$ and $M$ such that: _(1)_ $F$
induces an out-forest; _(2)_ $M$ induces an in-forest; and _(3)_ $(M\times
F)\cap A(D)=\emptyset$.
3. c.
No digraph in $\mathcal{F}=\\{D_{i}\mid i\in\\{0,1,\dots\\}\\}$ is contained
in $D$ as a (not necessarily induced) subgraph, where (see Figure 1 for an
example)
* •
$V(D_{k})=\\{u_{1},u_{2},w_{1},w_{2}\\}\cup\\{v_{i}\mid 0\leq i\leq k\\}$, and
* •
$A(D_{k})=\\{(u_{1},v_{0}),(u_{2},v_{0}),(v_{k},w_{1}),(v_{k},w_{2})\\}\cup\\{(v_{i},v_{i+1})\mid
0\leq i\leq k-1\\}$
4. d.
$D$ does not contain $D_{0}$ as a butterfly minor.
The digraphs in $\mathcal{F}$ are called _forbidden subgraphs for funnels_.
For a digraph $D$ we define a _labeling_ as a function
$\ell:V(D)\rightarrow\\{\textsc{F},\textsc{M}\\}$. We say that $\ell$ is a
_funnel labeling_ for $D$ if $\operatorname{Dom}(\ell)=V(D)$, the set
$F=\\{v\in V(D)\mid\ell(v)=\textsc{F}\\}$ induces an out-forest in $D$, the
set $M=\\{v\in V(D)\mid\ell(v)=\textsc{M}\\}$ induces an in-forest in $D$ and
$(M\times F)\cap A(D)=\emptyset$. Due to Theorem 2.1(b), a digraph $D$ is a
funnel if and only if there exists a funnel labeling for $D$.
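To make this characterization concrete, the following sketch checks whether a given total labeling is a funnel labeling of a digraph that is already known to be a DAG. The arc-set representation, the dictionary encoding of labelings and the exhaustive check at the end are our own illustrative choices.

```python
from itertools import product


def is_funnel_labeling(arcs, label):
    """label maps every vertex to 'F' or 'M'; the digraph is assumed to be a DAG."""
    # no arc may go from an M-vertex to an F-vertex
    if any(label[u] == "M" and label[v] == "F" for (u, v) in arcs):
        return False
    for v in label:
        if label[v] == "F":
            # F induces an out-forest: at most one inneighbour inside F
            if sum(1 for (x, y) in arcs if y == v and label[x] == "F") > 1:
                return False
        else:
            # M induces an in-forest: at most one outneighbour inside M
            if sum(1 for (x, y) in arcs if x == v and label[y] == "M") > 1:
                return False
    return True


# The digraph D_1 of Figure 1 admits no funnel labeling, so it is not a funnel.
D1 = {("u1", "v0"), ("u2", "v0"), ("v0", "v1"), ("v1", "w1"), ("v1", "w2")}
V = sorted({x for a in D1 for x in a})
assert not any(is_funnel_labeling(D1, dict(zip(V, labs)))
               for labs in product("FM", repeat=len(V)))
```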
In the _feedback arc set_ problem, we are given a digraph $D$ and a
$k\in\mathds{N}$ as an input, and we search for a set $S\subseteq A(D)$ such
that $D-S$ is a DAG and $\left|{S}\right|\leq k$. We consider a variant of
this problem where we want $D-S$ to be a funnel instead, which is formally
defined below.
Funnel Arc Deletion Set (FADS)
Input: A digraph $D$ and a number $k\in\mathds{N}$.
Question: Is there a set $S\subseteq A(D)$ with $\left|S\right|\leq k$ such that $D-S$ is a funnel?
To make better use of Theorem 2.1(b), we consider a more general problem in
which some vertices might already be labeled with F or M, and the funnel we
obtain in the end must respect this labeling. Formally, the problem is defined
as follows.
Funnel Arc Deletion Labeling (FADL)
Input: A digraph $D$, a labeling $\ell:V(D)\rightarrow\\{\textsc{F},\textsc{M}\\}$ and a number $k\in\mathds{N}$.
Question: Are there a set $S\subseteq A(D)$ and a labeling $\hat{\ell}\supseteq\ell$ such that $\hat{\ell}$ is a funnel labeling for $D-S$ and $\left|{S}\right|\leq k$?
We say that $(D,\ell,k)$ is the _input instance_ and $(S,\hat{\ell})$ is a
solution for the input instance. This more general version of the problem
allows us to decide which label a vertex will take and encode this in the
instance itself. While technically not necessary, using FADL instead of FADS
simplifies the kernelization algorithm and also the proofs.
## 3 Basic reduction rules
We construct our kernelization algorithm by defining a series of reduction
rules and then showing that, if no reduction rule is applicable, the input
size is bounded in a polynomial of $k$. Our strategy is to partition the
vertex set into labeled and unlabeled vertices, then bound the number of
unlabeled vertices (Section 3.1) and use this to bound the number of labeled
vertices (Section 3.2) as well. In this section we define some reduction rules
which are useful both in Section 3.1 as well as in Section 3.2. For brevity,
we assume that a reduction rule is no longer applicable to the input instance
after it has been defined.
Let $(D,\ell,k)$ be the input instance. From Theorem 2.1(c) we can see that a
funnel has no vertex $v$ with $\text{{{indeg}}}(v)>1$ and
$\text{{{outdeg}}}(v)>1$. Further, $\text{{{indeg}}}(v)\leq 1$ if
$\ell(v)=\textsc{F}$, and $\text{{{outdeg}}}(v)\leq 1$ if
$\ell(v)=\textsc{M}$. Hence, by simply counting the vertices violating each of
these conditions, we can obtain a lower bound for the number of arcs
that need to be removed from $D$ in order to obtain a funnel. As removing one
arc changes the degree of two vertices, a “yes” instance has at most $2k$ such
vertices. The safety of the following reduction rule follows easily from
Theorem 2.1.
###### Reduction Rule 3.1 (Lower Bound).
Let $V_{I}\subseteq V(D)$ be the set of vertices with indegree greater than
one, let $V_{O}$ be the set of vertices with outdegree greater than one and
let $V_{X}=V_{O}\cap V_{I}$. Output a trivial “no” instance if
$\displaystyle\sum_{u\in
V_{O},\ell(u)=\textsc{M}}(\text{{{outdeg}}}(u)-1)+\sum_{u\in
V_{I},\ell(u)=\textsc{F}}(\text{{{indeg}}}(u)-1)+$ $\displaystyle\sum_{u\in
V_{X},u\not\in\operatorname{Dom}(\ell)}(\min\\{\text{{{indeg}}}(u),\text{{{outdeg}}}(u)\\}-1)>2k.$
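The arithmetic behind the rule can be sketched as follows; the arc-set digraph representation, the partial labeling as a dictionary and the function name are again our own illustrative choices.

```python
def lower_bound_exceeded(arcs, label, k):
    """Return True if RR 3.1 outputs a trivial "no" instance, i.e. the number of
    arc deletions forced by the degree conditions of Theorem 2.1 exceeds 2k."""
    vertices = {x for arc in arcs for x in arc}
    indeg = {v: sum(1 for (_, y) in arcs if y == v) for v in vertices}
    outdeg = {v: sum(1 for (x, _) in arcs if x == v) for v in vertices}
    total = 0
    for v in vertices:
        if outdeg[v] > 1 and label.get(v) == "M":               # v in V_O labeled M
            total += outdeg[v] - 1
        if indeg[v] > 1 and label.get(v) == "F":                # v in V_I labeled F
            total += indeg[v] - 1
        if indeg[v] > 1 and outdeg[v] > 1 and v not in label:   # v in V_X, unlabeled
            total += min(indeg[v], outdeg[v]) - 1
    return total > 2 * k
```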
The following reduction rule is based on [11], with some modifications since
the original reduction rule is applied as an intermediate step in an FPT
algorithm and is not safe for kernelization. For certain vertices it is
possible to decide which label they should receive in an optimal
solution. For example, a vertex with outdegree greater than $k+1$ can always
be labeled with F, as otherwise we would need to remove at least $k+1$ of its
outgoing arcs, which is not possible.
###### Reduction Rule 3.2 (Set Label).
Let $v\in V(D)$ be an unlabeled vertex.
Set $\ell(v)\coloneqq\textsc{F}$ if at least one of the following is true:
_(1)_ $\text{{{indeg}}}(v)=0$; _(2)_ $v$ has a single inneighbor $u$ and
$\ell(u)=\textsc{F}$; _(3)_ there are at least $\text{{{indeg}}}(v)+1$
vertices $u\in\text{{{out}}}(v)$ with $\ell(u)=\textsc{M}$ or
$\ell(u)=\textsc{F}\land\text{{{indeg}}}(u)=1$; or _(4)_
$\text{{{outdeg}}}(v)>k+1$.
Set $\ell(v)\coloneqq\textsc{M}$ if at least one of the following is true:
_(1)_ $\text{{{outdeg}}}(v)=0$; _(2)_ $v$ has a single outneighbor $u$ and
$\ell(u)=\textsc{M}$; _(3)_ there are at least $\text{{{outdeg}}}(v)+1$
vertices $u\in\text{{{in}}}(v)$ with $\ell(u)=\textsc{F}$ or
$\ell(u)=\textsc{M}\land\text{{{outdeg}}}(u)=1$; or _(4)_
$\text{{{indeg}}}(v)>k+1$.
###### Proof of safety of Set Label (RR 3.2).
Clearly, a solution for the reduced instance is also a solution for the
original instance. For the other direction, we consider only the case where we
set $\ell(v)\coloneqq\textsc{F}$, as the other case is symmetric. Let
$\ell_{r}$ be the labeling obtained by the reduction rule. Let
$(S,\hat{\ell})$ be a solution for the original instance. We set
$\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$\hat{\ell}_{r}(v)\coloneqq\textsc{F}$. If $\hat{\ell}(v)=\textsc{F}$, then
clearly $(S,\hat{\ell}_{r})$ is a solution for the reduced instance. So assume
$\hat{\ell}(v)=\textsc{M}$. This implies that $\text{{{outdeg}}}(v)\leq k+1$,
as otherwise $\left|S\right|>k$.
If $\text{{{indeg}}}(v)=0$, or $\text{{{indeg}}}(v)=1$ and there is some
$u\in\text{{{in}}}(v)$ with $\ell(u)=\textsc{F}$, then $\hat{\ell}_{r}$ is
clearly also a funnel labeling for $D-S$.
Let $U=\\{u\in\text{{{out}}}(v)\mid\ell(u)=\textsc{M}\text{ or
}\ell(u)=\textsc{F}\land\text{{{indeg}}}(u)=1\\}$. If
$\left|U\right|\geq\text{{{indeg}}}(v)+1$, we construct an $S_{r}$ from $S$ as
follows. We add all incoming arcs of $v$ to $S_{r}$ and remove from $S_{r}$
all outgoing arcs $(v,u)$ where $u\in U$. Since $\hat{\ell}(v)=\textsc{M}$, at
least $\text{{{outdeg}}}(v)-1\geq\text{{{indeg}}}(v)$ many outgoing arcs of
$v$ are in $S$. Hence, we remove at least $\text{{{indeg}}}(v)$ arcs from $S$
and add at most $\text{{{indeg}}}(v)$. Thus,
$\left|{S_{r}}\right|\leq\left|{S}\right|$.
The digraph $D-S_{r}$ does not contain cycles, as all incoming arcs of $v$
were removed, so any cycle in $D-S_{r}$ is also in $D-S$, which is a funnel.
To see that $\hat{\ell}_{r}$ is a funnel labeling of $D-S_{r}$, first note
that we can always keep arcs $(v,u)$ in $D-S_{r}$ where $\ell(u)=\textsc{M}$.
We can also keep arcs $(v,u)$ in $D-S_{r}$ where $\ell(u)=\textsc{F}$ and
$\text{{{indeg}}}(u)=1$. As $v$ has no incoming arcs in $D-S_{r}$, it lies in
an out-forest. Hence, $\hat{\ell}_{r}$ is a funnel labeling of $D-S_{r}$. ∎
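Conditions (1)–(4) for assigning the label F translate directly into a short check (the case for the label M is symmetric). The sketch below uses the same illustrative arc-set and dictionary representation as before and is not an optimised implementation.

```python
def can_set_label_F(arcs, label, k, v):
    """True if the unlabeled vertex v satisfies one of conditions (1)-(4) of
    Set Label (RR 3.2) for receiving the label F."""
    inn = [x for (x, y) in arcs if y == v]
    out = [y for (x, y) in arcs if x == v]
    indeg = {u: sum(1 for (_, y) in arcs if y == u) for u in out}
    forcing = [u for u in out
               if label.get(u) == "M" or (label.get(u) == "F" and indeg[u] == 1)]
    return (len(inn) == 0                                     # (1) v is a source
            or (len(inn) == 1 and label.get(inn[0]) == "F")   # (2) unique inneighbour labeled F
            or len(forcing) >= len(inn) + 1                   # (3) enough forcing outneighbours
            or len(out) > k + 1)                              # (4) outdegree too large
```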
Figure 2: A digraph which is not a funnel. Removing the arcs $(v,u)$ and
$(u,w)$ results in a funnel.
Replacing an arc in a funnel by a directed path cannot create any cycles nor
any forbidden subgraph for funnels. The next reduction rule reverses this
operation: We can contract certain paths where all vertices have in- and
outdegree one to a single arc. However, not every such path can be contracted:
In the example in Figure 2, if we remove $u$ and add the arc $(v,w)$, then the
size of an optimal solution set decreases by one. Some cases where contracting
an arc is safe are identified below.
###### Reduction Rule 3.3 (Dissolve Vertex).
Let $u,v,w$ be a path such that the following is true: _(1)_
$v,u\in\operatorname{Dom}(\ell)$ implies $\ell(v)=\ell(u)$; and _(2)_
$v,w\in\operatorname{Dom}(\ell)$ implies $\ell(v)=\ell(w)$.
If $\text{{{indeg}}}(v)=\text{{{outdeg}}}(v)=1$ and
$(\text{{{indeg}}}(w)=1\lor\text{{{outdeg}}}(u)=1)$, delete the vertex $v$ and
add the arc $(u,w)$.
###### Proof of safety of Dissolve Vertex (RR 3.3).
Let $D^{\prime}$ be the reduced digraph. It is easy to see that we can obtain
a solution for the reduced instance from a solution for the original instance:
If we remove $(u,v)$ or $(v,w)$ from $D$, we can instead remove $(u,w)$ from
$D^{\prime}$. As this is equivalent to removing $v$ from $D$, the result is
also a funnel and we can keep the same labeling (up to $v$, which is not in
$D^{\prime}$). If neither $(u,v)$ nor $(v,w)$ were removed, we simply keep the
same arc-deletion set and labeling.
Now let $(S_{r},\hat{\ell}_{r})$ be a solution for the reduced instance. We
start by setting $\hat{\ell}\coloneqq\hat{\ell}_{r}$. If $(u,w)\not\in S_{r}$,
we set $\hat{\ell}(v)\coloneqq\hat{\ell}_{r}(u)$. It is easy to see that
$\hat{\ell}$ is a funnel labeling for $D-S_{r}$.
If $(u,w)\in S_{r}$, we distinguish two cases. If
$\text{{{outdeg}}}_{D}(u)=1$, we set $\hat{\ell}(v)\coloneqq\hat{\ell}(u)$ and
$S\coloneqq(S_{r}\setminus\\{(u,w)\\})\cup\\{(v,w)\\}$. Regardless of whether
$\hat{\ell}(u)=\textsc{M}$ or $\hat{\ell}(u)=\textsc{F}$, we do not need to
remove $(u,v)$. Since the neighborhood of $w$ did not change and any cycle in
$D-S$ is also a cycle in $D^{\prime}-S_{r}$, we have that $\hat{\ell}$ is a
funnel labeling for $D-S$.
If $\text{{{outdeg}}}_{D}(u)>1$ and $\text{{{indeg}}}_{D}(w)=1$, we set
$\hat{\ell}(v)\coloneqq\hat{\ell}(w)$ and
$S\coloneqq(S_{r}\setminus\\{(u,w)\\})\cup\\{(u,v)\\}$. As before, we may keep
the arc $(v,w)$ in $D-S$, and $\hat{\ell}$ is a funnel labeling for $D-S$.
Since the case $\text{{{outdeg}}}_{D}(u)>1$ and $\text{{{indeg}}}_{D}(w)>1$
does not occur, this concludes the proof. ∎
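Dissolve Vertex (RR 3.3) can likewise be sketched on the same illustrative representation; the function name and the convention of returning the arc set unchanged when the rule does not apply are our own choices.

```python
def dissolve_vertex(arcs, label, u, v, w):
    """Apply Dissolve Vertex (RR 3.3) to the path u -> v -> w if its conditions
    hold: delete v and add the arc (u, w).  `arcs` is a set of (x, y) pairs."""
    labels_ok = ((u not in label or v not in label or label[u] == label[v]) and
                 (w not in label or v not in label or label[w] == label[v]))
    indeg_v = sum(1 for (_, y) in arcs if y == v)
    outdeg_v = sum(1 for (x, _) in arcs if x == v)
    indeg_w = sum(1 for (_, y) in arcs if y == w)
    outdeg_u = sum(1 for (x, _) in arcs if x == u)
    if labels_ok and indeg_v == outdeg_v == 1 and (indeg_w == 1 or outdeg_u == 1):
        return (arcs - {(u, v), (v, w)}) | {(u, w)}
    return arcs
```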
### 3.1 Bounding the number of unlabeled vertices
From Lower Bound (RR 3.1) we know there are few vertices with both in- and
outdegree greater than one. In this section we bound the number of unlabeled
vertices by considering the remaining unlabeled vertices, that is, vertices
$v$ with $\text{{{indeg}}}(v)\leq 1$ or $\text{{{outdeg}}}(v)\leq 1$. Our
strategy is to group such vertices into subgraphs of $D$ with specific
properties which we define later, and then develop reduction rules to both
bound the maximum number of such subgraphs and also their size in any “yes”
instance of FADL.
Even if the previous reduction rules are not applicable, there can still exist
some “large” subgraph $H\subseteq D$ for which there is a “small” set
$S\subseteq A(D)$ such that the weakly-connected component of $H$ is a funnel
in $D-S$. Our goal is to bound the size of such subgraphs $H$.
We first define a specific type of subgraph of $D$ which behaves like a funnel
in the sense that the degrees of the vertices match Theorem 2.1(b). We call
such subgraphs _local funnels_ and formally define them below.
###### Definition 3.1.
An induced subgraph $H\subseteq D$ is a _local funnel_ in $D$ if $H$ is a
funnel, $H$ has only one source and its vertex set can be partitioned into
$F\uplus M=V(H)$ such that $\text{{{indeg}}}_{D}(v)\leq 1$ for all $v\in F$;
$\text{{{outdeg}}}_{D}(v)\leq 1$ for all $v\in M$; and $(M\times F)\cap
A(H)=\emptyset$.
Unlike _local_ funnels, we might still have to remove many arcs from an
_induced_ funnel in $D$, as it can have, for example, several vertices $v$
with $\text{{{indeg}}}_{D}(v)>1$ and $\text{{{outdeg}}}_{D}(v)>1$. Our goal is
to bound the size of each unlabeled local funnel (that is, each local funnel
where none of the vertices have a label) and the number of unlabeled local
funnels in $D$. We start by “pushing” as many vertices as we can to the
neighborhood of the roots of the in- and out-forests of a local funnel.
Consider for example a path $u,v,w$ as in Figure 3, whose vertices have
indegree one but can have higher outdegree. Intuitively, a cycle containing
$v$ and $x$ must also contain $u$. To destroy this cycle, we can remove the
unique incoming arc of $u$, as this will potentially destroy further cycles
that contain $u$ but not $v$. Hence, replacing the arc $(v,x)$ with $(u,x)$ in
this case does not change the size of the solution.
By moving vertices in an out-tree towards its root $s$, we increase the
outdegree of $s$. If the outdegree of $s$ increases beyond $k+1$, we can apply
Set Label (RR 3.2) to $s$, giving it a label. By further applying Set Label
(RR 3.2) to the neighbors of $s$ which are in its out-tree, we can label the
entire tree. As we are only considering unlabeled local funnels in this
section, we can use the idea above to limit the branching of any in- or out-
tree of an unlabeled local funnel.
We provide here a somewhat more general reduction rule which can also be
applied if some vertices are labeled. Later, this reduction rule will again be
useful to bound the number of labeled vertices. However, we need to carefully
consider the possible labels of the vertices, as in some cases the rule would
not be safe.
Figure 3: Example application of Shift Neighbors (RR 3.4).
###### Reduction Rule 3.4 (Shift Neighbors).
Let $u,v,w$ be a path.
* •
If $\text{{{indeg}}}(u)=\text{{{indeg}}}(v)=\text{{{indeg}}}(w)=1$,
$(u,\textsc{M})\not\in\ell$, $(v,\textsc{M})\not\in\ell$ and there is an
$x\in\text{{{out}}}(v)\setminus\text{{{out}}}(u)$ with $w\neq x$, then remove
the arc $(v,x)$ and add the arc $(u,x)$.
* •
If $\text{{{outdeg}}}(u)=\text{{{outdeg}}}(v)=\text{{{outdeg}}}(w)=1$,
$(v,\textsc{F})\not\in\ell$, $(w,\textsc{F})\not\in\ell$ and there is an
$x\in\text{{{in}}}(v)\setminus\text{{{in}}}(w)$ with $u\neq x$, then remove
the arc $(x,v)$ and add the arc $(x,w)$.
Before proving that Shift Neighbors (RR 3.4) is safe, we need two simple
observations about certain cases where we can safely exchange two arcs or add
an arc.
###### Observation 1.
Let $H$ be a funnel with funnel labeling $\ell$ and let $x,u,v\in V(H)$ such
that $(v,x)\in A(H)$, $(u,x)\not\in A(H)$ and at least one of the following is
true: _(1)_ $\ell(u)=\textsc{F}$; or _(2)_ $\ell(u)=\textsc{M}=\ell(v)$ and
$\text{{{outdeg}}}_{H}(u)=0$. Let $H^{\prime}=H-(v,x)+(u,x)$. Then $\ell$ is
also a funnel labeling for $H^{\prime}$ if $H^{\prime}$ is a DAG.
###### Proof.
Assume $H^{\prime}$ is a DAG.
_Case 1:_ $\ell(u)=\textsc{F}$. If $\ell(x)=\textsc{M}$, then both $H+(u,x)$
and $H-(v,x)+(u,x)$ are funnels. If $\ell(x)=\textsc{F}$, then $H^{\prime}$ is
the result of switching the unique inneighbour of $x$ in the out-forest
induced by vertices labeled with F. This is clearly an out-forest, and hence
$\ell$ is a funnel labeling for $H^{\prime}$.
_Case 2:_ $\ell(u)=\textsc{M}=\ell(v)$ and $\text{{{outdeg}}}_{H}(u)=0$. As
$(v,x)\in A(H)$, we have $\ell(x)=\textsc{M}$. Since
$\text{{{outdeg}}}_{H}(u)=0$, $\ell$ is a funnel labeling for $H+(u,x)$ and,
hence, also for $H^{\prime}$.∎
###### Observation 2.
Let $H$ be a DAG and $x,u,v\in V(H)$ such that $\\{u\\}=\text{{{in}}}(v)$.
Then $H+(u,x)$ contains a cycle if and only if $H+(v,x)$ contains a cycle.
###### Proof.
Assume $H+(v,x)$ contains a cycle $C$. As $\\{u\\}=\text{{{in}}}(v)$, we get
$(u,v)\in A(C)$. Hence, replacing $(u,v)$ and $(v,x)$ with $(u,x)$ in $C$
produces a cycle in $H+(u,x)$.
If $H+(u,x)$ contains a cycle $C$, then we can replace $(u,x)$ by the path
from $u$ to $x$ in $H+(v,x)$ going through $v$. This constructs a cycle in
$H+(v,x)$, as desired. ∎
###### Proof of safety of Shift Neighbors (RR 3.4).
Consider the case where
$\text{{{indeg}}}(u)=\text{{{indeg}}}(v)=\text{{{indeg}}}(w)=1$,
$(u,\textsc{M})\not\in\ell$, $(v,\textsc{M})\not\in\ell$ and there is an
$x\in\text{{{out}}}(v)\setminus\text{{{out}}}(u)$ with $w\neq x$. The other
case follows analogously. Let $(D^{\prime},\ell,k)$ be the reduced instance
and $(S_{r},\hat{\ell}_{r})$ be a solution for it. We construct a solution
$(S,\hat{\ell})$ for the input instance $(D,\ell,k)$.
First observe that, if $(u,x)\in S_{r}$, we can replace it with $(v,x)$ in
$S$, which means that $D^{\prime}-S_{r}$ and $D-S$ are isomorphic. By setting
$\hat{\ell}\coloneqq\hat{\ell}_{r}$, we obtain the desired solution. If
$(u,x)\not\in S_{r}$, we consider the following cases.
_Case 1:_ $\hat{\ell}_{r}(v)=\textsc{F}$. We set
$\hat{\ell}\coloneqq\hat{\ell}_{r}$ and $S\coloneqq S_{r}$. Let
$D^{\star}=D-S$. Clearly, $D^{\star}=D^{\prime}-S_{r}-(u,x)+(v,x)$. As $u$ is
the only inneighbor of $v$, from Observation 2 we know $D^{\star}$ is a DAG.
From Observation 1, we know that $\hat{\ell}=\hat{\ell}_{r}$ is a funnel
labeling for $D^{\star}$.
_Case 2:_ $\hat{\ell}_{r}(v)=\textsc{M}=\hat{\ell}_{r}(u)$. If
$D^{\prime}-S_{r}+(u,v)$ is a DAG, we can assume that $(u,v)\not\in S_{r}$,
implying $(u,x)\in S_{r}$ (which was already considered).
If $D^{\prime}-S_{r}+(u,v)$ is not a DAG, then it contains a cycle with $v$
and $u$, implying $(u,v)\in S_{r}$. In particular,
$\text{{{indeg}}}_{D^{\prime}-S_{r}}(v)=0$. We set
$\hat{\ell}\coloneqq\hat{\ell}_{r}$ and $\hat{\ell}(v)\coloneqq\textsc{F}$.
Clearly, $\hat{\ell}$ is a funnel labeling for $D^{\prime}-S_{r}$. From
Observation 1 we have that $\hat{\ell}$ is a funnel labeling for $D-S_{r}$ as
well.
_Case 3:_ $\hat{\ell}_{r}(v)=\textsc{M}$ and $\hat{\ell}_{r}(u)=\textsc{F}$.
We set $\hat{\ell}\coloneqq\hat{\ell}_{r}$, $\hat{\ell}(v)\coloneqq\textsc{F}$
and $S\coloneqq S_{r}$. As $\\{u\\}=\text{{{in}}}_{D}(v)$ and
$\hat{\ell}_{r}(u)=\textsc{F}$, $\hat{\ell}$ is a funnel labeling for
$D^{\prime}-S_{r}$. Let $D^{\star}=D-S$.
From Observation 2 we know $D^{\star}=D^{\prime}-S_{r}-(u,x)+(v,x)$ is a DAG
since $D^{\prime}-S_{r}$ is a DAG. Hence, from Observation 1 we obtain that
$(S_{r},\hat{\ell})$ is a solution for the input instance. In all cases a
solution for the reduced instance implies a solution for the original
instance.
Now assume there is a solution $(S,\hat{\ell})$ for the original instance. We
show that there is solution $(S_{r},\hat{\ell}_{r})$ for the reduced instance.
As in the previous direction, if $(v,x)\in S$, we can replace it with $(u,x)$
and obtain the desired solution. So assume $(v,x)\not\in S$.
If $(u,v)\in S$, let $S_{1}=S\cup\\{(y,u)\\}$, where
$\\{y\\}=\text{{{in}}}(u)$. Clearly, $\hat{\ell}$ is a funnel labeling for
$D-S_{1}$. We set $\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$\hat{\ell}_{r}(u)\coloneqq\textsc{F}$. As $\text{{{indeg}}}_{D-S_{1}}(u)=0$,
$\hat{\ell}_{r}$ is also a funnel labeling for $D-S_{1}$. From Observation 1
we know that $\hat{\ell}_{r}$ is a funnel labeling for
$D_{1}=D^{\prime}-S_{1}$. Since
$\text{{{indeg}}}_{D_{1}}(v)=0=\text{{{indeg}}}_{D_{1}}(u)$ and
$\hat{\ell}_{r}(u)=\textsc{F}$, we have that $\hat{\ell}_{r}$ is a funnel
labeling for $D_{1}+(u,v)$. Hence, $(S\setminus\\{(u,v)\\},\hat{\ell}_{r})$ is
a solution for the reduced instance.
In the following we consider the remaining cases where $\\{(u,v),(v,x)\\}\cap
S=~{}\emptyset$. Note that the case $\hat{\ell}(u)=\textsc{M}$ and
$\hat{\ell}(v)=\textsc{F}$ does not happen under this assumption.
_Case 1:_ $\hat{\ell}(v)=\textsc{F}=\hat{\ell}(u)$. We set
$\hat{\ell}_{r}\coloneqq\hat{\ell}$ and $S_{r}\coloneqq S$. Clearly,
$D^{\prime}-S_{r}=D-S-(v,x)+(u,x)$. From Observation 2 there is no cycle in
$D-S+(u,x)$ and, hence, $D^{\prime}-S_{r}$ is a DAG. Thus, from Observation 1
we have that $\hat{\ell}_{r}$ is a funnel labeling for $D^{\prime}-S_{r}$.
_Case 2:_ $\hat{\ell}(v)=\textsc{M}=\hat{\ell}(u)$. Since $(v,x)\not\in S$, we
have $(v,w)\in S$ and $\hat{\ell}(x)=\textsc{M}$. Further, we know that
$D^{\prime}-S$ is a DAG due to Observation 2. Let $S_{1}=S\cup\\{(u,v)\\}$.
Clearly, $\hat{\ell}$ is a funnel labeling for $D-S_{1}$, and
$D^{\prime}-S_{1}$ is also a DAG. From Observation 1 we have that $\hat{\ell}$
is a funnel labeling for $D^{\prime}-S_{1}$.
We set $\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$\hat{\ell}_{r}(v)\coloneqq\textsc{F}$. Since
$\text{{{indeg}}}_{D^{\prime}-S_{1}}(w)=0=\text{{{indeg}}}_{D^{\prime}-S_{1}}(v)$,
we have that $\hat{\ell}_{r}$ is a funnel labeling for
$D^{\prime}-S_{1}+(v,w)$, regardless of the label of $w$. By setting
$S_{r}\coloneqq(S\setminus\\{(v,w)\\})\cup\\{(u,v)\\}$, we get that
$\hat{\ell}_{r}$ is a funnel labeling for $D^{\prime}-S_{r}$ and
$\left|{S_{r}}\right|\leq\left|{S}\right|$.
_Case 3:_ $\hat{\ell}(v)=\textsc{M}$ and $\hat{\ell}(u)=\textsc{F}$. Let
$S_{r}=S$ and $\hat{\ell}_{r}=\hat{\ell}$. Since $(u,v)\not\in S_{r}$, from
Observation 2 we know that $D-S_{r}-(v,x)+(u,x)$ is a DAG. From Observation 1
we have that $\hat{\ell}_{r}$ is a funnel labeling for $D^{\prime}-S_{r}$.
In all cases we found a solution $(S_{r},\hat{\ell}_{r})$ for the reduced
instance, concluding the proof. ∎
It is not always possible to exhaustively apply Shift Neighbors (RR 3.4): If
$u,v,w$ forms a cycle, we would shift $x$ indefinitely through this cycle. To
prevent this from happening, we need the following reduction rule:
###### Reduction Rule 3.5 (Break Cycle).
Let $C$ be a cycle in $D$. If every vertex in $C$ has indegree (outdegree) one
and either every vertex in $C$ is unlabeled or every vertex in $C$ is labeled
with F (M), then delete one arc of $C$ and decrease $k$ by one.
###### Proof of safety of Break Cycle (RR 3.5).
Let $(v,u)$ be the arc removed by the reduction rule. Clearly, a solution for
the reduced instance together with the arc $(v,u)$ is a solution for the
original instance. Let $(S,\hat{\ell})$ be a solution for the original
instance, and assume that $(v,u)\not\in S$. Let $(w,x)$ be an arc of $C$
contained in $S$. Without loss of generality, we assume that $(w,x)$ is the
only incoming arc of $x$. The case where it is the only outgoing arc of $w$
follows analogously.
We can assume that $\hat{\ell}(v)=\textsc{F}$ for all $v\in V(C)$: If they
were not labeled by $\ell$ when the rule was applied, then by repeatedly
applying Set Label (RR 3.2) (starting with $x$) we can label them with F.
Because $\text{{{indeg}}}_{D}(v)=1$ for every $v\in C$, it follows that $C$ is
the only cycle in $D$ using the arc $(w,x)$. Hence,
$D^{\prime}=D-S+(w,x)-(v,u)$ is a DAG. Further, as
$\hat{\ell}(w)=\hat{\ell}(x)=\textsc{F}$, it is easy to see that $\hat{\ell}$
is a funnel labeling for $D^{\prime}$. ∎
If Shift Neighbors (RR 3.4) is not applicable, then many vertices in a long
path $P$ in a local funnel must share a common out- or inneighbor $w$.
However, from Set Label (RR 3.2) we know that $w$ receives a label if it has
too many neighbors. The next and final reduction rule needed for bounding the
number of unlabeled vertices exploits this property and allows us to label
some vertex $u$ in $P$ if its predecessor $v$ in $P$ is adjacent to a labeled
vertex $w$.
###### Reduction Rule 3.6 (Labeled Neighbor).
Let $(v,u)$ be an arc between unlabeled vertices. Set
$\ell(u)\coloneqq\textsc{F}$ if $\text{{{indeg}}}(u)=\text{{{indeg}}}(v)=1$
and $\exists w\in\text{{{out}}}(v):\ell(w)=\textsc{M}$. Set
$\ell(v)\coloneqq\textsc{M}$ if $\text{{{outdeg}}}(u)=\text{{{outdeg}}}(v)=1$
and $\exists w\in\text{{{in}}}(u):\ell(w)=\textsc{F}$.
###### Proof of safety of Labeled Neighbor (RR 3.6).
Assume, without loss of generality, that the first case of the rule was
applied. The proof for the second case follows analogously (note that it is
not possible for both cases to be applied simultaneously). Let
$(D,\ell_{r},k)$ be the reduced instance. First note that
$\ell_{r}\supseteq\ell$, which means that a solution for the reduced instance
is already a solution for the original instance. Hence, it suffices to show
that a solution $(S,\hat{\ell})$ for the original instance implies a solution
$(S_{r},\hat{\ell}_{r})$ for the reduced instance.
If $\hat{\ell}(u)=\textsc{F}$, we set $\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$S_{r}\coloneqq S$ and we are done. So assume that $\hat{\ell}(u)=\textsc{M}$.
_Case 1:_ $(v,u)\in S$. We set $S_{r}\coloneqq S$,
$\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$\hat{\ell}_{r}(u)\coloneqq\textsc{F}$. As $\text{{{indeg}}}_{D-S}(u)=0$, we
know that $\hat{\ell}_{r}$ is also a funnel labeling for $D-S$.
_Case 2:_ $(v,u)\not\in S$ and $\hat{\ell}(v)=\textsc{F}$. We set
$S_{r}\coloneqq S$, $\hat{\ell}_{r}\coloneqq\hat{\ell}$ and
$\hat{\ell}_{r}(u)\coloneqq\textsc{F}$. As
$\hat{\ell}_{r}(v)=\textsc{F}=\hat{\ell}_{r}(u)$, we may keep the arc $(v,u)$
and $\hat{\ell}_{r}$ is a funnel labeling for $D-S_{r}$.
_Case 3:_ $(v,u)\not\in S$ and $\hat{\ell}(v)=\textsc{M}$. Then $(v,w)\in S$.
We set $\hat{\ell}_{r}\coloneqq\hat{\ell}$,
$\hat{\ell}_{r}(u)\coloneqq\textsc{F}$,
$\hat{\ell}_{r}(v)\coloneqq\textsc{F}$,
$S_{r}\coloneqq(S\setminus\\{(v,w)\\})\cup\\{(y,v)\\}$, where $y$ is the
unique inneighbor of $v$.
The digraph $D-S_{r}$ is a DAG: if it has a cycle, the cycle would have to use
the arc $(v,w)$, yet $\text{{{indeg}}}_{D-S_{r}}(v)=0$, a contradiction. We
now argue that $\hat{\ell}_{r}$ is a funnel labeling for $D-S_{r}$. Since
$\text{{{indeg}}}_{D-S_{r}}(v)=0$, $\text{{{indeg}}}_{D-S_{r}}(u)=1$ and
$\hat{\ell}_{r}(u)=\textsc{F}$, the vertex $v$ is the unique inneighbor of $u$
in the out-forest of the funnel $D-S_{r}$. Finally, as
$\hat{\ell}_{r}(w)=\textsc{M}$, the arc $(v,w)$ is allowed in the funnel.
Hence, $\hat{\ell}_{r}$ is a funnel labeling for $D-S_{r}$. In all cases we
find a solution $(\hat{\ell}_{r}$, $S_{r})$ for the reduced instance,
concluding the proof. ∎
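The rule is purely local, so it translates directly into code. The sketch below is our own illustration (same assumed dictionary-of-sets representation as above); it applies Labeled Neighbor (RR 3.6) to a single arc $(v,u)$.

```python
def apply_labeled_neighbor(in_nbrs, out_nbrs, label, v, u):
    """Sketch of Labeled Neighbor (RR 3.6) for the arc (v, u).

    Returns True if a label was set, False otherwise.
    """
    if v in label or u in label:      # the rule only concerns unlabeled endpoints
        return False
    # Set l(u) := F if indeg(u) = indeg(v) = 1 and some w in out(v) is labeled M.
    if (len(in_nbrs[u]) == 1 and len(in_nbrs[v]) == 1
            and any(label.get(w) == 'M' for w in out_nbrs[v])):
        label[u] = 'F'
        return True
    # Set l(v) := M if outdeg(u) = outdeg(v) = 1 and some w in in(u) is labeled F.
    if (len(out_nbrs[u]) == 1 and len(out_nbrs[v]) == 1
            and any(label.get(w) == 'F' for w in in_nbrs[u])):
        label[v] = 'M'
        return True
    return False
```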
###### Lemma 1.
Let $s$ be some source (sink) of some unlabeled local funnel $H$ in the
reduced digraph $D$. Let $P_{1},P_{2},\dots P_{a}$ be a sequence of paths in
$H$ starting (ending) at $s$ such that $\text{{{indeg}}}(u)\leq 1$
$(\text{{{outdeg}}}(u)\leq 1)$ for each $u$ in each $P_{i}$, and
$V(P_{j})\not\subseteq V(P_{i})$ for all $1\leq i,j\leq a$ where $i\neq j$.
Let $E$ be the set of end (start) points of all $P_{i}$. Then all of the
following hold.
1. (1)
$\text{{{outdeg}}}(u)>1$ $(\text{{{indeg}}}(u)>1)$ for any inner vertex $u$ of
any $P_{i}$.
2. (2)
$\text{{{out}}}(\bigcup_{i=1}^{a}V(P_{i})\setminus
E)\subseteq\text{{{out}}}(s)$
$(\text{{{in}}}(\bigcup_{i=1}^{a}V(P_{i})\setminus
E)\subseteq\text{{{in}}}(s))$,
3. (3)
$V(P_{i})\cap V(P_{j})=\\{s\\}$ for each $1\leq i,j\leq a$ where $i\neq j$,
and
4. (4)
$a\leq k+1$ and $\left|V(P_{i})\right|\leq k+2$ for each $1\leq i\leq a$.
###### Proof.
We consider the case where $s$ is a source of $H$. The other case follows
analogously.
Let $u$ be some inner vertex of some $P_{i}$ and $w$ the unique outneighbor of
$u$ in $P_{i}$. By assumption on $P_{i}$, we have $\text{{{indeg}}}_{D}(w)=1$.
As Dissolve Vertex (RR 3.3) is not applicable, we have that
$\text{{{outdeg}}}(u)>1$ (proving (1)). In particular, $u$ has some
outneighbor $x$ not in $P_{i}$.
Let $v$ be the inneighbor of $u$ in $P_{i}$. Since
$\text{{{indeg}}}_{D}(v)=\text{{{indeg}}}_{D}(u)=\text{{{indeg}}}_{D}(w)=1$
and Shift Neighbors (RR 3.4) is not applicable, we have
$x\in\text{{{out}}}_{D}(v)$. By repeating this argument to the predecessors of
$u$ in $P_{i}$, we prove (2) (and also that $a\leq k+1$, as
$\text{{{outdeg}}}_{D}(s)\leq k+1$ due to Set Label (RR 3.2)).
Assume there are two paths $P_{i}$ and $P_{j}$ intersecting at more than one
vertex. Let $u$ be the last vertex of the intersection. Note that, if $u$ is
the last vertex of $P_{i}$ or $P_{j}$, then one path has to contain the other.
Hence, $u$ has two outneighbors $w_{i}$ and $w_{j}$ lying on $P_{i}$ and
$P_{j}$, respectively, and $w_{i}\neq w_{j}$. But due to (2), we have
$w_{i},w_{j}\in\text{{{out}}}_{D}(s)$, implying
$\text{{{indeg}}}_{D}(w_{i})>1$ and $\text{{{indeg}}}_{D}(w_{j})>1$, a
contradiction to our assumptions on $P_{i}$ and $P_{j}$ (proving (3)).
Let $v_{1},v_{2},\dots,v_{m}$ be the sequence of vertices of a path $P_{i}$.
From (1) we know that there is some $w\in\text{{{out}}}_{D}(v_{m-1})$ outside
of $P_{i}$. We also have $w\in\text{{{out}}}_{D}(v_{j})$ for all $1\leq j\leq
m-1$, implying $\text{{{indeg}}}_{D}(w)\geq m-1$. If $m-1>k+1$, then
$\ell(w)=\textsc{M}$, as Set Label (RR 3.2) is not applicable. However, as
$\text{{{indeg}}}_{D}(v_{m-1})=1=\text{{{indeg}}}_{D}(v_{m-2})$,
$w\in\text{{{out}}}_{D}(v_{m-2})$ and Labeled Neighbor (RR 3.6) is not
applicable, we have $\ell(v_{m-1})=\textsc{F}$, a contradiction to the
assumption that $H$ is unlabeled. Hence, $m-1\leq k+1$, implying
$\left|{V(P_{i})}\right|\leq k+2$ (proving (4)). ∎
###### Lemma 2.
Let $H$ be an unlabeled local funnel in $D$. Then
$\left|{V(H)}\right|\in\mathcal{O}(k^{3})$.
###### Proof.
Let $s$ be the source of $H$. Consider a partitioning of the vertices of $H$
into an out-tree (since $H$ has only one source) and an in-forest where the
out-tree is maximal. Let $P_{1},P_{2},\dots,P_{a}$ be a sequence of paths such
that the out-tree is the union of all $P_{i}$. From Lemma 1 we know that
$a\leq k+1$ and that $V(P_{i})\cap V(P_{j})=\\{s\\}$ for all $i\neq j$. Let
$v_{i}$ be the endpoint of $P_{i}$ which is not $s$ and let
$X=\bigcup_{i=1}^{a}\text{{{out}}}(v_{i})$. As Set Label (RR 3.2) is not
applicable, we have $\left|{X}\right|\leq a\cdot k\leq k^{2}+k$. Further, as
$\left|{V(P_{i})}\right|\leq k+1$, the out-tree of $H$ has at most $(k+1)^{2}$
many vertices.
Let $Y$ be the set of sinks of $H$ lying on its in-forest. Since $H$ has only
one source $s$, for every sink $t\in Y$ there is a path $Q$ from $s$ to $t$.
Let $Q_{1},Q_{2},\dots,Q_{b}$ be the set of all paths from $s$ to each sink in
$Y$, and let $R_{i}$ be the subpath of $Q_{i}$ contained in the in-forest of
$H$.
Let $Q_{i}$ be one of such paths, and let $u$ be the first vertex of $R_{i}$
(which is not in any $P_{j}$). Note that this implies $\text{{{indeg}}}(u)>1$,
otherwise the out-tree would not be maximal. Due to Lemma 1 we have that no
other $R_{\ell}$ contains $u$ and if $u\not\in X$, then
$u\in\text{{{out}}}(s)$. Since Set Label (RR 3.2) is not applicable and each
distinct $R_{\ell}$ implies the existence of a distinct outneighbor of $s$,
there are at most $k$ paths $R_{\ell}$ not ending in a vertex in $X$. By
definition, all inneighbors of any vertex of $X$ lie in some $P_{i}$. This
implies that there are at most $k$ paths $Q_{\ell}$ not containing any vertex
of $X$.
If $Q_{i}$ contains a vertex of $X$, then $R_{i}$ ends on a vertex $u\in X$.
Due to Lemma 1, no other $R_{\ell}$ contains $u$. As $\left|{X}\right|\leq
k^{2}+k$, we have that there are at most $k^{2}+k$ paths $R_{\ell}$ which
contain some vertex of $X$. Adding both cases, we obtain that there are at
most $k^{2}+2k$ paths $Q_{\ell}$.
Due to Lemma 1, the subpath $R_{i}$ of $Q_{i}$ has at most $k+1$ vertices.
Since the in-forest of $H$ is the union over all $R_{i}$, we have that this
in-forest has at most $(k+1)(k^{2}+k)=k^{3}+2k^{2}+k$ many vertices. Thus,
$\left|{V(H)}\right|\leq(k+1)^{2}+(k+1)(k^{2}+2k)\in\mathcal{O}(k^{3})$,
concluding the proof. ∎
We conclude by bounding the number of maximal vertex-disjoint unlabeled local
funnels in $D$. Since we can always partition unlabeled vertices with in- or
outdegree at most one into local funnels, by bounding the number of local
funnels in such a partitioning, together with the bound on the size of each
local funnel, we obtain a bound for the number of unlabeled vertices with in-
or outdegree at most one.
Let $\mathcal{H}=\\{H_{1},H_{2},\dots H_{a}\\}$ be a set of maximal vertex-
disjoint unlabeled local funnels in $D$ (in this context, maximal means that
$H_{i}\cup H_{j}$ is not a local funnel for any two distinct
$H_{i},H_{j}\in\mathcal{H}$). Let $s_{i}$ be the unique source of $H_{i}$ for
each $i$. We now show that, if there is a solution removing at most $k$ arcs,
then $\left|{\mathcal{H}}\right|$ is “small”. By contraposition this means
that, if $\left|{\mathcal{H}}\right|$ is “large”, then we have a “no” instance
and can stop the kernelization process.
We start with the simple observation that cycles intersecting inside a local
funnel must also intersect outside it.
###### Observation 3.
Let $C_{i}$ and $C_{j}$ be two distinct cycles in $D$ such that $V(C_{i})\cap
V(C_{j})\subseteq H_{\ell}$ for some $H_{\ell}\in\mathcal{H}$. Then
$V(C_{i})\cap V(C_{j})=\emptyset$.
###### Proof.
Assume towards a contradiction that $V(C_{i})\cap V(C_{j})\neq\emptyset$. Let
$v\in V(C_{i})\cap V(C_{j})$ such that the predecessor of $v$ in $C_{i}$ is
different from the predecessor of $v$ in $C_{j}$. Then
$\text{{{indeg}}}(v)>1$. As $H_{\ell}$ is a local funnel, $v$ can only reach
one sink $t$ of $H_{\ell}$, implying that $t$ is in both $C_{i}$ and $C_{j}$.
The unique out-neighbor of $t$ is however not in $H_{\ell}$, but it has to be
in both $C_{i}$ and $C_{j}$, a contradiction to the assumption that
$V(C_{i})\cap V(C_{j})\subseteq V(H_{\ell})$. ∎
We partition the set of maximal unlabeled local funnels $\mathcal{H}$ into
three sets _(1)_ $\mathcal{F}=\\{H_{i}\in\mathcal{H}\mid\text{there is some
}v\in V(H_{i})\text{ with }\text{{{outdeg}}}_{D}(v)>1\\}$; _(2)_
$\mathcal{M}=\\{H_{i}\in\mathcal{H}\mid\text{{{indeg}}}_{D}(s_{i})>1\\}$; and
_(3)_
$\mathcal{X}=\\{H_{i}\in\mathcal{H}\mid\text{{{indeg}}}_{D}(s_{i})=1\text{ and
}\forall v\in V(H_{i}):\text{{{outdeg}}}_{D}(v)=1\\}$.
###### Lemma 3.
If there is a solution $(S,\hat{\ell})$ for $(D,\ell,k)$, then
$\left|{\mathcal{X}}\right|\leq 2k^{2}$.
###### Proof.
Let $H_{i}\in\mathcal{X}$ and $u$ be the unique inneighbor of $s_{i}$. Note
that $\text{{{outdeg}}}_{D}(s_{i})=1$. As Dissolve Vertex (RR 3.3) is not
applicable, we have that $\text{{{outdeg}}}_{D}(u)>1$ and
$\text{{{indeg}}}_{D}(w)>1$, where $w$ is the unique outneighbor of $s_{i}$.
_Case 1:_ $u\in\operatorname{Dom}(\ell)$. Then $\ell(u)=\textsc{M}$ since Set
Label (RR 3.2) is not applicable. As $\text{{{outdeg}}}(u)>1$, each
$H_{j}\in\mathcal{X}$ with $s_{j}\in\text{{{out}}}_{D}(u)$ requires one more
arc of $u$ to be in $S$.
_Case 2:_ $u\not\in\operatorname{Dom}(\ell)$. If $\text{{{indeg}}}_{D}(u)=1$,
then there is some $v_{i}\in V(H_{i})$ such that $(v_{i},u)\in A(D)$,
otherwise $H_{i}$ would not be maximal. Hence, there is a cycle $C_{i}$
containing $u,s_{i}$ and $v_{i}$. If there is any other $H_{j}\in\mathcal{X}$
with $s_{j}\in\text{{{out}}}_{D}(u)$ and with some $v_{j}\in V(H_{j})$ such
that $(v_{j},u)\in A(D)$, then the cycle $C_{j}$ containing $u,s_{j}$ and $v_{j}$ is
arc-disjoint to the cycle $C_{i}$ due to Observation 3. Thus, $S$ must contain
at least one arc of each such $C_{j}$, implying there are at most $k$ local
funnels $H_{j}$ that fall into this case.
If $\text{{{indeg}}}_{D}(u)>1$, one arc of $u$ is in $S$ as
$\text{{{outdeg}}}_{D}(u)>1$. Further, $\text{{{outdeg}}}_{D}(u)\leq k$. This
means that there are at most $k$ local funnels $H_{j}\in\mathcal{X}$ with
$s_{j}\in\text{{{out}}}_{D}(u)$. As there can be at most $2k$ such vertices
$u$, we have that there are at most $2k^{2}$ local funnels
$H_{j}\in\mathcal{X}$ which fall into this case. In the worst case, we have
$\left|{\mathcal{X}}\right|\leq\max\\{k+1,2k^{2}\\}\leq 2k^{2}$. ∎
###### Lemma 4.
If there is a solution $(S,\hat{\ell})$ for $(D,\ell,k)$, then
$\left|{\mathcal{F}}\right|\leq 2k^{2}+3k$.
###### Proof.
Let $H_{i}\in\mathcal{F}$ and let $u$ be the unique inneighbor of $s_{i}$ in
$D$. Assume $u$ is in some local funnel $H_{j}\in\mathcal{H}$ and let
$D_{i}=D[V(H_{i})\cup V(H_{j})]$.
_Case 1:_ $D_{i}$ is a DAG. Then there are $w_{j}\in V(H_{j})$ and $w_{i}\in
V(H_{i})$ such that $\text{{{indeg}}}_{D}(w_{j})>1$,
$\text{{{outdeg}}}_{D}(w_{i})>1$ and there is a path $P$ from $w_{j}$ to
$w_{i}$. If this were not the case, $H_{i}$ and $H_{j}$ would not be maximal,
as $D_{i}$ would be an unlabeled local funnel containing $H_{i}$ and $H_{j}$.
Let $G_{i}\subseteq D$ be a subgraph containing $P$, two incoming neighbors of
$w_{j}$ and two outgoing neighbors of $w_{i}$. Clearly, $S$ contains some arc
of $G_{i}$. Since $H_{j}$ and $H_{i}$ are local funnels, $w_{j}$ can only
reach one sink of $H_{j}$, namely $u$, and $w_{i}$ can be reached by only one
source of $H_{i}$, namely $s_{i}$. This means in particular that $P$ is the
only path from $w_{j}$ to $w_{i}$. Hence, if there is any other
$H_{\ell}\in\mathcal{F}$ that falls into this case, then the corresponding
$G_{\ell}$ constructed is arc-disjoint to $G_{i}$. As there can be at most $k$
arc-disjoint forbidden subgraphs for funnels in $D$, there are at most $k$
local funnels in $\mathcal{F}$ that fall into this case.
_Case 2:_ $D_{i}$ is not a DAG. Then there is some cycle $C_{i}$ containing
some $w_{i}\in V(H_{i})$ and some $w_{j}\in V(H_{j})$. Clearly, $S$ contains
some arc of $C_{i}$. Assume there is some other $H_{\ell}\in\mathcal{F}$ such
that $\text{{{in}}}_{D}(s_{\ell})\cap V(H_{j})\neq\emptyset$ and
$D_{\ell}=D[V(H_{j})\cup V(H_{\ell})]$ is not a DAG. Let $C_{\ell}$ be a cycle
in $D_{\ell}$. From Observation 3 we know $C_{i}$ and $C_{\ell}$ are arc
disjoint. As we need one arc in $S$ for each such cycle, we get that there are
at most $k$ local funnels falling into this case.
Now assume $u$ is not in any local funnel in $\mathcal{H}$. We have two cases.
_Case 1:_ $u\in\operatorname{Dom}(\ell)$. Then $\ell(u)=\textsc{M}$, as Set
Label (RR 3.2) is not applicable to $s_{i}$. Since there is some $v_{i}\in
V(H_{i})$ with $\text{{{outdeg}}}(v_{i})>1$ and $s_{i}$ can reach $v_{i}$, we
have that $u$ can also reach $v_{i}$ and so $(u,s_{i})\in S$ or some arc of
$H_{i}$ is in $S$. Hence, there are at most $k$ local funnels
$H_{j}\in\mathcal{F}$ with $s_{j}\in\text{{{out}}}(u)$.
_Case 2:_ $u\not\in\operatorname{Dom}(\ell)$. As $u$ is not in a local funnel,
we have $\text{{{indeg}}}(u)>1$ and $\text{{{outdeg}}}(u)>1$. Since Set Label
(RR 3.2) is not applicable, $\text{{{outdeg}}}(u)\leq k$. Hence, there can be
at most $k$ local funnels $H_{j}\in\mathcal{F}$ with
$u\in\text{{{in}}}(s_{j})$. Because Lower Bound (RR 3.1) is not applicable, we
know there are at most $2k$ vertices $u^{\prime}$ with
$\text{{{indeg}}}(u^{\prime})>1$ and $\text{{{outdeg}}}(u^{\prime})>1$. Thus,
there can be at most $2k^{2}$ local funnels $H_{j}\in\mathcal{F}$ that fall
into this case.
By adding the bounds obtained in each case, we get $\left|{\mathcal{F}}\right|\leq
k+k+k+2k^{2}=2k^{2}+3k\in\mathcal{O}(k^{2})$. ∎
###### Lemma 5.
If there is a solution $(S,\hat{\ell})$ for $(D,\ell,k)$, then
$\left|{\mathcal{M}}\right|\leq k^{2}+2k$.
###### Proof.
Let $H_{i}\in\mathcal{M}$.
_Case 1:_ $\forall u\in\text{{{in}}}(s_{i}):u\in\operatorname{Dom}(\ell)$. As
Set Label (RR 3.2) is not applicable, there is some $u\in\text{{{in}}}(s_{i})$
with $\ell(u)=\textsc{M}$ and $\text{{{outdeg}}}(u)>1$. Hence, $S$ contains
some outgoing arc of $u$. Any additional $H_{j}\in\mathcal{M}$ that falls into
this case increases the outdegree of some $u^{\prime}$ with
$\ell(u^{\prime})=\textsc{M}$ and $\text{{{outdeg}}}(u^{\prime})>1$. Thus, if
there are more than $k$ local funnels $H_{j}\in\mathcal{M}$ that fall into
this case, then $\left|{S}\right|>k$.
_Case 2:_ There is some $u\in\text{{{in}}}(s_{i})$ and some
$H_{j}\in\mathcal{H}$ such that $u\in V(H_{j})$. As
$\text{{{indeg}}}(s_{i})>1$ and $H_{i}$ is maximal, we have that
$D_{i}=D[V(H_{j})\cup V(H_{i})]$ is not a DAG. Let $C_{i}$ be the cycle in
$D_{i}$. If there is some other $H_{\ell}\in\mathcal{M}$ that falls into this
case, we know from Observation 3 that the corresponding cycle $C_{\ell}$ and
$C_{i}$ are arc disjoint. As $S$ must contain one arc of each $C_{\ell}$, if
there are more than $k$ local funnels $H_{\ell}$ falling into this case, then
$\left|{S}\right|>k$.
_Case 3:_ There is some $u\in\text{{{in}}}(s_{i})$ such that $u$ is not in any
local funnel and $u\not\in\operatorname{Dom}(\ell)$. Then
$\text{{{indeg}}}(u)>1$ and $\text{{{outdeg}}}(u)>1$. As Lower Bound (RR 3.1)
is not applicable, there can be at most $2k$ such vertices $u$ in $D$.
Further, $\text{{{outdeg}}}(u)\leq k$ as Set Label (RR 3.2) is not applicable.
Hence, there can be at most $2k^{2}$ local funnels $H_{j}\in\mathcal{M}$ that
fall into this case.
By adding all cases together we obtain
$\left|{\mathcal{M}}\right|\leq k+k+2k^{2}\in\mathcal{O}(k^{2})$, as desired.
∎
From Lemmas 3, 4 and 5, we easily obtain a bound for the number of vertices in
unlabeled local funnels. Together with the fact that Lower Bound (RR 3.1) is
not applicable, we obtain a bound for the number of unlabeled vertices in $D$.
###### Lemma 6.
Let $D$ be a reduced digraph. Then there are $\mathcal{O}(k^{5})$ vertices
$v\in V(D)$ with $v\not\in\operatorname{Dom}(\ell)$ and
$\text{{{indeg}}}(v)=1\lor\text{{{outdeg}}}(v)=1$.
###### Proof.
Let $\mathcal{H}$ be a maximal set of maximal vertex-disjoint local funnels in
$D$. Clearly, every vertex $v\in V(D)$ with $v\not\in\operatorname{Dom}(\ell)$
and $\text{{{indeg}}}(v)=1\lor\text{{{outdeg}}}(v)=1$ is in some local funnel.
From Lemmas 3, 4 and 5 we know that
$\left|{\mathcal{H}}\right|\leq\left|{\mathcal{F}}\right|+\left|{\mathcal{M}}\right|+\left|{\mathcal{X}}\right|\in\mathcal{O}(k^{2})$.
Due to Lemma 2, each local funnel has at most
$k^{3}+3k^{2}+2k\in\mathcal{O}(k^{3})$ many vertices. Hence, there are at most
$(5k^{2}+5k)(k^{3}+3k^{2}+2k)\in\mathcal{O}(k^{5})$ vertices $v$ lying in some
unlabeled local funnel. ∎
### 3.2 Bounding the number of labeled vertices
In Section 3.1 we exploited the property that unlabeled vertices have bounded
degree, and that we can label them if their neighborhood has some special
structure captured by the reduction rules. For the labeled vertices, however,
we can apply neither of those strategies. Instead, we first exploit the fact
that we know the label of a vertex and use this to decide if an arc is never
in an optimal solution or if it is always in an optimal solution.
Arcs from M to F vertices clearly need to be removed. We show that we can also
ignore arcs from F to M vertices, that is, we can remove them without changing
$k$.
###### Reduction Rule 3.7 (Remove Arcs).
Let $(v,u)\in A(D)$. If $\ell(v)=\textsc{F}$ and $\ell(u)=\textsc{M}$, remove
$(v,u)$. If $\ell(v)=\textsc{M}$ and $\ell(u)=\textsc{F}$, remove $(v,u)$ and
decrease $k$ by 1.
###### Proof of safety of Remove Arcs (RR 3.7).
Clearly, a solution for $(D,\ell,k)$ is also a solution for the reduced
instance $(D^{\prime},\ell_{r},k^{\prime})$. So let $(\hat{\ell}_{r},S_{r})$
be a solution for the reduced instance.
If $\ell(v)=\textsc{M}$ and $\ell(u)=\textsc{F}$, then $(v,u)$ is in any
solution. Hence, $(\hat{\ell}_{r},S_{r}\cup\\{(v,u)\\})$ is a solution for the
input instance and $\left|{S_{r}\cup\\{(v,u)\\}}\right|\leq k$.
If $\ell(v)=\textsc{F}$ and $\ell(u)=\textsc{M}$, then we claim
$\hat{\ell}_{r}$ is a funnel labeling for $D-S_{r}$. If $D-S_{r}$ is a DAG,
then the claim trivially holds.
Now assume towards a contradiction that $D-S_{r}$ is not a DAG. Then there is
a cycle $C$ using the arc $(v,u)$. This implies that there is a path $P$ from
$u$ to $v$ in $D-S_{r}$. In particular, this path also exists in
$D^{\prime}-S_{r}$ since it does not use the arc $(v,u)$. However, as
$\ell(u)=\textsc{M}$ and $\ell(v)=\textsc{F}$, we know there is some arc
$(v^{\prime},u^{\prime})$ in $P$ with $\hat{\ell}_{r}(v^{\prime})=\textsc{M}$
and $\hat{\ell}_{r}(u^{\prime})=\textsc{F}$. But then $\hat{\ell}_{r}$ is not
a funnel labeling for $D^{\prime}-S_{r}$, a contradiction. Hence, a solution
for the reduced instance implies a solution for the input instance, proving
the rule is safe. ∎
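As an illustration, the following sketch (our own, using the same assumed representation as the earlier sketches) applies Remove Arcs (RR 3.7) to all arcs of the digraph.

```python
def apply_remove_arcs(in_nbrs, out_nbrs, label, k):
    """Sketch of Remove Arcs (RR 3.7).

    Deletes every arc between differently labeled vertices; an arc from an
    M-labeled vertex to an F-labeled vertex additionally decreases k by one.
    """
    for v in list(out_nbrs):
        if v not in label:
            continue
        for u in list(out_nbrs[v]):
            if u in label and label[u] != label[v]:
                out_nbrs[v].discard(u)
                in_nbrs[u].discard(v)
                if label[v] == 'M' and label[u] == 'F':
                    k -= 1
    return k
```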
We now identify certain vertices that can be removed safely. Clearly, sources
and sinks cannot be in any cycle in $D$. By carefully considering the
neighborhood of a source or sink $v$, we can also prove that $v$ is not
“relevant” for any forbidden subgraph for funnels in $D$.
###### Reduction Rule 3.8 (Sources and Sinks).
Let $v\in V(D)$ be a labeled vertex where
$\text{{{out}}}(v)\cup\text{{{in}}}(v)\subseteq\operatorname{Dom}(\ell)$.
Remove $v$ if one of the following holds.
1. 1.
$\text{{{indeg}}}(v)=0$ and no $u\in\text{{{out}}}(v)$ exists with
$\ell(u)=\textsc{F}$ and $\text{{{indeg}}}(u)>1$, or
2. 2.
$\text{{{outdeg}}}(v)=0$ and no $u\in\text{{{in}}}(v)$ exists with
$\ell(u)=\textsc{M}$ and $\text{{{outdeg}}}(u)>1$.
###### Proof of safety of Sources and Sinks (RR 3.8).
Let $(D^{\prime},\ell_{r},k)$ be the reduced instance. Clearly, if
$(S,\hat{\ell})$ is a solution for $(D,\ell,k)$, then, after restricting
$(S,\hat{\ell})$ to the vertices in $D^{\prime}$, we also have a solution for
$(D^{\prime},\ell_{r},k)$.
Now let $(S_{r},\hat{\ell}_{r})$ be a solution for the reduced instance. We
consider the case where $\text{{{indeg}}}_{D}(v)=0$ and there is no
$u\in\text{{{out}}}(v)$ such that $\ell(u)=\textsc{F}$ and
$\text{{{indeg}}}(u)>1$. The other case follows analogously. We claim that the
labeling $\hat{\ell}\supseteq\hat{\ell}_{r}$ with $\hat{\ell}(v)=\ell(v)$ is a
funnel labeling for $D-S_{r}$. Since $\text{{{indeg}}}_{D-S_{r}}(v)=0$, there
can be no cycle containing $v$. There can be no vertex $u\in V(D)$ with
$\hat{\ell}_{r}(u)=\textsc{M}$ and $\text{{{outdeg}}}_{D-S_{r}}(u)>1$ as this
would imply $\text{{{outdeg}}}_{D^{\prime}-S_{r}}(u)>1$ because
$\text{{{indeg}}}_{D-S_{r}}(v)=0$, a contradiction. There can also be no
vertex $u\in V(D)$ with $\hat{\ell}_{r}(u)=\textsc{F}$ and
$\text{{{indeg}}}_{D-S_{r}}(u)>1$ as all $u\in\text{{{out}}}_{D-S_{r}}(v)$
with $\ell(u)=\textsc{F}$ have $\text{{{indeg}}}_{D-S_{r}}(u)=1$, and
$\text{{{out}}}_{D}(v)\subseteq\operatorname{Dom}(\ell)$.
Because Remove Arcs (RR 3.7) is not applicable, if $\ell(v)=\textsc{M}$, there
is no $u\in\text{{{out}}}_{D-S_{r}}(v)$ with $\ell(u)=\textsc{F}$. Hence,
there is no arc $(w,x)\in A(D-S_{r})$ with $\hat{\ell}_{r}(w)=\textsc{M}$ and
$\hat{\ell}_{r}(x)=\textsc{F}$. From the labeling characterization from
Theorem 2.1 we have that $(S_{r},\hat{\ell}_{r})$ is a solution for $(D,\ell,k)$
and the reduction rule is safe. ∎
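The applicability test of Sources and Sinks (RR 3.8) is again a purely local check. The sketch below (illustrative only; names are our own) tests it for a single vertex.

```python
def sources_and_sinks_applicable(in_nbrs, out_nbrs, label, v):
    """Sketch of the applicability test of Sources and Sinks (RR 3.8) for v."""
    if v not in label:
        return False
    neighbors = in_nbrs[v] | out_nbrs[v]
    if any(u not in label for u in neighbors):    # all neighbors must be labeled
        return False
    if len(in_nbrs[v]) == 0 and not any(
            label[u] == 'F' and len(in_nbrs[u]) > 1 for u in out_nbrs[v]):
        return True                               # condition 1: removable source
    if len(out_nbrs[v]) == 0 and not any(
            label[u] == 'M' and len(out_nbrs[u]) > 1 for u in in_nbrs[v]):
        return True                               # condition 2: removable sink
    return False
```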
Having exhaustively applied Reduction Rules 3.7 and 3.8, we can bound the
number of labeled vertices in $D$. Since Lower Bound (RR 3.1) is not
applicable, we already have a bound for the number of vertices $v$ with
$\ell(v)=\textsc{F}\land\text{{{indeg}}}(v)>1$ or
$\ell(v)=\textsc{M}\land\text{{{outdeg}}}(v)>1$. Hence, we only need to
consider vertices in the set
$L=\\{v\in\operatorname{Dom}(\ell)\mid\ell(v)=\textsc{F}\land\text{{{indeg}}}(v)\leq
1\text{ or }\ell(v)=\textsc{M}\land\text{{{outdeg}}}(v)\leq 1\\}$.
To bound $\left|{L}\right|$, we exploit the bound on the number of unlabeled
vertices from Lemma 6 and also the fact that such vertices have small degree
as Set Label (RR 3.2) is not applicable. We first partition $L$ into two
subsets $L_{1}=\\{v\in
L\mid\text{{{in}}}(v)\cup\text{{{out}}}(v)\not\subseteq\operatorname{Dom}(\ell)\\}$
and $L_{2}=L\setminus L_{1}$.
###### Lemma 7.
$\left|{L_{1}}\right|\in\mathcal{O}(k^{6})$.
###### Proof.
Let $U$ be the set of unlabeled vertices. Clearly
$L_{1}\subseteq\text{{{in}}}(U)\cup\text{{{out}}}(U)$. As Set Label (RR 3.2)
is not applicable, we have $\text{{{indeg}}}(v)\leq k+1$ and
$\text{{{outdeg}}}(v)\leq k+1$ for every $v\in U$. From Lemma 6 we know
$\left|{U}\right|\in\mathcal{O}(k^{5})$. Hence,
$\left|{L_{1}}\right|\leq\left|{\text{{{in}}}(U)\cup\text{{{out}}}(U)}\right|\in\mathcal{O}(k^{6})$.
∎
###### Lemma 8.
$\left|{L_{2}}\right|\in\mathcal{O}(k)$.
###### Proof.
Let $V_{\textsc{F}}=\\{v\mid\ell(v)=\textsc{F}\\}$ and
$L_{\textsc{F}}=V_{\textsc{F}}\cap L_{2}$. The case for the vertices labeled
with M follows analogously.
Since Remove Arcs (RR 3.7) is not applicable, we have $\ell(u)=\textsc{F}$ for
all $u\in\text{{{out}}}(L_{\textsc{F}})\cup\text{{{in}}}(L_{\textsc{F}})$. Let
$R_{1}=\\{u\in V_{\textsc{F}}\mid\text{{{indeg}}}(u)>1\\}$, $R_{2}=\\{u\in
L_{\textsc{F}}\mid\text{{{indeg}}}(u)\leq 1,\text{{{out}}}(u)\cap
R_{1}\neq\emptyset\\}$ and $R_{3}=\\{u\in
L_{\textsc{F}}\mid\text{{{indeg}}}(u)\leq 1,\text{{{out}}}(u)\cap
R_{1}=\emptyset\\}$.
Note that $L_{2}=R_{2}\cup R_{3}$ and $R_{1}\cap L_{2}=\emptyset$.
A solution set $S\subseteq A(D)$ must contain at least $\text{{{indeg}}}(v)-1$
many incoming arcs of $v$ for every $v\in R_{1}$. As each $u\in R_{2}$ has
some $v\in R_{1}$ as outneighbor, we have $\left|{R_{2}}\right|\leq 2k$.
Let $v\in R_{3}$. We claim that $v$ can reach some vertex of $R_{2}$. Since
Sources and Sinks (RR 3.8) is not applicable and $\text{{{out}}}(v)\cap
R_{1}=\emptyset$, we have $\text{{{indeg}}}(v)=1$ and
$\text{{{outdeg}}}(v)\geq 1$. This means that, if we successively follow the
outneighbors of $v$, we reach a vertex of $R_{2}$ or find a cycle $C$ using
only vertices of $R_{3}$. However, as Break Cycle (RR 3.5) is not applicable,
such a cycle $C$ cannot exist: every vertex $v\in R_{3}$ has
$\text{{{indeg}}}(v)=1$ and $\ell(v)=\textsc{F}$, implying we could apply
Break Cycle (RR 3.5) to $C$. Hence, every vertex of $R_{3}$ can reach some
$u\in R_{2}$.
We greedily construct vertex-disjoint paths $P_{1},P_{2},\dots,P_{a}$ ending
in $R_{2}$ whose inner vertices lie in $R_{3}$. For a vertex $v\in R_{3}$ take
an arbitrary $u\in R_{2}$ such that $v$ can reach $u$. Consider a path $P$
from $v$ to $u$. If none of its vertices lie in any already constructed
$P_{i}$, we just take the path $P$ into our set of paths. Otherwise, assume
that $P$ intersects some $P_{i}$ at $w$ and let $w$ be the first such vertex
in $P$. Since the indegree of any vertex in $R_{3}\cup R_{2}$ is at most one,
we know that $w$ is the starting point of $P_{i}$. Hence, we can obtain a path
$P_{j}$ by taking the path from $v$ to $w$ in $P$ and then concatenating
$P_{i}$. As $w$ is the first vertex of $P$ intersecting any other path, we get
that $P_{j}$ only intersects $P_{i}$. By replacing $P_{i}$ with $P_{j}$, we
obtain a path that also contains $v$. We repeat this process until we covered
all $v\in R_{3}$.
Since $\left|{R_{2}}\right|\leq 2k$, we have $a\leq 2k$. We now prove that
$\left|{V(P_{i})}\right|\leq 4$ for any $P_{i}$ in our set of vertex-disjoint
paths. Note that $\text{{{indeg}}}(u)\leq 1$ for any vertex $u\in V(P_{i})$.
Since Dissolve Vertex (RR 3.3) is not applicable, any inner vertex $u$ of
$P_{i}$ has $\text{{{outdeg}}}(u)>1$. Let $w$ be the successor of $u$ in
$P_{i}$. As Shift Neighbors (RR 3.4) is not applicable, we have that
$\text{{{indeg}}}(w)>1$ or $v\in\text{{{out}}}(u)$ where $v$ is the unique
inneighbor of $u$. If $\text{{{indeg}}}(w)>1$, then $u\in R_{2}$ and is the
endpoint of $P_{i}$, a contradiction to the assumption that $u$ is an inner
vertex of $P_{i}$. Otherwise, we know that $u\not\in\text{{{out}}}(w)$ as
$\text{{{indeg}}}(u)=1$. If $u$ is the only inner vertex of $P_{i}$, then
$\left|{V(P_{i})}\right|\leq 3$. Otherwise, its successor $w$ in $P_{i}$ is an
inner vertex of $P_{i}$ (since $v$ is the starting point of $P_{i}$, and so
$v\not\in\text{{{out}}}(u)$). Hence, we can apply the same argumentation to
$w$ and conclude that it has some outneighbor $x$ with
$\text{{{indeg}}}(x)>1$, implying $x\in R_{2}$ and
$\left|{V(P_{i})}\right|\leq 4$.
Since $\left|{V(P_{i})}\right|\leq 4$ and $a\leq 2k$, we have that
$\left|{L_{2}}\right|\leq 6k$. Because $L_{2}=R_{2}\cup R_{3}$, we have that
$\left|{L_{2}}\right|\leq 8k\in\mathcal{O}(k)$, as desired. ∎
###### Lemma 9.
Let $(D,\ell,k)$ be an FADL instance where Reduction Rules 3.1, 3.2, 3.3, 3.5,
3.4, 3.6, 3.7 and 3.8 are not applicable. Then
$\left|{V(D)}\right|\in\mathcal{O}(k^{6})$ and
$\left|{A(D)}\right|\in\mathcal{O}(k^{6})$.
###### Proof.
As Lower Bound (RR 3.1) is not applicable, there are at most $2k$ vertices $v$
with $\text{{{indeg}}}(v)>1$ and $\text{{{outdeg}}}(v)>1$, and also at most
$2k$ many vertices $v$ with $\ell(v)=\textsc{F}\land\text{{{indeg}}}(v)>1$ or
$\ell(v)=\textsc{M}\land\text{{{outdeg}}}(v)>1$. From Lemma 6 we know there
are $\mathcal{O}(k^{5})$ many unlabeled vertices $v\in V(D)$ with
$\text{{{indeg}}}(v)\leq 1$ or $\text{{{outdeg}}}(v)\leq 1$. Finally, due to
Lemmas 7 and 8 there are $\mathcal{O}(k^{6})$ vertices $v$ with
$\ell(v)=\textsc{F}\land\text{{{indeg}}}(v)\leq 1$ or
$\ell(v)=\textsc{M}\land\text{{{outdeg}}}(v)\leq 1$. As any vertex in $D$
falls into one of these groups, we have
$\left|{V(D)}\right|\in\mathcal{O}(k^{6})$.
As Remove Arcs (RR 3.7) is not applicable, there is no arc $(v,u)$ where
$v,u\in\operatorname{Dom}(\ell)$ and $\ell(v)\neq\ell(u)$. Since there are
$\mathcal{O}(k^{5})$ many unlabeled vertices and every unlabeled vertex has
in- and outdegree at most $k+1$, there are $\mathcal{O}(k^{6})$ arcs $(v,u)$
where $v\not\in\operatorname{Dom}(\ell)$ or
$u\not\in\operatorname{Dom}(\ell)$.
Now let $(v,u)$ be some arc where $v,u\in\operatorname{Dom}(\ell)$. Note that
$\ell(v)=\ell(u)$.
_Case 1:_ $v,u\in L$. Then $\text{{{outdeg}}}(v)=1$ (if $\ell(v)=\textsc{M}$)
or $\text{{{indeg}}}(u)=1$ (if $\ell(u)=\textsc{F}$). Thus, there can be at
most $\left|{L}\right|\in\mathcal{O}(k^{6})$ many arcs $(v,u)$ where $v,u\in
L$.
_Case 2:_ $v,u\not\in L$. As Lower Bound (RR 3.1) is not applicable, there can
be at most $2k$ such vertices. Thus, there are at most $4k^{2}$ arcs between
labeled vertices not in $L$.
_Case 3:_ Exactly one of $v,u$ is in $L$.
_Case 3.1:_ $v\not\in L\land\ell(v)=\textsc{F}$ or $u\not\in
L\land\ell(u)=\textsc{M}$. Then $\text{{{indeg}}}(u)=1$ or
$\text{{{outdeg}}}(v)=1$. Hence, there can be at most
$\left|{L}\right|\in\mathcal{O}(k^{6})$ such arcs.
_Case 3.2:_ $v\not\in L\land\ell(v)=\textsc{M}$ or $u\not\in
L\land\ell(u)=\textsc{F}$. If $v\not\in L$, then at least half of its outgoing
arcs need to be in a solution set. Similarly, if $u\not\in L$, at least half
of its incoming arcs need to be in a solution set. Hence, there can be at most
$2k$ many arcs falling into this case. By adding all cases together, we obtain
that $\left|{A(D)}\right|\in\mathcal{O}(k^{6})$, concluding the proof. ∎
## 4 Computing the Kernel
In Sections 3.1 and 3.2 we defined the reduction rules for the kernelization
process and showed that, if none of the reduction rules are applicable to a
digraph $D$, then the size of $D$ is polynomially bounded on $k$. To conclude
the proof that FADS admits a polynomial problem kernel, we show that it is
possible to apply all reduction rules in $\mathcal{O}(nm)$ time and also
reduce the FADL instance back into an FADS instance.
###### Lemma 10.
We can exhaustively apply Reduction Rules 3.1, 3.2, 3.3, 3.5, 3.4, 3.6, 3.7
and 3.8 in $\mathcal{O}(nm)$ time to an FADL instance $(D,\ell,k)$, where
$n=\left|{V(D)}\right|$ and $m=\left|{A(D)}\right|$.
###### Proof.
We apply the reduction rules exhaustively in the order they are defined. In
order to do so efficiently, we use a constant number of counters for each
vertex $v$ in order to check if a reduction rule is applicable to $v$.
To apply Lower Bound (RR 3.1), Set Label (RR 3.2) and Dissolve Vertex (RR 3.3)
we only need to check the labels and degrees of a vertex and its neighbors.
Whenever the label of a vertex changes, we need to recheck if we can apply the
reduction rules to its neighbors. It is therefore sufficient to store the
degree of $v$ and the number of its neighbors which have a certain type
(for example, the number of $u\in\text{{{out}}}(v)$ with $\ell(u)=\textsc{F}$
and $\text{{{indeg}}}(u)=1$). Whenever the label of a vertex changes, we only
need to increment the counters of its neighbors. As we set the label of a vertex
at most once, we need to visit each arc only a constant number of times.
For Shift Neighbors (RR 3.4), we consider the first case of the rule where the
indegrees are one (the other case is applied analogously). We search for a
vertex $w$ with $\text{{{indeg}}}(w)=1$. We then take the unique inneighbor
$v\in\text{{{in}}}(w)$, tracking in a counter the number of
$w\in\text{{{out}}}(v)$ with $\text{{{indeg}}}(w)=1$. If
$\text{{{indeg}}}(v)=1$, we construct a path $P$ ending in $v$ by following
its unique inneighbor until we obtain a vertex $y$ with
$\text{{{indeg}}}(y)>1$ or $\ell(y)=\textsc{M}$, which we do not include in
$P$. We denote the starting point of $P$ by $u$.
If $u=w$, we can apply Break Cycle (RR 3.5) by deleting the arc $(v,w)$ and
labeling every vertex in $P$ with F (if they are not already labeled). Now
assume $P$ is indeed a path. For each $x\in\text{{{out}}}(P)$ we count the
number $c=\left|{\text{{{in}}}(x)\cap V(P)}\right|$ and then shift all arcs
$(v,x)$ with $v\in V(P)$ so that they come from the first $c$ vertices of $P$.
The cost of applying this operation is linear on the number of arcs leaving
$P$. If a vertex $v$ is in another such path $Q$, then after applying Shift
Neighbors (RR 3.4) once to $v$ we shift $Q$ in such a way that only the
startpoint $u$ is in more than one path to which the rule is applicable.
We only need to recheck if Shift Neighbors (RR 3.4) is applicable to $v$ when
an arc is removed from it by Remove Arcs (RR 3.7) or by Break Cycle (RR 3.5)
(arcs removed from Sources and Sinks (RR 3.8) never trigger Shift Neighbors
(RR 3.4) due to the degrees of the affected vertices). In this case, we apply
depth-first search on $v$, following its outgoing arcs, in order to find all
paths to which Shift Neighbors (RR 3.4) is applicable. As observed above, the
computational cost of shifting the arcs of a path $P$ is linear in the number
of arcs leaving $P$. Since no reduction rule adds arcs to $D$, we only need to
recheck if Shift Neighbors (RR 3.4) is applicable to $v$ once. Hence, each arc
gets shifted $\mathcal{O}(n)$ many times, and this rule can be exhaustively
applied in $\mathcal{O}(nm)$ time.
Remove Arcs (RR 3.7) is applied whenever we set the label of a vertex $v$. In
order to apply it, it suffices to iterate through $\text{{{in}}}(v)$ or
$\text{{{out}}}(v)$ and check the labels of those vertices. Since we set the
label of a vertex at most once, we need $\mathcal{O}(n+m)$ time in total for
this rule. Finally, we apply Sources and Sinks (RR 3.8) by applying breadth-
first search on the sources and sinks of $D$. Note that, although Sources and
Sinks (RR 3.8) removes vertices (and arcs) from $D$, it cannot cause Break
Cycle (RR 3.5), Shift Neighbors (RR 3.4) or other reduction rules to become
applicable if they were not applicable before. Hence, by applying Sources and
Sinks (RR 3.8) only after no other rule is applicable, we can exhaustively
apply this rule in $\mathcal{O}(n+m)$ time, giving us a total running time of
$\mathcal{O}(nm)$ for the kernelization process, concluding the proof. ∎
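To illustrate the counter bookkeeping used in the proof of Lemma 10, here is a minimal sketch. It is our own illustration and maintains only one example counter per vertex (the number of F-labeled out-neighbors of indegree one), whereas the actual procedure keeps a small constant number of such counters, one per degree/label condition that the rules test.

```python
from collections import defaultdict

def init_counters(in_nbrs, out_nbrs, label):
    """For every vertex v, count the u in out(v) with label F and indeg(u) = 1."""
    cnt = defaultdict(int)
    for v in out_nbrs:
        cnt[v] = sum(1 for u in out_nbrs[v]
                     if label.get(u) == 'F' and len(in_nbrs[u]) == 1)
    return cnt

def on_label_set(u, new_label, in_nbrs, out_nbrs, label, cnt, to_recheck):
    """Update the counters of the neighbors of u when u becomes labeled, and
    mark those neighbors so that the reduction rules are re-checked on them."""
    label[u] = new_label
    if new_label == 'F' and len(in_nbrs[u]) == 1:
        for v in in_nbrs[u]:
            cnt[v] += 1
    to_recheck.update(in_nbrs[u] | out_nbrs[u])
```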
###### Theorem 4.1.
FADS admits a kernel with $\mathcal{O}(k^{6})$ vertices and
$\mathcal{O}(k^{7})$ arcs which can be computed in $\mathcal{O}(nm)$ time,
where $n=\left|{V(D)}\right|$, $m=\left|{A(D)}\right|$ and $D$ is the input
digraph.
###### Proof.
We start by reducing the FADS instance into an FADL instance $(D,\ell,k)$ by
adding an empty labeling $\ell$. Using Lemma 10, we can exhaustively apply all
reduction rules to $(D,\ell,k)$ in $\mathcal{O}(nm)$ time.
From Lemma 9 we know $\left|{V(D)}\right|\in\mathcal{O}(k^{6})$ and
$\left|{A(D)}\right|\in\mathcal{O}(k^{6})$. We now reduce the FADL instance
back into an FADS instance $(D^{\prime},k)$ in order to obtain a kernel for
the original problem. We first set $D^{\prime}\coloneqq D$ and add $k+2$
vertices $f_{1},f_{2},\dots,f_{k+2}$ and $k+2$ vertices
$m_{1},m_{2},\dots,m_{k+2}$ to $D^{\prime}$. Let
$v\in\operatorname{Dom}(\ell)$. If $\ell(v)=\textsc{F}$, we add the arc
$(v,f_{i})$ for each $1\leq i\leq k+2$. If $\ell(v)=\textsc{M}$, we add the
arc $(m_{i},v)$ for each $1\leq i\leq k+2$.
Trivially, a solution for the FADL instance is also a solution for the FADS
instance. It is also easy to see that, if there is some arc set
$S_{r}\subseteq A(D^{\prime})$ and some funnel labeling $\hat{\ell}_{r}$ for
$D^{\prime}-S_{r}$ such that $\ell(v)\neq\hat{\ell}_{r}(v)$ for some
$v\in\operatorname{Dom}(\ell)$, then $\left|{S_{r}}\right|>k$. Hence, a
solution for $(D^{\prime},k)$ implies a solution for $(D,\ell,k)$.
We added $2k+4$ vertices and $\mathcal{O}(k^{7})$ many arcs to $D^{\prime}$,
and so $\left|{V(D^{\prime})}\right|\in\mathcal{O}(k^{6})$ and
$\left|{A(D^{\prime})}\right|\in\mathcal{O}(k^{7})$, thus concluding the
proof. ∎
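The back-reduction from FADL to FADS used in the proof is a simple gadget construction. A sketch of it could look as follows (our own illustration; the names of the fresh gadget vertices are arbitrary).

```python
def fadl_to_fads(vertices, arcs, label, k):
    """Sketch of the FADL-to-FADS back-reduction from the proof of Theorem 4.1.

    For every F-labeled vertex v we add the arcs (v, f_i), and for every
    M-labeled vertex v the arcs (m_i, v), for k+2 fresh vertices f_i and m_i.
    Flipping the label of v in any solution would then force more than k
    arc deletions.
    """
    f = [('f', i) for i in range(k + 2)]
    m = [('m', i) for i in range(k + 2)]
    new_vertices = list(vertices) + f + m
    new_arcs = list(arcs)
    for v, lab in label.items():
        if lab == 'F':
            new_arcs.extend((v, fi) for fi in f)
        else:  # lab == 'M'
            new_arcs.extend((mi, v) for mi in m)
    return new_vertices, new_arcs, k
```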
## 5 Conclusion
The kernelization algorithm provided in this paper heavily relies on the
characterizations of Theorem 2.1 for funnels. Both the characterization by
forbidden subgraphs as well as the labeling characterization allowed us to
derive reduction rules based only on “local” substructures as the degree or
neighborhood of a vertex. In a sense, this “locality” property saved us from
computing any set of vertex-disjoint local funnels, despite the fact that the
results and reduction rules from Section 3.1 heavily rely on local funnels.
The polynomial kernels for Out-Forest-VDS and Pumpkin-VDS due to [12] also
rely on “localized” forbidden substructures. We consider generalizing
these results to larger digraph classes of unbounded treewidth, but which are
characterized by forbidden substructures, to be a very interesting direction
for future research.
Further, it would also be interesting to decide if Funnel-VDS admits a
polynomial kernel or not (it is in FPT with respect to the solution size
[11]), especially since a kernel for this problem would require considerably
different ideas from the ones presented in this paper, as it is no longer
clear how to exploit the vertex labeling in the vertex-deletion setting.
## References
* Abu-Khzam [2010] Faisal N Abu-Khzam. A kernelization algorithm for $d$-hitting set. _Journal of Computer and System Sciences_ , 76(7):524–531, 2010.
* Agrawal et al. [2018] Akanksha Agrawal, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Kernels for deletion to classes of acyclic digraphs. _Journal of Computer and System Sciences_ , 92:9–21, 2018.
* Bang-Jensen et al. [2016] Jørgen Bang-Jensen, Alessandro Maddaloni, and Saket Saurabh. Algorithms and kernels for feedback set problems in generalizations of tournaments. _Algorithmica_ , 76(2):320–343, 2016.
* Chen et al. [2008] Jianer Chen, Yang Liu, Songjian Lu, Barry O'Sullivan, and Igor Razgon. A fixed-parameter algorithm for the directed feedback vertex set problem. _Journal of the ACM (JACM)_ , 55(5):21, 2008.
* Cygan et al. [2015] Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. _Parameterized algorithms_ , volume 4. Springer, 2015.
* Downey and Fellows [2013] Rodney G Downey and Michael R Fellows. _Fundamentals of parameterized complexity_ , volume 4. Springer, 2013.
* Fomin et al. [2019] Fedor V Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. _Kernelization: theory of parameterized preprocessing_. Cambridge University Press, 2019.
* Kratsch [2014] Stefan Kratsch. Recent developments in kernelization: A survey. _Bulletin of EATCS_ , 2(113), 2014.
* Lokshtanov et al. [2012] Daniel Lokshtanov, Neeldhara Misra, and Saket Saurabh. Kernelization–preprocessing with a guarantee. In _The Multivariate Algorithmic Revolution and Beyond_ , pages 129–161. Springer, 2012.
* Lokshtanov et al. [2019] Daniel Lokshtanov, MS Ramanujan, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Wannabe bounded treewidth graphs admit a polynomial kernel for dfvs. In _Workshop on Algorithms and Data Structures_ , pages 523–537. Springer, 2019.
* Millani et al. [2018] Marcelo Garlet Millani, Hendrik Molter, Rolf Niedermeier, and Manuel Sorge. Efficient algorithms for measuring the funnel-likeness of DAGs. In Jon Lee, Giovanni Rinaldi, and A. Ridha Mahjoub, editors, _Combinatorial Optimization_ , pages 183–195, Cham, 2018. Springer International Publishing. ISBN 978-3-319-96151-4.
* Mnich and van Leeuwen [2017] Matthias Mnich and Erik Jan van Leeuwen. Polynomial kernels for deletion to classes of acyclic digraphs. _Discrete Optimization_ , 25:48–76, 2017.
# Surreal fields stable under exponential, logarithmic, derivative and anti-
derivative functions
Olivier <EMAIL_ADDRESS>, École Polytechnique, LIX, 91128 Palaiseau Cedex, France. This work was partially supported by ANR Project $\partial$IFFERENCE.
Quentin <EMAIL_ADDRESS>, École Polytechnique, LIX, 91128 Palaiseau Cedex, France. This work was partially supported by ANR Project $\partial$IFFERENCE.
###### Abstract
The class of surreal numbers, denoted by No, initially proposed by Conway, is
a universal ordered field in the sense that any ordered field can be embedded
in it. They include in particular the real numbers and the ordinal numbers.
They have strong relations with other fields such as the field of transseries.
Following Gonshor, surreal numbers can be seen as signs sequences of ordinal
length, with some exponential and logarithmic functions that extend the usual
functions over the reals. No can actually be seen as an elegant (generalized)
power series field with real coefficients, namely Hahn series with exponents
in No itself.
Some years ago, Berarducci and Mantova considered derivation over the surreal
numbers, seeing them as germs of functions, in correspondence to transseries.
In this article, following our previous work, we exhibit a sufficient
condition on the structure of a surreal field to be stable under all
operations among exponential, logarithm, derivation and anti-derivation.
Motivated, in the long term, by computability considerations, we also provide
a non-trivial application of this theorem: the existence of a pretty
reasonable field that only requires ordinals up to $\varepsilon_{\omega}$,
which is far smaller than $\omega_{1}^{\textrm{CK}}$ (resp. $\omega_{1}$), the
first non-computable (resp. uncountable) ordinal.
## 1 Introduction
Conway introduced in [Con00] the class of surreal numbers. They were later on
popularized by Knuth [Knu74], and then formalized later on by Gonshor [Gon86],
and by many other authors. The general initial idea is to define a class of
numbers, based on a concept of “simplicity”. This makes it possible to obtain a real
closed field that both contains the real numbers and the ordinals, as this
provides a unification of Dedekind’s construction of real numbers in terms of
cuts of the rational numbers, and of von Neumann’s construction of ordinal
numbers by transfinite induction in terms of set membership.
Following the alternative presentation from Gonshor in [Gon86], a surreal
number can also be seen as an ordinal-length sequence over $\\{+,-\\}$, that
we call a signs sequence. Basically, the idea is that such sequences are
ordered lexicographically, and have a tree-like structure. Namely, a $+$
(respectively $-$) added to a sequence $x$ denotes the simplest number greater
(resp. smaller) than $x$ but smaller (resp. greater) than all the prefixes of
$x$ which are greater (resp. smaller) than $x$. With this definition of
surreal numbers, it is possible to define operations such as addition,
subtraction, multiplication, and division, obtaining a real closed field.
Following Gonshor [Gon86], based on ideas from Kruskal, it is also possible to
define consistently classical functions such as the exponential function and
the logarithmic function over No, and to do analysis on this field of
numbers.
It can be considered as “the” field that includes “all numbers great and
small” [Ehr12], since any divisible ordered Abelian group is
isomorphic to an initial subgroup of No, and any real closed field is
isomorphic to an initial subfield of No [Ehr01, Theorems 9 and 19], [Con00,
Theorems 28 and 29]. No can also be
equipped with a derivation, so that it can be considered as a field of
transseries [BM18a]. See for example [MM17] for a survey of fascinating recent
results in all these directions.
More concretely, No can also be seen as a field of (generalized) power series
with real coefficients, namely as Hahn series where exponents are surreal
numbers themselves. More precisely, write
$\mathbb{K}\left(\left(G\right)\right)$ for the set of Hahn series with
coefficients in $\mathbb{K}$ and terms corresponding to elements of $G$, where
$\mathbb{K}$ is a field, and $G$ is some divisible ordered Abelian group: This
means that $\mathbb{K}\left(\left(G\right)\right)$ corresponds to formal power
series of the form $s=\sum_{g\in S}a_{g}t^{g}$, where $S$ is a well-ordered
subset of $G$ and $a_{g}\in\mathbb{K}.$ The support of $s$ is
$\operatorname{supp}(s)=\left\\{\left.g\in S\ \vphantom{a_{g}\neq 0}\right|\
$a_{g}\neq 0\right\\}$ and the length of the series of $s$ is the order type of
$\operatorname{supp}(s)$. The field operations on
$\mathbb{K}\left(\left(G\right)\right)$ are defined as expected, considering
elements of $\mathbb{K}\left(\left(G\right)\right)$ as formal power series. We
have $\textnormal{No}=\mathbb{R}\left(\left(\textnormal{No}\right)\right)$.
Our previous work [BG22] has shown that some acceptable subfields of No are
stable under both exponential and logarithm. These fields are built with some
restriction on the ordinals allowed in the ordinal sums. In the current article, we
pursue this work by exhibiting some acceptable subfields of No that are stable
under exponential, logarithm, derivation and anti-derivation. We actually
provide a sufficient condition on the structure of a surreal field for it to be
stable under all of these operations, and use this condition to derive such subfields.
##### More precise statements
Given some ordinal $\gamma$ (or more generally a class of ordinals), we write
$\mathbb{K}\left(\left(G\right)\right)_{\gamma}$ for the restriction of
$\mathbb{K}\left(\left(G\right)\right)$ to formal power series whose support has an
order type in $\gamma$ (that is to say, corresponds to some ordinal less than
$\gamma$). We have of course
$\textnormal{No}=\mathbb{R}\left(\left(\textnormal{No}\right)\right)_{\textnormal{Ord}}$.
From this point of view, $\varepsilon$-numbers, i.e. ordinals $\lambda$ such
that $\omega^{\lambda}=\lambda$, play a major role as they are the natural limits on the
ordinals we can accept in our fields.
As we will often play with exponents of formal power series considered in the
Hahn series, we propose to introduce the following notation: We denote
$\mathbb{R}_{\lambda}^{\Gamma}=\mathbb{R}\left(\left(\Gamma\right)\right)_{\lambda}$
when $\lambda$ is an $\varepsilon$-number and $\Gamma$ a divisible Abelian
group.
As a consequence of MacLane’s theorem (Theorem 3.1 below from [Mac39], see
also [All87, section 6.23]), we know that
$\mathbb{R}_{\lambda}^{\textnormal{No}_{\mu}}$ is a real-closed field when
$\mu$ is a multiplicative ordinal (i.e. $\mu=\omega^{\omega^{\alpha}}$ for
some ordinal $\alpha$) and $\lambda$ an $\varepsilon$-number.
Furthermore:
###### Theorem 1.1 ([DriesEhrlich01, Proposition 4.7]).
Let $\lambda$ be an $\varepsilon$-number. Then
1. 1.
The field $\textnormal{No}_{\lambda}$ can be expressed as
(1)
$\textnormal{No}_{\lambda}=\bigcup_{\mu}\mathbb{R}_{\lambda}^{\textnormal{No}_{\mu}},$
where $\mu$ ranges over the additive ordinals less than $\lambda$
(equivalently, $\mu$ ranges over the multiplicative ordinals less than
$\lambda$ ).
2. 2.
$\textnormal{No}_{\lambda}$ is a real closed subfield of No, and is closed
under the restricted analytic functions of No.
3. 3.
$\textnormal{No}_{\lambda}=\mathbb{R}_{\lambda}^{\textnormal{No}_{\lambda}}$
if and only if $\lambda$ is a regular cardinal.
Actually, even if we can always write $\textnormal{No}_{\lambda}$ as an
increasing union of fields by Equation (1), and even if
$\textnormal{No}_{\lambda}$ is stable under exponential and logarithmic
functions (Theorem 5.4), none of the fields in this union has any stability
property beyond the fact that they are fields. Indeed:
###### Proposition 1.1 ([BG22, Proposition 1.5]).
$\mathbb{R}_{\lambda}^{\textnormal{No}_{\mu}}$ is never closed under
exponential function for $\mu<\lambda$ a multiplicative ordinal.
In our previous work [BG22], we studied stability of subfields of No by
exponential and logarithm. For the sake of effectiveness and representations
for ordinal Turing machines, we kept ordinals as small as possible to identify
natural subfields stable by both these functions. This paper will keep the
same spirit and we will give an example construction that only involves
ordinals up to $\varepsilon_{\omega}$, which is much smaller than
$\omega_{1}$, the first uncountable ordinal, and even
$\omega_{1}^{\textrm{CK}}$, the first non-computable ordinal.
To achieve that purpose, we will have to handle carefully the
$\varepsilon$-numbers that are involved. Recall that there is some enumeration
$(\varepsilon_{\alpha})_{\alpha\in\textnormal{Ord}}$ of $\varepsilon$-numbers:
Any $\varepsilon$-number ordinal $\lambda$ is $\varepsilon_{\alpha}$ for some
ordinal $\alpha$.
###### Definition 1.1 (Canonical sequence defining an $\varepsilon$-number).
Let $\lambda$ be an $\varepsilon$-number. Ordinal $\lambda$ can always be
written as $\lambda=\sup\left(e_{\beta}\right)_{\beta<\gamma_{\lambda}}$ for
some canonical sequence, where $\gamma_{\lambda}$ is the length of this
sequence, and this sequence is defined as follows:
* •
If $\lambda=\varepsilon_{0}$ then we can write
$\varepsilon_{0}=\sup\\{\omega,\omega^{\omega},\omega^{\omega^{\omega}},\dots\\}$
and we take $\omega,\omega^{\omega},\omega^{\omega^{\omega}},\dots$ as the
canonical sequence for $\varepsilon_{0}$. Its length is $\omega$, and for
$\beta<\omega$, $e_{\beta}$ is $\omega^{\iddots^{\omega}}$ where there are
$\beta$ occurrences of $\omega$ in the exponent.
* •
If $\lambda=\varepsilon_{\alpha}$, where $\alpha$ is a non-zero limit ordinal,
then we can write $\lambda=\underset{\beta<\alpha}{\sup}\varepsilon_{\beta}$
and we take $\left(\varepsilon_{\beta}\right)_{\beta<\alpha}$ as the canonical
sequence of $\lambda$. Its length is $\alpha$ and for $\beta<\alpha$,
$e_{\beta}=\varepsilon_{\beta}$.
* •
If $\lambda=\varepsilon_{\alpha}$, where $\alpha$ is a successor ordinal, then
we can write
$\lambda=\sup\\{\varepsilon_{\alpha-1},{\varepsilon_{\alpha-1}}^{\varepsilon_{\alpha-1}},{\varepsilon_{\alpha-1}}^{{\varepsilon_{\alpha-1}}^{\varepsilon_{\alpha-1}}},\dots\\}$
and we take
$\varepsilon_{\alpha-1},{\varepsilon_{\alpha-1}}^{\varepsilon_{\alpha-1}},{\varepsilon_{\alpha-1}}^{{\varepsilon_{\alpha-1}}^{\varepsilon_{\alpha-1}}},\dots$
as the canonical sequence of $\lambda$. Its length is $\omega$, and for
$\beta<\omega$,
$e_{\beta}={\varepsilon_{\alpha-1}}^{\iddots^{\varepsilon_{\alpha-1}}}$ where
there are $\beta$ occurrences of $\varepsilon_{\alpha-1}$ in the exponent.
For example, the canonical sequence defining $\varepsilon_{1}$ is
$\varepsilon_{0},{\varepsilon_{0}}^{\varepsilon_{0}},{\varepsilon_{0}}^{{\varepsilon_{0}}^{\varepsilon_{0}}},\dots$,
the canonical sequence defining $\varepsilon_{\omega}$ is
$\varepsilon_{0},\varepsilon_{1},\varepsilon_{2},\dots$, the canonical
sequence of $\varepsilon_{\omega 2}$ is
$\varepsilon_{0},\varepsilon_{1},\varepsilon_{2},\dots,\varepsilon_{\omega},\varepsilon_{\omega+1},\dots$
and the canonical sequence of $\varepsilon_{\omega 2+1}$ is
${\varepsilon_{\omega 2}},{\varepsilon_{\omega 2}}^{\varepsilon_{\omega
2}},{\varepsilon_{\omega 2}}^{{\varepsilon_{\omega 2}}^{\varepsilon_{\omega
2}}},\dots$
###### Definition 1.1.
Let $\Gamma$ be an Abelian subgroup of No and $\lambda$ be an
$\varepsilon$-number whose canonical sequence is
$\left(e_{\beta}\right)_{\beta<\gamma_{\lambda}}$. We write
$\Gamma^{\uparrow\lambda}$ for the family of groups
$\left(\Gamma_{\beta}\right)_{\beta<\gamma_{\lambda}}$ defined as follows:
* •
$\Gamma_{0}=\Gamma$;
* •
$\Gamma_{\beta+1}$ is the group generated by the groups $\Gamma_{\beta}$,
$\mathbb{R}_{e_{\beta}}^{g\left((\Gamma_{\beta})^{*}_{+}\right)}$ and the set
$\left\\{\left.h(a_{i})\
\vphantom{\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\in\Gamma_{\beta}}\right|\
\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\in\Gamma_{\beta}\right\\}$
where $g$ and $h$ are Gonshor’s functions associated to exponential and
logarithm (see Section 5 below for some details);
* •
For a limit ordinal number $\beta$,
$\Gamma_{\beta}=\underset{\gamma<\beta}{\overset{}{\bigcup}}\Gamma_{\gamma}$.
When considering a family of sets $(S_{i})_{i\in I}$, we denote
$\mathbb{R}_{\lambda}^{(S_{i})_{i\in I}}=\underset{i\in
I}{\overset{}{\bigcup}}\mathbb{R}_{\lambda}^{S_{i}}$
In particular,
$\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}=\underset{i<\gamma_{\lambda}}{\overset{}{\bigcup}}\mathbb{R}_{\lambda}^{\Gamma_{i}}$
###### Remark 1.1.
By construction, if $\Gamma\subseteq\Gamma^{\prime}$ then
$\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}\subseteq\mathbb{R}_{\lambda}^{{\Gamma^{\prime}}^{\uparrow\lambda}}$.
The idea behind the definition of $\Gamma^{\uparrow\lambda}$ is that at step $i+1$ we
add new elements to close $\mathbb{R}_{\lambda}^{\Gamma_{i}}$ under
exponential and logarithm. The reason why we add
$\mathbb{R}_{e_{\beta}}^{g\left((\Gamma_{\beta})^{*}_{+}\right)}$ to
$\Gamma_{\beta}$ rather than
$\mathbb{R}_{\lambda}^{g\left((\Gamma_{\beta})^{*}_{+}\right)}$ is that we
want to keep control on what we add in the new group. In our previous work
[BG22] we came up with the following three statements:
###### Lemma 1.1 ([BG22, Lemma 5.7]).
Write
$\Gamma^{\uparrow\lambda}=\left(\Gamma_{\beta}\right)_{\beta<\gamma_{\lambda}}$,
and let
$L=\left\\{\left.\exp_{n}x,\ln_{n}x\
\vphantom{\begin{array}[]{c}x\in\mathbb{L},\ n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}}\right|\
\begin{array}[]{c}x\in\mathbb{L},\ n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}\right\\}$
we have for all $i<\gamma_{\lambda}$,
$L=\left\\{\left.\exp_{n}x,\ln_{n}x\
\vphantom{\begin{array}[]{c}x\in\mathbb{L},n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma_{i}}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}}\right|\
\begin{array}[]{c}x\in\mathbb{L},n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma_{i}}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}\right\\}$
###### Corollary 1.1 ([BG22, Corollary 5.8]).
Let $\Gamma$ be an Abelian additive subgroup of No and
$L=\left\\{\left.\exp_{n}x,\ln_{n}x\
\vphantom{\begin{array}[]{c}x\in\mathbb{L},n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}}\right|\
\begin{array}[]{c}x\in\mathbb{L},n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma}\ \exists P\in\mathcal{P}(y)\ \exists
k\in\mathbb{N}\quad P(k)=x\end{array}\right\\}$
Then,
$L=\left\\{\left.\exp_{n}x,\ln_{n}x\
\vphantom{\begin{array}[]{c}x\in\mathbb{L},\quad n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}\ \exists
P\in\mathcal{P}(y)\ \exists k\in\mathbb{N}\quad P(k)=x\end{array}}\right|\
\begin{array}[]{c}x\in\mathbb{L},\quad n\in\mathbb{N},\\\ \exists
y\in\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}\ \exists
P\in\mathcal{P}(y)\ \exists k\in\mathbb{N}\quad P(k)=x\end{array}\right\\}$
###### Theorem 1.2 ([BG22, Theorem 1.10]).
Let $\Gamma$ be an Abelian subgroup of No and $\lambda$ be an
$\varepsilon$-number, then
$\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}$ is stable under exponential
and logarithmic functions.
With such a notion, we managed to make a link between the two types of field
involved in Theorems 5.4 and 1.2. More precisely, the fields
$\mathbb{R}_{\lambda}^{\Gamma^{\uparrow\lambda}}$ are part of the fields
$\textnormal{No}_{\lambda}$.
###### Theorem 1.3 ([BG22, Theorem 1.11]).
$\textnormal{No}_{\lambda}=\bigcup_{\mu}\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu}}^{\uparrow\lambda}}$,
where $\mu$ ranges over the additive ordinals less than $\lambda$
(equivalently, $\mu$ ranges over the multiplicative ordinals less than $\lambda$).
Notice that now $\textnormal{No}_{\lambda}$ is expressed as an increasing
union of fields, each of them closed under $\exp$ and $\ln$. Indeed, by
definition, if $\mu<\mu^{\prime}$ then
$\textnormal{No}_{\mu}\subseteq\textnormal{No}_{\mu^{\prime}}$ and Remark 1.1
gives
$\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu}}^{\uparrow\lambda}}\subseteq\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu^{\prime}}}^{\uparrow\lambda}}$.
Finally, we proved that each field
$\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu}}^{\uparrow\lambda}}$ is
interesting in itself since none of them is $\textnormal{No}_{\lambda}$. More
precisely:
###### Theorem 1.4 ([BG22, Theorem 1.12]).
For every $\varepsilon$-number $\lambda$, the hierarchy in the
previous theorem is strict:
$\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu}}^{\uparrow\lambda}}\subsetneq\mathbb{R}_{\lambda}^{{\textnormal{No}_{\mu^{\prime}}}^{\uparrow\lambda}}$
for all multiplicative ordinals $\mu$ and $\mu^{\prime}$ such that
$\omega<\mu<\mu^{\prime}<\lambda$.
In this article, we will go further and investigate the case of stability
under derivative and anti-derivative.
###### Main Theorem 1.4.
Let $\alpha$ be a limit ordinal and
$\left(\Gamma_{\beta}\right)_{\beta<\alpha}$ be a sequence of Abelian
subgroups of No such that
* •
$\forall\beta<\alpha\quad\forall\gamma<\beta\qquad\Gamma_{\gamma}\subseteq\Gamma_{\beta}$
* •
$\forall\beta<\alpha\qquad\omega^{(\Gamma_{\beta})^{*}_{+}}\succ^{K}\kappa_{-\varepsilon_{\beta}}$
* •
$\forall\beta<\alpha\quad\forall\gamma<\varepsilon_{\beta}\qquad\kappa_{-\gamma}\in\omega^{\Gamma_{\beta}}$
* •
$\forall\beta<\alpha\quad\exists\eta_{\beta}<\varepsilon_{\beta}\quad\forall
x\in\omega^{\Gamma_{\beta}}\qquad\operatorname{NR}(x)<\eta_{\beta}$
Then
$\underset{\beta<\alpha}{\overset{}{\bigcup}}\mathbb{R}_{\varepsilon_{\beta}}^{\Gamma_{\beta}^{\uparrow\varepsilon_{\beta}}}$
is stable under $\exp$, $\ln$, $\partial$ and anti-derivation (see Section 6).
Actually, we mainly focus on properties of the derivation suggested by
Berarducci and Mantova and its anti-derivation. In particular, we establish
various bounds that are useful to find fields stable under derivation and
anti-derivation. We also prove the following:
###### Proposition 1.4.
For any $x\in\textnormal{No}$, the set $\mathcal{P}_{\mathbb{L}}(x)$ is well-
ordered with order type
$\beta<\omega^{\omega^{\omega(\operatorname{NR}(x)+1)}}$. In particular,
$\nu(\partial x)<\omega^{\omega^{\omega(\operatorname{NR}(x)+1)}}$
In the above proposition and the above theorem, $\operatorname{NR}$ is the
nested truncation rank which is defined in Definition 6.2.1 and $\nu(x)$ is
the length of the series of the normal form of the surreal number $x$ (see
Definition 3.3.3). The previous proposition is essential to control
derivatives of surreal numbers and then obtain fields stable under derivation. To
handle anti-derivation, we came up with the following proposition:
###### Proposition 1.4.
Let $x$ be a surreal number. Let $\gamma$ be the smallest ordinal such that
$\kappa_{-\gamma}\prec^{K}P(k_{P})$ for all paths
$P\in\mathcal{P}_{\mathbb{L}}(x)$. Let $\lambda$ be the least
$\varepsilon$-number greater than $\operatorname{NR}(x)$ and $\gamma$. Then
$\underset{i\in\mathbb{N}}{\overset{}{\bigcup}}\operatorname{supp}\Phi^{i}(x)$
(see Definition 6.5) is reverse well-ordered with order type less than
$\omega^{\omega^{\lambda+2}}$.
Thanks to the two propositions above, we will be able to prove our main theorem.
##### Organization of the paper
This article is organized as follows. Section 2 is a quick reminder of some
lemmas about order types that will be useful at the end of this article.
Section 3 recalls basics of the concepts and definitions of the theory of
surreal numbers, and fixes the notations used in the rest of the paper.
Section 4 recalls what is known about the stability properties of various
subfields of No according to their signs sequence representation or Hahn
series representation. In Section 5 we recall the definitions and properties
of exponential and logarithm. In Section 6, we recall some existing literature
about log-atomic numbers and derivation, and establish some results about the
nested truncation rank, a notion of rank related to the structure of the
surreal numbers and to log-atomic numbers. Finally, in Section 7, we build
surreal fields that are stable under exponentiation, logarithm, derivation and
anti-derivation. We also show how this construction can lead to an example
which only uses “small” ordinals, which is good from a Computability Theory
point of view.
## 2 Order type toolbox
In this section, we quickly take a look at some useful lemmas about order types
of well-ordered sets. In all the following, circled operators
($\oplus,\otimes$) stand for usual operations over ordinal numbers. The usual
symbols ($+,\times$) stand for natural operations, which are commutative.
Our first proposition is about the union of well-ordered sets. This result is
already known but we still provide a proof since it is hard to find in the
literature.
###### Lemma 2.0 (Folklore).
Let $\Gamma$ be a totally ordered set, $A\subseteq\Gamma$ be a well-ordered
subset with order type $\alpha$. Let $g\in\Gamma$. Then the set $A\cup\\{g\\}$
is well ordered with order type at most $\alpha+1$.
###### Proof.
We prove it by induction on $\alpha$.
* •
If $\alpha=0$ then $A\cup\\{g\\}$ has only one element, and then has order
type $1=\alpha+1$.
* •
If $\alpha=\gamma+1$ is a successor ordinal. Let $u$ be the largest element in
$A$. If $u\leq g$ then $A\cup\\{g\\}$ has indeed order type at most
$\alpha+1$. If not, then, by induction hypothesis,
$\left(A\setminus\\{u\\}\right)\cup\\{g\\}$ has order type at most
$\gamma+1=\alpha$. Then
$A\cup\\{g\\}=\left(\left(A\setminus\\{u\\}\right)\cup\\{g\\}\right)\cup\\{u\\}$
has order type at most $\alpha+1$.
* •
If $\alpha$ is a limit ordinal. If $g$ is larger than every element of $A$, then
$A\cup\\{g\\}$ has order type $\alpha+1$. If not, let $a_{0}\in A$ such that
$a_{0}\geq g$. For $a\in A$ such that $a>a_{0}$ set
$B_{a}=\\{g\\}\cup\left\\{\left.a^{\prime}\in A\
\vphantom{a^{\prime}<a}\right|\ a^{\prime}<a\right\\}$
Since $\alpha$ is limit, we have
$A\cup\\{g\\}=\underset{a>a_{0}}{\overset{}{\bigcup}}B_{a}$
and each of the sets in the union is an initial segment of $A\cup\\{g\\}$.
We also denote $\alpha_{a}$ the order type of the set
$\left\\{\left.a^{\prime}\in A\ \vphantom{a^{\prime}<a}\right|\
a^{\prime}<a\right\\}$. In particular, $\alpha_{a}<\alpha$. Using induction
hypothesis, $B_{a}$ has order type at most $\alpha_{a}+1$. Then, since we have
an increasing union of initial segments, the order type of $A\cup\\{g\\}$ is
at most
$\sup\left\\{\left.\alpha_{a}+1\ \vphantom{a>a_{0}}\right|\
a>a_{0}\right\\}=\sup\left\\{\left.\alpha^{\prime}+1\
\vphantom{\alpha^{\prime}<\alpha}\right|\
\alpha^{\prime}<\alpha\right\\}=\alpha$
since $\alpha$ is a limit ordinal.
We conclude thanks to the induction principle. ∎
###### Proposition 2.0 (Union of well-ordered sets, folklore).
Let $\Gamma$ be a totally ordered set and $A,B\subseteq\Gamma$ be non-empty well-
ordered subsets with respective order types $\alpha$ and $\beta$. Then the
subset $A\cup B$ is well ordered with order type at most $\alpha+\beta$.
###### Proof.
$A\cup B$ is well-ordered. Indeed, if we had an infinite decreasing sequence
in $A\cup B$, then we could extract from it an infinite decreasing sequence in either $A$ or $B$,
which is not possible. It remains to show the bound on its order type. We do
it by induction over $\alpha$ and $\beta$.
* •
If $\alpha=\beta=1$, then $A\cup B$ has at most two elements. Then, its order
type is at most $2=\alpha+\beta$.
* •
If $\alpha$ or $\beta$ is a successor ordinal. Since both cases are symmetric,
we assume without loss of generality that $\beta=\gamma+1$. Let $u$ be the
largest element of $B$ and $C=B\setminus\\{u\\}$. Then, by induction
hypothesis, $A\cup C$ has order type at most $\alpha+\gamma$. Using Lemma 2,
we get that the order type of $A\cup B$ is at most
$\alpha+\gamma+1=\alpha+\beta$.
* •
If $\alpha$ and $\beta$ are limit ordinals. $A$ or $B$ must be cofinal with
$A\cup B$. For instance say it is $A$. For $a\in A$, let
$A_{a}=\left\\{\left.a^{\prime}\in A\ \vphantom{a^{\prime}<a}\right|\
a^{\prime}<a\right\\}\qquad\text{and}\qquad B_{a}=\left\\{\left.b\in B\
\vphantom{b<a}\right|\ b<a\right\\}$
We have $A\cup B=\underset{a\in A}{\overset{}{\bigcup}}A_{a}\cup B_{a}$
Since $A$ is cofinal with $A\cup B$, it is an increasing union of initial
segments. Let $\alpha_{a}$ be the order type of $A_{a}$ and $\beta_{a}$ the
one of $B_{a}$. We have $\alpha_{a}<\alpha$ and $\beta_{a}\leq\beta$. By
induction hypothesis, $A_{a}\cup B_{a}$ has order type at most
$\alpha_{a}+\beta_{a}$. Then $A\cup B$ has order type at most
$\sup\left\\{\left.\alpha_{a}+\beta_{a}\ \vphantom{a\in A}\right|\ a\in
A\right\\}\leq\alpha+\beta$
We conclude the proof using the induction principle. ∎
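As a quick illustration (an elementary check, not taken from the literature): in $\Gamma=\mathbb{Q}$, take $A=\\{1-2^{-n}\mid n\in\mathbb{N}\\}$ and $B=\\{2-2^{-n}\mid n\in\mathbb{N}\\}$, both of order type $\omega$. Every element of $A$ is smaller than every element of $B$, so $A\cup B$ has order type exactly $\omega+\omega$, showing that the bound $\alpha+\beta$ of the proposition above can be attained. It need not be: for $A=B$ the union has order type $\alpha<\alpha+\beta$.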
We now move to addition of well-ordered subsets of a group. Again this result
is known but its proof is not easily found in the literature.
###### Proposition 2.0 (Folklore).
Let $\Gamma$ be an ordered Abelian additive monoid and $A,B\subseteq\Gamma$ be
non-empty well-ordered subsets with respective order types $\alpha$ and
$\beta$. Then the subset $A+B=\left\\{\left.a+b\ \vphantom{a\in A,\ b\in
B}\right|\ a\in A,\ b\in B\right\\}$ is well ordered with order type at
most $\alpha\beta$.
###### Proof.
We do it by induction over $\alpha$ and $\beta$.
* •
If $\alpha=\beta=1$, then $A+B$ has only one element, then has order type
$1=\alpha\beta$.
* •
If $\alpha$ or $\beta$ is not an additive ordinal. Let us say
$\beta=\gamma+\delta$ with $\gamma,\delta<\beta$. We choose $\gamma,\delta$
such that $\gamma+\delta=\gamma\oplus\delta$. Let $B_{1}$ be the initial segment
of length $\gamma$ of $B$. Let $B_{2}=B\setminus B_{1}$. $B_{2}$ has order
type $\delta$. Then, by induction hypothesis, $A+B_{1}$ has order type at most
$\alpha\gamma$ and $A+B_{2}$ has order type at most $\alpha\delta$. Then,
using Proposition 2, $A+B$ has order type at most
$\alpha\gamma+\alpha\delta=\alpha\beta$.
* •
If both $\alpha$ and $\beta$ are additive ordinals. Assume $A+B$ has order
type more than $\alpha\beta$. Let $a+b\in A+B$ such that the set $C$ defined
by
$C:=\left\\{\left.c\in A+B\ \vphantom{c<a+b}\right|\ c<a+b\right\\}$
has order type $\alpha\beta$. Let
$A_{0}=\left\\{\left.a^{\prime}\in A\ \vphantom{a^{\prime}<a}\right|\
a^{\prime}<a\right\\}$ and $B_{0}=\left\\{\left.b^{\prime}\in B\
\vphantom{b^{\prime}<b}\right|\ b^{\prime}<b\right\\}$
and $\alpha_{0}$ and $\beta_{0}$ their respective order types. We have
$C\subseteq\left(A_{0}+B\right)\cup\left(A+B_{0}\right)$
Using induction hypothesis and Proposition 2, $C$ has order type at most
$\alpha_{0}\beta+\alpha\beta_{0}$. Since $\alpha_{0}<\alpha$ and
$\beta_{0}<\beta$, we have $\alpha_{0}\beta<\alpha\beta$ and
$\alpha\beta_{0}<\alpha\beta$. $\alpha$ and $\beta$ being additive ordinals,
$\alpha\beta$ is itself an additive ordinal and then $C$ has order type less
than $\alpha\beta$, which is a contradiction. Then $A+B$ has order type at most
$\alpha\beta$.
We conclude thanks to the induction principle. ∎
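For instance (again an elementary check, not taken from the literature): in $\Gamma=\mathbb{R}$, take $A=\mathbb{N}$ and $B=\\{1-2^{-m}\mid m\in\mathbb{N}\\}$, both of order type $\omega$. Then $A+B=\\{n+1-2^{-m}\mid n,m\in\mathbb{N}\\}$ consists, for each $n$, of a block of order type $\omega$ contained in $[n,n+1)$, and these blocks follow one another, so $A+B$ has order type exactly $\omega\cdot\omega=\omega^{2}=\alpha\beta$: the bound of the proposition above can be attained.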
In the same spirit, we can take a look at a well-ordered non-negative subset of
an ordered group. The proof is harder, so we refer to [Wei09] for the
details.
###### Proposition 2.0 ([Wei09, Corollary 1]).
Let $\Gamma$ be an ordered Abelian group and $S\subseteq\Gamma_{+}$ be a well-
ordered subset with order type $\alpha$. Then, $\left\langle S\right\rangle$,
the monoid generated by $S$ in $\Gamma$ is itself well-ordered with order type
at most $\omega^{\widehat{\alpha}}$ where, if the Cantor normal form of
$\alpha$ is
$\alpha=\underset{i=1}{\overset{n}{\sum}}\omega^{\alpha_{i}}n_{i}$
then
$\widehat{\alpha}=\underset{i=1}{\overset{n}{\sum}}\omega^{\alpha_{i}^{\prime}}n_{i}$
and $\beta^{\prime}=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\beta+1&\text{if
$\beta$ is an $\varepsilon$-number}\\\
\beta&\text{otherwise}\end{array}\right.$
In particular, $\left\langle S\right\rangle$ has order type at most
$\omega^{\omega\alpha}$ (commutative multiplication).
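As simple illustrations (elementary checks, not taken from [Wei09]): for $S=\\{1\\}\subseteq\mathbb{Z}$ we have $\alpha=1$, hence $\widehat{\alpha}=1$, and $\left\langle S\right\rangle=\mathbb{N}$ indeed has order type $\omega=\omega^{\widehat{\alpha}}$; for $S=\\{1,\sqrt{2}\\}\subseteq\mathbb{R}$ we have $\alpha=2$ while $\left\langle S\right\rangle=\\{a+b\sqrt{2}\mid a,b\in\mathbb{N}\\}$ has order type $\omega$ only, so the bound $\omega^{\widehat{\alpha}}=\omega^{2}$ need not be attained.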
Finally, we consider finite sequences over a well ordered set.
###### Theorem 2.1 ([dP77, Theorem 3.11] and [Sch20, Theorem 2.9]).
Let $(X,\leq)$ be a well ordered set with order type $\alpha$. Let $X^{*}$ be
the set of finite sequences over $X$. Let $\beta$ the order type of $X^{*}$.
We have
$\beta\leq\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\omega^{\omega^{\alpha-1}}&\text{if
}\alpha\text{ is finite}\\\ \omega^{\omega^{\alpha+1}}&\textit{if
}\varepsilon\leq\alpha<\varepsilon+\omega\text{ for some $\varepsilon$-number
}\varepsilon\\\ \omega^{\omega^{\alpha}}&\text{ otherwise}\end{array}\right.$
## 3 Surreal numbers
We assume some familiarity with the ordered field of surreal numbers (refer to
[Con00, Gon86] for presentations) which we denote by No. In this section we
give a brief presentation of the basic definitions and results, and we fix the
notations that will be used in the rest of the paper.
### 3.1 Order and simplicity
The class No of surreal numbers can be defined either by transfinite
recursion, as in [Con00] or by transfinite length sequences of $+$ and $-$ as
done in [Gon86]. We will mostly follow [Gon86], as well as [BM18a] for their
presentation.
We introduce the class $\textnormal{No}=2^{<\textnormal{On}}$ of all binary
sequences of some ordinal length $\alpha\in\textnormal{On}$, where On denotes
the class of the ordinals. In other words, No corresponds to functions of the
form $x:\alpha\to\\{-,+\\}$. The length (sometimes also called birthday in the
literature) of a surreal number $x$ is the ordinal number
$\alpha=\operatorname{dom}(x)$. We will also write
$\alpha=\left|x\right|_{+-}$ (the point of this notation is to “count” the
number of pluses and minuses). Note that No is not a set but a proper class,
and all the relations and functions we shall define on No are going to be
class-relations and class-functions, usually constructed by transfinite
induction.
We say that $x\in\textnormal{No}$ is simpler than $y\in\textnormal{No}$,
denoted $x\sqsubset y$, if $x$ is a strict initial segment (also called
prefix) of $y$ as a binary sequence. We say that $x$ is simpler than or equal
to $y$, written $x\sqsubseteq y$, if $x\sqsubset y$ or $x=y$ i.e., $x$ is an
initial segment of $y$. The simplicity relation is a binary tree-like partial
order on No, with the immediate successors of a node $x\in\textnormal{No}$
being the sequences $x_{-}$ and $x_{+}$ obtained by appending $-$ or $+$ at
the end of the signs sequence of $x$. Observe in particular that the
simplicity relation $\sqsubset$ is well-founded, and the empty sequence, which
will play the role of the number zero, is simpler than any other surreal
number.
We can introduce a total order $<$ on No which is basically the lexicographic
order over the corresponding sequences: More precisely, we consider the order
$-<\square<+$ where $\square$ is the blank symbol. Now to compare two signs
sequences, append blank symbols to the shortest so that they have the same
length. Then, just compare them with the corresponding lexicographic order to
get the total order $<$.
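As a quick illustration (an elementary check, anticipating the identification of finite sign sequences with dyadic rationals recalled in Section 3.2): the sequences $+$, $+-$ and $-+$ denote $1$, $1/2$ and $-1/2$ respectively. Padding with blanks and comparing with $-<\square<+$ gives $-+\;<\;\square\square\;<\;+-\;<\;+\square$, that is $-1/2<0<1/2<1$, as expected.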
Given two sets $A\subseteq\textnormal{No}$ and $B\subseteq\textnormal{No}$
with $A<B$ (meaning that $a<b$ for all $a\in A$ and $b\in B$), it is quite
easy to understand why there is a simplest surreal number, denoted
$\left[\left.A\ \vphantom{B}\right|\ B\right]$ such that $A<\left[\left.A\
\vphantom{B}\right|\ B\right]<B$. However, a formal proof is long. See [Gon86,
Theorem 2.1] for details. If $x=\left[\left.A\ \vphantom{B}\right|\ B\right]$,
we say that the pair $\left[\left.A\ \vphantom{B}\right|\ B\right]$ is a
representation of $x$.
Every surreal number $x$ has several different representations
$x=\left[\left.A\ \vphantom{B}\right|\ B\right]=\left[\left.A^{\prime}\
\vphantom{B^{\prime}}\right|\ B^{\prime}\right]$, for instance, if $A$ is
cofinal with $A^{\prime}$ and $B$ is coinitial with $B^{\prime}$. In this
situation, we shall say that $\left[\left.A\ \vphantom{B}\right|\
B\right]=\left[\left.A^{\prime}\ \vphantom{B^{\prime}}\right|\
B^{\prime}\right]$ by cofinality. On the other hand, as discussed in [BM18a],
it may well happen that $\left[\left.A\ \vphantom{B}\right|\
B\right]=\left[\left.A^{\prime}\ \vphantom{B^{\prime}}\right|\
B^{\prime}\right]$ even if $A$ is not cofinal with $A^{\prime}$ or $B$ is not
coinitial with $B^{\prime}$. The canonical representation $x=\left[\left.A\
\vphantom{B}\right|\ B\right]$ is the unique one such that $A\cup B$ is
exactly the set of all surreal numbers strictly simpler than $x$. Indeed, it
turns out that if $A=\left\\{\left.y\sqsubset x\ \vphantom{y<x}\right|\
y<x\right\\}$ and $B=\left\\{\left.y\sqsubset x\ \vphantom{y>x}\right|\
y>x\right\\}$, then $x=\left[\left.A\ \vphantom{B}\right|\ B\right]$.
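For instance (an elementary check): the dyadic number $3/4$ has sign sequence $+-+$, so the surreal numbers strictly simpler than it are $0$, $1$ and $1/2$; its canonical representation is therefore $3/4=\left[\left.0,1/2\ \vphantom{1}\right|\ 1\right]$, and $3/4=\left[\left.1/2\ \vphantom{1}\right|\ 1\right]$ by cofinality.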
###### Remark 3.0.
By definition, if $x=\left[\left.A\ \vphantom{B}\right|\ B\right]$ and
$A<y<B$, then $x\sqsubseteq y$.
To make the reading easier we may omit the braces $\\{\\}$ when writing explicitly $A$
and $B$. For instance $\left[\left.x\ \vphantom{y}\right|\ y\right]$ will
often stand for $\left[\left.\\{x\\}\ \vphantom{\\{y\\}}\right|\
\\{y\\}\right]$ when $x,y\in\textnormal{No}$.
### 3.2 Field operations
Ring operations $+$, $·$ on No are defined by transfinite induction on
simplicity as follows:
$x+y:=\left[\left.x^{\prime}+y,x+y^{\prime}\
\vphantom{x^{\prime\prime}+y,x+y^{\prime\prime}}\right|\
x^{\prime\prime}+y,x+y^{\prime\prime}\right]$
$xy:=\left[\left.\begin{array}[]{c}x^{\prime}y+xy^{\prime}-x^{\prime}y^{\prime}\\\
x^{\prime\prime}y+xy^{\prime\prime}-x^{\prime\prime}y^{\prime\prime}\end{array}\
\vphantom{\begin{array}[]{c}x^{\prime}y+xy^{\prime\prime}-x^{\prime}y^{\prime\prime}\\\
x^{\prime\prime}y+xy^{\prime}-x^{\prime\prime}y^{\prime}\end{array}}\right|\
\begin{array}[]{c}x^{\prime}y+xy^{\prime\prime}-x^{\prime}y^{\prime\prime}\\\
x^{\prime\prime}y+xy^{\prime}-x^{\prime\prime}y^{\prime}\end{array}\right]$
where $x^{\prime}$ (resp. $y^{\prime}$) ranges over the numbers simpler than
$x$ (resp. $y$) such that $x^{\prime}<x$ (resp. $y^{\prime}<y$) and
$x^{\prime\prime}$ (resp. $y^{\prime\prime}$) ranges over the numbers simpler
than $x$ (resp. $y$) such that $x<x^{\prime\prime}$ (resp.
$y<y^{\prime\prime}$); in other words, when $x=\left[\left.x^{\prime}\
\vphantom{x^{\prime\prime}}\right|\ x^{\prime\prime}\right]$ and
$y=\left[\left.y^{\prime}\ \vphantom{y^{\prime\prime}}\right|\
y^{\prime\prime}\right]$ are the canonical representations of $x$ and $y$
respectively. The expression for the product may seem unintuitive, but
actually, it is basically inspired by the fact that we expect
$(x-x^{\prime})(y-y^{\prime})>0$,
$(x-x^{\prime\prime})(y-y^{\prime\prime})>0$,
$(x-x^{\prime})(y-y^{\prime\prime})<0$ and
$(x-x^{\prime\prime})(y-y^{\prime})<0$.
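As a small worked example (an elementary check of these formulas): the canonical representation of $1$ has $0$ as its only left option and no right option, so the formula for the sum gives $1+1=\left[\left.0+1,1+0\ \vphantom{\varnothing}\right|\ \varnothing\right]=\left[\left.1\ \vphantom{\varnothing}\right|\ \varnothing\right]=2$, the simplest surreal number greater than $1$. Similarly, $\omega+1=\left[\left.n+1,\omega\ \vphantom{\varnothing}\right|\ \varnothing\right]$ (with $n$ ranging over $\mathbb{N}$), the simplest number greater than $\omega$, namely the ordinal $\omega+1$.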
###### Remark 3.0.
The definitions of sum and product are uniform in the sense of [Gon86, page
15]. Namely the equations that define $x+y$ and $xy$ do not require the
canonical representations of $x$ and $y$ but any representation. In
particular, if $x=\left[\left.A\ \vphantom{B}\right|\ B\right]$ and
$y=\left[\left.C\ \vphantom{D}\right|\ D\right]$, the variables
$x^{\prime},x^{\prime\prime},y^{\prime},y^{\prime\prime}$ may range over $A$,
$B$, $C$, $D$ respectively.
It is an early result that these operations, together with the order, give No
a structure of ordered field, and even a structure of real closed field (see
[Gon86, Theorem 5.10]). Consequently, there is a unique embedding of the
rational numbers in No so we can identify $\mathbb{Q}$ with a subfield of No.
Actually, the subgroup of the dyadic rationals $m/2^{n}\in\mathbb{Q}$, with
$m\in\mathbb{Z}$ and $n\in\mathbb{N}$, corresponds exactly to the surreal
numbers $s:k\to\\{-,+\\}$ of finite length $k\in\mathbb{N}.$
The field $\mathbb{R}$ can be isomorphically identified with a subfield of No
by sending $x\in\mathbb{R}$ to the number $\left[\left.A\ \vphantom{B}\right|\
B\right]$ where $A\subseteq\textnormal{No}$ is the set of rationals
(equivalently: dyadics) lower than $x$ and $B\subseteq\textnormal{No}$ is the
set of rationals (equivalently: dyadics) greater than $x$. This embedding is consistent
with the one of $\mathbb{Q}$ into No. We may thus write
$\mathbb{Q}\subseteq\mathbb{R}\subseteq\textnormal{No}$. By [Gon86, page 33],
the length of a real number is at most $\omega$ (the least infinite ordinal).
There are however surreal numbers of length $\omega$ which are not real
numbers, such as $\omega$ itself or its inverse that is a positive
infinitesimal.
The ordinal numbers can be identified with a subclass of No by sending the
ordinal $\alpha$ to the sequence $s:\alpha\rightarrow\\{+,-\\}$ with constant
value $+$. Under this identification, the ring operations of No, when
restricted to the ordinals $\textnormal{Ord}\subseteq\textnormal{No}$,
coincide with the Hessenberg sum and product (also called natural operations)
of ordinal numbers. Similarly, the sequence $s:\alpha\rightarrow\\{+,-\\}$
with constant value $-$ corresponds to the opposite (inverse for the additive
law) of the ordinal $\alpha$, namely $-\alpha$. We remark that
$x\in\textnormal{Ord}$ if and only if $x$ admits a representation of the form
$x=\left[\left.A\ \vphantom{B}\right|\ B\right]$ with $B=\varnothing$, and
similarly $x\in-\textnormal{Ord}$ if and only if we can write
$x=\left[\left.A\ \vphantom{B}\right|\ B\right]$ with $A=\varnothing$.
Under the above identification of $\mathbb{Q}$ as a subfield of No, the
natural numbers $\mathbb{N}\subseteq\mathbb{Q}$ are exactly the finite
ordinals.
### 3.3 Hahn series
#### 3.3.1 Generalities
Let $\mathbb{K}$ be a field, and let $G$ be a divisible ordered Abelian group.
###### Definition 3.0 (Hahn series [Hah95]).
The Hahn series (obtained from $\mathbb{K}$ and $G$) are formal power series
of the form $s=\sum_{g\in S}a_{g}t^{g}$, where $S$ is a well-ordered subset of
$G$ and $a_{g}\in\mathbb{K}$. The support of $s$ is
$\operatorname{supp}(s)=\left\\{\left.g\in S\ \vphantom{a_{g}\neq 0}\right|\
a_{g}\neq 0\right\\}$ and the length of $s$ is the order type of
$\operatorname{supp}(s)$.
We write $\mathbb{K}\left(\left(G\right)\right)$ for the set of Hahn series
with coefficients in $\mathbb{K}$ and terms corresponding to elements of $G$.
###### Definition 3.0 (Operations on
$\mathbb{K}\left(\left(G\right)\right)$).
The operations on $\mathbb{K}\left(\left(G\right)\right)$ are defined in the natural
way: Let $s=\sum_{g\in S}a_{g}t^{g},s^{\prime}=\sum_{g\in
S^{\prime}}a_{g}^{\prime}t^{g}$, where $S,S^{\prime}$ are well ordered.
* •
$s+s^{\prime}=\sum_{g\in S\cup
S^{\prime}}\left(a_{g}+a_{g}^{\prime}\right)t^{g}$, where $a_{g}=0$ if
$g\notin S$, and $a_{g}^{\prime}=0$ if $g\notin S^{\prime}$.
* •
$s\cdot s^{\prime}=\sum_{g\in T}b_{g}t^{g}$, where
$T=\left\\{\left.g_{1}+g_{2}\ \vphantom{g_{1}\in S\wedge g_{2}\in
S^{\prime}}\right|\ g_{1}\in S\wedge g_{2}\in S^{\prime}\right\\}$, and for
each $g\in T$, we set $b_{g}=\underset{g_{1}\in S,g_{2}\in
S^{\prime}|g_{1}+g_{2}=g}{\overset{}{\sum}}a_{g_{1}}\cdot a_{g_{2}}^{\prime}$
Hahn fields inherit a lot from the structure of the coefficient field. In
particular if $\mathbb{K}$ is algebraically closed, and if $G$ is some
divisible (i.e. for any positive integer $n$ and $g\in G$ there is some
$g^{\prime}\in G$ such that $ng^{\prime}=g$) ordered Abelian group, then the
corresponding Hahn field is also algebraically closed. More precisely:
###### Theorem 3.1 (Generalized Newton-Puiseux Theorem, Maclane [Mac39]).
Let $G$ be a divisible ordered Abelian group, and let $\mathbb{K}$ be a field
that is algebraically closed of characteristic $0$. Then
$\mathbb{K}\left(\left(G\right)\right)$ is also algebraically closed.
As noticed in [All87], we can deduce the following:
###### Corollary 3.1.
Let $G$ be a divisible ordered Abelian group, and let $\mathbb{K}$ be a field
that is real closed of characteristic $0$. Then
$\mathbb{K}\left(\left(G\right)\right)$ is also real closed.
###### Proof.
$\mathbb{K}$ is real closed. That is to say that $-1$ is not a square in
$\mathbb{K}$ and that $\mathbb{K}[i]$ is algebraically closed. Notice that
$\mathbb{K}[i]((G))=\left(\mathbb{K}((G))\right)[i]$. Therefore, Theorem 3.1
ensures that $\left(\mathbb{K}((G))\right)[i]$ is algebraically closed. Also,
$-1$ is not a square in $\mathbb{K}((G))$. Therefore, $\mathbb{K}((G))$ is
real closed. ∎
#### 3.3.2 Restricting length of ordinals
In this article, we will often restrict the class of ordinals allowed in the
ordinal sum, namely by restricting to ordinals up to some ordinal $\lambda$.
We then give the following notation:
###### Definition 3.1 ($\mathbb{K}\left(\left(G\right)\right)_{\gamma}$).
Let $\gamma$ be some ordinal. We write
$\mathbb{K}\left(\left(G\right)\right)_{\gamma}$ for the restriction of
$\mathbb{K}\left(\left(G\right)\right)$ to formal power series whose support
has an order type in $\gamma$ (that is to say, corresponds to some ordinal
less than $\gamma$).
###### Theorem 3.2.
Assume $\gamma$ is some $\varepsilon$-number. Then
$\mathbb{K}\left(\left(G\right)\right)_{\gamma}$ is a field.
###### Proof.
This basically relies on the observation that the inverse of a Hahn series in
this field has a support whose order type remains below $\gamma$: this is basically a
consequence of Proposition 2. ∎
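To see why some assumption on $\gamma$ is needed (an elementary example, not taken from the references): in $\mathbb{R}\left(\left(\mathbb{Z}\right)\right)$ the inverse of $1-t$ is $\sum_{n\in\mathbb{N}}t^{n}$, whose support has order type $\omega$; hence the restriction to finite supports, $\mathbb{R}\left(\left(\mathbb{Z}\right)\right)_{\omega}$, is a ring but not a field, and indeed $\omega$ is not an $\varepsilon$-number. When $\gamma$ is an $\varepsilon$-number, the bounds of Section 2 keep the support of the inverse below $\gamma$.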
We also get:
###### Proposition 3.2 ([vE01, Lemma 4.6]).
Assume $\mathbb{K}$ is some real closed field, and $G$ is some abelian
divisible group. Then $\mathbb{K}\left(\left(G\right)\right)_{\gamma}$ is real
closed.
Actually, this was stated in [vE01, Lemma 4.6] for the case
$\mathbb{K}=\mathbb{R}$, but the proof only uses the fact that $\mathbb{R}$ is
real-closed.
#### 3.3.3 Normal form theorem for surreal numbers
###### Definition 3.2.
For $a$ and $b$ two surreal numbers, we define the following relations:
* •
$a\prec b$ if for all $n\in\mathbb{N}$, $n|a|<|b|$.
* •
$a\preceq b$ if there is some natural number $n\in\mathbb{N}$ such that
$|a|<n|b|$.
* •
$a\asymp b$ if $a\preceq b$ and $b\preceq a$.
With this definition, $\preceq$ is a preorder and $\prec$ is the corresponding
strict preorder. The associated equivalence relation is $\asymp$ and the
equivalence classes are the Archimedean classes.
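For instance (elementary examples): all nonzero real numbers are $\asymp$-equivalent, $3\omega+5\asymp\omega$ while $\omega^{2}\succ\omega$, and $\omega^{-1}$ (the inverse of $\omega$) satisfies $\omega^{-1}\prec r$ for every positive real $r$; thus $\omega^{-1}$, $1$, $\omega$ and $\omega^{2}$ lie in pairwise distinct Archimedean classes.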
###### Theorem 3.3 ([Gon86, Theorem 5.1]).
For every nonzero surreal number $a$ there is a unique positive surreal number $x$ of minimal
length such that $a\asymp x$.
The function giving the unique positive element of minimal length in each Archimedean class has many
properties similar to those of exponentiation:
###### Definition 3.3.
For all surreal number $a$ written in canonical representation
$a=\left[\left.a^{\prime}\ \vphantom{a^{\prime\prime}}\right|\
a^{\prime\prime}\right]$, we define
$\omega^{a}=\left[\left.0,\left\\{\left.n\omega^{a^{\prime}}\
\vphantom{n\in\mathbb{N}}\right|\ n\in\mathbb{N}\right\\}\
\vphantom{\left\\{\left.\frac{1}{2^{n}}\omega^{a^{\prime\prime}}\
\vphantom{n\in\mathbb{N}}\right|\ n\in\mathbb{N}\right\\}}\right|\
\left\\{\left.\frac{1}{2^{n}}\omega^{a^{\prime\prime}}\
\vphantom{n\in\mathbb{N}}\right|\ n\in\mathbb{N}\right\\}\right]$
We call such surreal numbers monomials.
Actually this definition is uniform ([Gon86, Corollary 5]) and therefore, we
can use any representation of $a$ in this definition. Another point is that we
can easily check that this notation is consistent with the ordinal
exponentiation. More precisely, if $a$ is an ordinal, $\omega^{a}$ is indeed
the ordinal corresponding to the ordinal exponentiation (see [Gon86, Theorem
5.4]). Finally, as announced, this definition gives the simplest elements
among the Archimedean classes.
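As an example (an elementary computation with this definition): for $a=1$, whose only strict prefix is $0$, the bracket gives $\omega^{1}=\left[\left.0,n\omega^{0}\ \vphantom{\varnothing}\right|\ \varnothing\right]=\left[\left.n\ \vphantom{\varnothing}\right|\ \varnothing\right]=\omega$, while for $a=-1$ it gives $\omega^{-1}=\left[\left.0\ \vphantom{\frac{1}{2^{n}}}\right|\ \frac{1}{2^{n}}\omega^{0}\right]=\left[\left.0\ \vphantom{\frac{1}{2^{n}}}\right|\ \frac{1}{2^{n}}\right]$, the simplest positive infinitesimal, i.e. the inverse of $\omega$.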
###### Theorem 3.4 ([Gon86, Theorem 5.3]).
A surreal number is of the form $\omega^{a}$ if and only if it is the simplest
positive element in its Archimedean class. More precisely,
$\forall a\in\textnormal{No}\qquad(\exists c\in\textnormal{No}\quad
a=\omega^{c})\implies(\forall b\in\textnormal{No}\quad b\asymp a\implies
a\sqsubseteq|b|)$
Elements of the form $\omega^{a}$ are by definition positive and have the
following property:
###### Proposition 3.4 ([Gon86, Theorem 5.4]).
We have
* •
$\omega^{0}=1$
* •
$\forall a,b\in\textnormal{No}\qquad\omega^{a}\omega^{b}=\omega^{a+b}$
Thanks to this definition of the $\omega$-exponentiation, we are now ready to
present a normal form for surreal numbers which is analogous to the Cantor
normal form for ordinal numbers.
###### Definition 3.4 ([Gon86, Section 5C, page 59]).
For $\nu$ an ordinal number, $\left(r_{i}\right)_{i<\nu}$ a sequence of non-
zero real numbers and $\left(a_{i}\right)_{i<\nu}$ a decreasing sequence of
surreal numbers, we define
$\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ inductively as
follows:
* •
If $\nu=0$, then $\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}=0$
* •
If $\nu=\nu^{\prime}+1$ then
$\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}=\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+r_{\nu^{\prime}}\omega^{a_{\nu^{\prime}}}$
* •
If $\nu$ is a limit ordinal,
$\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ is defined as the
following bracket:
$\left[\left.\left\\{\left.\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+s\omega^{a_{\nu^{\prime}}}\
\vphantom{\begin{array}[]{c}\nu^{\prime}<\nu\\\
s<r_{\nu^{\prime}}\end{array}}\right|\ \begin{array}[]{c}\nu^{\prime}<\nu\\\
s<r_{\nu^{\prime}}\end{array}\right\\}\
\vphantom{\left\\{\left.\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+s\omega^{a_{\nu^{\prime}}}\
\vphantom{\begin{array}[]{c}\nu^{\prime}<\nu\\\
s>r_{\nu^{\prime}}\end{array}}\right|\ \begin{array}[]{c}\nu^{\prime}<\nu\\\
s>r_{\nu^{\prime}}\end{array}\right\\}}\right|\
\left\\{\left.\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+s\omega^{a_{\nu^{\prime}}}\
\vphantom{\begin{array}[]{c}\nu^{\prime}<\nu\\\
s>r_{\nu^{\prime}}\end{array}}\right|\ \begin{array}[]{c}\nu^{\prime}<\nu\\\
s>r_{\nu^{\prime}}\end{array}\right\\}\right]$
Note that if $0$ is seen as a limit ordinal, then both definitions are
consistent.
###### Theorem 3.5 ([Gon86, Theorem 5.6]).
Every surreal number has a unique expression of the form
$\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$. This expression will
be called its normal form.
Note that if $a$ is an ordinal number, then its normal form coincides with its
Cantor normal form. In such a sum, elements $r_{i}\omega^{a_{i}}$ will be
called the terms of the series.
###### Definition 3.5.
The length of the series in the normal form of a surreal number $x$ is denoted
$\nu(x)$.
###### Definition 3.5.
A surreal number $a$ in normal form
$a=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ is
* •
purely infinite if for all $i<\nu$, $a_{i}>0$. $\textnormal{No}_{\infty}$ will
stand for the class of purely infinite numbers.
* •
infinitesimal if for all $i<\nu$, $a_{i}<0$ (or equivalently if $a\prec 1$).
* •
appreciable if for all $i<\nu$, $a_{i}\leq 0$ (or equivalently if $a\preceq
1$).
If $\nu^{\prime}\leq\nu$ is the first ordinal such that $a_{\nu^{\prime}}\leq 0$, then
$\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ is called the
purely infinite part of $a$. Similarly, if $\nu^{\prime}\leq\nu$ is the first
ordinal such that $a_{\nu^{\prime}}<0$, $\underset{\nu^{\prime}\leq
i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ is called the infinitesimal part
of $a$.
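For instance (an elementary check): $x=\omega+1+\omega^{-1}$ is already in normal form, $x=\omega^{1}\cdot 1+\omega^{0}\cdot 1+\omega^{-1}\cdot 1$, so $\nu(x)=3$; its purely infinite part is $\omega$, its infinitesimal part is $\omega^{-1}$, and $x$ is neither purely infinite, nor appreciable, nor infinitesimal.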
###### Theorem 3.6 ([Gon86, Theorems 5.7 and 5.8]).
Operations over surreal numbers coincide with formal addition and formal
multiplication over the normal forms. More precisely,
$\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+\underset{i<\nu^{\prime}}{\overset{}{\sum}}s_{i}\omega^{b_{i}}=\underset{x\in\textnormal{No}}{\overset{}{\sum}}t_{x}\omega^{x}$
where
* •
$t_{x}=r_{i}$ if $i$ is such that $a_{i}=x$ and there is no $j$ such that
$b_{j}=x$.
* •
$t_{x}=s_{i}$ if $i$ is such that $b_{i}=x$ and there is no $j$ such that
$a_{j}=x$.
* •
$t_{x}=r_{i}+s_{j}$ if $i$ is such that $a_{i}=x$ and $j$ is such that
$b_{j}=x$
and
$\left(\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\right)\left(\underset{i<\nu^{\prime}}{\overset{}{\sum}}s_{i}\omega^{b_{i}}\right)=\underset{x\in\textnormal{No}}{\overset{}{\sum}}\left(\underset{\tiny\left\\{\left.\begin{array}[]{c}i<\nu\\\
j<\nu^{\prime}\end{array}\ \vphantom{a_{i}+b_{j}=x}\right|\
a_{i}+b_{j}=x\right\\}}{\overset{}{\sum}}r_{i}s_{j}\right)\omega^{x}$
We stated that every surreal number has a normal form. Conversely,
it is possible to recover the sign expansion from a normal form.
###### Definition 3.6 (Reduced sign expansion, Gonshor, [Gon86]).
Let $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ be a surreal
number. The reduced sign expansion of $a_{i}$, denoted $a_{i}^{\circ}$ is
inductively defined as follows:
* •
$a_{0}^{\circ}=a_{0}$
* •
For $i>0$, if $a_{i}(\delta)=-$ and if there is $j<i$ such that for all
$\gamma\leq\delta$, $a_{j}(\gamma)=a_{i}(\gamma)$, then we discard the minus
in position $\delta$ in the sign expansion of $a_{i}$.
* •
If $i>0$ is a non-limit ordinal and $(a_{i-1})_{-}$ (as a sign expansion) is a
prefix of $a_{i}$, then we discard this minus after $a_{i-1}$ if $r_{i-1}$ is
not a dyadic rational number.
More informally, $a_{i}^{\circ}$ is the sign expansion obtained when copying
$a_{i}$, omitting the minuses that have already been treated before, in another
exponent of the series. We just keep the new ones brought by $a_{i}$.
However, the latter case gives a condition where even a new minus can be
omitted.
###### Theorem 3.7 ([Gon86], Theorems 5.11 and 5.12).
For $\alpha$ an ordinal and a surreal $a$, we write $|a[:\alpha]|_{+}$ for the
(ordinal) number of pluses in $a[:\alpha]$, the prefix of length
$\alpha$ of $a$. Then,
* •
The sign expansion of $\omega^{a}$ is as follows: we start with a plus and then
for any ordinal $\alpha<|a|_{+-}$ we add $\omega^{|a[:\alpha]|_{+}+1}$ occurrences
of $a(\alpha)$ (the sign in position $\alpha$ in the signs sequence of $a$).
* •
The sign expansion of $\omega^{a}n$ is the signs sequence of $\omega^{a}$
followed by $\omega^{|a|_{+}}(n-1)$ pluses.
* •
The sign expansion of
$\omega^{a}\mathchoice{\dfrac{1}{2^{n}}}{\dfrac{1}{2^{n}}}{\frac{1}{2^{n}}}{\frac{1}{2^{n}}}$
is the sign expansion of $\omega^{a}$ followed by $\omega^{|a|_{+}}n$ minuses.
* •
The sign expansion of $\omega^{a}r$ for $r$ a positive real is the sign
expansion of $\omega^{a}$ to which we add each sign of $r$ $\omega^{|a|_{+}}$
times, except the first plus of $r$, which is omitted.
* •
The sign expansion of $\omega^{a}r$ for $r$ a negative real is the sign
expansion of $\omega^{a}(-r)$ in which we change every plus into a minus and
conversely.
* •
The sign expansion of $\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$
is the juxtaposition of the sign expansions of the
$\omega^{a_{i}^{\circ}}r_{i}$
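As a quick illustration of the first item above (an elementary check): for $a=1$ we start with a plus and, since $a(0)=+$ and $|a[:0]|_{+}=0$, we add $\omega$ further pluses, recovering the $\omega$ pluses of the ordinal $\omega=\omega^{1}$; for $a=-1$ we start with a plus and add $\omega$ minuses, which is the known sign expansion of $\omega^{-1}=1/\omega$.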
As a final note of this subsection, we give some bounds on the length of
monomials and terms.
###### Lemma 3.7 ([vE01, Lemma 4.1]).
For all surreal number $a\in\textnormal{No}$,
$\left|a\right|_{+-}\leq\left|\omega^{a}\right|_{+-}\leq\omega^{\left|a\right|_{+-}}$
###### Lemma 3.7 ([Gon86, Lemma 6.3]).
Let $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ be a surreal
number. We have for all $i<\nu$,
$\left|r_{i}\omega^{a_{i}}\right|_{+-}\leq\left|x\right|_{+-}$.
#### 3.3.4 Hahn series and surreal numbers
As a consequence of Theorems 3.5 and 3.6, the field No is in fact a Hahn series
field. More precisely,
###### Corollary 3.7.
The fields No and $\mathbb{R}((t^{\textnormal{No}}))$ are isomorphic.
###### Proof.
Sending $t^{a}$ to $\omega^{-a}$ for all surreal number $a$, we notice that
all the definitions match each other. ∎
Notice that we have of course
$\textnormal{No}=\mathbb{R}\left(\left(\textnormal{No}\right)\right)_{\textnormal{Ord}}$.
## 4 Surreal subfields
### 4.1 Subfields defined by Gonshor’s representation
Let $\textnormal{No}_{\lambda}$ denote the set of surreal numbers whose sign
sequences have length less than $\lambda$, where $\lambda$ is some ordinal. We
have of course
$\textnormal{No}=\bigcup_{\lambda\in\textnormal{On}}\textnormal{No}_{\lambda}$.
Van den Dries and Ehrlich have proved the following:
###### Theorem 4.1 ([vE01, vdDE01]).
The ordinals $\lambda$ such that $\textnormal{No}_{\lambda}$ is closed under
the various fields operations of No can be characterised as follows:
* •
$\textnormal{No}_{\lambda}$ is an additive subgroup of No iff
$\lambda=\omega^{\alpha}$ for some ordinal $\alpha$.
* •
$\textnormal{No}_{\lambda}$ is a subring of No iff
$\lambda=\omega^{\omega^{\alpha}}$ for some ordinal $\alpha$.
* •
$\textnormal{No}_{\lambda}$ is a subfield of No iff
$\omega^{\lambda}=\lambda$.
The ordinals $\lambda$ satisfying the first (respectively: the second) item are often
said to be additively (resp. multiplicatively) indecomposable but for the sake
of brevity we shall just call them additive (resp. multiplicative).
Multiplicative ordinals are exactly the ordinals $\lambda>1$ such that
$\mu\nu<\lambda$ whenever $\mu,\nu<\lambda$. The ordinals satisfying the third item
are called $\varepsilon$-numbers. The smallest $\varepsilon$-number is usually
denoted by $\varepsilon_{0}$ and is given by
$\varepsilon_{0}:=\sup\\{\omega,\omega^{\omega},\omega^{\omega^{\omega}},\dots\\}.$
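For instance (an elementary illustration of the theorem): $\textnormal{No}_{\omega}$ consists of the finite sign sequences, that is of the dyadic rationals; since $\omega=\omega^{\omega^{0}}$ it is a subring of No, but it is not a subfield, because $3\in\textnormal{No}_{\omega}$ while $1/3$, being a non-dyadic real, has length $\omega$ and thus does not belong to $\textnormal{No}_{\omega}$. The smallest $\lambda$ for which $\textnormal{No}_{\lambda}$ is a subfield is $\varepsilon_{0}$.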
###### Remark 4.1.
Since rational numbers have length at most $\omega$, we have that if $\lambda$
is multiplicative, then $\textnormal{No}_{\lambda}$ is a divisible group.
If $\lambda$ is an $\varepsilon$-number, $\textnormal{No}_{\lambda}$ is
actually more than only a field:
###### Theorem 4.2 ([vE01, vdDE01]).
Let $\lambda$ be any $\varepsilon$-number. Then $\textnormal{No}_{\lambda}$ is
a real closed field.
### 4.2 Subfields defined from Hahn’s series representation
As we will often play with exponents of formal power series considered in the
Hahn series representation, we propose to introduce the following notation:
###### Definition 4.2.
If $\lambda$ is an $\varepsilon$-number and $\Gamma$ a divisible Abelian
group, we denote
$\mathbb{R}_{\lambda}^{\Gamma}=\mathbb{R}\left(\left(\Gamma\right)\right)_{\lambda}$
As a consequence of Proposition 3.3.2 we have
###### Corollary 4.2.
$\mathbb{R}_{\lambda}^{\textnormal{No}_{\mu}}$ is a real-closed field when
$\mu$ is a multiplicative ordinal and $\lambda$ an $\varepsilon$-number.
These fields are somehow the atoms constituting the fields
$\textnormal{No}_{\lambda}$.
See 1.1
###### Remark 4.2.
The fact that if $\lambda$ is not a regular cardinal, then
$\textnormal{No}_{\lambda}\neq\mathbb{R}_{\lambda}^{\textnormal{No}_{\lambda}}$
can be seen as follows: Suppose that $\lambda$ is not a regular cardinal. This
means that we can take some strictly increasing sequence
$(\mu_{\alpha})_{\alpha<\beta}$ that is cofinal in $\lambda$ with
$\beta<\lambda$. Then $\sum_{\alpha<\beta}\omega^{-\mu_{\alpha}}$ is in
$\mathbb{R}_{\lambda}^{\textnormal{No}_{\lambda}}$ by definition, but is not
in $\textnormal{No}_{\lambda}$.
## 5 Exponentiation and logarithm
### 5.1 Gonshor’s exponentiation
The field of surreal numbers No admits an exponential function $\exp$ defined as
follows.
###### Definition 5.0 (Function $\exp$, [Gon86, page 145]).
Let $x=\left[\left.x^{\prime}\ \vphantom{x^{\prime\prime}}\right|\
x^{\prime\prime}\right]$ be the canonical representation of $x$. We define
inductively
$\exp x=\left[\left.\begin{array}[]{c}0,\exp(x^{\prime})[x-x^{\prime}]_{n},\\\
\exp(x^{\prime\prime})[x-x^{\prime\prime}]_{2n+1}\end{array}\
\vphantom{\mathchoice{\dfrac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\dfrac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}},\mathchoice{\dfrac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\dfrac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}}\right|\
\mathchoice{\dfrac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\dfrac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime})}{[x^{\prime}-x]_{2n+1}}},\mathchoice{\dfrac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\dfrac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}{\frac{\exp(x^{\prime\prime})}{[x^{\prime\prime}-x]_{2n+1}}}\right]$
where $n$ ranges in $\mathbb{N}$ and
$[x]_{n}=1+\frac{x}{1!}+\dots+\frac{x^{n}}{n!},$
with the further convention that the expressions containing terms of the form
$[y]_{2n+1}$ are to be considered only when $[y]_{2n+1}>0$.
It can be shown that the function $\exp$ is a surjective homomorphism from
$(\textnormal{No},+)$ to $(\textnormal{No}^{>0},·)$ which extends $\exp$ on
$\mathbb{R}$ and makes $(\textnormal{No},+,·,\exp)$ into an elementary
extension of $(\mathbb{R},+,·,\exp)$ (see [vdDMM94, Corollaries 2.11 and 4.6],
[vE01] and [Res93]). As $\exp$ is surjective, and from its properties, it can
be shown that it has some inverse $\ln:\textnormal{No}^{>0}\to\textnormal{No}$
(called logarithm).
###### Definition 5.0 (Functions $\ln$, $\ln_{n}$, $\exp_{n}$).
Let $\ln:\textnormal{No}^{>0}\to\textnormal{No}$ (called logarithm) be the
inverse of $\exp$. We let $\exp_{n}$ and $\ln_{n}$ be the $n$-fold iterated
compositions of $\exp$ and $\ln$ with themselves.
We recall some other basic properties of the exponential functions:
###### Theorem 5.1 ([Gon86, Theorems 10.2, 10.3 and 10.4]).
For all $r\in\mathbb{R}$ and $\varepsilon$ infinitesimal, we have
$\exp
r=\underset{k=0}{\overset{\infty}{\sum}}{\mathchoice{\dfrac{r^{k}}{k!}}{\dfrac{r^{k}}{k!}}{\frac{r^{k}}{k!}}{\frac{r^{k}}{k!}}}\quad\text{and}\quad\exp\varepsilon=\underset{k=0}{\overset{\infty}{\sum}}\mathchoice{\dfrac{\varepsilon^{k}}{k!}}{\dfrac{\varepsilon^{k}}{k!}}{\frac{\varepsilon^{k}}{k!}}{\frac{\varepsilon^{k}}{k!}}$
and
$\exp(r+\varepsilon)=\exp(r)\exp(\varepsilon)=\underset{k=0}{\overset{\infty}{\sum}}\mathchoice{\dfrac{(r+\varepsilon)^{k}}{k!}}{\dfrac{(r+\varepsilon)^{k}}{k!}}{\frac{(r+\varepsilon)^{k}}{k!}}{\frac{(r+\varepsilon)^{k}}{k!}}$
Moreover for all purely infinite number $x$,
$\exp(x+r+\varepsilon)=\exp(x)\exp(r+\varepsilon)$
###### Proposition 5.1 ([Gon86, Theorem 10.7]).
If $x$ is purely infinite, then $\exp x=\omega^{a}$ for some surreal number
$a$.
More precisely:
###### Proposition 5.1 (Function $g$, [Gon86, Theorem 10.13]).
If $x$ is purely infinite, i.e.
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ with $a_{i}>0$ for
all $i$, then
$\exp x=\omega^{\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{g(a_{i})}},$
for some function $g:\textnormal{No}^{>0}\to\textnormal{No}$. Function $g$
satisfies for all $x$,
$g(x)=\left[\left.c(x),g(x^{\prime})\ \vphantom{g(x^{\prime\prime})}\right|\
g(x^{\prime\prime})\right]$
where $c(x)$ is the unique number such that $\omega^{c(x)}$ and $x$ are in the
same Archimedean class [Gon86, Thm. 10.11] (i.e. such that
$x\asymp\omega^{c(x)}$), where $x^{\prime}$ ranges over the lower non-zero
prefixes of $x$ and $x^{\prime\prime}$ over the upper prefixes of $x$.
### 5.2 About some properties of function $g$
###### Proposition 5.1 ([Gon86, Theorem 10.14]).
If $a$ is an ordinal number then
$g(a)=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}a+1&\text{if }\lambda\leq
a<\lambda+\omega\text{ for some }\varepsilon\text{-number }\lambda\\\
a&\text{otherwise}\end{array}\right.$
Note that in the previous proposition, $a\neq 0$ since $g$ is defined only for
positive elements.
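For instance (an elementary check combining this computation of $g$ with the expression of $\exp$ on purely infinite numbers given above): since $g(1)=1$, $g(\omega)=\omega$ and $g(\varepsilon_{0})=\varepsilon_{0}+1$, we get $\exp\omega=\omega^{\omega^{g(1)}}=\omega^{\omega}$, $\exp(\omega^{\omega})=\omega^{\omega^{\omega}}$ and $\exp(\omega^{\varepsilon_{0}})=\omega^{\omega^{\varepsilon_{0}+1}}$.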
###### Proposition 5.1 ([Gon86, Theorem 10.15]).
Let $n$ be a natural number and $b$ be an ordinal. We have
$g(2^{-n}\omega^{-b})=-b+2^{-n}.$
###### Proposition 5.1 ([Gon86, Theorems 10.17, 10.19 and 10.20]).
If $b$ is a surreal number such that for some $\varepsilon$-number
$\varepsilon_{i}$, some ordinal $\alpha$ and for all natural number $n$,
$\varepsilon_{i}+n<b<\alpha<\varepsilon_{i+1}$, then $g(b)=b$. This is also
true if there is some ordinal $\alpha<\varepsilon_{0}$ such that for all
natural numbers $n$, $n\omega^{-1}<b<\alpha<\varepsilon_{0}$.
###### Proposition 5.1 ([Gon86, Theorem 10.18]).
Suppose $\varepsilon\leq b\leq\varepsilon+n$ for some $\varepsilon$-number
$\varepsilon$ and some integer $n$. In particular, the sign expansion of $b$
is the sign expansion of $\varepsilon$ followed by some sign expansion $S$.
Then, the sign expansion of $g(b)$ is the sign expansion of $\varepsilon$
followed by a $+$ and then $S$. In particular, $g(b)=b+1$.
It is possible to bound the length of $g(a)$ depending on the length of $a$.
###### Lemma 5.1 ([vE01, Lemma 5.1]).
For all $a\in\textnormal{No}$,
$\left|g(a)\right|_{+-}\leq\left|a\right|_{+-}+1$.
The function $g$ has an inverse function $h$, defined as follows
$h(b)=\left[\left.0,h(b^{\prime})\
\vphantom{h(b^{\prime\prime}),\mathchoice{\dfrac{\omega^{b}}{n}}{\dfrac{\omega^{b}}{n}}{\frac{\omega^{b}}{n}}{\frac{\omega^{b}}{n}}}\right|\
h(b^{\prime\prime}),\mathchoice{\dfrac{\omega^{b}}{n}}{\dfrac{\omega^{b}}{n}}{\frac{\omega^{b}}{n}}{\frac{\omega^{b}}{n}}\right]$
This expression is uniform (see [Gon86]) and thus does not depend on the
representation of $b$ as $\left[\left.b^{\prime}\
\vphantom{b^{\prime\prime}}\right|\ b^{\prime\prime}\right]$.
###### Corollary 5.1.
If $a$ is an ordinal number then $h(-a)=\omega^{-a-1}$.
###### Proof.
It is a direct consequence of Proposition 5.2 and the fact that $h=g^{-1}$. ∎
As for $g$, we can bound the length of $h(a)$ as a function of the length of
$a$.
###### Lemma 5.1 ([AVDDVDH19, Proposition 3.1]).
For all $a\in\textnormal{No}$ we have,
$\left|h(a)\right|_{+-}\leq\omega^{\left|a\right|_{+-}+1}$
We will also prove another lemma, Lemma 5.2, that looks like the previous
lemma and that gives a better bound in many cases, but not always. To do so we first prove
another technical lemma.
###### Lemma 5.1.
For all $c$, denote $c_{+}$ the surreal number whose signs sequence is the one
of $c$ followed by a plus. Assume $g(a)<c$ for all $a\sqsubset\omega^{c}$ such
that $0<a<\omega^{c}$. Then $g(\omega^{c})$ is $c_{+}$ if $c$ does not have a
longest prefix greater than itself, otherwise,
$g(\omega^{c})=c^{\prime\prime}$ where $c^{\prime\prime}$ is the longest
prefix of $c$ such that $c^{\prime\prime}>c$.
###### Proof.
By induction on $c$:
* •
For $c=0$, $g(\omega^{0})=g(1)=1$ whose signs sequence is indeed the one of
$0$ followed by a plus.
* •
Assume the property for $b\sqsubset c$. Assume $g(a^{\prime})<c$ for all
$a^{\prime}\sqsubset\omega^{c}$ such that $0<a^{\prime}<\omega^{c}$. Then,
$g(\omega^{c})=\left[\left.c\ \vphantom{g(a^{\prime\prime})}\right|\
g(a^{\prime\prime})\right]$
where $a^{\prime\prime}$ ranges over the elements such that
$a^{\prime\prime}\sqsubset\omega^{c}$ and $a^{\prime\prime}>\omega^{c}$.
* ➢
First case: $c$ has a longest prefix $c_{0}$ such that $c_{0}>c$. Then, for
all $a^{\prime\prime}$ such that $a^{\prime\prime}\sqsubset\omega^{c}$ and
$a^{\prime\prime}>\omega^{c}$, $a^{\prime\prime}\succeq\omega^{c_{0}}$, hence
$g(a^{\prime\prime})>c_{0}$. Since $c<c_{0}<g(a^{\prime\prime})$, the
simplicity property ensures $g(\omega^{c})\sqsubseteq c_{0}\sqsubset c$. Then
$g(\omega^{c})$ is some prefix $c^{\prime\prime}$ of $c$, greater than $c$. We
look at $\omega^{c^{\prime\prime}}$. Notice that for all $b\sqsubset
c^{\prime\prime}$ is such that $0<b<c^{\prime\prime}$, $b\sqsubset c$ and
$b<c$, hence $g(b)<c<c^{\prime\prime}$. Therefore we can apply the induction
hypothesis to $c^{\prime\prime}$ and $g(\omega^{c^{\prime\prime}})$ is
$c^{\prime\prime}_{+}$ if the signs sequence of $c$ does not end with only
minuses, otherwise, $g(\omega^{c^{\prime\prime}})$ is the last (strict) prefix
of $c^{\prime\prime}$ greater than $c^{\prime\prime}$.
* $\because$
First subcase: $g(\omega^{c^{\prime\prime}})=c^{\prime\prime}_{+}$. If there
is some $b$ such that $c^{\prime\prime}\sqsubset b\sqsubset c$ and $b>c$, then
$g(\omega^{b})$ is a prefix of $g(\omega^{c})=c^{\prime\prime}$. But,
$c^{\prime\prime}=g(\omega^{c})<g(\omega^{b})<g(\omega^{c^{\prime\prime}})=c^{\prime\prime}_{+}$.
Then $c^{\prime\prime}$ must be a strict prefix of $g(\omega^{b})$ which is a
contradiction. Then $c^{\prime\prime}$ is indeed the last strict prefix of $c$
greater than $c$.
* $\because$
Second subcase: $g(\omega^{c^{\prime\prime}})$ is the last (strict) prefix of
$c^{\prime\prime}$ greater than $c^{\prime\prime}$. If there is some $b$ such
that $c^{\prime\prime}\sqsubset b\sqsubset c$ and $b>c$, then $g(\omega^{b})$
is a prefix of $g(\omega^{c})=c^{\prime\prime}$. Since
$g(\omega^{b})<g(\omega^{c^{\prime\prime}})$, $g(\omega^{b})$ is a prefix of $c^{\prime\prime}$ smaller
than $c^{\prime\prime}$. But this contradicts the fact that
$g(\omega^{b})>g(\omega^{c})=c^{\prime\prime}$. Therefore, $c^{\prime\prime}$
is the last prefix of $c$ greater than $c$.
* ➢
Second case: $c$ does not have a longest prefix greater than $c$. Then,
$g(\omega^{c})=\left[\left.c\ \vphantom{g(\omega^{c^{\prime\prime}})}\right|\
g(\omega^{c^{\prime\prime}})\right]$
where $c^{\prime\prime}$ ranges over the prefixes of $c$ greater than $c$. Let
$d\sqsubset c$ such that $d>c$. Then there is $d_{1}$ of minimal length such
that $d\sqsubset d_{1}\sqsubset c$ and $d_{1}>c$. By minimality of $d_{1}$,
$d$ is the longest prefix of $d_{1}$ greater than $d_{1}$. As in the first
case, we can apply the induction hypothesis on $d_{1}$ and get
$g(\omega^{d_{1}})=d$. Therefore, again by induction hypothesis,
$g(\omega^{c})=\left[\left.c\
\vphantom{c^{\prime\prime},c^{\prime\prime}_{+}}\right|\
c^{\prime\prime},c^{\prime\prime}_{+}\right]=\left[\left.c\
\vphantom{c^{\prime\prime}}\right|\ c^{\prime\prime}\right]$
where $c^{\prime\prime}$ ranges over the prefixes of $c$ greater than $c$. We
finally conclude that $g(\omega^{c})=c_{+}$.
∎
In the following we denote $\oplus$ the usual addition over the ordinal
numbers and $\otimes$ the usual product over ordinal numbers.
###### Lemma 5.1.
For all $a>0$,
$\left|a\right|_{+-}\leq\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)$.
###### Proof.
We proceed by induction on $\left|a\right|_{+-}$.
* •
For $a=1$, $g(a)=1$ and we indeed have $1\leq\omega^{2}$.
* •
Assume the property for all $b\sqsubset a$. Let $c$ be such that
$\omega^{c}\asymp a$. Then
$g(a)=\left[\left.c,g(a^{\prime})\ \vphantom{g(a^{\prime\prime})}\right|\
g(a^{\prime\prime})\right]$
We split into two cases:
* ➢
If there is some $a_{0}\sqsubset a$ such that $a_{0}<a$ and $g(a_{0})\geq c$
then
$g(a)=\left[\left.g(a^{\prime})\ \vphantom{g(a^{\prime\prime})}\right|\
g(a^{\prime\prime})\right]$
and if $S$ stands for the signs sequence such that $a$ is the signs sequence of
$a_{0}$ followed by $S$, then $g(a)$ is the signs sequence of $g(a_{0})$ followed by
$S$. Let $\alpha$ be the length of $S$. Therefore using Theorem 3.7,
$\left|\omega^{g(a)}\right|_{+-}\geq\left|\omega^{g(a_{0})}\right|_{+-}\oplus(\omega\otimes\alpha)$
and then,
$\displaystyle\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)$
$\displaystyle\geq\left|\omega^{g(a_{0})}\right|_{+-}\otimes\omega\oplus\left|\omega^{g(a_{0})}\right|_{+-}\oplus\alpha$
$\displaystyle\geq\left|\omega^{g(a_{0})}\right|_{+-}\otimes(\omega+1)\oplus\alpha$
and by induction hypothesis on $a_{0}$,
$\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)\geq\left|a_{0}\right|_{+-}\oplus\alpha=\left|a\right|_{+-}$
* ➢
Otherwise, for any $a_{0}\sqsubset a$ such that $a_{0}<a$, $g(a_{0})<c$.
Therefore,
$g(a)=\left[\left.c\ \vphantom{g(a^{\prime\prime})}\right|\
g(a^{\prime\prime})\right]$
Also, since $a>0$, we can write the signs sequence of $a$ as the one of
$\omega^{c}$ followed by some signs sequence $S$. If $S$ contains a plus, then
there is a prefix of $a$, $a_{0}$ such that $a_{0}<a$ and still
$a_{0}\asymp\omega^{c}$ and then $g(a_{0})>c$, which is not the case by
assumption. Then, $S$ is a sequence of minuses. If $S$ is not the empty
sequence, let $\alpha$ be the length of $S$. Then the signs sequence of $g(a)$
is the one of $g(\omega^{c})$ followed by $S$. Hence,
$\left|\omega^{g(a)}\right|_{+-}\geq\left|\omega^{g(\omega^{c})}\right|_{+-}\oplus(\omega\otimes\alpha)$
As in the previous case, but using the induction hypothesis on $\omega^{c}$,
$\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)\geq\left|\omega^{c}\right|_{+-}\oplus\alpha=\left|a\right|_{+-}$
Now if $S$ is the empty sequence, $a=\omega^{c}$. Applying Lemma 5.2 to $c$ we
get that either $g(a)=c_{+}$ or $g(a)$ is the last prefix of $c$ greater than
$c$. If the first case occurs then $a$ is a prefix of $\omega^{g(a)}$ and then
$\left|\omega^{g(a)}\right|_{+-}\geq\left|a\right|_{+-}$. Now assume that the
second case occurs. Then for any $b$ such that $g(a)\sqsubset b\sqsubset c$,
$b<c$. If for all $b^{\prime}\sqsubset b$ such that $b^{\prime}<b$,
$g(b^{\prime})<b$, then Lemma 5.2 applies. Since $b$ has a last prefix greater
than itself, $g(a)$, $g(\omega^{b})=g(a)$ and we reach a contradiction since
$b<c$ and therefore $\omega^{b}<\omega^{c}=a$. Then for all $b$ such that
$g(a)\sqsubset b\sqsubset c$, there is some $b^{\prime}\sqsubset b$,
$b^{\prime}<b$ such that $g(\omega^{b^{\prime}})>b$. Since the signs sequence
of $b$ consists of the one of $g(a)$, a minus, and then a bunch of pluses, and
since $g(\omega^{b^{\prime}})$ must also be a prefix of $c$,
$g(\omega^{b^{\prime}})\sqsubseteq g(a)\sqsubset b$. Therefore to ensure
$g(\omega^{b^{\prime}})>b$, we must have $g(\omega^{b^{\prime}})\geq g(a)$. Since
$\omega^{b^{\prime}}$ is a prefix of $a$ lower than $a$, it is a
contradiction. Therefore, there is no $b$ such that $g(a)\sqsubset b\sqsubset
$c$ and $b<c$, and finally, the signs sequence of $c$ is the one of $g(a)$
followed by a minus. In particular, $g(a)$ and $c$ have the same amount of
pluses, say $\alpha$. Then, using Theorem 3.7,
$\displaystyle\left|a\right|_{+-}$
$\displaystyle=\left|\omega^{g(a)}\right|_{+-}\oplus\omega^{\alpha+1}$
$\displaystyle\leq\left|\omega^{g(a)}\right|_{+-}\oplus\left|\omega^{g(a)}\right|_{+-}\otimes\omega=\left|\omega^{g(a)}\right|_{+-}\otimes\omega$
$\displaystyle\leq\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)$
The induction principle concludes.
∎
###### Corollary 5.1.
For all $a>0$ and for all multiplicative ordinals $\mu$ greater than $\omega$, if
$\left|a\right|_{+-}\geq\mu$, then $\left|\omega^{g(a)}\right|_{+-}\geq\mu$.
###### Proof.
Assume that $\left|\omega^{g(a)}\right|_{+-}<\mu$. Then using Lemma 5.2,
$\mu\leq\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)$. Since $\mu$ is a
multiplicative ordinal greater than $\omega$, we have $\omega+1<\mu$. $\mu$ is
a multiplicative ordinal, hence
$\left|\omega^{g(a)}\right|_{+-}\otimes(\omega+1)<\mu$ and we reach a
contradiction. ∎
### 5.3 Gonshor’s logarithm
We already know that a logarithm exists over the positive surreal numbers.
Nevertheless we were very elliptical and we now go deeper into it.
###### Definition 5.1.
For a surreal number $a$ in canonical representation
$a=\left[\left.a^{\prime}\ \vphantom{a^{\prime\prime}}\right|\
a^{\prime\prime}\right]$, we define
$\ln\omega^{a}=\left[\left.\begin{array}[]{c}\left\\{\left.\ln\omega^{a^{\prime}}+n\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime}\sqsubset a\\\
a^{\prime}<a\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime}\sqsubset a\\\ a^{\prime}<a\end{array}\right\\}\\\
\left\\{\left.\ln\omega^{a^{\prime\prime}}-\omega^{\frac{a^{\prime\prime}-a}{n}}\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime\prime}\sqsubset a\\\
a<a^{\prime\prime}\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime\prime}\sqsubset a\\\
a<a^{\prime\prime}\end{array}\right\\}\end{array}\\!\\!\
\vphantom{\\!\\!\begin{array}[]{c}\left\\{\left.\ln\omega^{a^{\prime\prime}}-n\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime\prime}\sqsubset a\\\
a<a^{\prime\prime}\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime\prime}\sqsubset a\\\ a<a^{\prime\prime}\end{array}\right\\}\\\
\left\\{\left.\ln\omega^{a^{\prime}}+\omega^{\frac{a-a^{\prime}}{n}}\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime}\sqsubset a\\\
a^{\prime}<a\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime}\sqsubset a\\\ a^{\prime}<a\end{array}\right\\}\end{array}}\right|\
\\!\\!\begin{array}[]{c}\left\\{\left.\ln\omega^{a^{\prime\prime}}-n\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime\prime}\sqsubset a\\\
a<a^{\prime\prime}\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime\prime}\sqsubset a\\\ a<a^{\prime\prime}\end{array}\right\\}\\\
\left\\{\left.\ln\omega^{a^{\prime}}+\omega^{\frac{a-a^{\prime}}{n}}\
\vphantom{\begin{array}[]{c}n\in\mathbb{N}\\\ a^{\prime}\sqsubset a\\\
a^{\prime}<a\end{array}}\right|\ \begin{array}[]{c}n\in\mathbb{N}\\\
a^{\prime}\sqsubset a\\\ a^{\prime}<a\end{array}\right\\}\end{array}\right]$
As often with this kind of definition, the uniformity property holds.
###### Lemma 5.1 ([Gon86, Lemma 10.1]).
The definition of $\ln\omega^{a}$ does not require $a$ in canonical
representation.
###### Proposition 5.1 ([Gon86, Theorem 10.8]).
For all surreal number $a$, $\ln\omega^{a}$ is purely infinite.
Purely infinite numbers are a special case in the definition of the
exponential function. We can state that the previous definition of $\ln$ is
consistent with the one of $\exp$.
###### Theorem 5.2 ([Gon86, Theorem 10.9]).
For all surreal number $a$,
$\exp\ln\omega^{a}=\omega^{a}$
###### Theorem 5.3 ([Gon86, Theorem 10.12]).
For all surreal number $a$,
$\ln\omega^{\omega^{a}}=\omega^{h(a)}$
The above theorem is not actually stated like this in [Gon86] but this
statement follows from the proof there.
As a consequence of Theorems 5.2 and 5.3 and Propositions 5.3 and 5.1, we have
###### Corollary 5.3.
For all surreal number
$a=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$, we have
$\ln\omega^{a}=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{h(a_{i})}$
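For instance (an elementary check): writing $\omega=\omega^{\omega^{0}}$, Theorem 5.3 gives $\ln\omega=\omega^{h(0)}$, and $h(0)=\omega^{-1}$ by the corollary $h(-a)=\omega^{-a-1}$ above, so $\ln\omega=\omega^{\omega^{-1}}$; iterating, $\ln_{n}\omega=\omega^{\omega^{-n}}=\omega^{1/\omega^{n}}$, which is the formula recalled for log-atomic numbers in Section 6.1.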
Finally, since for appreciable numbers $\exp$ is defined by its usual series,
$\ln(1+x)$ is also defined by its usual series when $x$ is infinitesimal. More
precisely,
###### Definition 5.3.
For $x$ an infinitesimal,
$\ln(1+x)=\underset{i=1}{\overset{\infty}{\sum}}\frac{(-1)^{i-1}x^{i}}{i}$
And thanks to Theorem 5.1,
###### Corollary 5.3.
Let $a=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ be a positive
surreal number. Then
$\ln a=\ln\omega^{a_{0}}+\ln r_{0}+\ln\left(1+\underset{1\leq
i<\nu}{\overset{}{\sum}}\frac{r_{i}}{r_{0}}\omega^{a_{i}-a_{0}}\right)$
where the last term is defined in Definition 5.3.
### 5.4 Stability of $\textnormal{No}_{\lambda}$ by exponential and logarithm
We first recall some results by van den Dries and Ehrlich.
###### Lemma 5.3 ([vE01, Lemmas 5.2, 5.3 and 5.4]).
For all surreal number $a\in\textnormal{No}$,
* •
$\left|\exp a\right|_{+-}\leq\omega^{\omega^{2\left|a\right|_{+-}\oplus 3}}$
* •
$\left|\ln\omega^{a}\right|_{+-}\leq\omega^{4\omega\left|a\right|_{+-}\left|a\right|_{+-}}$
* •
$\left|\ln a\right|_{+-}\leq\omega^{\omega^{3\left|a\right|_{+-}\oplus 3}}$
###### Corollary 5.3 ([vE01, Corollary 5.5]).
For $\lambda$ an $\varepsilon$-number, $\textnormal{No}_{\lambda}$ is stable
under $\exp$ and $\ln$.
We have
###### Theorem 5.4 ([BG22, Theorem 1.3]).
The following are equivalent:
* •
$\textnormal{No}_{\lambda}$ is a subfield of No stable under $\exp$ and $\ln$
* •
$\textnormal{No}_{\lambda}$ is a subfield of No
* •
$\lambda$ is some $\varepsilon$-number.
### 5.5 A hierarchy of subfields of No stable by exponential and logarithm
In this subsection we recall our previous work on a hierarchy of surreal
subfields stable under exponential and logarithm.
We start with Theorem 1.2, repeated here for readability:
See 1.2
This result is actually a consequence of the following more general proposition.
###### Proposition 5.4 ([BG22, Proposition 5.1]).
Let $\lambda$ be an $\varepsilon$-number and $\left(\Gamma_{i}\right)_{i\in
I}$ be a family of Abelian subgroups of No. Then
$\mathbb{R}_{\lambda}^{\left(\Gamma_{i}\right)_{i\in I}}$ is stable under
$\exp$ and $\ln$ if and only if
$\underset{i\in I}{\overset{}{\bigcup}}\Gamma_{i}=\underset{i\in
I}{\overset{}{\bigcup}}\mathbb{R}_{\lambda}^{g\left(\left(\Gamma_{i}\right)^{*}_{+}\right)}$
Note that a consequence of Proposition 5.5 is also the following:
###### Corollary 5.4 ([BG22, Corollary 5.2]).
Let $\lambda$ be an $\varepsilon$-number and $\Gamma$ be an abelian subgroup
of No. Then $\mathbb{R}_{\lambda}^{\Gamma}$ is stable under $\exp$ and $\ln$
if and only if $\Gamma=\mathbb{R}_{\lambda}^{g\left(\Gamma^{*}_{+}\right)}$.
This result is quite similar to Theorem 1.2, but in the very particular
case where
$\underset{G\in\Gamma^{\uparrow\lambda}}{\overset{}{\bigcup}}G=\Gamma$. This
applies for instance when $\Gamma=\\{0\\}$, in which case we get
$\mathbb{R}_{\lambda}^{\Gamma}=\mathbb{R}$. If $\lambda$ is a regular cardinal,
we get another example by considering
$\mathbb{R}_{\lambda}^{\Gamma}=\Gamma=\textnormal{No}_{\lambda}$.
Theorem 1.2 enables us to consider many fields stable under exponential
and logarithm, and allowed us to prove that $\textnormal{No}_{\lambda}$ can be
expressed as a strictly increasing hierarchy of fields stable under $\exp$ and
$\ln$.
See 1.3
See 1.4
## 6 The class of log-atomic numbers, derivation and anti-derivation
### 6.1 Log-atomic numbers
We now introduce the concept of log-atomic numbers. Log-atomic numbers were
first introduced by Schmeling in [Sch01, page 30] in the context of transseries.
Such numbers are basically the numbers all of whose iterated logarithms are
series of length $1$.
###### Definition 6.0 (Log-atomic).
A positive surreal number $x\in\textnormal{No}^{*}_{+}$ is said to be log-atomic iff
for all $n\in\mathbb{N}$, there is a surreal number $a_{n}$ such that
$\ln_{n}x=\omega^{a_{n}}$. We denote $\mathbb{L}$ the class of log-atomic
numbers.
For instance, $\omega$ is a log-atomic number and we can check that for all
$n\in\mathbb{N}$,
$\ln_{n}\omega=\omega^{\frac{1}{\omega^{n}}}$.
Log-atomic numbers are the numbers that we cannot divide into simpler numbers
using exponential and logarithm; they are the fundamental blocks we end up
with when writing $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$,
then each $\omega^{a_{i}}$ as $\omega^{a_{i}}=\exp x_{i}$ with $x_{i}$ a
purely infinite number, and then doing the same thing with each of the
$x_{i}$s. The use of the word “simpler” is not innocent. Indeed, log-atomic
numbers are also the simplest elements for an equivalence relation
introduced by Berarducci and Mantova [BM18b].
###### Definition 6.0 ([BM18b, Definition 5.2]).
Let $x,y$ be two positive infinite surreal numbers. We write
* •
$x\asymp^{L}y$ iff there are some natural numbers $n,k$ such that
$\exp_{n}\left(\frac{1}{k}\ln_{n}y\right)\leq x\leq\exp_{n}\left(k\ln_{n}y\right)$
Equivalently, we ask that there is a natural number $n$ such that
$\ln_{n}x\asymp\ln_{n}y$. For such $n$ we notice that
$\ln_{n+1}x\sim\ln_{n+1}y$.
* •
$x\prec^{L}y$ iff for all natural numbers $n$ and $k$,
$x<\exp_{n}\left(\frac{1}{k}\ln_{n}y\right)$
Equivalently, we ask that for all $n\in\mathbb{N}$, $\ln_{n}x\prec\ln_{n}y$.
* •
$x\preceq^{L}y$ iff there are natural numbers $n$ and $k$ such that
$x\leq\exp_{n}\left(k\ln_{n}y\right)$
Equivalently, we ask that for some $n\in\mathbb{N}$,
$\ln_{n}x\preceq\ln_{n}y$.
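For instance (our own illustration), $\omega\asymp^{L}\omega^{2}$, since $\ln\omega^{2}=2\ln\omega\asymp\ln\omega$ (take $n=1$), while $\omega\prec^{L}\exp\omega$: using $\ln_{n}\omega=\omega^{\frac{1}{\omega^{n}}}$ noted above, we have $\ln_{n}\omega\prec\ln_{n-1}\omega=\ln_{n}\exp\omega$ for every $n\geq 1$, and $\omega\prec\exp\omega$ for $n=0$.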
Log-atomic numbers are closely related to this equivalence relation since they
are representatives of the equivalence classes.
###### Proposition 6.0 ([BM18b, Propositions 5.6 and 5.8]).
For every positive infinite $x$ there is a unique log-atomic number
$y\in\mathbb{L}$ such that $y\sqsubseteq x$ and such that $y\asymp^{L}x$. In
particular, if $x,y\in\mathbb{L}$ with $x<y$ then $x\prec^{L}y$.
This proposition shows in particular that not only are log-atomic numbers
representatives of the equivalence classes of $\asymp^{L}$, they are also the
simplest elements (i.e. the shortest in terms of length) in their respective
equivalence classes. This makes them a canonical class of representatives.
As we can parametrize additive ordinals, multiplicative ordinals or even
$\varepsilon$-numbers (for which a generalization for surreal numbers exists
in Gonshor’s book [Gon86]), we can parametrize log-atomic numbers by an
increasing function $\lambda_{\cdot}$. A first conjecture was to consider the
$\kappa$-numbers, which are defined by Kuhlmann and Matusinski as follows:
###### Definition 6.0 ([KM14, Definition 3.1]).
Let $x$ be a surreal number and write it in canonical representation as
$x=\left[\left.x^{\prime}\ \vphantom{x^{\prime\prime}}\right|\
x^{\prime\prime}\right]$. Then we define
$\kappa_{x}=\left[\left.\mathbb{R},\exp_{n}\kappa_{x^{\prime}}\
\vphantom{\ln_{n}\kappa_{x^{\prime\prime}}}\right|\
\ln_{n}\kappa_{x^{\prime\prime}}\right]$
Intuitively, $x<y$ iff every iterated exponential of $\kappa_{x}$ is less than
$\kappa_{y}$ and we try to build them as simple as possible. As an example, it
is quite easy to see that $\kappa_{0}=\omega$,
$\kappa_{-1}=\omega^{\omega^{-\omega}}$ and $\kappa_{1}=\varepsilon_{0}$. It
was conjectured that $\mathbb{L}$ consists of the $\kappa$-numbers and their
iterated exponentials and logarithms. As shown by Berarducci and Mantova, it
turns out that this is not true. They then suggested a more general map, which
is the following:
###### Definition 6.0 ([BM18b, Definition 5.12]).
Let $x$ be a surreal number and write it in canonical representation
$x=\left[\left.x^{\prime}\ \vphantom{x^{\prime\prime}}\right|\
x^{\prime\prime}\right]$. Then we define
$\lambda_{x}=\left[\left.\mathbb{R},\exp_{n}\left(k\ln_{n}\lambda_{x^{\prime}}\right)\ \right|\ \exp_{n}\left(\frac{1}{k}\ln_{n}\lambda_{x^{\prime\prime}}\right)\right]$
where $n,k\in\mathbb{N}^{*}$.
###### Proposition 6.0 ([BM18b, Proposition 5.13 and Corollary 5.15]).
The function $x\mapsto\lambda_{x}$ is well defined, increasing, satisfies the
uniformity property and if $x<y$ then $\lambda_{x}\prec^{L}\lambda_{y}$.
###### Proposition 6.0 ([BM18b, Proposition 5.16]).
For every $x\in\textnormal{No}$ with $x>\mathbb{R}$ there is a unique
$y\in\textnormal{No}$ such that $x\asymp^{L}\lambda_{y}$ and
$\lambda_{y}\sqsubseteq x$. In particular, $\lambda_{y}$ is the simplest
number in its equivalence class for $\asymp^{L}$. As a consequence,
$\lambda_{\textnormal{No}}=\mathbb{L}$.
Moreover, the $\lambda$ map behaves very nicely with exponential and
logarithm.
###### Proposition 6.0 ([AVDDVDH19, Proposition 2.5]).
For every surreal number $x$,
$\exp\lambda_{x}=\lambda_{x+1}\qquad\text{and}\qquad\ln\lambda_{x}=\lambda_{x-1}$
###### Lemma 6.0 ([AVDDVDH19, Lemma 2.6]).
For every ordinal $\alpha$, $\lambda_{-\alpha}=\omega^{\omega^{-\alpha}}$.
###### Lemma 6.0 ([AVDDVDH19, Aschenbrenner, van den Dries and van der
Hoeven, Corollary 2.9]).
For every ordinal number $\alpha$,
$\kappa_{-\alpha}=\lambda_{-\omega\otimes\alpha}=\omega^{\omega^{-\omega\otimes\alpha}}$
### 6.2 Nested truncation rank
#### 6.2.1 Definition
Log-atomic numbers are the base case (up to minor changes) of a notion of rank
over surreal numbers, the nested truncation rank. As expected, it is based on
a well partial order, which has been defined by Berarducci and Mantova
as follows:
###### Definition 6.0 ([BM18b, Definition 4.3]).
For all natural number $n\in\mathbb{N}$, we define the relation
$\trianglelefteq_{n}$ as follows:
* •
We write $y\trianglelefteq_{0}x$ if and only if
$y=\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ and
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ with
$\nu^{\prime}\leq\nu$. We say that $y$ is a truncation of $x$.
* •
Let $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$. Since
$\omega^{\textnormal{No}}=\exp(\textnormal{No}_{\infty})$, we can write
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\exp(x_{i})$
where $\exp(x_{i})=\omega^{a_{i}}$. For a surreal number $y$, we say
$y\trianglelefteq_{n+1}x$ if there is $\nu^{\prime}<\nu$ and
$y^{\prime}\trianglelefteq_{n}x_{\nu^{\prime}}$ such that
$y=\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\exp(x_{i})+\operatorname{sign}(r_{\nu^{\prime}})\exp
y^{\prime}$
We say that $y$ is a nested truncation of $x$.
We also write $y\trianglelefteq x$ if there is some natural number $n$ such
that $y\trianglelefteq_{n}x$. We also introduce the corresponding strict
relations $\triangleleft_{n}$ and $\triangleleft$.
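To illustrate the definition (our own example), take $x=\exp\left(\omega+\omega^{\frac{1}{2}}\right)$, a single term with $\nu=1$, $r_{0}=1$ and $x_{0}=\omega+\omega^{\frac{1}{2}}$. Since $\omega\trianglelefteq_{0}\omega+\omega^{\frac{1}{2}}$, taking $\nu^{\prime}=0$ and $y^{\prime}=\omega$ gives
$\exp(\omega)\triangleleft_{1}\exp\left(\omega+\omega^{\frac{1}{2}}\right)$
that is, $\exp(\omega)$ is a (strict) nested truncation of $\exp\left(\omega+\omega^{\frac{1}{2}}\right)$.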
###### Definition 6.0 (Nested truncation rank [BM18b, Definition 4.27]).
The nested truncation rank of $x\in\textnormal{No}^{*}$ is defined by
$\operatorname{NR}(x)=\sup\left\\{\left.\operatorname{NR}y+1\
\vphantom{y\triangleleft x}\right|\ y\triangleleft x\right\\}$
By convention, we also set $\operatorname{NR}(0)=0$.
#### 6.2.2 Properties
We now investigate some properties of the nested truncation rank. More
precisely, we provide compatibility properties with the operations over
surreal numbers and bounds on some particular nested truncation ranks. First
of all, the nested truncation rank is unaffected by the exponential.
###### Proposition 6.0 ([BM18b, Proposition 4.28]).
If $\gamma\in\textnormal{No}_{\infty}$, then
$\operatorname{NR}(\pm\exp\gamma)=\operatorname{NR}(\gamma)$
###### Corollary 6.0.
For all $a\in\textnormal{No}^{*}$,
$\operatorname{NR}(a)=\operatorname{NR}\left(-a\right)$
###### Proof.
Without loss of generality, we assume that $a>0$. Then
$\operatorname{NR}\left(a\right)=\operatorname{NR}\left(\ln a\right)=\operatorname{NR}(-\exp\ln a)=\operatorname{NR}\left(-a\right)$
where the first two equalities follow from Proposition 6.2.2.
∎
###### Corollary 6.0.
For all $a\in\textnormal{No}^{*}$,
$\operatorname{NR}(a)=\operatorname{NR}\left(\frac{1}{a}\right)$
###### Proof.
$\operatorname{NR}\left(\frac{1}{a}\right)=\operatorname{NR}\left(\ln\frac{1}{a}\right)=\operatorname{NR}(-\ln a)=\operatorname{NR}\left(\ln a\right)=\operatorname{NR}(a)$
where the first and last equalities follow from Proposition 6.2.2 and the third
from Corollary 6.2.2.
∎
###### Lemma 6.0.
For all $x\in\textnormal{No}$, $\operatorname{NR}(x)=0$ iff either
$x\in\mathbb{R}$ or $x=\pm\lambda^{\pm 1}$ for some log-atomic number
$\lambda$.
###### Proof.
* $(\Leftarrow)$
Note that if $x\in\mathbb{R}$ then there is no $y\in\textnormal{No}$ such that
$y\triangleleft x$. Therefore $\operatorname{NR}(x)=0$. Now assume that there
is some $x=\pm\lambda^{\pm 1}$ with $\lambda\in\mathbb{L}$ such that
$\operatorname{NR}(x)\neq 0$. Therefore there is some $y\in\textnormal{No}$
such that $y\triangleleft x$. Let $n\in\mathbb{N}$ minimal such that there is
$y\in\textnormal{No}$ and $\lambda\in\mathbb{L}$ such that
$y\triangleleft_{n}\pm\lambda^{\pm 1}$. Note that since $\pm\lambda^{\pm 1}$
is a term, $n>0$. Then $y=\pm\exp(\pm y^{\prime})$ with
$y^{\prime}\triangleleft_{n-1}\ln\lambda\in\mathbb{L}$. But this contradicts
the minimality of $n$. Hence, for all $\lambda\in\mathbb{L}$,
$\operatorname{NR}\left(\pm\lambda^{\pm 1}\right)=0$.
* $(\Rightarrow)$
Assume $\operatorname{NR}(x)=0$ and $x$ is not a real number. If $x$ is not a
term, then there is $y\triangleleft_{0}x$ and in particular
$\operatorname{NR}(x)\geq 1$, which is impossible. Therefore there is some
$r\in\mathbb{R}^{*}$ and some $x_{1}\in\mathbb{J}$ such that $x=r\exp(x_{1})$.
If $r\neq\pm 1$ then $\operatorname{sign}(x)\exp(x_{1})\triangleleft x$,
which is again impossible. Hence, $x=\pm\exp(x_{1})$. Proposition 6.2.2 ensures
that $\operatorname{NR}(x_{1})=0$. We then can apply the same work to $x_{1}$
so that there is some $x_{2}\in\mathbb{J}$ such that $x_{1}=\pm\exp(x_{2})$.
By induction, we can always define $x_{n}=\pm\exp(x_{n+1})$ with
$x_{n+1}\in\mathbb{J}$. For $n\geq 1$ we have $x_{n}\in\mathbb{J}$, therefore
$x_{n+1}>0$. In particular
$\forall n\geq 2\qquad x_{n}=\exp(x_{n+1})$
So, for all $n\in\mathbb{N}$, $\ln_{n}x_{2}$ is a monomial, which means that
$x_{2}\in\mathbb{L}$. We also have
$x=\pm\exp\left(\pm\exp x_{2}\right)=\pm\left(\exp_{2}(x_{2})\right)^{\pm 1}$
Since $\exp_{2}x_{2}\in\mathbb{L}$, we have the expected result.
∎
###### Lemma 6.0.
Let $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ and
$r\in\mathbb{R}^{*},a\in\textnormal{No}$ such that for all $i<\nu$,
$r\omega^{a}\prec\omega^{a_{i}}$. Then
$\operatorname{NR}(x+r\omega^{a})=\operatorname{NR}(x)\oplus
1\oplus\operatorname{NR}(\omega^{a})\oplus\mathds{1}_{r\neq\pm 1}$
where the $\oplus$ is the usual sum on ordinal numbers.
###### Proof.
Let $y\triangleleft x+r\omega^{a}$. Then $y\trianglelefteq x$ or
$y=x+\operatorname{sign}(r)\exp(\delta)$ with
$\delta\triangleleft\ln\omega^{a}$ or, if $r\neq\pm 1$,
$y=x+\operatorname{sign}(r)\omega^{a}$. Let
$A=\left\\{\left.y\ \vphantom{y\trianglelefteq x}\right|\ y\trianglelefteq
x\right\\}\qquad\text{and}\qquad
B=\left\\{\left.x+\operatorname{sign}(r)\exp(\delta)\
\vphantom{\delta\triangleleft\ln\omega^{a}}\right|\
\delta\triangleleft\ln\omega^{a}\right\\}$
and $C=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\varnothing&r=\pm 1\\\
x+\operatorname{sign}(r)\omega^{a}&r\neq\pm 1\end{array}\right.$
One can easily see that
$\forall y\in A\quad\forall y^{\prime}\in B\quad\forall y^{\prime\prime}\in
C\qquad y\triangleleft y^{\prime}\wedge y\triangleleft y^{\prime\prime}\wedge
y^{\prime}\triangleleft y^{\prime\prime}$
We now proceed by induction on $\operatorname{NR}(\omega^{a})$.
* •
If $\operatorname{NR}(\omega^{a})=0$, using Lemma 6.2.2, either
$\omega^{a}=\pm\lambda^{\pm 1}$ for some log-atomic number $\lambda$ or $a=0$.
In both cases, there is no $\delta\triangleleft\ln\omega^{a}$.
| $\operatorname{NR}(x+r\omega^{a})$ | $\ =\ $ | $\sup\left\\{\left.\operatorname{NR}(y)+1\ \vphantom{y\in A\cup C}\right|\ y\in A\cup C\right\\}$
---|---|---|---
| | $\ =\ $ | $\sup\left(\left\\{\left.\underbrace{\operatorname{NR}(y)+1}_{\leq\operatorname{NR}(x)}\ \vphantom{y\triangleleft x}\right|\ y\triangleleft x\right\\}\right.$
| | | $\left.\qquad\vphantom{\underbrace{\operatorname{NR}(y)+1}_{\leq\operatorname{NR}(x)}}\cup\\{\operatorname{NR}(x)+1\\}\cup\left\\{\left.\operatorname{NR}(y)+1\ \vphantom{y\in C}\right|\ y\in C\right\\}\right)$
| | $\ =\ $ | $\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\operatorname{NR}(x)+1&r=\pm 1\\\ \operatorname{NR}(x+\operatorname{sign}(r)\omega^{a})&r\neq\pm 1\end{array}\right.$
| | $\ =\ $ | $\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\operatorname{NR}(x)+1&r=\pm 1\\\ \operatorname{NR}(x)+2&r\neq\pm 1\end{array}\right.$
| $\operatorname{NR}(x+r\omega^{a})$ | $\ =\ $ | $\operatorname{NR}(x)+1+\operatorname{NR}(\omega^{a})+\mathds{1}_{r\neq\pm 1}$
* •
Now for the induction step. Let $\delta\triangleleft\ln\omega^{a}$. Since
$\ln\omega^{a}$ is a purely infinite number, so is $\delta$. Then $\exp\delta$
is of the form $\omega^{b}$ for some surreal $b\in\textnormal{No}$. Moreover
$\operatorname{NR}(\omega^{b})\underset{\text{Proposition 6.2.2}}{=}\operatorname{NR}(\delta)<\operatorname{NR}(\ln\omega^{a})\underset{\text{Proposition 6.2.2}}{=}\operatorname{NR}(\omega^{a})$
From the induction hypothesis, we have that for any
$\delta\triangleleft\ln\omega^{a}$
$\operatorname{NR}(x+\operatorname{sign}(r)\exp(\delta))=\operatorname{NR}(x)\oplus
1\oplus\operatorname{NR}(\exp\delta)$
| $\operatorname{NR}(x+r\omega^{a})$ | $\ =\ $ | $\sup\left\\{\left.\operatorname{NR}(y)+1\ \vphantom{y\in B\cup C}\right|\ y\in B\cup C\right\\}$
---|---|---|---
| | $\ =\ $ | $\sup\left(\left\\{\left.\underbrace{\operatorname{NR}(x+\operatorname{sign}(r)\exp\delta)+1}_{\leq\operatorname{NR}(x+\operatorname{sign}(r)\omega^{a})}\ \vphantom{\delta\triangleleft\ln\omega^{a}}\right|\ \delta\triangleleft\ln\omega^{a}\right\\}\right.$
| | | $\left.\qquad\cup\left\\{\left.\operatorname{NR}(y)+1\ \vphantom{y\in C}\right|\ y\in C\right\\}\vphantom{\underbrace{\operatorname{NR}(x+\operatorname{sign}(r)\exp\delta)+1}_{\leq\operatorname{NR}(x+\operatorname{sign}(r)\omega^{a})}}\right)$
| | $\ =\ $ | $\sup\left\\{\left.\operatorname{NR}(x)\oplus 1\oplus\operatorname{NR}(\exp\delta)\oplus 1\ \vphantom{\delta\triangleleft\ln\omega^{a}}\right|\ \delta\triangleleft\ln\omega^{a}\right\\}+\mathds{1}_{r\neq\pm 1}$
| | $\ =\ $ | $\operatorname{NR}(x)\oplus 1\oplus\sup\left\\{\left.\operatorname{NR}(\exp\delta)+1\ \vphantom{\delta\triangleleft\ln\omega^{a}}\right|\ \delta\triangleleft\ln\omega^{a}\right\\}\oplus\mathds{1}_{r\neq\pm 1}$
| $\operatorname{NR}(x+r\omega^{a})$ | $\ =\ $ | $\operatorname{NR}(x)\oplus 1\oplus\operatorname{NR}(\omega^{a})\oplus\mathds{1}_{r\neq\pm 1}$
∎
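For instance, combining this lemma with Lemma 6.2.2 above (our own computation, using $\operatorname{NR}(\omega)=\operatorname{NR}(1)=0$):
$\operatorname{NR}(\omega+1)=\operatorname{NR}(\omega)\oplus 1\oplus\operatorname{NR}(1)=1\qquad\text{and}\qquad\operatorname{NR}(\omega+2)=\operatorname{NR}(\omega)\oplus 1\oplus\operatorname{NR}(1)\oplus 1=2$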
###### Lemma 6.0.
Let $x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ such that for
all $i<\nu$, $r_{i}=\pm 1$ and $\omega^{a_{i}}=\lambda_{i}^{\pm 1}$ for some
$\lambda\in\mathbb{L}$. Then
$\operatorname{NR}(x)=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}\nu+1&\nu<\omega\\\
\nu&\nu\geq\omega\end{array}\right.$
###### Proof.
If $\nu<\omega$, we just proceed by induction using Lemma 6.2.2. We now prove
the remaining case by induction.
* •
If $\nu=\omega$. Then
$\operatorname{NR}(x)=\sup\left\\{\left.\operatorname{NR}\left(\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\right)+1\
\vphantom{\nu^{\prime}<\nu}\right|\
\nu^{\prime}<\nu\right\\}=\sup\left\\{\left.\nu^{\prime}+2\
\vphantom{\nu^{\prime}<\omega}\right|\ \nu^{\prime}<\omega\right\\}=\omega$
* •
Assume for $\omega\leq\nu^{\prime}<\nu$,
$\operatorname{NR}\left(\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\right)=\nu^{\prime}$.
If $\nu$ is a non-limit ordinal, then Lemma 6.2.2 concludes. Otherwise
$\operatorname{NR}(x)=\sup\left\\{\left.\operatorname{NR}\left(\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\right)+1\
\vphantom{\nu^{\prime}<\nu}\right|\
\nu^{\prime}<\nu\right\\}=\sup\left\\{\left.\nu^{\prime}+1\
\vphantom{\omega\leq\nu^{\prime}<\nu}\right|\
\omega\leq\nu^{\prime}<\nu\right\\}=\nu$
∎
###### Lemma 6.0.
Let
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}(x)\omega^{a_{i}(x)}\in\textnormal{No}$.
Then $\nu\leq\operatorname{NR}(x)+1$. Equality holds iff $x$ is a finite
sum of numbers of the form $\pm y^{\pm 1}$ with $y\in\mathbb{L}$ and possibly
one non-zero real number.
###### Proof.
Using induction on $\nu$, the inequality is straightforward. For $x=0$, $\nu=0=\operatorname{NR}(0)$.
Now assume $\nu\neq 0$. Then, by definition
$\operatorname{NR}(x)+1\geq\sup\left\\{\left.\operatorname{NR}(y)+1\
\vphantom{y\triangleleft_{0}x\quad y\neq 0}\right|\ y\triangleleft_{0}x\quad
y\neq 0\right\\}+1\underset{\text{induction
hypothesis}}{\geq}\sup\left\\{\left.\nu(y)\ \vphantom{y\triangleleft_{0}x\quad
y\neq 0}\right|\ y\triangleleft_{0}x\quad y\neq 0\right\\}+1\geq\nu(x)$
Now assume $\nu(x)=\operatorname{NR}(x)+1$ and write
$x=\underset{i<\nu(x)}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$. We use induction
on $\textnormal{No}^{*}$ with the well partial order $\triangleleft_{0}$.
* •
If $x$ is a monomial, $\nu(x)=1$ and $\operatorname{NR}(x)=0$. That is $x=\pm
y^{\pm 1}$ for some $y\in\mathbb{L}$ or $x\in\mathbb{R}$ (using Lemma 6.2.2).
* •
If $x$ is not a monomial. Assume $r_{i}\omega^{a_{i}}\notin\pm\mathbb{L}^{\pm
1}\cup\mathbb{R}^{*}$ with $i$ minimal for that property. Let
$x^{\prime}=\underset{j<i}{\overset{}{\sum}}r_{j}\omega^{a_{j}}$.
* ➢
If $i=0$ then $\operatorname{NR}(r_{0}\omega^{a_{0}})\geq 1$. A simple
induction shows that
$\operatorname{NR}\left(\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\right)\geq\nu^{\prime}$
for all $\nu^{\prime}\leq\nu$, which is a contradiction.
* ➢
Otherwise $x^{\prime}\neq 0$ and $x^{\prime}\triangleleft_{0}x$. If
$\operatorname{NR}(x^{\prime})+1\neq i$ then
$\operatorname{NR}(x^{\prime})\geq i$ and
$\operatorname{NR}(x)\geq\operatorname{NR}(x^{\prime})\oplus(\nu\ominus
i)\geq\nu$
where $\nu\ominus i$ is the ordinal such that $i\oplus(\nu\ominus i)=\nu$,
which is a contradiction. Then, by the induction hypothesis,
$i=\operatorname{NR}(x^{\prime})+1$ is finite. Now consider $y\triangleleft
x^{\prime}+r_{i}\omega^{a_{i}}$. Then $y\trianglelefteq_{0}x^{\prime}$
($y\triangleleft_{n}x^{\prime}$ with $n\geq 1$ is impossible since
$x^{\prime}$ has only terms in $\pm\mathbb{L}^{\pm 1}\cup\mathbb{R}$) or
$y=x^{\prime}+\operatorname{sign}(r_{i})\exp(\delta)$ with
$\delta\trianglelefteq\ln(\omega^{a_{i}})$. Since
$r_{i}\omega^{a_{i}}\notin\pm\mathbb{L}^{\pm 1}\cup\mathbb{R}$, there is such
a $y$ of the latter form such that $y\neq x^{\prime}+r_{i}\omega^{a_{i}}$. From
Lemma 6.2.2, we have
$\operatorname{NR}(y)\geq\operatorname{NR}(x^{\prime})+1$. Then
$\operatorname{NR}(x^{\prime}+r_{i}\omega^{a_{i}})\geq\operatorname{NR}(y)+1\geq\operatorname{NR}(x^{\prime})+2$.
By induction we then can show that
$\operatorname{NR}(x)\geq\operatorname{NR}(x^{\prime}+r_{i}\omega^{a_{i}})\oplus(\nu-(i+1))\geq\operatorname{NR}(x^{\prime})\oplus
2\oplus(\nu\ominus(i\oplus 1))=i\oplus 1+(\nu\ominus(i\oplus 1))=\nu$
and we get a contradiction.
Then every term of $x$ is in $\pm\mathbb{L}^{\pm 1}\cup\mathbb{R}$, and by
definition only one can be a non-zero real number. It remains to show that
there are finitely many terms, which follows from Lemma 6.2.2.
∎
###### Remark 6.0.
For all $x\in\textnormal{No}$, $\operatorname{NR}(x)\leq\left|x\right|_{+-}$
###### Proof.
Assume the contrary and take $x$ of minimal length contradicting the
property. Then there is $y\triangleleft x$ such that
$\operatorname{NR}(y)\geq\left|x\right|_{+-}$. Since
$\left|x\right|_{+-}>\left|y\right|_{+-}$, $y$ contradicts
the minimality of $x$. ∎
###### Proposition 6.0 ([BM18b, Berarducci and Mantova, Proposition 4.29]).
For all $a\in\textnormal{No}^{*}$, for all $r\in\mathbb{R}\setminus\\{\pm
1\\}$, we have
$\operatorname{NR}(r\omega^{a})=\operatorname{NR}(\omega^{a})+1$.
###### Proposition 6.0 ([BM18b, Berarducci and Mantova, Proposition 4.30]).
Let
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}\in\textnormal{No}^{*}$.
Then
* •
$\forall
i<\nu\qquad\operatorname{NR}(r_{i}\omega^{a_{i}})\leq\operatorname{NR}(x)$
* •
$\forall i<\nu\qquad
i+1<\nu\Rightarrow\operatorname{NR}(r_{i}\omega^{a_{i}})<\operatorname{NR}(x)$
We can also say something about the nested truncation rank of a sum of surreal
numbers.
###### Lemma 6.0.
For $a,b\in\textnormal{No}$,
$\operatorname{NR}(a+b)\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$
(natural sum of ordinals, which corresponds to the surreal sum).
###### Proof.
We prove it by induction on the couple
$(\operatorname{NR}(a),\operatorname{NR}(b))$.
* •
If $\operatorname{NR}(a)=\operatorname{NR}(b)=0$ then, by Lemma 6.2.2, both
$a,b$ are in $\pm\mathbb{L}^{\pm 1}\cup\mathbb{R}$. If $a\in\mathbb{R}$ or
$b\in\mathbb{R}$ then $\operatorname{NR}(a+b)\leq 1$ by Lemmas 6.2.2 and
6.2.2. Otherwise, either $a=\pm b$ and then $\operatorname{NR}(a+b)=0$ or
$a\neq\pm b$ and Lemma 6.2.2 ensures that $\operatorname{NR}(a+b)=1$.
* •
Assume the property for all $x,y$ such that
$(\operatorname{NR}(x),\operatorname{NR}(y))<_{lex}(\operatorname{NR}(a),\operatorname{NR}(b))$
Then, consider $y\triangleleft a+b$. Write
$a+b=\underset{i<\nu}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$.
* ➢
If $y=\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}$ with
$\nu^{\prime}<\nu$. Let $z_{a}$ be the series consisting of the terms of $a$
whose absolute value is infinitely larger than $\omega^{a_{\nu^{\prime}}}$. We
define $z_{b}$ in the same way. Then $y=z_{a}+z_{b}$. We have
$(\operatorname{NR}(z_{a}),\operatorname{NR}(z_{b}))<_{lex}(\operatorname{NR}(a),\operatorname{NR}(b))$
since there is a term with order of magnitude $\omega^{a_{\nu^{\prime}}}$ in
either $a$ or $b$. Then, applying the induction hypothesis,
$\operatorname{NR}(y)\leq\operatorname{NR}(z_{a})+\operatorname{NR}(z_{b})+1$
Since we have at least one of the following inequalities
$z_{a}\triangleleft_{0}a$ or $z_{b}\triangleleft_{0}b$, then
$\operatorname{NR}(z_{a})+1\leq\operatorname{NR}(a)$ or
$\operatorname{NR}(z_{b})+1\leq\operatorname{NR}(b)$. In all cases
$\operatorname{NR}(y)+1\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$
* ➢
If
$y=\underset{i<\nu^{\prime}}{\overset{}{\sum}}r_{i}\omega^{a_{i}}+\operatorname{sign}(r_{\nu^{\prime}})\exp(y^{\prime})$
with $\nu^{\prime}<\nu$ and
$y^{\prime}\trianglelefteq\ln\omega^{a_{\nu^{\prime}}}$ (and
$y^{\prime}\triangleleft\ln\omega^{a_{\nu^{\prime}}}$ if $r_{\nu^{\prime}}=\pm 1$). Let
$z_{a}$ be the series consisting of the terms of $a$ whose absolute value is
infinitely larger than $\omega^{a_{\nu^{\prime}}}$. We define $z_{b}$ in the
same way. Then
$y=z_{a}+z_{b}+\operatorname{sign}(r_{\nu^{\prime}})\exp(y^{\prime})$.
There is a term with order of magnitude $\omega^{a_{\nu^{\prime}}}$ with
the same sign as $r_{\nu^{\prime}}$ in either $a$ or $b$; without loss of
generality, assume it is in $a$. Then
$z_{a}+\operatorname{sign}(r_{\nu^{\prime}})\exp y^{\prime}\trianglelefteq a$.
We have
$(\operatorname{NR}(z_{a}+\operatorname{sign}(r_{\nu^{\prime}})\exp
y^{\prime}),\operatorname{NR}(z_{b}))<_{lex}(\operatorname{NR}(a),\operatorname{NR}(b))$
otherwise $y=a+b$, which is not the case. Then, applying the induction hypothesis,
$\operatorname{NR}(y)\leq\operatorname{NR}(z_{a}+\operatorname{sign}(r_{\nu^{\prime}})\exp
y^{\prime})+\operatorname{NR}(z_{b})+1$
Since we have at least one of the following inequalities
$z_{a}+\operatorname{sign}(r_{\nu^{\prime}})\exp y^{\prime}\triangleleft a$ or
$z_{b}\triangleleft_{0}b$, then we have either
$\operatorname{NR}(z_{a}+\operatorname{sign}(r_{\nu^{\prime}})\exp
y^{\prime})+1\leq\operatorname{NR}(a)$
or $\operatorname{NR}(z_{b})+1\leq\operatorname{NR}(b)$
In all cases
$\operatorname{NR}(y)+1\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$
Then, for any $y\triangleleft a+b$,
$\operatorname{NR}(y)+1\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$. This
proves that
$\operatorname{NR}(a+b)\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$
∎
###### Corollary 6.0.
For all $a,b\in\textnormal{No}$,
$\operatorname{NR}(ab)\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$.
###### Proof.
We have
$\operatorname{NR}(ab)=\operatorname{NR}(\ln\left(ab\right))=\operatorname{NR}(\ln a+\ln b)\leq\operatorname{NR}(\ln a)+\operatorname{NR}(\ln b)+1\leq\operatorname{NR}(a)+\operatorname{NR}(b)+1$
where the first equality and the last inequality follow from Proposition 6.2.2,
and the middle inequality from Lemma 6.2.2.
∎
#### 6.2.3 Paths
Surreal numbers can be seen as trees. More precisely, it is possible to
associate to each surreal number a tree (with ordinally many nodes at
each layer) whose leaves are labeled by log-atomic numbers or $0$. This gives
us some information about the structure of the surreal number. With this
notion of tree we can look at the paths from the root (labeled by the surreal
number itself) to the leaves that are not labeled by $0$ (actually there is at
most one such leaf). More precisely, the tree associated to a surreal number
$x$ is built as follows:
* •
Base case: if $x\in\mathbb{L}$ or $x=0$ just create a node labeled by $x$ and
stop the construction.
* •
Otherwise:
1. 1.
Put a node at the root and label it $x$. Write $x$ in the form
$x=\underset{i<\nu}{\overset{}{\sum}}r_{i}\exp(x_{i})$ where
$r_{i}\in\mathbb{R}^{*}$, $\nu$ is an ordinal and
$x_{i}\in\textnormal{No}_{\infty}$ form a decreasing sequence.
2. 2.
For all $i<\nu$, build the tree for $x_{i}$ and link its root to $x$ by
an edge labeled by $r_{i}$.
With this notion, it is possible to give a geometric interpretation of the
well partial order $\triangleleft$.
The dotted arrows from “$\operatorname{sign}$” indicate that we may or may not
apply the sign function along this arrow, while the plain one means that we
must apply it. Thanks to this figure we can understand
$y\triangleleft x$ by the fact that the tree representation of $y$ is a left-
part of the tree representation of $x$.
###### Remark 6.0.
The reason why we stop the construction at log-atomic numbers is that if we
continued the construction, we would get an infinite path where each node has
exactly one child and where every edge is labeled by $1$.
This notion of tree comes with a notion of path inside the tree.
###### Definition 6.0.
Let $x$ be a surreal number. A path $P$ of $x$ is a sequence
$P:\mathbb{N}\to\textnormal{No}$ such that
* •
$P(0)$ is a term of $x$
* •
For all $i\in\mathbb{N}$, $P(i+1)$ is an infinite term of $\ln|P(i)|$
We denote $\mathcal{P}(x)$ the set of all paths of $x$.
We also denote $\ell(x)$ to be the purely infinite part of $\ln|x|$. Then
$P(i+1)$ is an infinite term of $\ell(P(i))$.
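For example (our own illustration), for $x=\exp(\omega)+\omega$, the sequence $P$ with $P(0)=\exp(\omega)$ and $P(n)=\ln_{n-1}\omega$ for $n\geq 1$ is a path of $x$: $P(0)$ is a term of $x$, and each $P(i+1)$ is the unique infinite term of $\ln P(i)$, since $\ln\exp(\omega)=\omega$ and $\ln\ln_{n-1}\omega=\ln_{n}\omega=\omega^{\frac{1}{\omega^{n}}}$. It is in fact the dominant path of $x$, in the sense of the next definition.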
###### Definition 6.0.
The dominant path of $x$ is the path such that
* •
$P(0)$ is the leading term of $x$
* •
$P(i+1)$ is the leading term of $\ln|P(i)|$.
From a more graphical point of view, the dominant path of $x$ is the leftmost
path in the tree of $x$ that does not end on the leaf $0$. This reduces to the
leftmost path if $x\not\asymp 1$.
###### Proposition 6.0.
Let $x\in\textnormal{No}$ and $P\in\mathcal{P}(x)$. Then for any
$n\in\mathbb{N}$, the length of the series of $\ell(P(n))$, $\nu(\ell(P(n)))$,
satisfies
$\nu(\ell(P(n)))\leq\operatorname{NR}(x)+1$
###### Proof.
For any $x\in\textnormal{No}$ we write
$x=\underset{i<\nu(x)}{\overset{}{\sum}}r_{i}(x)\omega^{a_{i}(x)}$ in
Gonshor’s normal form. Now fix $x\in\textnormal{No}$. Let
$P\in\mathcal{P}(x)$. We set $x_{0}=x$, and $\alpha_{0}<\nu(x)$ such
$P(0)=r_{\alpha_{0}}(x)\omega^{a_{\alpha_{0}}(x_{0})}$ and for any natural
number $n$,
$x_{n+1}=\ln\omega^{a_{\alpha_{n}}(x_{n})}=\ell(P(n))$
and $P(n+1)=r_{\alpha_{n+1}}\omega^{a_{\alpha_{n+1}}(x_{n+1})}$
Using Proposition 6.2.2, we get
$\operatorname{NR}(x_{n+1})=\operatorname{NR}\left(\omega^{a_{\alpha_{n}}(x_{n})}\right)$
By definition $x_{n+1}$ is purely infinite. Then $a_{\alpha_{n+1}}(x_{n+1})>0$
for all natural numbers $n$. Since $P$ is a path, $P(0)\notin\mathbb{R}$
(otherwise $P(1)$ is not defined) and then $a_{\alpha_{0}}(x_{0})\neq 0$. We
then can apply Proposition 6.2.2 and get for all natural number $n$
$\operatorname{NR}(x_{n+1})\leq\operatorname{NR}\left(r_{\alpha_{n}}(x_{n})\omega^{a_{\alpha_{n}}(x_{n})}\right)$
Now using Proposition 6.2.2,
$\operatorname{NR}(x_{n+1})\leq\operatorname{NR}(x_{n})$
Then for any natural number $n$ we have
$\operatorname{NR}(x_{n})\leq\operatorname{NR}(x_{0})=\operatorname{NR}(x)$.
Applying Lemma 6.2.2, we get
$\forall
n\in\mathbb{N}\qquad\nu(x_{n})\leq\operatorname{NR}(x_{n})+1\leq\operatorname{NR}(x)+1$
∎
###### Remark 6.0.
Actually, we often have $\nu(\ell(P(n)))\leq\operatorname{NR}(x)$. Indeed,
using the notations of the proof and assuming that
$\nu(x_{n+1})=\operatorname{NR}(x)+1$, we have
$\operatorname{NR}(x)+1=\nu(x_{n+1})\underset{\text{Proposition 6.2.3}}{\leq}\operatorname{NR}(x_{n+1})+1\leq\cdots\leq\operatorname{NR}(x)+1$
Then, all the inequalities are equalities and from Proposition 6.2.3 we get
that $x_{n+1}$ is a finite sum of terms of the form $\pm\mathbb{L}^{\pm 1}$,
in particular $\nu(x_{n+1})<\omega$ and $\operatorname{NR}(x)$ is finite.
### 6.3 Derivative of a surreal number
###### Definition 6.0 (Summable family).
Let $\left\\{x_{i}\right\\}_{i\in I}$ be a family of surreal numbers. For
$i\in I$ write
$x_{i}=\underset{a\in\textnormal{No}}{\overset{}{\sum}}r_{i,a}\omega^{a}$
The family $\left\\{x_{i}\right\\}_{i\in I}$ is summable iff
1. (i)
$\underset{i\in I}{\overset{}{\bigcup}}\operatorname{supp}x_{i}$ is a reverse
well ordered set.
2. (ii)
For all $a\in\underset{i\in I}{\overset{}{\bigcup}}\operatorname{supp}x_{i}$,
$\left\\{\left.i\in I\ \vphantom{a\in\operatorname{supp}x_{i}}\right|\
a\in\operatorname{supp}x_{i}\right\\}$ is a finite set.
In this case, its sum is defined as $\underset{i\in
I}{\overset{}{\sum}}x_{i}=\underset{a\in\textnormal{No}}{\overset{}{\sum}}s_{a}\omega^{a}$
where for all $a\in\textnormal{No}$,
$s_{a}=\underset{i\in I\ |\
a\in\operatorname{supp}x_{i}}{\overset{}{\sum}}r_{i,a}$
which is a finite sum.
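For example, the family $\left\\{\omega^{-n}\right\\}_{n\in\mathbb{N}}$ is summable: the union of the supports is $\left\\{-n\ |\ n\in\mathbb{N}\right\\}$, which is reverse well ordered, and each exponent appears in the support of exactly one member. By contrast, the constant family $\left\\{1\right\\}_{n\in\mathbb{N}}$ is not summable, since the exponent $0$ belongs to infinitely many supports.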
###### Definition 6.0 ([BM18b, Berarducci and Mantova, Definition 6.1]).
A derivation $D$ over a totally ordered exponential (class)-field
$\mathbb{K}\supseteq\mathbb{R}$ is a function $D:\mathbb{K}\to\mathbb{K}$ such
that
1. D1.
It satisfies $\forall x,y\in\mathbb{K}\qquad D(xy)=xD(y)+D(x)y$ (Leibniz Rule)
2. D2.
If $\left\\{x_{i}\right\\}_{i\in I}$ is summable, $D\left(\underset{i\in
I}{\overset{}{\sum}}x_{i}\right)=\underset{i\in I}{\overset{}{\sum}}D(x_{i})$
(Strong additivity)
3. D3.
$\forall x\in\mathbb{K}\qquad D(\exp x)=D(x)\exp x$
4. D4.
$\ker D=\mathbb{R}$
5. D5.
$\forall x>\mathbb{N}\qquad D(x)>0$
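Two immediate consequences of these axioms, included here as a small sanity check (our own remark): taking $x=y=1$ in D1. gives $D(1)=2D(1)$, hence $D(1)=0$, consistent with D4.; and an induction on $n$ using D1. gives
$\forall n\geq 1\qquad D(x^{n})=nx^{n-1}D(x)$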
###### Remark 6.0.
We can replace Axiom D2. by
1. D2’.
If $\left\\{x_{i}\right\\}_{i\in I}$ is summable and
$\left\\{r_{i}\right\\}_{i\in I}$ is a family of real numbers,
$D\left(\underset{i\in I}{\overset{}{\sum}}r_{i}x_{i}\right)=\underset{i\in
I}{\overset{}{\sum}}r_{i}D(x_{i})$ (Strong linearity)
Indeed, we have
$\text{D2'.}\implies\text{D2.}\qquad\text{and}\qquad\text{D1.}\wedge\text{D2.}\wedge\text{D4.}\implies\text{D2'.}$
Berarducci and Mantova [BM18b] provided a general way to define a derivation
over the class-field No. We quickly recall some of their results.
###### Proposition 6.0 ([BM18b, Berarducci and Mantova, Proposition 6.4]).
We have the following properties for a derivation $D$:
* •
$\forall x,y\in\mathbb{K}\qquad 1\not\asymp x\succ y\Rightarrow D(x)\succ
D(y)$
* •
$\forall x,y\in\mathbb{K}\qquad 1\not\asymp x\sim y\Rightarrow D(x)\sim D(y)$
* •
$\forall x,y\in\mathbb{K}\qquad 1\not\asymp x\asymp y\Rightarrow D(x)\asymp
D(y)$
If $\mathbb{K}\subseteq\textnormal{No}$ is stable under $\exp$ and $\ln$, we
can get a nice property satisfied by a general derivation.
###### Proposition 6.0 ([BM18b, Berarducci and Mantova, Proposition 6.5]).
Let $\mathbb{K}\subseteq\textnormal{No}$ be a field of surreal numbers stable
under $\exp$ and $\ln$. Let $D$ be a derivation over $\mathbb{K}$. For all
$x,y>\mathbb{N}$ such that $x-y>\mathbb{N}$,
$\ln D(x)-\ln D(y)\prec x-y\preceq\max(x,y)$
To define the derivation, Berarducci and Mantova started by defining it on
log-atomic numbers and then extending it to all surreal numbers. More
precisely, a derivation on log-atomic numbers must satisfy the following:
###### Definition 6.0 ([BM18b, Berarducci and Mantova, Definition 9.1]
Prederivation).
Let $\mathbb{K}$ be a field of surreal numbers stable under $\exp$ and $\ln$
and such that for all $x\in\mathbb{K}$, for all path $P\in\mathcal{P}(x)$, for
all $k\in\mathbb{N}$, if $P(k)\in\mathbb{L}$, then $P(k)\in\mathbb{K}$. A
prederivation over $\mathbb{K}$ is a function
$D_{\mathbb{L}}:\mathbb{L}\cap\mathbb{K}\to\mathbb{K}$ such that
* D3.
$\forall\lambda\in\mathbb{L}\cap\mathbb{K}\qquad
D_{\mathbb{L}}\exp\lambda=(D_{\mathbb{L}}\lambda)\exp\lambda$
* PD1.
For all $\lambda\in\mathbb{L}\cap\mathbb{K}$, $D_{\mathbb{L}}\lambda$ is a
positive term.
* PD2.
$\forall\lambda,\mu\in\mathbb{L}\cap\mathbb{K}\qquad\ln
D_{\mathbb{L}}\lambda-\ln D_{\mathbb{L}}\mu\prec\max(\lambda,\mu)$
The key notion for defining the derivation from the prederivation is the
notion of path derivative. This notion looks at all the paths of the surreal
number to determine how each contributes to the derivative of the surreal number.
###### Definition 6.0 ([BM18b, Berarducci and Mantova, Definition 6.13] Path
derivative).
Let $P$ be a path. We define the path derivative $\partial
P\in\mathbb{R}\omega^{\textnormal{No}}$ by
$\partial P=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}P(0)\cdots
P(k-1)D_{\mathbb{L}}P(k)&P(k)\in\mathbb{L}\\\ 0&\forall k\in\mathbb{N}\quad
P(k)\notin\mathbb{L}\end{array}\right.$
We denote $\mathcal{P}_{\mathbb{L}}(x)=\left\\{\left.P\in\mathcal{P}(x)\
\vphantom{\partial P\neq 0}\right|\ \partial P\neq 0\right\\}$, which is the
set of paths that indeed reach log-atomic numbers at some point.
One can notice that for any $P\in\mathcal{P}_{\mathbb{L}}(x)$, $\partial
P=r\omega^{a}$ for some $r\in\mathbb{R}^{*}$ and $a\in\textnormal{No}$.
Indeed, every $P(k)$ is a term and $D_{\mathbb{L}}P(k)$, when
$P(k)\in\mathbb{L}$ is an exponential of a purely infinite number, hence, it
is a monomial. For $P\in\mathcal{P}_{\mathbb{L}}(x)$ there is a minimum
$k_{P}\in\mathbb{N}$ such that $P(k_{P})\in\mathbb{L}$. Then $P$ is entirely
determined by $P(0),\dots,P(k_{P})$. We then define
$\alpha_{0}(P),\dots,\alpha_{k_{P}}(P)$ as follows :
* •
Writing $x=\underset{i<\nu(x)}{\overset{}{\sum}}r_{i}(x)\omega^{a_{i}(x)}$,
then define $\alpha_{0}(P)<\nu(x)$ such that
$P(0)=r_{\alpha_{0}(P)}(x)\omega^{a_{\alpha_{0}(P)}}$.
* •
For $0\leq i<k$, write $P(i)=r\omega^{a}$. Then $P(i+1)$ is a term of
$\ln\omega^{a}$. Write
$\ln\omega^{a}=\underset{i<\nu(a)}{\overset{}{\sum}}r_{i}(a)\omega^{h(a_{i}(a))}$.
Then set $\alpha_{i+1}(P)$ such that
$P(i+1)=r_{\alpha_{i+1}(P)}(a)\omega^{h(a_{\alpha_{i+1}(P)}(a))}$
Using Proposition 6.2.3, we get that
$\left(\alpha_{i}(P)\right)_{i\in\left\llbracket\,0\ ;\
k_{P}\,\right\rrbracket}$ is a finite sequence of ordinals less than
$\operatorname{NR}(x)+1$. In particular, we can give
$\mathcal{P}_{\mathbb{L}}(x)$ a lexicographic order inherited from the one
over finite sequences.
###### Definition 6.0.
We define the order $<_{lex}$ on paths by
$P<_{lex}Q\Longleftrightarrow(\alpha_{0}(P),\dots,\alpha_{k_{P}}(P))<_{lex}(\alpha_{0}(Q),\dots,\alpha_{k_{Q}}(Q))$
This order will be useful later when we try to better understand what is
going on in order to obtain some bounds on the derivatives. For now, with the
path derivative defined, we can recall a theorem by Berarducci and Mantova
which explains how to build a general derivation from a prederivation.
###### Lemma 6.0 ([BM18b, Berarducci and Mantova, Corollary 6.17]).
Let $P,Q\in\mathcal{P}(x)$ such that $\partial P,\partial Q\neq 0$. If there
is $i\in\mathbb{N}$ such that
1. 1.
$\forall j\leq i\qquad P(j)\preceq Q(j)$
2. 2.
$P(i+1)$ is not a term of $\ell(Q(i))$,
then $\partial P\prec\partial Q$
###### Lemma 6.0 ([BM18b, Berarducci and Mantova, Lemma 6.18]).
Given $P\in\mathcal{P}(x)$ a path of $x$, we have for all $i$
$\operatorname{NR}(P(i+1))\leq\operatorname{NR}(P(i))$ with equality if and
only if $P(i+1)$ is the last term of $\ell(P(i))$. We also have
$\operatorname{NR}(P(0))\leq\operatorname{NR}(x)$ with equality if and only if
$P(0)$ is the last term of $x$.
###### Theorem 6.1 ([BM18b, Berarducci and Mantova, Proposition 6.20, Theorem
6.32]).
Let $D_{\mathbb{L}}$ be a prederivation over a surreal field $\mathbb{K}$
stable under $\exp$ and $\ln$. Then $D_{\mathbb{L}}$ extends to a derivation
$\partial:\mathbb{K}\to\textnormal{No}$ such that
$\forall x\in\mathbb{K}\qquad\partial
x=\underset{P\in\mathcal{P}(x)}{\overset{}{\sum}}\partial P$
In particular, $\left\\{\partial P\right\\}_{P\in\mathcal{P}(x)}$ is summable
(see Definition 6.3).
The study would not be complete without an example. Berarducci and Mantova
provided such a derivation and even more: it is the simplest in some sense.
###### Definition 6.1 ([BM18b, Berarducci and Mantova, Definition 6.7]).
We define $\partial_{\mathbb{L}}:\mathbb{L}\to\textnormal{No}$ by
$\forall\lambda\in\mathbb{L}\qquad\partial_{\mathbb{L}}\lambda=\exp\left(-\underset{\alpha\in\textnormal{Ord}|\kappa_{-\alpha}\succeq^{K}\lambda}{\overset{}{\sum}}\ \underset{n=1}{\overset{+\infty}{\sum}}\ln_{n}\kappa_{\alpha}+\underset{n=1}{\overset{+\infty}{\sum}}\ln_{n}\lambda\right)$
For example, we have:
$\displaystyle\partial_{\mathbb{L}}\omega$ $\displaystyle=1$
$\displaystyle\partial_{\mathbb{L}}\exp\omega$ $\displaystyle=\exp\omega$
$\displaystyle\partial_{\mathbb{L}}\ln\omega$
$\displaystyle=\exp(-\ln\omega)=\frac{1}{\omega}$
$\displaystyle\partial_{\mathbb{L}}\ln_{n}\omega$
$\displaystyle=\frac{1}{\underset{k=0}{\overset{n-1}{\prod}}\ln_{k}\omega}$
$\displaystyle\partial_{\mathbb{L}}\kappa_{1}=\partial_{\mathbb{L}}\varepsilon_{0}$
$\displaystyle=\exp\left(\underset{n=1}{\overset{+\infty}{\sum}}\ln_{n}\kappa_{1}\right)$
$\displaystyle\partial_{\mathbb{L}}\kappa_{-1}$
$\displaystyle=\exp\left(-\underset{n=1}{\overset{+\infty}{\sum}}\ln_{n}\omega\right)$
In fact, $\kappa_{1}$ is intuitively $\exp_{\omega}\omega$. Therefore it is
also quite intuitive that
$\partial_{\mathbb{L}}\kappa_{1}=\kappa_{1}\ln(\kappa_{1})\ln\ln(\kappa_{1})\cdots$.
The same happens for $\kappa_{-1}$ which is intuitively $\ln_{\omega}\omega$.
We indeed have
$\partial_{\mathbb{L}}\kappa_{-1}=\frac{1}{\omega\ln(\omega)\ln\ln(\omega)\cdots}$.
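Combining these values with the Leibniz rule D1. for the derivation $\partial$ extending $\partial_{\mathbb{L}}$ (Theorem 6.1 above), we get for instance (our own computation)
$\partial\left(\omega\exp\omega\right)=\partial(\omega)\exp\omega+\omega\,\partial(\exp\omega)=\exp\omega+\omega\exp\omega\qquad\text{and}\qquad\partial\left(\omega^{2}\right)=2\omega\,\partial(\omega)=2\omega$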
###### Proposition 6.1 ([BM18b, Berarducci and Mantova, Propositions 6.9 and
6.10]).
$\partial_{\mathbb{L}}$ is a prederivation.
The previous proposition ensures that the associated function $\partial$
defined by Theorem 6.1 is indeed a derivation over surreal numbers. It turns
out that it is the simplest for the order $\sqsubseteq$.
We now explain what is meant when saying that $\partial$ is the simplest
derivation. In fact, we mean that $\partial_{\mathbb{L}}$ is the simplest
prederivation with respect to the order $\sqsubseteq$.
###### Theorem 6.2 (Berarducci and Mantova[BM18b, Theorem 9.6]).
Let $D_{\mathbb{L}}$ be a prederivation. Let $\lambda\in\mathbb{L}$ be minimal (in
$\mathbb{L}$) for $\sqsubseteq$ such that
$D_{\mathbb{L}}\lambda\neq\partial_{\mathbb{L}}\lambda$. Then
$\partial_{\mathbb{L}}\lambda\sqsubset D_{\mathbb{L}}\lambda$.
### 6.4 A first bound about the derivative
We give here a bound on the length of the series of a derivative.
See 1
###### Proof.
We know that $\left\\{\partial P\right\\}_{P\in\mathcal{P}(x)}$ is summable
(see Definition 6.3). In particular $\left\\{\partial
P\right\\}_{P\in\mathcal{P}_{\mathbb{L}}(x)}$ is summable. By definition of
summability (in this context) for any $P\in\mathcal{P}_{\mathbb{L}}(x)$, there
are finitely many $Q\in\mathcal{P}_{\mathbb{L}}(x)$ such that $\partial
P\asymp\partial Q$.
By definition of summability, $<_{\mathcal{P}}$ is a well total order over
$\mathcal{P}_{\mathbb{L}}(x)$ and if $\beta$ is its order type, then
$\omega\otimes\nu(\partial x)<\beta$ (usual ordinal product). Then, to
complete the proof, we just need to show that
$\beta<\omega^{\omega^{\omega(\operatorname{NR}(x)+1)}}$. We proceed by
induction on $\operatorname{NR}(x)$.
* •
$\operatorname{NR}(x)=0$: then $x=0$ or $x=\pm y^{\pm 1}$ for some
$y\in\mathbb{L}$ and $\nu\left(\partial x\right)\leq
1<\omega^{\omega^{\omega}}$ and we conclude the proof.
* •
Assume that for any $y$ such that $\operatorname{NR}(y)<\operatorname{NR}(x)$,
$\mathcal{P}_{\mathbb{L}}(y)$ has order type less than
$\omega^{\omega^{\omega(\operatorname{NR}(y)+1)}}$. Assume for contradiction
that $\beta\geq\omega^{\omega^{\omega(\operatorname{NR}(x)+1)}}$. Then for any
multiplicative ordinal $\mu<\omega^{\omega^{\omega(\operatorname{NR}(x)+1)}}$,
there is some $P_{\mu}\in\mathcal{P}_{\mathbb{L}}(x)$, minimum with respect to
$<_{lex}$, such that the set
$\mathcal{E}_{\mu}(x)=\left\\{\left.Q\in\mathcal{P}_{\mathbb{L}}(x)\
\vphantom{Q<_{\mathcal{P}}P_{\mu}}\right|\ Q<_{\mathcal{P}}P_{\mu}\right\\}$
has order type $\beta_{\mu}\geq\mu$. Let us select any $\mu$ such that
$\mu\geq\omega^{\omega^{\omega\operatorname{NR}(x)+1}}$. Now define
$\mathcal{E}_{\mu}^{(1)}(x)=\left\\{\left.Q\in\mathcal{P}_{\mathbb{L}}(x)\ \right|\ Q<_{\mathcal{P}}P_{\mu},\ Q<_{lex}P_{\mu}\right\\}\qquad\text{and}\qquad\mathcal{E}_{\mu}^{(2)}=\left\\{\left.Q\in\mathcal{P}_{\mathbb{L}}(x)\ \right|\ Q>_{lex}P_{\mu}\right\\}$
These sets are disjoint and
$\mathcal{E}_{\mu}=\mathcal{E}_{\mu}^{(1)}\cup\mathcal{E}_{\mu}^{(2)}$
Let $\beta_{\mu}^{(i)}$ be the order type of $\mathcal{E}_{\mu}^{(i)}$. We
then have
$\mu\leq\beta_{\mu}\leq\beta_{\mu}^{(1)}+\beta_{\mu}^{(2)}$
where the addition is the surreal addition of ordinal numbers. Since $\mu$ is
a multiplicative ordinal, hence an additive one, at least one of the
$\beta_{\mu}^{(i)}$ is $\geq\mu$.
* ➢
First case : $\beta_{\mu}^{(2)}\geq\mu$. Since $\mu$ is additive, there is an
$i\in\left\llbracket\,0\ ;\ k_{P}\,\right\rrbracket$ such that the well
ordered set
$\mathcal{E}_{\mu}^{(2,i)}=\left\\{\left.Q\in\mathcal{E}_{\mu}^{(2)}\
\vphantom{\forall j<i\ Q(j)=P_{\mu}(j)\quad Q(i)\prec P_{\mu}(i)}\right|\
\forall j<i\ Q(j)=P_{\mu}(j)\quad Q(i)\prec P_{\mu}(i)\right\\}$
has order type at least $\mu$. We take such an $i$. For
$Q\in\mathcal{E}_{\mu}^{(2,i)}$, we consider the path
$Q^{\prime}(n)=Q(n+i+1)$. Since $\partial Q\succeq\partial P_{\mu}$, Lemma 6.3
gives us that $Q(i+1)$ is a term of $\ell(P_{\mu}(i))$. We then have
$Q^{\prime}\in\mathcal{P}\left(\ell(P_{\mu}(i))\right)$ and
$\partial Q^{\prime}=\frac{\partial Q}{Q(0)\cdots Q(i)}=\frac{\partial Q}{P_{\mu}(0)\cdots P_{\mu}(i-1)Q(i)}$
In particular
$Q^{\prime}\in\mathcal{P}_{\mathbb{L}}\left(\ell(P_{\mu}(i))\right)$. Since
$Q(i)\prec P_{\mu}(i)$, $P_{\mu}(i)$ is not the last term of
$\ell(P_{\mu}(i-1))$ (or $x$ if $i=0$). Then Proposition 6.2.2 ensures that
$\operatorname{NR}(\ell(P_{\mu}(i)))\leq\operatorname{NR}(P_{\mu}(i))<\operatorname{NR}(x)$
Applying the induction hypothesis to $\ell(P_{\mu}(i))$, the set
$\mathcal{P}_{\mathbb{L}}\left(\ell(P_{\mu}(i))\right)$ has order type
$\gamma$ such that
$\gamma<\omega^{\omega^{\omega\left(\operatorname{NR}(\ell(P(i)))+1\right)}}\leq\omega^{\omega^{\omega\operatorname{NR}(x)}}<\omega^{\omega^{\omega\operatorname{NR}(x)+1}}\leq\mu$
For $Q,R\in\mathcal{E}_{\mu}^{(2,i)}$, $Q<_{\mathcal{P}}R$ iff
$(Q(i)\partial Q^{\prime}\succ R(i)\partial R^{\prime})\ \vee\ (Q(i)\partial Q^{\prime}\asymp R(i)\partial R^{\prime}\wedge Q(i)\partial Q^{\prime}>R(i)\partial R^{\prime})\ \vee\ (Q(i)\partial Q^{\prime}=R(i)\partial R^{\prime}\wedge Q<_{lex}R)$
which we can also write as
$Q<_{\mathcal{P}}R\ \Leftrightarrow\ \big{(}\ell(Q(i))+\ell(\partial Q^{\prime})>\ell(R(i))+\ell(\partial R^{\prime})\big{)}\ \vee\ \big{(}\ell(Q(i))+\ell(\partial Q^{\prime})=\ell(R(i))+\ell(\partial R^{\prime})\wedge Q(i)\partial Q^{\prime}>R(i)\partial R^{\prime}\big{)}\ \vee\ (Q(i)\partial Q^{\prime}=R(i)\partial R^{\prime}\wedge Q<_{lex}R)$
where the two latter cases occur finitely many times for fixed $Q$ or $R$. Let
$\delta$ denote the order type of the possible values for $Q(i)$ and
$\beta_{\mu}^{(2,i)}$ the order type of $\mathcal{E}_{\mu}^{(2,i)}$. Since
$\ell$ is non-decreasing, the set $\left\\{\left.\ell(\partial Q^{\prime})\
\vphantom{Q\in\mathcal{E}_{\mu}^{(2,i)}}\right|\
Q\in\mathcal{E}_{\mu}^{(2,i)}\right\\}$ has order type at most $\gamma$ and
the set $\left\\{\left.\ell(Q(i))\
\vphantom{Q\in\mathcal{E}_{\mu}^{(2,i)}}\right|\
Q\in\mathcal{E}_{\mu}^{(2,i)}\right\\}$ has order type at most
$\operatorname{NR}(x)$. Using Proposition 2,
$\beta_{\mu}^{(2,i)}\leq(\gamma\operatorname{NR}(x))\otimes\omega<\mu$
Finally $\mu\leq\beta_{\mu}^{(2,i)}<\mu$
and we reach a contradiction.
* ➢
Second case : $\beta_{\mu}^{(2)}<\mu$. Then $\beta_{\mu}^{(1)}\geq\mu$. Let us
define for $i\in\left\llbracket\,0\ ;\ k_{P}\,\right\rrbracket$
$\mathcal{E}_{\mu}^{(1,i)}=\left\\{\left.Q\in\mathcal{E}_{\mu}^{(1)}\
\vphantom{\forall j<i\ P_{\mu}(j)=Q(j)\quad P_{\mu}(i)\prec Q(i)}\right|\
\forall j<i\ P_{\mu}(j)=Q(j)\quad P_{\mu}(i)\prec Q(i)\right\\}$
Since they are finitely many, form a partition of
$\mathcal{E}_{\mu}^{(1)}$, and $\mu$ is multiplicative, hence additive, at
least one of them has order type at least $\mu$. We consider such
an $i\in\left\llbracket\,0\ ;\ k_{P}\,\right\rrbracket$. Now define
$x_{j}=\left\\{\begin{array}[]{@{}r@{\quad}l@{}}x&i=j\\\
\ell(P(i-j-1))&j<i\end{array}\right.$
Writing
$x_{0}=\underset{n<\nu(x_{0})}{\overset{}{\sum}}r_{n}(x_{0})\omega^{a_{n}(x_{0})}$
and $P_{\mu}(i)=r_{\alpha_{0}}(x_{0})\omega^{a_{\alpha_{0}}(x_{0})}$ we set
$y_{0}=\underset{n<\alpha_{0}}{\overset{}{\sum}}r_{n}(x_{0})\omega^{a_{n}(x_{0})}$
Now for $0\leq j<i$, we define $y_{j+1}$ as follows. $P_{\mu}(i-j-1)$ is a
term of $x_{j+1}$. Write
$P_{\mu}(i-j-1)=r_{\alpha_{j+1}}(x_{j+1})\omega^{a_{\alpha_{j+1}}(x_{j+1})}$
for some ${\alpha_{j+1}}<\nu(x_{j+1})$. Then set
$y_{j+1}=\underset{n<\alpha_{j+1}}{\overset{}{\sum}}r_{n}(x_{j+1})\omega^{a_{n}(x_{j+1})}+\operatorname{sign}(r_{\alpha_{j+1}}(x_{j+1}))\exp(y_{j})$
Denote $y=y_{i}$. For any
$Q\in\mathcal{E}_{\mu}^{(1,i)}$ we will build
$Q^{\prime}\in\mathcal{P}_{\mathbb{L}}(y)$. We expect to use the induction
hypothesis on $y$. First we prove that
$\operatorname{NR}(y)<\operatorname{NR}(x)$. In fact, by trivial induction, we
have $y_{j}\triangleleft_{j}x_{j}$. So $y\triangleleft_{i}x$ and by definition
of $\operatorname{NR}$ we have $\operatorname{NR}(y)<\operatorname{NR}(x)$.
Now consider the path $Q^{\prime}$ defined as follows :
* $\because$
$\forall j<i\qquad
Q^{\prime}(j)=\operatorname{sign}(r_{\alpha_{j}}(x_{i-j}))\exp(y_{i-j-1})$
* $\because$
$\forall j\geq i\qquad Q^{\prime}(j)=Q(j)$
We then have $Q^{\prime}\in\mathcal{P}(y)$. We can even say
$Q^{\prime}\in\mathcal{P}_{\mathbb{L}}(y)$. Moreover, since we change only the
common terms of the path, and the changes do not depend on $Q$, we have
$\forall Q,R\in\mathcal{E}_{\mu}^{(1,i)}\qquad
Q<_{\mathcal{P}}R\Leftrightarrow Q^{\prime}<_{\mathcal{P}}R^{\prime}$
We then have an increasing function
$\Phi:\mathcal{E}_{\mu}^{(1,i)}\to\mathcal{P}_{\mathbb{L}}(y),\qquad Q\mapsto Q^{\prime}$
# Optimistix: modular optimisation in JAX and Equinox
Jason Rader Terry Lyons Patrick Kidger
###### Abstract
We introduce Optimistix: a nonlinear optimisation library built in JAX and
Equinox. Optimistix introduces a novel, modular approach for its minimisers
and least-squares solvers. This modularity relies on new practical
abstractions for optimisation which we call search and descent, and which
generalise classical notions of line search, trust-region, and learning-rate
algorithms. It provides high-level APIs and solvers for minimisation,
nonlinear least-squares, root-finding, and fixed-point iteration. Optimistix
is available at https://github.com/patrick-kidger/optimistix.
Machine Learning
## 1 Introduction
JAX is a Python autodifferentiation framework popular for scientific computing
and machine learning (Bradbury et al., 2018; Kidger, 2021). Equinox (Kidger &
Garcia, 2021) extends many JAX core transformations and concepts, and adds
additional functionality for parameterised functions. Equinox has become a
popular choice for machine learning (Hall et al., 2023a, b; Singh, 2022; Wang,
2023) and scientific machine learning ‘sciML’ (Kidger, 2021; Rader et al.,
2023; Pastrana et al., 2023; Pastrana, 2023) in JAX.
We introduce Optimistix, a nonlinear optimisation library built in JAX +
Equinox. Optimistix targets differentiable scientific computing and sciML
tasks. The sciML ecosystem in JAX is large and growing, and already includes
packages for differentiable rigid-body physics simulation (Freeman et al.,
2021), computational fluid dynamics (Dresdner et al., 2022; Bezgin et al.,
2022), protein structure prediction (Jumper et al., 2021), linear solves and
least-squares (Rader et al., 2023), ordinary and stochastic differential
equations (Kidger, 2021), general-purpose optimisation (Blondel et al., 2021),
structural design (Pastrana et al., 2023), Bayesian optimisation (Song et al.,
2022; Golovin et al., 2017), and probabilistic modeling (Stanojević & Sartran,
2023).
### 1.1 Contributions
We introduce Optimistix, a JAX nonlinear optimisation library with the
following features:
1. 1.
Modular optimisers.
2. 2.
Fast compile times and run times.
3. 3.
Support for general PyTrees, and use of PyTrees for solver state. (PyTrees are JAX data structures consisting of arbitrarily nested container types, containing other JAX/Python types. The containers (tuples/dictionaries/lists/custom types) are referred to as ‘nodes’, and the data types they hold as ‘leaves’.)
We highlight the first point as the central contribution of this work. This
allows users to define custom optimisers for a specific problem by swapping
components of the optimiser.
To achieve this, Optimistix introduces two practical abstractions: search and
descent. These abstractions generalise classical line search, trust-region,
and learning-rate algorithms. To the best of our knowledge, Optimistix is the
first optimisation software modularised in this way.
Optimistix includes APIs for four types of optimisation tasks: minimisation,
nonlinear least-squares, root-finding, and fixed-point iteration. Each has a
high-level API, and supports automatic conversion of appropriate problem types.
Optimistix is already seeing adoption in sciML, for example in software
libraries for solving differential equations (Kidger, 2021) and probabilistic
inference (Carroll, 2023).
Example usage on the Rosenbrock problem (Rosenbrock, 1960) using BFGS and
automatically converting least-squares to minimisation is:
⬇
import jax
import optimistix as optx

def loss(x, scaling):
    res1 = scaling * (x[1:] - x[:-1]**2)
    return (res1, 1 - x)

init = jax.numpy.zeros(100)
scaling = 10
solver = optx.BFGS(rtol=1e-5, atol=1e-6)
minimum = optx.least_squares(
    loss, solver, init, args=scaling
)
## 2 Background: Preconditioned Gradient Methods
The dominant approach to differentiable optimisation is to locally approximate
the objective function $f:\mathbb{R}^{N}\to\mathbb{R}$ at an iterate $x_{k}$
using a quadratic model function:
$m_{k}(p)=f(x_{k})+\nabla f(x_{k})^{T}p+\frac{1}{2}p^{T}H_{k}p$ (1)
where $p\in\mathbb{R}^{N}$, and $H_{k}\in\mathbb{R}^{N\times N}$ is an
approximation to the Hessian $\nabla^{2}f(x_{k})$ (Nocedal & Wright,
2006)[sections 1-6], (Bonnans et al., 2006)[section 4], (Conn et al., 2000).
This quadratic is used to find the next step in an iterative algorithm to
minimise $f$. The minimum of (1) is found at $p=-H_{k}^{-1}\nabla f(x_{k})$.
As such, these are sometimes called preconditioned gradient methods (Gupta et
al., 2018) (not to be confused with preconditioned conjugate gradient methods
(Nocedal & Wright, 2006)[section 5].)
Most first-order methods common in the machine learning literature, such as
Adam (Kingma & Ba, 2017) and Adagrad (Duchi et al., 2011), are also
preconditioned gradient methods. In these algorithms, $H^{-1}$ is usually
stored and updated directly, rather than computed from $H$. Moreover, $H^{-1}$
is often highly structured or diagonal to reduce memory cost.
The class of preconditioned gradient methods is extremely large, and includes
Newton’s method, quasi-Newton methods, and Gauss-Newton methods (Bonnans et
al., 2006)[section 4], (Nocedal & Wright, 2006)[section 6], some nonlinear
conjugate gradient methods (Sherali & Ulular, 1990), gradient descent, and
adaptive gradient methods such as Adam (Kingma & Ba, 2017) and Adagrad (Duchi
et al., 2011). In many of these algorithms, the quadratic approximation is
implicit.
Preconditioned gradient methods are central in Optimistix, and were an
important motivator of the descent and search abstractions we now introduce in
section 3.
## 3 Modularity in Optimistix
This section introduces the key advancement of Optimistix: modularity. To
attain this, Optimistix uses a generalised approach to line searches, trust-
regions, and learning-rates through the abstractions of search and descent.
These concepts offer a precise formalism of ideas already present in the
optimisation literature. This approach both makes it easier to advance theory (in Section 3.1 we demonstrate how it may be used to create a novel minimiser) and serves as a practical route to modularity. To the best of
our knowledge this approach is new here, and is not present in any other open-
source optimisation package.
Consider a scalar function to minimise: $f:\mathbb{R}^{n}\to\mathbb{R}$.
Searches consume local information about $f$, such as its value, gradient,
and/or Hessian at a point, and return a scalar. This scalar corresponds to the
distance along a line search, the trust-region radius of a trust-region
method, the value of a learning-rate, etc. Searches are a generalisation of
these.
A descent consumes this scalar, as well as the same local information about
$f$, and returns the step the optimiser should take. For example, gradient
descent with a fixed learning-rate $\alpha$ is implemented as a search which
returns a fixed value $\alpha\in\mathbb{R}$, and a descent which returns $-\alpha\operatorname{\nabla}f(x_{k})$.
Searches and descents are modular components in Optimistix, and they can be
easily swapped by the user.
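To fix ideas, a single optimiser step can be thought of as the composition below. This is a conceptual sketch of ours; the names and signatures are illustrative and do not reflect the actual Optimistix API:
⬇
def optimiser_step(x_k, local_info, search, descent, search_state, descent_state):
    # The search consumes local information about f and returns a scalar:
    # a step length, trust-region radius, or learning rate.
    alpha_k, search_state = search(local_info, search_state)
    # The descent maps that scalar and the same local information to a step.
    step_k, descent_state = descent(alpha_k, local_info, descent_state)
    return x_k + step_k, search_state, descent_state
Because the two components only communicate through the scalar and the shared local information, either one can be replaced independently of the other.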
### 3.1 Creating a Novel Optimiser with Ease: an Example
Optimistix provides a new “mix-and-match” API which allows users to easily
create new optimisers. For example, here we define a custom optimiser and use
this to solve a toy linear regression problem:
⬇
from collections.abc import Callable
import jax.numpy as jnp
import jax.random as jr
import optimistix as optx

class HybridMinimiser(optx.AbstractBFGS):
    rtol: float
    atol: float
    norm: Callable = optx.max_norm
    use_inverse: bool = False
    descent: optx.AbstractDescent = (
        optx.DoglegDescent()
    )
    search: optx.AbstractSearch = (
        optx.LearningRate(0.1)
    )

solver = HybridMinimiser(
    rtol=1e-8, atol=1e-9
)

def loss(weights, data):
    x, y = data
    return weights.T @ x - y

key1, key2 = jr.split(jr.PRNGKey(0))
noise = 0.1 * jr.normal(key1, shape=(99,))

true_weights = jnp.array([3.14, -7, 2.71])
xs = jr.normal(key2, shape=(3, 99))
ys = true_weights.T @ xs + noise
init = jnp.array([0.0, 0.0, 0.0])

# Convert least-squares to minimisation,
# then solve.
minimum = optx.least_squares(
    loss, solver, init, args=(xs, ys)
)
# `minimum.value`:
# Array([3.129, -6.983, 2.732])
`HybridMinimiser` defines an optimiser which, at each step $k$, uses the
gradient and BFGS quasi-Newton approximation
$B(x_{k})\approx\nabla^{2}f(x_{k})$ (Nocedal & Wright, 2006)[section 6.1] to
construct a quadratic model function as discussed in section 2. A ‘dogleg’ descent path is then built: a piecewise linear curve interpolating between $x_{k}$, the steepest-descent (Cauchy) point along $-\operatorname{\nabla}f(x_{k})$, and the full quasi-Newton step $x_{k}-B^{-1}(x_{k})\operatorname{\nabla}f(x_{k})$.
Finally, a constant step of length `0.1` is taken along this descent path, to
form the next iterate $x_{k+1}$.
`HybridMinimiser` is not an off-the-shelf optimiser, and would require a
custom implementation in most optimisation packages. Implementing a custom,
performant algorithm may require significant experience in optimisation, and
hundreds of lines of technical code. In Optimistix, a performant
implementation of this novel optimiser takes fewer than 10 lines of code.
Custom optimisers allow users to choose optimisation methods appropriate for
the problem at hand. For example, the novel optimiser above can solve the
poorly-scaled Biggs EXP6 function (Moré et al., 1981), whereas the standard
BFGS algorithm with backtracking line search fails to solve the problem to an
acceptable accuracy.
### 3.2 Search
For most nonlinear functions, the quadratic approximation $m_{k}$ in (1) is
only a reasonable approximation in a small neighborhood of $x_{k}$. The full
preconditioned gradient step $-H_{k}^{-1}\operatorname{\nabla}f(x_{k})$ often
overshoots the region where the approximation is good, slowing the
optimisation process. Line searches, trust-regions, and learning-rates are all
methods to keep steps within a region where the approximation is good.
We begin by discussing the classical approach to line searches and trust-
regions.
Line searches
Line searches (Nocedal & Wright, 2006)[section 3] move in the direction of the preconditioned gradient, but only move a certain amount $\alpha_{k}\in\mathbb{R}^{+}$, i.e.
$x_{k+1}=x_{k}-\alpha_{k}H_{k}^{-1}\operatorname{\nabla}f(x_{k}).$
Algorithms for choosing $\alpha_{k}$ often seek to satisfy conditions of sufficient decrease and curvature, keeping $\alpha_{k}$ from growing too large or shrinking too small. Popular conditions are the Armijo, Wolfe, and Goldstein conditions, with various relaxations of each (Nocedal & Wright, 2006)[section 3], (Moré & Thuente, 1994), (Bonnans et al., 2006).
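As an illustration of the classical inner loop, the following sketch (ours, not Optimistix's implementation) backtracks along a given descent direction until the sufficient-decrease (Armijo) condition holds:
⬇
import jax
import jax.numpy as jnp

def backtracking_line_search(f, x_k, direction, alpha0=1.0, shrink=0.5,
                             eta=1e-4, max_backtracks=20):
    # Shrink alpha until f decreases by at least eta * alpha * (slope along direction).
    f_k = f(x_k)
    slope = jax.grad(f)(x_k) @ direction
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x_k + alpha * direction) <= f_k + eta * alpha * slope:
            break
        alpha = shrink * alpha
    return alpha

def f(x):
    return jnp.sum(x ** 2)

x_k = jnp.array([1.0, -2.0])
direction = -jax.grad(f)(x_k)        # e.g. a (preconditioned) gradient direction
x_next = x_k + backtracking_line_search(f, x_k, direction) * direction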
Trust-region methods
Trust-region methods (Nocedal & Wright, 2006)[section 4] are another popular
class of algorithms which seek to approximately solve the constrained
optimisation problem
$p^{*}=\operatorname*{arg\,min}_{p}\,m_{k}(p)\ \text{subject to}\ \left\lVert p\right\rVert\leq\Delta_{k}$ (2)
$x_{k+1}=x_{k}+p^{*}$ (3)
for some norm $\left\lVert\cdot\right\rVert$ and trust-region radius
$\Delta_{k}\in\mathbb{R}^{+}$. The trust-region radius is chosen at each step
based upon how well $m_{k}$ approximated $f$ at the previous step, see (Conn
et al., 2000)[sections 6.1 and 10.5] for details.
The minimum in (2) is found using a variety of approximate methods (Nocedal & Wright, 2006)[section 4], (Steihaug, 1983), and when $\left\lVert-H_{k}^{-1}\operatorname{\nabla}f(x_{k})\right\rVert\leq\Delta_{k}$, then by construction $p^{*}=-H_{k}^{-1}\operatorname{\nabla}f(x_{k})$. For this reason, trust-region methods are often interpreted as a class of methods for interpolating between $x_{k}$ and $x_{k}-H_{k}^{-1}\operatorname{\nabla}f(x_{k})$ via the scalar parameter
$\Delta_{k}\in\left[0,\left\lVert H_{k}^{-1}\operatorname{\nabla}f(x_{k})\right\rVert\right].$
One advantage line search algorithms have over trust-region algorithms is that the
value $H_{k}^{-1}\operatorname{\nabla}f(x_{k})$ can be computed once, cached,
and reused as $\alpha_{k}$ varies. This is not always true for trust-region
algorithms, which may require significant effort to recompute $p^{*}$ as
$\Delta_{k}$ varies.
Searches in Optimistix
A _search_ is a new abstraction introduced in Optimistix to generalise line
searches, trust-region methods, and learning-rates. Searches are defined as
functions taking local information and an internal state, and producing a
scalar
$s:\mathcal{D}\times\mathcal{S}\to\mathbb{R}.$ (4)
For an example of a line search, consider the backtracking Armijo update. The Armijo backtracking algorithm uses local data $d_{k}=(d_{k}^{(1)},d_{k}^{(2)})\in\mathcal{D}=\mathbb{R}\times\mathbb{R}^{N}$, where $d_{k}^{(1)}$ represents the objective function value $f(x_{k})$, and $d_{k}^{(2)}$ its gradient $\operatorname{\nabla}f(x_{k})$. The Armijo search state is $\sigma_{k}=(\sigma_{k}^{(1)},\sigma_{k}^{(2)})\in\mathcal{S}=(0,1]\times\{0,1\}$, where $\sigma_{k}^{(1)}$ represents the current step-size and $\sigma_{k}^{(2)}$ represents whether to shrink or reset the step size, depending on whether the last step satisfied the Armijo condition. For a decrease factor $c\in(0,1]$, the Armijo search takes the step
$s(d_{k},\sigma_{k})=\begin{cases}c\sigma_{k}^{(1)}&\sigma_{k}^{(2)}=0\\ 1&\sigma_{k}^{(2)}=1,\end{cases}$ (5)
and updates its state $\sigma_{k}$ via
$\sigma_{k+1}^{(1)}=s(d_{k},\sigma_{k})$
and $\sigma_{k+1}^{(2)}=1$ when the Armijo condition
$f(x_{k}+\delta_{k})\leq d_{k}^{(1)}+\eta\delta_{k}^{T}d_{k}^{(2)}$ (6)
is satisfied, or $\sigma_{k+1}^{(2)}=0$ otherwise. Here, $0<\eta<1$ is a
hyperparameter which determines how much a step must decrease to be accepted
(larger means more decrease is required) and $\delta_{k}$ is the proposed
step.
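Written as a search in the sense of equation (4), this update is a small pure function of the local data and the state. The sketch below is ours, with illustrative names, and is not the Optimistix implementation:
⬇
def armijo_search(d_k, sigma_k, c=0.5):
    step_size, last_step_accepted = sigma_k     # sigma_k^(1), sigma_k^(2)
    # Equation (5): reset to 1 after an accepted step, otherwise shrink by c.
    return 1.0 if last_step_accepted else c * step_size

def armijo_state_update(d_k, sigma_k, delta_k, f_at_proposal, eta=1e-4, c=0.5):
    f_k, grad_k = d_k                           # d_k^(1), d_k^(2)
    new_step_size = armijo_search(d_k, sigma_k, c)
    # Equation (6): the Armijo sufficient-decrease condition for the proposed step.
    accepted = f_at_proposal <= f_k + eta * delta_k @ grad_k
    return (new_step_size, accepted)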
There are a number of different line search algorithms, but trust-region
methods typically use the same trust-region radius selection algorithm. This
algorithm is represented by the search
$s_{\text{TR}}(d_{k},\sigma_{k})=\begin{cases}c_{2}\sigma_{k}^{(1)}&\sigma_{k}^{(2)}>C_{2},\\ \sigma_{k}^{(1)}&C_{1}<\sigma_{k}^{(2)}<C_{2},\\ c_{1}\sigma_{k}^{(1)}&\sigma_{k}^{(2)}<C_{1},\end{cases}$ (7)
where $0<c_{1}<1$ is a decrease amount, $c_{2}>1$ an increase amount, $C_{1}$ a low cutoff (close to $0$) and $C_{2}$ a high cutoff (close to $1$). The state is updated via
$\sigma_{k+1}^{(1)}=s(d_{k},\sigma_{k})$
and
$\sigma_{k+1}^{(2)}=\frac{f(x_{k})-f(x_{k}+\delta_{k})}{m_{k}(0)-m_{k}(\delta_{k})}$ (8)
where $m_{k}$ is the quadratic model function (1) and $\delta_{k}$ is again
the proposed step $x_{k+1}=x_{k}+\delta_{k}$. The second component of the
state, $\sigma_{k}^{(2)}$, is referred to as the ‘trust-region ratio’, and
roughly indicates how well the model function $m_{k}$ predicted the decrease
in $f$ that would come from taking the step $\delta_{k}$.
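A sketch of this radius update, with illustrative constants and names of our own choosing, is:
⬇
def trust_region_search(d_k, sigma_k, c1=0.25, c2=2.0, C1=0.25, C2=0.75):
    radius, ratio = sigma_k                  # sigma_k^(1), sigma_k^(2)
    if ratio > C2:                           # model predicted the decrease well
        return c2 * radius                   # grow the radius
    elif ratio < C1:                         # model predicted poorly
        return c1 * radius                   # shrink the radius
    return radius                            # otherwise keep it

def trust_region_ratio(f_k, f_at_proposal, m_k, delta_k):
    # Equation (8): actual decrease over the decrease predicted by the model m_k.
    predicted_decrease = m_k(0.0 * delta_k) - m_k(delta_k)
    return (f_k - f_at_proposal) / predicted_decrease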
Current searches implemented in Optimistix are: learning rate, backtracking
Armijo line search (Nocedal & Wright, 2006)[section 3.1], the classical trust-
region ratio update (Conn et al., 2000)[section 6.1], and a trust-region
update using a linear local approximation for first-order methods (Conn et
al., 2000).
### 3.3 Descent
A _descent_ is another new abstraction in Optimistix. It is a function taking
a scalar, the same local data, and an internal state and producing a vector:
$\delta:\mathbb{R}\times\mathcal{D}\times\mathcal{R}\to\mathbb{R}^{N}.$ (9)
The value produced by $\delta$ is used to update the iterates $x_{k}$
$x_{k+1}=x_{k}+\delta(\alpha_{k},d_{k},\gamma_{k})$
with $\alpha_{k}\in\mathbb{R},\ d_{k}\in\mathcal{D},\
\gamma_{k}\in\mathcal{R}$.
The main function of the descent is to map the scalar $\alpha_{k}$ produced by the search into a meaningful optimisation step.
For example, we can represent the preconditioned gradient step used in a line
search as a descent via
$\delta_{\text{LS}}(\alpha_{k},d_{k},\gamma_{k})=-\alpha_{k}(d_{k}^{(2)})^{-1}d_{k}^{(1)}$ (10)
where $d_{k}=(d^{(1)}_{k},d^{(2)}_{k})\in\mathcal{D}=\mathbb{R}^{N}\times\mathbb{R}^{N\times N}$. Breaking this descent down, consider applying (10) to a minimisation problem. If $d_{k}=(\operatorname{\nabla}f(x_{k}),H_{k})$, then (10) is equivalent to the standard preconditioned gradient step $\delta_{\text{LS}}(\alpha_{k},d_{k},\gamma_{k})=-\alpha_{k}H_{k}^{-1}\operatorname{\nabla}f(x_{k})$ with step-size $\alpha_{k}$. If $d_{k}=(r_{k},J_{k})$, the residual vector $r_{k}$ and Jacobian $J_{k}$ at step $k$, then (10) is the Gauss-Newton step with step-size $\alpha_{k}$ for a least-squares problem.
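Written out, this descent is just a linear solve rescaled by the search's scalar. The sketch below is ours, with illustrative names, not the Optimistix implementation:
⬇
import jax.numpy as jnp

def line_search_descent(alpha_k, d_k, descent_state):
    vector, operator = d_k                       # d_k^(1), d_k^(2)
    # The solve depends only on d_k, so it can be cached while alpha_k varies.
    full_step = jnp.linalg.solve(operator, vector)
    return -alpha_k * full_step, descent_state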
As another example, we can represent the trust-region subproblem as a descent
via
$\delta_{\text{TR}}(\alpha_{k},d_{k},\gamma_{k})\approx\operatorname*{arg\,min}_{p,\ \|p\|\leq\alpha_{k}}d_{k}^{(1)}+(d_{k}^{(2)})^{T}p+\frac{1}{2}p^{T}d_{k}^{(3)}p$ (11)
for
$d_{k}\in\mathcal{D}=\mathbb{R}\times\mathbb{R}^{N}\times\mathbb{R}^{N\times
N}$. For a minimisation problem this is
$d_{k}=(f(x_{k}),\operatorname{\nabla}f(x_{k}),H_{k})$. Here, the output of
the search is mapped not to a line search, but to the trust-region radius of a
trust-region algorithm.
$\alpha_{k}$ has a similar meaning in both of these algorithms: it is used to restrict the size of the update at step $k$ when confidence in the approximation $m_{k}$ is low. By abstracting the map from $\alpha_{k}$ to the optimiser update $\delta_{k}$, descents allow a search to be used in a variety of different algorithms without knowing exactly how the step-size will affect the optimiser update. This decoupling is novel to Optimistix, and is one of its most powerful features.
Optimistix descents, like classical line searches, allow certain values used
in their computation to be cached and reused. For example, in
$\delta_{\text{LS}}$, the value $(d^{(2)})^{-1}d^{(1)}$ is only computed once
as $\alpha_{k}$ varies.
Current descents implemented in Optimistix are: steepest descent, nonlinear
conjugate gradient descent (Shewchuk, 1994), direct and indirect damped Newton
(used in the Levenberg-Marquardt algorithm (Moré, 1978; Nocedal & Wright,
2006)), and dogleg (Nocedal & Wright, 2006)[section 4.1]. The indirect damped
Newton and dogleg descents both approximately solve (2) (see (Nocedal &
Wright, 2006)[sections 4.1 & 4.3]) and are capable of using any user-provided
function which maps a PyTree to a scalar as the norm $\|\cdot\|$.
Flattening the bilevel optimisation problem
The classical approach to implementing a line search is as a bilevel optimisation problem: an inner loop obtains the solution to the line search, while an outer loop iterates over the $x_{k}$.
The termination condition of the line-search in the inner loop, such as the
Armijo or Wolfe conditions, may evaluate $f(x_{k+1})$ when determining whether
to halt the inner loop. This can result in an extra function evaluation or
additional logic at each step of the outer loop, as $f(x_{k+1})$ is usually
computed in each step of the outer loop as well (e.g. when computing the gradient of $f$).
Not every search has termination conditions which require computing
$f(x_{k+1})$, so we cannot avoid an extra function evaluation by requiring the
inner loop to evaluate $f(x_{k+1})$ and pass this to the outer loop.
Optimistix gets around this by using step-rejection: a step $x_{k+1}$ is
proposed, and the outer loop computes $f(x_{k+1})$, and at the next step the
search decides whether to accept or reject $x_{k+1}$.
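A conceptual sketch of ours (not Optimistix internals) of this step-rejection pattern:
⬇
def outer_step(x_k, f_k, propose, f, accept):
    x_proposed = propose(x_k)
    f_proposed = f(x_proposed)          # single evaluation, shared with the search
    if accept(f_k, f_proposed):
        return x_proposed, f_proposed   # accept: move to the proposed iterate
    return x_k, f_k                     # reject: stay put and propose a new step next time
This way the objective is evaluated exactly once per outer iteration, regardless of whether the search's acceptance rule needs $f(x_{k+1})$.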
## 4 Converting Between Problem Types
Optimistix handles four types of nonlinear optimisation task, each with a
simple user API:
* •
Minimisation:
`optimistix.minimise`.
* •
Nonlinear least-squares solves:
`optimistix.least_squares`.
* •
Root-finding:
`optimistix.root_find`.
* •
Fixed-point iteration:
`optimistix.fixed_point`.
For example, to find a root of the function `fn` we call:
⬇
solver = optimistix.Newton(
rtol=1e-3, atol=1e-3
)
optimistix.root_find(fn, solver, y0)
where `y0` is an initial guess and `optimistix.Newton` is the Optimistix solver used to find the root. `rtol` and `atol` are relative and absolute tolerances used to determine whether the solver should terminate, as described in section 6.
Optimistix can convert between these problem types when appropriate. A fixed-
point iteration $f(x)=x$ is converted to a root-find problem by solving
$f(x)-x=0$. A root-find problem $f(x)=0$ is converted to a nonlinear least-
squares problem by using entries of $f(x)$ as the residuals:
$\min_{x}\sum_{i}f(x)_{i}^{2}$. A least-squares problem is already a special case of a minimisation problem, so we take the objective function $g(x)=\sum_{i}f(x)_{i}^{2}$ and provide this directly to a minimiser.
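A sketch of ours (not Optimistix internals) of these conversions, for a user function `f(x, args)`:
⬇
import jax.numpy as jnp

def fixed_point_to_root(f):
    # f(x) = x  becomes the root-find problem  f(x) - x = 0.
    return lambda x, args: f(x, args) - x

def root_to_residuals(f):
    # The entries of f(x) are used directly as least-squares residuals.
    return f

def residuals_to_objective(residual_fn):
    # g(x) = sum_i f(x)_i^2, passed to a minimiser.
    def objective(x, args):
        r = residual_fn(x, args)
        return jnp.sum(jnp.square(r))
    return objective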
Converting between problem types is not just a neat trick, but a common
pattern in optimisation, especially for root-finding and fixed-point
iteration. While standard methods for root-finding and fixed-point iteration,
such as Newton, chord, bisection, and fixed-point iteration (Nocedal & Wright,
2006; Bonnans et al., 2006) are available in Optimistix, these methods tend
not to work as well for complex, highly nonlinear problems. In this case,
common practice is to convert the fixed-point or root-find problem into a
least-squares or minimisation problem, and solve using a minimiser or
nonlinear least-squares solver (Nocedal & Wright, 2006)[section 11].
The conversion between problem types outlined above is done automatically: the
solver is checked, and the problem automatically converted and lowered to a
solve of that type. For example,
⬇
solver = optimistix.BFGS(
rtol=1e-3, atol=1e-3
)
optimistix.root_find(fn, solver, y0)
will convert the root-find problem to a minimisation problem and use the BFGS
algorithm (Nocedal & Wright, 2006)[section 6.1] to find a root.
## 5 Automatic Differentiation
All solvers in Optimistix are iterative, and provide two methods for automatic
differentiation:
* •
Differentiation via the implicit function theorem (Blondel et al., 2021)
* •
Differentiation via online treeverse (Stumm & Walther, 2010; Wang et al.,
2009).
The former is the default for both forward-mode autodiff and backpropagation
in Optimistix. Both are known automatic differentiation techniques; however,
to the best of our knowledge, Optimistix is the first nonlinear optimisation
library in JAX to feature online treeverse. We now discuss each of these in
turn.
### 5.1 Forward-Autodiff and Backpropagation Using the Implicit Function
Theorem
Consider a continuously differentiable function $F:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}$ and values $x_{0}\in\mathbb{R}^{n}$ and $\theta_{0}\in\mathbb{R}^{m}$ satisfying
$F(x_{0},\theta_{0})=0$ (12)
$\det\left(\frac{\mathrm{d}F}{\mathrm{d}x}(x_{0},\theta_{0})\right)\neq 0.$ (13)
The implicit function theorem (IFT) states that there exist neighborhoods $N_{x_{0}}\ni x_{0}$ and $N_{\theta_{0}}\ni\theta_{0}$ and a differentiable function $x^{*}:N_{\theta_{0}}\to N_{x_{0}}$ such that
$x^{*}(\theta_{0})=x_{0},\qquad F(x^{*}(\theta),\theta)=0\quad\forall\,\theta\in N_{\theta_{0}},$
and
$\frac{\mathrm{d}x^{*}}{\mathrm{d}\theta}=-\left(\frac{\mathrm{d}F}{\mathrm{d}x}(x_{0},\theta_{0})\right)^{-1}\frac{\mathrm{d}F}{\mathrm{d}\theta}(x_{0},\theta_{0}).$ (14)
The implicit function theorem gives a means of calculating the derivative
$\frac{\mathrm{d}x^{*}}{\mathrm{d}\theta}$, despite the fact that $x^{*}$ is
only defined implicitly, and is the solution to a nonlinear problem.
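A minimal sketch of ours of equation (14), using JAX autodiff for the two Jacobians (Optimistix performs this automatically for its solvers):
⬇
import jax
import jax.numpy as jnp

def implicit_derivative(F, x_star, theta):
    dF_dx = jax.jacobian(F, argnums=0)(x_star, theta)
    dF_dtheta = jax.jacobian(F, argnums=1)(x_star, theta)
    return -jnp.linalg.solve(dF_dx, dF_dtheta)     # dx*/dtheta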
Every optimisation task in Optimistix (minimisation, …) may be rewritten in
the form of equation (12), and thus the derivative of the solution (the
argminimum, …) with respect to any parameters $\theta$ may be found using
equation (14).
For root finding, equations (12-13) apply directly. For fixed-point iteration,
the system of nonlinear equations is $F(x,\theta)=f(x,\theta)-x$. For
minimisation and least-squares, $F(x,\theta)=\frac{\mathrm{d}f}{\mathrm{d}x}(x,\theta)$, as the minimum is found at a critical point of $f$.
We highly recommend (Blondel et al., 2021) for many more details on using the
implicit function theorem to differentiate nonlinear optimisation routines.
### 5.2 Backpropagation using Online Treeverse
An optimisation routine can also be differentiated naively, by backpropagating
through the components of the optimiser at each iteration $k$. This method
works even when the assumptions (12-13) of the implicit function theorem are
not satisfied, or the full Jacobian $\frac{dF}{dx}$ cannot be constructed due
to memory constraints. Further, it can be faster in some cases, especially
when the dimensionality of the domain of the optimisation problem is much
larger than the number of iterations $K$. See (Ablin et al., 2020) for more
details comparing the efficiency of these methods.
The main downside of the naive backpropagation algorithm for nonlinear
optimisation is memory cost. It costs $\mathcal{O}(KM)$ memory to
backpropagate through an optimisation, where $M$ is the memory cost of
backpropagating through a single iteration of the optimiser, and $K$ is the
total number of iterations used. The classical way around this limitation is
checkpointing, which chooses a fixed number $N\in\mathbb{N}$ of checkpoint
iterations, and stores the values of the optimiser only at those checkpointed
iterations, reducing the memory cost to $\mathcal{O}(NM)$. Iterations which
are not checkpointed are recomputed from the previous checkpoint, trading off
memory cost for extra computation time.
Online treeverse is a checkpointing backpropagation scheme designed for
performing backpropagation over loops where the number of iterations is a priori unknown. Online treeverse dynamically updates its checkpoint
locations as the loop continues, keeping optimal computation times for a fixed
number of checkpoints $c$. For further details see (Stumm & Walther, 2010) and
(Wang et al., 2009).
## 6 Termination Condition
Optimistix introduces a Cauchy convergence criterion, where iterations are stopped if
$\left\lvert\frac{f(x_{k+1})-f(x_{k})}{\varepsilon_{a}+\varepsilon_{r}f(x_{k})}\right\rvert<1\quad\text{and}\quad\left\lVert\frac{x_{k+1}-x_{k}}{\varepsilon_{a}+\varepsilon_{r}x_{k}}\right\rVert<1$
where $\varepsilon_{a}$ is an absolute tolerance set by the user, $\varepsilon_{r}$ is a relative tolerance set by the user, and division is performed elementwise. By default, the norm $\left\lVert\cdot\right\rVert$ is $\left\lVert\cdot\right\rVert_{\infty}$, i.e. the maximum over the elementwise absolute differences between $x_{k+1}$ and $x_{k}$. However, users can set this themselves to be any function from a PyTree to a scalar.
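With the default maximum norm, the test reduces to the sketch below (ours; eps_a and eps_r are the user-set absolute and relative tolerances):
⬇
import jax.numpy as jnp

def terminated(f_prev, f_new, x_prev, x_new, eps_a, eps_r):
    f_ok = jnp.abs((f_new - f_prev) / (eps_a + eps_r * f_prev)) < 1
    x_scaled = (x_new - x_prev) / (eps_a + eps_r * x_prev)
    x_ok = jnp.max(jnp.abs(x_scaled)) < 1
    return f_ok & x_ok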
To justify this choice, we note that the optimisation literature is not consistent in the termination criteria used (Nocedal & Wright, 2006), (Bonnans et al., 2006), (Conn et al., 2000), (Moré et al., 1980). For example, (SAS Institute Inc, 2004) outlines 20 different convergence criteria currently in use for nonlinear optimisation.
Without an established convention, we choose this termination condition to match the better-established convention in the literature on solving numerical differential equations (Hairer et al., 2008; Hairer & Wanner, 2002). This is a reasonable choice for optimisation, resembling (but not exactly matching) a combination of the ‘X-convergence’ and ‘F-convergence’ criteria in the highly successful MINPACK (Moré et al., 1980) optimisation suite. Alternatively, it is roughly a combination of all four of the ‘FTOL’, ‘ABSFTOL’, ‘XTOL’, and ‘ABSXTOL’ criteria found in (SAS Institute Inc, 2004).
This also ensures a consistent approach between Optimistix and existing JAX +
Equinox libraries, such as Diffrax (Kidger, 2021).
## 7 Experiments
### 7.1 Summary of Experiments
We demonstrate the fast compilation and run times of Optimistix by
benchmarking our solvers for both runtime and compile time against the state-
of-the-art optimisation algorithms in JAXopt (Blondel et al., 2021) and SciPy
(Virtanen et al., 2020). For SciPy, we only benchmark runtimes, since SciPy does not have a notion of compile times (however, we still compile the function passed to the SciPy solver; the overhead of this function compilation is excluded).
Table 1: Average minimum runtime comparison (milliseconds)

Optimiser | Optx | JAXopt | SciPy
---|---|---|---
BFGS | 0.836 | 16.4 | 250
Nonlinear CG | 0.466 | 6.27 | 208
LM1 | - | 1.70 | -
LM2 | 144 | - | 188
Gauss-Newton | 4.54 | 3.08 | N/A
Table 2: Average compile time comparison (milliseconds)

Optimiser | Optx | JAXopt
---|---|---
BFGS | 251 | 854
Nonlinear CG | 130 | 849
LM1 | - | 554
LM2 | 296 | -
Gauss-Newton | 288 | 416
Tables 1-2 present the average minimum runtime and average compile time for a set of over 100 test problems. Each problem is solved 10 times to a fixed tolerance with a maximum budget of 2000 iterations, and the minimum over those 10 runs is taken to be the runtime of the solver for that problem. The reported values are averaged over all the problems in the test set. The full methodology and additional comparisons are described in appendix A.
The average compile times for Optimistix (Optx) are up to 6.5 times faster
than the average compile times for JAXopt. The runtimes include all operations for a full solve; notably, they include the difference in the number of iterations arising from convergence criteria and step-sizes in Optimistix and JAXopt. The times for a single iteration are presented in appendix A, where JAXopt is marginally faster per iteration than Optimistix.
The difference between Levenberg-Marquardt (LM) algorithms LM1 and LM2 relates
to a technical detail in their implementation. At each iteration, LM performs
a linear solve of the equation $(J_{k}^{T}J_{k}+\lambda I)p=-J_{k}^{T}r_{k}$,
where $J_{k}$ is the Jacobian and $r_{k}$ is the residual of the objective
function at step $k$. This linear system can be solved directly with a
Cholesky or conjugate gradient solver, but the condition number of $J_{k}$ is
squared in $J_{k}^{T}J_{k}$. This is the approach used in LM1, which JAXopt
implements. Alternatively, this linear system can be transformed into an
equivalent least-squares problem which does not square the condition number,
but which requires a slower linear solve. This is the approach used in LM2,
which Optimistix and SciPy use. We accept the slower solve in return for
increased robustness and accuracy. This is detailed in (Moré, 1978) and
(Nocedal & Wright, 2006)[section 10]. A similar tradeoff is made for Gauss-
Newton. The difference in robustness is noticeable: Optimistix LM failed to
solve 3 of the test problems to the specified accuracy within 2000 iterations,
SciPy failed to solve 7, and JAXopt failed to solve 13.
Compile times in the table are 3-4 orders of magnitude longer than runtimes.
For solves with fewer than many thousands of iterations, the total time is
dominated by compilation, where Optimistix is faster.
### 7.2 Benchmarking with Performance Profiles
Performance profiles are an established technique for benchmarking
optimisation software (Dolan & Moré, 2001; Beiranvand et al., 2017).
Performance profiles are defined in terms of solver performance ratios.
Formally, let $\mathcal{S}$ be a collection of optimisers, and $\mathcal{P}$ a
collection of problems. Letting
$R_{s,p}=\text{the time for optimiser }s\text{ to solve problem }p\text{, excluding compilation time,}$
$C_{s,p}=\text{the time for optimiser }s\text{ to compile problem }p,$
the runtime and compile time performance ratios are:
$r^{R}_{s,p}=\frac{R_{s,p}}{\min\{R_{s,p}\colon s\in\mathcal{S}\}},\qquad r^{C}_{s,p}=\frac{C_{s,p}}{\min\{C_{s,p}\colon s\in\mathcal{S}\}}.$
The runtime and compile time performance profiles are then:
$\rho^{R}_{s}(\tau)=\frac{1}{|\mathcal{P}|}\left|\{p\in\mathcal{P}\colon r^{R}_{s,p}\leq\tau\}\right|,\qquad\rho^{C}_{s}(\tau)=\frac{1}{|\mathcal{P}|}\left|\{p\in\mathcal{P}\colon r^{C}_{s,p}\leq\tau\}\right|,$
i.e. $\rho^{R}_{s}(\tau)$ is the proportion of problems solver $s$ solved
within a factor $\tau$ of the minimum runtime attained by any solver in
$\mathcal{S}$. The value $\rho^{R}_{s}(1)$ is the proportion of problems where
solver $s$ had the best runtime performance in $\mathcal{S}$. For large values
of $\tau$, $\rho^{R}_{s}(\tau)$ represents the proportion of problems solver
$s$ accurately solved. Similarly, $\rho^{C}_{s}(\tau)$ is the proportion of
problems solved within a factor of $\tau$ of the best compile time.
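A sketch of ours of how such profiles are computed from a matrix of solve times (rows are solvers, columns are problems, with jnp.inf marking failures, as in appendix A):
⬇
import jax.numpy as jnp

def performance_profile(times, taus):
    ratios = times / jnp.min(times, axis=0, keepdims=True)          # r_{s,p}
    # For each solver s and threshold tau: fraction of problems with ratio <= tau.
    return jnp.mean(ratios[:, None, :] <= taus[None, :, None], axis=-1)

times = jnp.array([[1.0, 2.0, jnp.inf],
                   [2.0, 1.0, 3.0]])
taus = jnp.array([1.0, 2.0, 4.0])
profiles = performance_profile(times, taus)     # shape (num_solvers, num_taus)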
Figure 1: BFGS runtime comparison
Figure 2: Levenberg-Marquardt runtime comparison
Figure 3: BFGS compile time comparison
Figure 4: Levenberg-Marquardt compile time comparison
In figures 1-2 we see the runtime performance profiles of BFGS and Levenberg-Marquardt, with Optimistix in red, JAXopt in blue, and SciPy in purple (no distinction is made between LM1 and LM2 in the plots). The plots are log scale on the x-axis (representing $\tau$ in the performance profile) and range from $1$ to $2^{7}$. Optimistix performs the best of all solvers for BFGS. For Levenberg-Marquardt it is generally slower than JAXopt but solves more problems in total.
It may appear as though SciPy has not solved many of the test problems;
however, it is just more than 250 times slower and therefore does not show on
the plots.
In figures 3-4 we see a compile time comparison of BFGS and Levenberg-
Marquardt. Optimistix outperforms JAXopt on every problem for both optimisers.
## 8 Related Work
Gradient-based minimisation
Optax (Babuschkin et al., 2020) implements
algorithms for gradient-based minimisation in JAX. Optimistix has a broader
scope than Optax, handling other nonlinear optimisation tasks and second-order
optimisers. Optax and Optimistix are compatible libraries, and Optax
minimisers can be used within Optimistix via `optimistix.OptaxMinimiser`.
General-purpose optimisation
JAXopt (Blondel et al., 2021) is an excellent
differentiable linear and nonlinear optimisation library in JAX which includes
many optimisation tasks, including ones out-of-scope for Optimistix such as
quadratic programming and non-smooth optimisation.
We see Optimistix and JAXopt as having fundamentally different scopes and core
abstractions. JAXopt has a larger scope than Optimistix, but its focus is not
on modularity. While specific choices of line search algorithms can be
interchanged in some cases, for the most part introducing a new optimiser
requires writing the algorithm in its entirety. We recommend JAXopt for
general optimisation tasks.
SciPy (Virtanen et al., 2020) also offers general-purpose nonlinear
optimisation routines, many of which call into MINPACK (Moré et al., 1980), a
nonlinear optimisation library written in Fortran. SciPy includes many
optimisation tasks, including minimisation, least-squares, root-finding, and
global optimisation. However, SciPy implementations are not differentiable,
and are generally difficult to extend.
## 9 Conclusion
We introduced Optimistix, a nonlinear optimisation library for minimisation, least-squares, root-finding, and fixed-point iteration with a novel, modular approach.
## 10 Impact Statement
This paper presents work whose goal is to advance the field of Machine
Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
## References
* Ablin et al. (2020) Ablin, P., Peyré, G., and Moreau, T. Super-efficiency of automatic differentiation for functions defined as a minimum, 2020.
* Ali et al. (2005) Ali, M. M., Khompatraporn, C., and Zabinsky, Z. B. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. _Journal of Global Optimization_ , 2005.
* Andrei (2008) Andrei, N. An unconstrained optimization test functions collection. In _Advanced Modeling and Optimization, Volume 10_ , 2008. URL https://api.semanticscholar.org/CorpusID:63504217.
* Averick et al. (1992) Averick, B., Carter, R., Moré, J., and Xue, G.-L. The minpack test problem collection. Technical report, Office of Energy Research, US Department of Energy, 1992.
* Babuschkin et al. (2020) Babuschkin, I., Baumli, K., Bell, A., Bhupatiraju, S., Bruce, J., Buchlovsky, P., Budden, D., Cai, T., Clark, A., Danihelka, I., Dedieu, A., Fantacci, C., Godwin, J., Jones, C., Hemsley, R., Hennigan, T., Hessel, M., Hou, S., Kapturowski, S., Keck, T., Kemaev, I., King, M., Kunesch, M., Martens, L., Merzic, H., Mikulik, V., Norman, T., Papamakarios, G., Quan, J., Ring, R., Ruiz, F., Sanchez, A., Sartran, L., Schneider, R., Sezener, E., Spencer, S., Srinivasan, S., Stanojevi´c, M., Stokowiec, W., Wang, L., Zhou, G., and Viola, F. The deepmind jax ecosystem, 2020. URL http://github.com/deepmind.
* Beiranvand et al. (2017) Beiranvand, V., Hare, W., and Lucet, Y. Best practices for comparing optimization algorithms. _Optimization and Engineering_ , 18(4):815–848, 2017.
* Bezgin et al. (2022) Bezgin, D. A., Buhendwa, A. B., and Adams, N. A. Jax-fluids: A fully-differentiable high-order computational fluid dynamics solver for compressible two-phase flows. _Computer Physics Communications_ , pp. 108527, 2022.
* Blondel et al. (2021) Blondel, M., Berthet, Q., Cuturi, M., Frostig, R., Hoyer, S., Llinares-López, F., Pedregosa, F., and Vert, J.-P. Efficient and modular implicit differentiation. _arXiv preprint arXiv:2105.15183_ , 2021.
* Bonnans et al. (2006) Bonnans, F., Gilbert, C., Lemaréchal, C., and Sagastizábal, C. _Numerical Optimization: Theoretical and Practical Aspects_. Springer Berlin Heidelberg, 2006.
* Bradbury et al. (2018) Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. Jax: composable transformations of python+numpy programs, 2018. URL http://github.com/google/jax.
* Carroll (2023) Carroll, C. Bayeux. Accessed 2024, 2023. URL https://github.com/jax-ml/bayeux.
* Conn et al. (2000) Conn, A. R., Gould, N. I. M., and Toint, P. L. _Trust Region Methods_. Society for Industrial and Applied Mathematics, 2000.
* Dolan & Moré (2001) Dolan, E. D. and Moré, J. J. Benchmarking optimization software with performance profiles. _CoRR_ , cs.MS/0102001, 2001.
* Dresdner et al. (2022) Dresdner, G., Kochkov, D., Norgaard, P., Zepeda-Núñez, L., Smith, J. A., Brenner, M. P., and Hoyer, S. Learning to correct spectral methods for simulating turbulent flows. _arXiv_ , 2022. doi: 10.48550/ARXIV.2207.00556. URL https://arxiv.org/abs/2207.00556.
* Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. _Journal of Machine Learning Research_ , 12:2121–2159, 2011\.
* Freeman et al. (2021) Freeman, C. D., Frey, E., Raichuk, A., Girgin, S., Mordatch, I., and Bachem, O. Brax - a differentiable physics engine for large scale rigid body simulation, 2021. URL http://github.com/google/brax.
* Golovin et al. (2017) Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., and Sculley, D. Google vizier: A service for black-box optimization. In _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017_ , pp. 1487–1495. ACM, 2017. URL https://doi.org/10.1145/3097983.3098043.
* Gupta et al. (2018) Gupta, V., Koren, T., and Singer, Y. Shampoo: Preconditioned stochastic tensor optimization, 2018.
* Hairer & Wanner (2002) Hairer, E. and Wanner, G. _Solving Ordinary Differential Equations II Stiff and D ifferential-Algebraic Problems_. Springer, Berlin, second revised edition edition, 2002.
* Hairer et al. (2008) Hairer, E., Nørsett, S., and Wanner, G. _Solving Ordinary Differential Equations I Nonstiff P roblems_. Springer, Berlin, second revised edition edition, 2008.
* Hall et al. (2023a) Hall, D., Zhou, I., and Liang, P. Haliax. Accessed 2023, 2023a. URL https://github.com/stanford-crfm/haliax.
* Hall et al. (2023b) Hall, D., Zhou, I., and Liang, P. Levanter — legible, scalable, reproducible foundation models with jax. Accessed 2023, 2023b. URL https://github.com/stanford-crfm/levanter.
* Jamil & Yang (2013) Jamil, M. and Yang, X. S. A literature survey of benchmark functions for global optimisation problems. _International Journal of Mathematical Modelling and Numerical Optimisation_ , 4(2):150, 2013.
* Jumper et al. (2021) Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P., and Hassabis, D. Highly accurate protein structure prediction with AlphaFold. _Nature_ , 596(7873):583–589, 2021. doi: 10.1038/s41586-021-03819-2.
* Kidger (2021) Kidger, P. _On Neural Differential Equations_. PhD thesis, University of Oxford, 2021.
* Kidger & Garcia (2021) Kidger, P. and Garcia, C. Equinox: neural networks in jax via callable pytrees and filtered transformations. _Differentiable Programming workshop at Neural Information Processing Systems 2021_ , 2021.
* Kingma & Ba (2017) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization, 2017.
* Moré (1978) Moré, J. J. The levenberg-marquardt algorithm: Implementation and theory. In _Numerical Analysis_ , pp. 105–116, Berlin, Heidelberg, 1978\. Springer Berlin Heidelberg.
* Moré et al. (1981) Moré, J., Garbow, B., and Hillstrom, K. Testing unconstrained optimization software. Technical report, US Department of Energy, 1981.
* Moré & Thuente (1994) Moré, J. J. and Thuente, D. J. Line search algorithms with guaranteed sufficient decrease. _ACM Transactions on Mathematical Software_ , 20:286–307, 1994. ISSN 0098-3500.
* Moré et al. (1980) Moré, J. J., Garbow, B. S., and Hillstrom, K. E. User guide for minpack-1. Technical report, Argonne National Lab. (ANL), 1980.
* Nocedal & Wright (2006) Nocedal, J. and Wright, S. _Numerical Optimization (Second Edition)_. Springer New York, 2006.
* Pastrana (2023) Pastrana, R. flowmc. Accessed 2023, 2023. URL https://github.com/kazewong/flowMC.
* Pastrana et al. (2023) Pastrana, R., Oktay, D., Adams, R. P., and Adriaenssens, S. JAX FDM: A differentiable solver for inverse form-finding. In _ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators_ , 2023. URL https://openreview.net/forum?id=Uu9OPgh24d.
* Rader et al. (2023) Rader, J., Lyons, T., and Kidger, P. Lineax: unified linear solves and linear least-squares in jax and equinox. _AI for science workshop at Neural Information Processing Systems 2023, arXiv:2311.17283_ , 2023.
* Rosenbrock (1960) Rosenbrock, H. H. An Automatic Method for Finding the Greatest or Least Value of a Function. _The Computer Journal_ , 3(3):175–184, 1960.
* SAS Institute Inc (2004) SAS Institute Inc. _SAS/IML 9.1 User’s Guide_. SAS Institute Inc, 2004.
* Sherali & Ulular (1990) Sherali, H. D. and Ulular, O. Conjugate gradient methods using quasi-newton updates with inexact line searches. _Journal of Mathematical Analysis and Applications_ , 150:359–377, 8 1990. ISSN 0022247X.
* Shewchuk (1994) Shewchuk, J. R. An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, USA, 1994.
* Singh (2022) Singh, A. Eqxvision. Accessed 2023, 2022. URL https://github.com/paganpasta/eqxvision.
* Song et al. (2022) Song, X., Perel, S., Lee, C., Kochanski, G., and Golovin, D. Open source vizier: Distributed infrastructure and api for reliable and flexible black-box optimization. In _Automated Machine Learning Conference, Systems Track (AutoML-Conf Systems)_ , 2022.
* Stanojević & Sartran (2023) Stanojević, M. and Sartran, L. SynJax: Structured Probability Distributions for JAX. _arXiv preprint arXiv:2308.03291_ , 2023.
* Steihaug (1983) Steihaug, T. The conjugate gradient method and trust regions in large scale optimization. _SIAM Journal on Numerical Analysis_ , 20(3):626–637, 1983.
* Stumm & Walther (2010) Stumm, P. and Walther, A. New algorithms for optimal online checkpointing. _SIAM Journal on Scientific Computing_ , 32(2):836–854, 2010. doi: 10.1137/080742439.
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, İ., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. n. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. _Nature Methods_ , 17:261–272, 2020.
* Wang (2023) Wang, P. Palm - jax. Accessed 2023, 2023. URL https://github.com/lucidrains/PaLM-jax.
* Wang et al. (2009) Wang, Q., Moin, P., and Iaccarino, G. Minimal repetition dynamic checkpointing algorithm for unsteady adjoint calculation. _SIAM Journal on Scientific Computing_ , 31(4):2549–2567, 2009. doi: 10.1137/080727890.
## Appendix A Experiment details
### A.1 Methodology
We choose as our comparison set BFGS, nonlinear CG, Levenberg-Marquardt, and
Gauss-Newton, as these are the four nontrivial minimisation/least-squares
algorithms shared by Optimistix and JAXopt (i.e. gradient descent is excluded intentionally). Where solver implementations differ, we try to match them as closely as possible.
Specifically, the algorithms differ from their base implementations as follows:
* •
BFGS uses backtracking Armijo line search in both implementations (the default for JAXopt is zoom).
* •
Gauss-Newton uses a conjugate gradient solver on the normal equations in both implementations (the default for Optimistix is the polyalgorithm `AutoLinearSolver(well_posed=None)` from (Rader et al., 2023)).
The set of test problems consists of 104 minimisation problems, 63 of which
are least-squares problems, taken from the test collections (Jamil & Yang,
2013), (Ali et al., 2005), (Andrei, 2008), (Averick et al., 1992), and (Moré
et al., 1981). Levenberg-Marquardt and Gauss-Newton are only run on the least-squares problems. Solvers are initialised at canonical initialisations when
available (see (Moré et al., 1981) and (Andrei, 2008)).
Runtime is a noisy measurement: the computer an experiment is running on may be running background processes which we cannot control. To mitigate this, when assessing runtime we run each problem 10 times and take the minimum over these repeats. This indicates roughly the “best we can expect” from a given solver.
The global minima are not known for all test problems. On problems where
minima are known, we require that
$\frac{|f(x_{N})-f(x^{*})|}{\epsilon_{a}+\epsilon_{r}f(x^{*})}<1$
for an absolute tolerance $\epsilon_{a}$ and relative tolerance
$\epsilon_{r}$, and $x_{N}$ the argmin found by the solver. If this condition
fails, the runtime is set to `jnp.inf`. This information is automatically
incorporated into the performance profile; however, it is not included in the tables of average runtime performance. This is because there is no obvious way to penalise nonconvergence when comparing runtimes. To get around this, we ensure that in all cases the number of failures for an Optimistix solver was less than or equal to the number of failures for a JAXopt solver, so as not to give an unfair advantage to Optimistix.
Though this convergence criterion looks similar to the Cauchy termination condition in Optimistix, it is applied to $f(x_{N})-f(x^{*})$ and not $f(x_{k+1})-f(x_{k})$. $f(x^{*})$ is an unknown quantity to both solvers, so there is no advantage provided by the convergence criterion in Optimistix.
Finally, while the general formulation of a performance profile allows for
$\mathcal{S}$ to contain more than two optimisers, using performance profiles
with more than two optimisers requires careful interpretation. For example,
when comparing three optimisers, it is possible that the method which appears
to be second best performs worse than the method which appears to be third
best when these two methods are compared directly.
For this reason, we provide all the pairwise comparisons of Optimistix vs JAXopt and SciPy in A.2, with the Optimistix solver in red and the comparison solver (JAXopt or SciPy) in blue.
### A.2 Further experiments
For simplicity, we included a number of performance profiles with one-on-one
comparisons of Optimistix vs JAXopt and Optimistix vs SciPy. Also included is
a table of average minimum runtimes for a single optimiser run. Ultimately, no
user will use only a single optimiser run, so the aggregate information
presented in section 7 should be taken as more informative.
Average minimum runtime for a single optimiser run (milliseconds)

Optimiser | Optimistix | JAXopt | SciPy
---|---|---|---
BFGS | 0.151 | 0.138 | 2.20
Nonlinear CG | 0.123 | 0.0645 | 2.61
LM | 0.515 | 0.0917 | 3.31
Gauss-Newton | 0.393 | 0.0578 | N/A
Figure 5: Optimistix vs JAXopt BFGS
Figure 6: Optimistix vs JAXopt nonlinear CG
Figure 7: Optimistix vs JAXopt Gauss-Newton
Figure 8: Optimistix vs SciPy BFGS (left) and nonlinear CG (right)
Figure 9: Optimistix vs SciPy Levenberg-Marquardt runtimes
# S-RL Toolbox: Environments, Datasets and Evaluation Metrics for State
Representation Learning
Antonin Raffin, Ashley Hill, René Traoré, Timothée Lesort, Natalia Díaz-Rodríguez, David Filliat
U2IS, ENSTA ParisTech / INRIA FLOWERS Team http://flowers.inria.fr
Palaiseau, France
###### Abstract
State representation learning aims at learning compact representations from
raw observations in robotics and control applications. Approaches used for
this objective are auto-encoders, learning forward models, inverse dynamics or
learning using generic priors on the state characteristics. However, the
diversity in applications and methods makes the field lack standard evaluation
datasets, metrics and tasks. This paper provides a set of environments, data
generators, robotic control tasks, metrics and tools to facilitate iterative
state representation learning and evaluation in reinforcement learning
settings.
Keywords: Deep learning, reinforcement learning, state representation
learning, robotic priors
## 1 Introduction
Robotics control relies on compact and expressive representations of sensor
data, as the task objectives are often expressed in much smaller dimensions
than the sensor space dimension (e.g., the position of an object versus the
size of an image). These representations are usually hand crafted by human
experts, but deep-learning now makes it possible to avoid this feature
engineering by using end-to-end learning (e.g., learning a policy from raw
pixels). However, such an approach is mostly possible only in simulation, because it requires a huge amount of training data (usually millions of samples), which makes it impractical in the real world. To overcome this issue, State
Representation Learning (SRL) methods (Jonschkowski, 2018) can be used to
create an intermediate representation that should contain only useful
information to control a robot and thus simplify the policy learning task.
Many different SRL approaches have been proposed (see (Lesort et al., 2018)
for a review), but comparing their performances is challenging. A common
approach to evaluate learned representations is to compare performance in a
Reinforcement Learning (RL) setting. However, because of the instability of RL
algorithms and their cost, this should not be the only method used to assess
learned states. Moreover, this approach gives no means of interpreting a state representation, making it difficult to understand which information is encoded in this representation.
While Reinforcement Learning has well-established benchmarks, SRL has neither metrics nor a universal criterion to compare the different approaches. With that
in mind, we propose a set of environments with an increasing difficulty,
designed for comparing State Representation Learning (SRL) methods for robotic
control. We also introduce qualitative and quantitative metrics along with
visualization tools to facilitate the development and the comparison of SRL
algorithms. The proposed framework allows fast iteration and eases research of
new SRL methods by making it easy to produce statistically relevant results:
the simulated environments run at 250 FPS on an 8-core machine, which allows training an RL agent for 1 million steps in only 1 h (or generating 20k samples in less than 2 min). Environments, code and data are available at https://github.com/araffin/robotics-rl-srl.
In this paper, we first briefly present the reinforcement learning framework
and the main state representation learning approaches that are implemented,
before presenting the SRL Toolbox environments and datasets, the qualitative
and quantitative evaluation methods, and a set of experiments illustrating the
performances of the implemented approaches.
## 2 Reinforcement Learning and State Representation Learning
In this section, we introduce Reinforcement Learning (RL), as well as the
State Representation Learning (SRL) approaches we integrated in our framework.
### 2.1 Reinforcement Learning
In RL, an agent must learn to select the best action to maximize a reward it
will receive. More formally, in a given state $s_{t}$ ($t$ denotes the current
time-step), an agent performs an action $a_{t}$ and receives a reward $r_{t}$.
The learned behaviour, that should maximize the long-term discounted reward by
mapping states to actions, is called a policy: $a_{t}=\pi(s_{t})$.
In the most common settings, the state $s_{t}$ is either a low-dimensional representation given by a human expert, or corresponds to the raw observation (end-to-end learning). In order to differentiate these cases, we introduce the observation $o_{t}$, which corresponds to the raw sensor data, and use the term state only to refer to low-dimensional representations that can be provided by humans or learned using SRL.
In this paper, we work in the context of Markov Decision Processes (MDP),
where the next state of the system only depends on the previous state and the
taken action. As described above, the term observation refers to raw sensor data (mostly images), but does not imply partial observability as assumed in Partially Observable Markov Decision Processes (POMDP).
### 2.2 State Representation Learning
SRL (Lesort et al., 2018) aims at learning compact representations from raw
observations (e.g., learn a position $(x,y)$ directly from raw pixels) without
explicit supervision. Most of the time, the goal is to use that representation
to solve a task with RL. The idea is that a low-dimensional representation
should only keep the useful information and reduce the search space, thus
contributing to address two main challenges of RL: sample inefficiency and
instability. Moreover, a state representation learned for a particular task
may be transferred to related tasks and therefore speed up learning in
multiple task settings.
Using RL notations, SRL corresponds to learning a transformation $\varphi$ (in practice, a neural network) from the observation space $\mathcal{O}$ to the state space $\mathcal{S}$. Then, a policy $\pi$, that takes a state $s_{t}\in\mathcal{S}$ as input and outputs an action $a_{t}$, is learned to solve the task:
$o_{t}\xrightarrow[SRL]{\varphi}s_{t}\xrightarrow[RL]{\pi}a_{t}$ (1)
In the next sections, we present approaches of SRL that are implemented in our
toolbox. Each method is not mutually exclusive and can be combined to create
new models.
#### 2.2.1 Auto-encoders (AE, VAE)
A first approach to learn a state representation is to compress the
observation into a low dimensional state that is sufficient to reconstruct the
observation. This approach does not take advantage of the robotic context because it ignores the possible actions; it is therefore often combined with other objectives (e.g. a forward model), and provides a performance baseline.
We integrated Auto-Encoders (Baldi, 2012) and Variational Auto-encoders (VAE)
(Kingma and Welling, 2013), i.e., auto-encoders that enforce the latent
variables to follow a given distribution.
#### 2.2.2 Robotic Priors
To build a relevant representation of states, one can use prior knowledge
about the dynamics or physics of the world. This knowledge can account for
temporal continuity or causality principles that reflect the interactions of
an agent with its environment (Jonschkowski and Brock, 2015; Jonschkowski et
al., 2017). Robotic Priors are defined as objective functions that constrain
the state representation.
#### 2.2.3 Forward and Inverse Models
The dynamics of the world can be integrated by learning a forward model that
predicts state $s_{t+1}$ given state $s_{t}$ and action $a_{t}$. Constraints
on the state representation can be added by constraining the forward model,
for instance, by enforcing the system to follow linear dynamics (Watter et
al., 2015).
Another approach is to learn an inverse model (Shelhamer et al., 2017; Pathak
et al., 2017), which predicts the taken action $a_{t}$ given two successive
states $s_{t}$ and $s_{t+1}$. This enforces the states to encode information
about the dynamics, in order to recover the action needed for such transition.
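As a framework-agnostic sketch (ours; `encode`, `forward_model` and `inverse_model` stand for parameterised functions such as small neural networks, and actions are assumed continuous for the squared-error form used here), the two objectives can be written as:
⬇
import jax.numpy as jnp

def forward_loss(encode, forward_model, o_t, o_next, a_t):
    s_t, s_next = encode(o_t), encode(o_next)
    return jnp.mean((forward_model(s_t, a_t) - s_next) ** 2)    # predict s_{t+1}

def inverse_loss(encode, inverse_model, o_t, o_next, a_t):
    s_t, s_next = encode(o_t), encode(o_next)
    return jnp.mean((inverse_model(s_t, s_next) - a_t) ** 2)    # recover a_t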
#### 2.2.4 Combining Approaches
Auto-encoders tend to reconstruct everything (that is salient enough in the
observation), including static objects and irrelevant features for the task
(distractors), whereas forward and inverse models focus on the dynamics,
usually encoding the position of the controlled robot in the representation,
but not a goal that is not controlled by the actions.
However, these approaches are not mutually exclusive and can be combined to
create improved state representations. For instance, (Pathak et al., 2017)
combine both inverse and forward models and (Zhang et al., 2018) additionally
integrate an auto-encoder. In the same vein, Ha and Schmidhuber (2018) use a
VAE with a recurrent forward model to learn a state representation.
## 3 Datasets and Environments
In this section, we describe a set of environments with incremental
difficulty, designed to assess SRL algorithms for robotic control. They all
follow the interface defined by OpenAI Gym (Brockman et al., 2016), which
makes integration with RL algorithms easy.
Figure 1: Environments and datasets for state representation learning: mobile navigation, robotic arm, and real robot.
### 3.1 Environments Details
The settings we propose (Fig. 1) are variations of two environments: a 2D
environment with a mobile robot and a 3D environment with a robotic arm. In
all settings, there is a controlled robot and one or more targets (that can be
static, randomly initialized or moving). Each environment can either have a
continuous or discrete action space, and the reward can be sparse or shaped,
allowing us to cover many different situations.
Static & random target mobile navigation: This setting simulates a navigation
task using a small car resembling the task of (Jonschkowski and Brock, 2014),
with either a cylinder or a horizontal band on the ground as a goal, which can
be fixed or moving from episode to episode. The car can move in four
directions (forward, backward, left, right) and will get a +1 reward when
reaching the target, -1 when hitting walls, and 0 otherwise.
Static & random target robotic arm: This setting simulates a robotic arm
(Kuka), fixed on a table, with the task of pushing a button that may or may not move between episodes. The arm can be controlled either in the $x$, $y$ and
$z$ position using inverse kinematics, or directly controlling the joints. The
robot will get +1 reward when the arm pushes the button, -1 when it hits the
table (this will end the episode), and 0 otherwise. A variant adds
distractors, i.e., moving objects on the table irrelevant for solving the
task.
Simulated & real robotic arm: We used a real Baxter robot arm (Gazebo in
simulation) to perform the same button pushing task as the previous task, with
the same actions and the same rewards. The goal is to test the different
methods in a real world setup.
A ground truth state is defined in each scenario: the absolute robot position
in static scenarios and the relative position (w.r.t. the target) in moving
goal scenarios. Note that apart from providing all described environments, we
also provide the corresponding datasets used in our evaluations: images are
224x224 pixels, navigation datasets use 4 discrete actions (right, left,
forward, backward); robot arms use one more (down) action.
In section A.2, we provide baselines results (Ground Truth, Auto-Encoder, Raw
Pixels) for each environment.
### 3.2 Motivation of the Goal-Based Robotics Tasks
The environments proposed have several characteristics that make them suitable
for research and benchmarking.
Designed for Robotics: The proposed environments cover basic goal-based
robotics tasks: navigation for a mobile robot and reaching a desired position
for a robotic arm.
Designed for State Representation Learning: The simplicity of the environments
makes the extracted features easier to interpret (correlation can be computed
between learned states and position of relevant objects). It is also clear
what a good state representation should encode because of the small number of
important elements: there is only the controllable robot and the target.
Designed for Research: The environments have incremental difficulty, the
minimal number of variables for describing each environment (minimal state
dimension for solving the task with RL) is increasing from 2 (mobile robot
with static target) to 6 (robotic arm with random target). Having simple
environments of gradual difficulty is really important when developing new
methods. The proposed environments are also easily customizable so that they
cover all possibilities: reward can be sparse/dense, actions can be
continuous/discrete. Finally, our benchmark is completely free and fast (it
runs at 250 FPS on an 8-core machine with one GPU).
## 4 Evaluation of Learned State Representations
The most practical evaluation of SRL is assessing if the learned states can be
used for solving the task in RL. However, algorithm development can benefit
from other metrics and visually assessing the validity of the representation
being learned, for faster iteration and interpretation of the state embedding
space. We provide tools and metrics for this.
### 4.1 Qualitative Evaluation
Figure 2: Visual tools for analysing SRL; Left (Real-time SRL): live trajectory
of the robot in the state space. Centre (Interactive scatter): 3D scatter plot
of a state space; clicking on any point displays the corresponding observation.
Right (Latent visualization): reconstruction of the point in the state space
defined by the sliders. See complementary material for videos.
Qualitative evaluation in our case is the perceived utility of the state
representation using visualization tools. The perceived utility depends on the
task at hand. For example, the state representation of the robotic arm dataset
is expected to have a continuous and correlated change with respect to the arm
tip position. Three tools are proposed (Fig. 2):
Real-time SRL: This tool is used in conjunction with a graphical interface of
the simulated environment. It shows the correspondence between observation and
state by plotting the current position of the observation in the state
representation.
Interactive scatter: A clickable plot of the state representation, with the
reward defining the colour for each point. Here, we expect to visualize
structure in the state representation.
Latent visualization: This tool allows navigating the latent space by projecting
the state to the observation space. This is achieved either by reconstructing
the output (for AE and VAE), or by using a nearest-neighbour approach for the
models lacking a reconstruction.
To deal with state dimensions larger than three, PCA is used to visually
assess the learned representations in a qualitative manner.
These tools provide better insight into the learned representations across
different state representation learning settings. This is especially useful
in situations where the state dimension is greater than 3. This way, we are
able to validate different configurations before running more exhaustive and
time-consuming methods.
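As an illustration, a minimal sketch of this PCA-based inspection is given below; it assumes a NumPy array `states` of shape (N, state_dim) produced by one of the SRL methods and a matching `rewards` vector, and the function name is ours rather than part of the toolbox API.

```python
# Minimal sketch (not the toolbox API): project learned states to 3D with PCA
# and colour the scatter by reward, as in the visual tools described above.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)
from sklearn.decomposition import PCA

def plot_state_space(states, rewards, n_components=3):
    """states: (N, state_dim) learned states; rewards: (N,) rewards used for colouring."""
    proj = PCA(n_components=n_components).fit_transform(states)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(proj[:, 0], proj[:, 1], proj[:, 2], c=rewards, cmap="coolwarm", s=5)
    ax.set_xlabel("PC1"); ax.set_ylabel("PC2"); ax.set_zlabel("PC3")
    plt.show()
```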
### 4.2 Metrics
#### 4.2.1 KNN-MSE
We use an assessment of the representation’s quality based on a Nearest-
Neighbours approach (as in Sermanet et al. (2017)). While the nearest
neighbour coherence can be assessed visually, KNN-MSE (Lesort et al., 2017)
derives a quantitative metric from this information.
For a given observation $o$, we find the nearest neighbours of its associated
state $s$ in the learned state space, and project them in the ground truth
state space. Then we compute the average distance to its neighbours in the
latter space:
$\textrm{KNN-MSE}(s)=\frac{1}{k}\sum_{s^{\prime}\in
KNN(s,k)}||\tilde{s}-\tilde{s}^{\prime}||^{2}$ (2)
where $\textrm{KNN}(s,k)$ returns the $k$ nearest neighbours of $s$ (chosen
with the Euclidean distance) in the learned state space $\mathcal{S}$,
$\tilde{s}$ is the ground truth state associated to $s$, and
$\tilde{s}^{\prime}$ is the one associated to $s^{\prime}$. A low KNN-MSE
means that a neighbour in the ground truth is still a neighbour in the learned
representation, and thus, local coherence is preserved.
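A minimal sketch of this metric is given below; it assumes NumPy arrays `states` (learned states) and `gt_states` (the associated ground truth states), with one row per observation, and averages the per-state KNN-MSE over the whole dataset. The names are ours.

```python
# Sketch of Eq. (2): average squared ground-truth distance to the k nearest
# neighbours found in the *learned* state space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_mse(states, gt_states, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(states)
    _, idx = nn.kneighbors(states)               # k+1 because the closest point is the point itself
    neighbours = idx[:, 1:]
    diffs = gt_states[:, None, :] - gt_states[neighbours]   # (N, k, d_gt)
    return np.mean(np.sum(diffs ** 2, axis=-1))             # averaged over neighbours and states
```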
#### 4.2.2 Correlation
A matrix of Pearson correlation coefficients $\rho$ is computed for each
dimension pair ($s$, $\tilde{s}$), where $\tilde{s}$ is the ground truth (GT)
state, $s$ the learned state, and $\mu_{s}$ and $\sigma_{s}$ are the mean and
standard deviation, respectively, of state $s$:
$\rho_{s,\tilde{s}}=\frac{\mathbb{E}[(s-\mu_{s})*(\tilde{s}-\mu_{\tilde{s}})]}{\sigma_{s}*\sigma_{\tilde{s}}}$
(3)
We can visualize the correlation matrix to quantitatively assess the ability
of a model to encode relevant information in the states learned. For instance,
the correlation matrix in Figure 3 shows degrees of correlation between the
mobile robot position and the learned states. The plot illustrates that for
each dimension $i$ of the predicted states $s$, there is a correlation close
to 1 (in absolute value) with at least one dimension $j$ of the agent’s real
position $\tilde{s}_{j}$. Therefore, this gives measurable evidence that the
model was able to encode the position of the mobile robot.
Figure 3: Correlation matrix for mobile robot navigation dataset (static
target), between each dimension $s_{i}$ of predicted states $s$ and the ground
truth $\tilde{s}_{j}$. We consider the ground truth to be the agent’s real
position. The states (dimension=2) are learned by combining a forward and an
inverse model.
This visualization tool is quite useful for low-dimensional spaces. However,
for state spaces with a high number of dimensions, looking at the correlation
matrix becomes impractical. Therefore, we introduce the following measure,
named GTC for Ground Truth Correlation, which allows comparing the models'
ability to encode relevant information:
$GTC_{(i)}=\max\limits_{j}|\rho_{s,\tilde{s}}(i,j)|\in[0,1]$ (4)
with $i\in\llbracket 0,|\tilde{s}|\rrbracket$, $j\in\llbracket
0,|s|\rrbracket$, $\tilde{s}=[\tilde{s}_{1};...;\tilde{s}_{n}]$, and
$\tilde{s}_{k}$ being the $k^{th}$ dimension of the ground truth state vector.
For instance, in the Mobile Robot environment with random target, the ground
truth state is composed of the 2D robot position and 2D target position. That
is to say, the ground truth states have a dimension of 4: $|\tilde{s}|=4$.
The vector GTC gives for each component $i$ of the ground truth states
$\tilde{s}$, the maximum absolute correlation value between $\tilde{s}_{i}$
and any component of the predicted states $s$. Therefore, GTC measures the
similarity per component, between the learned states $s$ and the ground truth
states $\tilde{s}$.
We also introduce a metric, the mean of GTC, which allows comparing learned
states using a single scalar value:
$GTC_{mean}=\mathbb{E}[GTC]$ (5)
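The sketch below illustrates Eqs. (3)-(5): it computes the block of Pearson correlations between learned and ground-truth state dimensions and reduces it to the GTC vector and $GTC_{mean}$; the array names are ours.

```python
# Sketch of Eqs. (3)-(5): per-dimension ground truth correlation (GTC) and its mean.
import numpy as np

def gtc(states, gt_states):
    """states: (N, d_s) learned states; gt_states: (N, d_gt) ground truth states."""
    d_s = states.shape[1]
    corr = np.corrcoef(states.T, gt_states.T)   # (d_s + d_gt) x (d_s + d_gt) Pearson matrix
    rho = corr[:d_s, d_s:]                      # correlations rho(s_j, gt_i), shape (d_s, d_gt)
    gtc_vec = np.max(np.abs(rho), axis=0)       # max over learned dimensions for each GT dimension
    return gtc_vec, gtc_vec.mean()              # GTC and GTC_mean
```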
### 4.3 Quantitative Evaluation With Reinforcement Learning
Comparing the performance of RL algorithms, using the learned state
representations, is the most relevant approach to evaluate the SRL methods. To
do so, our framework integrates 8 algorithms (A2C, ACKTR, ACER, DQN, DDPG,
PPO1, PPO2, TRPO) from Stable-Baselines (Hill et al., 2018) (a fork of OpenAI
baselines (Dhariwal et al., 2017)), Augmented Random Search (ARS) (Mania et
al., 2018), Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES)
(Hansen et al., 2003) and Soft Actor Critic (SAC) (Haarnoja et al., 2018).
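As an example of how such an evaluation is typically launched with Stable-Baselines, the sketch below trains PPO2 on a Gym environment; CartPole-v1 is only a stand-in, since the identifiers of the toolbox's SRL-wrapped environments are not listed here.

```python
# Illustrative sketch: training PPO (Stable-Baselines) on a Gym environment.
# CartPole-v1 is a placeholder for an SRL-wrapped environment from the toolbox.
import gym
from stable_baselines import PPO2

env = gym.make("CartPole-v1")
model = PPO2("MlpPolicy", env, verbose=1)   # default hyperparameters
model.learn(total_timesteps=100_000)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```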
## 5 Experiments
We perform experiments on the proposed datasets using states learned with the
approaches described in Section 2.2 along with ground truth (GT). Here, we
report results obtained with PPO.
Table 1: Ground truth states (left), states learned (Inverse and Forward)
(centre), and RL performance evaluation (PPO) (right) in the mobile robot
environment. Colour denotes the reward, red for positive, blue for negative
and grey for null reward (left and center). The full resolution plot of the RL
performance evaluation can be found in appendix Fig. 4 (right)
Table 1 illustrates the qualitative evaluation of a state space learned by
combining forward and inverse models on the mobile robot environment. It also
shows the performance of PPO based on the states learned by several
approaches.
Dataset | Mobile-robot (1) | Robotic-arm (3) | Robotic-arm-real (4)
---|---|---|---
Static/Random Target | Static | Random | Static | Random | Static
Ground Truth | 0.0099 | 0.0164 | 0.0025 | 0.0025 | 0.0105
AE | 0.0168 | 0.7213 | 0.00336 | 0.0027 | 0.0179
VAE | 0.0161 | 0.1295 | 0.0032 | 0.0027 | 0.0177
Robotic Priors | 0.0200 | 0.0900 | 0.0029 | 0.0027 | 0.0213
Forward | 0.5111 | 1.1557 | 0.1425 | 0.2564 | 0.0796
Inverse | 0.0191 | 0.7703 | 0.0182 | 0.2705 | 0.0521
Fwd+Inv | 0.0164 | 0.7467 | 0.0176 | 0.2705 | 0.0368
Table 2: KNN-MSE results for each SRL method. Ground Truth (robot absolute or
relative position with respect to target object), auto-encoders, forward
(fwd), inverse (inv) models and combinations.
Table 2 shows the KNN-MSE for the different SRL approaches on the implemented
environments.
Table 3 gives the $GTC_{mean}$ metric for several approaches and the
associated RL performance using PPO. It shows that $GTC_{mean}$ is a good
indicator of the performance that can be obtained in RL: a good
disentanglement together with a high correlation with the ground truth state
(i.e. a high $GTC_{mean}$) leads to a higher mean reward in RL.
During the development of S-RL Toolbox, we learned some useful insights on
SRL. We observed that auto-encoders do not reconstruct objects if they are too
small. This is an issue if the object is relevant for the task. An inverse
model is usually sufficient to learn a coherent state representation (in our
case, the position of the controllable object). Continuity in the state space
is important to perform well in RL. Using real-time SRL tools, we note
discontinuities in the state space learned by AE, even if reconstruction error
is low, and that hinders RL.
Ground Truth Correlation | $x_{robot}$ | $y_{robot}$ | $x_{target}$ | $y_{target}$ | Mean | RL
---|---|---|---|---|---|---
Robotic Priors | 0.2 | 0.2 | 0.41 | 0.66 | 0.37 | 5.4 $\pm$ 3.1
Random | 0.68 | 0.65 | 0.34 | 0.31 | 0.50 | 163.4 $\pm$ 10.0
Supervised | 0.69 | 0.73 | 0.70 | 0.72 | 0.71 | 213.3 $\pm$ 6.0
Auto-Encoder | 0.52 | 0.51 | 0.24 | 0.23 | 0.38 | 138.5 $\pm$ 12.3
GT | 1 | 1 | 1 | 1 | 1 | 229.7 $\pm$ 2.7
Table 3: GTC and $GTC_{mean}$, and mean reward performance in RL (using PPO)
per episode after 2 million steps, with standard error for each SRL method in
the mobile robot with random target environment.
## 6 Related Work
In the RL literature, several classic benchmarks are used to compare
algorithms. For discrete actions, Atari Games from OpenAI Gym suite (Brockman
et al., 2016) are often adopted, whereas for continuous actions, MuJoCo
locomotion tasks (Todorov et al., 2012) are favoured. OpenAI recently open-
sourced robotics goal-based tasks (Plappert et al., 2018). These environments
are close to what we propose; however, they use a non-free physics engine,
observations are not images and customization is not easy. Also, our tools
were designed with SRL in mind and offer a gradual difficulty.
Regarding the SRL literature, very diverse environments are used, without
common metrics or visualizations. The SRL methods are usually only compared
with learning from raw observations. Jonschkowski and Brock (2015) assess the
quality of learned representations in a slot car scenario (similar to Lange et
al. (2012)) and a mobile navigation task (following Sprague (2009); Boots et
al. (2011)). More recently, Ha and Schmidhuber (2018) present their results on
Vizdoom and RaceCar environments. They show interesting insights of what was
learned by navigating in the latent space and projecting states back to the
pixel space (similar to Fig. 2). Zhang et al. (2018) use a subset of MuJoCo
tasks, with joints as input, and a binary maze environment. They perform an
insightful ablation study of their SRL model.
Our contribution is two-fold: we provide a framework integrated with RL
algorithms, environments and tools for SRL, along with implementations of the
main SRL methods.
## 7 Discussion and Conclusions
This paper presents a set of environments on which to perform SRL benchmarks
of incremental difficulty to solve tasks in RL, specifically, in robotics
control. Our proposed toolbox facilitates fast iterations, interpretability
and reproducibility, with a set of qualitative and quantitative metrics and
interactive visualization tools. We believe such a framework is needed to have
fair comparisons, focused on robotics control, among SRL methods.
Acknowledgments
This work is supported by the DREAM project (http://www.robotsthatdream.eu)
through the European Union Horizon 2020 FET research and innovation program
under grant agreement No 640891.
## A Implementation Details
### A.1 Datasets Details
In this section, we provide the parameters used to generate datasets, for the
results presented in Table 2. The simulated datasets were created with a
random policy, using PyBullet (Coumans et al., 2018), generating up to 30k
samples per minute on an 8-core machine (CPU: Intel Core i7-7700K, GPU: Nvidia
GeForce GTX 1080 Ti). The real Baxter dataset was recorded using ROS.
We used 10k samples of each dataset to learn a state representation. Reward is
sparse (see Section 3 for details) and actions are discrete (encoded as
integers) for all datasets (datasets can also be generated with continuous
actions and dense, shaped reward).
Each target object can be fixed or randomly positioned between episodes; what
we call an "absolute" or "relative" dataset refers to learning the absolute or
relative position of the robot with respect to the target object.
The image dimension in every dataset is 224x224 pixels.
Table 4: Dataset details. Distractors can be either static or moving.
Dataset | Reward | Actions | Distractors
---|---|---|---
Mobile-robot | Sparse | 4, Discrete ($\Delta$(X,Y) pos.) | No
Robotic-arm | Sparse | 5, Discrete ($\Delta$(X,Y,Z) pos.) | No
Robotic-arm-real | Sparse | 5, Discrete ($\Delta$(X,Y,Z) pos.) | Yes
### A.2 Baselines Results
In this section, we provide baseline results for each environment.
Budget (in timesteps) | 1 Million | 2 Million
---|---|---
Ground Truth | 198.0 $\pm$ 16.1 | 211.6 $\pm$ 14.0
Raw Pixels | 177.9 $\pm$ 15.6 | 215.7 $\pm$ 9.6
Auto-Encoder | 159.8 $\pm$ 16.1 | 188.8 $\pm$ 13.5
Table 5: Mean reward performance in RL (using PPO) per episode (average on 100 episodes) for different budgets, with standard error in Navigation 1D target environment.
Budget (in timesteps) | 1 Million | 2 Million | 3 Million | 5 Million
---|---|---|---|---
Ground Truth | 227.8 $\pm$ 2.8 | 229.7 $\pm$ 2.7 | 231.5 $\pm$ 1.9 | 234.4 $\pm$ 1.3
Raw Pixels | 136.3 $\pm$ 11.5 | 188.2 $\pm$ 9.4 | 214.0 $\pm$ 5.9 | 231.5 $\pm$ 3.1
Auto-Encoder | 97.0 $\pm$ 12.3 | 138.5 $\pm$ 12.3 | 167.7 $\pm$ 11.1 | 192.6 $\pm$ 8.9
Table 6: Mean reward performance in RL (using PPO) per episode (average on 100 episodes) for different budgets, with standard error in Navigation 2D random target environment.
Budget (in timesteps) | 1 Million | 2 Million | 3 Million | 5 Million
---|---|---|---|---
Ground Truth | 4.1 $\pm$ 0.5 | 4.1 $\pm$ 0.6 | 4.1 $\pm$ 0.6 | 4.2 $\pm$ 0.5
Raw Pixels | 0.6 $\pm$ 0.3 | 0.8 $\pm$ 0.3 | 1.2 $\pm$ 0.3 | 2.6 $\pm$ 0.3
Auto-Encoder | 0.92 $\pm$ 0.3 | 1.6 $\pm$ 0.3 | 2.2 $\pm$ 0.3 | 3.4 $\pm$ 0.3
Table 7: Mean reward performance in RL (using PPO) per episode (average on 100 episodes) for different budgets, with standard error in robotic arm with random target environment.
Budget (in timesteps) | 1 Million | 2 Million | 3 Million | 5 Million
---|---|---|---|---
Ground Truth | 4.3 $\pm$ 0.3 | 4.4 $\pm$ 0.2 | 4.4 $\pm$ 0.2 | 4.6 $\pm$ 0.2
Raw Pixels | 0.8 $\pm$ 0.3 | 1.0 $\pm$ 0.3 | 1.2 $\pm$ 0.3 | 2.0 $\pm$ 0.3
Auto-Encoder | 1.17 $\pm$ 0.3 | 1.5 $\pm$ 0.3 | 1.9 $\pm$ 0.4 | 3.0 $\pm$ 0.4
Table 8: Mean reward performance in RL (using PPO) per episode (average on 100
episodes) for different budgets, with standard error in robotic arm with
moving target environment.
### A.3 Reinforcement Learning Experiments
#### A.3.1 Reinforcement Learning
We use the implementations from Stable-Baselines (Hill et al., 2018) for the
RL experiments (except for CMA-ES, ARS and SAC, which we implemented
ourselves), with the default hyperparameters set by OpenAI Baselines. Each
experiment is run with 10 random seeds in order to obtain quantitative results.
As for the network of the policies, the same architecture is used in all
different methods. For the SRL and ground truth approaches, it is a 2-layer
MLP, whereas for learning from raw pixels, it is the CNN from Mnih et al.
(2015) implemented in OpenAI baselines.
All CNN policies normalize the input image by dividing it by 255. Observations
are not stacked. When learning from SRL, the states are normalized using a
running mean/std average. Reinforcement learning metrics reported are the
average returned rewards over 5 policies, independently trained using the same
RL algorithm with a different seed.
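A possible implementation of such a running mean/std normalization is sketched below (Welford's online algorithm); it is our own illustration and not necessarily the exact variant used in the toolbox.

```python
# Sketch of a running mean/std state normalizer (Welford's online algorithm).
import numpy as np

class RunningNorm:
    def __init__(self, dim, eps=1e-8):
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)     # sum of squared deviations from the running mean
        self.count = 0
        self.eps = eps

    def update(self, s):
        self.count += 1
        delta = s - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (s - self.mean)

    def normalize(self, s):
        std = np.sqrt(self.m2 / max(self.count, 1)) + self.eps
        return (s - self.mean) / std
```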
#### A.3.2 State Representation Learning
For the SRL methods, we used an altered form of ResNet (Table 9), which is
more compact for our needs. However, for forward and inverse models, we used a
linear model in addition to the CNN for learning a state representation. We
are using the Adam (Kingma and Ba, 2014) optimizer in all our models, with
learning rates that vary from $10^{-2}$ to $10^{-3}$. The batch size of
forward and inverse is 128, 256 for robotic priors, and 32 for the other
models.
Table 9: CNN architecture for SRL methods. The first layers are inspired by the ResNet architecture (He et al., 2015).
Layer | Architecture
---|---
1 | Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) + BN + ReLU
2 | MaxPool2d(kernel_size=3, stride=2, padding=1)
3 | Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False) + BN + ReLU
4 | MaxPool2d(kernel_size=3, stride=2)
5 | Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=False) + BN + ReLU
6 | MaxPool2d(kernel_size=3, stride=2)
7 | Linear(6 * 6 * 64, state_dim)
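For reference, the architecture of Table 9 can be written in PyTorch as in the sketch below; with 224x224 RGB inputs the feature map entering the final linear layer is 6x6x64, consistent with the table. The class name and the input scaling are ours.

```python
# Sketch of the Table 9 CNN in PyTorch: 224x224x3 image -> state_dim vector.
import torch
import torch.nn as nn

class SRLConvNet(nn.Module):
    def __init__(self, state_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.fc = nn.Linear(6 * 6 * 64, state_dim)   # 224x224 inputs yield a 6x6x64 feature map

    def forward(self, x):
        x = self.features(x / 255.0)                 # scale pixel values to [0, 1]
        return self.fc(torch.flatten(x, start_dim=1))
```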
#### A.3.3 Results
Fig. 4 shows the average reward during learning of the PPO algorithm.
Figure 4: Performance (mean and standard error for 10 runs) of the PPO algorithm
for different state representations learned in the mobile-robot-navigation (random
target) environment.
Figure 5: Performance (mean and standard error) of RL algorithms using ground
truth states in the mobile robot (random target) environment.
We observed during the development of the _S-RL Toolbox_ that PPO was one of
the best RL algorithms for SRL benchmarking: it obtains good performance and
is consistent without needing to change any hyperparameters. Figure 5
illustrates this, in addition to the variability of the RL algorithms.
## B Supplementary material
The visualization and interactive state space exploration tools are
demonstrated in the following videos:
* S-RL Toolbox Showcase: https://youtu.be/qNsHMkIsqJc
* S-RL Toolbox Environments: https://tinyurl.com/y973vhfy
* Kuka robot arm: RL running PPO (SRL trained with VAE): https://tinyurl.com/yarpbs2c
* Kuka robot arm environment: State Representation Learning Benchmark running PPO on ground truth states: https://tinyurl.com/yd549l7o
## References
* Baldi (2012) Pierre Baldi. Autoencoders, unsupervised learning, and deep architectures. In _Proceedings of ICML Workshop on Unsupervised and Transfer Learning_ , pages 37–49, 2012.
* Boots et al. (2011) Byron Boots, Sajid M Siddiqi, and Geoffrey J Gordon. Closing the learning-planning loop with predictive state representations. _The International Journal of Robotics Research_ , 30(7):954–966, 2011.
* Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. _CoRR_ , abs/1606.01540, 2016. URL http://arxiv.org/abs/1606.01540.
* Coumans et al. (2018) E. Coumans, Y. Bai, and J. Hsu. Pybullet physics engine. http://pybullet.org/, 2018.
* Dhariwal et al. (2017) Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Openai baselines. https://github.com/openai/baselines, 2017.
* Ha and Schmidhuber (2018) D. Ha and J. Schmidhuber. World Models. _ArXiv e-prints_ , March 2018.
* Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. _arXiv preprint arXiv:1801.01290_ , 2018.
* Hansen et al. (2003) Nikolaus Hansen, Sibylle D Müller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). _Evolutionary computation_ , 11(1):1–18, 2003.
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. _CoRR_ , abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.
* Hill et al. (2018) Ashley Hill, Antonin Raffin, Rene Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Stable baselines. https://github.com/hill-a/stable-baselines, 2018.
* Jonschkowski (2018) Rico Jonschkowski. Learning robotic perception through prior knowledge. 2018.
* Jonschkowski and Brock (2014) Rico Jonschkowski and Oliver Brock. State representation learning in robotics: Using prior knowledge about physical interaction. In _Proceedings of Robotics: Science and Systems_ , July 2014.
* Jonschkowski and Brock (2015) Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. _Autonomous Robots_ , 39(3):407–428, 2015. ISSN 0929-5593.
* Jonschkowski et al. (2017) Rico Jonschkowski, Roland Hafner, Jonathan Scholz, and Martin A. Riedmiller. PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations. _CoRR_ , abs/1705.09805, 2017. URL http://arxiv.org/abs/1705.09805.
* Kingma and Welling (2013) D. P Kingma and M. Welling. Auto-Encoding Variational Bayes. _ArXiv e-prints_ , December 2013.
* Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _CoRR_ , abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
* Lange et al. (2012) Sascha Lange, Martin Riedmiller, and Arne Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In _Neural Networks (IJCNN), The 2012 International Joint Conference on_ , pages 1–8. IEEE, 2012.
* Lesort et al. (2017) Timothée Lesort, Mathieu Seurin, Xinrui Li, Natalia Díaz-Rodríguez, and David Filliat. Unsupervised state representation learning with robotic priors: a robustness benchmark. _CoRR_ , abs/1709.05185, 2017. URL http://arxiv.org/abs/1709.05185.
* Lesort et al. (2018) Timothée Lesort, Natalia Díaz-Rodríguez, Jean-François Goudou, and David Filliat. State representation learning for control: An overview. _Neural Networks_ , 2018. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2018.07.006. URL http://www.sciencedirect.com/science/article/pii/S0893608018302053.
* Mania et al. (2018) Horia Mania, Aurelia Guy, and Benjamin Recht. Simple random search provides a competitive approach to reinforcement learning. _arXiv preprint arXiv:1803.07055_ , 2018.
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_ , 518(7540):529–533, 2015.
* Pathak et al. (2017) Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In _ICML_ , 2017.
* Plappert et al. (2018) Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, and Wojciech Zaremba. Multi-goal reinforcement learning: Challenging robotics environments and request for research. _CoRR_ , abs/1802.09464, 2018. URL http://arxiv.org/abs/1802.09464.
* Sermanet et al. (2017) Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. _CoRR_ , abs/1704.06888, 2017. URL http://arxiv.org/abs/1704.06888.
* Shelhamer et al. (2017) Evan Shelhamer, Parsa Mahmoudieh, Max Argus, and Trevor Darrell. Loss is its own reward: Self-supervision for reinforcement learning. _arXiv preprint arXiv:1612.07307_ , 2017.
* Sprague (2009) Nathan Sprague. Predictive projections. In _IJCAI_ , pages 1223–1229, 2009.
* Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In _Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on_ , pages 5026–5033. IEEE, 2012.
* Watter et al. (2015) Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems 28_ , pages 2746–2754. Curran Associates, Inc., 2015.
* Zhang et al. (2018) Amy Zhang, Harsh Satija, and Joelle Pineau. Decoupling dynamics and reward for transfer learning. _arXiv preprint arXiv:1804.10689_ , 2018.
# Hybrid Star Properties with NJL and MFTQCD Model: A Bayesian Approach
Milena Albino, Tuhin Malik, Márcio Ferreira, and Constança Providência
Department of Physics, CFisUC, University of Coimbra, P-3004 - 516 Coimbra, Portugal
###### Abstract
Context: The composition of the core of neutron stars is still under debate.
One possibility is that because of the high densities reached in their cores,
matter could be deconfined into quark matter.
Aims: In this work, we investigate the possible existence of hybrid stars,
using microscopic models to describe the different phases of matter. Within
the adopted microscopic models we aim at calculating both the properties of
neutron stars and the properties of matter. In particular, we want to probe
the influence of pQCD calculations and analyze the properties that identify a
transition to deconfined matter.
Methods: A Bayesian approach using a Markov-Chain Monte Carlo sampling process
is applied to generate 8 sets of equations of state. A Maxwell construction is
adopted to describe the deconfinement transition. For the hadronic phase, we
consider a stiff and a soft EOS obtained from the Relativistic Mean Field
(RMF) model with non-linear meson terms. For the quark phase, we use two
different models: the Nambu-Jona-Lasinio (NJL) model with multiquark
interactions and the Mean Field Theory of QCD, a model similar to the MIT bag
model with a vector term. Bayesian inference was applied to determine the
model parameters that satisfy the X-ray observations from NICER and yield a
phase transition at densities between 0.15 and 0.40 fm-3. We have also applied
restrictions from the pQCD calculations to half of the sets.
Results: Hybrid stars are compatible with current observational data. The
restrictions of pQCD reduce the value of the maximum mass. However, even when
applying this restriction, the models were able to reach values of
$M_{\text{max}}=2.1-2.3M_{\odot}$. The conformal limit was still not attained
at the center of the most massive stars. The vector interactions are essential
to describe hybrid stars with a mass above 2$M_{\odot}$. The multiquark
interactions introduced may affect the limits of some quantities considered as
indicators of the presence of a deconfined phase. It is possible to find sets
of EOS that predict that the renormalized matter trace anomaly is always
positive inside NSs.
## I Introduction
Neutron stars (NSs) represent some of the densest forms of matter observable
in the universe, providing unique natural laboratories for studying the
extreme states of quantum chromodynamics (QCD) [1, 2]. The cores of NSs, in
particular, may host a myriad of exotic phases of matter, offering a window
into the behavior of QCD under such extreme conditions [3, 4]. Theoretical
predictions based on perturbative QCD (pQCD) suggest the presence of
deconfined quark matter at extraordinarily high baryon densities,
approximately forty times greater than the density of nuclear matter at
saturation $\rho_{0}$ [5]. The most challenging region to study theoretically
is, however, at intermediate densities, i.e. a few times the nuclear matter
saturation density, which is precisely the regime relevant for the matter in
the core of NSs [3].
At these intermediate densities, fundamental calculations like Lattice QCD
face significant computational obstacles, particularly the well-known sign
problem in simulations at finite baryon densities. The current ability of
chiral effective field theory ($\chi$EFT) to accurately compute the EOS at
relatively low baryon densities, below twice saturation density [6, 7] does
not cover the range of densities that is hypothesized in neutron star (NS)
cores. In contrast, various effective models that suggest a range of novel and
exotic phases of quark matter in the intermediate density have been studied
over the past decades. These include pion superfluidity [8, 9, 10], different
types of color superconductivity such as two-flavor color superconductivity
[11, 12, 13, 14] and the color flavor locked (CFL) phase [15], along with more
intricate configurations like the Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) [16,
17] and crystalline superconductivity phases. To support the possibility of
quark matter cores within neutron stars, recent model-independent studies
combining astrophysical observations and theoretical calculations have shown
that the interior properties of the most massive neutron stars align with the
characteristics of a deconfined quark phase, suggesting the existence of
sizable quark-matter cores in stars approaching or exceeding two solar masses
[18, 19, 20]. However, no information on the composition was considered in the
intermediate density regime. The behavior of some quantities, such as the speed
of sound, the polytropic index [18], the renormalized trace anomaly
$\Delta=1/3-P/\epsilon$ [21] and the measure of conformality
$d_{c}\equiv\sqrt{\Delta^{2}+\Delta^{\prime 2}}$, where $\Delta^{\prime}$ is
the logarithmic derivative with respect to the energy density [22], was
analyzed, and these quantities were proposed as possible indicators of the
existence of a quark core in NSs. These analyses were carried out considering
agnostic, model-independent descriptions of the EOS.
One of the aims of this work is to describe hybrid stars considering
physically justified hadron and quark models in order to study the possible
existence of a quark core inside a NS. The quantities that have been proposed
to identify the presence of a quark core within agnostic models will be
studied considering these microscopic hybrid EOS, and confronted with the
values proposed as indicative of the presence of deconfined matter.
Since the formulation of QCD, the possible existence of quarks inside neutron
stars has been questioned [3]. One of the first models of hybrid stars [23]
predicted that the quark-hadron transition would only happen around
$10\rho_{0}$, making the existence of hybrid stars very unlikely. However, as
new models emerged, the results pointed to a phase transition at lower
densities, in which it was possible to have quarks inside neutron stars. Since
then, various studies have been carried out, considering different hadronic
models [24, 25, 26, 6], hybrid models [13, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41] or even model-independent studies [18, 19, 21, 22].
Despite numerous studies and the rapid development of pulsar detection
techniques, it is still not possible to determine with confidence the
existence of quarks inside NSs. In this work, we apply a Bayesian inference
approach to generate numerous hybrid equations of state (EOS) that satisfy the
observational data, in order to explore the possible existence of hybrid
stars. We adopt the two-model approach, in which the Relativistic Mean Field
(RMF) model, as in [25], describes the hadron phase, and the chirally symmetric
SU(3) Nambu-Jona-Lasinio (NJL) model [42, 43, 44] with four-quark and eight-
quark interaction terms [45, 29] describes the quark phase. We also obtain
hybrid EOS sets using the Mean Field Approximation of QCD (MFTQCD). The MFTQCD
model is derived from the QCD Lagrangian, considering a gluon field
decomposition into soft and hard momentum components. A mean field approximation
is considered for the hard gluons, and their contribution stiffens the EOS,
allowing for high pressures at high energy densities. The soft gluon fields
give rise to condensates that soften the EOS and produce a bag-like term
[46].
In 1961, before the formulation of QCD, Yoichiro Nambu and Giovanni Jona-
Lasinio proposed the Nambu-Jona-Lasinio (NJL) model [47, 48]. In the original
idea, the Dirac particle mass was assumed to be produced by the nucleon
interaction, in analogy with superconductivity in BCS theory. Later, the NJL
model was adapted to have quarks and gluons as degrees of freedom. It became
an effective theory of QCD, being a good approximation in the low energy
limit. In the NJL model, gluons are omitted and the quark interaction is
assumed to be point-like. Although the color confinement property cannot be
described by the NJL model, it reproduces all the QCD global symmetries and the
spontaneous chiral symmetry breaking; see [49] for a review. The NJL model is
suitable in the context of NS and has been widely used to describe the
deconfined phase in hybrid stars [44, 50, 51, 52, 53, 54, 13, 55, 29, 45].
Some of the studies discussing hybrid stars also apply a Bayesian inference to
constrain the model parameters, either using a grid approach [31, 38, 40, 41]
or a Markov-Chain Monte Carlo sampling process [39]. Different models are
used, a metamodeling description for the hadronic [38, 39], or a RMF model
[31] together with a quark phase described by the NJL model [31, 38] or a
quark metamodel [39], in all cases describing the deconfinement phase
transition by a Maxwell construction, or a parametrized smooth crossover
between the hadronic phase described by a RMF EOS and a quark phase described
within a constituent quark model [41].
In a previous work [25], Bayesian inference was used to explore the NSs EOS
with a RMF model. However, it was assumed that NSs were made entirely of
hadronic matter. In this work, we explore the possibility of the existence of quark
matter inside NS using microscopic models. For the hadron phase, we choose two
equations of state from [25] with extreme properties, in particular, a soft
and a stiff EOS. The quark phase is described by the Nambu-Jona-Lasinio (NJL)
model and the MFTQCD model. Bayesian inference is used to obtain large samples
of EOS that satisfy the imposed constraints, which include the NICER x-ray
observations from PSR J0030+0451 [56, 57] and PSR J0740+6620 [58, 59] and a
constraint in the phase transition density, which we consider should occur
above saturation density. In [60] it was found that pQCD calculations can have
an influence at intermediate densities, if thermodynamic relations and
causality are taken into account. Therefore, we also impose these pQCD
constraints in some of the generated sets to analyze their effect on the
results for NSs.
This article is structured as follows: Section II gives a brief overview of
the NJL, MFTQCD and RMF models. The Bayesian inference is described in Section
III. The results are presented and discussed in Section IV while the
conclusions are drawn in Section V.
## II Model
In this section, we present the quark and the hadron models separately. To
build the hybrid equations of state, we assume a first-order quark-hadron phase
transition and apply the Maxwell construction.
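For illustration, a minimal sketch of the Maxwell construction is given below: assuming the hadronic and quark pressures are available as functions of the baryon chemical potential, the deconfinement point is located where the two pressures coincide at equal chemical potential. The function names and the bracketing interval are ours.

```python
# Minimal sketch of the Maxwell construction for a first-order phase transition.
from scipy.optimize import brentq

def transition_point(p_hadron, p_quark, mu_min=950.0, mu_max=2000.0):
    """p_hadron(mu), p_quark(mu): pressures (MeV/fm^3) as functions of mu_B (MeV)."""
    mu_t = brentq(lambda mu: p_hadron(mu) - p_quark(mu), mu_min, mu_max)
    return mu_t, p_hadron(mu_t)        # assumes a single crossing within the bracket

def hybrid_pressure(mu, mu_t, p_hadron, p_quark):
    """Hadronic branch below mu_t, quark branch above it."""
    return p_hadron(mu) if mu < mu_t else p_quark(mu)
```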
### II.1 Quark phase
We obtained sets considering the Nambu-Jona-Lasinio (NJL) model (Section
II.1.1) and sets considering the Mean Field Theory of QCD (Section II.1.2). A
brief review of both models is presented here.
#### II.1.1 Nambu-Jona-Lasinio (NJL)
In this work, we consider the SU(3) Nambu-Jona-Lasinio (NJL) Lagrangian
given by
$\displaystyle\mathcal{L}_{\text{NJL}}$
$\displaystyle=\bar{\psi}\left(i\not{\partial}-m+\mu\gamma^{0}\right)\psi$
$\displaystyle\quad+\frac{G}{2}\left[\left(\bar{\psi}\lambda_{a}\psi\right)^{2}+\left(\bar{\psi}i\gamma^{5}\lambda_{a}\psi\right)^{2}\right]$
$\displaystyle\quad+\mathcal{L}_{\text{'t Hooft}}+\mathcal{L}_{\text{I}},$ (1)
where $\mathcal{L}_{\text{'t Hooft}}$ is the ’t Hooft term, introduced to
simulate the U(1)$_{A}$ symmetry breaking,
$\mathcal{L}_{\text{'t
Hooft}}=8\kappa\left[\det\left(\bar{\psi}P_{R}\psi\right)+\det\left(\bar{\psi}P_{L}\psi\right)\right],$
(2)
where the determinant is taken in flavor space. However, these terms
are not enough to reach the $2M_{\odot}$ maximum-mass constraint. For this
reason, we add 4- and 8-quark interaction terms:
$\displaystyle\mathcal{L}_{\text{I}}$
$\displaystyle=-G_{\omega}\left[\left(\bar{\psi}\gamma^{\mu}\lambda_{0}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda_{0}\psi\right)^{2}\right]$
$\displaystyle\quad-
G_{\rho}\sum_{a=1}^{8}\left[\left(\bar{\psi}\gamma^{\mu}\lambda^{a}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda^{a}\psi\right)^{2}\right]$
$\displaystyle\quad-
G_{\omega\omega}\left[\left(\bar{\psi}\gamma^{\mu}\lambda_{0}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda_{0}\psi\right)^{2}\right]^{2}$
$\displaystyle\quad-
G_{\sigma\omega}\sum_{a=0}^{8}\left[\left(\bar{\psi}\lambda_{a}\psi\right)^{2}+\left(\bar{\psi}i\gamma^{5}\lambda_{a}\psi\right)^{2}\right]$
$\displaystyle\qquad\times\left[\left(\bar{\psi}\gamma^{\mu}\lambda_{0}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda_{0}\psi\right)^{2}\right]$
$\displaystyle\quad-
G_{\rho\omega}\sum_{a=1}^{8}\left[\left(\bar{\psi}\gamma^{\mu}\lambda_{0}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda_{0}\psi\right)^{2}\right]$
$\displaystyle\qquad\times\left[\left(\bar{\psi}\gamma^{\mu}\lambda_{a}\psi\right)^{2}+\left(\bar{\psi}\gamma^{\mu}\gamma_{5}\lambda_{a}\psi\right)^{2}\right].$
(3)
Most of the effects of these interaction terms were studied in [55, 45, 29]. As
the NJL model is not renormalizable, we implemented the 3-momentum cutoff scheme.
After the mean field approximation, the grand canonical potential at $T=0$ can
be written as
$\displaystyle\Omega$
$\displaystyle=\Omega_{0}+G\left(\sigma_{u}^{2}+\sigma_{d}^{2}+\sigma_{s}^{2}\right)+4\kappa\sigma_{u}\sigma_{d}\sigma_{s}$
$\displaystyle\quad-\frac{2}{3}G_{\omega}(\rho_{u}+\rho_{d}+\rho_{s})^{2}$
$\displaystyle\quad-\frac{4}{3}G_{\rho}(\rho_{u}^{2}+\rho_{d}^{2}+\rho_{s}^{2}-\rho_{u}\rho_{d}-\rho_{u}\rho_{s}-\rho_{d}\rho_{s})$
$\displaystyle\quad-\frac{4}{3}G_{\omega\omega}(\rho_{u}+\rho_{d}+\rho_{s})^{4}$
$\displaystyle\quad-4G_{\sigma\omega}\left(\sigma_{u}^{2}+\sigma_{d}^{2}+\sigma_{s}^{2}\right)\left(\rho_{u}+\rho_{d}+\rho_{s}\right)^{2}$
$\displaystyle\quad-\frac{8}{3}G_{\rho\omega}\left(\rho_{u}+\rho_{d}+\rho_{s}\right)^{2}$
$\displaystyle\qquad\times\left(\rho_{u}^{2}+\rho_{d}^{2}+\rho_{s}^{2}-\rho_{u}\rho_{d}-\rho_{u}\rho_{s}-\rho_{d}\rho_{s}\right)$
$\displaystyle\quad-\frac{3}{\pi^{2}}\sum_{f=u,d,s}\int_{k_{F_{f}}}^{\Lambda}dpp^{2}E_{f}-\frac{1}{\pi^{2}}\sum_{f=u,d,s}\tilde{\mu}_{f}k_{F_{f}},$
(4)
where $k_{F_{f}}=\sqrt{\tilde{\mu}_{f}^{2}-\tilde{m}_{f}^{2}}$ is the Fermi
momentum and $\Omega_{0}$ is fixed so that the potential vanishes in the
vacuum. The effective mass ($\tilde{m}$) and chemical potential
($\tilde{\mu}$) expressions are given by the gap equations
$\displaystyle\tilde{m}_{i}$
$\displaystyle=m_{i}-2G\sigma_{i}-2\kappa\sigma_{j}\sigma_{k}$
$\displaystyle\quad+\frac{8}{3}G_{\sigma\omega}(\sigma_{i}^{2}+\sigma_{j}^{2}+\sigma_{k}^{2})\sigma_{i}$
(5) $\displaystyle\tilde{\mu}_{i}$
$\displaystyle=\mu_{i}-\frac{4}{3}G_{\omega}(\rho_{i}+\rho_{j}+\rho_{k})-\frac{4}{3}G_{\rho}(2\rho_{i}-\rho_{j}-\rho_{k})$
$\displaystyle\quad-\frac{16}{9}G_{\omega\omega}(\rho_{i}+\rho_{j}+\rho_{k})^{3}$
$\displaystyle\quad-\frac{8}{3}G_{\sigma\omega}(\sigma_{i}^{2}+\sigma_{j}^{2}+\sigma_{k}^{2})(\rho_{i}+\rho_{j}+\rho_{k})$
$\displaystyle\quad-\frac{8}{9}G_{\rho\omega}(\rho_{i}+\rho_{j}+\rho_{k})$
$\displaystyle\quad\times(4\rho_{i}^{2}+\rho_{j}^{2}+\rho_{k}^{2}-\rho_{i}\rho_{j}-\rho_{i}\rho_{k}-4\rho_{j}\rho_{k}),$
(6)
for ${i\neq j\neq k}\in\{u,d,s\}$. Imposing thermodynamic consistency [61],
i.e., $d\Omega/d\sigma_{f}=0$ and $d\Omega/d\rho_{f}=0$, we obtain expressions
for the quark condensate ($\sigma_{i}$) and density ($\rho_{i}$). At zero
temperature, we can write them as
$\displaystyle\sigma_{f}$
$\displaystyle=-\frac{3}{\pi^{2}}\int_{k_{F_{f}}}^{\Lambda}dpp^{2}\frac{\tilde{m}_{f}}{E_{f}},$
(7) $\displaystyle\rho_{f}$ $\displaystyle=\frac{1}{\pi^{2}}k_{F_{f}}^{3}.$
(8)
The parameters are set in order to reproduce the experimental values of mass
and decay constant from $\pi^{\pm}$, $K^{\pm}$, $\eta$ and $\eta^{\prime}$
mesons obtained in [62]. Their values are given in Table 1. In addition, we
take the electron mass to be $m_{e}=0.510\text{ MeV}$.
$\Lambda$ (MeV) | $m_{u,d}$ (MeV) | $m_{s}$ (MeV) | $G\Lambda^{2}$ | $\kappa\Lambda^{5}$
---|---|---|---|---
623.58 | 5.70 | 136.60 | 3.34 | -13.67
Table 1: Fixed parameters of NJL model. $\Lambda$ is the cutoff, $m_{u,d,s}$
are the quarks up, down and strange masses, $G$ and $\kappa$ are coupling
constants. The values of these parameters are obtained in such a way that they
reproduce the experimental masses and decay constants of mesons [62].
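To illustrate how the gap equations (5) and (7) are solved in practice, the simplified sketch below iterates them to a fixed point for the $u$, $d$ and $s$ effective masses, keeping only the scalar $G$ and 't Hooft $\kappa$ terms (the multiquark couplings of Eq. (3) are omitted) and using the Table 1 parameters; it is an illustration rather than the full model used in this work.

```python
# Simplified sketch: fixed-point iteration of the NJL gap equations (5) and (7),
# keeping only the G and 't Hooft kappa terms, with the Table 1 parameters (MeV).
import numpy as np
from scipy.integrate import quad

LAMBDA = 623.58                              # 3-momentum cutoff
M0 = np.array([5.70, 5.70, 136.60])          # bare u, d, s masses
G = 3.34 / LAMBDA**2                         # from G * Lambda^2 = 3.34
KAPPA = -13.67 / LAMBDA**5                   # from kappa * Lambda^5 = -13.67

def condensate(m_eff, k_f=0.0):
    """sigma_f of Eq. (7): -(3/pi^2) * int_{k_F}^{Lambda} p^2 m/E dp."""
    val, _ = quad(lambda p: p**2 * m_eff / np.sqrt(p**2 + m_eff**2), k_f, LAMBDA)
    return -3.0 / np.pi**2 * val

def solve_gap(k_f=(0.0, 0.0, 0.0), tol=1e-6, max_iter=1000):
    """Effective masses (u, d, s) at given Fermi momenta (vacuum by default)."""
    m = M0 + 300.0                           # rough starting guess for constituent masses
    for _ in range(max_iter):
        sigma = np.array([condensate(m[i], k_f[i]) for i in range(3)])
        m_new = np.array([M0[i] - 2.0 * G * sigma[i]
                          - 2.0 * KAPPA * sigma[(i + 1) % 3] * sigma[(i + 2) % 3]
                          for i in range(3)])
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = 0.5 * (m + m_new)                # damped update for stability
    return m

print(solve_gap())                           # vacuum constituent masses (MeV)
```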
As the NJL is normalized in the vacuum, we add a constant $B$ in the pressure
equation, $P\to P+B$, similar to the MIT bag model. This term was also
included in [54, 13], in the first study to allow the restoration of chiral
symmetry and the deconfinement phase transition to occur at the same baryon
density, and in the second to control the deconfinement phase transition. In
the present study we include the extra $B$ term with the same purpose as in
[13]. In this way, we have 6 parameters to be determined by the Bayesian analysis:
the dimensionless coupling constants (defined as $\xi_{i}=2G_{i}/G$ and
$\xi_{ii}=16G_{ii}/G^{4}$) $\xi_{\omega}$, $\xi_{\rho}$, $\xi_{\omega\omega}$,
$\xi_{\omega\sigma}$, $\xi_{\omega\rho}$ and the constant $B$.
#### II.1.2 Mean Field Theory of QCD (MFTQCD)
In this model, we start with the QCD Lagrangian and assume that the gluon
field can be decomposed into low and high momentum components (soft and hard
gluons) [46, 63, 64]
$\tilde{G}^{a\mu}(k)=\tilde{A}^{a\mu}(k)+\tilde{\alpha}^{a\mu}(k),$ (9)
where $\tilde{G}$ is the gluon field in momentum space and we define $A$
and $\alpha$ as the soft and hard gluon fields, respectively. We approximate the
momentum of the soft gluons to be zero, which implies that their field is
constant. We replace the soft fields with their vacuum expectation values [65,
66]
$\displaystyle\left<A^{a\mu}A^{b\nu}\right>$
$\displaystyle=-\frac{\delta^{ab}}{8}\frac{g^{\mu\nu}}{4}\mu_{0}^{2},$ (10)
$\displaystyle\left<A^{a\mu}A^{b\nu}A^{c\rho}A^{d\eta}\right>$
$\displaystyle=\frac{\phi_{0}^{4}}{(32)(34)}\left[g_{\mu\nu}g^{\rho\eta}\delta^{ab}\delta^{cd}\right.$
$\displaystyle\quad\left.+g_{\mu}^{\rho}g_{\nu}^{\eta}\delta^{ac}\delta^{bd}+g_{\mu}^{\eta}g_{\nu}^{\rho}\delta^{ad}\delta^{bc}\right],$
(11)
where $\phi_{0}$ and $\mu_{0}$ are energy scales to be determined.
Next, we treat the hard gluons as a classical field, due to their large
occupation numbers at all energy levels [67, 68]
$\alpha_{\mu}^{a}\to\left<\alpha_{\mu}^{a}\right>=\alpha_{0}^{a}\delta_{\mu
0},$ (12)
where $\alpha_{0}^{a}$ is constant. After some calculation, we obtain the
MFTQCD Lagrangian
$\displaystyle\mathcal{L}_{\text{MFTQCD}}$
$\displaystyle=-B+\frac{m_{G}^{2}}{2}\alpha_{0}^{a}\alpha_{0}^{a}$
$\displaystyle\quad+\sum_{q=1}^{N_{f}}\bar{\psi}_{i}^{q}\left(i\delta_{ij}\gamma^{\mu}\partial_{\mu}+g\gamma^{0}T_{ij}^{a}\alpha_{0}^{1}-\delta_{ij}m_{q}\right)\psi_{j}^{q},$
(13)
where we defined
$\displaystyle m_{G}^{2}$ $\displaystyle=\frac{9}{32}g^{2}\mu_{0}^{2},$ (14)
$\displaystyle B$
$\displaystyle=\frac{9}{4(34)}g^{2}\phi_{0}^{4}=\left<\frac{1}{4}F^{a\mu\nu}F^{a}_{\mu\nu}\right>.$
(15)
The dimension-2 condensate contributes to the mass of the hard gluon
($m_{G}$), while the dimension-4 one acts as a constant analogous to the MIT
bag model. After obtaining the equations of motion and using the energy-
momentum tensor, we obtain the equations of state below:
$\displaystyle P$ $\displaystyle=\frac{27}{2}\xi^{2}\rho_{B}^{2}-B+P_{F},$
(16) $\displaystyle\epsilon$
$\displaystyle=\frac{27}{2}\xi^{2}\rho_{B}^{2}+B+\epsilon_{F},$ (17)
where $P_{F}$ and $\epsilon_{F}$ are the pressure and energy density of a non-
interacting Fermi gas of quarks and electrons and $\xi=g/m_{G}$. The derivation
of the EOS can be found in [46]. We consider the quark masses to be
$m_{u}=5\text{ MeV}$, $m_{d}=7\text{ MeV}$ and $m_{s}=100\text{ MeV}$; $\xi$
and $B$ are determined from the Bayesian analysis.
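A rough numerical sketch of Eqs. (16)-(17) is given below: the free quark Fermi-gas contributions are obtained by direct integration and combined with the $\frac{27}{2}\xi^{2}\rho_{B}^{2}$ term and the bag-like constant $B$, with everything expressed in natural units (MeV); the electron contribution and the beta-equilibrium and charge-neutrality conditions are omitted for brevity.

```python
# Sketch of Eqs. (16)-(17): MFTQCD pressure and energy density (natural units, MeV).
import numpy as np
from scipy.integrate import quad

M_Q = {"u": 5.0, "d": 7.0, "s": 100.0}   # quark masses used in the text (MeV)

def fermi_gas(kf, m, g=6):
    """T = 0 energy density and pressure (MeV^4) of one free quark flavour."""
    eps, _ = quad(lambda k: k**2 * np.sqrt(k**2 + m**2), 0.0, kf)
    prs, _ = quad(lambda k: k**4 / np.sqrt(k**2 + m**2), 0.0, kf)
    return g / (2.0 * np.pi**2) * eps, g / (6.0 * np.pi**2) * prs

def mftqcd_eos(kf, xi, B):
    """kf: dict of Fermi momenta (MeV); xi in MeV^-1; B in MeV^4. Returns (P, eps) in MeV^4."""
    eps_F = P_F = 0.0
    for q in "uds":
        e, p = fermi_gas(kf[q], M_Q[q])
        eps_F += e
        P_F += p
    rho_B = sum(kf[q]**3 for q in "uds") / (3.0 * np.pi**2)   # baryon density (MeV^3)
    hard = 13.5 * xi**2 * rho_B**2                            # (27/2) xi^2 rho_B^2 term
    return hard - B + P_F, hard + B + eps_F                   # Eqs. (16) and (17)
```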
### II.2 Hadron phase
In [25], equations of state of pure nuclear matter were obtained from a
relativistic mean field model with non-linear meson terms, both self-
interactions and mixed terms. In this model, the nuclear interactions are
described by the exchange of the $\sigma$ (scalar-isoscalar), $\omega$
(vector-isoscalar) and $\rho$ (vector-isovector) mesons. The Lagrangian is
given by
$\mathcal{L}=\mathcal{L}_{N}+\mathcal{L}_{M}+\mathcal{L}_{NL},$ (18)
where
$\displaystyle\mathcal{L}_{N}$
$\displaystyle=\bar{\Psi}\left[\gamma^{\mu}\left(i\partial_{\mu}-g_{\omega}\omega_{\mu}-g_{\rho}\boldsymbol{t}\cdot\boldsymbol{\rho}_{\mu}\right)\right.$
$\displaystyle\quad\left.-(m-g_{\sigma}\phi)\right]\Psi,$ (19)
$\displaystyle\mathcal{L}_{M}$
$\displaystyle=\frac{1}{2}\left[\partial_{\mu}\phi\partial^{\mu}\phi-
m_{\sigma}^{2}\phi^{2}\right]$
$\displaystyle\quad-\frac{1}{4}F_{\mu\nu}^{(\omega)}F^{(\omega)\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}$
$\displaystyle\quad-\frac{1}{4}\boldsymbol{F}_{\mu\nu}^{(\rho)}\cdot\boldsymbol{F}^{(\rho)\mu\nu}+\frac{1}{2}m_{\rho}^{2}\boldsymbol{\rho}_{\mu}\cdot\boldsymbol{\rho}^{\mu},$
(20) $\displaystyle\mathcal{L}_{NL}$
$\displaystyle=-\frac{1}{3}bmg_{\sigma}^{3}(\sigma)^{3}-\frac{1}{4}cg_{\sigma}^{4}(\sigma)^{4}+\frac{\xi}{4!}g_{\omega}^{4}(\omega_{\mu}\omega^{\mu})^{2}$
$\displaystyle\quad+\Lambda_{\omega}g_{\rho}^{2}\boldsymbol{\rho}_{\mu}\cdot\boldsymbol{\rho}^{\mu}g_{\omega}^{2}\omega_{\mu}\omega^{\mu},$
(21)
where the Dirac spinor $\Psi$ represents the nucleon doublet (neutron and
proton) with a bare mass $m$; $g_{\sigma}$ ($m_{\sigma}$), $g_{\omega}$
($m_{\omega}$) and $g_{\rho}$ ($m_{\rho}$) are the coupling constants (masses)
of the $\sigma$, $\omega$ and $\rho$ mesons, respectively; $b$, $c$, $\xi$ and
$\Lambda_{\omega}$ are the couplings that determine the strength of the non-
linear terms interaction. All coupling constants were obtained through a
Bayesian inference approach. This method was applied so that the equations
should satisfy the chiral EFT constraints for pure neutron matter, properties
of nuclear matter and neutron star observational data. In this work, we will
use two equations of state (EOS) from [25]: the BMPF 220, a soft EOS, and the
BMPF 260, a stiff EOS. Nuclear matter and NSs properties of these equations
are shown in Table 2. With this choice we analyze the extreme hadronic cases,
and consider that, for a given mass, they define the low and high radius
extremes.
Quantity | Units | BMPF220 | BMPF260
---|---|---|---
$\rho_{0}$ | fm-3 | 0.146 | 0.149
$m^{\ast}$ | … | 0.74 | 0.61
$\epsilon_{0}$ | MeV | -15.93 | -16.02
$K_{0}$ | MeV | 288 | 244
$J_{\text{sym},0}$ | MeV | 31.74 | 29.06
$L_{\text{sym},0}$ | MeV | 45 | 56
$K_{\text{sym},0}$ | MeV | -159 | 2
$M_{\text{max}}$ | M⊙ | 2.200 | 2.592
$R_{\text{max}}$ | km | 11.35 | 12.64
$R_{1.4}$ | km | 12.94 | 13.51
$\Lambda_{1.4}$ | | 548 | 828
Table 2: Nuclear matter properties and NSs properties of the hadronic EOS.
## III Bayesian Approach
The Bayesian method is a robust technique that enables us to sample an
ensemble of EOS, all of which meet the specified constraints. To apply this,
we have to define the prior, likelihood, and sampler.
Prior:- In Bayesian inference, a prior distribution represents our initial
assumptions about a parameter before any data is observed. This prior is then
revised with new data through the likelihood of the observed data to create
the posterior distribution. In our calculation, we have employed uniform
priors for all the parameters of the model. The minimum and maximum values of
the uniform distribution are shown in Table 3.
NJL | | MFTQCD
---|---|---
Parameters | Units | min | max | | Parameters | Units | min | max
$\xi_{\omega}$ | … | 0 | 0.6 | | $\xi$ | MeV-1 | 0 | 0.0018
$\xi_{\rho}$ | … | 0 | 1 | | $B$ | MeV.fm-3 | 50 | 180
$\xi_{\omega\omega}$ | … | 0 | 40 | | | | |
$\xi_{\sigma\omega}$ | … | 0 | 8 | | | | |
$\xi_{\rho\omega}$ | … | 0 | 50 | | | | |
$B$ | MeV.fm-3 | 0 | 50 | | | | |
Table 3: The lowest and highest values for the uniform distribution prior used
for NJL and MFTQCD in this study.
Likelihood:- In Bayesian inference, the likelihood quantifies the probability
of the observed data for particular parameter values, acting as a link between
prior beliefs and posterior knowledge.
1. (i)
X-ray observations (NICER): X-ray observations give the mass and radius
measurements of NS. Therefore, the corresponding evidence takes the following
form,
$\displaystyle P(d_{\rm
X-ray}|\mathrm{EOS})=\int^{M_{\mathrm{max}}}_{M_{\mathrm{min}}}dmP(m|\mathrm{EOS})$
$\displaystyle\times P(d_{\rm X-ray}|m,R(m,\mathrm{EOS}))$
$\displaystyle=\mathcal{L}^{\rm NICER}$ (22)
$P(m|\mathrm{EOS})=\begin{cases}\frac{1}{M_{\mathrm{max}}-M_{\mathrm{min}}}&\text{if }M_{\mathrm{min}}\leq m\leq M_{\mathrm{max}},\\ 0&\text{otherwise.}\end{cases}$ (23)
Here, $M_{\rm min}$ is 1 $M_{\odot}$, and $M_{\rm max}$ represents the maximum
mass of a NS for the given EOS [69].
2. (ii)
Phase transition: We constrain the phase transition density through a super-
Gaussian centered at $\rho_{\text{trans}}=0.275\text{ fm}^{-3}$, with a
standard deviation of $\sigma=0.08\text{ fm}^{-3}$ and $p=5$. Therefore, the
likelihood is written as
$\mathcal{L}^{\rm
PhT}=\text{exp}\left[-\left(\frac{(\rho_{\text{trans}}-\rho_{B})^{2}}{2\sigma^{2}}\right)^{p}\right].$
(24)
The density range was chosen such that the occurrence of the transition is
possible in medium mass stars with $M\geq 1.4M_{\odot}$. If the transition
occurs at too large a density, the quark core is too small or does not exist
inside a NS. For the lower limit, saturation density was considered. Since we
are dealing with extremely asymmetric matter, with a proton fraction below 10%
it is reasonable to suppose that a transition to quark matter could be favored
energetically.
3. (iii)
pQCD EOS: In [60], the authors found that for the neutron stars’ EOS to be in
agreement with the pQCD calculations, they must be within a limited region in
$P\times\epsilon\times\rho_{B}$ space. To apply this restriction, we have
calculated the area in $P\times\epsilon$ allowed by pQCD at
$\rho_{B}=7\rho_{0}$ for the renormalization scale $X=4$. Thus, we define the
likelihood as
$\mathcal{L}^{\rm
pQCD}=\begin{cases}1,(P(7\rho_{0}),\epsilon(7\rho_{0}))\text{ inside pQCD
region};\\\ 0,(P(7\rho_{0}),\epsilon(7\rho_{0}))\text{ {not}
inside},\end{cases}$ (25)
where $(P(7\rho_{0}),\epsilon(7\rho_{0}))$ is the pressure and energy density
at $7\rho_{0}$.
The final likelihood is then given by
$\mathcal{L}=\mathcal{L}^{\rm NICERI}\mathcal{L}^{\rm NICERII}\mathcal{L}^{\rm
PhT}\mathcal{L}^{\rm pQCD}\,.$ (26)
NICER I and NICER II correspond to the mass-radius measurements of PSR
J0030+0451 and PSR J0740+6620, respectively.
Sampler:- We utilized the nested sampling method, specifically the PyMultinest
[70, 71] sampler, as part of the Bayesian Inference Library (BILBY) [72].
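To make the likelihood of Eqs. (24)-(26) concrete, the sketch below writes the phase-transition and pQCD factors as log-likelihood terms of the kind that are passed to the sampler; `inside_pqcd_region` stands for the test against the allowed $(P,\epsilon)$ band of [60] at $7\rho_{0}$ and is a placeholder, and the NICER terms are assumed to be supplied by the corresponding mass-radius posteriors.

```python
# Sketch of the likelihood factors of Eqs. (24)-(26), in log form.
import numpy as np

def log_like_transition(rho_t, rho_c=0.275, sigma=0.08, p=5):
    """Super-Gaussian of Eq. (24); rho_t is the model's transition density (fm^-3)."""
    return -(((rho_t - rho_c) ** 2) / (2.0 * sigma ** 2)) ** p

def log_like_pqcd(P7, eps7, inside_pqcd_region):
    """Eq. (25): 0 if (P, eps) at 7*rho_0 lies in the pQCD-allowed region, -inf otherwise."""
    return 0.0 if inside_pqcd_region(P7, eps7) else -np.inf

def total_log_likelihood(logL_nicer1, logL_nicer2, rho_t, P7, eps7, inside_pqcd_region):
    """Eq. (26): product of the individual likelihoods, written as a sum of logs."""
    return (logL_nicer1 + logL_nicer2
            + log_like_transition(rho_t)
            + log_like_pqcd(P7, eps7, inside_pqcd_region))
```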
## IV Results
In this section we present and discuss the properties of hybrid stars,
considering different models to describe the total EOS. For the confined
phase, we choose two representative hadronic EOS, one soft and one stiff, both
compatible with the constraints imposed on the EOS. This choice allows
us to estimate the effect of the hadronic EOS on the total EOS. For the
deconfined phase, we consider the two different models, NJL and MFTQCD,
introduced in Sections II.1.1 and II.1.2, respectively. This choice allows us
to estimate the effects of the choice of a particular quark model. In order to
understand the effect of imposing the thermodynamic and causality constraints
from the pQCD EOS, we have also performed the inference analysis with and
without this constraint. Note that the NJL model cannot be
extended up to a density as large as the one where the pQCD EOS was
calculated, but following [60] we can estimate the effects of this constraint
on the hybrid EOS.
Figure 1: Pressure versus energy density for the soft EOS (left panel) and
stiff EOS (right panel). Gray plots show the results from [18].
Taking into account the above choices, we have generated, within a Bayesian
inference approach for the quark model parameters, eight different EOS sets
with $\sim 11000$ (NJL) and $\sim 2000$ (MFTQCD) EOS each, combining the
following options: i) two hadron EOS (soft and stiff), ii) two quark models
(NJL and MFTQCD), iii) with or without imposing the pQCD constraint. These
eight sets are discussed next.
In Fig. 1, the EOS distributions, i.e. the pressure versus the energy density,
are shown at a 90% confidence level (CL) for the two hadronic models, the soft
(stiff) on the left (right) panel. In both panels the hadronic EOS is
represented by a dotted line. The sets including (not including) the pQCD
constraint are represented by colored homogeneous bands (patterns) and the
sets with the NJL (MFTQCD) model are represented by the colors blue/dark blue
(orange/dark orange). The gray band represents the results from [18]. This
band spans the entire phase space generated by an agnostic description of the
EOS and imposing constraints on pure neutron matter from $\chi$EFT
calculations, the pQCD constraints at very high densities and NS observables.
Some conclusions are in order: i) our results are compatible with those
obtained in [18] (shown in gray): all hybrid EOS fall within this range; ii)
the MFTQCD model allows a deconfinement phase transition at much lower baryon
densities compared to the NJL model (phase transition distribution can be seen
in Fig. 11 in Appendix A). Although the constraints permit a phase
transition at a baryon density of the order of the saturation density or
above, the transition does not occur below $\rho_{B}\sim 0.25$ fm-3 within the
NJL model, while the MFTQCD model admits transitions at saturation density.
Figure 2: Mass-radius diagram of the 90% CL stiff sets: MFTQCD (left) and
NJL (right). Solid bands represent sets obtained applying the pQCD
constraint. Observational data is shown as: PSR J0030+0451 (cyan and purple)
[56, 57], PSR J0740+6620 (yellow) [58, 59], HESS J1731-347 (magenta) [73], and
GW170817 (light and dark gray) [74].
Figure 3: Corner plots of
$\xi_{\omega\omega}$, the maximum mass ($M_{\text{max}}$) and the speed of
sound at $M_{\text{max}}$ ($c^{2}_{\text{s,max}}$) for the NJL stiff sets. Set
with (without) pQCD constraint is shown in blue (black).
Solving the Tolman-Oppenheimer-Volkoff equations [75, 76], we obtain the mass-
radius relation of static, spherical and homogeneous neutron stars. Figure 2
shows the mass-radius diagram obtained with the sets built from the stiff
hadronic EOS. The same color codes from Fig. 1 are used. In the mass-radius
diagrams, we also show the 68% confidence zone of the 2D posterior
distribution of PSR J0030+0451 (cyan and magenta) [56, 57] and PSR J0740+6620
(yellow) [58, 59] NICER data, and the HESS J1731-347 (magenta) [73]. The
GW170817 data from the LIGO-Virgo collaboration is represented by the solid
(dashed) gray area with 90% (50%) CL [74].
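For completeness, a bare-bones TOV integrator is sketched below; it assumes an `eos` callable returning the energy density for a given pressure (both in MeV fm$^{-3}$) and integrates outward from a chosen central pressure in geometrized units, without the crust matching and refinements used in the actual calculations.

```python
# Bare-bones TOV integrator sketch (geometrized units, G = c = 1, lengths in km).
import numpy as np
from scipy.integrate import solve_ivp

G_SI, C_SI = 6.674e-11, 2.998e8
MEVFM3_TO_KM2 = 1.602e32 * G_SI / C_SI**4 * 1.0e6   # 1 MeV/fm^3 in km^-2
MSUN_KM = 1.4766                                    # G * M_sun / c^2 in km

def tov_rhs(r, y, eos):
    P, m = y
    if P <= 0.0:
        return [0.0, 0.0]
    eps = eos(P / MEVFM3_TO_KM2) * MEVFM3_TO_KM2    # the eos callable works in MeV/fm^3
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def solve_star(P_c, eos, r_max=30.0):
    """Central pressure P_c in MeV/fm^3; returns the radius (km) and gravitational mass (Msun)."""
    Pc = P_c * MEVFM3_TO_KM2
    surface = lambda r, y, eos: y[0] - 1e-10 * Pc   # stop when P has dropped by ten decades
    surface.terminal, surface.direction = True, -1.0
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [Pc, 0.0], args=(eos,),
                    events=surface, rtol=1e-8, max_step=0.01)
    return sol.t[-1], sol.y[1, -1] / MSUN_KM
```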
The pQCD constraint reduces the maximum mass in both the NJL and MFTQCD models.
However, this reduction is more pronounced in the stiff NJL sets. This effect
can be explained by the corner plot given in Fig. 3. In this
figure, we show the relation between $\xi_{\omega\omega}$, the maximum mass
($M_{\text{max}}$) and the speed of sound at the maximum mass
($c^{2}_{s,\text{max}}$) for the NJL stiff set with (without) the pQCD
constraint in blue (black). We identify a strong correlation between these
terms. Thus, $\xi_{\omega\omega}$ is the variable responsible for generating
high values of maximum mass at the cost of considerably increasing the values
of the speed of sound. However, the pQCD constraint is related to causality
[60], allowing only smaller $\xi_{\omega\omega}$. We do not have this behavior
in the soft sets, as the $\xi_{\omega\omega}$ values are smaller (see Figs. 9
and 10 in Appendix A for a corner plot with the model parameters).
The pQCD constraint also has a direct effect on the speed of sound. When this
constraint is imposed, the NJL $c_{s}^{2}$ PDF shows a maximum below
$c_{s}^{2}=0.3$, and the median value in the centre of the maximum mass
configuration is 0.35, although it can reach values as large as $\sim 0.7$. On
the other hand, for the soft hadron EOS a median value $\sim 0.7$ is obtained,
with the possibility that a value as large as $\sim 0.85$ is reached in the
centre of the most massive stars (see Table 4). In Table 4, we give several NS
properties (radius, transition density to deconfined quark matter, central
pressure, energy density, baryon density and speed of sound squared) for stars
with a mass equal to 1.4, 1.6, 1.8, 2.0 $M_{\odot}$ and maximum mass.
None of the terms of MFTQCD model affect the speed of sound in a similar way
as the NJL term $\xi_{\omega\omega}$. It is the non-linearity of this term in
the NJL model involving the baryon density that causes the stiffening of the
EOS. The MFTQCD EOS predict values of $c_{s}^{2}$ of the order of 0.5 in the
centre of the most massive stars, although these can decrease to $\sim 0.35$
in the case of the stiff EOS.
Figure 4: Comparison between models with the pQCD constraint. From left to
right: the speed of sound as a function of the baryon density, the mass-radius
diagram, and the tidal deformability as a function of the mass. The middle
panel with the mass-radius curves also includes observational data as
specified in Fig. 2.
The information on the neutron star properties is completed with Fig. 4 which
shows some properties of all the four sets including the pQCD constraint, soft
and stiff hadron EOS with the NJL and MFTQCD quark models: the speed of sound
squared (left), the mass-radius relations (middle) and the tidal deformability
as a function of the mass (right). The NJL and the MFTQCD sets are shown in
blue and orange, respectively. Sets with the soft (stiff) equations are shown
in solid (patterned) bands. Observational data has the same code color
introduced in Fig. 2. In the tidal deformability versus mass graph (right
panel), the GW170817 data from [74] is represented by the purple line.
Figure 5: 90% CL of the $u,\,d,\,$ and $s$-quark fractions, for MFTQCD (top)
and NJL (bottom) and soft (left) and stiff EOS (right). The bands at the low
density limit define the region where the deconfinement occurs for the
different EOS sets, and result from the jump from zero to a finite value of
the order of 1/3 for the $u$-quark and $\lesssim 2/3$ for the $d$-quark.
Some comments are in order: i) from Fig. 2 and Table 4, it is seen that
both NJL and MFTQCD predict similar star radii for masses of the order of
$2\,M_{\odot}$. However, radii below 12 km for medium mass stars can only be
obtained with the MFTQCD model. While within the soft EOS, both quark models
predict for the radius of the maximum mass configuration a value just above 11
km, within the stiff hadron EOS, the NJL predicts almost 13 km, more than 1 km
larger than the MFTQCD model; ii) the pQCD constraint imposes a maximum mass
$\lesssim 2.3\,M_{\odot}$ for MFTQCD and $\lesssim 2.1\,M_{\odot}$ for NJL.
This constraint may lower the maximum mass by about 0.2$M_{\odot}$ for NJL
and 0.06$M_{\odot}$ for MFTQCD for the sets built with the stiff hadronic EOS,
and it has almost no effect for the soft EOS (see Table 4, where several NS
properties are given for all the sets built in the present work); iii) all sets
are compatible with the NICER observational data. However, only MFTQCD can
partially describe the HESS observation [73]; iv) The NJL stiff set is not
able to describe the tidal deformability obtained from the GW170817 data,
mainly because the stiff hadronic EOS does not satisfy this constraint.
Moreover, the MFTQCD stiff is only partially compatible with it; v) for the
NJL EOS, the deconfinement phase transition occurs just above 2 $\rho_{0}$ and
does not vary much within the complete set. Within the MFTQCD, the
deconfinement phase transition occurs mostly at a lower density, above the
saturation density but below twice the saturation density; vi) for the NJL
EOS, the speed of sound squared suffers a strong decrease at deconfinement,
followed by a steady increase, reaching a value above 0.9 under some
conditions. For the MFTQCD EOS, the increase in the speed of sound that
follows the decrease at the phase transition is weak.
In Fig. 5, the fraction of the three quark flavors is represented as a
function of the density. For NJL, the quark deconfinement occurs before the
onset of the $s$-quark at $\sim 0.4$ (0.5) fm-3 for the stiff (soft) EOS. The
dip in the speed of sound panel for the NJL model after deconfinement results
from the $s$-quark onset. The large baryon density at which it occurs is due
to the fact that the chiral restoration of the $s$-quark mass occurs at large
densities. In the MFTQCD model, quarks are already in the chirally restored
phase, and it is the larger current mass of the $s$-quark that causes the
smaller $s$-quark fraction.
Figure 6: The top graphs show $d_{c}$, and the graphs below show the speed of
sound as a function of the baryon density. The first column shows the models
with the soft equation and the second column those with the stiff equation.
In the following we discuss some more properties of the EOS sets developed in
the present study. In [18, 21, 22], model independent equations were used to
probe quark matter inside NSs. The authors discussed the high density behavior
of the EOS based on the polytropic index $\gamma=\frac{d\ln
P}{d\ln\varepsilon}$, on the normalized matter trace anomaly
$\Delta=\frac{1}{3}-\frac{P}{\varepsilon}$ and some quantities derived from
the trace anomaly. In [22], the authors have conjectured that the matter part
of the trace anomaly could be positive definite and discuss the implication on
the mass-radius curves of NS. In [22], the authors propose that
$d_{c}=\sqrt{\Delta^{2}+\Delta^{\prime 2}}<0.2$ (where $\Delta^{\prime}$ is
$\Delta$’s logarithmic derivative with respect to the energy density
$\varepsilon$) could indicate a deconfined matter behavior. In this work, by
using the microscopic models that define the different datasets, and since the
composition of matter is known, we discuss the implications of these criteria.
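As a concrete illustration of these diagnostics, the sketch below evaluates $\gamma$, $c_{s}^{2}$, $\Delta$, $\Delta^{\prime}$ and $d_{c}$ on a tabulated $P(\varepsilon)$. The toy power-law EOS table is an assumption used only to show the bookkeeping; it is not one of the EOS of this work.

```python
import numpy as np

# Toy tabulated EOS (illustrative only): energy density and pressure in MeV/fm^3.
eps = np.geomspace(150.0, 1500.0, 200)
p = 0.05 * eps**1.6 / 150.0**0.6        # placeholder power-law P(eps)

ln_eps, ln_p = np.log(eps), np.log(p)

gamma = np.gradient(ln_p, ln_eps)       # polytropic index d ln P / d ln eps
cs2 = np.gradient(p, eps)               # speed of sound squared dP/d eps (units of c^2)
delta = 1.0 / 3.0 - p / eps             # normalized trace anomaly
ddelta = np.gradient(delta, ln_eps)     # Delta' = d Delta / d ln eps
d_c = np.sqrt(delta**2 + ddelta**2)     # conformality measure proposed in [22]

# Densities where the "deconfined behavior" criteria d_c < 0.2 and gamma < 1.75 hold.
conformal = (d_c < 0.2) & (gamma < 1.75)
print("first eps satisfying both criteria:",
      eps[conformal][0] if conformal.any() else None)
```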
Figure 6 shows the 90% CL of $\Delta$ (top panels), $d_{c}$ (middle panels)
and $\gamma$ (bottom panels) as functions of the baryon density represented by
blue (NJL) and orange (MFTQCD) bands. Concerning the trace anomaly, as
discussed in [22], the pQCD constraints favor a positive $\Delta$. Here, we
identify some EOS for which $\Delta$ is always positive. In Fig. 6, the 90% CL
of the EOSs with $\Delta>0$ at the maximum mass density ($\Delta_{\text{max}}>0$)
is represented by dark blue (NJL) and dark orange (MFTQCD) patterns. NJL soft
(stiff) EOS with $\Delta_{\text{max}}>0$ represent 40% (94%) of the respective
sets. For MFTQCD soft (stiff) equations, they represent only 5% (30%). The
high percentage of EOS satisfying $\Delta_{\text{max}}>0$ in the NJL stiff set
is due to the fact that we have low $\rho_{\text{max}}$ values (0.581 to 0.931
fm-3 with 90% CL - see Table 4). In Fig. 6, we see that the band of EOS
without the $\Delta_{\text{max}}>0$ filter already mostly satisfies this
restriction. It is also interesting to note that, when applying the
$\Delta_{\text{max}}>0$ filter, we have a notable change in the phase
transition values of the 90% band in the MFTQCD stiff set. This is because the
MFTQCD stiff set is able to reach higher values of $B$. From Eq. 17, we see
that this parameter directly increases the $\Delta$ values. However, $B$ also
increases the phase transition values, thus creating a correlation between
$\Delta$ and $\rho_{\text{trans}}$. These correlations can be seen in Fig. 8,
which shows the corner plots of the MFTQCD soft (left) and stiff (right) sets.
The effect of the $\Delta_{\text{max}}>0$ filter on the mass-radius diagram is
shown in Fig. 7. Patterned (solid) bands show the equations with (without) the
$\Delta_{\text{max}}>0$ filter. In article [21], the authors analyzed the
trace anomaly using agnostic equations. When applying $\Delta>0$, they
observed a decrease in maximum mass and its radius. However, here this
behavior can only be seen in the MFTQCD stiff and NJL soft sets.
Figure 7: Mass-radius diagram of the sets with $\Delta(\rho_{\text{max}})>0$ filter (patterned bands). Sets without this filter are also shown in solid bands. Graphs on the left show the equations with the MFTQCD model, and on the right, with the NJL.
Figure 8: Corner plots of MFTQCD soft (left) and stiff (right). Orange (black)
represents the sets without (with) the $\Delta(\rho_{\text{max}})>0$ filter.
Concerning $d_{c}$ and $\gamma$, we verify that, for all sets of models, the
phase transition occurs before the curves reach the $d_{c}<0.2$ and
$\gamma<1.75$ region, but both quantities rapidly attain values below these
thresholds afterwards. After crossing the $d_{c}<0.2$ value, the MFTQCD EOS
show a monotonically decreasing behavior, but the same is not true for NJL:
the 8-quark term with the $\xi_{\omega\omega}$ coupling becomes stronger with
density and gives rise to a non-monotonic behavior of $d_{c}$. At large
densities $d_{c}$ rises above 0.2. A similar behaviour can be seen in the
$\gamma$ values.
NJL
---
| soft w/ pQCD | soft w/o pQCD | stiff w/ pQCD | stiff w/o pQCD
| median | min | max | median | min | max | median | min | max | median | min | max
$M_{\text{max}}$ | 2.024 | 1.893 | 2.108 | 2.023 | 1.893 | 2.109 | 2.058 | 1.968 | 2.130 | 2.120 | 1.979 | 2.311
$\rho_{\text{trans}}$ | 0.356 | 0.315 | 0.388 | 0.356 | 0.315 | 0.389 | 0.375 | 0.324 | 0.400 | 0.371 | 0.315 | 0.398
$R_{\text{max}}$ | 11.26 | 11.09 | 11.75 | 11.26 | 11.09 | 11.75 | 12.97 | 11.79 | 13.71 | 12.31 | 11.61 | 13.64
$P_{0}(M_{\text{max}})$ | 443 | 275 | 539 | 442 | 273 | 540 | 182 | 92 | 390 | 308 | 103 | 513
$\epsilon_{0}(M_{\text{max}})$ | 1272 | 1122 | 1319 | 1271 | 1122 | 1319 | 850 | 640 | 1156 | 1021 | 674 | 1203
$\rho_{0}(M_{\text{max}})$ | 1.015 | 0.942 | 1.041 | 1.014 | 0.942 | 1.041 | 0.733 | 0.581 | 0.931 | 0.833 | 0.607 | 0.945
$c^{2}_{s_{0}}(M_{\text{max}})$ | 0.696 | 0.407 | 0.847 | 0.694 | 0.407 | 0.849 | 0.349 | 0.173 | 0.709 | 0.604 | 0.190 | 0.942
$R(2M_{\odot})$ | 11.75 | 11.35 | 12.01 | 11.75 | 11.34 | 12.00 | 13.25 | 12.39 | 13.75 | 13.43 | 12.67 | 13.73
$P_{0}(2M_{\odot})$ | 221 | 186 | 316 | 220 | 186 | 316 | 99 | 81 | 155 | 97 | 82 | 143
$\epsilon_{0}(2M_{\odot})$ | 851 | 743 | 1081 | 848 | 741 | 1082 | 555 | 390 | 740 | 517 | 396 | 693
$\rho_{0}(2M_{\odot})$ | 0.857 | 0.780 | 0.970 | 0.858 | 0.780 | 0.972 | 0.622 | 0.507 | 0.792 | 0.546 | 0.466 | 0.722
$c^{2}_{s_{0}}(2M_{\odot})$ | 0.609 | 0.544 | 0.732 | 0.608 | 0.544 | 0.735 | 0.335 | 0.223 | 0.539 | 0.370 | 0.253 | 0.482
$R(1.8M_{\odot})$ | 12.67 | 12.48 | 12.76 | 12.67 | 12.48 | 12.77 | 13.76 | 13.50 | 13.76 | 13.76 | 13.47 | 13.76
$P_{0}(1.8M_{\odot})$ | 115 | 108 | 131 | 115 | 108 | 131 | 63 | 63 | 78 | 63 | 63 | 79
$\epsilon_{0}(1.8M_{\odot})$ | 582 | 547 | 670 | 582 | 546 | 669 | 363 | 356 | 470 | 363 | 357 | 465
$\rho_{0}(1.8M_{\odot})$ | 0.548 | 0.519 | 0.618 | 0.547 | 0.518 | 0.617 | 0.359 | 0.353 | 0.450 | 0.359 | 0.354 | 0.446
$c^{2}_{s_{0}}(1.8M_{\odot})$ | 0.336 | 0.266 | 0.383 | 0.335 | 0.266 | 0.383 | 0.404 | 0.209 | 0.501 | 0.363 | 0.232 | 0.498
$R(1.4M_{\odot})$ | 12.98 | 12.89 | 13.00 | 12.97 | 12.89 | 13.00 | 13.57 | 13.57 | 13.57 | 13.57 | 13.57 | 13.57
$P_{0}(1.4M_{\odot})$ | 53 | 52 | 56 | 53 | 52 | 56 | 37 | 37 | 37 | 37 | 37 | 37
$\epsilon_{0}(1.4M_{\odot})$ | 412 | 403 | 426 | 412 | 402 | 426 | 306 | 306 | 306 | 306 | 306 | 308
$\rho_{0}(1.4M_{\odot})$ | 0.407 | 0.399 | 0.420 | 0.407 | 0.398 | 0.420 | 0.309 | 0.309 | 0.309 | 0.309 | 0.309 | 0.310
$c^{2}_{s_{0}}(1.4M_{\odot})$ | 0.341 | 0.288 | 0.361 | 0.341 | 0.288 | 0.361 | 0.396 | 0.275 | 0.409 | 0.392 | 0.260 | 0.408
MFTQCD
| soft w/ pQCD | soft w/o pQCD | stiff w/ pQCD | stiff w/o pQCD
| median | min | max | median | min | max | median | min | max | median | min | max
$M_{\text{max}}$ | 2.101 | 1.968 | 2.218 | 2.105 | 1.957 | 2.212 | 2.143 | 1.955 | 2.318 | 2.161 | 1.965 | 2.378
$\rho_{\text{trans}}$ | 0.259 | 0.173 | 0.365 | 0.256 | 0.174 | 0.369 | 0.316 | 0.185 | 0.388 | 0.311 | 0.183 | 0.388
$R_{\text{max}}$ | 11.27 | 10.78 | 11.61 | 11.27 | 10.77 | 11.60 | 11.80 | 11.05 | 13.32 | 11.86 | 11.07 | 12.74
$P_{0}(M_{\text{max}})$ | 432 | 408 | 459 | 432 | 410 | 459 | 396 | 88 | 445 | 393 | 89 | 444
$\epsilon_{0}(M_{\text{max}})$ | 1239 | 1135 | 1370 | 1237 | 1140 | 1378 | 1135 | 791 | 1312 | 1126 | 802 | 1303
$\rho_{0}(M_{\text{max}})$ | 0.993 | 0.915 | 1.100 | 0.991 | 0.919 | 1.104 | 0.913 | 0.706 | 1.050 | 0.906 | 0.712 | 1.043
$c^{2}_{s_{0}}(M_{\text{max}})$ | 0.488 | 0.465 | 0.503 | 0.488 | 0.463 | 0.503 | 0.495 | 0.355 | 0.521 | 0.498 | 0.366 | 0.528
$R(2M_{\odot})$ | 12.01 | 11.43 | 12.37 | 12.00 | 11.46 | 12.35 | 12.68 | 11.72 | 13.49 | 12.77 | 11.78 | 13.57
$P_{0}(2M_{\odot})$ | 199 | 148 | 339 | 195 | 150 | 321 | 145 | 86 | 275 | 138 | 86 | 272
$\epsilon_{0}(2M_{\odot})$ | 742 | 598 | 1086 | 732 | 602 | 1047 | 621 | 483 | 947 | 605 | 447 | 940
$\rho_{0}(2M_{\odot})$ | 0.714 | 0.601 | 0.908 | 0.711 | 0.609 | 0.897 | 0.613 | 0.502 | 0.844 | 0.586 | 0.464 | 0.830
$c^{2}_{s_{0}}(2M_{\odot})$ | 0.471 | 0.465 | 0.479 | 0.471 | 0.465 | 0.478 | 0.473 | 0.463 | 0.478 | 0.473 | 0.464 | 0.481
$R(1.8M_{\odot})$ | 12.28 | 11.64 | 12.64 | 12.28 | 11.61 | 12.66 | 13.17 | 12.00 | 13.76 | 13.19 | 12.03 | 13.76
$P_{0}(1.8M_{\odot})$ | 126 | 103 | 175 | 125 | 104 | 179 | 88 | 63 | 148 | 86 | 63 | 145
$\epsilon_{0}(1.8M_{\odot})$ | 586 | 499 | 742 | 583 | 503 | 756 | 483 | 362 | 668 | 475 | 363 | 661
$\rho_{0}(1.8M_{\odot})$ | 0.555 | 0.484 | 0.683 | 0.553 | 0.487 | 0.693 | 0.465 | 0.358 | 0.621 | 0.457 | 0.358 | 0.615
$c^{2}_{s_{0}}(1.8M_{\odot})$ | 0.443 | 0.432 | 0.451 | 0.443 | 0.430 | 0.451 | 0.447 | 0.363 | 0.497 | 0.449 | 0.385 | 0.497
$R(1.4M_{\odot})$ | 12.42 | 11.81 | 12.93 | 12.41 | 11.78 | 12.94 | 13.57 | 12.09 | 13.57 | 13.57 | 12.20 | 13.57
$P_{0}(1.4M_{\odot})$ | 64 | 55 | 79 | 63 | 55 | 80 | 37 | 37 | 69 | 38 | 37 | 68
$\epsilon_{0}(1.4M_{\odot})$ | 441 | 399 | 508 | 441 | 400 | 511 | 307 | 306 | 467 | 329 | 306 | 468
$\rho_{0}(1.4M_{\odot})$ | 0.437 | 0.400 | 0.498 | 0.436 | 0.401 | 0.502 | 0.310 | 0.309 | 0.461 | 0.332 | 0.309 | 0.460
$c^{2}_{s_{0}}(1.4M_{\odot})$ | 0.426 | 0.407 | 0.437 | 0.426 | 0.407 | 0.437 | 0.399 | 0.309 | 0.447 | 0.401 | 0.295 | 0.451
Table 4: The median and the 90% CL of NS properties for all sets: maximum mass
$M_{\text{max}}$ ($M_{\odot}$); transition baryon density
$\rho_{\text{trans}}$ (fm-3); radius $R_{\text{i}}$ (km), central pressure
$P_{0,i}$ (MeV fm-3), central energy density $\epsilon_{0,i}$ (MeV fm-3),
central baryonic density $\rho_{0,i}$ (fm-3) and central speed of sound
squared $c^{2}_{s_{0}}$ at $M_{i}$, with
$i=[\text{max},2M_{\odot},1.8M_{\odot},1.4M_{\odot}]$.
## V Conclusions
The main objective of the present study is to understand the possibility
that NS have a deconfined quark phase, considering a quark phase described
by phenomenological microscopic models and avoiding agnostic descriptions.
For the hadron phase, we use two extreme hadron EOS from [25]: the BMPF 220
(soft) and the BMPF 260 (stiff) that satisfy nuclear matter properties and
observational constraints. For the quark phase, we consider the chirally
symmetric SU(3) Nambu-Jona-Lasinio (NJL) model with 4-quark and 8-quark
interaction terms, and the Mean Field Theory of QCD (MFTQCD) which results
from the QCD Lagrangian, considering a gluon field decomposition in soft and
hard momentum components [46, 63, 64].
Based on these models we have generated eight different sets choosing two
hadron EOS and two quark models, and considering/not considering the very
high density pQCD constraints. Bayesian inference is applied to determine the
quark models coupling constants and a constant bag term $B$. We have imposed
as constraints the observational data from PSR J0030+0451 [56, 57] and PSR
J0740+6620 [58, 59]. Besides, we have also imposed that the phase transition
should occur around $\rho_{\text{trans}}\approx 0.15-0.4\text{ fm}^{-3}$. In
four sets, we have also enforced the pQCD constraint at $\rho_{B}=7n_{0}$
(where $n_{0}=0.16$ fm-3) [60]. We have confirmed that the current
observational data are compatible with the existence of a quark core inside
NS. The pQCD constraints were important to constrain the NJL model to give
causal EOS, since the NJL 8-quark term which is responsible for allowing two
solar mass stars with a large quark core may give rise to a super-luminous
speed of sound. For both quark models we have obtained for the maximum star
mass a value of the order of 2.1–2.3$M_{\odot}$ and similar radii. However,
with MFTQCD quark model it was possible to obtain medium and low mass NS with
a smaller radius, in particular, below 12 km (even compatible with the compact
object HESS J1731-347 [73]). With MFTQCD we have obtained a transition to
quark matter above $\sim 0.15$ fm-3 but within NJL the transition density was
above $\sim 0.25$ fm-3. This explains why within NJL the medium and low mass
stars have a larger radius. Note that in [38], only four quark interactions
were introduced in the NJL Lagrangian density, and, as a consequence, quark
cores were only found in the most massive stars. In [31], multiquark
interactions were considered in the NJL model, however, the main objective of
the study was to determine under which conditions a NS “third family” with a
disconnected hybrid star branch exists. In our analysis, the multiquark
interactions considered do not predict a third family, since we did not
include an eight-quark scalar-pseudoscalar interaction because we did not want
to affect the vacuum properties.
We have discussed the behavior of the normalized matter trace anomaly and the
derived quantity $d_{c}$ proposed to measure conformality in [22] and could
conclude that NS may have a quark core but within the present description,
quark matter is still strongly interacting matter and the conformal limit is
not attained in the NS center. Besides, the multiquark vector interactions may
give rise to values of $d_{c}>0.2$ and $\gamma>1.7$ in a deconfined phase.
In future work, we will also apply Bayesian inference to the hadronic phase.
In that way, we will cover the full range of equations of state. We also plan
to compare with purely hadronic sets and analyze whether hybrid stars are more
probable than hadronic stars based on the current observational data.
## Acknowledgments
M.A. expresses sincere gratitude to the FCT for their generous support through
Ph.D. grant number 2022.11685.BD. This research received partial funding from
national sources through FCT (Fundação para a Ciência e a Tecnologia, I.P,
Portugal) for projects UIDB/04564/2020 and UIDP/04564/2020, identified by DOI
10.54499/UIDB/04564/2020 and 10.54499/UIDP/04564/2020, respectively, as well
as for project 2022.06460.PTDC with DOI 10.54499/2022.06460.PTDC. The authors
acknowledge the Laboratory for Advanced Computing at the University of Coimbra
for providing HPC resources that have contributed to the research results
reported within this paper, URL: https://www.uc.pt/lca.
## Appendix A Parameter and phase transition posteriors
In this section, we present some additional results that support our
discussion. Figures 9 and 10 show the corner plots of the NJL and MFTQCD
models parameters, respectively. Each corner shows a set with the pQCD
constraint (blue or orange) and the same set without the constraint (black).
Figure 11 shows the probability distribution of the baryonic density of the
phase transition. Plots are arranged as in Figs. 9 and 10.
Figure 9: Corner plots of the parameter values of NJL soft (left) and NJL stiff (right). Models with pQCD constraints are plotted in blue and without pQCD in black.
Figure 10: Corner plots of the parameter values of MFTQCD soft (left) and MFTQCD stiff (right). Models with pQCD constraints are plotted in orange and without pQCD in black.
Figure 11: The phase transition likelihood ($\mathcal{L}^{\rm PhT}$) and the
marginalized posteriors. The upper figures show the graphs for the NJL model
and the lower ones for the MFTQCD model. On the left, we have the soft models
and on the right, the stiff models. Models with pQCD constraints are plotted
in blue (NJL) or orange (MFTQCD) and without pQCD in black. Dotted lines show
the imposed constraint on the phase transition ($\mathcal{L}^{\text{PhT}}$).
## References
* Fukushima and Hatsuda [2011] K. Fukushima and T. Hatsuda, The phase diagram of dense QCD, Rept. Prog. Phys. 74, 014001 (2011), arXiv:1005.4814 [hep-ph] .
* Bazavov _et al._ [2012] A. Bazavov _et al._ , The chiral and deconfinement aspects of the QCD transition, Phys. Rev. D 85, 054503 (2012), arXiv:1111.1710 [hep-lat] .
* Glendenning [1997] N. K. Glendenning, _Compact stars: Nuclear physics, particle physics, and general relativity_ (1997).
* Rezzolla _et al._ [2018] L. Rezzolla, P. Pizzochero, D. I. Jones, N. Rea, and I. Vidaña, eds., _The Physics and Astrophysics of Neutron Stars_, Vol. 457 (Springer, 2018).
* Kurkela _et al._ [2010] A. Kurkela, P. Romatschke, and A. Vuorinen, Cold Quark Matter, Phys. Rev. D 81, 105021 (2010), arXiv:0912.1856 [hep-ph] .
* Hebeler _et al._ [2013] K. Hebeler, J. M. Lattimer, C. J. Pethick, and A. Schwenk, Equation of state and neutron star properties constrained by nuclear physics and observation, Astrophys. J. 773, 11 (2013), arXiv:1303.4662 [astro-ph.SR] .
* Drischler _et al._ [2016] C. Drischler, K. Hebeler, and A. Schwenk, Asymmetric nuclear matter based on chiral two- and three-nucleon interactions, Phys. Rev. C 93, 054314 (2016), arXiv:1510.06728 [nucl-th] .
* Son and Stephanov [2001] D. T. Son and M. A. Stephanov, QCD at finite isospin density, Phys. Rev. Lett. 86, 592 (2001), arXiv:hep-ph/0005225 .
* Ebert and Klimenko [2006] D. Ebert and K. G. Klimenko, Pion condensation in electrically neutral cold matter with finite baryon density, Eur. Phys. J. C 46, 771 (2006), arXiv:hep-ph/0510222 .
* Barducci _et al._ [2004] A. Barducci, R. Casalbuoni, G. Pettini, and L. Ravagli, A Calculation of the QCD phase diagram at finite temperature, and baryon and isospin chemical potentials, Phys. Rev. D 69, 096004 (2004), arXiv:hep-ph/0402104 .
* Alford _et al._ [1998] M. G. Alford, K. Rajagopal, and F. Wilczek, QCD at finite baryon density: Nucleon droplets and color superconductivity, Phys. Lett. B 422, 247 (1998), arXiv:hep-ph/9711395 .
* Mishra and Mishra [2004] A. Mishra and H. Mishra, Chiral symmetry breaking, color superconductivity and color neutral quark matter: A Variational approach, Phys. Rev. D 69, 014014 (2004), arXiv:hep-ph/0306105 .
* Bonanno and Sedrakian [2012] L. Bonanno and A. Sedrakian, Composition and stability of hybrid stars with hyperons and quark color-superconductivity, Astron. Astrophys. 539, A16 (2012), arXiv:1108.0559 [astro-ph.SR] .
* Abhishek and Mishra [2021] A. Abhishek and H. Mishra, Chiral Symmetry Breaking, Color Superconductivity, and Equation of State for Magnetized Strange Quark Matter, Springer Proc. Phys. 261, 593 (2021).
* Alford _et al._ [1999] M. G. Alford, K. Rajagopal, and F. Wilczek, Color flavor locking and chiral symmetry breaking in high density QCD, Nucl. Phys. B 537, 443 (1999), arXiv:hep-ph/9804403 .
* Mannarelli _et al._ [2006] M. Mannarelli, K. Rajagopal, and R. Sharma, Testing the Ginzburg-Landau approximation for three-flavor crystalline color superconductivity, Phys. Rev. D 73, 114012 (2006), arXiv:hep-ph/0603076 .
* Rajagopal and Sharma [2006] K. Rajagopal and R. Sharma, The Crystallography of Three-Flavor Quark Matter, Phys. Rev. D 74, 094019 (2006), arXiv:hep-ph/0605316 .
* Annala _et al._ [2020] E. Annala, T. Gorda, A. Kurkela, J. Nättilä, and A. Vuorinen, Evidence for quark-matter cores in massive neutron stars, Nature Phys. 16, 907 (2020), arXiv:1903.09121 [astro-ph.HE] .
* Altiparmak _et al._ [2022] S. Altiparmak, C. Ecker, and L. Rezzolla, On the Sound Speed in Neutron Stars, Astrophys. J. Lett. 939, L34 (2022), arXiv:2203.14974 [astro-ph.HE] .
* Somasundaram _et al._ [2023] R. Somasundaram, I. Tews, and J. Margueron, Perturbative QCD and the neutron star equation of state, Phys. Rev. C 107, L052801 (2023), arXiv:2204.14039 [nucl-th] .
* Fujimoto _et al._ [2022] Y. Fujimoto, K. Fukushima, L. D. McLerran, and M. Praszalowicz, Trace Anomaly as Signature of Conformality in Neutron Stars, Phys. Rev. Lett. 129, 252702 (2022), arXiv:2207.06753 [nucl-th] .
* Annala _et al._ [2023] E. Annala, T. Gorda, J. Hirvonen, O. Komoltsev, A. Kurkela, J. Nättilä, and A. Vuorinen, Strongly interacting matter exhibits deconfined behavior in massive neutron stars, Nature Commun. 14, 8451 (2023), arXiv:2303.11356 [astro-ph.HE] .
* Baym and Chin [1976] G. Baym and S. Chin, Can a neutron star be a giant MIT bag?, Physics Letters B 62, 241 (1976).
* Oertel _et al._ [2017] M. Oertel, M. Hempel, T. Klähn, and S. Typel, Equations of state for supernovae and compact stars, Rev. Mod. Phys. 89, 015007 (2017), arXiv:1610.03361 [astro-ph.HE] .
* Malik _et al._ [2023] T. Malik, M. Ferreira, M. B. Albino, and C. Providência, Spanning the full range of neutron star properties within a microscopic description, Physical Review D 107, 10.1103/physrevd.107.103018 (2023).
* Malik _et al._ [2022] T. Malik, M. Ferreira, B. K. Agrawal, and C. Providência, Relativistic description of dense matter equation of state and compatibility with neutron star observables: A bayesian approach, The Astrophysical Journal 930, 17 (2022).
* Logoteta _et al._ [2013] D. Logoteta, C. Providência, and I. Vidaña, Formation of hybrid stars from metastable hadronic stars, Phys. Rev. C 88, 055802 (2013), arXiv:1311.0618 [nucl-th] .
* Benic _et al._ [2015] S. Benic, D. Blaschke, D. E. Alvarez-Castillo, T. Fischer, and S. Typel, A new quark-hadron hybrid equation of state for astrophysics - I. High-mass twin compact stars, Astron. Astrophys. 577, A40 (2015), arXiv:1411.2856 [astro-ph.HE] .
* Ferreira _et al._ [2021] M. Ferreira, R. C. Pereira, and C. Providência, Hybrid stars with large strange quark cores constrained by GW170817, Physical Review D 103, 10.1103/physrevd.103.123020 (2021).
* Matsuoka _et al._ [2018] H. Matsuoka, Y. Tsue, J. a. Da Providência, C. Providência, and M. Yamamura, Hybrid stars from the NJL model with a tensor-interaction, Phys. Rev. D 98, 074027 (2018), arXiv:1806.04377 [hep-ph] .
* Alvarez-Castillo _et al._ [2016] D. Alvarez-Castillo, A. Ayriyan, S. Benic, D. Blaschke, H. Grigorian, and S. Typel, New class of hybrid EoS and Bayesian M-R data analysis, Eur. Phys. J. A 52, 69 (2016), arXiv:1603.03457 [nucl-th] .
* Han and Steiner [2019] S. Han and A. W. Steiner, Tidal deformability with sharp phase transitions in binary neutron stars, Phys. Rev. D 99, 083014 (2019).
* Schramm _et al._ [2016] S. Schramm, V. Dexheimer, and R. Negreiros, Modelling hybrid stars in quark-hadron approaches, The European Physical Journal A 52, 10.1140/epja/i2016-16014-5 (2016).
* Parisi _et al._ [2020] A. Parisi, C. V. Flores, C. H. Lenzi, C.-S. Chen, and G. Lugones, Hybrid stars in the light of the merging event gw170817 (2020), arXiv:2009.14274 [astro-ph.HE] .
* Tonetto and Lugones [2020] L. Tonetto and G. Lugones, Discontinuity gravity modes in hybrid stars: Assessing the role of rapid and slow phase conversions, Phys. Rev. D 101, 123029 (2020).
* Bastian [2021] N.-U. F. Bastian, Phenomenological quark-hadron equations of state with first-order phase transitions for astrophysical applications, Phys. Rev. D 103, 023001 (2021).
* Minamikawa _et al._ [2021] T. Minamikawa, T. Kojo, and M. Harada, Quark-hadron crossover equations of state for neutron stars: Constraining the chiral invariant mass in a parity doublet model, Phys. Rev. C 103, 045205 (2021).
* Pfaff _et al._ [2022] A. Pfaff, H. Hansen, and F. Gulminelli, Bayesian analysis of the properties of hybrid stars with the Nambu–Jona-Lasinio model, Phys. Rev. C 105, 035802 (2022), arXiv:2112.09595 [astro-ph.HE] .
* Xie and Li [2021] W.-J. Xie and B.-A. Li, Bayesian inference of the dense-matter equation of state encapsulating a first-order hadron-quark phase transition from observables of canonical neutron stars, Phys. Rev. C 103, 035802 (2021), arXiv:2009.13653 [nucl-th] .
* Ayriyan _et al._ [2021] A. Ayriyan, D. Blaschke, A. G. Grunfeld, D. Alvarez-Castillo, H. Grigorian, and V. Abgaryan, Bayesian analysis of multimessenger M-R data with interpolated hybrid EoS, Eur. Phys. J. A 57, 318 (2021), arXiv:2102.13485 [astro-ph.HE] .
* Takatsy _et al._ [2023] J. Takatsy, P. Kovacs, G. Wolf, and J. Schaffner-Bielich, What neutron stars tell about the hadron-quark phase transition: A Bayesian study, Phys. Rev. D 108, 043002 (2023), arXiv:2303.00013 [astro-ph.HE] .
* Klevansky [1992] S. P. Klevansky, The Nambu-Jona-Lasinio model of quantum chromodynamics, Rev. Mod. Phys. 64, 649 (1992).
* Hatsuda and Kunihiro [1994] T. Hatsuda and T. Kunihiro, QCD phenomenology based on a chiral effective Lagrangian, Phys. Rept. 247, 221 (1994), arXiv:hep-ph/9401310 .
* Buballa and Oertel [1999] M. Buballa and M. Oertel, Strange quark matter with dynamically generated quark masses, Phys. Lett. B 457, 261 (1999), arXiv:hep-ph/9810529 .
* Ferreira _et al._ [2020a] M. Ferreira, R. Câmara Pereira, and C. Providência, Quark matter in light neutron stars, Phys. Rev. D 102, 083030 (2020a), arXiv:2008.12563 [nucl-th] .
* Fogaca and Navarra [2011] D. A. Fogaca and F. S. Navarra, Gluon condensates in a cold quark–gluon plasma, Phys. Lett. B 700, 236 (2011), arXiv:1012.5266 [hep-ph] .
* Nambu and Jona-Lasinio [1961a] Y. Nambu and G. Jona-Lasinio, Dynamical Model of Elementary Particles Based on an Analogy with Superconductivity. 1., Phys. Rev. 122, 345 (1961a).
* Nambu and Jona-Lasinio [1961b] Y. Nambu and G. Jona-Lasinio, Dynamical model of elementary particles based on an analogy with superconductivity. II., Phys. Rev. 124, 246 (1961b).
* Buballa [2005a] M. Buballa, NJL model analysis of quark matter at large density, Phys. Rept. 407, 205 (2005a), arXiv:hep-ph/0402234 .
* Schertler _et al._ [1999] K. Schertler, S. Leupold, and J. Schaffner-Bielich, Neutron stars and quark phases in the NJL model, Phys. Rev. C 60, 025801 (1999), arXiv:astro-ph/9901152 .
* Baldo _et al._ [2003] M. Baldo, M. Buballa, F. Burgio, F. Neumann, M. Oertel, and H. J. Schulze, Neutron stars and the transition to color superconducting quark matter, Phys. Lett. B 562, 153 (2003), arXiv:nucl-th/0212096 .
* Menezes and Providencia [2003] D. P. Menezes and C. Providencia, Warm stellar matter with deconfinement: Application to compact stars, Phys. Rev. C 68, 035804 (2003), arXiv:nucl-th/0308041 .
* Shovkovy _et al._ [2003] I. Shovkovy, M. Hanauske, and M. Huang, Nonstrange hybrid compact stars with color superconducting matter, Phys. Rev. D 67, 103004 (2003), arXiv:hep-ph/0303027 .
* Pagliara and Schaffner-Bielich [2008] G. Pagliara and J. Schaffner-Bielich, Stability of CFL cores in Hybrid Stars, Phys. Rev. D 77, 063004 (2008), arXiv:0711.1119 [astro-ph] .
* Ferreira _et al._ [2020b] M. Ferreira, R. Câmara Pereira, and C. Providência, Neutron stars with large quark cores, Phys. Rev. D 101, 123030 (2020b), arXiv:2005.10543 [nucl-th] .
* Riley _et al._ [2019] T. E. Riley _et al._ , A $NICER$ View of PSR J0030+0451: Millisecond Pulsar Parameter Estimation, Astrophys. J. Lett. 887, L21 (2019), arXiv:1912.05702 [astro-ph.HE] .
* Miller _et al._ [2019] M. C. Miller _et al._ , PSR J0030+0451 Mass and Radius from $NICER$ Data and Implications for the Properties of Neutron Star Matter, Astrophys. J. Lett. 887, L24 (2019), arXiv:1912.05705 [astro-ph.HE] .
* Riley _et al._ [2021] T. E. Riley _et al._ , A NICER View of the Massive Pulsar PSR J0740+6620 Informed by Radio Timing and XMM-Newton Spectroscopy, Astrophys. J. Lett. 918, L27 (2021), arXiv:2105.06980 [astro-ph.HE] .
* Miller _et al._ [2021] M. C. Miller _et al._ , The Radius of PSR J0740+6620 from NICER and XMM-Newton Data, Astrophys. J. Lett. 918, L28 (2021), arXiv:2105.06979 [astro-ph.HE] .
* Komoltsev and Kurkela [2022] O. Komoltsev and A. Kurkela, How Perturbative QCD Constrains the Equation of State at Neutron-Star Densities, Phys. Rev. Lett. 128, 202701 (2022), arXiv:2111.05350 [nucl-th] .
* Buballa [2005b] M. Buballa, NJL-model analysis of dense quark matter, Physics Reports 407, 205 (2005b).
* Olive [2014] K. Olive, Review of particle physics, Chinese Physics C 38, 090001 (2014).
* Franzon _et al._ [2012] B. Franzon, D. A. Fogaca, F. S. Navarra, and J. E. Horvath, Self-bound Interacting QCD Matter in Compact Stars, Phys. Rev. D 86, 065031 (2012), arXiv:1203.6090 [astro-ph.SR] .
* Albino _et al._ [2021] M. B. Albino, R. Fariello, and F. S. Navarra, Tidal Deformability of Quark Stars with Repulsive Interactions, Phys. Rev. D 104, 083011 (2021), arXiv:2106.12956 [nucl-th] .
* Celenza and Shakin [1986] L. S. Celenza and C. M. Shakin, Description of the gluon condensate, Phys. Rev. D 34, 1591 (1986).
* Li and Shakin [2005] X. Li and C. M. Shakin, Description of gluon propagation in the presence of an ${A}^{2}$ condensate, Phys. Rev. D 71, 074007 (2005).
* Serot and Walecka [1986] B. D. Serot and J. D. Walecka, The Relativistic Nuclear Many Body Problem, Adv. Nucl. Phys. 16, 1 (1986).
* Tezuka [1987] H. Tezuka, Mean Field Approximation to QCD. 1. Selfconsistent equations of motion, (1987).
* Landry _et al._ [2020] P. Landry, R. Essick, and K. Chatziioannou, Nonparametric constraints on neutron star matter with existing and upcoming gravitational wave and pulsar observations, Phys. Rev. D 101, 123007 (2020), arXiv:2003.04880 [astro-ph.HE] .
* Buchner _et al._ [2014] J. Buchner, A. Georgakakis, K. Nandra, L. Hsu, C. Rangel, M. Brightman, A. Merloni, M. Salvato, J. Donley, and D. Kocevski, X-ray spectral modelling of the AGN obscuring region in the CDFS: Bayesian model selection and catalogue, Astronomy & Astrophysics 564, A125 (2014), arXiv:1402.0004 [astro-ph.HE] .
* Buchner [2023] J. Buchner, Nested sampling methods, Statistics Surveys 17, 10.1214/23-ss144 (2023).
* Ashton _et al._ [2019] G. Ashton, M. Hübner, P. D. Lasky, C. Talbot, K. Ackley, S. Biscoveanu, Q. Chu, A. Divakarla, P. J. Easter, B. Goncharov, F. H. Vivanco, J. Harms, M. E. Lower, G. D. Meadors, D. Melchor, E. Payne, M. D. Pitkin, J. Powell, N. Sarin, R. J. E. Smith, and E. Thrane, Bilby: A user-friendly bayesian inference library for gravitational-wave astronomy, The Astrophysical Journal Supplement Series 241, 27 (2019).
* Doroshenko _et al._ [2022] V. Doroshenko, V. Suleimanov, G. Pühlhofer, and A. Santangelo, A strangely light neutron star within a supernova remnant, Nature Astronomy 6, 1444 (2022).
* Abbott _et al._ [2018] B. P. Abbott _et al._ (LIGO Scientific, Virgo), GW170817: Measurements of neutron star radii and equation of state, Phys. Rev. Lett. 121, 161101 (2018), arXiv:1805.11581 [gr-qc] .
* Tolman [1939] R. C. Tolman, Static solutions of Einstein’s field equations for spheres of fluid, Phys. Rev. 55, 364 (1939).
* Oppenheimer and Volkoff [1939] J. R. Oppenheimer and G. M. Volkoff, On massive neutron cores, Phys. Rev. 55, 374 (1939).
# Improving Robustness of Language Models from a Geometry-aware Perspective
Bin Zhu1 , Zhaoquan Gu1,2 , Le Wang1,2, Jinyin Chen3, Qi Xuan3
1 Cyberspace Institute of Advanced Technology (CIAT), Guangzhou University,
Guangzhou 510006, China
2 Institute of Cyberspace Platform, Peng Cheng Laboratory, Shenzhen 999077,
China
3 Institute of Cyberspace Security, Zhejiang University of Technology,
Hangzhou 310023, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Corresponding author
###### Abstract
Recent studies have found that removing the norm-bounded projection and
increasing search steps in adversarial training can significantly improve
robustness. However, we observe that a too large number of search steps can
hurt accuracy. We aim to obtain strong robustness efficiently using fewer
steps. Through a toy experiment, we find that perturbing the clean data to the
decision boundary but not crossing it does not degrade the test accuracy.
Inspired by this, we propose friendly adversarial data augmentation (FADA) to
generate friendly adversarial data. On top of FADA, we propose geometry-aware
adversarial training (GAT) to perform adversarial training on friendly
adversarial data so that we can save a large number of search steps.
Comprehensive experiments across two widely used datasets and three pre-
trained language models demonstrate that GAT can obtain stronger robustness
via fewer steps. In addition, we provide extensive empirical results and in-
depth analyses on robustness to facilitate future studies.
## 1 Introduction
Deep neural networks (DNNs) have achieved great success on many natural
language processing (NLP) tasks Kim (2014); Vaswani et al. (2017); Devlin et
al. (2019). However, recent studies Szegedy et al. (2013); Goodfellow et al.
(2015) have shown that DNNs are vulnerable to crafted adversarial examples.
For instance, an attacker can mislead an online sentiment analysis system by
making minor changes to the input sentences Papernot et al. (2016); Liang et
al. (2017). It has raised concerns among researchers about the security of
DNN-based NLP systems. As a result, a growing number of studies are focusing
on enhancing robustness to defend against textual adversarial attacks Jia et
al. (2019); Ye et al. (2020); Jones et al. (2020); Zhu et al. (2020).
Existing adversarial defense methods fall into two categories: empirical and
certified defenses. Empirical defenses include gradient-based adversarial
training (AT) and discrete adversarial data augmentation (ADA). Certified
defenses provide a provable guaranteed robustness boundary for NLP models.
This work focuses on empirical defenses.
There was a common belief that gradient-based AT methods in NLP were
ineffective compared with ADA in defending against textual adversarial attacks
Li and Qiu (2021); Si et al. (2021). Li et al. (2021) find that removing the
norm-bounded projection and increasing the number of search steps in
adversarial training can significantly improve robustness. Nonetheless, we
observe that increasing the number of search steps further does not
significantly improve robustness but hurts accuracy.
Figure 1: The clean accuracy achieved with ADA, FADA, and the original
training set. During training, both ADA and FADA have close to 100% accuracy.
However, ADA only achieves $\sim$15% accuracy during testing while FADA
maintains the same test accuracy with the original training set. This
indicates that training data which crosses the decision boundary hurts the
accuracy significantly. Figure 2: Illustration of GAT. Our GAT can save many
search steps since friendly adversarial examples are located near the decision
boundary.
We give a possible explanation from a geometry-aware perspective. Removing the
norm-bounded projection enlarges the search space. Appropriately increasing the
number of search steps brings the adversarial data closer to the decision
boundary. In this case, the model learns a robust decision boundary. Further
increasing the number of search steps can make the adversarial data cross the
decision boundary too far, hindering the training of natural data and hurting
natural accuracy.
To verify our hypothesis, we train a base model using adversarial data, which
are generated by adversarial word substitution (AWS) on the SST-2 Socher et
al. (2013) dataset. We report its training accuracy (“ada training acc”) on
adversarial data and test accuracy (“ada test acc”) on the clean test set in
Figure 1. Although achieving nearly 100% training accuracy, its test accuracy
is only about 15%, which implies that the adversarial data degrade the test
performance. Then we train another base model, whose training data is
more “friendly”. We just recover their last modified words to return to the
correct class, namely friendly adversarial data augmentation (FADA). It means
that only one word is different in each sentence. Surprisingly, it achieves a
high test accuracy of $\sim$93%.
This preliminary experiment inspired us to address two existing problems:
* •
The number of search steps is always large, which is computationally
inefficient.
* •
An excessively large number of steps leads to degraded test performance.
Geometrically speaking, the friendly adversarial data are close to the ideal
decision boundary. We can address the above two issues in one fell swoop if we
perform gradient-based adversarial training on these friendly adversarial
data. It is like we start one step before the end, allowing us to obtain
strong robustness through a tiny number of search steps. We name it geometry-
aware adversarial training (GAT). Figure 2 illustrates our proposed GAT.
In addition, the friendly adversarial data only need to be generated once per
dataset. They can be reused, so this is computationally efficient. They can
also be updated every iteration or epoch, but this is computationally expensive.
Our contributions are summarized as follows:
* 1)
We propose FADA to generate friendly adversarial data which are close to the
decision boundary (but not crossing it).
* 2)
We propose GAT, a geometry-aware adversarial training method that adds FADA to
the training set and performs gradient-based adversarial training.
* 3)
GAT is computationally efficient, and it outperforms state-of-the-art
baselines even when using the simplest FGM. We further provide extensive
ablation studies and in-depth analyses on GAT, contributing to a better
understanding of robustness.
## 2 Related Work
### 2.1 Standard Adversarial Training
Let $f_{\theta}(x)$ be our neural network, $\mathcal{L}(f_{\theta}(x),y)$ be
the loss function (e.g., cross entropy), where $x\in X$ is the input data and
$y\in Y$ is the true label. The learning objective of standard adversarial
training is
$\mathop{\min}_{\theta}\mathbb{E}_{(X,Y)\sim
D}\left[\mathop{\max}_{\|\delta\|\leq\epsilon}\mathcal{L}(f_{\theta}(X+\delta),y)\right],$
(1)
where $D$ is the data distribution, $\delta$ is the minor perturbation,
$\epsilon$ is the allowed perturbation size. To optimize the intractable min-
max problem, we search for the optimal $\delta$ to maximize the inner loss and
then minimize the outer loss w.r.t the parameters $\theta$, step by step.
The gradient $g$ of the inner loss w.r.t the input $x$ is used to find the
optimal perturbation $\delta$. Goodfellow et al. (2015) proposed fast gradient
sign method (FGSM) to obtain $\delta$ by one step:
$\delta=\epsilon\cdot sgn(g),$ (2)
where $sgn(\cdot)$ is the signum function. Madry et al. (2018) proposed
projected gradient descent (PGD) to solve the inner maximization as follows:
$\delta^{\left(t+1\right)}=\Pi_{\|\delta\|\leq\epsilon}\left(\delta^{(t)}+\alpha\cdot g^{(t)}/\|g^{(t)}\|\right),\forall t\geq 0,$ (3)
where $\alpha>0$ is the step size (i.e., adversarial learning rate), $\Pi$ is
the projection function that projects the perturbation onto the
$\epsilon$-norm ball. Conventionally PGD stops after a predefined number of
search steps $K$, namely PGD-$K$. In addition, TRADES Zhang et al. (2019),
MART Wang et al. (2020) and FAT Zhang et al. (2020) are also effective
adversarial training methods for boosting model robustness.
Regarding FAT, the authors propose to stop adversarial training in a
predefined number of steps after crossing the decision boundary, which is a
little different from our definition of “friendly”.
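For concreteness, here is a minimal PyTorch-style sketch of the PGD-$K$ inner maximization of Eq. (3) applied in the embedding space. The model wrapper (taking embeddings of shape (batch, seq_len, dim) and returning logits) and the hyper-parameter values are assumptions made for illustration; setting `project=False` corresponds to removing the norm-bounded projection, as in FreeLB++.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, embeds, labels, eps=1.0, alpha=0.1, steps=3, project=True):
    """PGD-K inner maximization on the (continuous) embedding space (sketch)."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(embeds + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # normalized gradient-ascent step on delta, per example
            g_norm = grad.view(grad.size(0), -1).norm(dim=1).clamp_min(1e-12)
            delta = delta + alpha * grad / g_norm.view(-1, 1, 1)
            if project:
                # project back onto the epsilon-ball, per example
                d_norm = delta.view(delta.size(0), -1).norm(dim=1).clamp_min(1e-12)
                delta = delta * (eps / d_norm).clamp(max=1.0).view(-1, 1, 1)
        delta = delta.detach().requires_grad_(True)
    return (embeds + delta).detach()
```

The perturbed embeddings returned here are then fed back into the outer minimization over the model parameters $\theta$.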
### 2.2 Adversarial Training in NLP
Gradient-based adversarial training has significantly improved model
robustness in vision, while researchers find it helps generalization in NLP.
Miyato et al. (2017) find that adversarial and virtual adversarial training
have good regularization performance. Sato et al. (2018) propose an
interpretable adversarial training method that generates reasonable
adversarial texts in the embedding space and enhances model performance. Zhu
et al. (2020) develop FreeLB to improve natural language understanding.
There is also a lot of work focused on robustness. Wang et al. (2021) improve
model robustness from an information theoretic perspective. Dong et al. (2021)
use a convex hull to capture and defend against adversarial word
substitutions. Zhou et al. (2021) train robust models by augmenting training
data using Dirichlet Neighborhood Ensemble (DNE).
Besides, adversarial data augmentation is another effective approach to
improve robustness Ebrahimi et al. (2018); Li et al. (2019); Ren et al.
(2019); Jin et al. (2019); Zang et al. (2020); Li et al. (2020); Garg and
Ramakrishnan (2020); Si et al. (2021). However, it only works when the
augmentation happens to be generated by the same attacking method and often
hurts accuracy.
It is worth noting that recent empirical results have shown that previous
gradient-based adversarial training methods have little effect on defending
against textual adversarial attacks Li et al. (2021); Si et al. (2021). The
authors benchmark existing defense methods and conclude that gradient-based AT
can achieve the strongest robustness by removing the norm bounded projection
and increasing the search steps.
## 3 Methodology
### 3.1 Friendly Adversarial Data Augmentation
Algorithm 1 Friendly Adversarial Data Augmentation (FADA)
0: The original text $x$, ground truth label $y_{true}$, base model
$f_{\theta}$, adversarial word substitution function $AWS(\cdot)$
0: The friendly adversarial example $x_{f}$
1: Initialization:
2: $x_{f}\leftarrow x$
3: the last modified word $w^{*}$ $\leftarrow$ None
4: the last modified index $i^{*}$ $\leftarrow$ 0
5: $x_{adv},w^{*},i^{*}=AWS(x,y_{true},f_{\theta})$
6: if $w^{*}=$ None then
7: return $x_{f}$
8: end if
9: Replace $w_{i^{*}}$ in $x_{adv}$ with $w^{*}$
10: $x_{f}\leftarrow x_{adv}$
11: return $x_{f}$
For a sentence $x\in X$ with a length of $n$, it can be denoted as
$x=w_{1}w_{2}...w_{i}...w_{n-1}w_{n}$, where $w_{i}$ is the $i$-th word in
$x$. Its adversarial counterpart $x_{adv}$ can be denoted as
$w_{1}^{\prime}w_{2}^{\prime}...w_{i}^{\prime}...w_{n-1}^{\prime}w_{n}^{\prime}$.
In this work, $x_{adv}$ is generated by adversarial word substitution, so
$x_{adv}$ has the same length with $x$. Conventional adversarial data
augmentation generates adversarial data fooling the victim model and mixes
them with the original training set. As we claim in section 1, these
adversarial data can hurt test performance. An interesting and critical
question is when it becomes detrimental to test accuracy.
Algorithm 2 Ideal Geometry-aware Adversarial Training (GAT)
0: Our base network $f_{\theta}$, cross entropy loss $\mathcal{L}_{CE}$,
training set $D=\\{x_{i},y_{i}\\}_{i=1}^{n}$, number of epochs $T$, batch size
$m$, number of batches $M$
0: robust network $f_{\theta}$
1: for epoch = 1 to $T$ do
2: for batch = 1 to $M$ do
3: Sample a mini-batch $b=\\{(x_{i},y_{i})\\}_{i=1}^{m}$
4: for all $x_{i}$ in $b$ do
5: Generate friendly adversarial example $x_{i}^{f}$ via Algorithm 1
6: Apply an adversarial training method (e.g., FreeLB++) on both $x_{i}$ and
$x_{i}^{f}$ to obtain their adversarial counterpart $\widetilde{x}_{i}$ and
$\widetilde{x}_{i}^{f}$
7: end for
8: Update $f_{\theta}$ via
$\nabla_{x}\mathcal{L}_{CE}(f_{\theta}(\widetilde{x}_{i}),y_{i})$ and
$\nabla_{x}\mathcal{L}_{CE}(f_{\theta}(\widetilde{x}_{i}^{f}),y_{i})$
9: end for
10: end for
One straightforward idea is to recover all the $x_{adv}$ to $x$ word by word
and evaluate their impact on test accuracy. We train models only with these
adversarial data and test models with the original test set. Remarkably, the
test accuracy immediately returns to the normal level when we recover the last
modified word. We denote these data with only one word recovered as
$x_{f}$. Geometrically, the only difference between $x_{adv}$ and $x_{f}$ is
whether they have crossed the decision boundary.
To conclude, when the adversarial data cross the decision boundary, they
become incredibly harmful to the test performance. We name all the $x_{f}$ as
friendly adversarial examples (FAEs) because they improve model robustness
without hurting accuracy. Similarly, we name the generation of FAEs as
friendly adversarial data augmentation (FADA). We show our proposed FADA in
Algorithm 1.
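A minimal sketch of Algorithm 1 is given below, assuming a hypothetical `aws` helper that performs adversarial word substitution and also reports the index of the last modified word together with the original word at that position:

```python
def fada(x, y_true, model, aws):
    """Friendly adversarial data augmentation (sketch of Algorithm 1).

    `aws(x, y_true, model)` is a hypothetical helper returning the adversarial
    sentence (a list of words), the original word that was modified last, and
    its index, or (x, None, 0) when no adversarial example is found.
    """
    x_adv, last_word, last_idx = aws(x, y_true, model)
    if last_word is None:            # attack failed: keep the clean sentence
        return list(x)
    x_f = list(x_adv)
    x_f[last_idx] = last_word        # recover the last substitution: back to the correct class
    return x_f
```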
### 3.2 Geometry-aware Adversarial Training
#### 3.2.1 Seeking the optimal $\delta$
Recall the inner maximization issue of the learning objective in Eq. (1). Take
PGD-$K$ as an instance. It divides the search for the optimal perturbation
$\delta$ into $K$ search steps, and each step requires a backpropagation (BP),
which is computationally expensive.
We notice that random initialization of $\delta^{0}$ is widely used in
adversarial training, where $\delta^{0}$ is always confined to a
$\epsilon$-ball centered at $x$. However, we initialize the clean data via
discrete adversarial word substitution in NLP. It is similar to data
augmentation (DA), with the difference that we perturb clean data in the
direction towards the decision boundary, whereas the direction of data
augmentation is random.
By doing so, we decompose the $\delta$ into two parts, which can be obtained
by word substitution and gradient-based adversarial training, respectively. We
denote them as $\delta_{l}$ and $\delta_{s}$. Therefore, the inner
maximization can be reformulated as
$\mathop{\max}_{\|\delta_{l}+\delta_{s}\|\leq\epsilon}\mathcal{L}(f_{\theta}(X+\delta_{l}+\delta_{s}),y).$
(4)
We aim to find the maximum $\delta_{l}$ that helps improve robustness without
hurting accuracy. As we claim in Section 3.1, FADA generates friendly
adversarial data which are close to the decision boundary. Furthermore, the
model trained with these friendly adversarial data keeps the same test
accuracy as the original training set (Figure 1). Therefore we find the
maximum $\delta_{l}$ which is harmless to the test accuracy through FADA.
Denote $X_{f}$ as the friendly adversarial data generated by FADA, Eq. (4) can
be reformulated as
$\mathop{\max}_{\|\delta_{s}\|\leq\epsilon}\mathcal{L}(f_{\theta}(X_{f}+\delta_{s}),y).$
(5)
The tiny $\delta_{s}$ can be obtained by some gradient-based adversarial
training methods (e.g., FreeLB++ Li et al. (2021)) in few search steps. As a
result, a large number of search steps are saved to accelerate adversarial
training. We show our proposed geometry-aware adversarial training in
Algorithm 2.
#### 3.2.2 Final Learning Objective
It is computationally expensive to update friendly adversarial data for every
mini-batch. In practice, we generate static augmentation ($X_{f}$,Y) for the
training dataset (X,Y) and find it works well with GAT. The static
augmentation ($X_{f}$,Y) is reusable. Therefore, GAT is computationally
efficient.
Through such a tradeoff, our final objective function can be formulated as
$\mathcal{L}=\mathcal{L}_{CE}(X,Y,\theta)+\mathcal{L}_{CE}(\widetilde{X},Y,\theta)+\mathcal{L}_{CE}(\widetilde{X}_{f},Y,\theta),$ (6)
where $\mathcal{L}_{CE}$ is the cross entropy loss, $\widetilde{X}$ and
$\widetilde{X}_{f}$ are generated from $X$ and $X_{f}$ using gradient-based
adversarial training methods, respectively.
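A minimal sketch of one training step implementing Eq. (6) is shown below. The embedding-level model wrapper and the `perturb` routine (e.g. one FGM step or a few FreeLB++ steps) are assumptions made for illustration, not the exact implementation used in our experiments.

```python
import torch.nn.functional as F

def gat_step(model, optimizer, x, x_f, y, perturb):
    """One GAT update on a mini-batch (sketch of Eq. (6)).

    `x` / `x_f` are embeddings of the clean and friendly adversarial sentences,
    and `perturb(model, embeds, y)` returns their gradient-based adversarial
    counterparts (e.g. via FGM or a few FreeLB++ steps).
    """
    x_tilde = perturb(model, x, y)       # adversarial counterpart of clean data
    x_f_tilde = perturb(model, x_f, y)   # adversarial counterpart of friendly data

    loss = (F.cross_entropy(model(x), y)
            + F.cross_entropy(model(x_tilde), y)
            + F.cross_entropy(model(x_f_tilde), y))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```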
## 4 Experiments
### 4.1 Datasets
We conduct experiments on the SST-2 Socher et al. (2013) and IMDb Maas et al.
(2011) datasets which are widely used for textual adversarial learning.
Statistical details are shown in Table 1. We use the GLUE Wang et al. (2019)
version of the SST-2 dataset whose test labels are unavailable. So we report
its accuracy on the develop set in our experiments.
Dataset | # train | # dev / test | avg. length
---|---|---|---
SST-2 | 67349 | 872 | 17
IMDb | 25000 | 25000 | 201
Table 1: Summary of the two datasets.
SST-2 | Clean % | TextFooler | TextBugger | BAE
---|---|---|---|---
RA % | ASR % | # Query | RA % | ASR % | # Query | RA % | ASR % | # Query
BERTbase | 92.4 | 32.8 | 64.1 | 72.8 | 38.5 | 57.8 | 44.3 | 39.8 | 56.5 | 64.0
ADA | 92.2 | 46.7 | 48.7 | 79.4 | 42.0 | 53.9 | 47.0 | 41.2 | 54.8 | 64.0
ASCC | 87.2 | 32.0 | 63.3 | 71.6 | 27.8 | 68.2 | 42.5 | 41.7 | 52.1 | 63.0
DNE | 86.6 | 26.5 | 69.6 | 69.0 | 23.4 | 73.1 | 40.2 | 44.2 | 49.3 | 65.8
InfoBERT | 92.2 | 41.7 | 54.8 | 74.9 | 45.2 | 51.1 | 45.8 | 45.4 | 50.8 | 65.6
TAVAT | 92.2 | 40.4 | 56.3 | 74.3 | 42.3 | 54.2 | 45.7 | 42.7 | 53.8 | 64.2
FreeLB | 93.1 | 42.7 | 53.7 | 75.9 | 48.2 | 47.7 | 45.7 | 46.7 | 49.3 | 67.5
FreeLB++$10$ | 93.3 | 41.9 | 54.8 | 75.8 | 46.1 | 50.3 | 45.9 | 44.2 | 52.4 | 65.3
FreeLB++$30$ | 93.4 | 45.6 | 50.6 | 78.1 | 47.4 | 48.8 | 45.7 | 42.9 | 53.6 | 66.0
FreeLB++$50$ | 92.0 | 45.5 | 50.4 | 77.2 | 47.4 | 48.4 | 45.3 | 44.6 | 51.4 | 67.5
GATFGM (ours) | 92.8 | 45.8 | 49.8 | 78.5 | 49.0 | 46.3 | 47.0 | 45.5 | 50.1 | 64.9
GAT${}_{FreeLB++}10$ (ours) | 93.2 | 49.5 | 46.3 | 80.6 | 52.4 | 43.2 | 47.9 | 48.3 | 46.9 | 68.9
GAT${}_{FreeLB++}30$ (ours) | 92.7 | 52.5 | 42.2 | 82.3 | 53.8 | 40.9 | 47.5 | 46.1 | 50.0 | 65.8
Table 2: Main defense results on the SST-2 dataset, including the test
accuracy on the clean test set (Clean %), the robust accuracy under
adversarial attacks (RA %), the attack success rate (ASR %), and the average
number of queries requiring by the attacker (# Query).
### 4.2 Attacking Methods
Following Li et al. (2021), we adopt TextFooler Jin et al. (2019), TextBugger Li
et al. (2019) and BAE Garg and Ramakrishnan (2020) as attackers. TextFooler
and BAE are word-level attacks and TextBugger is a multi-level attacking
method. We also impose restrictions on these attacks for a fair comparison,
including:
* 1.
The maximum percentage of perturbed words $p_{max}$
* 2.
The minimum semantic similarity $\varepsilon_{min}$ between the original input
and the generated adversarial example
* 3.
The maximum size $K_{syn}$ of one word’s synonym set
Since the average sentence length of IMDb and SST-2 are different, $p_{max}$
is set to 0.1 and 0.15, respectively; $\varepsilon_{min}$ is set to 0.84; and
$K_{syn}$ is set to 50. All settings are referenced from previous work.
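As a simple post-hoc check of restrictions 1 and 2 on a word-substitution adversarial example, one may use the sketch below. The semantic similarity function is left abstract (e.g. a sentence-encoder cosine similarity), and restriction 3 is enforced when the attack builds its candidate set rather than checked here; the default thresholds mirror the SST-2 values above.

```python
def respects_restrictions(x, x_adv, similarity, p_max=0.15, eps_min=0.84):
    """Check restrictions 1 and 2 for a word-level adversarial example.

    `x` and `x_adv` are word lists of equal length (substitution-only attacks);
    `similarity(x, x_adv)` is assumed to return a semantic similarity in [0, 1].
    """
    n_perturbed = sum(w != w_adv for w, w_adv in zip(x, x_adv))
    if n_perturbed / max(len(x), 1) > p_max:   # restriction 1: perturbation ratio
        return False
    if similarity(x, x_adv) < eps_min:         # restriction 2: semantic similarity
        return False
    return True
```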
### 4.3 Adversarial Training Baselines
We use BERTbase Devlin et al. (2019) as the base model to evaluate the impact
of the following variants of adversarial training on accuracy and robustness
and provide a comprehensive comparison with our proposed GAT.
* •
Adversarial Data Augmentation
* •
ASCC Dong et al. (2021)
* •
DNE Zhou et al. (2021)
* •
InfoBERT Wang et al. (2021)
* •
TAVAT Li and Qiu (2021)
* •
FreeLB Zhu et al. (2020)
* •
FreeLB++ Li et al. (2021)
ASCC and DNE adopt a convex hull during training. InfoBERT improves robustness
using mutual information. TAVAT establishes a token-aware robust training
framework. FreeLB++ removes the norm bounded projection and increases search
steps.
We only compare GAT with adversarial training-based defense methods and leave
comparisons with other defense methods (e.g., certified defenses) for future
work.
IMDb | Clean % | TextFooler | TextBugger | BAE
---|---|---|---|---
RA % | ASR % | # Query | RA % | ASR % | # Query | RA % | ASR % | # Query
BERTbase | 91.2 | 30.7 | 66.4 | 714.4 | 38.9 | 57.4 | 490.3 | 36.0 | 60.6 | 613.6
ADA | 91.4 | 34.6 | 61.7 | 804.8 | 40.5 | 55.2 | 538.8 | 37.0 | 59.1 | 693.4
ASCC | 86.4 | 22.2 | 73.9 | 595.9 | 27.2 | 68.0 | 415.8 | 34.7 | 59.1 | 642.2
DNE | 86.1 | 14.9 | 82.2 | 520.2 | 17.4 | 79.3 | 336.9 | 35.4 | 57.8 | 630.4
InfoBERT | 91.9 | 33.0 | 63.9 | 694.1 | 40.4 | 55.8 | 469.9 | 37.3 | 59.2 | 619.6
TAVAT | 91.5 | 37.8 | 58.9 | 1082.6 | 48.8 | 46.9 | 695.5 | 41.2 | 55.2 | 896.7
FreeLB | 91.3 | 34.6 | 61.9 | 782.0 | 42.9 | 52.7 | 542.7 | 37.6 | 58.5 | 646.7
FreeLB++-$10$ | 92.1 | 39.5 | 56.8 | 817.9 | 46.4 | 49.3 | 516.5 | 41.2 | 55.0 | 682.3
FreeLB++-$30$ | 92.3 | 49.8 | 45.6 | 992.9 | 56.0 | 38.8 | 600.1 | 48.3 | 47.2 | 788.2
FreeLB++-$50$ | 92.3 | 50.2 | 45.3 | 1117.7 | 56.5 | 38.5 | 649.8 | 48.2 | 47.5 | 861.3
GATFGM (ours) | 91.8 | 58.3 | 36.0 | 1004.3 | 60.4 | 33.7 | 556.1 | 54.6 | 40.1 | 747.4
GAT${}_{FreeLB++}10$ (ours) | 92.0 | 50.7 | 44.7 | 1093.8 | 54.7 | 40.4 | 648.9 | 50.7 | 44.7 | 908.5
GAT${}_{FreeLB++}30$ (ours) | 92.4 | 59.0 | 35.7 | 1629.4 | 62.2 | 32.2 | 914.8 | 54.4 | 40.7 | 1213.6
Table 3: Main defense results on the IMDb dataset.
### 4.4 Implementation Details
We implement ASCC, DNE, InfoBERT, and TAVAT models based on TextDefender Li et
al. (2021). We implement FGM, FreeLB, FreeLB++, and our GAT based on
HuggingFace Transformers (https://huggingface.co/transformers). We implement
ADA and FADA based on TextAttack Morris et al. (2020)
(https://github.com/QData/TextAttack). All adversarial hyper-parameter
settings follow the original papers. All models are trained on two GeForce RTX
2080 GPUs and eight Tesla T4 GPUs.
Regarding the training settings and hyper-parameters, the optimizer is AdamW
Loshchilov and Hutter (2019); the learning rate is $2e^{-5}$; the number of
epochs is $10$; the batch size is $64$ for SST-2 and $24$ for IMDb; the
maximum sentence length kept for all the models is 40 for SST-2 and 200 for
IMDb.
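For concreteness, the following minimal sketch shows how these settings could be expressed with HuggingFace Transformers; the checkpoint name and output path are illustrative placeholders, and the GAT-specific adversarial steps and data loading are omitted.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments

# Hypothetical setup mirroring the reported hyper-parameters (SST-2 values shown).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

training_args = TrainingArguments(
    output_dir="./gat_sst2",            # illustrative output path
    learning_rate=2e-5,                 # optimizer defaults to AdamW
    num_train_epochs=10,
    per_device_train_batch_size=64,     # 24 for IMDb
)
MAX_LENGTH = 40                          # maximum sentence length (200 for IMDb)
```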
### 4.5 Main Results
Our proposed GAT can easily be combined with other adversarial training methods.
In our experiments, we combine GAT with FGM (GATFGM) and FreeLB++
(GATFreeLB++), respectively. We aim to evaluate whether GAT can bring improvements
to the simplest (FGM) and the most effective (FreeLB++) AT methods.
We summarize the main defense results on the SST-2 dataset in Table 2. When
GAT works with the simplest adversarial training method, FGM, the resulting
robustness improvement exceeds FreeLB++$50$. The effectiveness and efficiency
of GAT allow us to obtain strong robustness while saving many search steps.
Further combining GAT with FreeLB++ obtains stronger robustness and
outperforms all other methods.
Regarding accuracy, FreeLB++$30$ obtains the highest score (93.4%). GAT also
significantly improves accuracy.
In addition, ADA is effective in improving robustness but hurts accuracy. It
is not surprising that ASCC and DNE suffer significant losses in clean
accuracy. However, they show no improvement in robustness and are even less
robust under the TextFooler and TextBugger attacks than the other methods.
AWS | AT method | Clean % | RA % | #Query
---|---|---|---|---
None | None | 92.4 | 38.5 | 44.3
None | FGM | 92.5 | 39.6 | 44.7
None | FreeLB++30 | 93.4 | 47.4 | 45.7
ADA | None | 92.2 | 42.0 | 47.0
ADA | FGM | 91.3 | 42.7 | 46.6
ADA | FreeLB++30 | 90.9 | 51.5 | 47.5
FADA | None | 92.7 | 44.4 | 45.8
FADA | FGM | 92.8 | 49.0 | 47.0
FADA | FreeLB++30 | 92.7 | 53.8 | 47.5
Table 4: Ablation studies on the SST-2 dataset. The attacking method is
TextBugger. We only report RA % and #Query due to the space limit. “AWS” means
adversarial word substitution methods.
Table 3 shows the defense results on the IMDb dataset. The defense
performance is generally consistent with that on the SST-2 dataset. It is
worth noting that GATFGM achieves an extremely high RA % with a medium #Query,
which needs further exploration.
Figure 3: (a) Robust and clean accuracy with different search steps. (b)
Robust and clean accuracy with different step sizes. (c) Robust accuracy
gradually increases on the SST-2 dataset during training. The adversarial
training method is GAT${}_{FreeLB++}30$. Zoom in for a better view.
## 5 Discussions
We further explore other factors that affect robustness and provide
comprehensive empirical results.
### 5.1 Ablation Studies
We conduct ablation studies on the SST-2 dataset to assess the impact of each
component of GAT.
SST-2 | Clean % | PSO | FastGA
---|---|---|---
RA % | #Query | RA % | #Query
BERTbase | 92.4 | 23.9 | 322.0 | 39.2 | 234.4
ADA | 92.2 | 31.4 | 348.6 | 43.2 | 268.4
ASCC | 87.2 | 29.2 | 359.4 | 40.5 | 233.2
DNE | 86.6 | 17.3 | 266.2 | 43.9 | 250.1
InfoBERT | 92.2 | 29.0 | 335.7 | 45.3 | 256.0
TAVAT | 92.2 | 25.7 | 316.2 | 42.0 | 258.7
FreeLB | 93.1 | 27.8 | 325.6 | 42.9 | 267.9
FreeLB++$50$ | 92.0 | 38.4 | 368.6 | 49.2 | 258.9
GATFGM | 92.8 | 29.9 | 341.0 | 46.7 | 275.1
GAT${}_{FreeLB++}10$ | 93.2 | 34.5 | 351.3 | 51.0 | 289.5
GAT${}_{FreeLB++}30$ | 92.8 | 39.7 | 359.2 | 53.7 | 323.9
Table 5: The defense results of different AT methods against two combinatorial
optimization attacks. We remove ASR % due to the space limit.
As shown in Table 4, “FADA” consistently outperforms “ADA” and “None” with
different adversarial training methods. Furthermore, “FADA&FGM” achieves a
higher RA % than “None&FreeLB++$30$”, which implies that “FADA” can obtain
strong robustness in a single adversarial search step. “ADA” also helps improve
robustness. However, as the number of search steps increases, so does the
damage it does to Clean %. On the contrary, “FADA” does not harm Clean % but
improves it, implying its friendliness.
### 5.2 Results with Other Attacks
We have shown that GAT brings significant improvement in robustness against
three greedy-based attacks. We investigate whether GAT is effective under
combinatorial optimization attacks, such as PSO Zang et al. (2020) and FastGA
Jia et al. (2019).
We can see from Table 5 that GAT${}_{FreeLB++}30$ obtains the highest RA %
against the two attacks and GAT${}_{FreeLB++}10$ has the highest clean
accuracy. The results demonstrate that our proposed GAT consistently
outperforms other defenses against combinatorial optimization attacks.
### 5.3 Results with More Steps
As we claim in Section 1, accuracy should degrade with a large number of
search steps. But what happens to robustness?
We aim to see whether RA % can be further improved. Figure 3 shows that the RA %
gradually increases against the TextFooler and TextBugger attacks. However, RA %
decreases against BAE with more than 30 steps, which needs more investigation.
As the steps increase, the growth rate of RA % decreases, and the Clean %
decreases. We conclude that a reasonable number of steps will be good for both
RA % and Clean %. It is unnecessary to search for too many steps since
robustness grows very slowly in the late adversarial training period while
accuracy drops.
### 5.4 Impact of Step Size
A large step size (i.e., adversarial learning rate) will cause performance
degradation for conventional adversarial training. Nevertheless, what impact
does it have on robustness? We explore the impact of different step sizes on
robustness and accuracy. As shown in Figure 3, the clean test accuracy
slightly drops as the step size increases. The robust accuracy under the
TextFooler attack increases, while the robust accuracy under the TextBugger and
BAE attacks decreases. Overall, the impact of step size on robustness needs
further study.
### 5.5 Impact of Training Epochs
Ishida et al. (2020) have shown that preventing further reduction of the
training loss when reaching a small value and keeping training can help
generalization. In adversarial training, it is naturally hard to achieve zero
training loss due to the insufficient capacity of the model Zhang et al.
(2021).
SST-2 | Clean % | TextFooler | TextBugger | BAE
---|---|---|---|---
RA % | ASR % | # Query | RA % | ASR % | # Query | RA % | ASR % | # Query
RoBERTabase | 93.0 | 38.8 | 58.0 | 74.5 | 41.4 | 55.2 | 45.5 | 40.3 | 56.4 | 63.6
GATFGM | 91.4 | 47.6 | 47.7 | 78.6 | 49.8 | 45.3 | 46.3 | 42.7 | 53.2 | 65.3
GAT${}_{FreeLB++}30$ | 93.2 | 52.1 | 43.7 | 95.5 | 54.2 | 41.3 | 55.8 | 47.0 | 49.1 | 76.9
Table 6: Defense results on RoBERTa model on the SST-2 dataset.
SST-2 | Clean % | TextFooler | TextBugger | BAE
---|---|---|---|---
RA % | ASR % | # Query | RA % | ASR % | # Query | RA % | ASR % | # Query
DeBERTabase | 94.6 | 53.7 | 43.4 | 79.5 | 55.1 | 42.0 | 48.7 | 49.8 | 47.5 | 66.8
GATFGM | 94.5 | 54.6 | 42.1 | 82.6 | 57.7 | 38.8 | 50.0 | 48.9 | 48.2 | 66.7
GAT${}_{FreeLB++}30$ | 94.7 | 60.4 | 35.7 | 83.4 | 62.0 | 33.9 | 51.2 | 52.2 | 44.4 | 69.9
Table 7: Defense results on DeBERTa model on the SST-2 dataset.
Therefore, we investigate whether more training iterations result in stronger
robustness in adversarial training. We report the RA % achieved by
GAT${}_{FreeLB++}30$ at each epoch in Figure 3. We observe that the RA % tends
to improve slowly, implying that more training iterations result in stronger
model robustness using GAT.
### 5.6 Results with Other Models
We show that GAT can work on more advanced models. We choose RoBERTabase Liu
et al. (2019) and DeBERTabase He et al. (2021), two improved versions of BERT,
as the base models. As shown in Table 6 and Table 7, GAT slightly improves the
robustness of the RoBERTa and DeBERTa models.
### 5.7 Limitations
We discuss the limitations of this work as follows.
* •
As we clarify in Section 3.2.2, instead of dynamically generating friendly
adversarial data in training, we choose to pre-generate static augmentation.
We do this for efficiency, as dynamically generating discrete sentences in
training is computationally expensive. Although it still significantly
improves robustness in our experiments, such a tradeoff may lead to failure
because the decision boundary changes continuously during training.
* •
GAT performs adversarial training on friendly adversarial data. It may help if
we consider the decision boundaries when performing gradient-based adversarial
training—for example, stopping early when the adversarial data crosses the
decision boundary. We consider this as one of the directions for future work.
## 6 Conclusion
In this paper, we study how to improve robustness from a geometry-aware
perspective. We first propose FADA to generate friendly adversarial data that
are close to the decision boundary. Then we combine gradient-based adversarial
training methods on FADA to save a large number of search steps, termed
geometry-aware adversarial training (GAT). GAT can efficiently achieve state-
of-the-art defense performance without hurting test accuracy.
We conduct extensive experiments to give in-depth analysis, and we hope this
work can provide helpful insights on robustness in NLP.
## Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful
suggestions and comments. This work is supported in part by the National
Natural Science Foundation of China under Grant No. 61902082 and 61976064, and
the Guangdong Key R&D Program of China 2019B010136003.
## References
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186.
* Dong et al. (2021) Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net.
* Ebrahimi et al. (2018) Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 31–36.
* Garg and Ramakrishnan (2020) Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: bert-based adversarial examples for text classification. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 6174–6181. Association for Computational Linguistics.
* Goodfellow et al. (2015) Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_.
* He et al. (2021) Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net.
* Ishida et al. (2020) Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020\. Do we need zero training loss after achieving zero training error? In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 4604–4614. PMLR.
* Jia et al. (2019) Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4129–4142, Hong Kong, China. Association for Computational Linguistics.
* Jin et al. (2019) Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. _arXiv e-prints_ , page arXiv:1907.11932.
* Jones et al. (2020) Erik Jones, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020. Robust encodings: A framework for combating adversarial typos. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2752–2765, Online. Association for Computational Linguistics.
* Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1746–1751.
* Li et al. (2019) Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In _26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019_. The Internet Society.
* Li et al. (2020) Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6193–6202.
* Li and Qiu (2021) Linyang Li and Xipeng Qiu. 2021. Token-aware virtual adversarial training in natural language understanding. In _Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021_ , pages 8410–8418. AAAI Press.
* Li et al. (2021) Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021. Searching for an effective defender: Benchmarking defense against adversarial word substitution. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 3137–3147, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Liang et al. (2017) Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017\. Deep text classification can be fooled. _CoRR_ , abs/1704.08006.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. _CoRR_ , abs/1907.11692.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _International Conference on Learning Representations_.
* Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In _Proceedings of ACL_ , pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
* Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Representations_.
* Miyato et al. (2017) Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net.
* Morris et al. (2020) John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020\. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 119–126, Online. Association for Computational Linguistics.
* Papernot et al. (2016) Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016\. Crafting adversarial input sequences for recurrent neural networks. In _2016 IEEE Military Communications Conference, MILCOM 2016, Baltimore, MD, USA, November 1-3, 2016_ , pages 49–54. IEEE.
* Ren et al. (2019) Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1085–1097.
* Sato et al. (2018) Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden_ , pages 4323–4330. ijcai.org.
* Si et al. (2021) Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2021. Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 1569–1576, Online. Association for Computational Linguistics.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of EMNLP_ , pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
* Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , pages 5998–6008.
* Wang et al. (2019) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net.
* Wang et al. (2021) Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2021. Infobert: Improving robustness of language models from an information theoretic perspective. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net.
* Wang et al. (2020) Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. 2020\. Improving adversarial robustness requires revisiting misclassified examples. In _International Conference on Learning Representations_.
* Ye et al. (2020) Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A structure-free approach for certified robustness to adversarial word substitutions. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3465–3475, Online. Association for Computational Linguistics.
* Zang et al. (2020) Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 6066–6080.
* Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In _Proceedings of the 36th International Conference on Machine Learning_ , volume 97 of _Proceedings of Machine Learning Research_ , pages 7472–7482. PMLR.
* Zhang et al. (2020) Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. 2020. Attacks which do not kill training make adversarial learning stronger. In _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 11278–11287. PMLR.
* Zhang et al. (2021) Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan Kankanhalli. 2021. Geometry-aware instance-reweighted adversarial training. In _International Conference on Learning Representations_.
* Zhou et al. (2021) Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang. 2021\. Defense against synonym substitution-based adversarial attacks via dirichlet neighborhood ensemble. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021_ , pages 5482–5492. Association for Computational Linguistics.
* Zhu et al. (2020) Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net.
# Local Differentially Private Fuzzy Counting in Stream Data using
Probabilistic Data Structures
Dinusha Vatsalan, Raghav Bhaskar, and Mohamed Ali Kaafar
Dinusha Vatsalan is with the Faculty of Science and Engineering, Macquarie University, Sydney, Australia. Email: <EMAIL_ADDRESS>
Raghav Bhaskar is with AppsPicket, New Delhi, India. E-mail: <EMAIL_ADDRESS>
Mohamed Ali Kaafar is with the Faculty of Science and Engineering, Macquarie University, Sydney, Australia. Email: <EMAIL_ADDRESS>
Manuscript received May 03, 2022; revised August 09, 2022.
###### Abstract
Privacy-preserving estimation of counts of items in streaming data finds
applications in several real-world scenarios including word auto-correction
and traffic management applications. Recent works of RAPPOR [1] and Apple’s
count-mean sketch (CMS) algorithm [2] propose privacy preserving mechanisms
for count estimation in large volumes of data using probabilistic data
structures like counting Bloom filter and CMS. However, these existing methods
fall short in providing a sound solution for real-time streaming data
applications. Since the size of the data structure in these methods is not
adaptive to the volume of the streaming data, the utility (accuracy of the
count estimate) can suffer over time due to increased false positive rates.
Further, the lookup operation needs to be highly efficient to answer count
estimate queries in real-time. More importantly, the local Differential
privacy mechanisms used in these approaches to provide privacy guarantees come
at a large cost to utility (impacting the accuracy of count estimation).
In this work, we propose a novel (local) Differentially private mechanism that
provides high utility for the streaming data count estimation problem with
similar or even lower privacy budgets while providing: a) fuzzy counting to
report counts of related or similar items (for instance to account for typing
errors and data variations), and b) improved querying efficiency to reduce the
response time for real-time querying of counts. Our algorithm uses a
combination of two probabilistic data structures Cuckoo filter and Bloom
filter. We provide formal proofs for privacy and utility guarantees and
present extensive experimental evaluation of our algorithm using real and
synthetic English words datasets for both the exact and fuzzy counting
scenarios. Our privacy preserving mechanism substantially outperforms the
prior work in terms of lower querying time, significantly higher utility
(accuracy of count estimation) under similar or lower privacy guarantees, at
the cost of communication overhead.
###### Index Terms:
Local Differential privacy, fuzzy counting, real-time querying, Cuckoo filter,
Bloom filter, data streams.
## 1 Introduction
The growing demand for real-time and faster analytics of continuously
generated big volumes of data brings tremendous interest in streaming data
technologies. However, privacy concerns in sharing or revealing personal
information require privacy preserving techniques to be designed for such
technologies. Privacy preserving counting of frequency of items in the data is
useful in several real-time streaming data applications.
An example application is word auto-correction that requires learning the
frequency counts of word entries by many different users such that when a user
enters a common/frequent word (e.g. LOL, according to a frequency threshold),
it will not be auto-corrected [2]. Another example is Web data obfuscation
application for privacy preserving Web browsing, which requires counting the
uniqueness of data entry or click path entered by all users in order to
calculate privacy risks and obfuscate accordingly in real-time [3]. In such
applications, data entries need to be continuously monitored from many users
(note that the frequency needs to be calculated across multiple users, i.e.
how many users have entered a certain data item, not the frequency of a single
user), and the auto-correction or auto-obfuscation function needs to query the
frequency count information from many users in real-time and on-the-fly when a
user actively enters/types a data item/word in an application. However, users'
individual data entries may identify their private and sensitive information,
for example, personal interests, occupation, health, or location, thereby
necessitating the use of privacy preserving techniques in the counting task.
A few methods have been proposed for the count-frequency problem in
streaming data, including the two state-of-the-art methods with provable
privacy guarantees: RAPPOR (Randomized Aggregatable Privacy-Preserving Ordinal
Response), proposed by Google for collecting statistics from end-user client
software [1], and the count-mean sketch-based approach introduced by Apple to
discover the frequency of words or emojis used by users [2] (from now onwards,
we refer to the count-mean sketch-based approach introduced by Apple in [2]
as Apple's algorithm). However, these methods are developed for offline
processing of count querying functions used for analytics purposes and are not
tailored for real-time or online query processing.
processing for count querying functions where the count queries need to be
answered in real-time and online. For example, in the auto-spelling correction
and privacy-aware Web data obfuscation applications (described above), the
decision to auto-correct words and obfuscate data needs to be retrieved by the
corresponding applications in near real-time when a user types a word in a
mobile App or enters data in the Web (e.g. a search query) [3].
Given sufficient time and resources, calculating frequency of items is a
simple task, i.e. just keeping a count of observations for each item in the
data to obtain that item’s frequency. However, in the context of high-scale,
low-latency, and online data processing, counts of items need to be calculated
instantly as the query comes in. The naive approach of randomly sampling the
observations for estimating the counts with the assumption that the sample
generally reflects the properties of the whole is not effective, as ensuring
true randomness is a difficult task. That is where probabilistic data
structures come in to estimate the approximate counts of items in an efficient
way [4, 5, 6]. Probabilistic data structures generally trade space and
computational efficiency for accuracy (false positive rate) of data
processing.
Google’s RAPPOR and Apple’s algorithm use the probabilistic data structures,
counting Bloom filter and count-mean sketch, respectively, for efficient
privacy preserving counting [1, 2]. For real-time count querying applications,
the lookup operational time is most crucial in order to answer the queries in
near real-time. The lookup operations in these probabilistic data structures
are dependent on the number of hash functions used and the size of the
probabilistic data structures. Reducing the number of hash functions and the
size of these probabilistic data structures can improve the lookup operation
in terms of query response time and space, however, it comes at a cost of
utility loss. Further, continuous insertion of data into these probabilistic
data structures increases the probability of collisions between different
items and therefore degrades the false positive rate, which eventually reaches
$1.0$. In fact, our experimental results on large English words datasets show
that the false positive rate of the RAPPOR and CMS methods reaches $1.0$
when approximately $100,000$ records are inserted (as presented in Section 4).
Moreover, in the example applications (described above), the same data
entered/typed by different users might have different variations or forms
(e.g. lemma) or errors (e.g. typos), and therefore exact counting does not
provide good utility of count estimation in these applications. Existing works
in privacy preserving counting allow only exact matching of items when
querying for frequency counts of items [2, 1]. Both [1] and [2] achieve local
Differential privacy guarantees for the data items contributed by the clients
by using variants of the Randomized Response [7] techniques. A key challenge
of using Differential privacy technologies is achieving a good balance between
privacy and utility guarantees.
In this paper, we propose a novel Differentially private counting algorithm
for real-time streaming applications that addresses all the above-described
limitations or shortcomings of the existing methods.
Contributions:
1. 1.
We propose a Differentially private Cuckoo filter for efficient counting. The
lookup operation with cuckoo filter does not depend on the number of hash
functions nor the size of the filter, and thus is more efficient than other
probabilistic data structures in terms of lookup time and space [4].
2. 2.
We overcome the issue of the impact of velocity of data on the utility (false
positive probability) by extending the cuckoo filter to be adaptive to large
volume of data that keeps the false positive rate bounded regardless of the
number of items inserted.
3. 3.
Our algorithm allows efficient and effective fuzzy counting. We combine Bloom
filters [8] with cuckoo filters to allow fuzzy matching for counting of values
that are similar to each other or in a similar range.
4. 4.
In order to reduce the loss in utility due to noise addition, i.e., to achieve
a better trade-off between privacy and utility, we propose a novel local
Differential privacy mechanism for our method that, instead of perturbing the
Bloom filter (encoded item) itself, generates “artificial” Bloom
filters as noise such that an adversary is unable to distinguish a real Bloom
filter from “artificial” ones. In our method, noise is introduced to clients'
input via two steps: choosing the bucket positions of the cuckoo filter and
then choosing the corresponding Bloom filters that need to be sent for each of
those buckets. At most one of these Bloom filters comes from the real item and
all others correspond to “artificial” items. We provide formal proofs for the
privacy and utility guarantees of our method.
5. 5.
Using real and synthetic English word frequency datasets, we conduct an
experimental study of our method and compare it to two state-of-the-art
methods, Google’s RAPPOR [1] and Apple’s algorithm [2]. Our experimental
results show that our method outperforms RAPPOR and Apple’s algorithm by a
large margin in terms of significantly higher accuracy of count estimation
with both exact and fuzzy counting cases (around 60% higher) and lower
querying time (more than one order of magnitude lower cumulative time) with
similar privacy guarantees and insertion/update time. Since answering count
queries instantly and more accurately in online count querying applications
(e.g. word auto-correction or privacy-aware data obfuscation applications) is
critical, our method highly outperforms the state-of-the-art methods for such
applications.
Outline: We provide preliminaries in the following section and describe our
methodology in Section 3. In Section 4 we present the results of our
experimental study and in Section 5 we review the literature of privacy
preserving counting and Cuckoo filter techniques. Finally we conclude and
provide directions to future research in Section 6.
Figure 1: Fuzzy counting of string values (left) and numerical values (middle)
using Bloom filter encoding [6, 9], and fuzzy matching of Bloom filter
segments (right) used in our proposed fuzzy counting method described in
detail in Section 3.3.
## 2 Preliminaries
In this section, we describe some preliminaries of probabilistic data
structures in use, fuzzy counting, and the two state-of-the-art methods that
use probabilistic data structures and local Differential privacy for privacy
preserving counting.
### 2.1 Probabilistic data structures
In the context of high-scale, low-latency and online data processing
(virtually, there is never sufficient time or resources), counts of items
need to be calculated instantly as the data streams in, regardless of its
scale. Hence, instead of accurately keeping track of each item, we estimate
the frequency using probabilistic data structures [4, 5, 6]. Probabilistic
data structures, such as Bloom filters and variants, sketches, and Cuckoo
filters, have recently received much attention, as they are highly efficient
for storing, processing, and computing [2, 1, 4, 5, 6].
Bloom filters: Bloom filters are bit vectors that initially contain $0$ in all
the bit positions. $k$ independent hash functions $h_{i}(\cdot)$ (with $1\leq
i\leq k$) are used to hash-map an element $x$ by setting the corresponding bit
positions in the Bloom filter $b$ to $1$ (i.e. $\forall_{i}~{}b[h_{i}(x)]=1)$.
A Bloom filter allows a tunable false positive rate $fpr$ so that a query
returns either “definitely not” (with no error), or “probably yes” (with
probability $fpr$ of being wrong). The lower $fpr$ is, the better utility is,
but the more space the filter requires. The false positive probability for
encoding $n$ elements into a Bloom filter of length $l$ bits using $k$ hash
functions is $fpr=(1-e^{-kn/l})^{k}$, which is controllable by tuning the
parameters $k$ and $l$.
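To make these operations concrete, the following is a minimal Python sketch of a Bloom filter with $k$ salted hash functions, together with the false positive formula above; the hashing scheme and parameter values are illustrative choices, not those of any particular system discussed here.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter sketch: an l-bit array and k salted hash functions."""

    def __init__(self, l=1000, k=3):
        self.l, self.k = l, k
        self.bits = [0] * l

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        return [int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % self.l
                for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item):
        # "Probably yes" (with false positive probability fpr) or "definitely not".
        return all(self.bits[pos] for pos in self._positions(item))


def expected_fpr(k, n, l):
    """False positive probability after inserting n elements: (1 - e^(-kn/l))^k."""
    return (1 - math.exp(-k * n / l)) ** k
```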
###### Definition 2.1 (Exact vs. fuzzy counting:)
Given a list of items $X$ and a query item $x$, exact counting of $x$ is
$|\forall_{i\in X}~{}if~{}i==x|$, whereas fuzzy counting is $|\forall_{i\in
X}~{}if~{}sim(i,x)\geq s_{t}|$, where $|\cdot|$ denotes the cardinality of the
given set, $sim(\cdot)$ is a similarity function and $s_{t}$ is the minimum
similarity threshold.
The main feature of Bloom filter encoding that makes it applicable to
efficient fuzzy counting is that it preserves the similarity/distance between
an item and a queried item in the Bloom filter space (with a negligible
utility loss) [6, 9]. For example, for string values the $q$-grams (sub-strings
of length $q$) of the string can be hash-mapped into the Bloom filter $b$ using
$k$ independent hash functions [6], while for numerical values the neighbouring
values (within a certain interval, to allow fuzzy matching) can be hash-mapped
into the Bloom filter [9]. Figure 1
illustrates an example of fuzzy counting of string and numerical values using
Bloom filters [6, 9].
The similarity between Bloom filters can be calculated using a token-based
similarity function, such as Jaccard, Dice, or Hamming [10]. For example,
Dice-coefficient is used in Figure 1, which is calculated as
$2\times\frac{\sum(b_{1}\cap b_{2})}{\sum(b_{1})+\sum(b_{2})}$, where $b_{1}$
and $b_{2}$ are the two Bloom filters. Because different elements may collide
on the same bit position during hash-mapping (depending on the parameter
settings), Bloom filter-based matching might result in false positives. With
appropriate parameter settings, Bloom filters have been shown to provide high
matching quality [6, 11, 9]. However, Bloom filters do not allow storing the
counts of items.
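As an illustration of the fuzzy matching just described, the following sketch (reusing the hypothetical BloomFilter class above) encodes the bigrams of a string into a Bloom filter and compares two encodings with the Dice coefficient; the parameters are illustrative.

```python
def qgrams(s, q=2):
    # Sub-strings of length q, e.g. "peter" -> {"pe", "et", "te", "er"}.
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def encode_string(s, l=30, k=2, q=2):
    bf = BloomFilter(l=l, k=k)
    for g in qgrams(s, q):
        bf.add(g)
    return bf

def dice(bf1, bf2):
    # Dice coefficient: 2 * |b1 AND b2| / (|b1| + |b2|).
    common = sum(a & b for a, b in zip(bf1.bits, bf2.bits))
    return 2.0 * common / (sum(bf1.bits) + sum(bf2.bits))

# Similar strings yield a high Dice similarity between their Bloom filters.
print(dice(encode_string("peter"), encode_string("pete")))
```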
Counting Bloom filter is a variation of conventional Bloom filters which
allows storing counts as well as adding, deleting or updating of items. The
filter comprises an array of $t$-bit buckets. When an item is added, the
corresponding counters are incremented, and when it’s removed, the counters
are decremented. Consequently, a counting Bloom filter takes $t$-times more
space than a conventional Bloom filter, and it also has a scalability limit.
Count query for an element $x$ from a counting Bloom filter $cbf$ returns
$min(\forall_{i}~{}cbf[h_{i}(x)])$.
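A minimal sketch of this variant, again building on the hypothetical BloomFilter class above, replaces the bit array with counters and answers count queries with the minimum counter value.

```python
class CountingBloomFilter(BloomFilter):
    """Counters instead of bits; supports add/remove and min-based count queries."""

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] += 1          # increment the item's counters

    def remove(self, item):
        for pos in self._positions(item):
            self.bits[pos] = max(0, self.bits[pos] - 1)

    def count(self, item):
        # Estimated count: the minimum of the item's counters (may over-count).
        return min(self.bits[pos] for pos in self._positions(item))
```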
Figure 2: An overview of the proposed system for privacy preserving fuzzy
online counting in stream data.
Sketches: A sketch is an array that consists of $D$ rows and $W$ cells in each
row, initialized to 0, where $W$ and $D$ indicate the width and depth of the
sketch, respectively. Given a pair of parameters $(\theta,\delta)$, the sketch
parameters can be set as $W=\big{\lceil}e/\theta\big{\rceil}$ and
$D=\big{\lceil}ln(1/\delta)\big{\rceil}$, where $e$ is Euler’s number, and
$\theta$ and $\delta$ mean that the error in answering a query is within a
factor of $\theta$ with probability of $1-\delta$. In a count-min sketch, each
element is hashed by randomly chosen pairwise independent hash functions and
the corresponding cell values are incremented by the count of that element.
The maximum probability of any two different elements being hash mapped to the
same position is $1/W$ for each hash function, where $W$ is the width of the
sketch or the number of cells. The false positive probability $fpr$ for count-
min sketch is therefore, $fpr=[1-(1-1/W)^{n}]^{D}$, where $n$ is the number of
distinct elements stored in the sketch. Count query for an element $x$ from a
sketch $M$ returns $min(\forall_{i}~{}M[i][h_{i}(x)])$. Several methods based
on count-min sketch for the heavy-hitter problem were proposed [12, 13, 14].
The heavy-hitter problem is about identifying frequent elements from data, for
example IPs flooding in a network or over-consumed drugs to monitor disease
outbreak.
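The following is a minimal sketch of a count-min sketch with the $(\theta,\delta)$ parameterization described above; the hash construction is an illustrative choice.

```python
import hashlib
import math

class CountMinSketch:
    """D x W array of counters; the estimate is the minimum over the D rows."""

    def __init__(self, theta=0.001, delta=0.01):
        self.W = math.ceil(math.e / theta)          # width
        self.D = math.ceil(math.log(1.0 / delta))   # depth
        self.table = [[0] * self.W for _ in range(self.D)]

    def _col(self, row, item):
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.W

    def add(self, item, count=1):
        for row in range(self.D):
            self.table[row][self._col(row, item)] += count

    def estimate(self, item):
        return min(self.table[row][self._col(row, item)] for row in range(self.D))
```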
Cuckoo filter: The Cuckoo filter consists of a Cuckoo hash table that stores
the “fingerprints” of items inserted. The fingerprint of an item is a bit
string derived from the hash of that item. A Cuckoo hash table consists of an
array of buckets where an item to be inserted is mapped to two possible
buckets based on a hash function $h(\cdot)$. Each bucket can be configured to
store a variable number of fingerprints. To insert an item $x$ into the Cuckoo
filter, two indices need to be derived from the item based on hashing the item
and its fingerprint:
$\displaystyle i1=h(x),$ (1) $\displaystyle i2=i1\oplus h(fp(x)),$
where $h(\cdot)$ and $fp(\cdot)$ are the hash and fingerprint functions,
respectively.
On obtaining these indices, the item’s fingerprint ($f=fp(x)$) is inserted
into one of the two possible buckets that correspond to the derived indices.
$fp(\cdot)$ is used for space efficiency of the Cuckoo filter (depending on
the fingerprint size) [4]. As the Cuckoo hash table begins to fill up, it
encounters situations where both possible buckets of an item are already full.
In this case, items currently in the Cuckoo hash table are relocated (until the
maximum number of swaps is met) to their alternative buckets to free up space
for inserting the new item. Querying the
count of an item requires checking the two possible buckets for the
fingerprint of the item.
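For illustration, the sketch below implements the two-bucket indexing of Eq. (1) and the relocation ("kicking") procedure; the hash and fingerprint functions are illustrative, and the number of buckets is assumed to be a power of two so that the XOR stays in range.

```python
import hashlib
import random

def _h(value, mod):
    return int(hashlib.sha256(str(value).encode()).hexdigest(), 16) % mod

class CuckooFilter:
    """Buckets of fingerprints; each item has two candidate buckets (Eq. 1)."""

    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500, fp_bits=8):
        self.B, self.bucket_size, self.max_kicks = num_buckets, bucket_size, max_kicks
        self.fp_bits = fp_bits
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, x):
        return 1 + _h(("fp", x), (1 << self.fp_bits) - 1)   # non-zero fingerprint

    def _indices(self, x):
        f = self._fingerprint(x)
        i1 = _h(x, self.B)
        i2 = (i1 ^ _h(f, self.B)) % self.B                   # partial-key cuckoo hashing
        return f, i1, i2

    def insert(self, x):
        f, i1, i2 = self._indices(x)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(f)
                return True
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):                      # relocate stored fingerprints
            victim_pos = random.randrange(len(self.buckets[i]))
            f, self.buckets[i][victim_pos] = self.buckets[i][victim_pos], f
            i = (i ^ _h(f, self.B)) % self.B
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(f)
                return True
        return False   # considered full (this sketch may drop the last evicted fingerprint)

    def contains(self, x):
        f, i1, i2 = self._indices(x)
        return f in self.buckets[i1] or f in self.buckets[i2]
```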
### 2.2 State-of-the-art methods
The two state-of-the-art methods for probabilistic privacy preserving counting
are: (1) Google’s RAPPOR that uses Bloom filter and counting Bloom filter for
collecting statistics about end-user software [1], and (2) Apple’s algorithm
presented in [2] that uses Count-mean Sketches (CMS) to learn frequency of
words, or web domains or emojis to e.g. determine the most popular emoji used
by individuals or to identify high energy and memory usage in Apple’s web
browser. Both of these methods only allow offline and exact count querying.
Cuckoo hashing/Cuckoo filter has been used in several private set intersection
applications [15, 16, 17, 18, 19] due to its significant space and time
efficiency compared to other probabilistic data structures [4]. However,
Cuckoo filter has not been studied for privacy preserving counting.
### 2.3 Differential Privacy
Differential privacy [20, 21, 22] guarantees for each individual in a dataset
that any information that could be discovered about an individual with their
data in the dataset could also, with high probability, be discovered without
their data in the dataset. That is, the output of any query $f$ performed on
dataset $x$ will be indistinguishable from the output of the same query $f$
performed on dataset $y$, where $y$ differs from $x$ by at most one record
(the record of any individual).
###### Definition 2.2 (Differential Privacy [20])
A randomized function $\mathcal{A}$ (i.e. a function with a randomized
component) is $\epsilon$-Differentially private if for all outputs $y\in
Range(\mathcal{A})$ and for all data $x$, $x^{\prime}\in\mathcal{D}^{n}$ such that
$||x-x^{\prime}||_{1}\leq 1$:
$Pr(\mathcal{A}(x)=y)\leq e^{\epsilon}\times Pr(\mathcal{A}(x^{\prime})=y).$
(2)
The general differential privacy notion is defined for algorithms with input
databases of size larger than 1. Local differential privacy (LDP) is a
differential privacy model developed specifically to provide guarantees such
that even if an adversary has access to the individual records/data in the
dataset, the adversary is still unable to learn additional information about
the individual from the individual data with high probability [23]. It ensures
differential privacy guarantees for each individual’s inputs by processing
(perturbing) the data locally on-device rather than processing in the central
server. LDP has become the de-facto privacy standard around the world in
recent years, with the technology companies Google and Apple implementing LDP
in their latest operating systems and applications [1, 2, 24].
Assume two adjacent streams of data where the two data streams differ at most
by one record or item:
###### Definition 2.3 (Local Differential Privacy [20])
Let $\mathcal{A}:\mathcal{D}\rightarrow\mathcal{Y}$ be a randomized algorithm
mapping a data entry in $\mathcal{D}$ to $\mathcal{Y}$. The algorithm
$\mathcal{A}$ is $\epsilon$-local differentially private if for data entry
$x,\neg x\in\mathcal{D}$ and all outputs $y\in\mathcal{Y}$,
$e^{-\epsilon}\leq\frac{Pr[\mathcal{A}(x)=y]}{Pr[\mathcal{A}(\neg
x)=y]}\leq e^{\epsilon}$ (3)
RAPPOR uses the randomized response method to achieve $\epsilon$-LDP by
flipping bits in the Bloom filters of encoded items sent by the data clients
to the server for updates. Apple’s count mean sketch aggregation proposes to
flip the bits in vectors with probability $1/(e^{\epsilon/2}+1)$ to meet
$\epsilon$-LDP guarantees.
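As a small illustration of this bit-flipping step, the sketch below perturbs each position of an encoded vector independently with the flipping probability quoted above; the server-side debiasing of the aggregated counts is omitted.

```python
import math
import random

def flip_bit(bit, epsilon):
    # Flip a single bit with probability 1 / (e^(epsilon/2) + 1).
    p_flip = 1.0 / (math.exp(epsilon / 2.0) + 1.0)
    return 1 - bit if random.random() < p_flip else bit

def perturb_vector(bits, epsilon):
    # Apply the mechanism independently to every position of the encoded vector.
    return [flip_bit(b, epsilon) for b in bits]
```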
## 3 Methodology
In this section, we describe our proposed method for privacy preserving real-
time and fuzzy counting in stream data using Cuckoo filter and Bloom filter.
The proposed system overview is shown in Figure 2. At the client-side, new
items are encoded, perturbed, and sent to the server, while at the server-side
the (perturbed) items are stored in Cuckoo filter which allows fuzzy matching
for count querying (similar) items. We first describe the space and time-
efficient probabilistic data structure, Cuckoo filter, for enhanced efficiency
for processing streams of data, and we make the filter adaptive to continuous
flow of data. We then propose a novel local Differentially private algorithm
for cuckoo hashing of encoded (into Bloom filters) items, and finally, we
describe how Bloom filters can be combined with Cuckoo filters to allow fuzzy
matching for querying of counts in the presence of data errors and variations.
Algorithm 1 outlines the insertion step, Algorithm 2 presents the steps of
local Differential privacy mechanism, while Algorithm 3 outlines the querying
step (lookup operation) using fuzzy matching. Algorithms 1 and 3 are used by
the server while Algorithms 1 and 2 are used by the users/data custodian
before sending the items to the server for insertion/update.
### 3.1 Efficient and Adaptive Filter
Cuckoo filter consists of an array of buckets where an item $x$ is hash-mapped
by inserting the fingerprint ($fp(\cdot)$) of $x$ into one of two possible
buckets based on a hash function $h(\cdot)$, which are $h(x)$ and $h(x)\oplus
h(fp(x))$. In our proposed method, we insert Bloom filter encoding of the item
into the Cuckoo filter (i.e. $x^{\prime}=bf(x)$, where $bf(\cdot)$ is the
Bloom filter encoding function as described in Section 2). Bloom filter
encoding preserves the distances/similarities between items in the original
space, and therefore allows fuzzy matching (as will be discussed in Section
3.3) as well as enables adding noise to perturb data in the differential
privacy mechanism without significant impact on utility loss (as will be
described in Section 3.2). Our method hashes segments of Bloom filters into
the Cuckoo filter (as shown in lines 1-3 in Algorithm 1), such that more
similar items will have a larger number of similar Bloom filter segments hashed
into the Cuckoo filter.
Algorithm 1: Insertion (Client-side and server-side)
---
Input:
\- $x$: A data/element
\- $m$: Number of Bloom filter segments
\- $bf(\cdot)$: Bloom filter encoding function
\- $h(\cdot)$: Hash function
\- $fp(\cdot)$: Fingerprint function
\- $max\\_num\\_kicks$: Maximum number of kicks to relocate items
Output:
\- $\mathbf{C}$: Updated Cuckoo filter
Client-side: |
1: | $x^{\prime}=bf(x)$ | // Bloom filter encoding
2: | $bf\\_segs=x^{\prime}.segment(m)$ | // Split into m segments
3: | $send\\_to\\_server(bf\\_segs)$ | // Send to server
Server-side: |
4: | for $bf\\_seg\in bf\\_segs$ do: | // Iterate segments
5: | $f=fp(bf\\_seg)$ | // fingerprint
6: | $i1=h(bf\\_seg)$ | // First bucket index
7: | $i2=i1\oplus h(f)$ | // Second bucket index
8: | if $\mathbf{C}.bucket[i1]~{}not~{}full$ then |
9: | $\mathbf{C}.bucket[i1].add(f)$ | // Add to first bucket
10: | else if $\mathbf{C}.bucket[i2]~{}not~{}full$ then |
11: | $\mathbf{C}.bucket[i2].add(f)$ | // Add to second bucket
12: | else |
13: | $i=random(i1,i2)$ |
14: | for $n=0;n\leq max\\_num\\_kicks$ do | // Relocate items
15: | $f^{\prime}=random(\mathbf{C}.bucket[i])$ |
16: | $\mathbf{C}.bucket[i].remove(f^{\prime})$; $\mathbf{C}.bucket[i].add(f)$; $f=f^{\prime}$ | // Store f, evict f'
17: | $i=i\oplus h(f)$ |
18: | if $\mathbf{C}.bucket[i]~{}not~{}full$ then |
19: | $\mathbf{C}.bucket[i].add(f)$ | // Add f to bucket i
20: | $break$ | // Exit loop
21: | $insertion=fail$ | // Insertion failed
22: | if $insertion==fail$ then |
23: | $\mathbf{C}.bucket\\_size+=\mathbf{C}.bucket\\_size$ | // If full, adaptive filter
24: | $\mathbf{C}.bucket[i].add(f)$ | // Add evicted fingerprint to the enlarged bucket i
25: | return $\mathbf{C}$ | // Output $\mathbf{C}$
Cuckoo filters have been shown to be more efficient than Counting Bloom
filters and Count-min Sketches in terms of space size and lookup operations
[4]. The space cost, in bits, of storing one item in the Cuckoo filter depends
on the target false positive rate $fpr$ and is given by
$(log_{2}(1/fpr)+2)/\alpha$, where $\alpha$ is the load factor of the filter
which defines the maximum filter capacity. The false positive probability
$fpr$ of Cuckoo filters, i.e. the probability that an element $x$ has the same
fingerprint as another element $y$ that shares one of $x$'s buckets, is
$fpr\approx 2O/2^{F}$, where $O$ is the bucket occupancy (the number of
fingerprints stored per bucket) and $F$ is the fingerprint size. With
increasing $O$, the $fpr$ increases.
To account for the impact of continuous flow of data on the false positive
rate $fpr$, we make Cuckoo filter adaptive with regard to the size of the
buckets in the filter. In contrast to Bloom filters and sketches, cuckoo
filters can be adaptive to increasing volume of data insertions. With Bloom
filters or sketches (as used by RAPPOR [1] and Apple’s algorithm [2],
respectively), it is not trivial to make these data structures adaptive by
increasing their size, since this would require re-hashing all the previously
stored items; as a result, the false positive rate increases with a large
number of insertions and eventually reaches $1.0$. With cuckoo filters, in
contrast, the bucket size can be incrementally increased depending on the
percentage of occupied buckets. If the filter is considered to be mostly
occupied (full), then the length or size of the buckets is increased
adaptively. Upon failure of an element insertion, which happens when $O$ is
large and exceeds $\alpha$ so that no empty slot is found within the maximum
number of swaps, we increment the filter's bucket size by the default size and
continue with the insertion, keeping $O$ below $\alpha$ (lines 22-24 in
Algorithm 1).
The lookup operation for querying an item $x$ in the adaptive Cuckoo filter is
still only in 2 buckets ($h(bf(x))$ and $h(bf(x))\oplus h(f(bf(x)))$), however
the number of lookup items in each bucket increases in the adaptive filter
with the increasing number of records being inserted into the filter.
Algorithm 2: Noise addition for local differential privacy (Client-side)
---
Input:
\- $x$: A data/element
\- $bf(\cdot)$: Bloom filter (BF) encoding function
\- $h(\cdot)$: Hash function
\- $\epsilon$: Privacy budget
\- $s_{t}$: Minimum similarity threshold
\- $p$: Probability to flip bits $p=\frac{1}{1+s.e^{\epsilon}}$ where
$s=\frac{2^{l.(1-s_{t})}.B}{2^{l}}$ (Thm 3.1)
\- $F^{\prime}$: Pre-computed Bloom filters corresponding to each bucket
Output:
\- $V^{\prime}$: Perturbed vector containing Bloom filters
1: | $x^{\prime}=bf(x)$ | //BF of $x$
2: | $i1=h(x^{\prime})$ | //First bucket
3: | $V=\\{-1\in\mathbb{R}^{B}$} | //Initialize a vector
4: | $V[i1]=+1$ |
5: | $N\in\\{-1,+1\\}^{B}$, $Pr[n_{i}=+1]=1-p$ | //Noise vector
6: | $V=[v_{1}n_{1},\cdots,v_{B}n_{B}]$ | //Flipping bits
8: | for $r=1;r\leq B$ do | //Iterate over vector positions
9: | if $V[r]==+1$ and $r==i1$ then |
10: | $\tilde{x^{\prime}}=generate\\_similar\\_bf(x^{\prime},s_{t})$ | //similar BF with $s_{t}$
11: | $V^{\prime}.add(\tilde{x^{\prime}})$ | //Add similar BF $\tilde{x^{\prime}}$
12: | else if $V[r]==+1$ then |
13: | $x"=randomly\\_choose\\_from(F^{\prime}[r])$ | //Artificial BF for bucket $r$
14: | $V^{\prime}.add(x")$ | //Add artificial BFs
15: | return $V^{\prime}$ | //Send $V^{\prime}$ to server
### 3.2 Privacy Guarantees
Similar to [2], the threat model of this research problem is as follows:
encoded data (Bloom filters) of unique/new items from each client or custodian
are continuously being sent to the (untrusted) server to be stored or updated
in the cloud. Note that, in contrast to the problem addressed in [1], each
client needs to report only unique items (i.e. if the item is not already
reported by that client), and therefore the clients locally keep track of the
unique items reported to the server. When a data consumer/query issuer (for
example, a researcher who wants to identify the most popular emojis used by
many users) queries the frequency counts of the items in the cloud from all
users, the estimated counts are revealed. We assume the server is untrusted
and the server knows the hash function used by data custodians.
The privacy preserving context for this problem requires that the server or
the consumer of the protocol (query issuer) should not learn individual users’
membership information of an item from the data, and an eavesdropper who gets
access to the communication channel or to the server should not be able to
learn the items about any individual users. However, the encoded data (Bloom
filters) sent by custodians/clients can leak some information about individual
users by performing a frequency attack [25] or dictionary attack using the
hash encoding functions on some known values [10].
The domain space of the items is generally well-known and some of the items
are considered to be highly sensitive. For example, if the application is
counting the frequency of words entered by data custodians, some words, such
as those related to a cancer illness, are considered highly confidential, and
therefore identifying that a certain data custodian has typed such words
raises serious privacy issues. Bloom
filter encoding can be applied on the known (and sensitive/confidential) items
from the problem domain to match the incoming Bloom filter encodings from
different data custodians in order to infer the presence of a sensitive item,
or they can be inferred using the frequency distribution of bit patterns
(known as cryptanalysis attack [25]).
#### 3.2.1 Local Differential privacy
Perturbing the bits in the Bloom filter itself incurs a significant amount of
utility loss. Even a single bit difference can correspond to completely
different item values. Therefore, in our method clients do not add fake
“artificially-crafted” bits in the Bloom filters, but they send possibly
multiple Bloom filters even when there is a single true item.
Pre-processing: As an offline and one-time pre-processing step, at the client-
side, a dictionary $F^{\prime}$ containing corresponding Bloom filters for each
of the $B$ buckets is generated. All the possible bit patterns of length $l$
with random 1’s and 0’s ($2^{l}$ in total) are generated and assigned to the
two corresponding bucket indices ($i1$ and $i2$) based on the $h(\cdot)$ and
$fp(\cdot)$ functions.
Online processing: For every incoming item the real bucket index is first
represented as a one-hot vector containing 1 at the position corresponding to
the real bucket and -1 elsewhere. The randomised response noise mechanism is
applied to this vector using our flipping probability
$P_{flip}=\frac{1}{1+s.e^{\epsilon}}$, where $s=\frac{2^{l.(1-s_{t})}}{t}$,
and $t=2^{l}/B$, derived from Theorem A.1 (as will be detailed below). Clients
then send a Bloom filter for each bucket whose entry is flipped to +1.
For all the buckets other than the real bucket, a random Bloom filter chosen
from the pre-computed dictionary for that bucket is sent. For the real bucket,
if its entry is not flipped to -1, a Bloom filter chosen randomly at a
similarity between the minimum similarity threshold $s_{t}$ and $1.0$ to the
real Bloom filter is sent.
The steps of our local Differential privacy mechanism are outlined in
Algorithm 2:
1. 1.
For the actual item x, the Bloom filter $x^{\prime}=bf(x)$ and bucket position
$i1=h(x^{\prime})$ are computed (lines 1-2 in Algorithm 2).
2. 2.
A one-hot vector $V$ of size $B$ (where $B$ is the number of buckets in the
cuckoo filter) is derived containing $1$ at position $i1$ and $-1$ everywhere
else, and the randomised response noise mechanism is applied to $V$ using our
flipping probability (derived from Theorem A.1).
3. 3.
A new vector $V^{\prime}$ is constructed from $V$, where all the 1’s are
replaced by Bloom filters chosen as (lines 9-14):
1. (a)
For all buckets other than $i1$, a random Bloom filter chosen from the pre-
computed list $F^{\prime}$ of corresponding Bloom filters (using the
$randomly\\_choose\\_from()$ function) is sent (lines 12-14).
2. (b)
If $i1$ is not flipped (lines 9-11), a similar Bloom filter is generated
(using the function $generate\\_similar\\_bf()$) with a similarity $s_{c}$
chosen randomly from the range of $[s_{t},1.0]$ and sent (where $s_{t}$ is the
minimum similarity threshold to allow fuzzy matching). This Bloom filter is
generated by randomly flipping roughly $(1-s_{c})\times l$ bits in the original
Bloom filter, where $l$ is the length of the Bloom filter. For example, in the
running example shown in Fig. 1, with $s_{c}$ chosen from the range
$[0.8,1.0]$, the original Bloom filter ‘110110011’ may be perturbed to
‘110101011’ by flipping two random bits (the $5^{th}$ and $6^{th}$ bits in this
example).
4. 4.
The vector $V^{\prime}$ is sent to the server.
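The following is a minimal client-side sketch of steps 1-4 above. The helpers encode_bf, bucket_index, and the pre-computed artificial_bfs dictionary correspond to $bf(\cdot)$, $h(\cdot)$, and $F^{\prime}$, respectively, and are assumed to be provided; their names are hypothetical.

```python
import math
import random

def perturb_item(x, B, l, epsilon, s_t, encode_bf, bucket_index, artificial_bfs):
    """Client-side perturbation sketch following steps 1-4 (cf. Algorithm 2)."""
    bf = encode_bf(x)                      # Bloom filter of x as a list of l bits
    i1 = bucket_index(bf)                  # real bucket index in [0, B)

    # Flipping probability P_flip = 1 / (1 + s * e^eps), with s = 2^{l(1-s_t)} * B / 2^l.
    s = (2 ** (l * (1 - s_t))) * B / (2 ** l)
    p_flip = 1.0 / (1.0 + s * math.exp(epsilon))

    # One-hot vector: +1 at the real bucket, -1 elsewhere; then randomised response.
    V = [(+1 if r == i1 else -1) for r in range(B)]
    V = [-v if random.random() < p_flip else v for v in V]

    reports = []                           # (bucket index, Bloom filter) pairs
    for r, v in enumerate(V):
        if v != +1:
            continue
        if r == i1:
            # Real bucket survived the flip: send a Bloom filter within
            # similarity [s_t, 1.0] of the real one (flip a few random bits).
            noisy = list(bf)
            for pos in random.sample(range(l), random.randint(0, int((1 - s_t) * l))):
                noisy[pos] ^= 1
            reports.append((r, noisy))
        else:
            # Artificial bucket: send a pre-computed Bloom filter for bucket r.
            reports.append((r, random.choice(artificial_bfs[r])))
    return reports                         # sent to the server
```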
In the following we provide the formal proof that Algorithm 2 is
$\epsilon$-local Differentially private for the adjacency of two neighbouring
streams of data that differ by one item ($x_{1}$), which is present in one
stream and absent in the other. We also provide, in Appendix A, the proof for
adjacent streams that differ in the value of one item ($x_{1}$ in one stream
and $x_{2}$ in the other).
###### Theorem 3.1 (Differential privacy of Algorithm 2)
Algorithm 2 is $\epsilon$-local differentially private.
###### Proof 3.2
For an arbitrary output $\tilde{V^{\prime}}$, the ratio of probabilities of
observing the same output given a user reported item $x_{1}$ or not is:
$\frac{Pr[PPCF(x_{1},\epsilon)=\tilde{V^{\prime}}]}{Pr[PPCF(\neg
x_{1},\epsilon)=\tilde{V^{\prime}}]}=\frac{Pr[[b_{1,1},b_{2,1},\cdots,b_{u,1}]=\tilde{V^{\prime}}]}{Pr[[b_{1,2},b_{2,2},\cdots,b_{u,2}]=\tilde{V^{\prime}}]},$
(4)
where $u$ is the number of bits flipped to $+1$ in the vector $V$, $b_{i,1}$
denotes the $i^{th}$ Bloom filter sent for item $x_{1}$, and $b_{i,2}$ denotes
the $i^{th}$ Bloom filter sent when the reported item is not $x_{1}$. At most
one $b_{i,j}$ (for $j=1$ or $j=2$) in each output set $V^{\prime}_{j}$ can
correspond to the real item $x_{j}$, while the others correspond to
artificially crafted Bloom filters.
The probability of the $b_{i,j}$ corresponding to artificial items being the
same in the two output sets $V^{\prime}_{1}$ and $V^{\prime}_{2}$ (sent for the
insertion of $x_{1}$ and $\neg x_{1}$, respectively) is identical. Without loss
of generality, assume that all Bloom filters except the first one (i.e.
$b_{1,j}$) are artificial. The above ratio is then maximized when the Bloom
filter of the real item is present in one of the output sets (say
$V^{\prime}_{1}$), i.e. $b_{1,1}$ is the Bloom filter encoding of the real item
$x_{1}$ and $b_{1,2}$ is the Bloom filter encoding of an artificial item.
We denote by $P_{11}$ the probability that a reported Bloom filter $b_{1,j}\in
V^{\prime}_{j}$ is the encoding of the real item (say $x_{1}$) and is equal to
$\tilde{v^{\prime}}$, and by $P_{01}$ the probability that $b_{1,j}\in
V^{\prime}_{j}$ is an artificial item and is equal to $\tilde{v^{\prime}}$.
$\frac{Pr[PPCF(x_{1},\epsilon)=\tilde{V^{\prime}}]}{Pr[PPCF(\neg x_{1},\epsilon)=\tilde{V^{\prime}}]}=\frac{P_{11}\times P_{01}\times\cdots\times P_{01}}{P_{01}\times P_{01}\times\cdots\times P_{01}}=\frac{P_{11}}{P_{01}}$ (5)
We first calculate $P_{11}$. This occurs when item $x_{1}$’s bucket index is
not flipped in the vector $V$, i.e. the bit at position $h(bf(x_{1}))$ remains
$+1$, and the random $s_{t}$-close Bloom filter chosen for this bucket is the
same as $bf(x_{1})$.
$P_{11}=(1-P_{flip})\times\frac{1}{2^{l\times(1-s_{t})}}$ (6)
When $s_{t}=1.0$, $P_{11}$ depends only on the flip probability $P_{flip}$.
Next, let’s calculate $P_{01}$. This can occur due to two reasons:
1. 1.
An artificial bucket index gets flipped and $\tilde{v^{\prime}}$ is chosen
from the pre-filled set of $t$ Bloom filters for that bucket.
2. 2.
$\tilde{v^{\prime}}$ gets chosen due to its $s_{t}$-closeness to a real item’s
Bloom filter.
The probability of randomly choosing one Bloom filter from the corresponding
Bloom filters for each bucket is $1/t$, where $t=2^{l}/B$ assuming the Bloom
filters/bit vectors are uniformly distributed across all buckets such that
each bucket is assigned with $2^{l}/B$ corresponding Bloom filters. Then,
$P_{01}=[P_{flip}\times\frac{1}{t}]+[(1-P_{flip})\times\frac{1}{2^{l}}].$ (7)
Hence, bounding the maximum ratio requires:
$e^{-\epsilon}\leq\frac{P_{11}}{P_{01}}\leq e^{\epsilon}$
$e^{-\epsilon}\leq\frac{(1-P_{flip})\times\frac{1}{2^{l\times(1-s_{t})}}}{[P_{flip}\times\frac{1}{t}]+[(1-P_{flip})\times\frac{1}{2^{l}}]}\leq e^{\epsilon}$ (8)
Ignoring the $[(1-P_{flip})\times\frac{1}{2^{l}}]$ term, the above ratio is
bounded for $P_{flip}\geq\frac{1}{1+s.e^{\epsilon}}$, where
$s=\frac{2^{l(1-s_{t})}}{t}$:
$-\epsilon\leq\log\left(\frac{Pr[PPCF(x_{1},\epsilon)=\tilde{V^{\prime}}]}{Pr[PPCF(\neg x_{1},\epsilon)=\tilde{V^{\prime}}]}\right)\leq\epsilon$ (9)
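A quick numeric sanity check of this bound is sketched below: it evaluates $P_{11}$ and $P_{01}$ from Eqs. (6)-(7) for arbitrary example parameters and verifies that their ratio stays within $e^{\pm\epsilon}$ when $P_{flip}=\frac{1}{1+s.e^{\epsilon}}$.

```python
import math

l, B, eps, s_t = 20, 10000, 6.0, 0.7        # arbitrary example parameters
t = 2 ** l / B
s = 2 ** (l * (1 - s_t)) / t
p_flip = 1.0 / (1.0 + s * math.exp(eps))

P11 = (1 - p_flip) / 2 ** (l * (1 - s_t))            # Eq. (6)
P01 = p_flip / t + (1 - p_flip) / 2 ** l             # Eq. (7)
ratio = P11 / P01
assert math.exp(-eps) <= ratio <= math.exp(eps)      # Eq. (8) holds
print(p_flip, ratio, math.exp(eps))
```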
Algorithm 3: Count querying with fuzzy matching (server-side)
---
Input:
- $x$: Query data/element
- $m$: Number of Bloom filter segments
- $bf(\cdot)$: Bloom filter encoding function
- $h(\cdot)$: Hash function
- $fp(\cdot)$: Fingerprint function
- $s_{t}$: Minimum similarity threshold
Output:
- $c$: Estimated count of $x$
1: | $x^{\prime}=bf(x)$ | //Bloom filter
2: | $bf\\_segs=x^{\prime}.segment(m)$ | //Segments
3: | $C=[\ ]$ |
4: | for $bf\\_seg\in bf\\_segs$ do | //Iterate segments
5: | $f=fp(bf\\_seg)$ | //Fingerprint
6: | $i1=h(bf\\_seg)$ | //First bucket
7: | $i2=i1\oplus h(f)$ | //Second bucket
8: | if $\mathbf{C}.bucket[i1]$ or $\mathbf{C}.bucket[i2]$ has $f$ then |
9: | $C.add(count(f))$ | //Add $f$’s count to $C$
10: | else |
11: | $C.add(0)$ | //Add 0 count to $C$
12: | $0$-count_segs $=[j\in C$ if $j==0]$ | // $0$-count segments
13: | $sim_{max}=(m-|0$-count_segs$|)/m$ | //Max similarity
14: | if $sim_{max}\geq s_{t}$ then |
15: | $c=min([j$ for $j\in C$ if $j>0])$ |
16: | else |
17: | $c=0$ |
18: | return $c$ |
### 3.3 Fuzzy Counting
Real-world data often contains variations and errors (e.g. typos or
misspellings in words), and therefore counting of items that differ by small
typos or variations from the queried item is important in many real
applications (e.g. word auto-correction application that counts words entered
by different users even with small typos and errors). Further, range queries
are commonly used in financial applications which require fuzzy counting of
similar items (e.g. similar salary range).
In order to allow fuzzy matching, the data custodians first encode the item
($x$) into a Bloom filter, add noise in terms of artificially crafted Bloom
filters (output set $V^{\prime}$ from Algorithm 2), and then split each of the
Bloom filters in $V^{\prime}$ (both real and artificial) into $m$ segments
(lines 1-2 in Algorithm 1). The larger the value of $m$, the more accurate the
fuzzy matching will be. In Fig. 1 (right), an example of fuzzy counting for
the two example pairs of (item, query) values in Fig. 1 (left) and (middle)
using $m=3$ is illustrated. The value of $m$ determines the degree of fault-
tolerance of the method to data errors and variations. A smaller value of $m$
is sufficient for clean data; however, if the data is assumed to be largely
dirty, or for range queries over a larger range, a larger value of $m$ is
needed to allow effective fuzzy matching (please see Theorem 3.3).
Then each of the $m$ Bloom filter segments is sent to the server (line 3 in
Algorithm 1) to be inserted into the Cuckoo filter. If the Bloom filter
segments of a queried item match at least $s_{m}$ Bloom filter segments in the
Cuckoo filter (depending on the similarity threshold parameter $s_{t}$ given by
the query issuer), where $s_{m}=m\times s_{t}$, then the estimated count of the
item most similar to the queried item is returned to the query issuer. In the
example illustrated in Fig. 1, if $s_{t}=0.65$, then an item needs to match in
at least $s_{m}=2$ Bloom filter segments of the queried item in order to be
counted. If the similarity threshold is set to $s_{t}=1.0$, then only exact
matching is allowed, i.e. the estimated count of the item that matches the
queried item exactly (in all $m$ segments) is returned.
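The segmentation and server-side insertion described above can be sketched as follows; segment() splits a Bloom filter into $m$ pieces, and the nested dictionary used here is only a stand-in for a real (adaptive) Cuckoo filter and does not model kick-out relocation.

```python
def segment(bits: str, m: int):
    """Split a Bloom filter into m (roughly equal) segments."""
    size = len(bits) // m
    return [bits[i * size:(i + 1) * size] for i in range(m)]

def insert_report(V_prime, cuckoo, m: int):
    """Insert fingerprints of all segments of every Bloom filter in V'.

    cuckoo: dict bucket_index -> dict fingerprint -> count (illustrative stand-in)."""
    for _, bf_bits in V_prime:
        for seg in segment(bf_bits, m):
            f = fp(seg)                    # fp, h, B: toy helpers defined earlier
            i1 = h(seg)
            i2 = (i1 ^ h(str(f))) % B
            target = cuckoo.setdefault(i1, {})
            if f not in target and f in cuckoo.get(i2, {}):
                target = cuckoo[i2]        # keep counting where f already lives
            target[f] = target.get(f, 0) + 1
```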
In contrast to the original Cuckoo hashing approach that stores fingerprints
of items, our approach stores fingerprints of Bloom filter segments of items
(lines 4-24 in Algorithm 1). Bloom filter segments preserve distance and
therefore allow fuzzy matching (as discussed in Section 2.1), whereas
fingerprints of items do not, i.e. a single character difference (in string
data) or number difference (in numerical data) returns completely different
fingerprints. Note that the Bloom filters in the output $V^{\prime}$ from
Algorithm 2 are differentially private using our proposed noise addition
algorithm. Bloom filter segmentation is a post-processing function that
enables fuzzy matching. Moreover, the Bloom filter segments of the Bloom
filters (real and artificial) in $V^{\prime}$ can be shuffled to amplify the
privacy guarantees, similar to the Encode, Shuffle, and Analyze mechanism
proposed by Google [26].
As shown in Algorithm 3, the querying function returns the count value of the
closest value to the querying value. For example, if the query value is
$q=5000$, it will return the count/frequency of $5000$ if available in the
filter (exact matching), otherwise it will return the count/frequency of a
value $v$ that is the closest to $q$ and has a similarity above the similarity
threshold $s_{t}$ (i.e. $sim(v,q)\geq s_{t}$ and that has the highest
similarity with $q$). The algorithm first retrieves the count values
(frequency/number of occurrences as calculated by the $count(\cdot)$ function)
for each of the Bloom filter segments of the query value $q$ (lines 3-11 in
Algorithm 3) and calculates how many segments have a count of $0$, i.e. not
found in the filter (line 12).
Using this, it computes the maximum similarity of the closest value (line 13,
where $|\cdot|$ denotes the cardinality of a given set) and if this similarity
is above the similarity threshold $s_{t}$ (line 14) then the minimum count of
the segments with non-zero counts is returned as the count value of the
closest value of $q$ (lines 15-18).
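A compact Python rendering of the querying logic in Algorithm 3 is given below; it reuses segment(), fp(), h() and the dictionary-based stand-in for the Cuckoo filter from the earlier sketches, and is illustrative rather than the exact server implementation.

```python
def fuzzy_count(x_bits: str, cuckoo, m: int, s_t: float) -> int:
    """Return the estimated count of the closest value, following Algorithm 3."""
    counts = []
    for seg in segment(x_bits, m):                 # lines 4-11: per-segment counts
        f = fp(seg)
        i1 = h(seg)
        i2 = (i1 ^ h(str(f))) % B
        c = cuckoo.get(i1, {}).get(f, 0) or cuckoo.get(i2, {}).get(f, 0)
        counts.append(c)
    zero_segs = [c for c in counts if c == 0]      # line 12
    sim_max = (m - len(zero_segs)) / m             # line 13
    if sim_max >= s_t:                             # lines 14-15
        return min(c for c in counts if c > 0)
    return 0                                       # lines 16-17
```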
### 3.4 Utility analysis
We next analyze the utility loss with our proposed method.
###### Theorem 3.3 (Count estimation error bound)
The bound of the estimated count $c^{\prime}$ of an element $x$ using our
method is given as:
$c-c\times max\left(\frac{1}{1+s.e^{\epsilon}},\frac{s.e^{\epsilon}\times(1-s_{t})^{m\times s_{t}}}{1+s.e^{\epsilon}}\right)\leq c^{\prime}\leq c+\left[\frac{n\times(B-1)\times 2^{l.(1-s_{t})}}{(1+s.e^{\epsilon})\times 2^{l}}\right],$ (10)
where $c$ is the actual count of $x$, $n$ is the number of records inserted
into the Cuckoo filter, $l$ is the length of Bloom filters, $m$ is the number
of Bloom filter segments, $B$ is the number of buckets, and
$s=\frac{2^{l.(1-s_{t})}.B}{2^{l}}$ (see Theorem A.1).
###### Proof 3.4
The deviation of the estimated count $c^{\prime}$ of an element $x$ from its
actual count $c$ can occur due to false negatives as well as false positives
leading to lower and higher estimated counts than the true counts:
Lower bound: There are two cases associated with false negatives for an item
$x$ (Bloom filter of $x$ is $x^{\prime}$) resulting from noise addition to
meet differential privacy guarantees.
Case 1: the bit $h(x^{\prime})$ is flipped in $V$. This probability is
$\frac{1}{1+s.e^{\epsilon}}$, i.e. from $c$ records of item $x$,
$c\times\frac{1}{1+s.e^{\epsilon}}$ records would have been placed in wrong
buckets in the Cuckoo filter (not in one of the buckets $h(x^{\prime})$ or
$h(x^{\prime})\oplus h(fp(x^{\prime}))$) and therefore resulting in false
negatives.
Case 2: the bit $h(x^{\prime})$ is not flipped in $V$, however the similar
Bloom filter (similar to $x^{\prime}$) chosen for bucket $h(x^{\prime})$ does
not match to $x^{\prime}$ in the minimum number of segments out of $m$
segments according to the threshold $s_{t}$. The probability that the bit
$h(x^{\prime})$ in the vector is not flipped to $-1$ is
$\frac{s.e^{\epsilon}}{1+s.e^{\epsilon}}$. The minimum number of segments that
need to be matched is $s_{m}=m\times s_{t}$. A false negative occurs if at
least one bit in each of these segments gets flipped. The probability of a bit
in $x^{\prime}$ being flipped in $\tilde{x^{\prime}}$ is $l(1-s_{t})/l=1-s_{t}$.
Hence, the probability that at least one bit in each of the $s_{m}$ segments is
flipped is $(1-s_{t})^{m\times s_{t}}$.
Therefore, the lower bound of the estimated count $c^{\prime}$ is
$c-c\times$ the maximum of the two cases, i.e.
$max(\frac{1}{1+s.e^{\epsilon}},\frac{s.e^{\epsilon}\times(1-s_{t})^{m\times
s_{t}}}{1+s.e^{\epsilon}})$. For example, if $l=20$, $\epsilon=6$,
$s_{t}=0.7$, $m=4$, and $B=10000$, then $s=0.61$ and therefore
$P_{flip}=\frac{1}{1+s.e^{\epsilon}}=0.004$. The lower bound of the estimated
frequency count for an item with a true frequency count of $c=100$ is
$c-c\times max(0.004,\frac{0.61\times 403.4288\times(0.3)^{4}}{1+(0.61\times
403.4288)})=c-c\times max(0.004,0.008)=100-100\times 0.008=99.19$. With a
larger number of segments $m=5$, the lower bound becomes $c-c\times
max(0.004,0.002)=99.6$, and with a smaller $m=2$, the lower bound becomes
$c-c\times max(0.004,0.089)=91.03$.
Upper bound: On the other hand, false positives could occur due to noise
addition (where bits with -1 in $V$ are flipped to 1) resulting in insertion
of corresponding (artificial) fingerprints into buckets and collision of
fingerprints of different elements. The likelihood of any record being falsely
inserted into the given bucket is $\frac{1}{1+s.e^{\epsilon}}$. Given $n$ is
the number of elements/records that are inserted into the Cuckoo filter,
$n\times(B-1)\times\frac{1}{1+s.e^{\epsilon}}$ records can be falsely inserted
into the (artificial) buckets. Among these, there are $2^{l(1-s_{t})}$
possible bit patterns that could match with the given real item’s Bloom
filter. The probability of choosing one of these potentially matching bit
patterns from the artificial buckets is $\frac{2^{l(1-s_{t})}/B}{2^{l}/B}$.
Therefore, the upper bound of $c^{\prime}$ is
$c+[\frac{n\times(B-1)}{1+s.e^{\epsilon}}\times\frac{2^{l.(1-s_{t})}}{2^{l}}]$.
With the previous example, if the number of records inserted so far is
$n=1000$, then the upper bound of the estimated frequency count of an item
whose real frequency count is $c=100$ is $100+[\frac{1000\times
9999}{1+(0.61\times 403.4288)}\times\frac{2^{6}}{2^{20}}]=100+2.47=102.47$.
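The bounds of Theorem 3.3 can be transcribed directly; the sketch below evaluates Eq. (10) as written (using the exponent $m\times s_{t}$) for a chosen set of example parameters.

```python
import math

def count_bounds(c, n, l, m, B, eps, s_t):
    """Lower and upper bounds on the estimated count c' from Eq. (10)."""
    t = 2 ** l / B
    s = 2 ** (l * (1 - s_t)) / t
    se = s * math.exp(eps)
    false_neg = max(1.0 / (1.0 + se),
                    se * (1 - s_t) ** (m * s_t) / (1.0 + se))
    lower = c - c * false_neg
    upper = c + n * (B - 1) * 2 ** (l * (1 - s_t)) / ((1.0 + se) * 2 ** l)
    return lower, upper

# e.g. the running example parameters: l=20, eps=6, s_t=0.7, B=10000, c=100, n=1000
print(count_bounds(c=100, n=1000, l=20, m=4, B=10000, eps=6, s_t=0.7))
```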
### 3.5 Communication and computation overhead
Inserting an item into the Cuckoo filter requires sending $m\times
B\times\frac{1}{1+s.e^{\epsilon}}$ Bloom filter segments of length $l/m$ bits,
where $s=\frac{B.2^{l(1-s_{t})}}{2^{l}}$ (see Theorem A.1). The communication
complexity of insertion step therefore depends on the number of segments $m$,
number of buckets $B$, and the length of Bloom filters $l$. Our method has
higher communication overhead for the insertion step than RAPPOR and Apple’s
CMS algorithm which require only sending a vector of size $l$ bits to the
server followed by vector operation between two vectors (between the received
vector and the aggregated vector stored in the server) for inserting an item.
However, we parallelize the insertion step to reduce the computation
complexity by having multiple cuckoo filter data structures in the server and
dividing the segments and inserting them into different structures. The
querying function therefore needs to be applied on each of these data
structures and the counts are summed to get the total frequency count of a
queried item. This is conceptually similar to using different cohorts in
RAPPOR [1]. Hence, the server-side querying of our method requires $2\times m$
lookup operations at each of the cuckoo filters (in parallel) for every
queried item, and therefore querying is more efficient than with RAPPOR and
Apple’s algorithm.
In summary, our method provides significantly higher utility guarantees and
querying efficiency with similar privacy guarantees at the cost of
communication overhead for inserting items.
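As a rough illustration, the per-item insertion payload implied by this analysis can be computed as follows; the formula is transcribed directly from the expression above and the parameter values are only examples.

```python
import math

def insertion_payload_bits(l, m, B, eps, s_t):
    """Expected payload per inserted item: ~ m*B*P_flip segments of l/m bits each."""
    s = B * 2 ** (l * (1 - s_t)) / 2 ** l      # s = B * 2^{l(1-s_t)} / 2^l
    p_flip = 1.0 / (1.0 + s * math.exp(eps))
    return m * B * p_flip * (l / m)

print(insertion_payload_bits(l=30, m=5, B=10000, eps=6, s_t=0.8))
```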
## 4 Experimental Evaluation
In this section we present and discuss the results of experimental study of
our proposed method.
### 4.1 Datasets and Synthetic Dataset Generation
We used two datasets in our experiments. The first one is a words dataset that
contains a list of words from the complete works of Shakespeare (available
from https://data.world/tronovan/shakespeare-word-frequencies). The number of
unique words in this dataset is 23,113. We duplicated these words to generate
1,750,000 records split across 20 parties, giving a total of 583,456 records.
The total number of records is randomly divided into 10,000 disjoint sets to
simulate 10,000 parties.
We further generated a synthetic dataset based on this words dataset for fuzzy
counting experiments. For each of the unique values in the dataset, we created
synthetic duplicate records and/or queries by replacing some of the values $v$
with similar values $v^{\prime}$ according to the similarity threshold used
($s_{t}=0.8$). Specifically, the synthetic dataset is generated by replacing
50% of the count of each value with a similar value according to $s_{t}$. To
generate synthetic values $v^{\prime}$ that are similar to $v$ (i.e.,
$sim(v,v^{\prime})\geq s_{t}$), we modified $v$ by including character edits
(inserts with $0.3$ probability, deletes with $0.3$ probability, and swaps
with $0.4$ probability). We use these synthetically generated records for
querying in order to evaluate and compare the performance of counting
algorithms in the presence of data errors and variations.
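A minimal sketch of this corruption step is shown below; reading “swap” as an adjacent-character transposition is an assumption, and a full generator would additionally check that the resulting variant still satisfies $sim(v,v^{\prime})\geq s_{t}$.

```python
import random
import string

def make_variant(v: str) -> str:
    """Apply one random character edit: insert (p=0.3), delete (p=0.3), swap (p=0.4)."""
    chars = list(v)
    pos = random.randrange(len(chars))
    r = random.random()
    if r < 0.3:                                  # insert a random character
        chars.insert(pos, random.choice(string.ascii_lowercase))
    elif r < 0.6 and len(chars) > 1:             # delete a character
        del chars[pos]
    else:                                        # swap with the next character
        j = min(pos + 1, len(chars) - 1)
        chars[pos], chars[j] = chars[j], chars[pos]
    return ''.join(chars)

random.seed(1)
print([make_variant("shakespeare") for _ in range(3)])
```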
True count for exact matching of a value in this dataset is the actual
frequency/count of that value, while true count for fuzzy matching of a value
is the count of all values similar to the value according to a similarity
metric and similarity threshold. We use the $q$-gram-based Dice-coefficient
metric for calculating the similarity between (unencoded) string data [9, 27].
Estimated count of a value is the count value returned by the privacy
preserving counting algorithm (Algorithm 2) when querying for that value.
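For reference, the $q$-gram-based Dice coefficient used to compute true similarities can be sketched as follows (with $q=2$ by default); this is a standard definition rather than our specific implementation.

```python
def qgrams(s: str, q: int = 2):
    """Character q-grams of a string (as a list, i.e. a multiset)."""
    return [s[i:i + q] for i in range(max(len(s) - q + 1, 0))]

def dice_similarity(a: str, b: str, q: int = 2) -> float:
    """Dice coefficient: 2 * |common q-grams| / (|q-grams(a)| + |q-grams(b)|)."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    if not ga and not gb:
        return 1.0
    pool, common = list(gb), 0
    for g in ga:
        if g in pool:
            common += 1
            pool.remove(g)
    return 2 * common / (len(ga) + len(gb))

print(dice_similarity("jonathan", "jonathon"))   # ~0.71
```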
### 4.2 Baseline Methods and Parameter Setting
We compare our proposed method with two state-of-the-art privacy preserving
counting methods proposed by Google (RAPPOR) [1] and Apple (CMS) [2] as these
are closely related to our work. Similar to our approach, these two methods
use probabilistic data structures and local Differential privacy for privacy
preserving counting (however, not specifically developed for real-time
counting in streaming data applications, as described in detail in Section 2).
Figure 3: Comparison of efficiency results in terms of (a) insertion time
(CDF) and (b) querying time (CDF) for all approaches with exact counting (i.e
$m=1$ with Cuckoo filter approach) on the words datasets.
Figure 4: Comparison of (a) fuzzy matching efficiency in terms of querying
time (CDF) for different values of $m$ and (b) the impact of increasing volume
of records insertion on the false positive rates for all approaches on the
synthetic words datasets.
We implemented both our proposed approach and the two competing baseline
approaches in Python 3.5.2, and ran all experiments on a server with four
2-core 64-bit Intel Core I7 2.6 GHz CPUs, 8 GBytes of memory and running
Ubuntu 16.04. Please note that the RAPPOR source code (implemented using R
combined with Python) is publicly available from
https://github.com/google/rappor. In order to perform a fair comparison, we
implemented all three approaches in Python (using Python libraries such as
numpy, bitarray, and sklearn). The programs and test datasets are available
from the authors.
Default parameter setting of our approach is $\epsilon=6$, capacity of Cuckoo
filter $B=10000$, default bucket size $b=4$, maximum number of kicks $500$,
similarity threshold $s_{t}=0.8$, length of Bloom filters $l=30$, number of
segments $m=5$, and number of hash functions $k=2$, as it gave the best
results when validating with a grid search parameter tuning method. We also
evaluated our method against different $\epsilon$, in the range of
$\epsilon=[2,4,6,8,10]$, and $m=[1,3,5]$.
Parameter settings of the state-of-the-art methods are chosen in a similar
range (adjusted according to the number of records used) as used in the
original approaches. For Google-RAPPOR, we used the number of hash functions
$k=2$, length of Bloom filters $l=1000$, number of cohorts $m=32$, and noise
parameters $f=0.5$, $p=0.5$, and $q=0.75$. For Apple-CMS, we set $\epsilon=8$,
length of vectors $l=1024$, and the number of hash functions $k=20000$.
### 4.3 Evaluation metrics
We evaluated our method and compared with Google-RAPPOR and Apple-CMS in terms
of insertion and querying efficiency as well as accuracy of querying results.
Computational efficiency is measured using insertion time (in seconds) and
querying time (in seconds). The accuracy of count estimation is measured as
the absolute error that calculates the variance/difference between true count
and estimated count (i.e. the estimation error). We also evaluated the false
positive rate of the probabilistic data structures against number of records
inserted to evaluate the impact of continuous data insertion on the utility.
Privacy is measured using the privacy budget $\epsilon$ required for providing
Differential privacy guarantees.
Figure 5: Comparison of effectiveness results of exact counting (left) and
fuzzy counting (right) for all approaches on the words (left) and synthetic
words (right) datasets measured as the variance between actual and estimated
counts of queried items.
Figure 6: Ablation study: utility of count estimation of the proposed method
measured as the median of the variance/error between actual and estimated
counts of queried items in 10 iterations against different $\epsilon$ values
on the words datasets (left) and against different number of segments $m$ on
synthetic words datasets (right).
### 4.4 Results and Discussion
Efficiency results: We compare the insertion time and querying time of our
method with the baselines for exact counting in Figures 3(left) and 3(right),
respectively. Please note that the insertion and querying time are shown in
CDF plots, i.e. the total time required for inserting or querying a certain
number of records (not a single record). The insertion time required for our
method is slightly larger than the baseline approaches. The reason is that it
needs to insert multiple fingerprints for every single item which requires
several lookup operations in the Cuckoo filter even with parallel settings.
This is negligible given the significant increment in utility of both exact
and fuzzy counting with our method compared to the state-of-the-art methods,
as will be discussed next. Moreover, the querying time is highly efficient and
faster than the baselines (Fig. 3 (right)). For real-time querying
applications querying time is more important than insertion time, as the
queries need to be instantly answered in real-time. Therefore, compared to
RAPPOR and CMS, our method is highly applicable to real-time querying.
We next present the querying time for fuzzy counting with our approach for
different numbers of Bloom filter segments ($m=1$, $m=3$, and $m=5$) in Fig. 4
(left) on the synthetic words dataset, where the queries contain data errors
(e.g. typos) and variations. As expected, the querying time increases with
larger $m$, as more segments need to be queried. Please note that parallel
processing of insertion helps reduce the insertion time, and hence the
insertion time is not impacted by increasing $m$ (due to space limitations we
do not show these results), but all $m$ segments still need to be queried from
all cuckoo filters for a given query. The querying time for fuzzy counting is
the same as for exact counting with RAPPOR and CMS.
with $m=5$, our cuckoo filter-based approach has considerably lower querying
time than these state-of-the-art methods (i.e. faster querying time not only
for exact counting, but also for fuzzy counting with a relatively larger
number of segments). In other words, the querying time required for fuzzy
counting by our method is still lower than the other two methods that only
allow exact counting.
Effectiveness results: The result of the impact of continuous insertion of
records on the utility or effectiveness of counting is illustrated in Figure 4
(right). As can be seen, the false positive rate is not impacted by the number
of records inserted into the Cuckoo filter, due to adaptively increasing the
bucket size.
$0.0$) compared to RAPPOR and Apple-CMS, and therefore provides higher quality
of hash-mapping into the probabilistic data structure (Cuckoo filter). By
bounding the false positive rate of adaptive Cuckoo filter to a very small
value ($0.002$ in this experiment), the number of false positives that could
occur due to hash-mapping in probabilistic data structures is negligible
(therefore utility is only impacted by the differential privacy noise addition
in our method). Since RAPPOR uses cohorts to reduce the effect of false
positives, it performs better than Apple-CMS until a certain number of records
are inserted. These results reveal that our method can provide higher utility
by a significantly large margin even with a large number of records inserted
(in contrast to the baseline methods), at the cost of a small loss in
computational efficiency.
In Figure 5 we evaluate the variance or absolute error of estimated and true
counts of querying data with exact counting on words (left) and with fuzzy
counting on synthetic words (right) datasets. Our results show that our method
significantly outperforms the baselines by resulting in the lowest median of
count estimation error in both cases, exact and fuzzy counting. As expected,
RAPPOR estimates counts with large variance from true counts (due to the large
false positive rate). Reducing the false positive rate by increasing the
length of the Bloom filters would require substantially longer insertion and
querying times for RAPPOR. The fuzzy counting results in Fig. 5 (right) evaluate the fault
tolerance/robustness of the methods to data errors and variations (using the
synthetic words dataset). The significantly lower variance or error between
estimated and true counts of our method on this dataset compared to the other
two methods that do not support fuzzy counting validates the importance of
fuzzy counting for improved accuracy of count estimation in data with errors
and variations. In summary, our method achieves higher accuracy of count
estimation compared to baselines by a significantly large margin for both
exact and fuzzy counting.
Ablation study: We performed an ablation study of our method with two
important parameters of our method: a) privacy budget $\epsilon$ and b) number
of Bloom filter segments for fuzzy counting $m$. The results are shown in Fig.
6. Fig. 6 (left) shows that the error of count estimation by our method
decreases as the privacy budget ($\epsilon$) increases, i.e. utility improves
with a larger privacy budget. The trade-off between privacy and utility is
reflected in these results.
Fig. 6 (right) shows the utility of fuzzy counting measured using the variance
or absolute error between true and estimated counts with different $m$ values.
As shown in the plot, with increasing $m$ the variance or error drops, i.e.
the utility of fuzzy counting increases, at the cost of more querying time (as
presented in Fig. 4 (left)). These results validate the trade-off between
effectiveness and efficiency for fuzzy matching.
## 5 Related Work
Several privacy preserving counting or aggregation methods have been proposed
in the literature. They can be categorized into three: 1) cryptographic
methods, 2) sampling and statistics-based methods, and 3) probabilistic
methods.
Cryptographic: Searching on encrypted data for counting operations has been
the focus of several works [28, 29, 30]. However, most of these rely on the
computation of very expensive functions (e.g. bilinear pairings) for each
element/item in a dataset, rendering them impractical. An efficient and
somewhat homomorphic encryption method, EPiC, using the MapReduce framework
for cloud counting was proposed in [30]. However, these methods are not
applicable to fuzzy counting or stream data applications.
PRIO is a privacy preserving system for the collection of aggregate statistics
using cryptographic techniques [31]. While the utility of aggregation with
such cryptographic techniques is high, they do not allow fuzzy matching due to
high computational cost required and do not support stream data processing.
Some works considered combining multi-party computation (MPC) with
differential privacy. For example, [32] developed secure aggregation protocols
to add Laplace or Gaussian noise to ensure differential privacy for the
clients, and [33] studied the overhead of adding Laplace noise to MPC.
Sampling and statistics: Another category of methods for privacy preserving
releasing or publishing of stream data considers providing statistics (e.g.
moving average) of local data with local and/or central differential privacy
(LDP or CDP) guarantees [34, 35]. A recent work on answering range queries
under LDP has proposed two approaches based on hierarchical histograms and the
Haar wavelet transform to approximately answer range queries [36]. Relaxed LDP
was studied in [37] that proposes E-LDP where E defines heterogeneous privacy
guarantees for different pairs of private data values for significant utility
gains in answering linear and multi-dimensional range queries. A recent work
used sampling-based frequency estimation with fairness constraints which
provides some level of privacy protection with good utility when the number of
clients is small [38]. A main challenge with these methods is that ensuring
true randomness is a difficult task, so the success of random sampling is
dependent on the data.
Probabilistic: This category of methods received a lot of attention in the
recent literature due to its computational and storage efficiency and provable
privacy guarantees. The two state-of-the-art methods based on probabilistic
methods for approximate aggregation are: (1) RAPPOR proposed by Google for
anonymously collecting statistics about end-user software [1] and (2) count-
mean sketch (CMS) proposed by Apple for identifying frequently used words or
emojis by users without compromising their privacy [2] (described in detail in
Section 2).
Another method proposed by Apple to overcome the communication cost of CMS is
the Hadamard transformation of CMS (HCMS) [2]. Instead of sending over a
vector to the server for inserting or updating an element, HCMS transforms the
vector using Hadamard transformation and sends a single bit that is sampled
from the transformed vector. Despite the improvement in the communication
cost, it is not applicable to streaming data as the Hadamard transformation of
the updated sketch is required upon every single item’s insertion/update,
which is computationally expensive for continuous data insertion and on-the-
fly querying of frequency of items.
Probabilistic data structures and differential privacy have also been used for
other private counting tasks, such as counting of distinct elements [39],
heavy hitters [14, 13], and incidence counting [40], due to their highly
efficient computation, space, and communication costs. Cuckoo filters have
been widely used for private set intersection (PSI) with the purpose of
increasing or optimising the efficiency (similar to blocking or hashing in
record lookup or linkage techniques [27]). For example, a PSI technique was
proposed that utilizes Cuckoo hashing for batching [15]. Similarly, an
efficient circuit-based PSI was proposed using Cuckoo hashing [19]. Cuckoo
filter was used to reduce the amount of data to be exchanged by a
cryptography-based PSI [16]. Cuckoo hashing has been used for reducing the
asymptotic overhead of Oblivious Transfer (OT)-based PSI protocol [17]. This
approach was further improved using permutation-based hashing and 3-way Cuckoo
hashing where 3 instead of 2 hash functions are used to generate a densely
populated Cuckoo table in order to reduce the overall number of OTs [18].
## 6 Conclusion
In this paper we have addressed a novel problem of privacy preserving real-
time counting in stream data, which is increasingly being required in many
real applications, using a hybrid method of Cuckoo filter and Bloom filter
probabilistic data structures. Privacy preserving counting has been studied in
the literature with two notable solutions by Google (RAPPOR) and Apple (CMS).
We have investigated the additional challenges of answering real-time count
querying in stream data and proposed a method that overcomes the limitations
of existing methods for real-time counting in stream data applications.
We have proposed a novel local Differential privacy algorithm with low utility
loss guarantees, and provided formal proof of privacy and utility guarantees
of our method. We have conducted an experimental study of our method using
large real and synthetic datasets. The results show that compared to two
state-of-the-art approaches by Google and Apple, our method achieves
significantly higher utility of count estimation and lower querying time at
the overhead of higher insertion time with similar privacy guarantees.
In the future, we aim to study noise addition according to “local sensitivity”
of the bucket in the Cuckoo filter in order to add noise based on the set of
items the bucket has seen so far which could improve the utility and
efficiency for a given privacy budget. The proposed differential privacy
algorithm requires that clients need to keep track of the items reported to
the server in order to avoid reporting the same item multiple times (i.e. only
unique items are reported to the server). In the future, we aim to address the
local storage/memory efficiency aspect by using efficient probabilistic data
structures, such as HyperLogLog.
Further, investigating other counting tasks in stream data, including counting
of distinct elements, heavy hitters, incidence counting, and estimating
cropped means, is also important in different applications. For example,
count-distinct is required to count views on a video, unique tweets, or unique
customers accessing an online service. More work is required towards the
development of advanced techniques for privacy preserving calculation of other
statistical functions, such as conditional querying and incidence counting, in
stream data. Investigating pan-private algorithms for streaming data
applications is important to guarantee that the intermediate states in dynamic
data are also private, i.e. the state of the algorithm after processing each
iteration’s update and the final output are jointly $\epsilon$-differentially
private [22].
## Appendix A Differential Privacy Proof for Adjacent Streams with Different Items
###### Theorem A.1 (Differential privacy of Algorithm 2)
Algorithm 2 is $\epsilon$-local Differentially private.
###### Proof A.2
We will show that for an arbitrary set $V^{\prime}$, the ratio of
probabilities of observing $V^{\prime}$ as output of Algorithm 2, given
adjacent sets ${\cal D}=\\{x_{1}\\}$ and ${\cal D^{\prime}}=\\{x_{2}\\}$ as
input is bounded by $e^{\epsilon}$. Without loss of generality let
$V^{\prime}=[b_{1},b_{2},\cdots,b_{u}]$ denote the $u$ Bloom filters output by
Algorithm 2. Then we want to show that,
$r=\frac{Pr[Algorithm~{}2(x_{1})=[b_{1},b_{2},\cdots,b_{u}]]}{Pr[Algorithm~{}2(x_{2})=[b_{1},b_{2},\cdots,b_{u}]]}\leq
e^{\epsilon},$ (11)
Note when ${\cal D}=\\{x_{1}\\}$ is the input, at most one Bloom filter in
$V^{\prime}$ is due to item $x_{1}$, and that Bloom filter can be either from
$x_{1}$’s true bucket (say $i_{1}$) or some other bucket $i_{\ell}$. This is
because the $generate\\_similar\\_bf(x_{1})$ function can output a Bloom
filter from either $x_{1}$’s true bucket or some other bucket. Thus, let’s
denote by $b_{i_{1}}$ and $b_{i_{\ell}}$ the respective Bloom filters that are
added to $V^{\prime}$ in those mutually exclusive cases. All other Bloom
filters in $V^{\prime}$ are the result of the flipping of $-1$ to $+1$ due to
the randomized response (with parameter $p_{f}$) and the subsequent choosing
of a random Bloom filter via the $randomly\\_choose\\_from()$ function. Since
the probability of these ’other’ Bloom filters being in $V^{\prime}$ is the
same whether the incoming item is $x_{1}$ or $x_{2}$, we only need to analyze
the probability that input ${\cal D^{\prime}}=\\{x_{2}\\}$ results in the same
Bloom filter $b_{i_{1}}$ (or $b_{i_{\ell}}$, as the case may be).
For this analysis, we need to consider three buckets: $i_{1}$, the bucket of
item $x_{1}$’s Bloom filter; $i_{2}$, the bucket of item $x_{2}$’s Bloom
filter; and $i_{\ell}$, the bucket of the ’similar’ Bloom filter that is
output by $generate\\_similar\\_bf(x_{1})$.
We will now upper bound the ratio $r$ for the following three cases:
Case 1: $b_{i_{1}}$ is in $V^{\prime}$ due to item $x_{1}$. In this case, when
the incoming item is $x_{1}$ the bit at position $i_{1}$ remains $+1$ and the
bits at $i_{2}$ and $i_{\ell}$ remain $-1$ after the RR mechanism and
$generate\\_similar\\_bf(x_{1})$ returns a Bloom filter from bucket $i_{1}$.
While when the incoming item is $x_{2}$, the bit at position $i_{1}$ is
flipped from $-1$ to $+1$, the bit at position $i_{2}$ is flipped from $+1$ to
$-1$ and the bit at $i_{\ell}$ remains at $-1$ after the RR mechanism. Thus
for this case the ratio $r$ is
$r=\frac{(1-p_{f}).\frac{1}{2^{\ell(1-s_{t})}}}{p_{f}.\frac{1}{t}}.\frac{(1-p_{f})}{p_{f}}.\frac{(1-p_{f})}{(1-p_{f})}$
Case 2: $b_{i_{\ell}}$ is in $V^{\prime}$ due to item $x_{1}$. In this case,
when the incoming item is $x_{1}$ the bit at position $i_{1}$ remains $+1$ and
the bits at $i_{2}$ and $i_{\ell}$ remain $-1$ after the RR mechanism and
$generate\\_similar\\_bf(x_{1})$ returns a Bloom filter from bucket
$i_{\ell}$. While when the incoming item is $x_{2}$, the bit at position
$i_{1}$ remains at $-1$, the bit at position $i_{2}$ is flipped from $+1$ to
$-1$ and the bit at $i_{\ell}$ is flipped from $-1$ to $+1$ after the RR
mechanism. Thus for this case the ratio $r$ is
$r=\frac{(1-p_{f}).\frac{1}{2^{\ell(1-s_{t})}}}{(1-p_{f})}.\frac{(1-p_{f})}{p_{f}}.\frac{(1-p_{f})}{p_{f}.\frac{1}{t}}$
Case 3: $\phi$ is in $V^{\prime}$ due to item $x_{1}$. In this case, when the
incoming item is $x_{1}$ the bit at position $i_{1}$ is flipped to $-1$ and
the bits at $i_{2}$ and $i_{\ell}$ remain $-1$ after the RR mechanism. While
when the incoming item is $x_{2}$, the bit at position $i_{1}$ remains at
$-1$, the bit at position $i_{2}$ is flipped from $+1$ to $-1$ and the bit at
$i_{\ell}$ remains at $-1$ after the RR mechanism. Thus for this case the
ratio $r$ is
$r=\frac{p_{f}}{(1-p_{f})}.\frac{(1-p_{f})}{p_{f}}.\frac{(1-p_{f})}{(1-p_{f})}$
Thus the ratio $r$ is upper bounded by
$\frac{(1-p_{f})^{2}.\frac{1}{2^{\ell(1-s_{t})}}}{p_{f}^{2}.\frac{1}{t}}$.
Thus $r\leq e^{\epsilon}$ implies $p_{f}\geq\frac{1}{1+\sqrt{s.e^{\epsilon}}}$.
## Acknowledgment
This research was funded by Macquarie University CyberSecurity Hub and
strategic research funds from Macquarie University. Authors Dinusha Vatsalan
and Raghav Bhaskar were affiliated with CSIRO Data61 at the initial submission
of this manuscript.
## References
* [1] Ú. Erlingsson, V. Pihur, and A. Korolova, “Rappor: Randomized aggregatable privacy-preserving ordinal response,” in _SIGSAC conference on computer and communications security_. ACM, 2014, pp. 1054–1067.
* [2] D. P. T. Apple, “Learning with privacy at scale,” _Apple Machine Learning Journal_ , 2017.
* [3] R. Masood, D. Vatsalan, M. Ikram, and M. A. Kaafar, “Incognito: A method for obfuscating web data,” in _WWW_. International World Wide Web Conferences Steering Committee, 2018, pp. 267–276.
* [4] B. Fan, D. G. Andersen, M. Kaminsky, and M. D. Mitzenmacher, “Cuckoo filter: Practically better than bloom,” in _CoNEXT_. ACM, 2014, pp. 75–88.
* [5] M. Mitzenmacher and E. Upfal, _Probability and computing: Randomized algorithms and probabilistic analysis_. Cambridge University Press, 2005.
* [6] R. Schnell, “Privacy preserving record linkage,” in _Methodological developments in data linkage_ , K. Harron, H. Goldstein, and C. Dibben, Eds. Chichester: Wiley, 2016, pp. 201–225.
* [7] N. Holohan, D. J. Leith, and O. Mason, “Optimal differentially private mechanisms for randomised response,” _IEEE TIFS_ , vol. 12, no. 11, pp. 2726–2735, 2017.
* [8] B. Bloom, “Space/time trade-offs in hash coding with allowable errors,” _Communications of the ACM_ , vol. 13, no. 7, pp. 422–426, 1970.
* [9] D. Vatsalan and P. Christen, “Privacy-preserving matching of similar patients,” _JBI_ , vol. 59, pp. 285–298, 2016.
* [10] D. Vatsalan, P. Christen, and V. S. Verykios, “A taxonomy of privacy-preserving record linkage techniques,” _Information Systems_ , vol. 38, no. 6, pp. 946–969, 2013.
* [11] S. M. Randall, A. M. Ferrante, J. H. Boyd, and J. B. Semmens, “Privacy-preserving record linkage on large real world datasets,” _JBI_ , vol. 50, no. 1, p. 1, 2014.
* [12] G. Cormode and S. Muthukrishnan, “Approximating data with the count-min data structure,” _IEEE Software_ , vol. 29, no. 1, 2012.
* [13] D. Karapiperis, D. Vatsalan, V. S. Verykios, and P. Christen, “Large-scale multi-party counting set intersection using a space efficient global synopsis,” in _DASFAA_ , Hanoi, 2015.
* [14] L. Melis, G. Danezis, and E. De Cristofaro, “Efficient private statistics with succinct sketches,” in _NDSS_ , 2016.
* [15] H. Chen, K. Laine, and P. Rindal, “Fast private set intersection from homomorphic encryption,” in _ACM SIGSAC Conference on Computer and Communications Security_. ACM, 2017, pp. 1243–1255.
* [16] A. C. D. Resende and D. F. Aranha, “Faster unbalanced private set intersection,” _FC 2018_ , 2018.
* [17] B. Pinkas, T. Schneider, and M. Zohner, “Faster private set intersection based on OT extension,” in _USENIX Security Symposium_ , 2014, pp. 797–812.
* [18] B. Pinkas, T. Schneider, G. Segev, and M. Zohner, “Phasing: Private set intersection using permutation-based hashing,” in _USENIX Security Symposium_ , 2015, pp. 515–530.
* [19] B. Pinkas, T. Schneider, C. Weinert, and U. Wieder, “Efficient circuit-based psi via cuckoo hashing,” in _EuroCrypt_. Springer, 2018, pp. 125–157.
* [20] C. Dwork, “Differential privacy,” _International Colloquium on Automata, Languages and Programming_ , pp. 1–12, 2006.
* [21] C. Dwork, “Differential privacy: A survey of results,” in _Theory and Applications of Models of Computation_. Springer, 2008, pp. 1–19.
* [22] C. Dwork, M. Naor, T. Pitassi, G. N. Rothblum, and S. Yekhanin, “Pan-private streaming algorithms.” in _ICS_ , 2010, pp. 66–80.
* [23] A. Evfimievski, J. Gehrke, and R. Srikant, “Limiting privacy breaches in privacy preserving data mining,” in _ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems_ , 2003, pp. 211–222.
* [24] A. Greenberg, “Apple’s ‘differential privacy’is about collecting your data—but not your data,” _Wired, June_ , vol. 13, 2016.
* [25] P. Christen, T. Ranbaduge, D. Vatsalan, and R. Schnell, “Precise and fast cryptanalysis for Bloom filter based privacy-preserving record linkage,” _IEEE TKDE_ , 2018.
* [26] A. Bittau, Ú. Erlingsson, P. Maniatis, I. Mironov, A. Raghunathan, D. Lie, M. Rudominer, U. Kode, J. Tinnes, and B. Seefeld, “Prochlo: Strong privacy for analytics in the crowd,” in _Symposium on Operating Systems Principles_ , 2017, pp. 441–459.
* [27] P. Christen, _Data matching - concepts and techniques for record linkage, entity resolution, and duplicate detection_ , ser. Data-Centric Systems and Applications. Springer, 2012.
* [28] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano, “Public key encryption with keyword search,” in _International conference on the theory and applications of cryptographic techniques_. Springer, 2004, pp. 506–522.
* [29] D. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” _sp_ , p. 0044, 2000.
* [30] T. D. Vo-Huu, E.-O. Blass, and G. Noubir, “Epic: efficient privacy-preserving counting for mapreduce,” _Computing_ , vol. 101, no. 9, pp. 1265–1286, 2019.
* [31] H. Corrigan-Gibbs and D. Boneh, “Prio: Private, robust, and scalable computation of aggregate statistics,” in _USENIX Symposium on Networked Systems Design and Implementation_ , 2017, pp. 259–282.
* [32] M. Pettai and P. Laud, “Combining differential privacy and secure multiparty computation,” in _Proceedings of the 31st Annual Computer Security Applications Conference_ , 2015, pp. 421–430.
* [33] V. Bindschaedler, S. Rane, A. E. Brito, V. Rao, and E. Uzun, “Achieving differential privacy in secure multiparty data aggregation protocols on star networks,” in _ACM on Conference on Data and Application Security and Privacy_ , 2017, pp. 115–125.
* [34] V. Perrier, H. J. Asghar, and D. Kaafar, “Private continual release of real-valued data streams,” _NDSS_ , 2019.
* [35] T. Wang, J. Q. Chen, Z. Zhang, D. Su, Y. Cheng, Z. Li, N. Li, and S. Jha, “Continuous release of data streams under both centralized and local differential privacy,” _NDSS_ , 2020.
* [36] T. Kulkarni, G. Cormode, and D. Srivastava, “Answering range queries under local differential privacy,” _VLDB_ , vol. 12, 2018.
* [37] Z. Xiang, B. Ding, X. He, and J. Zhou, “Linear and range counting under metric-based local differential privacy,” in _2020 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 2020, pp. 908–913.
* [38] M. Yang, I. Tjuawinata, K.-Y. Lam, T. Zhu, and J. Zhao, “Fair and differentially private distributed frequency estimation,” _arXiv preprint arXiv:2104.05974_ , 2021.
* [39] R. Stanojevic, M. Nabeel, and T. Yu, “Distributed cardinality estimation of set operations with differential privacy,” in _2017 IEEE Symposium on Privacy-Aware Computing (PAC)_. IEEE, 2017, pp. 37–48.
* [40] M. Alaggan, M. Cunche, and M. Minier, “Non-interactive (t, n)-incidence counting from differentially private indicator vectors,” in _Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics_ , 2017, pp. 1–9.
Dinusha Vatsalan is a Senior Lecturer in Cyber Security at Macquarie
University. Dinusha received her PhD in Computer Science from Australian
National University and BSc (Hons) from University of Colombo, Sri Lanka. She
was a Research Scientist at Data61, CSIRO. Her research interests include
privacy-preserving technologies for record linkage, data sharing, and
analytics, privacy attacks and defences, and privacy risk quantification. She
has authored over 60 scientific articles in these topics.
Raghav Bhaskar is a Co-founder and CEO at AppsPicket, New Delhi, India with
experience in the building blocks of cryptography (Digital signatures, Zero-
Knowledge Proofs, Anonymous Credentials etc.) and privacy (Differential
Privacy, Noiseless Privacy, Non-Interactive Zero Knowledge etc.) and interests
in solving real life security and privacy challenges. He was a senior research
scientist at Data61, CSIRO.
Mohamed Ali (Dali) Kaafar is a Professor at the Faculty of Science and
Engineering at Macquarie University and the Executive Director of the Optus-
Macquarie University Cyber Security Hub. He is also the founder of the
Information Security and Privacy group at CSIRO Data61. Prior to that, Dali
was the research leader of the Data Privacy and Mobile systems group at NICTA
and Senior principal researcher at INRIA, the French research institution of
computer science and automation. He received his PhD from University of Nice
Sophia Antipolis and INRIA in France where he pioneered research in the
security of Internet Coordinate Systems. Prof. Kaafar is an associate editor
of IEEE Transactions on Information Forensics & Security and serves in the
Editorial Board of the Journal on Privacy Enhancing Technologies. He published
over 300 scientific peer-reviewed papers. He received several awards including
the INRIA Excellence of research National Award, and the Andreas Pfitzman
award from the Privacy Enhancing Technologies symposium in 2011. In 2019, he
has also been awarded the prestigious and selective Chinese Academy of
Sciences President’s Professorial fellowship Award.
# Sketch-Based Streaming Anomaly Detection in Dynamic Graphs
Siddharth Bhatia
National University of Singapore
<EMAIL_ADDRESS>
Mohit Wadhwa
<EMAIL_ADDRESS>
Philip S. Yu
University of Illinois at Chicago
<EMAIL_ADDRESS>
Bryan Hooi
National University of Singapore
<EMAIL_ADDRESS>
###### Abstract
Given a stream of graph edges from a dynamic graph, how can we assign anomaly
scores to edges and subgraphs in an online manner, for the purpose of
detecting unusual behavior, using constant time and memory? For example, in
intrusion detection, existing work seeks to detect either anomalous edges or
anomalous subgraphs, but not both. In this paper, we first extend the count-
min sketch data structure to a higher-order sketch. This higher-order sketch
has the useful property of preserving the dense subgraph structure (dense
subgraphs in the input turn into dense submatrices in the data structure). We
then propose four online algorithms that utilize this enhanced data structure,
which (a) detect both edge and graph anomalies; (b) process each edge and
graph in constant memory and constant update time per newly arriving edge,
and; (c) outperform state-of-the-art baselines on four real-world datasets.
Our method is the first streaming approach that incorporates dense subgraph
search to detect graph anomalies in constant memory and time.
## 1 Introduction
Consider an intrusion detection system, in which many forms of anomalous
behavior can be described as a group of attackers making a large number of
connections to some set of targeted machines to restrict accessibility or look
for potential vulnerabilities. We can model this as a dynamic graph, where
nodes correspond to machines, and each edge represents a timestamped
connection from one machine to another. In this graph, anomalous behavior
often takes the form of a dense subgraph, as shown in several real-world
datasets in [1, 2, 3]. Thus, we ask the question: Given a stream of graph
edges from a dynamic graph, how can we assign anomaly scores to both edges and
subgraphs in an online manner, for the purpose of detecting unusual behavior,
using constant memory and constant update time per newly arriving edge?
Several approaches [4, 5, 6, 7, 8, 9, 10] aim to detect anomalies in graph
settings. However, these approaches focus on static graphs, whereas many real-
world graphs are time-evolving in nature. In streaming or online graph
scenarios, some methods can detect the presence of anomalous edges, [11, 3,
12, 13], while others can detect anomalous subgraphs [1, 2, 14]. However, all
existing methods are limited to either anomalous edge or graph detection but
not able to detect both kinds of anomalies, as summarized in Table 1. As we
discuss in Section 7, our approach outperforms existing methods in both
accuracy and running time; and on both anomalous edge and subgraph detection
scenarios. Moreover, our approach is the only streaming method that makes use
of dense subgraph search to detect graph anomalies while only requiring
constant memory and time.
We first extend the two-dimensional sketch to a higher-order sketch to enable
it to embed the relation between the source and destination nodes in a graph.
A higher-order sketch has the useful property of preserving the dense subgraph
structure; dense subgraphs in the input turn into dense submatrices in this
data structure. Thus, the problem of detecting a dense subgraph from a large
graph reduces to finding a dense submatrix in a constant size matrix, which
can be achieved in constant time. The higher-order sketch allows us to propose
several algorithms to detect both anomalous edges and subgraphs in a streaming
manner. We introduce two edge anomaly detection methods, AnoEdge-G, and
AnoEdge-L, and two graph anomaly detection methods AnoGraph, and AnoGraph-K,
that use the same data structure to detect the presence of a dense submatrix,
and consequently anomalous edges, or subgraphs respectively. All our
approaches process edges and graphs in constant time, and are independent of
the graph size, i.e., they require constant memory. We also provide
theoretical guarantees on the higher-order sketch estimate and the submatrix
density measure. In summary, the main contributions of our paper are:
1. 1.
Higher-Order Sketch (Section 4): We transform the dense subgraph detection
problem into finding a dense submatrix (which can be achieved in constant
time) by extending the count-min sketch (CMS) [15] data structure to a higher-
order sketch.
2. 2.
Streaming Anomaly Detection (Sections 5,6): We propose four novel online
approaches to detect anomalous edges and graphs in real-time, with constant
memory and update time. Moreover, this is the first streaming work that
incorporates dense subgraph search to detect graph anomalies in constant
memory/time.
3. 3.
Effectiveness (Section 7): We outperform all state-of-the-art streaming edge
and graph anomaly detection methods on four real-world datasets.
Reproducibility: Our code and datasets are available at
https://github.com/Stream-AD/AnoGraph.
Table 1: Comparison of relevant anomaly detection approaches.
Property | DenseStream | SedanSpot | MIDAS-R | PENminer | F-FADE | DenseAlert | SpotLight | AnomRank | Our Method
---|---|---|---|---|---|---|---|---|---
| (KDD’17) | (ICDM’20) | (AAAI’20) | (KDD’20) | (WSDM’21) | (KDD’17) | (KDD’18) | (KDD’19) | ($2021$)
Edge Anomaly | ✓ | ✓ | ✓ | ✓ | ✓ | – | – | – | ✔
Graph Anomaly | – | – | – | – | – | ✓ | ✓ | ✓ | ✔
Constant Memory | – | ✓ | ✓ | – | ✓ | – | ✓ | – | ✔
Constant Update Time | – | ✓ | ✓ | ✓ | ✓ | – | ✓ | – | ✔
Dense Subgraph Search | ✓ | – | – | – | – | ✓ | – | – | ✔
## 2 Related Work
Our work is closely related to areas like anomaly detection on graphs [16, 17,
18, 19, 20, 21, 22, 23] and streams [24, 25, 26, 27, 28, 29, 30, 31, 32], and
streaming algorithms [33, 34, 35, 36, 37]. Higher-order sketches are discussed
in [37], however, they are restricted to count-sketches and non-graph
settings. [38, 39] discuss deep learning based anomaly detection, however,
such approaches are unable to detect anomalies in a streaming manner. [4, 7,
8, 5, 10, 6, 9] are limited to anomaly detection in static graphs. In this
section, however, we limit our review only to methods detecting edge and graph
anomalies in dynamic graphs; see [40] for an extensive survey.
Edge Stream Methods: HotSpot [41] detects nodes whose egonets suddenly change.
RHSS [42] focuses on sparsely-connected graph parts. CAD [43] localizes
anomalous changes using commute time distance measurement. More recently,
DenseStream [1] maintains and updates a dense subtensor in a tensor stream.
SedanSpot [11] identifies edge anomalies based on edge occurrence,
preferential attachment, and mutual neighbors. PENminer [12] explores the
persistence of activity snippets, i.e., the length and regularity of edge-
update sequences’ reoccurrences. F-FADE [13] aims to detect anomalous
interaction patterns by factorizing their frequency. MIDAS [3] identifies
microcluster-based anomalies. However, all these methods are unable to detect
graph anomalies.
Graph Stream Methods: DTA/STA [44] approximates the adjacency matrix of the
current snapshot using matrix factorization. Copycatch [45] spots near-
bipartite cores where each node is connected to others in the same core
densely within a short time. SPOT/DSPOT [30] use extreme value theory to
automatically set thresholds for anomalies. IncGM+ [46] utilizes an
incremental method to process graph updates. More recently, DenseAlert
identifies subtensors created within a short time and utilizes an incremental
method to process graph updates or subgraphs more efficiently. SpotLight [2] discovers
anomalous graphs with dense bi-cliques, but uses a randomized approach without
any search for dense subgraphs. AnomRank [14], inspired by PageRank [47],
iteratively updates two score vectors and computes anomaly scores. However,
these methods are slow and do not detect edge anomalies. Moreover, they do not
search for dense subgraphs in constant memory and time.
## 3 Problem
Let $\mathscr{E}=\\{e_{1},e_{2},\cdots\\}$ be a stream of weighted edges from
a time-evolving graph $\mathcal{G}$. Each arriving edge is a tuple
$e_{i}=(u_{i},v_{i},w_{i},t_{i})$ consisting of a source node
$u_{i}\in\mathcal{V}$, a destination node $v_{i}\in\mathcal{V}$, a weight
$w_{i}$, and a time of occurrence $t_{i}$, the time at which the edge is added
to the graph. For example, in a network traffic stream, an edge $e_{i}$ could
represent a connection made from a source IP address $u_{i}$ to a destination
IP address $v_{i}$ at time $t_{i}$. We do not assume that the set of vertices
$\mathcal{V}$ is known a priori: for example, new IP addresses or user IDs may
be created over the course of the stream.
We model $\mathcal{G}$ as a directed graph. Undirected graphs can be handled
by treating an incoming undirected edge as two simultaneous directed edges,
one in each direction. We also allow $\mathcal{G}$ to be a multigraph: edges
can be created multiple times between the same pair of nodes. Edges are
allowed to arrive simultaneously: i.e. $t_{i+1}\geq t_{i}$, since in many
applications $t_{i}$ is given as a discrete time tick.
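For concreteness, the following is a minimal Python sketch of the edge-stream representation assumed above (the `Edge` tuple and helper are illustrative, not part of any released code):

```python
from typing import List, NamedTuple

class Edge(NamedTuple):
    """One item of the edge stream: (source, destination, weight, time)."""
    u: str
    v: str
    w: float
    t: int

def expand_edge(u: str, v: str, w: float, t: int, undirected: bool = False) -> List[Edge]:
    """An undirected edge is treated as two simultaneous directed edges."""
    edges = [Edge(u, v, w, t)]
    if undirected:
        edges.append(Edge(v, u, w, t))
    return edges

# Example: two connections arriving at the same time tick (a multigraph is allowed).
stream = expand_edge("10.0.0.1", "10.0.0.7", 1.0, 1) + expand_edge("10.0.0.2", "10.0.0.7", 1.0, 1)
```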
The desired properties of our algorithm are as follows:
* •
Detecting Anomalous Edges: To detect whether the edge is part of an anomalous
subgraph in an online manner. Being able to detect anomalies at the finer
granularity of edges allows early detection so that recovery can be started as
soon as possible and the effect of malicious activities is minimized.
* •
Detecting Anomalous Graphs: To detect the presence of an unusual subgraph
(consisting of edges received over a period of time) in an online manner,
since such subgraphs often correspond to unexpected behavior, such as
coordinated attacks.
* •
Constant Memory and Update Time: To ensure scalability, memory usage and
update time should not grow with the number of nodes or the length of the
stream. Thus, for a newly arriving edge, our algorithm should run in constant
memory and update time.
## 4 Higher-Order Sketch & Notations
Count-min sketches (CMS) [15] are popular streaming data structures used by
several online algorithms [48]. CMS uses multiple hash functions to map events
to frequencies, but unlike a hash table uses only sub-linear space, at the
expense of overcounting some events due to collisions. Frequency is
approximated as the minimum over all hash functions. CMS, shown in Figure
1(a), is represented as a two-dimensional matrix where each row corresponds to
a hash function and hashes to the same number of buckets (columns).
We introduce a Higher-order CMS (H-CMS) data structure where each hash
function maps multi-dimensional input to a generic tensor instead of mapping
it to a row vector. H-CMS enhances CMS by separately hashing the individual
components of an entity thereby maintaining more information. Figure 1(b)
shows a 3-dimensional H-CMS that can be used to hash two-dimensional entities
such as graph edges to a matrix. The source node is hashed to the first
dimension and the destination node to the other dimension of the sketch
matrix, as opposed to the original CMS that will hash the entire edge to a
one-dimensional row vector as shown in Figure 1(a).
Figure 1: (a) Original CMS (b) Higher-order CMS
###### Theorem 1.
(Proof in Appendix C) H-CMS has the same estimate guarantees as the original
CMS.
We use a 3-dimensional H-CMS (operations described in Appendix A) where the
number of hash functions is denoted by $n_{r}$, and matrix $\mathcal{M}$
corresponding to each hash function is of dimension $n_{b}\times n_{b}$, i.e.,
a square matrix. A hash function denoted by $h(u)$ maps an entity $u$ to an
integer in the range $[0,n_{b})$. A 3-dimensional H-CMS maps edge $(u,v)$ to a
matrix index $(h(u),h(v))$, i.e. the source node is mapped to a row index and
the destination node is mapped to a column index. Therefore, each matrix in a
3-dimensional H-CMS captures the essence of a graph adjacency matrix. Dense
subgraph detection can thus be transformed into a dense submatrix detection
problem (as shown in Figure 2) where the size of the matrix is a small
constant, independent of the number of edges or the graph size.
Figure 2: (a) Dense subgraph in the original graph between source nodes
$s_{1},s_{2}$, and destination nodes $d_{1},d_{2},d_{3}$ is transformed to a
(b) Dense submatrix between rows $r_{1},r_{2}$, and columns
$c_{1},c_{2},c_{3}$ in the H-CMS.
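To make the data structure concrete, here is a minimal Python sketch of a 3-dimensional H-CMS (illustrative only, not the released implementation); the class name, the per-row seeds, and the use of Python's built-in `hash` are simplifying assumptions:

```python
import numpy as np

class HigherOrderCMS:
    """Minimal 3-dimensional H-CMS: n_r hash functions, each backed by an n_b x n_b matrix."""

    def __init__(self, n_r: int = 2, n_b: int = 32, seed: int = 0):
        self.n_r, self.n_b = n_r, n_b
        self.mats = np.zeros((n_r, n_b, n_b))
        self.seeds = [seed + r for r in range(n_r)]  # stand-ins for independent hash functions

    def _h(self, r: int, x) -> int:
        return hash((self.seeds[r], x)) % self.n_b

    def update(self, u, v, w: float = 1.0) -> None:
        # Source node -> row index, destination node -> column index (Figure 1(b)).
        for r in range(self.n_r):
            self.mats[r, self._h(r, u), self._h(r, v)] += w

    def estimate(self, u, v) -> float:
        # As in the original CMS, take the minimum over all hash functions.
        return min(self.mats[r, self._h(r, u), self._h(r, v)] for r in range(self.n_r))

    def decay(self, alpha: float) -> None:
        # Temporal decay used by the streaming algorithms in Sections 5 and 6.
        self.mats *= alpha

    def reset(self) -> None:
        self.mats[:] = 0.0
```

Each `mats[r]` then acts as a compressed adjacency matrix, which is what the dense-submatrix procedures below operate on.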
Frequently used symbols are discussed in Table 2, and we leverage the subgraph
density measure discussed in [49] to define the submatrix $(S_{x},T_{x})$
density.
Table 2: Table of symbols. Symbol | Definition
---|---
$n_{r}$ | number of hash functions
$n_{b}$ | number of buckets
$h(u)$ | hash function $u\rightarrow[0,n_{b})$
$\mathcal{M}$ | a square matrix of dimensions $n_{b}\times n_{b}$
$\mathcal{M}[i][j]$ | element at row index i and column index j
$S$ | set of all row indices
$S_{cur}$ | set of current submatrix row indices
$S_{rem}$ | set of remaining row indices i.e. indices not part of current submatrix
$T$ | set of all column indices
$T_{cur}$ | set of current submatrix column indices
$T_{rem}$ | set of remaining column indices i.e. indices not part of current submatrix
$[z]$ | set of all integers in the range $[1,z]$, i.e., $\\{1,2,...,z\\}$
$\mathcal{D}(\mathcal{M},S_{x},T_{x})$ | density of submatrix ($S_{x}$, $T_{x}$)
$\mathcal{E}(\mathcal{M},S_{x},T_{x})$ | sum of elements of submatrix ($S_{x}$, $T_{x}$)
$\mathcal{R}(\mathcal{M},u,T_{x})$ | submatrix row-sum i.e. sum of elements of submatrix ($\\{u\\}$, $T_{x}$)
$\mathcal{C}(\mathcal{M},S_{x},v)$ | submatrix column-sum i.e. sum of elements of submatrix ($S_{x}$, $\\{v\\}$)
$\mathcal{L}(\mathcal{M},u,v,S_{x},T_{x})$ | likelihood of index $(u,v)$ w.r.t. submatrix $(S_{x},T_{x})$
$d_{max}$ | maximum reported submatrix density
###### Definition 1.
Given matrix $\mathcal{M}$, density of a submatrix of $\mathcal{M}$
represented by $S_{x}\subseteq S$ and $T_{x}\subseteq T$, is:
$\mathcal{D}(\mathcal{M},S_{x},T_{x})=\frac{\sum_{s\in S_{x}}\sum_{t\in
T_{x}}\mathcal{M}[s][t]}{\sqrt{|S_{x}||T_{x}|}}$ (1)
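As a concrete reading of Equation (1), a short illustrative NumPy helper (the function name is ours):

```python
import numpy as np

def density(M: np.ndarray, S_x, T_x) -> float:
    """Density of the submatrix of M given by row set S_x and column set T_x (Eq. 1)."""
    S_x, T_x = list(S_x), list(T_x)
    if not S_x or not T_x:
        return 0.0
    total = M[np.ix_(S_x, T_x)].sum()
    return total / np.sqrt(len(S_x) * len(T_x))
```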
## 5 Edge Anomalies
In this section, using the H-CMS data structure, we propose AnoEdge-G and
AnoEdge-L to detect edge anomalies by checking whether the received edge when
mapped to a sketch matrix element is part of a dense submatrix. AnoEdge-G
finds a Global dense submatrix and performs well in practice while AnoEdge-L
maintains and updates a Local dense submatrix around the matrix element and
therefore has better time complexity.
### 5.1 AnoEdge-G
AnoEdge-G, as described in Algorithm 1, maintains a _temporally decaying_
H-CMS, i.e. whenever 1 unit of time passes, we multiply all the H-CMS counts
by a fixed factor $\alpha$ (lines 2,4). This decay simulates the gradual
‘forgetting’ of older and hence more outdated information. When an edge
$(u,v)$ arrives, $u$, $v$ are mapped to matrix indices $h(u)$, $h(v)$
respectively for each hash function $h$, and the corresponding H-CMS counts
are updated (line 5). Edge-Submatrix-Density procedure (described below) is
then called to compute the density of a dense submatrix around $(h(u),h(v))$.
Density is reported as the anomaly score for the edge; a larger density
implies that the edge is more likely to be anomalous.
Edge-Submatrix-Density procedure calculates the density of a dense submatrix
around a given index $(u,v)$. A $1\times 1$ submatrix represented by $S_{cur}$
and $T_{cur}$, is initialized with row-index $u$ and column index $v$ (line
9). The submatrix is iteratively expanded by greedily selecting a row $u_{p}$
from $S_{rem}$ (or a column $v_{p}$ from $T_{rem}$) that obtains the maximum
row (or column) sum with the current submatrix (lines 14,16). This selected
row $u_{p}$ (or column $v_{p}$) is removed from $S_{rem}$ (or $T_{rem}$), and
added to $S_{cur}$ (or $T_{cur}$) (lines 15,17). The process is repeated until
both $S_{rem}$ and $T_{rem}$ are empty (line 11). Density of the current
submatrix is computed at each iteration of the submatrix expansion process and
the maximum over all greedily formed submatrix densities is returned (line
18).
1
Input: Stream $\mathscr{E}$ of edges over time
Output: Anomaly score per edge
2
1 Procedure _AnoEdge-G(_$\mathscr{E}$_)_
Initialize H-CMS matrix $\mathcal{M}$ for edge count
// H-CMS data structure
2 while _new edge $e=(u,v,w,t)\in\mathscr{E}$ is received_ do
Temporal decay H-CMS with timestamp change
// decay count
Update H-CMS matrix $\mathcal{M}$ for new edge $(u,v)$ with value $w$
// update count
3 output $score(e)\leftarrow$ Edge-Submatrix-Density($\mathcal{M},h(u),h(v)$)
4
5
6 Procedure _Edge-Submatrix-Density(_$\mathcal{M}$ , $u$, $v$_)_
7 $S\leftarrow[n_{b}];\enspace T\leftarrow[n_{b}];\enspace
S_{cur}\leftarrow\\{u\\};\enspace T_{cur}\leftarrow\\{v\\};\enspace
S_{rem}\leftarrow S/\\{u\\};\enspace T_{rem}\leftarrow T/\\{v\\}$
8 $d_{max}\leftarrow\mathcal{D}(\mathcal{M},S_{cur},T_{cur})$
9 while _$S_{rem}\neq\emptyset\enspace\vee\enspace T_{rem}\neq\emptyset$_ do
$u_{p}\leftarrow\operatorname*{argmax}_{s_{p}\in
S_{rem}}\mathcal{R}(\mathcal{M},s_{p},T_{cur})$
// submatrix max row-sum index
$v_{p}\leftarrow\operatorname*{argmax}_{t_{p}\in
T_{rem}}\mathcal{C}(\mathcal{M},S_{cur},t_{p})$
// submatrix max column-sum index
10 if _$\mathcal{R}(\mathcal{M},u_{p},T_{cur})
>\mathcal{C}(\mathcal{M},S_{cur},v_{p})$_ then
11 $S_{cur}\leftarrow S_{cur}\cup\\{u_{p}\\};\enspace S_{rem}\leftarrow
S_{rem}/\\{u_{p}\\}$
12
13 else
14 $T_{cur}\leftarrow T_{cur}\cup\\{v_{p}\\};\enspace T_{rem}\leftarrow
T_{rem}/\\{v_{p}\\}$
15
16 $d_{max}\leftarrow max(d_{max},\mathcal{D}(\mathcal{M},S_{cur},T_{cur}))$
17
return $d_{max}$
// dense submatrix density
18
19
Algorithm 1 AnoEdge-G : Streaming Anomaly Edge Scoring
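The following is a rough Python sketch of the greedy expansion performed by Edge-Submatrix-Density (illustrative only); unlike the analysis in Proposition 1, it recomputes row and column sums directly instead of maintaining incremental arrays, so it is simpler but less efficient:

```python
import numpy as np

def edge_submatrix_density(M: np.ndarray, u: int, v: int) -> float:
    """Greedily grow a dense submatrix around cell (u, v); return the max density seen."""
    n_b = M.shape[0]
    S_cur, T_cur = {u}, {v}
    S_rem = set(range(n_b)) - S_cur
    T_rem = set(range(n_b)) - T_cur

    def dens(S, T):
        return M[np.ix_(sorted(S), sorted(T))].sum() / np.sqrt(len(S) * len(T))

    d_max = dens(S_cur, T_cur)
    while S_rem or T_rem:
        # Candidate row/column with maximum sum restricted to the current submatrix.
        u_p = max(S_rem, key=lambda s: M[s, sorted(T_cur)].sum()) if S_rem else None
        v_p = max(T_rem, key=lambda t: M[sorted(S_cur), t].sum()) if T_rem else None
        row_gain = M[u_p, sorted(T_cur)].sum() if u_p is not None else -np.inf
        col_gain = M[sorted(S_cur), v_p].sum() if v_p is not None else -np.inf
        if row_gain > col_gain:
            S_cur.add(u_p); S_rem.remove(u_p)
        else:
            T_cur.add(v_p); T_rem.remove(v_p)
        d_max = max(d_max, dens(S_cur, T_cur))
    return d_max
```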
###### Proposition 1.
(Proof in Appendix D.1) Time complexity of Algorithm 1 is
$O(|\mathscr{E}|*n_{r}*n_{b}^{2})$ (this is for processing all edges; the time per edge is constant).
###### Proposition 2.
(Proof in Appendix D.1) Memory complexity of Algorithm 1 is
$O(n_{r}*n_{b}^{2})$.
### 5.2 AnoEdge-L
Inspired by Definition 1, we define the likelihood measure of a matrix index $(u,v)$ with respect to a submatrix $(S_{x},T_{x})$ as the sum of the entries of $\mathcal{M}$ lying in column $v$ restricted to the rows $S_{x}$ or in row $u$ restricted to the columns $T_{x}$, divided by the total number of such entries.
###### Definition 2.
Given matrix $\mathcal{M}$, likelihood of an index $(u,v)$ with respect to a
submatrix represented by $S_{x}\subseteq S$ and $T_{x}\subseteq T$, is:
$\mathcal{L}(\mathcal{M},u,v,S_{x},T_{x})=\frac{\sum_{(s,t)\,\in\,S_{x}\times\\{v\\}\,\cup\,\\{u\\}\times T_{x}}\mathcal{M}[s][t]}{|S_{x}\times\\{v\\}\,\cup\,\\{u\\}\times T_{x}|}$
(2)
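In code, Equation (2) amounts to averaging the sketch entries that lie in row $u$ (restricted to $T_{x}$) or in column $v$ (restricted to $S_{x}$); a small illustrative helper:

```python
import numpy as np

def likelihood(M: np.ndarray, u: int, v: int, S_x, T_x) -> float:
    """Likelihood of cell (u, v) w.r.t. submatrix (S_x, T_x), as in Eq. (2)."""
    cells = {(s, v) for s in S_x} | {(u, t) for t in T_x}
    if not cells:
        return 0.0
    return sum(M[s, t] for (s, t) in cells) / len(cells)
```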
AnoEdge-L, as described in Algorithm 2, maintains a temporally decaying H-CMS
to store the edge counts. We also initialize a mutable submatrix of size
$1\times 1$ with a random element, and represent it as $(S_{cur},T_{cur})$. As
we process edges, we greedily update $(S_{cur},T_{cur})$ to maintain it as a
dense submatrix. When an edge arrives, H-CMS counts are first updated, and the
received edge is then used to check whether to _expand_ the current submatrix
(line 7). If the submatrix density increases upon the addition of the row (or
column), then the row-index $h(u)$ (or column-index $h(v)$) is added to the
current submatrix, $(S_{cur},T_{cur})$. To remove row(s) and column(s) whose counts have decayed over time, the process iteratively selects the row (or column) with the minimum row-sum (or column-sum) and removes it as long as doing so increases the current submatrix density. This ensures that the current submatrix is as _condensed_ as possible (line 9). As defined in Definition 2, AnoEdge-L computes the
likelihood score of the edge with respect to $(S_{cur},T_{cur})$ (line 10). A
higher likelihood measure implies that the edge is more likely to be
anomalous.
1
Input: Stream $\mathscr{E}$ of edges over time
Output: Anomaly score per edge
1 Procedure _AnoEdge-L(_$\mathscr{E}$_)_
Initialize H-CMS matrix $\mathcal{M}$ for edges count
// H-CMS data structure
Initialize a randomly picked $1\times 1$ submatrix $(S_{cur},T_{cur})$
// mutable submatrix
2 while _new edge $e=(u,v,w,t)\in\mathscr{E}$ is received_ do
Temporal decay H-CMS with timestamp change
// decay count
Update H-CMS matrix $\mathcal{M}$ for new edge $(u,v)$ with value $w$
// update Count
3 $\triangleright$ Check and Update Submatrix:
Expand $(S_{cur},T_{cur})$
// expand submatrix
Condense $(S_{cur},T_{cur})$
// condense submatrix
output $score(e)\leftarrow\mathcal{L}(\mathcal{M},h(u),h(v),S_{cur},T_{cur})$
// from Definition 2
4
5
6
Algorithm 2 AnoEdge-L : Streaming Anomaly Edge Scoring
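A rough Python sketch of the expand/condense maintenance of $(S_{cur},T_{cur})$ described above (illustrative only; the actual algorithm keeps incremental row/column sums to obtain the complexity stated in Proposition 3):

```python
import numpy as np

def density(M, S, T):
    # Density from Definition 1; empty index sets have density zero.
    return M[np.ix_(sorted(S), sorted(T))].sum() / np.sqrt(len(S) * len(T)) if S and T else 0.0

def expand(M, S_cur, T_cur, u, v):
    """Add row u / column v (in place) only if doing so increases the density."""
    if u not in S_cur and density(M, S_cur | {u}, T_cur) > density(M, S_cur, T_cur):
        S_cur.add(u)
    if v not in T_cur and density(M, S_cur, T_cur | {v}) > density(M, S_cur, T_cur):
        T_cur.add(v)

def condense(M, S_cur, T_cur):
    """Repeatedly drop the min-sum row/column (in place) while that increases the density."""
    improved = True
    while improved and (len(S_cur) > 1 or len(T_cur) > 1):
        improved = False
        d = density(M, S_cur, T_cur)
        if len(S_cur) > 1:
            r = min(S_cur, key=lambda s: M[s, sorted(T_cur)].sum())
            if density(M, S_cur - {r}, T_cur) > d:
                S_cur.remove(r); improved = True; continue
        if len(T_cur) > 1:
            c = min(T_cur, key=lambda t: M[sorted(S_cur), t].sum())
            if density(M, S_cur, T_cur - {c}) > d:
                T_cur.remove(c); improved = True
```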
###### Proposition 3.
(Proof in Appendix D.2) Time complexity of Algorithm 2 is
$O(n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}*n_{b})$.
###### Proposition 4.
(Proof in Appendix D.2) Memory complexity of Algorithm 2 is
$O(n_{r}*n_{b}^{2})$.
## 6 Graph Anomalies
We now propose AnoGraph and AnoGraph-K to detect graph anomalies by first
mapping the graph to a higher-order sketch, and then checking for a dense
submatrix. These are the first streaming algorithms that make use of dense
subgraph search to detect graph anomalies in constant memory and time.
AnoGraph greedily finds a dense submatrix with a 2-approximation guarantee on
the density measure. AnoGraph-K leverages Edge-Submatrix-Density from
Algorithm 1 to greedily find a dense submatrix around $K$ strategically picked
matrix elements performing equally well in practice.
### 6.1 AnoGraph
AnoGraph, as described in Algorithm 3, maintains an H-CMS to store the edge
counts that are reset whenever a new graph arrives. The edges are first
processed to update the H-CMS counts. AnoGraph-Density procedure (described
below) is then called to find the dense submatrix. AnoGraph reports anomaly
score as the density of the detected (dense) submatrix; a larger density
implies that the graph is more likely to be anomalous.
AnoGraph-Density procedure computes the density of a dense submatrix of matrix
$\mathcal{M}$. The current dense submatrix is initialised as matrix
$\mathcal{M}$ and then the row (or column) from the current submatrix with
minimum row (or column) sum is greedily removed. This process is repeated
until $S_{cur}$ and $T_{cur}$ are empty (line 12). The density of the current submatrix is computed at each iteration of this removal process and the maximum over all densities is returned (line 19).
Algorithm 3 is a special case of the densest-subgraph problem for directed graphs [49]: the directed graph is represented by its adjacency matrix, and detecting the densest subgraph amounts to detecting the densest submatrix. We now provide a guarantee on the density measure.
###### Theorem 2.
(Proof in Appendix E.1) Algorithm 3 achieves a 2-approximation guarantee for
the densest submatrix problem.
1
Input: Stream $\mathscr{G}$ of edges over time
Output: Anomaly score per graph
2
1 Procedure _AnoGraph(_$\mathscr{G}$_)_
Initialize H-CMS matrix $\mathcal{M}$ for graph edges count
// H-CMS data structure
2 while _new graph $G\in\mathscr{G}$ is received_ do
Reset H-CMS matrix $\mathcal{M}$ for graph $G$
// reset count
3 for _edge $e=(u,v,w,t)\in G$_ do
Update H-CMS matrix $\mathcal{M}$ for edge $(u,v)$ with value $w$
// update count
4
output $score(G)\leftarrow$ AnoGraph-Density($\mathcal{M}$)
// anomaly score
5
6
7
8Procedure _AnoGraph-Density(_$\mathcal{M}$_)_
$S_{cur}\leftarrow[n];\enspace T_{cur}\leftarrow[n]$
// initialize to size of $\mathcal{M}$
9 $d_{max}\leftarrow\mathcal{D}(\mathcal{M},S_{cur},T_{cur})$
10 while _$S_{cur}\neq\emptyset\enspace\vee\enspace T_{cur}\neq\emptyset$_ do
$u_{p}\leftarrow\operatorname*{argmin}_{s_{p}\in
S_{cur}}\mathcal{R}(\mathcal{M},s_{p},T_{cur})$
// submatrix min row-sum index
$v_{p}\leftarrow\operatorname*{argmin}_{t_{p}\in
T_{cur}}\mathcal{C}(\mathcal{M},S_{cur},t_{p})$
// submatrix min column-sum index
11 if _$\mathcal{R}(\mathcal{M},u_{p},T_{cur})
<\mathcal{C}(\mathcal{M},S_{cur},v_{p})$_ then
$S_{cur}\leftarrow S_{cur}/\\{u_{p}\\}$
// remove row
12
13 else
$T_{cur}\leftarrow T_{cur}/\\{v_{p}\\}$
// remove column
14
15 $d_{max}\leftarrow max(d_{max},\mathcal{D}(\mathcal{M},S_{cur},T_{cur}))$
16
return $d_{max}$
// dense submatrix density
17
Algorithm 3 AnoGraph: Streaming Anomaly Graph Scoring
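A minimal Python sketch of the greedy peeling in AnoGraph-Density (illustrative only; it stops once either index set becomes empty, since the density of a degenerate submatrix is zero and cannot affect the maximum):

```python
import numpy as np

def anograph_density(M: np.ndarray) -> float:
    """Greedy peeling: repeatedly drop the min-sum row/column; return the max density seen."""
    n_b = M.shape[0]
    S_cur, T_cur = set(range(n_b)), set(range(n_b))

    def dens(S, T):
        return M[np.ix_(sorted(S), sorted(T))].sum() / np.sqrt(len(S) * len(T)) if S and T else 0.0

    d_max = dens(S_cur, T_cur)
    while S_cur and T_cur:
        u_p = min(S_cur, key=lambda s: M[s, sorted(T_cur)].sum())
        v_p = min(T_cur, key=lambda t: M[sorted(S_cur), t].sum())
        if M[u_p, sorted(T_cur)].sum() < M[sorted(S_cur), v_p].sum():
            S_cur.remove(u_p)   # remove the row with minimum row-sum
        else:
            T_cur.remove(v_p)   # remove the column with minimum column-sum
        d_max = max(d_max, dens(S_cur, T_cur))
    return d_max
```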
###### Proposition 5.
(Proof in Appendix E.1) Time complexity of Algorithm 3 is
$O(|\mathscr{G}|*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$.
###### Proposition 6.
(Proof in Appendix E.1) Memory complexity of Algorithm 3 is
$O(n_{r}*n_{b}^{2})$.
### 6.2 AnoGraph-K
Similar to AnoGraph, AnoGraph-K maintains an H-CMS which is reset whenever a
new graph arrives. It uses the AnoGraph-K-Density procedure (described below)
to find the dense submatrix. AnoGraph-K is summarised in Algorithm 4.
AnoGraph-K-Density computes the density of a dense submatrix of matrix
$\mathcal{M}$. The intuition comes from the heuristic that the matrix elements
with a higher value are more likely to be part of a dense submatrix. Hence,
the approach considers $K$ largest elements of the matrix $\mathcal{M}$ and
calls Edge-Submatrix-Density from Algorithm 1 to get the dense submatrix
around each of those elements (line 14). The maximum density over the
considered $K$ dense submatrices is returned.
1
Input: Stream $\mathscr{G}$ of edges over time
Output: Anomaly score per graph
2
1 Procedure _AnoGraph-K(_$\mathscr{G},K$_)_
Initialize H-CMS matrix $\mathcal{M}$ for graph edges count
// H-CMS data structure
2 while _new graph $G\in\mathscr{G}$ is received_ do
Reset H-CMS matrix $\mathcal{M}$ for graph $G$
// reset count
3 for _edge $e=(u,v,w,t)\in G$_ do
Update H-CMS matrix $\mathcal{M}$ for edge $(u,v)$ with value $w$
// update count
4
output $score(G)\leftarrow$ AnoGraph-K-Density($\mathcal{M},K$)
// anomaly score
5
6
7
8Procedure _AnoGraph -K-Density(_$\mathcal{M}$ , $K$_)_
$B\leftarrow[n]\times[n]$
// set of all indices
9 $d_{max}\leftarrow 0$
10 for _$j\leftarrow 1$ … $K$_ do
$u_{p},v_{p}\leftarrow\operatorname*{argmax}_{(s_{p},t_{p})\in
B}\mathcal{M}[s_{p}][t_{p}]$
// pick the max element
11 $d_{max}\leftarrow max(d_{max},\textsc{Edge-Submatrix-
Density}({\mathcal{M},u_{p},v_{p}}))$
$B\leftarrow B/\\{(u_{p},v_{p})\\}$
// remove max element index
12
return $d_{max}$
// dense submatrix density
13
Algorithm 4 AnoGraph-K: Streaming Anomaly Graph Scoring
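A short illustrative sketch of AnoGraph-K-Density; it reuses the `edge_submatrix_density` helper sketched after Algorithm 1, so it is not fully standalone:

```python
import numpy as np

def anograph_k_density(M: np.ndarray, K: int) -> float:
    """Seed greedy expansion at the K largest cells of M; return the max density found."""
    flat = np.argsort(M, axis=None)[::-1][:K]            # indices of the K largest entries
    seeds = [np.unravel_index(i, M.shape) for i in flat]
    # edge_submatrix_density: the greedy-expansion sketch given after Algorithm 1.
    return max(edge_submatrix_density(M, int(u), int(v)) for (u, v) in seeds) if seeds else 0.0
```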
###### Proposition 7.
(Proof in Appendix E.2) Time complexity of Algorithm 4 is
$O(|\mathscr{G}|*K*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$.
###### Proposition 8.
(Proof in Appendix E.2) Memory complexity of Algorithm 4 is
$O(n_{r}*n_{b}^{2})$.
## 7 Experiments
In this section, we evaluate the performance of our approaches as compared to
all baselines discussed in Table 1. We use four real-world datasets: _DARPA_
[50] and _ISCX-IDS2012_ [51] are popular datasets for graph anomaly detection;
[52] surveys more than $30$ datasets and recommends to use the newer _CIC-
IDS2018_ and _CIC-DDoS2019_ datasets [53, 54]. Dataset details are discussed
in Appendix B. Hyperparameters for the baselines are provided in Appendix H.
Appendix F describes the experimental setup and results with some additional
parameters are shown in Appendix G. All edge (or graph)-based methods output
an anomaly score per edge (or graph), a higher score implying more
anomalousness. Similar to baseline papers, we report the Area under the ROC
curve (AUC) and the running time. Unless explicitly specified, all experiments
including those on the baselines are repeated $5$ times and the mean is
reported. We aim to answer the following questions:
1. Q1.
Edge Anomalies: How accurately do AnoEdge-G and AnoEdge-L detect edge
anomalies compared to baselines? Are they fast and scalable?
2. Q2.
Graph Anomalies: How accurately do AnoGraph and AnoGraph-K detect graph
anomalies i.e. anomalous graph snapshots? Are they fast and scalable?
### 7.1 Edge Anomalies
#### Accuracy:
Table 3 shows the AUC of edge anomaly detection baselines, AnoEdge-G, and
AnoEdge-L. We report a single value for DenseStream and PENminer because these
are non-randomized methods. PENminer is unable to finish on the large _CIC-
DDoS2019_ within 24 hours; thus, that result is not reported. SedanSpot uses
personalised PageRank to detect anomalies and is not always able to detect
anomalous edges occurring in dense block patterns while PENminer is unable to
detect structural anomalies. Among the baselines, MIDAS-R is most accurate,
however, it performs worse when there is a large number of timestamps as in
_ISCX-IDS2012_. Note that AnoEdge-G and AnoEdge-L outperform all baselines on
all datasets.
Table 3: AUC and Running Time when detecting edge anomalies. Averaged over $5$
runs.
Dataset | DenseStream | SedanSpot | MIDAS-R | PENminer | F-FADE | AnoEdge-G | AnoEdge-L
---|---|---|---|---|---|---|---
DARPA | $0.532$ | $0.647\pm 0.006$ | $0.953\pm 0.002$ | 0.872 | $0.919\pm 0.005$ | $\bf 0.970\pm 0.001$ | $0.964\pm 0.001$
| 57.7s | 129.1s | 1.4s | 5.21 hrs | 317.8s | 28.7s | 6.1s
ISCX-IDS2012 | $0.551$ | $0.581\pm 0.001$ | $0.820\pm 0.050$ | 0.530 | $0.533\pm 0.020$ | $0.954\pm 0.000$ | $\bf 0.957\pm 0.003$
| 138.6s | 19.5s | 5.3s | 1.3 hrs | 137.4s | 7.8s | 0.7s
CIC-IDS2018 | $0.756$ | $0.325\pm 0.037$ | $0.919\pm 0.019$ | 0.821 | $0.607\pm 0.001$ | $\bf 0.963\pm 0.014$ | $0.927\pm 0.035$
| 3.3 hours | 209.6s | 1.1s | 10 hrs | 279.7s | 58.4s | 10.2s
CIC-DDoS2019 | $0.263$ | $0.567\pm 0.004$ | $0.983\pm 0.003$ | — | $0.717\pm 0.041$ | $0.997\pm 0.001$ | $\bf 0.998\pm 0.001$
| 265.6s | 697.6s | 2.2s | > 24 hrs | 18.7s | 123.3s | 17.8s
#### Running Time:
Table 3 shows the running time (excluding I/O) and real-time performance of
AnoEdge-G and AnoEdge-L. Since AnoEdge-L maintains a local dense submatrix, it
is faster than AnoEdge-G. DenseStream maintains dense blocks incrementally for
every coming tuple and updates dense subtensors when it meets an updating
condition, limiting the detection speed. SedanSpot requires several
subprocesses (hashing, random-walking, reordering, sampling, etc), PENminer
and F-FADE need to actively extract patterns for every graph update, resulting
in the large computation time. When there is a large number of timestamps like
in _ISCX-IDS2012_ , MIDAS-R runs more slowly than AnoEdge-L, which is the fastest.
#### AUC vs Running Time:
Figure 3(a) plots accuracy (AUC) vs. running time (log scale, in seconds,
excluding I/O) on the _ISCX-IDS2012_ dataset. AnoEdge-G and AnoEdge-L achieve
much higher accuracy compared to all baselines, while also running
significantly fast.
Figure 3: On _ISCX-IDS2012_ , (a) AUC vs running time when detecting edge
anomalies. (b) Linear scalability with number of hash functions. (c) Linear
scalability with number of edges.
#### Scalability:
Figures 3(b) and 3(c) plot the running time with increasing number of hash
functions and edges respectively, on the _ISCX-IDS2012_ dataset. This
demonstrates the scalability of AnoEdge-G and AnoEdge-L.
### 7.2 Graph Anomalies
#### Accuracy:
Table 4 shows the AUC of graph anomaly detection baselines, AnoGraph, and
AnoGraph-K. We report a single value for DenseAlert and AnomRank because these
are non-randomized methods. AnomRank is not designed for a streaming scenario, which explains its low AUC. DenseAlert can estimate only one subtensor at a time
and SpotLight uses a randomized approach without any actual search for dense
subgraphs. Note that AnoGraph and AnoGraph-K outperform all baselines on all
datasets while using a simple sketch data structure to incorporate dense
subgraph search as opposed to the baselines. We provide results with
additional set of parameters in Table 6, Appendix G.
Table 4: AUC and Running Time when detecting graph anomalies. Averaged over
$5$ runs.
Dataset | DenseAlert | SpotLight | AnomRank | AnoGraph | AnoGraph-K
---|---|---|---|---|---
DARPA | $0.833$ | $0.728\pm 0.016$ | $0.754$ | $0.835\pm 0.002$ | $\bf 0.839\pm 0.002$
| 49.3s | 88.5s | 3.7s | 0.3s | 0.3s
ISCX-IDS2012 | $0.906$ | $0.872\pm 0.019$ | $0.194$ | $\bf 0.950\pm 0.001$ | $\bf 0.950\pm 0.001$
| 6.4s | 21.1s | 5.2s | 0.5s | 0.5s
CIC-IDS2018 | $0.950$ | $0.835\pm 0.022$ | $0.783$ | $\bf 0.957\pm 0.000$ | $\bf 0.957\pm 0.000$
| 67.9s | 149.0s | 7.0s | 0.2s | 0.3s
CIC-DDoS2019 | $0.764$ | $0.468\pm 0.048$ | $0.241$ | $0.946\pm 0.002$ | $\bf 0.948\pm 0.002$
| 1065.0s | 289.7s | 0.2s | 0.4s | 0.4s
#### Running Time:
Table 4 shows the running time (excluding I/O). DenseAlert has
$O(|\mathscr{E}|)$ worst-case time complexity (per incoming edge). AnomRank
needs to compute a global PageRank, which does not scale for stream
processing. Note that AnoGraph and AnoGraph-K run much faster than all
baselines.
#### AUC vs Running Time:
Figure 4 (a) plots accuracy (AUC) vs. running time (log scale, in seconds,
excluding I/O) on the _CIC-DDoS2019_ dataset. AnoGraph and AnoGraph-K achieve
much higher accuracy compared to the baselines, while also running
significantly faster.
Figure 4: On _CIC-DDoS2019_ , (a) AUC vs running time when detecting graph
anomalies. (b) AnoGraph-K scales linearly with factor $K$. (c) Linear
scalability with number of hash functions. (d) Linear scalability with number
of edges.
#### Scalability:
Figures 4(b), 4(c), and 4(d) plot the running time with increasing factor $K$
(used for top-$K$ in Algorithm 4), number of hash functions and number of
edges respectively, on the _CIC-DDoS2019_ dataset. This demonstrates the
scalability of AnoGraph and AnoGraph-K.
## 8 Conclusion
In this paper, we extend the CMS data structure to a higher-order sketch to
capture complex relations in graph data and to reduce the problem of detecting
suspicious dense subgraphs to finding a dense submatrix in constant time. We
then propose four sketch-based streaming methods to detect edge and subgraph
anomalies in constant time and memory. Furthermore, our approach is the first
streaming work that incorporates dense subgraph search to detect graph
anomalies in constant memory and time. We also provide a theoretical guarantee
on the submatrix density measure and prove the time and space complexities of
all methods. Experimental results on four real-world datasets demonstrate our
effectiveness as opposed to popular state-of-the-art streaming edge and graph
baselines. Future work could consider incorporating node and edge
representations, and more general types of data, including tensors.
## References
* Shin et al. [2017] Kijung Shin, Bryan Hooi, Jisu Kim, and Christos Faloutsos. Densealert: Incremental dense-subtensor detection in tensor streams. _KDD_ , 2017.
* Eswaran et al. [2018] Dhivya Eswaran, Christos Faloutsos, Sudipto Guha, and Nina Mishra. Spotlight: Detecting anomalies in streaming graphs. In _KDD_ , 2018.
* Bhatia et al. [2020] Siddharth Bhatia, Bryan Hooi, Minji Yoon, Kijung Shin, and Christos Faloutsos. Midas: Microcluster-based detector of anomalies in edge streams. In _AAAI_ , 2020.
* Akoglu et al. [2010] Leman Akoglu, Mary McGlohon, and Christos Faloutsos. Oddball: Spotting anomalies in weighted graphs. In _PAKDD_ , 2010.
* Chakrabarti [2004] Deepayan Chakrabarti. Autopart: Parameter-free graph partitioning and outlier detection. In _PKDD_ , 2004.
* Hooi et al. [2017] Bryan Hooi, Kijung Shin, Hyun Ah Song, Alex Beutel, Neil Shah, and Christos Faloutsos. Graph-based fraud detection in the face of camouflage. _TKDD_ , 2017.
* Jiang et al. [2016] Meng Jiang, Peng Cui, Alex Beutel, Christos Faloutsos, and Shiqiang Yang. Catching synchronized behaviors in large networks: A graph mining approach. _TKDD_ , 2016.
* Kleinberg [1999] Jon M Kleinberg. Authoritative sources in a hyperlinked environment. _JACM_ , 1999.
* Shin et al. [2018] Kijung Shin, Tina Eliassi-Rad, and Christos Faloutsos. Patterns and anomalies in k-cores of real-world graphs with applications. _KAIS_ , 2018.
* Tong and Lin [2011] Hanghang Tong and Ching-Yung Lin. Non-negative residual matrix factorization with application to graph anomaly detection. In _SDM_ , 2011.
* Eswaran and Faloutsos [2018] Dhivya Eswaran and Christos Faloutsos. Sedanspot: Detecting anomalies in edge streams. In _ICDM_ , 2018.
* Belth et al. [2020] C Belth, X Zheng, and D Koutra. Mining persistent activity in continually evolving networks. In _KDD_ , 2020.
* Chang et al. [2021] Yen-Yu Chang, Pan Li, Rok Sosic, MH Afifi, Marco Schweighauser, and Jure Leskovec. F-fade: Frequency factorization for anomaly detection in edge streams. In _WSDM_ , 2021.
* Yoon et al. [2019] Minji Yoon, Bryan Hooi, Kijung Shin, and Christos Faloutsos. Fast and accurate anomaly detection in dynamic graphs with a two-pronged approach. In _KDD_ , 2019.
* Cormode and Muthukrishnan [2005] Graham Cormode and Shan Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. _Journal of Algorithms_ , 2005.
* Zhang et al. [2019] Jiabao Zhang, Shenghua Liu, Wenjian Yu, Wenjie Feng, and Xueqi Cheng. Eigenpulse: Detecting surges in large streaming graphs with row augmentation. In _PAKDD_ , 2019.
* Bogdanov et al. [2013] Petko Bogdanov, Christos Faloutsos, Misael Mongiovì, Evangelos E Papalexakis, Razvan Ranca, and Ambuj K Singh. Netspot: Spotting significant anomalous regions on dynamic networks. In _SDM_ , 2013.
* Shah et al. [2016] Neil Shah, Alex Beutel, Bryan Hooi, Leman Akoglu, Stephan Gunnemann, Disha Makhija, Mohit Kumar, and Christos Faloutsos. Edgecentric: Anomaly detection in edge-attributed networks. In _ICDMW_ , 2016.
* Perozzi and Akoglu [2018] Bryan Perozzi and Leman Akoglu. Discovering communities and anomalies in attributed graphs: Interactive visual exploration and summarization. _TKDD_ , 2018.
* Bonchi et al. [2019] Francesco Bonchi, Ilaria Bordino, Francesco Gullo, and Giovanni Stilo. The importance of unexpectedness: Discovering buzzing stories in anomalous temporal graphs. _Web Intelligence_ , 2019.
* Bonchi et al. [2016] Francesco Bonchi, Ilaria Bordino, Francesco Gullo, and Giovanni Stilo. Identifying buzzing stories via anomalous temporal subgraph discovery. In _WI_ , 2016.
* Bojchevski and Günnemann [2018] Aleksandar Bojchevski and Stephan Günnemann. Bayesian robust attributed graph clustering: Joint learning of partial anomalies and group structure. In _AAAI_ , 2018.
* Yu et al. [2018] Wenchao Yu, Wei Cheng, C Aggarwal, K Zhang, H Chen, and Wei Wang. Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks. _KDD_ , 2018.
* Na et al. [2018] Gyoung S Na, Donghyun Kim, and Hwanjo Yu. Dilof: Effective and memory efficient local outlier detection in data streams. In _KDD_ , 2018.
* Manzoor et al. [2018] Emaad A Manzoor, Hemank Lamba, and Leman Akoglu. xstream: Outlier detection in feature-evolving data streams. In _KDD_ , 2018.
* Tan et al. [2011] Swee Chuan Tan, Kai Ming Ting, and Tony Fei Liu. Fast anomaly detection for streaming data. In _IJCAI_ , 2011.
* Jankov et al. [2017] Dimitrije Jankov, Sourav Sikdar, Rohan Mukherjee, Kia Teymourian, and Chris Jermaine. Real-time high performance anomaly detection over data streams: Grand challenge. _DEBS_ , 2017.
* Zou et al. [2017] Shaofeng Zou, Yingbin Liang, H Vincent Poor, and Xinghua Shi. Nonparametric detection of anomalous data streams. _IEEE Transactions on Signal Processing_ , 2017.
* Moshtaghi et al. [2015] Masud Moshtaghi, James C Bezdek, Christopher Leckie, Shanika Karunasekera, and Marimuthu Palaniswami. Evolving fuzzy rules for anomaly detection in data streams. _IEEE Transactions on Fuzzy Systems_ , 2015.
* Siffer et al. [2017] Alban Siffer, Pierre-Alain Fouque, Alexandre Termier, Christine Largouet, and C Largouët. Anomaly detection in streams with extreme value theory. _KDD_ , 2017.
* Togbe et al. [2020] Maurras Ulbricht Togbe, Mariam Barry, Aliou Boly, Yousra Chabchoub, Raja Chiky, Jacob Montiel, and Vinh-Thuy Tran. Anomaly detection for data streams based on isolation forest using scikit-multiflow. In _ICCSA_ , 2020.
* Zhang et al. [2021] Jiabao Zhang, Shenghua Liu, Wenting Hou, Siddharth Bhatia, Hua-Wei Shen, Wenjian Yu, and Xueqi Cheng. Augsplicing: Synchronized behavior detection in streaming tensors. _AAAI_ , 2021.
* Pan et al. [2013] Shirui Pan, Xingquan Zhu, Chengqi Zhang, and S Yu Philip. Graph stream classification using labeled and unlabeled graphs. In _ICDE_ , 2013.
* Wang et al. [2008] Wei Wang, Xiaohong Guan, and Xiangliang Zhang. Processing of massive audit data streams for real-time anomaly intrusion detection. _Computer communications_ , 2008.
* Menon et al. [2007] Aditya Krishna Menon, Gia Vinh Anh Pham, Sanjay Chawla, and Anastasios Viglas. An incremental data-stream sketch using sparse random projections. In _SDM_ , 2007.
* Zhao et al. [2011] Peixiang Zhao, Charu C Aggarwal, and Min Wang. gsketch: On query estimation in graph streams. _VLDB_ , 2011.
* Shi and Anandkumar [2020] Yang Shi and Animashree Anandkumar. Higher-order count sketch: Dimensionality reduction that retains efficient tensor operations. _DCC_ , 2020.
* Chalapathy and Chawla [2019] Raghavendra Chalapathy and Sanjay Chawla. Deep learning for anomaly detection: A survey. _ArXiv_ , abs/1901.03407, 2019.
* Pang et al. [2020] Guansong Pang, Chunhua Shen, Longbing Cao, and Anton van den Hengel. Deep learning for anomaly detection: A review. _arXiv preprint arXiv:2007.02500_ , 2020.
* Akoglu et al. [2015] Leman Akoglu, Hanghang Tong, and Danai Koutra. Graph based anomaly detection and description: A survey. _Data mining and knowledge discovery_ , 2015.
* Yu et al. [2013] Weiren Yu, Charu C Aggarwal, Shuai Ma, and Haixun Wang. On anomalous hotspot discovery in graph streams. In _ICDM_ , 2013.
* Ranshous et al. [2016] Stephen Ranshous, Steve Harenberg, Kshitij Sharma, and Nagiza F Samatova. A scalable approach for outlier detection in edge streams using sketch-based approximations. In _SDM_ , 2016.
* Sricharan and Das [2014] Kumar Sricharan and Kamalika Das. Localizing anomalous changes in time-evolving graphs. In _SIGMOD_ , 2014.
* Sun et al. [2006] Jimeng Sun, Dacheng Tao, and Christos Faloutsos. Beyond streams and graphs: dynamic tensor analysis. In _KDD_ , 2006.
* Beutel et al. [2013] Alex Beutel, Wanhong Xu, Venkatesan Guruswami, Christopher Palow, and Christos Faloutsos. Copycatch: stopping group attacks by spotting lockstep behavior in social networks. In _WWW_ , 2013.
* Abdelhamid et al. [2017] Ehab Abdelhamid, Mustafa Canim, M. Sadoghi, B. Bhattacharjee, Yuan-Chi Chang, and Panos Kalnis. Incremental frequent subgraph mining on large evolving graphs. _TKDE_ , 2017.
* Page et al. [1999] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking : Bringing order to the web. In _WWW_ , 1999.
* Mcgregor [2014] Andrew Mcgregor. Graph stream algorithms: a survey. _SIGMOD Record_ , 2014.
* Khuller and Saha [2009] Samir Khuller and Barna Saha. On finding dense subgraphs. In _ICALP_ , 2009.
* Lippmann et al. [1999] Richard Lippmann, Robert K Cunningham, David J Fried, Isaac Graf, Kris R Kendall, Seth E Webster, and Marc A Zissman. Results of the darpa 1998 offline intrusion detection evaluation. In _Recent advances in intrusion detection_ , 1999.
* Shiravi et al. [2012] Ali Shiravi, Hadi Shiravi, Mahbod Tavallaee, and Ali A Ghorbani. Toward developing a systematic approach to generate benchmark datasets for intrusion detection. _computers & security_, 2012.
* Ring et al. [2019] Markus Ring, Sarah Wunderlich, Deniz Scheuring, Dieter Landes, and Andreas Hotho. A survey of network-based intrusion detection data sets. _Computers & Security_, 2019.
* Sharafaldin et al. [2018] Iman Sharafaldin, Arash Habibi Lashkari, and Ali A Ghorbani. Toward generating a new intrusion detection dataset and intrusion traffic characterization. In _ICISSP_ , 2018.
* Sharafaldin et al. [2019] Iman Sharafaldin, Arash Habibi Lashkari, Saqib Hakak, and Ali A Ghorbani. Developing realistic distributed denial of service (ddos) attack dataset and taxonomy. In _ICCST_ , 2019.
* Forest [2021] Random Cut Forest. https://github.com/aws/random-cut-forest-by-aws, 2021.
* Carter and Wegman [1979] J Lawrence Carter and Mark N Wegman. Universal classes of hash functions. _Journal of computer and system sciences_ , 1979.
## Appendix
## Appendix A H-CMS
Algorithm 5 shows the H-CMS operations.
1
1 Procedure _INITIALIZE H-CMS(_$n_{r}$ , $n_{b}$_)_
2 for _$r\leftarrow 1$ … $n_{r}$_ do
$h_{r}:\mathcal{V}\rightarrow[0,n_{b})$
// hash vertex
3 $\mathcal{M}_{r}\leftarrow[0]_{n_{b}\times n_{b}}$
4
1 Procedure _RESET H-CMS(_$n_{r}$ , $n_{b}$_)_
2 for _$r\leftarrow 1$ … $n_{r}$_ do
$\mathcal{M}_{r}\leftarrow[0]_{n_{b}\times n_{b}}$
// reset to zero matrix
3
4
1 Procedure _UPDATE H-CMS(_$u,v,w$_)_
2 for _$r\leftarrow 1$ … $n_{r}$_ do
3
$\mathcal{M}_{r}[h_{r}(u)][h_{r}(v)]\leftarrow\mathcal{M}_{r}[h_{r}(u)][h_{r}(v)]+w$
4
1 Procedure _DECAY H-CMS(_$\delta$_)_
2 for _$r\leftarrow 1$ … $n_{r}$_ do
$\mathcal{M}_{r}\leftarrow\delta*\mathcal{M}_{r}$
// decay factor: $\delta$
3
4
5
Algorithm 5 H-CMS Operations
## Appendix B Datasets
Table 5 shows the statistical summary of the datasets. $|E|$ corresponds to
the total number of edge records, $|V|$ and $|T|$ are the number of unique
nodes and unique timestamps, respectively.
Table 5: Statistics of the datasets. Dataset | $|V|$ | $|E|$ | $|T|$ | Edge Anomalies | Graph Anomalies
---|---|---|---|---|---
DARPA | 25,525 | 4,554,344 | 46,567 | $60.1\%$ | $26.5\%$
ISCX-IDS2012 | 30,917 | 1,097,070 | 165,043 | $4.23\%$ | $3.38\%$
CIC-IDS2018 | 33,176 | 7,948,748 | 38,478 | $7.26\%$ | $11.0\%$
CIC-DDoS2019 | 1,290 | 20,364,525 | 12,224 | $99.7\%$ | $51.4\%$
_DARPA_ [50] and _ISCX-IDS2012_ [51] are popular anomaly detection datasets
used by baseline papers to evaluate their algorithms. [52] surveys more than
$30$ datasets and recommends to use the newer _CIC-IDS2018_ [53] and _CIC-
DDoS2019_ [54] containing modern attack scenarios.
## Appendix C Higher-Order Sketch Proof
###### Theorem 1.
H-CMS has the same estimate guarantees as the original CMS.
###### Proof.
Consider a 3-dimensional H-CMS, with depth $n_{r}$, where an entity $a\in[1,N]$ is mapped to index $(i,j)$ with two independent hash functions $h^{\prime}:[1,N]\rightarrow[0,n_{b})$ and $h^{\prime\prime}:[1,N]\rightarrow[0,n_{b})$, i.e., $i=h^{\prime}(a)$ and $j=h^{\prime\prime}(a)$. Without loss of generality, the 3-dimensional H-CMS can be converted to a CMS data structure by combining $h^{\prime}$ and $h^{\prime\prime}$ in the following way: $h(a)=n_{b}*h^{\prime}(a)+h^{\prime\prime}(a)$, i.e., $h(a)=n_{b}*i+j$, where $n_{b}*i\in\\{0,n_{b},\ldots,n_{b}^{2}-n_{b}\\}$ and $j\in[0,n_{b})$. Hence, $h(a)\in[0,n_{b}^{2})$, and $h:[1,N]\rightarrow[0,n_{b}^{2})$ can serve as a hash function for a CMS data structure with width $n_{b}^{2}$ and depth $n_{r}$. Therefore, the CMS estimate guarantee holds for a 3-dimensional H-CMS data structure. A higher dimensional H-CMS can be reduced to a CMS data structure in a similar manner.
∎
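The index flattening used in the proof can be checked mechanically; a tiny illustrative script (bucket count chosen arbitrarily):

```python
n_b = 32

def flatten(i: int, j: int) -> int:
    # h(a) = n_b * h'(a) + h''(a): a bijection [0, n_b) x [0, n_b) -> [0, n_b^2).
    return n_b * i + j

codes = {flatten(i, j) for i in range(n_b) for j in range(n_b)}
assert codes == set(range(n_b * n_b))   # every index pair maps to a distinct CMS bucket
```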
## Appendix D Edge Anomalies Proofs
### D.1 Proofs: AnoEdge-G
###### Proposition 1.
Time complexity of Algorithm 1 is $O(|\mathscr{E}|*n_{r}*n_{b}^{2})$.
###### Proof.
Procedure Edge-Submatrix-Density adds rows (or columns) to the submatrix iteratively, and the total number of rows and columns that can be added is $n_{b}+n_{b}-2$. In each iteration, the approach performs the following three operations: (a) pick the row with maximum row-sum; (b) pick the column with maximum column-sum; (c) calculate density. We keep $n_{b}$-sized arrays for flagging rows (or columns) already included in the submatrix, and for maintaining row-sums (or column-sums). Operations (a) and (b) take at most $n_{b}$ steps to pick and flag the row with maximum row-sum (or the column with maximum column-sum). Updating the column-sums (or row-sums) based on the picked row (or column) again takes at most $n_{b}$ steps. Time complexity of (a) and (b) is therefore $O(n_{b})$. Density is calculated directly from the previous value by adding the newly included row-sum (or column-sum) and incrementing the row-count (or column-count). Row-count and column-count are kept as separate variables. Therefore, the time complexity of the density calculation step is $O(1)$. Total time complexity of procedure Edge-Submatrix-Density is $O((n_{b}+n_{b}-2)*(n_{b}+n_{b}+1))=O(n_{b}^{2})$.
Time complexity to initialize and decay the H-CMS data structure is
$O(n_{r}*n_{b}^{2})$. Temporal decay operation is applied whenever the
timestamp changes, and not for every received edge. Update counts operation
updates a matrix element value ($O(1)$ operation) for $n_{r}$ matrices, and
the time complexity of this step is $O(n_{r})$. Anomaly score for each edge is
based on the submatrix density computation procedure which is $O(n_{b}^{2})$;
the time complexity of $n_{r}$ matrices becomes $O(n_{r}*n_{b}^{2})$.
Therefore, the total time complexity of Algorithm 1 is
$O(|\mathscr{E}|*(n_{r}+n_{r}*n_{b}^{2}))=O(|\mathscr{E}|*n_{r}*n_{b}^{2})$. ∎
###### Proposition 2.
Memory complexity of Algorithm 1 is $O(n_{r}*n_{b}^{2})$.
###### Proof.
For procedure Edge-Submatrix-Density, we keep an $n_{b}$-sized arrays to flag
rows and columns that are part of the current submatrix, and to maintain row-
sums and column-sums. Total memory complexity of Edge-Submatrix-Density
procedure is $O(4*n_{b})=O(n_{b})$.
Memory complexity of H-CMS data structure is $O(n_{r}*n_{b}^{2})$. Dense
submatrix search and density computation procedure require $O(n_{b})$ memory.
For $n_{r}$ matrices, this becomes $O(n_{r}*n_{b})$. Therefore, the total
memory complexity of Algorithm 1 is
$O(n_{r}*n_{b}^{2}+n_{r}*n_{b})=O(n_{r}*n_{b}^{2})$. ∎
### D.2 Proofs: AnoEdge-L
###### Proposition 3.
Time complexity of Algorithm 2 is
$O(n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}*n_{b})$.
###### Proof.
As shown in Proposition 1, the time complexity of H-CMS is
$O(n_{r}*n_{b}^{2})$ and update operation is $O(n_{r})$. Current submatrix
$(S_{cur},T_{cur})$ is updated based on _expand_ and _condense_ submatrix
operations. (a) We keep an $n_{b}$-sized array to flag the current submatrix
rows (or column), and also to maintain row-sums (or column-sums). Expand
submatrix operation depends on the elements from row $h(u)$ and column $h(v)$,
and the density is calculated by considering these elements, thus requiring
maximum $n_{b}$ steps. Upon addition of the row (or column), the dependent
column-sums (or row-sums) are also updated taking maximum $n_{b}$ steps. Time
complexity of expand operation is therefore $O(n_{b})$. (b) Condense submatrix
operation removes rows and columns iteratively. A row (or column) elimination
is performed by selecting the row (or column) with minimum row-sum (or column-
sum) in $O(n_{b})$ time. The removed row (or column) affects the dependent column-sums (or row-sums), which are updated in $O(n_{b})$ time. Time complexity of a row (or column) removal is therefore $O(n_{b})$. The condense operation removes rows (or columns) that were once added by the expand operation, which over the whole stream amounts to at most $O(|\mathscr{E}|)$ removals in the worst case.
Expand and condense submatrix operations are performed for $n_{r}$ matrices.
Likelihood score calculation depends on elements from row $h(u)$ and column
$h(v)$, and takes $O(n_{r}*n_{b})$ time for $n_{r}$ matrices. Therefore, the
total time complexity of Algorithm 2 is
$O(n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}+|\mathscr{E}|*n_{r}*n_{b}+|\mathscr{E}|*n_{r}*n_{b}+|\mathscr{E}|*n_{r}*n_{b})=O(n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}*n_{b})$.
∎
###### Proposition 4.
Memory complexity of Algorithm 2 is $O(n_{r}*n_{b}^{2})$.
###### Proof.
Memory complexity of the H-CMS data structure is $O(n_{r}*n_{b}^{2})$. To keep
current submatrix information, we utilize $n_{b}$-sized arrays similar to
Proposition 2. For $n_{r}$ matrices, submatrix information requires
$O(n_{r}*n_{b})$ memory. Hence, total memory complexity of Algorithm 2 is
$O(n_{r}*n_{b}^{2}+n_{r}*n_{b})=O(n_{r}*n_{b}^{2})$. ∎
## Appendix E Graph Anomalies-Proofs
### E.1 Proofs: AnoGraph
###### Lemma 1.
Let $S^{*}$ and $T^{*}$ be the optimum densest sub-matrix solution of
$\mathcal{M}$ with density $\mathcal{D}(\mathcal{M},S^{*},T^{*})=d_{opt}$.
Then $\forall u\in S^{*}$ and $\forall v\in T^{*}$,
$\mathcal{R}(\mathcal{M},u,T^{*})\geq\tau_{S^{*}};\quad\mathcal{C}(\mathcal{M},S^{*},v)\geq\tau_{T^{*}}$
(3)
where: $\tau_{S^{*}}$ =
$\mathcal{E}(\mathcal{M},S^{*},T^{*})\left(1-\sqrt{1-\frac{1}{|S^{*}|}}\right)$,
$\tau_{T^{*}}$ =
$\mathcal{E}(\mathcal{M},S^{*},T^{*})\left(1-\sqrt{1-\frac{1}{|T^{*}|}}\right)$
###### Proof.
Leveraging the proof from [49], assume that $\exists u\in S^{*}$ with $\mathcal{R}(\mathcal{M},u,T^{*})<\tau_{S^{*}}$. The density of the submatrix after removing $u$ is $\frac{\mathcal{E}(\mathcal{M},S^{*},T^{*})-\mathcal{R}(\mathcal{M},u,T^{*})}{\sqrt{(|S^{*}|-1)|T^{*}|}}$, which is greater than $\frac{\mathcal{E}(\mathcal{M},S^{*},T^{*})-\tau_{S^{*}}}{\sqrt{(|S^{*}|-1)|T^{*}|}}=d_{opt}$, contradicting the optimality of $(S^{*},T^{*})$. Hence, $\mathcal{R}(\mathcal{M},u,T^{*})\geq\tau_{S^{*}}$. The inequality $\mathcal{C}(\mathcal{M},S^{*},v)\geq\tau_{T^{*}}$ can be proved in a similar manner. ∎
###### Theorem 2.
Algorithm 3 achieves a 2-approximation guarantee for the densest submatrix
problem.
###### Proof.
Leveraging the proof from [49], we greedily remove the row (or column) with
minimum row-sum (or column-sum). At some iteration of the greedy process,
$\;\forall u\in S_{cur};\forall v\in T_{cur}$,
$\;\mathcal{R}(\mathcal{M},u,T_{cur})\geq\tau_{S^{*}}$ and
$\mathcal{C}(\mathcal{M},S_{cur},v)\geq\tau_{T^{*}}$. Therefore,
$\mathcal{E}(\mathcal{M},S_{cur},T_{cur})\geq|S_{cur}|\tau_{S^{*}}$ and
$\mathcal{E}(\mathcal{M},S_{cur},T_{cur})\geq|T_{cur}|\tau_{T^{*}}$. This
implies that the density
$\mathcal{D}(\mathcal{M},S_{cur},T_{cur})\geq\sqrt{\frac{|S_{cur}|\tau_{S^{*}}|T_{cur}|\tau_{T^{*}}}{|S_{cur}||T_{cur}|}}=\sqrt{\tau_{S^{*}}\tau_{T^{*}}}$.
Putting values of $\tau_{S^{*}}$ and $\tau_{T^{*}}$ from Lemma 1, and setting
$|S^{*}|=\frac{1}{\sin^{2}\alpha}$, $|T^{*}|=\frac{1}{\sin^{2}\beta}$, we get
$\mathcal{D}(\mathcal{M},S_{cur},T_{cur})\geq\frac{\mathcal{E}(\mathcal{M,S^{*},T^{*}})}{\sqrt{|S^{*}||T^{*}|}}\frac{\sqrt{(1-\cos\alpha)(1-\cos\beta)}}{\sin\alpha\sin\beta}\geq\frac{d_{opt}}{2\cos\frac{\alpha}{2}\cos\frac{\beta}{2}}\geq\frac{d_{opt}}{2}$.
∎
###### Proposition 5.
Time complexity of Algorithm 3 is
$O(|\mathscr{G}|*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$.
###### Proof.
Procedure AnoGraph-Density iteratively removes row (or column) with minimum
row-sum (or column-sum). Maximum number of rows and columns that can be
removed is $n_{b}+n_{b}-2$. We keep $n_{b}$-sized arrays to store the current
submatrix rows and columns, and row-sums and column-sums. At each iteration,
selecting the row (or column) with minimum row-sum (or column-sum) takes
$O(n_{b})$ time, and updating the dependent row-sums (or column-sums) also
$O(n_{b})$ time. Density is calculated in $O(n_{b})$ time based on the current
submatrix row-sum and column-sum. Each iteration takes
$O(n_{b}+n_{b}+n_{b})=O(n_{b})$ time. Hence, the total time complexity of
AnoGraph-Density procedure is $O((n_{b}+n_{b}-2)*n_{b})=O(n_{b}^{2})$.
Initializing the H-CMS data structure takes $O(n_{r}*n_{b}^{2})$ time. When a
graph arrives, AnoGraph: (a) resets counts that take $O(n_{r}*n_{b}^{2})$
time; (b) updates counts taking $O(1)$ time for every edge update; (c)
computes submatrix density that follows from procedure AnoGraph-Density and
takes $O(n_{b}^{2})$ time. Each of these operations is applied for $n_{r}$
matrices. Therefore, the total time complexity of Algorithm 3 is
$O(n_{r}*n_{b}^{2}+|\mathscr{G}|*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}+|\mathscr{G}|*n_{r}*n_{b}^{2})=O(|\mathscr{G}|*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$,
where $|\mathscr{E}|$ is the total number of edges over graphs $\mathscr{G}$.
∎
###### Proposition 6.
Memory complexity of Algorithm 3 is $O(n_{r}*n_{b}^{2})$.
###### Proof.
For procedure AnoGraph-Density, we keep $n_{b}$-sized array to flag rows and
columns that are part of the current submatrix, and to maintain row-sums and
column-sums. Hence, memory complexity of AnoGraph-Density procedure is
$O(4*n_{b})=O(n_{b})$.
H-CMS data structure requires $O(n_{r}*n_{b}^{2})$ memory. Density computation
relies on AnoGraph-Density procedure, and takes $O(n_{b})$ memory. Therefore,
the total memory complexity of Algorithm 3 is $O(n_{r}*n_{b}^{2})$. ∎
### E.2 Proofs: AnoGraph-K
###### Proposition 7.
Time complexity of Algorithm 4 is
$O(|\mathscr{G}|*K*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$.
###### Proof.
Relevant operations in Procedure AnoGraph-K-Density directly follow from Edge-
Submatrix-Density procedure, which has $O(n_{b}^{2})$ time complexity. Edge-
Submatrix-Density procedure is called $K$ times, therefore, the total time
complexity of AnoGraph-K-Density procedure is $O(K*n_{b}^{2})$.
For Algorithm 4, we initialize an H-CMS data structure that takes
$O(n_{r}*n_{b}^{2})$ time. When a graph arrives, AnoGraph-K: (a) resets counts
that take $O(n_{r}*n_{b}^{2})$ time; (b) updates counts taking $O(1)$ time for
every edge update; (c) computes submatrix density that follows from procedure
AnoGraph-K-Density and takes $O(K*n_{b}^{2})$ time. Each of these operations
is applied for $n_{r}$ matrices. Therefore, the total time complexity of
Algorithm 4 is
$O(n_{r}*n_{b}^{2}+|\mathscr{G}|*K*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r}+|\mathscr{G}|*n_{r}*n_{b}^{2})=O(|\mathscr{G}|*K*n_{r}*n_{b}^{2}+|\mathscr{E}|*n_{r})$,
where $|\mathscr{E}|$ is the total number of edges over graphs $\mathscr{G}$.
∎
###### Proposition 8.
Memory complexity of Algorithm 4 is $O(n_{r}*n_{b}^{2})$.
###### Proof.
The density of the $K$ submatrices is computed independently, so the memory complexity of the AnoGraph-K-Density procedure is the same as that of the Edge-Submatrix-Density procedure, i.e., $O(n_{b})$.
Maintaining the H-CMS data structure requires $O(n_{r}*n_{b}^{2})$ memory.
Density computation relies on AnoGraph-K-Density procedure, and it requires
$O(n_{b})$ memory. Therefore, the total memory complexity of Algorithm 4 is
$O(n_{r}*n_{b}^{2})$. ∎
## Appendix F Experimental Setup
All experiments are carried out on a $2.4GHz$ Intel Core $i9$ processor,
$32GB$ RAM, running OS $X$ $10.15.3$. For our approach, we keep $n_{r}=2$ and
temporal decay factor $\delta=0.9$. We set $n_{b}=32$ to have a fair comparison with MIDAS, which uses ${n_{b}}^{2}=1024$ buckets. We keep $K=5$ for Algorithm 4.
AUC for graph anomalies is shown with edge thresholds as $50$ for _DARPA_ and
$100$ for other datasets. Time window is taken as $30$ minutes for _DARPA_ and
$60$ minutes for other datasets.
## Appendix G Additional Results
Table 6 shows the performance of AnoGraph and AnoGraph-K for different time
windows and edge thresholds. The edge threshold is varied in such a way that a
sufficient number of anomalies are present within the time window. AnoGraph
and AnoGraph-K have performance similar to that in Table 4.
Table 6: AUC when detecting graph anomalies. Dataset | Time | Edge | AnoGraph | AnoGraph-K
---|---|---|---|---
| Window | Threshold | |
DARPA | $15$ | $25$ | $0.835\pm 0.001$ | $0.838\pm 0.001$
| $30$ | $50$ | $0.835\pm 0.002$ | $0.839\pm 0.002$
| $60$ | $50$ | $0.747\pm 0.002$ | $0.748\pm 0.001$
| $60$ | $100$ | $0.823\pm 0.000$ | $0.825\pm 0.001$
ISCX-IDS2012 | $15$ | $25$ | $0.945\pm 0.001$ | $0.945\pm 0.000$
| $30$ | $50$ | $0.949\pm 0.001$ | $0.948\pm 0.000$
| $60$ | $50$ | $0.935\pm 0.002$ | $0.933\pm 0.002$
| $60$ | $100$ | $0.950\pm 0.001$ | $0.950\pm 0.001$
CIC-IDS2018 | $15$ | $25$ | $0.945\pm 0.004$ | $0.947\pm 0.006$
| $30$ | $50$ | $0.959\pm 0.000$ | $0.959\pm 0.001$
| $60$ | $50$ | $0.920\pm 0.001$ | $0.920\pm 0.001$
| $60$ | $100$ | $0.957\pm 0.000$ | $0.957\pm 0.000$
CIC-DDoS2019 | $15$ | $25$ | $0.864\pm 0.002$ | $0.863\pm 0.003$
| $30$ | $50$ | $0.861\pm 0.003$ | $0.861\pm 0.003$
| $60$ | $50$ | $0.824\pm 0.004$ | $0.825\pm 0.005$
| $60$ | $100$ | $0.946\pm 0.002$ | $0.948\pm 0.002$
## Appendix H Baselines
We use open-source implementations of DenseStream [1] (Java), SedanSpot [11]
(C++), MIDAS-R [3] (C++), PENminer [12] (Python), F-FADE [13] (Python),
DenseAlert [1] (Java), and AnomRank [14] (C++) provided by the authors,
following parameter settings as suggested in the original paper. For SpotLight
[2], we used open-sourced implementations of Random Cut Forest [55] and Carter
Wegman hashing [56].
### H.1 Edge Anomalies
1. 1.
SedanSpot:
* •
sample_size $=10000$
* •
num_walk $=50$
* •
restart_prob $0.15$
2. 2.
MIDAS: The size of CMSs is 2 rows by 1024 columns for all the tests. For
MIDAS-R, the decay factor $\alpha=0.6$.
3. 3.
PENminer:
* •
ws $=1$
* •
ms $=1$
* •
view = id
* •
alpha $=1$
* •
beta $=1$
* •
gamma $=1$
4. 4.
DenseStream: We keep default parameters, i.e., order $=3$.
5. 5.
F-FADE:
* •
embedding_size $=200$
* •
W_upd $=720$
* •
T_th $=120$
* •
alpha $=0.999$
* •
M $=100$
For t_setup, we always use the timestamp value at the $10^{th}$ percentile of
the dataset.
### H.2 Graph Anomalies
1. 1.
SpotLight:
* •
K $=50$
* •
p $=0.2$
* •
q $=0.2$
2. 2.
DenseAlert: We keep default parameters, i.e., order $=3$ and window=$60$.
3. 3.
AnomRank: We keep default parameters, i.e., damping factor c $=0.5$, and L1
changes of node score vectors threshold epsilon $=10^{-3}$. We keep
${1/4}^{th}$ number of graphs for initializing mean/variance as mentioned in
the respective paper.
# A New Aspect of Chebyshev’s Bias for Elliptic Curves over Function Fields
Ikuya Kaneko The Division of Physics, Mathematics and Astronomy, California
Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
<EMAIL_ADDRESS>https://sites.google.com/view/ikuyakaneko/ and Shin-ya
Koyama Department of Biomedical Engineering, Toyo University, 2100 Kujirai,
Kawagoe, Saitama, 350-8585, Japan<EMAIL_ADDRESS>http://www1.tmtv.ne.jp/
koyama/
###### Abstract.
This work considers the prime number races for non-constant elliptic curves
$E$ over function fields. We prove that if $\mathrm{rank}(E)>0$, then there
exist Chebyshev biases towards being negative, and otherwise there exist
Chebyshev biases towards being positive. The main innovation entails the
convergence of the partial Euler product at the centre that follows from the
Deep Riemann Hypothesis over function fields.
###### Key words and phrases:
Deep Riemann Hypothesis, Grand Riemann Hypothesis, Birch–Swinnerton-Dyer
conjecture, $L$-functions, elliptic curves, function fields, Chebyshev’s bias
###### 2020 Mathematics Subject Classification:
Primary: 11G40; Secondary: 11M38
## 1\. Introduction
In 1853, Chebyshev noticed in a letter to Fuss that primes congruent to $3$
modulo $4$ seem to predominate over those congruent to $1$ modulo $4$. If
$\pi(x;q,a)$ signifies the number of primes $p\leq x$ such that $p\equiv a\pmod{q}$, then the inequality $\pi(x;4,3)\geq\pi(x;4,1)$ holds for more than $97\%$ of $x<10^{11}$. On the other hand, in view of Dirichlet's theorem on primes in arithmetic progressions, the numbers of primes of the form $4k+1$ and $4k+3$ should be asymptotically equal. Therefore, the Chebyshev bias indicates that the primes of the form $4k+3$ tend to appear earlier than those of
the form $4k+1$. Classical triumphs include the work of Littlewood [12] who
established that $\pi(x;4,3)-\pi(x;4,1)$ changes its sign infinitely often.
Knapowski–Turán [9] conjectured that the density of the numbers $x$ for which
$\pi(x;4,3)\geq\pi(x;4,1)$ holds is $1$. Nonetheless, the work of Kaczorowski
[7] shows the falsity of their conjecture conditionally on the Generalised
Riemann Hypothesis. Note that the set of such $x$ nevertheless has a logarithmic density, which by work of Rubinstein–Sarnak [14] is approximately $0.9959\cdots$.
A natural refinement is to introduce a weighted counting function that allows one to scrutinise the above phenomenon more closely. The work of Aoki–Koyama [2] introduces the counting function
$\pi_{s}(x;q,a)\coloneqq\sum_{\begin{subarray}{c}p\leq x\\\ p\equiv a\pmod{q}\end{subarray}}\frac{1}{p^{s}},\qquad s\geq 0,$ (1.1)
extending $\pi(x;q,a)=\pi_{0}(x;q,a)$; for fixed $s>0$, smaller primes $p$ contribute more to $\pi_{s}(x;q,a)$. The function
$\pi_{s}(x;q,a)$ ought to be more appropriate than $\pi(x;q,a)$ to represent
the phenomenon, since it reflects the size of the primes that $\pi(x;q,a)$
ignores. Although the natural density of the set
$\\{x>0\mid\pi_{s}(x;4,3)-\pi_{s}(x;4,1)>0\\}$ does not exist when $s=0$ under
the Generalised Riemann Hypothesis (see the article [7]), they have shown
under the Deep Riemann Hypothesis (DRH) that it would be equal to $1$ when
$s=1/2$. More precisely, the Chebyshev bias could be realised in terms of the
asymptotic formula
$\pi_{\frac{1}{2}}(x;4,3)-\pi_{\frac{1}{2}}(x;4,1)=\frac{1}{2}\log\log
x+c+o(1)$ (1.2)
as $x\to\infty$, where $c$ is a constant.
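Purely as an illustration (not part of the argument), the weighted race in (1.2) can be evaluated numerically for small $x$; a brief Python sketch using SymPy's `primerange`:

```python
from sympy import primerange

def pi_s(x: int, q: int, a: int, s: float = 0.5) -> float:
    """Weighted prime-counting function pi_s(x; q, a) from (1.1)."""
    return sum(p ** (-s) for p in primerange(2, x + 1) if p % q == a)

for x in (10**3, 10**4, 10**5):
    diff = pi_s(x, 4, 3) - pi_s(x, 4, 1)
    # Under DRH this difference should grow roughly like (1/2) log log x + c.
    print(x, round(diff, 4))
```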
The Chebyshev biases for prime ideals $\mathfrak{p}$ of a global field $K$ are
formulated as follows.
###### Definition 1.1 (Aoki–Koyama [2]).
Let $c(\mathfrak{p})\in\mathbb{R}$ be a sequence over primes $\mathfrak{p}$ of
$K$ such that
$\lim_{x\to\infty}\frac{\\#\\{\mathfrak{p}\mid
c(\mathfrak{p})>0,\mathrm{N}(\mathfrak{p})\leq x\\}}{\\#\\{\mathfrak{p}\mid
c(\mathfrak{p})<0,\mathrm{N}(\mathfrak{p})\leq x\\}}=1.$
We say that $c(\mathfrak{p})$ has a Chebyshev bias towards being positive if
there exists a constant $C>0$ such that
$\sum_{\mathrm{N}(\mathfrak{p})\leq
x}\frac{c(\mathfrak{p})}{\sqrt{\mathrm{N}(\mathfrak{p})}}\sim C\log\log x,$
where $\mathfrak{p}$ ranges over primes of $K$. On the other hand, we say that
$c(\mathfrak{p})$ is unbiased if
$\sum_{\mathrm{N}(\mathfrak{p})\leq
x}\frac{c(\mathfrak{p})}{\sqrt{\mathrm{N}(\mathfrak{p})}}=O(1).$
###### Definition 1.2 (Aoki–Koyama [2]).
Assume that the set of all primes $\mathfrak{p}$ of $K$ with
$\mathrm{N}(\mathfrak{p})\leq x$ is expressed as a disjoint union
$P_{1}(x)\cup P_{2}(x)$ and that their proportion converges to
$\delta=\lim_{x\to\infty}\frac{|P_{1}(x)|}{|P_{2}(x)|}.$
We say that there exists a Chebyshev bias towards $P_{1}$ or a Chebyshev bias
against $P_{2}$ if
$\sum_{\mathfrak{p}\in P_{1}(x)}\frac{1}{\sqrt{\mathrm{N}(\mathfrak{p})}}-\delta\sum_{\mathfrak{p}\in P_{2}(x)}\frac{1}{\sqrt{\mathrm{N}(\mathfrak{p})}}\sim C\log\log x$
for some $C>0$. On the other hand, we say that there exist no biases between
$P_{1}$ and $P_{2}$ if
$\sum_{\mathfrak{p}\in P_{1}(x)}\frac{1}{\sqrt{\mathrm{N}(\mathfrak{p})}}-\delta\sum_{\mathfrak{p}\in P_{2}(x)}\frac{1}{\sqrt{\mathrm{N}(\mathfrak{p})}}=O(1).$
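For orientation, the classical race (1.2) is an instance of Definition 1.2 (this unpacking is ours): take $K=\mathbb{Q}$ and $\mathrm{N}(\mathfrak{p})=p$, let $P_{1}(x)$ and $P_{2}(x)$ consist of the primes $p\leq x$ with $p\equiv 3\ (\mathrm{mod}\ 4)$ and $p\equiv 1\ (\mathrm{mod}\ 4)$, respectively, so that $\delta=1$. Then (1.2) reads
$\sum_{p\in P_{1}(x)}\frac{1}{\sqrt{p}}-\sum_{p\in P_{2}(x)}\frac{1}{\sqrt{p}}=\pi_{\frac{1}{2}}(x;4,3)-\pi_{\frac{1}{2}}(x;4,1)\sim\frac{1}{2}\log\log x,$
that is, a Chebyshev bias towards $P_{1}$ with $C=1/2$.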
Definitions 1.1 and 1.2 differ from those in [1, 6, 14]: they formulate an
asymptotic formula for the size of the discrepancy caused by Chebyshev’s bias,
a quantity that is disregarded in the conventional definitions, which instead
measure the bias through limiting (logarithmic) distributions. Definitions 1.1
and 1.2 carry little information on the density distributions, and the two
types of formulation appear to have no logical connection; they shed light on
Chebyshev’s bias from different directions.
It is shown in [2, Corollary 3.2] that the Chebyshev bias (1.2) against the
quadratic residue $1{\@displayfalse\,(\mathrm{mod}{\,4})}$ is equivalent to
the convergence of the Euler product at the centre $s=1/2$ for the Dirichlet
$L$-function $L(s,\chi_{-4})$ with $\chi_{-4}$ denoting the nontrivial
character modulo $4$. This is related to DRH proposed by Kurokawa [8, 11] in
2012.
Aoki–Koyama [2] have observed various instances of Chebyshev biases that
certain prime ideals in Galois extensions of global fields have over others,
with an emphasis on the biases against splitting primes and principal prime
ideals. Koyama–Kurokawa [10] have obtained an analogue of this phenomenon for
Ramanujan’s $\tau$-function $\tau(p)$ and shown that DRH for the automorphic
$L$-function $L(s+11/2,\Delta)$ implies the bias of $\tau(p)/p^{11/2}$ towards
positive values.
In this work, we delve into such phenomena on the prime number races that
elliptic curves over function fields give rise to. Let $E$ be an elliptic
curve over $K$ and let $E_{v}$ be the reduction of $E$ on the residue field
$k_{v}$ at a finite place $v$ of $K$. If $E$ has good reduction at $v$, we
define
$a_{v}=a_{v}(E)\coloneqq q_{v}+1-\\#E_{v}(k_{v}),$
where $q_{v}=\\#k_{v}$ and $\\#E_{v}(k_{v})$ is the number of $k_{v}$-rational
points on $E_{v}$. The symbol $a_{v}$ can be extended to all other finite
places $v$ as follows:
$a_{v}\coloneqq\begin{cases}1&\text{if $E$ has split multiplicative reduction
at $v$},\\\ -1&\text{if $E$ has nonsplit multiplicative reduction at $v$},\\\
0&\text{if $E$ has additive reduction at $v$}.\end{cases}$
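To make $a_{v}$ concrete, the following brute-force sketch (ours; stated for a prime field $\mathbb{F}_{p}$ for simplicity, rather than a general residue field $k_{v}$) counts the points of a reduction $E\colon y^{2}=x^{3}+ax+b$ and returns $a_{p}=p+1-\#E(\mathbb{F}_{p})$; the sample curve and primes are arbitrary.

```c
/* Brute-force illustration: a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + a x + b
   with good reduction at an odd prime p.  Affine points are counted via
   Euler's criterion; the point at infinity accounts for the "+1". */
#include <stdio.h>

static long powmod(long base, long e, long p) {
    long r = 1;
    base %= p;
    while (e > 0) {
        if (e & 1) r = r * base % p;
        base = base * base % p;
        e >>= 1;
    }
    return r;
}

static int legendre(long n, long p) {    /* Legendre symbol (n/p), p an odd prime */
    n %= p;
    if (n < 0) n += p;
    if (n == 0) return 0;
    return powmod(n, (p - 1) / 2, p) == 1 ? 1 : -1;
}

static long a_p(long a, long b, long p) {
    long npts = 1;                        /* the point at infinity */
    for (long x = 0; x < p; x++) {
        long rhs = ((x * x % p) * x % p + a * x + b) % p;
        npts += 1 + legendre(rhs, p);     /* 0, 1 or 2 points above this x */
    }
    return p + 1 - npts;
}

int main(void) {
    long primes[] = {5, 7, 11, 13, 17, 19, 23};
    for (int i = 0; i < 7; i++)           /* sample curve: y^2 = x^3 - x */
        printf("p = %2ld   a_p = %3ld\n", primes[i], a_p(-1, 0, primes[i]));
    return 0;
}
```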
This work aims to prove the following asymptotic formula, corresponding to the case $s=1/2$ of (1.1).
###### Theorem 1.3.
Assume $\mathrm{char}(K)>0$ and that $E$ is a non-constant elliptic curve over
$K$ in the terminology of Ulmer [17, Definition 1.1.4]. If
$\mathrm{rank}(E)>0$, then the sequence $a_{v}/\sqrt{q_{v}}$ has a Chebyshev
bias towards being negative. To be more precise, we have that
$\sum_{q_{v}\leq x}\frac{a_{v}}{q_{v}}=\left(\frac{1}{2}-\mathrm{rank}(E)\right)\log\log x+O(1).$ (1.3)
The proof uses the convergence of the Euler product at the centre, which
follows from DRH over function fields due to Conrad [5] and
Kaneko–Koyama–Kurokawa [8]; see §2 for details.
## Acknowledgements
The authors would like to thank Nobushige Kurokawa for instructive
discussions. The first author acknowledges the support of the Masason
Foundation and the Spirit of Ramanujan STEM Talent Initiative. The second
author was partially supported by the INOUE ENRYO Memorial Grant 2022, TOYO
University.
## 2\. Deep Riemann Hypothesis
Let $K$ be a one-dimensional global field that is either a number field or a
function field in one variable over a finite field. For a place $v$ of $K$,
let $M(v)$ denote a unitary matrix of degree $r_{v}\in\mathbb{N}$. We consider
an $L$-function expressed as the Euler product
$L(s,M)=\prod_{v<\infty}\det(1-M(v)q_{v}^{-s})^{-1},$ (2.1)
where $q_{v}$ is the cardinality of the residue field $k_{v}$ at $v$. The product
(2.1) is absolutely convergent in $\mathrm{Re}(s)>1$. In this article, we
assume that $L(s,M)$ has an analytic continuation as an entire function on
$\mathbb{C}$ and a functional equation relating values at $s$ and $1-s$.
Moreover, we set
$\delta(M)=-\mathop{\mathrm{ord}}_{s=1}L(s,M^{2}),$
where $\mathop{\mathrm{ord}}_{s=1}$ signifies the order of the zero at $s=1$.
We here do not presuppose that $M$ is a representation. The square $M^{2}$ is
interpreted as an Adams operation. Note that since
$L(s,M^{2})=\prod_{v<\infty}\det(1-M(v)^{2}q_{v}^{-s})^{-1}=\frac{L(s,\mathrm{Sym}^{2}M)}{L(s,\wedge^{2}M)},$
we derive the expression
$\delta(M)=-\mathop{\mathrm{ord}}_{s=1}L(s,\mathrm{Sym}^{2}M)+\mathop{\mathrm{ord}}_{s=1}L(s,\wedge^{2}M),$
(2.2)
where $\mathrm{Sym}^{2}$ and $\wedge^{2}$ denote the symmetric and the
exterior squares, respectively. If we assume that $M$ is an Artin
representation
$\rho\colon\mathrm{Gal}(K^{\mathrm{sep}}/K)\to\mathrm{Aut}_{\mathbb{C}}(V),\qquad\rho\neq\mathbbm{1}$
with a representation space $V$, then
$\delta(M)=\mathrm{mult}(\mathbbm{1},\mathrm{Sym}^{2}\rho)-\mathrm{mult}(\mathbbm{1},\wedge^{2}\rho),$
where $\mathrm{mult}(\mathbbm{1},\sigma)$ is the multiplicity of the trivial
representation $\mathbbm{1}$ in $\sigma$.
We are now in a position to formulate DRH due to Kurokawa [8, 11] in a general
context.
###### Conjecture 2.1 (Deep Riemann Hypothesis).
Keep the assumptions and notation as above. Let
$m=\mathop{\mathrm{ord}}_{s=1/2}L(s,M)$. Then the limit
$\lim_{x\to\infty}\left((\log x)^{m}\prod_{q_{v}\leq
x}\det\left(1-M(v)q_{v}^{-\frac{1}{2}}\right)^{-1}\right)$ (2.3)
satisfies the following conditions:
DRH (A):
The limit (2.3) exists and is nonzero.
DRH (B):
The limit (2.3) satisfies
$\lim_{x\to\infty}\left((\log x)^{m}\prod_{q_{v}\leq
x}\det\left(1-M(v)q_{v}^{-\frac{1}{2}}\right)^{-1}\right)=\frac{\sqrt{2}^{\delta(M)}}{e^{m\gamma}m!}\cdot
L^{(m)}\left(\frac{1}{2},M\right).$
It is clear that DRH (B) implies DRH (A). Nonetheless, DRH (A) is still meaningful in its own right, since it is essentially equivalent to the Chebyshev biases. The
following instances clarify this situation; we refer the reader to the work of
Aoki–Koyama [2] for more interesting examples.
###### Example 2.2 (Aoki–Koyama [2]).
If $K=\mathbb{Q}$, we denote a prime number by $v=p$. If $r_{p}=1$ for any $p$
and $M$ is the nontrivial Dirichlet character modulo 4, namely
$M(p)=\chi_{-4}(p)$, then we have that $L(s,M)=L(s,\chi_{-4})$ and
$\delta(M)=1$. It is shown in their work [2] that DRH (A) for $L(s,\chi_{-4})$
is equivalent to the original form of the Chebyshev bias (1.2).
###### Example 2.3 (Koyama–Kurokawa [10]).
Let $K=\mathbb{Q}$ and $r_{p}=2$ for any $p$, and let $\tau(p)\in\mathbb{Z}$
be Ramanujan’s $\tau$-function defined for $q=e^{2\pi iz}$ with
$\mathrm{Im}(z)>0$ by
$\Delta(z)=q\prod_{k=1}^{\infty}(1-q^{k})^{24}=\sum_{n=1}^{\infty}\tau(n)q^{n}.$
We introduce $M(p)=\begin{pmatrix}e^{i\theta_{p}}&0\\\
0&e^{-i\theta_{p}}\end{pmatrix}$, where the Satake parameter
$\theta_{p}\in[0,\pi]\cong\mathrm{Conj}(\mathrm{SU}(2))$ is defined by
$\tau(p)=2p^{11/2}\cos(\theta_{p})$. It then follows that the associated
$L$-function
$L(s,M)=\prod_{p}(1-2\cos(\theta_{p})p^{-s}+p^{-2s})^{-1}$
satisfies a functional equation for $s\leftrightarrow 1-s$. Using the
conventional notation of Ramanujan’s $L$-function
$L(s,\Delta)=\sum_{n=1}^{\infty}\frac{\tau(n)}{n^{s}},$
which satisfies a functional equation for $s\leftrightarrow 12-s$, we have
that $L(s,M)=L(s+11/2,\Delta)$ and $\delta(M)=-1$. It is shown in [10] that
DRH (A) for $L(s,M)=L(s+11/2,\Delta)$ implies that there exists a Chebyshev
bias for the sequence $\tau(p)/p^{11/2}$ towards being positive.
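As a small self-contained check (our sketch, independent of [10]), $\tau(n)$ can be computed for small $n$ directly from the product expansion of $\Delta$, and the normalised values $\cos(\theta_{p})=\tau(p)/(2p^{11/2})$ can be read off; the truncation order and the primes printed are arbitrary.

```c
/* Sketch: tau(n) from Delta(z) = q prod_{k>=1} (1-q^k)^{24}, together with
   cos(theta_p) = tau(p) / (2 p^{11/2}).  NMAX is an arbitrary truncation
   order; all tau(n) computed here fit in a long long. */
#include <stdio.h>
#include <math.h>

#define NMAX 20   /* work modulo q^NMAX in the power series */

int main(void) {
    long long c[NMAX] = {1};                    /* coefficients of prod_k (1-q^k)^24 */
    for (int k = 1; k < NMAX; k++)
        for (int rep = 0; rep < 24; rep++)      /* multiply by (1 - q^k), 24 times */
            for (int i = NMAX - 1; i >= k; i--)
                c[i] -= c[i - k];

    int primes[] = {2, 3, 5, 7, 11, 13};
    for (int j = 0; j < 6; j++) {
        int p = primes[j];
        long long tau_p = c[p - 1];             /* tau(p) = [q^{p-1}] prod_k (1-q^k)^24 */
        printf("p = %2d   tau(p) = %8lld   cos(theta_p) = % .6f\n",
               p, tau_p, (double)tau_p / (2.0 * pow((double)p, 5.5)));
    }
    return 0;
}
```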
Conjecture 2.1 is known to hold when the characteristic of $K$ is positive. A proof was given by Conrad [5, Theorems 8.1 and 8.2] under a second moment hypothesis, and the full proof was given by Kaneko–Koyama–Kurokawa [8, Theorem 5.5]. We record their result as follows.
###### Theorem 2.4 (Kaneko–Koyama–Kurokawa [8]).
Conjecture 2.1 is valid for $\mathrm{char}(K)>0$.
## 3\. Proof of Theorem 1.3
Let $E$ be an elliptic curve over a global field $K$ and let $a_{v}$ be the
same as in the introduction. We define the parameter
$\theta_{v}\in[0,\pi]\cong\mathrm{Conj}(\mathrm{SU}(2))$ by
$a_{v}=2\sqrt{q_{v}}\cos(\theta_{v})$. If one writes
$r_{v}=\begin{cases}2&\text{if $v$ is good},\\\ 1&\text{if $v$ is
bad},\end{cases}$
and
$M(v)=M_{E}(v)=\begin{cases}\begin{pmatrix}e^{i\theta_{v}}&0\\\
0&e^{-i\theta_{v}}\end{pmatrix}&\text{if $v$ is good},\\\ a_{v}&\text{if $v$
is bad},\end{cases}$
then the $L$-function (2.1) is equal to
$L(s,M)=L(s,M_{E})=\prod_{v\colon\text{good}}(1-2\cos(\theta_{v})q_{v}^{-s}+q_{v}^{-2s})^{-1}\prod_{v\colon\text{bad}}(1-a_{v}q_{v}^{-s})^{-1}.$
This Euler product is absolutely convergent in $\mathrm{Re}(s)>1$ and has a
meromorphic continuation to the whole complex plane $\mathbb{C}$ along with a
functional equation for $s\leftrightarrow 1-s$.
The $L$-function $L(s,M_{E})$ is expressed in terms of an Artin-type
$L$-function in the following manner. Fixing $\ell$ and $K$ with
$\ell\neq\mathrm{char}(K)$, we obtain the representation
$\rho_{E}\colon\mathrm{Gal}(K^{\mathrm{sep}}/K)\to\mathrm{Aut}(T_{\ell}(E)\otimes\mathbb{Q}_{\ell}),$
(3.1)
where $T_{\ell}(E)=\varprojlim\limits_{n}E[\ell^{n}]$ is the $\ell$-adic Tate
module of $E/K$. The Artin $L$-function associated to the Galois
representation $\rho_{E}$ is defined by the Euler product
$L(s,\rho_{E})=\prod_{v<\infty}\det(1-q_{v}^{-s}\rho_{E}(\mathrm{Frob}_{v}|V^{I_{v}}))^{-1}.$
The Euler factors of $L(s,M_{E})$ are in accordance with those of
$L(s+1/2,\rho_{E})$ for all places $v$ at which $E$ has good reduction. In
other words, the $L$-function $L(s,M_{E})$ equals $L(s+1/2,\rho_{E})$ up to
finite factors stemming from bad places.
Conjecture 2.1 originates from the Birch–Swinnerton-Dyer conjecture in the
following form.
###### Conjecture 3.1 (Birch–Swinnerton-Dyer [3, Page 79 (A)]).
Let $E$ be an elliptic curve over $K$. Then there exists a constant $A>0$
dependent on $E$ such that
$\prod_{\begin{subarray}{c}q_{v}\leq x\\\
v\colon\text{good}\end{subarray}}\frac{\\#E(k_{v})}{q_{v}}\sim A(\log x)^{r},$
(3.2)
where $r=\mathrm{rank}(E)$. Moreover, $r$ is equal to the order of vanishing
of $L(s,M_{E})$ at $s=1/2$.
Since the left-hand side of (3.2) is the reciprocal of the partial Euler product over the good places of the $L$-function $L(s,M_{E})$ at $s=1/2$, Conjecture 3.1 is indicative of DRH (A) for $L(s,M_{E})$.
###### Theorem 3.2.
Keep the notation as above. The following conditions are equivalent.
1. (i)
DRH (A) holds for $L(s,M)$.
2. (ii)
There exists a constant $c$ such that
$\sum_{q_{v}\leq
x}\frac{\mathrm{tr}(M(v))}{\sqrt{q_{v}}}=-\left(\frac{\delta(M)}{2}+m\right){\log\log
x}+c+o(1),$
where $m=\mathop{\mathrm{ord}}_{s=1/2}L(s,M)$.
###### Proof.
We introduce
$\displaystyle\text{I}(x)$ $\displaystyle=\sum_{q_{v}\leq
x}\frac{\mathrm{tr}(M(v))}{\sqrt{q_{v}}},$ $\displaystyle\text{II}(x)$
$\displaystyle=\frac{1}{2}\sum_{q_{v}\leq
x}\frac{\mathrm{tr}(M(v)^{2})}{q_{v}},$ $\displaystyle\text{III}(x)$
$\displaystyle=\sum_{k=3}^{\infty}\frac{1}{k}\sum_{q_{v}\leq
x}\frac{\mathrm{tr}(M(v)^{k})}{q_{v}^{k/2}}.$
Since one has
$\text{I}(x)+\text{II}(x)+\text{III}(x)=\log\left(\prod_{q_{v}\leq
x}\det\left(1-M(v)q_{v}^{-\frac{1}{2}}\right)^{-1}\right),$
the condition (i) is equivalent to the claim that there exists a constant $L$
such that
$m\log\log x+\text{I}(x)+\text{II}(x)+\text{III}(x)=L+o(1).$ (3.3)
The generalised Mertens theorem (see [13, Theorem 5] and [8, Lemma 5.3]) gives
$\text{II}(x)=\frac{\delta(M)}{2}\log\log x+C_{1}+o(1)$ (3.4)
for some constant $C_{1}$. Additionally, it is straightforward to see that
there exists a constant $C_{2}$ such that
$\text{III}(x)=C_{2}+o(1).$ (3.5)
Therefore, the estimates (3.3)–(3.5) yield
$\text{I}(x)=-\left(\frac{\delta(M)}{2}+m\right)\log\log
x+L-C_{1}-C_{2}+o(1).$
If we assume (i), then the condition (ii) is valid with $c=L-C_{1}-C_{2}$.
Conversely, if we assume (ii), then (3.3) is valid with $L=c+C_{1}+C_{2}$.
This finishes the proof of Theorem 3.2. ∎
In order to examine the asymptotic behavior of a sum over $v$ with $q_{v}\leq
x$ as $x\to\infty$, it suffices to restrict ourselves to places $v$ at which
$E$ has good reduction. When $v$ is good, the $n$-th symmetric power matrix
$\mathrm{Sym}^{n}M$ of size $n+1$ is given by
$(\mathrm{Sym}^{n}M)(v)=\mathrm{diag}(e^{in\theta_{v}},e^{i(n-2)\theta_{v}},\cdots,e^{-i(n-2)\theta_{v}},e^{-in\theta_{v}}).$
We calculate
$\mathrm{tr}(\mathrm{Sym}^{n}M)(v)=\frac{\sin((n+1)\theta_{v})}{\sin\theta_{v}}.$
Extending the definition of $(\mathrm{Sym}^{n}M)(v)$ to all places $v$ by
setting $(\mathrm{Sym}^{n}M)(v)=a_{v}^{n}$ for bad places $v$, one defines the
$n$-th symmetric power $L$-function $L(s,\mathrm{Sym}^{n}M)$. With the
standard notation for the Galois representation $\rho=\rho_{E}$ in (3.1), we
have the following normalisation:
$L(s,\mathrm{Sym}^{n}M)=L\left(s+\frac{n}{2},\mathrm{Sym}^{n}\rho\right).$
(3.6)
If $E$ is a non-constant elliptic curve in the terminology of Ulmer [17,
Definition 1.1.4], it is known that $L(s,\mathrm{Sym}^{n}M)$ is a polynomial
in $q^{-n/2-s}$ (see [4, 16]) and the absolute values of its roots are equal
to $q^{-(n+1)/2}$ with the normalisation (3.6) taken into account. Therefore,
all zeroes of (3.6) lie on the critical line $\mathrm{Re}(s)=1/2$ and there
holds
$\mathop{\mathrm{ord}}_{s=1}L(s,\mathrm{Sym}^{n}M)=0,\qquad n\in\mathbb{N}.$
(3.7)
###### Lemma 3.3.
If $\mathrm{char}(K)>0$ and $E$ is a non-constant elliptic curve over $K$,
then we have for $M=M_{E}$ that $\delta(M)=-1.$
###### Proof.
Since $M(v)$ is a unitary matrix of size 2 with determinant 1 at every good place $v$, the exterior square $\wedge^{2}M$ is trivial there; hence $L(s,\wedge^{2}M)$ agrees with the zeta function of $K$ up to finitely many Euler factors and has a simple pole at $s=1$. Thus $\mathop{\mathrm{ord}}_{s=1}L(s,\wedge^{2}M)=-1$. The claim follows from (2.2) and (3.7). ∎
In what follows, we use the shorthand
$m_{n}=\mathop{\mathrm{ord}}_{s=1/2}L(s,\mathrm{Sym}^{n}M)$.
###### Theorem 3.4.
If $\mathrm{char}(K)>0$ and $E$ is a non-constant elliptic curve over $K$,
then we have that
$\sum_{q_{v}\leq x}\frac{a_{v}}{q_{v}}=\left(\frac{1}{2}-m_{1}\right)\log\log x+O(1).$ (3.8)
In particular, if $\mathrm{rank}(E)>0$, the sequence $a_{v}/\sqrt{q_{v}}$ has
a Chebyshev bias towards being negative.
###### Proof.
In [16, §3.1.7] and [17, Theorem 9.3], Ulmer proves that $L(s,M_{E})$ is a
polynomial in $q^{-s}$ for any non-constant elliptic curve $E$ and hence it is
entire and satisfies the assumption of Conjecture 2.1. By Theorem 2.4, DRH
holds for $L(s,M_{E})$. By Theorem 3.2 with $m=m_{1}$ and $\delta(M)=-1$ (Lemma 3.3), the coefficient $-(\delta(M)/2+m)$ equals $\frac{1}{2}-m_{1}$, which proves (3.8). In order to justify the second assertion, we appeal to the work of Ulmer [17, Theorem 12.1 (1)], which says that $\mathrm{rank}(E)\leq m_{1}$. Hence $\mathrm{rank}(E)>0$ forces $m_{1}\geq 1$, so the coefficient $\frac{1}{2}-m_{1}$ in (3.8) is negative. This completes the proof of Theorem 3.4. ∎
Theorem 3.4 is in accordance with the prediction of Sarnak [15, page 5] that
$\mathrm{rank}(E)>0$ implies the existence of a bias towards being negative,
although he considers the sequence $a_{v}$ instead of $a_{v}/\sqrt{q_{v}}$. He
also states that $\mathrm{rank}(E)=0$ implies the existence of a bias towards
being positive. We ascertain this phenomenon under the Birch–Swinnerton-Dyer
conjecture.
###### Corollary 3.5.
Assume that $L(1,M_{E})\neq 0$. If $\mathrm{char}(K)>0$ and $E$ is a non-
constant elliptic curve over $K$ such that $\mathrm{rank}(E)=0$, then we have
that
$\sum_{q_{v}\leq x}\frac{a_{v}}{q_{v}}=\frac{1}{2}\log\log x+O(1).$ (3.9)
In other words, the sequence $a_{v}/\sqrt{q_{v}}$ has a Chebyshev bias towards
being positive.
###### Proof.
The Birch–Swinnerton-Dyer conjecture asserts that $m_{1}=\mathrm{rank}(E)$,
which implies that $m_{1}=0$. Hence the asymptotic formula (3.8) gives the
desired result. ∎
###### Remark 1.
Cha–Fiorilli–Jouve [4, Theorem 1.7] have shown that there exist infinitely
many elliptic curves $E/\mathbb{F}_{q}(T)$ such that the sequence $a_{v}$ is
unbiased in the following sense:
$\lim_{X\to\infty}\frac{1}{X}\sum_{\begin{subarray}{c}x\leq X\\\
T_{E}(x)>0\end{subarray}}1=\frac{1}{2}$
with
$T_{E}(x)=-\frac{x}{q^{x/2}}\sum_{\begin{subarray}{c}\deg(v)\leq x\\\
v\colon\text{good}\end{subarray}}2\cos\theta_{v}=-\frac{x}{q^{x/2}}\sum_{\begin{subarray}{c}\deg(v)\leq
x\\\ v\colon\text{good}\end{subarray}}\frac{a_{v}}{q^{\deg(v)/2}}.$
Although their work discusses the Chebyshev bias for the sequence $a_{v}$, which differs from the sequence $a_{v}/\sqrt{q_{v}}$ treated in Theorem 3.4, we strongly believe that these are constant elliptic curves whose $L$-functions have a pole at $s=1$ and hence do not satisfy DRH.
We obtain other types of biases on the Satake parameters $\theta_{v}$ by
considering the symmetric square $L$-function.
###### Theorem 3.6.
If $\mathrm{char}(K)>0$ and $E$ is a non-constant elliptic curve over $K$,
then we have that
$\sum_{q_{v}\leq x}\frac{2\,(2\cos\theta_{v}-1)(\cos\theta_{v}+1)}{\sqrt{q_{v}}}=(1-m_{1}-m_{2})\log\log x+O(1).$ (3.10)
In particular, if both $L(1/2,M)\neq 0$ and $L(1/2,\mathrm{Sym}^{2}M)\neq 0$
are true, the sequence $(2\cos\theta_{v}-1)(\cos\theta_{v}+1)$ has a Chebyshev
bias towards being positive.
###### Proof.
Applying Theorem 3.2 to $L(s,M)$ and $L(s,\mathrm{Sym}^{2}M)$, we deduce
$\sum_{q_{v}\leq
x}\frac{2\cos\theta_{v}}{\sqrt{q_{v}}}=\left(\frac{1}{2}-m_{1}\right){\log\log
x}+O(1)$ (3.11)
and
$\sum_{q_{v}\leq x}\frac{2\cos
2\theta_{v}}{\sqrt{q_{v}}}=\left(\frac{1}{2}-m_{2}\right){\log\log x}+O(1).$
(3.12)
It follows that the sum of (3.12) and (3.11) equals
$\sum_{q_{v}\leq x}\frac{2(\cos\theta_{v}+\cos
2\theta_{v})}{\sqrt{q_{v}}}=\sum_{q_{v}\leq
x}\frac{2(2\cos\theta_{v}-1)(\cos\theta_{v}+1)}{\sqrt{q_{v}}}=(1-m_{1}-m_{2})\log\log
x+O(1).$
This concludes the proof of Theorem 3.6. ∎
We give an example of unbiased sequences constructed from the Satake
parameters $\theta_{v}$.
###### Theorem 3.7.
If $\mathrm{char}(K)>0$ and $E$ is a non-constant elliptic curve over $K$,
then we have that
$\sum_{q_{v}\leq
x}\frac{2(2\cos\theta_{v}+1)(\cos\theta_{v}-1)}{\sqrt{q_{v}}}=(m_{1}-m_{2})\log\log
x+O(1).$ (3.13)
In particular, if $m_{1}=m_{2}$, the sequence
$(2\cos\theta_{v}+1)(\cos\theta_{v}-1)$ is unbiased.
###### Proof.
It follows that (3.12) minus (3.11) equals
$\sum_{q_{v}\leq x}\frac{2(\cos
2\theta_{v}-\cos\theta_{v})}{\sqrt{q_{v}}}=\sum_{q_{v}\leq
x}\frac{2(2\cos\theta_{v}+1)(\cos\theta_{v}-1)}{\sqrt{q_{v}}}=(m_{1}-m_{2})\log\log
x+O(1).$
This concludes the proof of Theorem 3.7. ∎
## References
* [1] A. Akbary, N. Ng, and M. Shahabi. Limiting distributions of the classical error terms of prime number theory. Q. J. Math., 65(3):743–780, 2014.
* [2] M. Aoki and S. Koyama. Chebyshev’s bias against splitting and principal primes in global fields. to appear in J. Number Theory, 35 pages, 2023. https://doi.org/10.1016/j.jnt.2022.10.005.
* [3] B. J. Birch and H. P. F. Swinnerton-Dyer. Notes on elliptic curves. II. J. Reine Angew. Math., 218:79–108, 1965.
* [4] B. Cha, D. Fiorilli, and F. Jouve. Prime number races for elliptic curves over function fields. Ann. Sci. École Norm. Sup. (4), 49(5):1239–1277, 2016.
* [5] K. Conrad. Partial Euler products on the critical line. Canad. J. Math., 57(2):267–297, 2005.
* [6] L. Devin. Chebyshev’s bias for analytic $L$-functions. Math. Proc. Cambridge Philos. Soc., 169(1):103–140, 2020.
* [7] J. Kaczorowski. On the distribution of primes (mod $4$). Analysis, 15(2):159–171, 1995.
* [8] I. Kaneko, S. Koyama, and N. Kurokawa. Towards the Deep Riemann Hypothesis for $\mathrm{GL}_{n}$. arXiv e-prints, 17 pages, 2022. https://arxiv.org/abs/2206.02612.
* [9] S. Knapowski and P. Turán. Comparative prime-number theory. I. Introduction. Acta Math. Acad. Sci. Hungar., 13:299–314, 1962.
* [10] S. Koyama and N. Kurokawa. Chebyshev’s bias for Ramanujan’s $\tau$-function via the Deep Riemann Hypothesis. Proc. Japan Acad. Ser. A Math. Sci., 98(6):35–39, 2022.
* [11] N. Kurokawa. The pursuit of the Riemann Hypothesis (in Japanese). Gijutsu Hyouron-sha, Tokyo, 2012.
* [12] J. E. Littlewood. Sur la distribution des nombres premiers. C. R. Acad. Sci. Paris, 158:1869–1872, 1914.
* [13] M. I. Rosen. A generalization of Mertens’ theorem. J. Ramanujan Math. Soc., 14:1–19, 1999.
* [14] M. Rubinstein and P. Sarnak. Chebyshev’s bias. Experiment. Math., 3(3):173–197, 1994.
* [15] P. Sarnak. Letter to Barry Mazur on “Chebyshev’s bias” for $\tau(p)$, 2007. https://publications.ias.edu/sites/default/files/MazurLtrMay08.PDF.
* [16] D. Ulmer. Geometric non-vanishing. Invent. Math., 159(1):133–186, 2005.
* [17] D. Ulmer. Elliptic curves over function fields. In C. Popescu, K. Rubin, and A. Silverberg, editors, Arithmetic of $L$-Functions, volume 18 of IAS/Park City Mathematics Series, pages 211–280. Amer. Math. Soc., Providence, RI, 2011.
$\Biggl{|}\mbox{$\>\\!$}\mathrm{e}^{\mathrm{i}t}-\sum_{j=0}^{k-1}\frac{(\mathrm{i}\mbox{$\>\\!$}t)^{j}}{j!}\Biggr{|}\leq\frac{\,|t|^{k}}{k!\,}.$
(5.11)
Now, combining the uniform expansions (5.10) and (5.11) (the latter with $k=2$
or $k=1$ as appropriate) and returning to (5.7), we obtain
$\displaystyle\sum_{\ell\in\mathbb{N}^{q}}\log\left(1+w_{\ell}\right)$
$\displaystyle=\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\left(\mathrm{i}\mbox{$\;\\!$}(\tilde{t}_{1}\ell+\tilde{t}_{2})-\frac{1}{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}+O(1)\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{3}\right)$
$\displaystyle\quad+O(1)\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{2\ell}z_{2}^{2}}{(1+z_{1}^{\ell}z_{2})^{2}}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}$
$\displaystyle=\mathrm{i}\mbox{$\;\\!$}\tilde{t}_{1}\sum_{\ell\in\mathbb{N}^{q}}\frac{\ell\mbox{$\;\\!$}z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}+\mathrm{i}\mbox{$\;\\!$}\tilde{t}_{2}\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}-\frac{1}{2}\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}$
$\displaystyle\quad+O(1)\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}z_{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{3}+O(1)\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{2\ell}z_{2}^{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}$
$\displaystyle=:\mathrm{i}\mbox{$\>\\!$}\tilde{t}_{1}\mbox{$\>\\!$}\varSigma_{1}+\mathrm{i}\mbox{$\>\\!$}\tilde{t}_{2}\mbox{$\;\\!$}\varSigma_{2}-\tfrac{1}{2}\mbox{$\>\\!$}\varSigma_{3}+O(1)\mbox{$\>\\!$}\varSigma_{4}+O(1)\mbox{$\>\\!$}\varSigma_{5}.$
(5.12)
According to the calibration equations (see (3.27) and (3.30)), the first two
sums in (5.12) are known exactly,
$\varSigma_{1}=\langle N\rangle,\qquad\varSigma_{2}=\langle M\rangle.$ (5.13)
Next, the error sums $\varSigma_{4}$ and $\varSigma_{5}$ can be shown to
asymptotically vanish. Indeed, using the elementary inequality $(a+b)^{r}\leq
2^{r-1}\left(a^{r}+b^{r}\right)$ ($r\geq 1$) and combining Lemma 3.2 with
formulas (3.25), (3.26) and (3.35) gives upon simple calculations the estimate
$\displaystyle 0\leq\varSigma_{4}$ $\displaystyle\leq
4\mbox{$\;\\!$}\tilde{t}_{1}^{\mbox{$\;\\!$}3}\sum_{\ell\in\mathbb{N}^{q}}\ell^{3}z_{1}^{\ell}z_{2}+4\mbox{$\;\\!$}\tilde{t}_{2}^{\mbox{$\;\\!$}3}\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}z_{2}$
$\displaystyle=\frac{O(1)\mbox{$\;\\!$}\langle
M\rangle^{3/2}\mbox{$\>\\!$}z_{2}}{\langle
N\rangle^{3}\mbox{$\;\\!$}\gamma^{3+1/q}}+\frac{O(1)\mbox{$\;\\!$}z_{2}}{\langle
M\rangle^{3/2}\mbox{$\;\\!$}\gamma^{1/q}}=\frac{O(1)}{\langle
M\rangle^{1/2}}=o(1).$ (5.14)
Similarly,
$\displaystyle 0\leq\varSigma_{5}$ $\displaystyle\leq
2\mbox{$\;\\!$}\tilde{t}_{1}^{\mbox{$\;\\!$}2}\sum_{\ell\in\mathbb{N}^{q}}\ell^{2}z_{1}^{2\ell}z_{2}^{2}+2\mbox{$\;\\!$}\tilde{t}_{2}^{\mbox{$\;\\!$}2}\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{2\ell}z_{2}^{2}$
$\displaystyle=\frac{O(1)\mbox{$\;\\!$}\langle
M\rangle\mbox{$\>\\!$}z_{2}^{2}}{\langle
N\rangle^{2}\mbox{$\;\\!$}\gamma^{2+1/q}}+\frac{O(1)\mbox{$\;\\!$}z_{2}^{2}}{\langle
M\rangle\mbox{$\;\\!$}\gamma^{1/q}}=O(\kappa^{1/q})=o(1).$ (5.15)
Finally, consider the sum $\varSigma_{3}$ in (5.12) which, as we will see,
provides the main contribution to (5.7). To this end, observe (cf. (3.27))
that
$\displaystyle
0\leq\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}z_{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}-\varSigma_{3}$
$\displaystyle=\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{2\ell}z_{2}^{2}}{1+z_{1}^{\ell}z_{2}}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}$
$\displaystyle\leq\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{2\ell}z_{2}^{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}=\varSigma_{5}=o(1),$
(5.16)
as shown in (5.15). In turn, using the asymptotic results (3.43), (3.44) and
(3.45), and recalling the rescaling expressions (5.5), we obtain the limit
$\displaystyle\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}z_{2}\left(\tilde{t}_{1}\ell+\tilde{t}_{2}\right)^{2}$
$\displaystyle=\tilde{t}_{1}^{\mbox{$\;\\!$}2}\sum_{\ell\in\mathbb{N}^{q}}\ell^{2}z_{1}^{\ell}z_{2}+2\mbox{$\;\\!$}\tilde{t}_{1}\tilde{t}_{2}\sum_{\ell\in\mathbb{N}^{q}}\ell\mbox{$\>\\!$}z_{1}^{\ell}z_{2}+\tilde{t}_{2}^{\mbox{$\;\\!$}2}\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}z_{2}$
$\displaystyle\to
t_{1}^{2}+\frac{2\mbox{$\;\\!$}t_{1}t_{2}}{\sqrt{q+1}}+t_{2}^{2}\mbox{$\;\\!$},$
(5.17)
which is a quadratic form with matrix (5.2).
Thus, substituting the estimates (5.13), (5.14), (5.15), (5.16) and (5.17)
into (5.12) and returning to (5.7), we obtain
$\varphi(t_{1},t_{2})\to\exp\left\\{-\frac{1}{2}\left(t_{1}^{2}+\frac{2\mbox{$\;\\!$}t_{1}t_{2}}{\sqrt{q+1}}+t_{2}^{2}\right)\right\\}\\!,$
and the proof of Theorem 5.1 is complete. ∎
###### Corollary 5.2.
Under the hypotheses of Theorem 5.1, the following laws of large numbers hold
under the Boltzmann measure ${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$,
$\frac{M_{\lambda}}{\langle
M\rangle}\stackrel{{\scriptstyle\mathrm{p}}}{{\longrightarrow}}1,\qquad\frac{N_{\lambda}}{\langle
N\rangle}\stackrel{{\scriptstyle\mathrm{p}}}{{\longrightarrow}}1.$ (5.18)
###### Remark 5.1.
As a curiosity, we observe that the limiting distribution in Theorem 4.1
formally conforms to Theorem 5.1 under the additional limit as $\langle
M\rangle\to\infty$. Indeed, start with the intermediate limit (4.11) (with
$\langle M\rangle$ fixed) and switch from Laplace transform to characteristic
function by formally changing $(s_{1},s_{2})$ ($s_{i}\geq 0$) to
$-\mathrm{i}\mbox{$\;\\!$}(t_{1},t_{2})$ ($t_{i}\in\mathbb{R}$). Then, bearing
in mind the normalization (5.1), we obtain
$\displaystyle-\langle
M\rangle\\!\left(1-\mathrm{e}^{\mbox{$\>\\!$}\mathrm{i}\mbox{$\>\\!$}t_{2}/\sqrt{\langle
M\rangle}}\left(1-\frac{\mathrm{i}\mbox{$\;\\!$}t_{1}q}{\sqrt{\langle
M\rangle}}\right)^{-1/q}\right)+\frac{\mathrm{i}\mbox{$\;\\!$}t_{1}\sqrt{\langle
M\rangle}}{\sqrt{q+1}}$
$\displaystyle\to-\frac{1}{2}\left(t_{1}^{2}+\frac{2\mbox{$\;\\!$}t_{1}t_{2}}{\sqrt{q+1}}+t_{2}^{2}\right)\\!,$
(5.19)
by Taylor expanding the left-hand side of (5.19) up to the second order in
parameter $\langle M\rangle^{-1/2}=o(1)$.
We finish this section by stating a theorem concerning the marginal local-type
asymptotics for the length $M_{\lambda}$ and the corresponding conditional
asymptotics for the weight $N_{\lambda}$.
###### Theorem 5.3.
Under Assumptions 3.2 and 5.1, the following distributional asymptotics hold
under the Boltzmann distribution ${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$ on
the space $\check{\varLambda}^{q}$.
* (a)
(Local limit theorem for $M_{\lambda}$) For any $m$ such that $m-\langle
M\rangle=O\bigl{(}\sqrt{\langle M\rangle}\mbox{$\>\\!$}\bigr{)}$,
$Q_{\bm{z}}(M_{\lambda}=m)\sim f_{\langle M\rangle}(m),$ (5.20)
where $f_{\langle M\rangle}(x)$ is the normal density with mean $\langle
M\rangle$ and standard deviation $\sqrt{\langle M\rangle}$,
$f_{\langle M\rangle}(x)=\frac{1}{\sqrt{2\pi\langle
M\rangle}}\,\exp\left\\{-\frac{(x-\langle
M\rangle)^{2}}{2\mbox{$\>\\!$}\langle M\rangle}\right\\}\\!,\qquad
x\in\mathbb{R}.$
* (b)
(Conditional limit theorem for $N_{\lambda}$) Conditionally on $M_{\lambda}=m$
with $m-\langle M\rangle=O\bigl{(}\sqrt{\langle
M\rangle}\mbox{$\>\\!$}\bigr{)}$, the partition weight $N_{\lambda}$ is
asymptotically normal with mean $m\mbox{$\>\\!$}\langle N\rangle/\langle
M\rangle$ and variance $q\mbox{$\;\\!$}\langle N\rangle^{2}/\langle M\rangle$,
that is,
$\frac{\sqrt{\langle M\rangle}}{\sqrt{q}}\left(\frac{N_{\lambda}}{\langle
N\rangle}-\frac{m}{\langle
M\rangle}\right)\stackrel{{\scriptstyle\mathrm{d}}}{{\longrightarrow}}_{M_{\lambda}=m}\mathcal{N}(0,1).$
(5.21)
Part (a) of this theorem can be anticipated from the marginal asymptotic
normality of $M_{\lambda}$ proven in Theorem 5.1. The result in part (b) can
be formally derived from the joint asymptotic normality of the pair
$(N_{\lambda},M_{\lambda})$ by calculating the conditional density of
$N_{\lambda}$ given $M_{\lambda}$. Alternatively, one can refer to the “normal
correlation” result (see, e.g., [69, Sec. II.13, Theorem 2, pp. 303–304]). A
rigorous proof of Theorem 5.3 can be obtained by adapting the standard proof
of a “one-dimensional” local limit theorem for $N_{\lambda}$ (see, e.g., [13],
[80]). We will return to this issue in a separate publication.
### 5.2 Limit shape of Young diagrams
In this section we show that, under the slow growth condition on $\langle
M\rangle$, properly scaled Young diagrams of random partitions
$\lambda\in\check{\varLambda}^{q}$ have a limit shape given by the tail of the
gamma integral,
$\omega_{q}^{*}(x):=\frac{1}{\Gamma(1/q)}\int_{x}^{\infty}\\!u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}=1-G_{1/q}(x),\qquad
x\geq 0,$ (5.22)
where $G_{1/q}(x)$ is the distribution function of
$\mathrm{Gamma}\mbox{$\;\\!$}(1/q)$ (see (4.1)). In particular, for $q=1$ the
definition (5.22) is reduced to
$\omega_{1}^{*}(x)=\mathrm{e}^{-x},\qquad x\geq 0.$
Specifically, set
$A:=q\mbox{$\;\\!$}\langle N\rangle/\langle M\rangle,\qquad B=\langle
M\rangle,$ (5.23)
and consider a scaled Young diagram with upper boundary
$\widetilde{Y}_{\lambda}(x)=B^{-1}\mbox{$\;\\!$}Y_{\lambda}(Ax),\qquad x\geq
0,$ (5.24)
where (see (2.2))
$Y_{\lambda}(x)=\sum_{\ell\geq x}\nu_{\ell},\qquad x\geq 0.$ (5.25)
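As an aside, the limit shape (5.22) is easy to evaluate numerically (for instance, to draw the smooth curves in Figure 3 below); the following crude quadrature sketch is ours and uses the substitution $u=t^{q}$, which removes the integrable singularity of the integrand at $u=0$ when $q>1$.

```c
/* Sketch: numerical evaluation of the limit shape omega_q*(x) = 1 - G_{1/q}(x)
   of (5.22).  After u = t^q the tail integral becomes
   (q / Gamma(1/q)) * \int_{x^{1/q}}^{\infty} exp(-t^q) dt,
   which is handled by a crude midpoint rule on a finite window. */
#include <stdio.h>
#include <math.h>

double omega_q_star(double x, double q) {
    const double CUT = 12.0, H = 1e-4;           /* quadrature window and step */
    double t0 = pow(x, 1.0 / q), sum = 0.0;
    for (double t = t0 + H / 2; t < t0 + CUT; t += H)
        sum += exp(-pow(t, q)) * H;
    return q * sum / tgamma(1.0 / q);
}

int main(void) {
    for (double x = 0.0; x <= 3.0; x += 0.5)
        printf("x = %.1f   omega_1* = %.4f   omega_2* = %.4f\n",
               x, omega_q_star(x, 1.0), omega_q_star(x, 2.0));
    return 0;
}
```

For $q=1$ the output reproduces $\mathrm{e}^{-x}$, which provides a quick sanity check of the quadrature.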
###### Remark 5.2.
The area under the scaled Young diagram is given by
$\int_{0}^{\infty}\mbox{$\>\\!\\!$}\widetilde{Y}_{\lambda}(x)\,\mathrm{d}{x}=B^{-1}\int_{0}^{\infty}\mbox{$\>\\!\\!$}Y_{\lambda}(Ax)\,\mathrm{d}{x}=\frac{\langle
N\rangle}{AB}=\frac{1}{q}.$
Naturally, this condition is preserved by the limit shape; indeed, integrating
by parts we get
$\int_{0}^{\infty}\\!\omega_{q}^{*}(x)\,\mathrm{d}{x}=\frac{1}{\Gamma(1/q)}\int_{0}^{\infty}x^{1/q}\mbox{$\;\\!$}\mathrm{e}^{-x}\,\mathrm{d}{x}=\frac{\Gamma(1+1/q)}{\Gamma(1/q)}=\frac{1}{q}.$
First, we obtain the _expected limit shape_ result.
###### Theorem 5.4.
Under Assumptions 3.2 and 5.1, uniformly in $x\geq 0$
${\mathsf{E}}_{\bm{z}}\mbox{$\;\\!\\!$}\bigl{(}\widetilde{Y}_{\lambda}(x)\bigr{)}\to\omega_{q}^{*}(x),$
(5.26)
where the limit shape $x\mapsto\omega_{q}^{*}(x)$ is defined in (5.22).
###### Proof.
We first show that for each $x\geq 0$, the convergence (5.26) holds. By Lemma
3.1,
$\displaystyle{\mathsf{E}}_{\bm{z}}\bigl{(}Y_{\lambda}(Ax)\bigr{)}=\sum_{\ell\geq
Ax}{\mathsf{E}}_{\bm{z}}(\nu_{\ell})$ $\displaystyle=\sum_{\ell\geq
Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}=\sum_{\ell\geq
Ax}z_{1}^{\ell}z_{2}-\widetilde{R}_{1}(\bm{z}),$ (5.27)
where (cf. (3.27) and (3.28))
$0\leq\widetilde{R}_{1}(\bm{z})=\sum_{\ell\geq
Ax}\frac{z_{1}^{2\ell}\mbox{$\;\\!\\!$}z_{2}^{2}}{1+z_{1}^{\ell}z_{2}}\leq\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{2\ell}\mbox{$\;\\!\\!$}z_{2}^{2}=\braket{M}O(\kappa^{1/q}),$
(5.28)
by virtue of (3.41) (with $r=2$). Furthermore, applying Lemma 3.4 (with
$\gamma=-\log z_{1}$ and $s=0$) and noting that $\gamma A\to 1$, we obtain
$\sum_{\ell\geq
Ax}z_{1}^{\ell}z_{2}\sim\frac{z_{2}}{q\,\gamma^{1/q}}\int_{\gamma
Ax}^{\infty}\mbox{$\>\\!\\!$}u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}\sim\frac{\langle
M\rangle}{\Gamma(1/q)}\int_{x}^{\infty}\\!u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u},$
(5.29)
using the asymptotic formula (3.35). Thus, substituting (5.28) and (5.29) into
(5.27) gives
$\displaystyle{\mathsf{E}}_{\bm{z}}\bigl{(}\widetilde{Y}_{\lambda}(x)\bigr{)}=\frac{1}{\langle
M\rangle}\,{\mathsf{E}}_{\bm{z}}\bigl{(}Y_{\lambda}(Ax)\bigr{)}\sim\frac{1}{\Gamma(1/q)}\int_{x}^{\infty}\\!u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}=\omega_{q}^{*}(x),$
as claimed. Finally, the uniform convergence in formula (5.26) follows by
Lemma 3.6, noting that the function $x\mapsto\omega_{q}^{*}(x)$ is continuous,
bounded, and decreasing on $[0,\infty)$. ∎
Now, we are ready to state and prove the main result of this section.
###### Theorem 5.5.
Under Assumptions 3.2 and 5.1, the rescaled Young diagrams converge to the
limit shape $y=\omega_{q}^{*}(x)$ in
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-probability uniformly for $x\geq 0$,
that is,
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\\!\left(\lambda\in\check{\varLambda}^{q}\colon\sup_{x\geq
0}\,\bigl{|}\mbox{$\>\\!$}\widetilde{Y}_{\lambda}(x)-\omega_{q}^{*}(x)\bigr{|}>\varepsilon\right)\to
0.$
###### Proof.
By virtue of Theorem 5.4, letting
$Y^{0}_{\lambda}(x):=Y_{\lambda}(x)-{\mathsf{E}}_{\bm{z}}\bigl{(}Y_{\lambda}(x)\bigr{)}$
it suffices to check that
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\\!\left(\sup_{x\geq
0}\,\bigl{|}Y^{0}_{\lambda}(Ax)\bigr{|}>B\mbox{$\>\\!$}\varepsilon\right)\to
0.$ (5.30)
Put $Z_{\lambda}(x):=Y_{\lambda}(x^{-1})$ for $0\leq x\leq\infty$; in
particular, $Z_{\lambda}(0)=Y_{\lambda}(\infty)=0$,
$Z_{\lambda}(\infty)=Y_{\lambda}(0)=M_{\lambda}$. By the definition (2.2), for
any $0<s<t\leq\infty$ we have
$Z_{\lambda}(t)-Z_{\lambda}(s)=Y_{\lambda}(t^{-1})-Y_{\lambda}(s^{-1})=\sum_{t^{-1}\leq\mbox{$\>\\!$}\ell\mbox{$\>\\!$}<s^{-1}}\\!\nu_{\ell}\mbox{$\;\\!$},$
which implies that the random process $Z_{\lambda}(x)$ ($x\geq 0$) has
independent increments. Hence,
$Z^{0}_{\lambda}(x):=Z_{\lambda}(x)-{\mathsf{E}}_{\bm{z}}\bigl{(}Z_{\lambda}(x)\bigr{)}$
is a martingale with respect to the filtration
${\mathcal{F}}_{x}=\sigma\\{\nu_{\ell}\mbox{$\>\\!$},\,\ell\geq x^{-1}\\}$.
From (2.2) it is also evident that $Z^{0}_{\lambda}(x)$ is _càdlàg_ (i.e., its
paths are everywhere right-continuous and have left limits, cf. Figure 3).
Therefore, by the Doob–Kolmogorov submartingale inequality (see, e.g., [89,
Theorem 6.16, p. 101]) we obtain
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\\!\left(\sup_{x\geq
0}\,\bigl{|}Y^{0}_{\lambda}(Ax)\bigr{|}>B\mbox{$\>\\!$}\varepsilon\right)$
$\displaystyle\equiv{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\\!\left(\sup_{y\leq\infty}\mbox{$\>\\!\\!$}|Z^{0}_{\lambda}(y\mbox{$\>\\!$}A^{-1})|>B\mbox{$\>\\!$}\varepsilon\right)$
$\displaystyle\leq\frac{{\mathsf{Var}}_{\bm{z}}\bigl{(}Z_{\lambda}(\infty)\bigr{)}}{B^{2}\mbox{$\>\\!$}\varepsilon^{2}}=\frac{{\mathsf{Var}}_{\bm{z}}\bigl{(}Y_{\lambda}(0)\bigr{)}}{B^{2}\mbox{$\>\\!$}\varepsilon^{2}}.$
(5.31)
Recalling that $Y_{\lambda}(0)=M_{\lambda}$ and using Theorem 3.9, the right-
hand side of (5.31) is estimated by $O(\langle M\rangle^{-1})$. Thus, the
claim (5.30) follows and the proof of Theorem 5.5 is complete. ∎
Convergence of normalized Young diagrams to their limit shape is illustrated
in Figure 3 for $q=1$ and $q=2$. Random partitions were simulated using a
suitable Boltzmann sampler implemented as Algorithm 1 (see Section 6.1).
Figure 3: Illustration of convergence to the limit shape for $q=1$ and $q=2$
(in the online version shown in blue and red, respectively). The step plots
depict the upper boundary of the scaled Young diagrams (see (5.24)), while the
smooth lines represent the limit shape $\omega_{q}^{*}(x)=1-G_{1/q}(x)$ (see
(5.22)). The corresponding partitions $\lambda\in\check{\varLambda}^{q}$ were
sampled using Algorithm 1 with hyper-parameters $\braket{M}=50$ and
$\braket{N}=2.5\cdot 10^{5}$ ($q=1$) or $\braket{N}=1.25\cdot 10^{7}$ ($q=2)$;
in both cases, $\kappa=0.01$ (cf. Assumption 3.2). The respective sample
weight and length are $N_{\lambda}=236{,}\mbox{$\>\\!$}369$, $M_{\lambda}=52$
($q=1$) and $N_{\lambda}=12{,}\mbox{$\>\\!$}733{,}\mbox{$\>\\!$}323$,
$M_{\lambda}=45$ ($q=2$).
Finally, we can analyse asymptotic fluctuations of scaled Young diagrams.
###### Theorem 5.6.
Under Assumptions 3.2 and 5.1, for any $x>0$ the random value
$\widetilde{Y}_{\lambda}(x)$ is asymptotically normal with variance
$\omega_{q}^{*}(x)/\langle M\rangle$, that is,
$\widetilde{Y}^{*}_{\lambda}(x):=\frac{\sqrt{\braket{M}}\left(\widetilde{Y}_{\lambda}(x)-{\mathsf{E}}_{\bm{z}}\bigl{(}\widetilde{Y}_{\lambda}(x)\bigr{)}\right)}{\sqrt{\mbox{$\>\\!$}\omega_{\smash{q}}^{*}(x)}}\stackrel{{\scriptstyle\mathrm{d}}}{{\longrightarrow}}\mathcal{N}(0,1).$
(5.32)
###### Proof.
Consider the characteristic function of $\widetilde{Y}^{*}_{\lambda}(x)$,
$\varphi_{\bm{z}}(t;x):={\mathsf{E}}_{\bm{z}}\bigl{[}\exp\mbox{$\;\\!\\!$}\bigl{(}\mathrm{i}\mbox{$\;\\!$}t\mbox{$\;\\!$}\widetilde{Y}^{*}_{\lambda}(x)\bigr{)}\bigr{]}\qquad(t\in\mathbb{R}).$
(5.33)
Substituting the definition (5.32) and using (5.23), (5.24) and (5.25), this
is transformed as
$\varphi_{\bm{z}}(t;x)=\exp\mbox{$\;\\!\\!$}\bigl{\\{}-\mathrm{i}\mbox{$\;\\!$}\tilde{t}\,{\mathsf{E}}_{\bm{z}}\bigl{(}Y_{\lambda}(Ax)\bigr{)}\mbox{$\;\\!$}\bigr{\\}}\,{\mathsf{E}}_{\bm{z}}\bigl{[}\exp\mbox{$\;\\!\\!$}\bigl{(}\mathrm{i}\mbox{$\;\\!$}\tilde{t}\,Y_{\lambda}(Ax)\bigr{)}\bigr{]},\qquad\tilde{t}:=\frac{t}{\sqrt{\braket{M}\omega_{\smash{q}}^{*}(x)}}\mbox{$\>\\!$},$
(5.34)
and furthermore (see (5.27))
${\mathsf{E}}_{\bm{z}}\bigl{(}Y_{\lambda}(Ax)\bigr{)}=\sum_{\ell\geq
Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}.$ (5.35)
Next, similarly to (5.6) and (5.7) the last expectation in (5.34) is expressed
as
${\mathsf{E}}_{\bm{z}}\\!\left[\exp\left(\mathrm{i}\mbox{$\;\\!$}\tilde{t}\sum_{\ell\geq
Ax}\nu_{\ell}\right)\right]=\prod_{\ell\geq
Ax}\frac{1+z_{1}^{\ell}z_{2}\,\mathrm{e}^{\mbox{$\;\\!$}\mathrm{i}\mbox{$\>\\!$}\tilde{t}}}{1+z_{1}^{\ell}z_{2}}=\prod_{\ell\geq
Ax}\bigl{(}1+w_{\ell}(\tilde{t}\mbox{$\>\\!$})\bigr{)},$ (5.36)
where
$w_{\ell}(t):=\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\,(\mathrm{e}^{\mbox{$\;\\!$}\mathrm{i}\mbox{$\>\\!$}t}-1).$
(5.37)
Choosing the principal branch of the logarithm and using (5.35) and (5.36),
from (5.34) we get
$\log\varphi_{\bm{z}}(t;x)=-\mathrm{i}\mbox{$\;\\!$}\tilde{t}\sum_{\ell\geq
Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}+\sum_{\ell\geq
Ax}\log\bigl{(}1+w_{\ell}(\tilde{t}\mbox{$\>\\!$})\bigr{)}.$ (5.38)
In turn, similarly to (5.12) we obtain
$\sum_{\ell\geq
Ax}\log\bigl{(}1+w_{\ell}(\tilde{t}\mbox{$\>\\!$})\bigr{)}=\Bigl{(}\mathrm{i}\mbox{$\;\\!$}\tilde{t}-\tfrac{1}{2}\mbox{$\>\\!$}\tilde{t}^{2}+O(\tilde{t}^{3})\Bigr{)}\sum_{\ell\geq
Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}+O(\tilde{t}^{2})\sum_{\ell\geq
Ax}\frac{z_{1}^{2\ell}z_{2}^{2}}{(1+z_{1}^{\ell}z_{2})^{2}}.$ (5.39)
As was shown in the proof of Theorem 5.4 (see (5.27), (5.28) and (5.29)),
$\sum_{\ell\geq Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\sim\langle
M\rangle\,\omega_{q}^{*}(x),\qquad\sum_{\ell\geq
Ax}\frac{z_{1}^{2\ell}z_{2}^{2}}{(1+z_{1}^{\ell}z_{2})^{2}}=\braket{M}O(\kappa^{1/q})=\braket{M}o(1).$
(5.40)
Using (5.40) and recalling the notation of $\tilde{t}$ in (5.34), from (5.39)
we get
$\sum_{\ell\geq
Ax}\log\bigl{(}1+w_{\ell}(\tilde{t}\mbox{$\>\\!$})\bigr{)}=\mathrm{i}\mbox{$\;\\!$}\tilde{t}\sum_{\ell\geq
Ax}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}-\tfrac{1}{2}\mbox{$\>\\!$}t^{2}+o(1).$
Finally, returning to (5.38), we see that $\log\varphi_{\bm{z}}(t;x)\to-\tfrac{1}{2}\mbox{$\>\\!$}t^{2}$, that is, $\varphi_{\bm{z}}(t;x)\to\mathrm{e}^{-t^{2}/2}$, which proves the theorem. ∎
###### Remark 5.3.
The reason for using in (5.32) the intrinsic centering
${\mathsf{E}}_{\bm{z}}\bigl{(}\widetilde{Y}_{\lambda}(x)\bigr{)}$ rather than
the limit shape value $\omega_{q}^{*}(x)$ is that the error terms in the
asymptotic estimates (5.40) are of order $\braket{M}\kappa^{1/q}$, where
$\kappa=\langle M\rangle^{q+1}/\langle N\rangle=o(1)$ (see Assumption 5.1).
Combined with the factor $\tilde{t}=O\bigl{(}\braket{M}^{-1/2}\bigr{)}$, this
produces the error bound of order $\langle
M\rangle^{1/2}\mbox{$\;\\!$}\kappa^{1/q}$, which is not guaranteed to be
small. Thus, a stronger assumption to this end is $\langle
M\rangle^{1/2}\mbox{$\>\\!$}\kappa^{1/q}=o(1)$, that is, $\langle
M\rangle^{1+3\mbox{$\>\\!$}q/2}/\langle N\rangle=o(1)$. On the other hand,
lifting any control over the length may restore the limit-shape centering; for
example, for $q=1$ (ordinary strict partitions), a central limit theorem of that
kind was proved in [80].
### 5.3 A joint limit theorem for the extreme parts (growing expected length)
We use the notation $\lambda_{\rm min}$ and $\lambda_{\rm max}$ (cf. Section
4.3).
###### Theorem 5.7.
Under Assumptions 3.2 and 5.1, set
$b_{q}:=\left(\frac{q\mbox{$\;\\!\\!$}\braket{M}}{\displaystyle\Gamma(1/q)}\right)^{\mbox{$\;\\!\\!$}q}\\!,\qquad
B_{q}:=\log\mbox{$\;\\!\\!$}\langle
M\rangle-\bigl{(}1-\tfrac{1}{q}\bigr{)}\log\log\mbox{$\;\\!\\!$}\langle
M\rangle-\log\Gamma(1/q),$ (5.41)
and consider the normalized versions of $\lambda_{\rm min}$ and $\lambda_{\rm
max}$ defined as follows,
$\lambda_{\rm min}^{*}:=\gamma\mbox{$\;\\!$}b_{q}\mbox{$\;\\!$}\lambda_{\rm
min},\qquad\lambda_{\rm max}^{*}:=\gamma\mbox{$\>\\!$}\lambda_{\rm
max}-B_{q}\mbox{$\;\\!$}.$ (5.42)
Then $\lambda^{*}_{\rm min}$ and $\lambda^{*}_{\rm max}$ are asymptotically
independent under the measure ${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$ as
$\braket{N}\to\infty$ and their marginal limiting laws are given,
respectively, by a Weibull distribution with shape parameter $1/q$ and the
standard double-exponential (Gumbel) distribution,
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\rm
min}^{*}>x_{1})$
$\displaystyle\to\exp\mbox{$\;\\!\\!$}\bigl{(}-x_{1}^{1/q}\bigr{)},\qquad$
$\displaystyle x_{1}\geq 0,$ (5.43)
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\rm max}^{*}\leq
x_{2})$
$\displaystyle\to\exp\mbox{$\;\\!\\!$}\bigl{(}-\mathrm{e}^{-x_{2}}\bigr{)},$
$\displaystyle x_{2}\in\mathbb{R}.$ (5.44)
###### Proof.
For $x_{1}\geq 0$ and $x_{2}\in\mathbb{R}$, set
$\ell^{*}_{1}(x_{1}):=\min\\{\ell\in\mathbb{N}^{q}\colon\ell>x_{1}/(\gamma\mbox{$\;\\!$}b_{q})\\},\qquad\ell^{*}_{2}(x_{2}):=\min\\{\ell\in\mathbb{N}^{q}\colon\ell>(B_{q}+x_{2})\mbox{$\;\\!$}\gamma^{-1}\\}.$
Recalling the asymptotic relations (3.25) and (3.22), observe that
$\displaystyle\gamma\mbox{$\;\\!$}\ell^{*}_{1}(x_{1})$
$\displaystyle=\frac{x_{1}}{b_{q}}+O(\gamma)\sim
x_{1}\left(\frac{\Gamma(1/q)}{\displaystyle
q\braket{M}}\right)^{\mbox{$\;\\!\\!$}q}\\!,$ (5.45)
$\displaystyle\gamma\mbox{$\;\\!$}\ell^{*}_{2}(x_{2})$
$\displaystyle=B_{q}+x_{2}+O(\gamma)\sim\log\braket{M}\mbox{$\>\\!\\!$}.$
(5.46)
Like in the proof of Theorem 4.4, we have
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\rm
min}^{*}>x_{1},\,\lambda_{\rm max}^{*}\leq x_{2})$
$\displaystyle=\exp\Biggl{\\{}-\Biggl{(}\mbox{$\>\\!$}\sum_{\ell\in\mathbb{N}^{q}}^{\infty}-\sum_{\ell\geq\ell^{*}_{1}(x_{1})}^{\infty}+\sum_{\ell\geq\ell^{*}_{2}(x_{2})}^{\infty}\Biggr{)}\log\mbox{$\>\\!\\!$}\\!\left(1+z_{1}^{\ell}z_{2}\right)\Biggr{\\}}.$
Applying Lemmas 3.3 and 3.5 (with $\gamma=-\log z_{1}$ and $\eta=z_{2}$) and
using the asymptotic relation (3.35), we obtain
$-\log{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda^{*}_{\rm
min}>x_{1},\,\lambda^{*}_{\rm max}\leq
x_{2})\sim\frac{\braket{M}}{\Gamma(1/q)}\left(\int_{0}^{\gamma\mbox{$\>\\!$}\ell^{*}_{1}(x_{1})}\\!+\int_{\gamma\mbox{$\>\\!$}\ell^{*}_{2}(x_{2})}^{\infty}\right)u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}.$
(5.47)
Integrating by parts and using the asymptotic relation (5.45), we obtain
$\int_{0}^{\gamma\mbox{$\>\\!$}\ell^{*}_{1}(x_{1})}\mbox{$\>\\!\\!$}u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}=q\int_{0}^{\gamma\mbox{$\>\\!$}\ell^{*}_{1}(x_{1})}\\!\mathrm{e}^{-u}\,\mathrm{d}{(u^{1/q})}\sim
q\mbox{$\;\\!$}\bigl{(}\gamma\mbox{$\;\\!$}\ell^{*}_{1}(x_{1})\bigr{)}^{1/q}\sim\frac{\Gamma(1/q)\,x_{1}^{1/q}}{\displaystyle\braket{M}}.$
(5.48)
Next, using (5.46) we get
$\displaystyle\int_{\gamma\mbox{$\>\\!$}\ell^{*}_{2}(x_{2})}^{\infty}\mbox{$\>\\!\\!$}u^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}$
$\displaystyle\sim\bigl{(}\gamma\mbox{$\;\\!$}\ell^{*}_{2}(x_{2})\bigr{)}^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-\gamma\mbox{$\>\\!$}\ell^{*}_{2}(x_{2})}$
$\displaystyle\sim
B_{q}^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-(B_{q}+x_{2})}\sim\frac{\Gamma(1/q)\,\mathrm{e}^{-x_{2}}}{\braket{M}}.$
(5.49)
Hence, substituting (5.48) and (5.49) into (5.47) yields
$\displaystyle-\log{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda^{*}_{\rm
min}>x_{1},\,\lambda^{*}_{\rm max}\leq x_{2})\sim
x_{1}^{1/q}+\mathrm{e}^{-x_{2}},$
which completes the proof of the theorem. ∎
###### Remark 5.4.
The necessary use of the intrinsic calibration parameter $\gamma=-\log z_{1}$
in Theorem 5.7 may be a little disappointing. This can be easily improved
under a slightly stronger condition on slow growth of $\langle M\rangle$ than
in Assumption 5.1; namely, $\kappa^{1/q}\log\langle M\rangle=o(1)$, that is,
$\langle M\rangle^{q+1}\mbox{$\;\\!$}(\log\mbox{$\>\\!\\!$}\langle
M\rangle)^{q}/\langle N\rangle=o(1)$. In this case, the normalization (5.42)
can be written more explicitly by replacing $\gamma$ with $\gamma_{0}=\langle
M\rangle/(q\mbox{$\;\\!$}\langle N\rangle)$ (see (3.34)).
###### Corollary 5.8.
Under the hypotheses of Theorem 5.7, the following law of large numbers holds,
$\frac{\langle M\rangle\mbox{$\>\\!$}\lambda_{\rm max}}{q\mbox{$\;\\!$}\langle
N\rangle\log\mbox{$\>\\!\\!$}\langle
M\rangle}\stackrel{{\scriptstyle\mathrm{p}}}{{\longrightarrow}}1,$ (5.50)
where the symbol $\stackrel{{\scriptstyle\mathrm{p}}}{{\to}}$ indicates
convergence in ${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-probability.
###### Proof.
Theorem 5.7 implies that $\gamma\mbox{$\>\\!$}\lambda_{\rm
max}/B_{q}\stackrel{{\scriptstyle\mathrm{p}}}{{\to}}1$, and the claim (5.50)
follows by noting that $\gamma\sim\gamma_{0}=\langle
M\rangle/(q\mbox{$\;\\!$}\langle N\rangle)$ and
$B_{q}\sim\log\mbox{$\>\\!\\!$}\langle M\rangle$. ∎
###### Remark 5.5.
Theorem 5.7 indicates that, under the condition of slow growth of
$\braket{M}$, the smallest part $\lambda_{\mathrm{min}}$ of a
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-typical partition
$\lambda\in\check{\varLambda}^{q}$ “lives” on the scale
$A_{*}=(\gamma\mbox{$\;\\!$}b_{q})^{-1}\propto\langle N\rangle/\langle
M\rangle^{q+1}=\kappa^{-1}$. On the other hand, Corollary 5.8 shows that the
scale of variation of the largest part $\lambda_{\mathrm{max}}$ is given by
$A^{*}\mbox{$\>\\!\\!$}=B_{q}\mbox{$\;\\!$}\gamma^{-1}\\!\propto\\!\langle
N\rangle\log\mbox{$\>\\!\\!$}\langle M\rangle/\langle M\rangle$. This is to be
compared with the typical behaviour in the bulk of the partition “spectrum”,
where the scale of variation is given by $A\sim\gamma^{-1}\propto\langle
N\rangle/\langle M\rangle$.
###### Remark 5.6.
Continuing an asymptotic linkage between the cases of fixed or slowly growing
parameter $\langle M\rangle$, observed above in Remark 5.1, the limiting
distributions of Theorem 4.4 formally conform to Theorem 5.7 in the limit as
$\langle M\rangle\to\infty$. Indeed, using (4.43) we have
$\displaystyle-\log G_{\rm max}(x+B_{q})$
$\displaystyle=\frac{\braket{M}\mbox{$\>\\!$}\Gamma(1/q,x+B_{q})}{\Gamma(1/q)}$
$\displaystyle\sim\frac{\braket{M}}{\Gamma(1/q)}\,(x+B_{q})^{1/q-1}\mbox{$\;\\!$}\mathrm{e}^{-x-B_{q}}$
$\displaystyle\sim\frac{\braket{M}}{\Gamma(1/q)}\,(\log\mbox{$\>\\!\\!$}\langle
M\rangle)^{1-1/q}\,\mathrm{e}^{-x}\cdot\mathrm{e}^{-B_{q}}=\mathrm{e}^{-x},$
according to the definition of $B_{q}$ in (5.41). Similarly, using (4.44) and
(5.48) we have
$\displaystyle-\log G^{c}_{\rm min}(x/b_{q})$
$\displaystyle=\braket{M}\\!\left(1-\frac{\Gamma(1/q,x/b_{q})}{\Gamma(1/q)}\right)$
$\displaystyle\sim\frac{\braket{M}\mbox{$\;\\!\\!$}q}{\Gamma(1/q)}\left(\frac{x}{b_{q}}\right)^{1/q}=x^{1/q},$
by the definition of $b_{q}$ in (5.41).
###### Remark 5.7 (Possible generalization to non-integral powers).
The reader may have observed that our main results (including Theorems 4.1 and
5.1 about joint limiting laws of weight $N_{\lambda}$ and length
$M_{\lambda}$; Theorems 4.4 and 5.7 about the joint asymptotics of extreme
parts; the limit shape Theorems 5.5 and 5.6; and the cardinality Theorem 4.3)
continue to make sense _for any real power $q>0$_, although our proofs proceed
from the assumption that $q$ is an integer, which makes the partition model combinatorially well defined. The interest in non-integral powers is not new
but was mostly motivated by applications in statistical physics [3, 19, 68]. A
mathematically meaningful interpretation of partitions with non-integral power
parts may be based on the idea recently proposed by Lipnik et al. [55],
whereby the multiplicative Boltzmann structure is introduced on the hidden
“substrate” $\mathbb{N}=\\{k\\}$, from which the $q$-power parts of the random
partition $\lambda=(\ell_{i})$ are formed as $\ell=\lfloor k^{q}\rfloor$, with
any $q\in\mathbb{R}_{+}$. We will review and extend our results under this
approach in a separate paper.
## 6 Application to random sampling
_Boltzmann sampling_ is a powerful technique conceptualized, streamlined and
popularized by Duchon et al. [23] in the context of single-parameter
combinatorial structures (for multi-parametric extensions, see Bendkowski et
al. [10] and the references therein). Random integer partitions with
controlled expected weight and length provide an “exactly soluble” instance of
a two-parametric combinatorial structure, where the issues of Boltzmann
sampling implementation and efficiency can be analysed in some depth.
Specifically, in this section we discuss sampling from the Boltzmann
distribution on partition spaces $\check{\varLambda}^{q}$ (i.e., into distinct
$q$-power parts), calibrated under the predefined hyper-parameters
$\braket{N}$ and $\braket{M}$, which have the meaning of the expected weight
and length, respectively. The two controlling parameters in question are
$z_{1}$ and $z_{2}$, which are amenable to asymptotic analysis as was shown in
Section 3.3. Once these parameters are fixed, due to the mutual independence
of the multiplicities $(\nu_{\ell})$ (see Proposition 2.1 and Lemma 3.1), the
Boltzmann sampling is essentially reduced to an iterated independent testing
of potential parts $\ell=j^{q}$ via dichotomous (Bernoulli) random trials with
success probabilities
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\nu_{\ell}=1)=z_{1}^{\ell}z_{2}\mbox{$\;\\!$}(1+z_{1}^{\ell}z_{2})^{-1}$.
The practical implementation of such sampling algorithms thus relies on a
random number generator $\mathtt{Ber}(p)$, in each call producing an
independent pseudo-random value $1$ or $0$ with probabilities $p$ and $1-p$,
respectively.
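To fix ideas, here is a minimal free-sampler sketch in C (our illustration only, not the paper's Algorithm 1): parts $\ell=j^{q}$ are tested one by one with the Bernoulli probabilities above, using the crude calibration (6.1) of Section 6.1.1 and a hard truncation of the parts pipeline in place of the stopping rule of Section 6.1.2; the hyper-parameters, the seed and the truncation bound are arbitrary choices, and the standard rand() stands in for a proper generator.

```c
/* Minimal free Boltzmann sampler sketch (not Algorithm 1): a partition into
   distinct q-power parts, each part ell = j^q accepted independently with
   probability z1^ell * z2 / (1 + z1^ell * z2).  Calibration follows the
   crude choice (6.1); J_MAX is a crude stand-in for the stopping rule. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define J_MAX 1000000L   /* truncation of the parts pipeline (illustrative) */

int main(void) {
    int q = 1;
    double meanM = 50.0, meanN = 2.5e5;          /* hyper-parameters <M> and <N> */
    double gamma0 = meanM / (q * meanN);
    double z1 = exp(-gamma0);                    /* crude calibration (6.1) */
    double z2 = meanM * pow(gamma0, 1.0 / q) / tgamma(1.0 + 1.0 / q);

    srand(2022);                                 /* arbitrary fixed seed */
    double N = 0.0;
    long   M = 0;
    for (long j = 1; j <= J_MAX; j++) {
        double ell  = pow((double)j, (double)q);
        double w    = pow(z1, ell) * z2;
        double prob = w / (1.0 + w);             /* P(nu_ell = 1) */
        if ((double)rand() / RAND_MAX < prob) {  /* Bernoulli trial Ber(prob) */
            N += ell;
            M += 1;
        }
    }
    printf("sampled partition: weight N = %.0f, length M = %ld\n", N, M);
    return 0;
}
```

A typical run returns a weight and a length of the order of the prescribed $\braket{N}$ and $\braket{M}$ (slightly below on average, in line with the negative bias discussed in Section 6.1.1).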
It is convenient to distinguish between the _free samplers_ and the _rejection
samplers_ , with the former just producing independent random realizations of
partitions under the said Boltzmann distribution, and the latter comprising
one or more rejection loops that iterate a free Boltzmann sampler until the
desired targets are met. We discuss these two versions in Sections 6.1 and
6.2, respectively.
Computer codes were implemented using the programming language C and Intel®
oneAPI DPC++ compiler, and run on a desktop CPU Intel® CoreTM i5-10600
(processor base frequency 3.30 GHz, turbo boost frequency 4.80 GHz). Numerical
calculations were carried out using MapleTM (Release 2022.1, licensed to the
University of Leeds).
### 6.1 Free sampler
In this subsection, we delineate a free Boltzmann sampler (see Algorithm 1
below) under the calibration through the hyper-parameters $\braket{N}$ and
$\braket{M}$. It should be noted that, despite the intuitive appeal of iterated Bernoulli-type tests, there are some implementation concerns that have to be
addressed. We discuss them below before presenting the algorithm.
#### 6.1.1 Correcting the bias
The first issue to consider is that of choosing the control parameters $z_{1}$
and $z_{2}$ to ensure that the sampler is unbiased, that is,
${\mathsf{E}}_{\bm{z}}(N_{\lambda})=\braket{N}$ and
${\mathsf{E}}_{\bm{z}}(M_{\lambda})=\braket{M}$. Unfortunately, we can solve
this set of equations only asymptotically (see Lemma 3.1). In a “crude”
version of Algorithm 1, we use the leading terms in the asymptotics by setting
(cf. (3.25) and (3.26))
$z_{1}=\mathrm{e}^{-\gamma_{0}},\qquad
z_{2}=\frac{\braket{M}\gamma_{0}^{1/q}}{\Gamma(1+1/q)},$ (6.1)
where $\gamma_{0}=\langle M\rangle/(q\mbox{$\;\\!\\!$}\braket{N})$.
Inevitably, this causes a bias in the resulting expectations. More precisely,
the first source of this bias clearly comes from dropping the (positive)
remainder terms $R_{1}(\bm{z})$ and $R_{2}(\bm{z})$ in the approximate series
representations of the aforementioned expected values (see equations (3.27)
and (3.30)). A further error occurs when replacing the resulting series with
the corresponding integrals, using Lemma 3.2.
In fact, one can show that the overall bias due to (6.1) is always negative.
Indeed, recalling that $\Delta_{0}(\gamma)<0$ (see (3.11); the inequality can also be seen directly from the monotonicity of the function $x\mapsto\mathrm{e}^{-\gamma\mbox{$\>\\!$}x^{q}}$), we have
$\displaystyle{\mathsf{E}}_{\bm{z}}(M_{\lambda})=\sum_{\ell\in\mathbb{N}^{q}}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}<z_{2}\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{\ell}$
$\displaystyle=z_{2}\sum_{j=1}^{\infty}\mathrm{e}^{-\gamma_{0}\mbox{$\>\\!$}j^{q}}$
$\displaystyle<z_{2}\int_{0}^{\infty}\\!\mathrm{e}^{-\gamma_{0}\mbox{$\>\\!$}x^{q}}\mathrm{d}{x}=\frac{z_{2}\,\Gamma(1/q)}{q\,\gamma_{0}^{1/q}}=\langle
M\rangle,$
according to the parameter choice (6.1). Turning to
${\mathsf{E}}_{\bm{z}}(N_{\lambda})$, recall from (3.31) that the error term
$-R_{2}(\bm{z})$ is negative, and furthermore,
$R_{2}(\bm{z})\sim
z_{2}^{2}\sum_{\ell\in\mathbb{N}^{q}}\ell\mbox{$\>\\!$}z_{1}^{2\ell}\sim\frac{z_{2}\,q\mbox{$\;\\!\\!$}\braket{M}\mbox{$\;\\!\\!$}\gamma_{0}^{1/q}}{\Gamma(1/q)}\cdot\frac{\Gamma(1+1/q)}{q\,\gamma_{0}^{1+{1/q}}}=\frac{z_{2}\braket{M}}{q\,\gamma_{0}}.$
(6.2)
On the other hand, the error due to replacing the sum
$\sum_{\ell}\ell\mbox{$\>\\!$}z_{1}^{\ell}$ in (3.30) by the corresponding
integral is bounded, according to (3.16), by
$z_{2}\,O\bigl{(}\gamma_{0}^{-1+1/q}\bigr{)}$. Since $\braket{M}$ is bounded
away from zero, it follows that the $R_{2}$-term (6.2) is dominant and,
therefore, the overall bias in targeting $\braket{N}$ is negative.
A practical recipe towards correcting the bias may be to move the error terms
$R_{1}(\bm{z})$ and $R_{2}(\bm{z})$ to the left-hand side of equations in
(3.27) and (3.30), for simplicity using their integral approximations.
Effectively, this amounts to redefining the hyper-parameters,
$\displaystyle\braket{\tilde{N}}$
$\displaystyle:=\braket{N}+z_{2}^{2}\sum_{\ell\in\mathbb{N}^{q}}\ell\mbox{$\>\\!$}z_{1}^{2\ell}\approx\braket{N}+z_{2}^{2}\int_{0}^{\infty}\\!x^{q}\,\mathrm{e}^{-2\mbox{$\>\\!$}\gamma_{0}\mbox{$\>\\!$}x^{q}}\mbox{$\>\\!$}\mathrm{d}{x}=\braket{N}+\frac{\braket{M}^{2}\gamma_{0}^{1/q-1}}{2^{1+1/q}\,\Gamma(1/q)},$
$\displaystyle\braket{\tilde{M}}$
$\displaystyle:=\braket{M}+z_{2}^{2}\sum_{\ell\in\mathbb{N}^{q}}z_{1}^{2\ell}\approx\braket{M}+z_{2}^{2}\int_{0}^{\infty}\\!\mathrm{e}^{-2\mbox{$\>\\!$}\gamma_{0}\mbox{$\>\\!$}x^{q}}\mbox{$\>\\!$}\mathrm{d}{x}=\braket{M}+\frac{q\mbox{$\;\\!\\!$}\braket{M}^{2}\gamma_{0}^{1/q}}{2^{1/q}\,\Gamma(1/q)}.$
Accordingly, we redefine
$\tilde{\gamma_{0}}=\langle\tilde{M}\rangle/(q\mbox{$\;\\!\\!$}\braket{\tilde{N}})$
and (cf. (6.1))
$\tilde{z}_{1}=\mathrm{e}^{-\tilde{\gamma_{0}}},\qquad\tilde{z}_{2}=\frac{q\mbox{$\;\\!\\!$}\braket{\tilde{M}}\tilde{\gamma_{0}}^{1/q}}{\Gamma(1/q)}.$
(6.3)
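For illustration, the two calibration recipes can be sketched in a few lines of Python; this is a minimal sketch (function names are ours), with the remainder sums in the corrected hyper-parameters evaluated numerically rather than through the closed-form integral approximations.
```python
import math

def leading_term_params(q, N_mean, M_mean):
    """Leading-term calibration (6.1): gamma0 = <M>/(q<N>), z1 = exp(-gamma0),
    z2 = <M> * gamma0^(1/q) / Gamma(1 + 1/q)."""
    gamma0 = M_mean / (q * N_mean)
    z1 = math.exp(-gamma0)
    z2 = M_mean * gamma0 ** (1.0 / q) / math.gamma(1.0 + 1.0 / q)
    return z1, z2

def corrected_params(q, N_mean, M_mean, eps=1e-16):
    """Heuristic bias correction (6.3): add the remainder sums
    z2^2 * sum_l l*z1^(2l) and z2^2 * sum_l z1^(2l) (over parts l = j^q)
    to <N> and <M>, then recalibrate with the leading-term formulas."""
    z1, z2 = leading_term_params(q, N_mean, M_mean)
    s_weight = s_length = 0.0
    j = 1
    while True:
        l = j ** q
        term = z1 ** (2 * l)
        if term < eps:
            break
        s_weight += l * term
        s_length += term
        j += 1
    N_tilde = N_mean + z2 ** 2 * s_weight
    M_tilde = M_mean + z2 ** 2 * s_length
    return leading_term_params(q, N_tilde, M_tilde)

# Example: corrected_params(1, 10**6, 100) gives z2 close to 0.0101 (cf. Table 1).
```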
A numerical illustration of the proposed modification is presented in Table 1,
showing a significant reduction of bias. The expected values were computed
from the exact series expansions (3.27) and (3.30) using Maple.
Table 1: Expected values of $N_{\lambda}$ and $M_{\lambda}$ for $q=1$ and $q=2$ under two choices of the calibrating parameters: using the leading asymptotic terms (6.1) and after a heuristic correction (6.3). $q=1$ | $q=2$
---|---
$\vphantom{\int^{)}}\braket{N}=10^{6}$ | $\braket{\tilde{N}}\doteq 1{,}\mbox{$\>\\!$}002{,}\mbox{$\>\\!$}499.50$ | $\braket{N}=10^{7}$ | $\braket{\tilde{N}}\doteq 10{,}\mbox{$\>\\!$}315{,}\mbox{$\>\\!$}391.57$
$\vphantom{\int^{)}}\braket{M}=100$ | $\braket{\tilde{M}}\doteq 100.4999$ | $\braket{M}=50$ | $\braket{\tilde{M}}\doteq 53.1539$
$z_{1}\doteq 0.9999000$ | $\tilde{z}_{1}=0.9998998$ | $z_{1}\doteq 0.9999975$ | $\tilde{z}_{1}\doteq 0.9999974$
$z_{2}\doteq 0.010000$ | $\tilde{z}_{2}\doteq 0.0100750$ | $z_{2}\doteq 0.0892062$ | $\tilde{z}_{2}\doteq 0.0962720$
${\mathsf{E}}_{\bm{z}}(N_{\lambda})\doteq 997{,}\mbox{$\>\\!$}510.70$ | ${\mathsf{E}}_{\tilde{z}}(N_{\lambda})\doteq 999{,}\mbox{$\>\\!$}985.73$ | ${\mathsf{E}}_{\bm{z}}(N_{\lambda})\doteq 9{,}\mbox{$\>\\!$}699{,}\mbox{$\>\\!$}070.63$ | ${\mathsf{E}}_{\tilde{z}}(N_{\lambda})\doteq 9{,}\mbox{$\>\\!$}981{,}\mbox{$\>\\!$}802.71$
${\mathsf{E}}_{\bm{z}}(M_{\lambda})\doteq 99.498326$ | ${\mathsf{E}}_{\tilde{z}}(M_{\lambda})\doteq 99.992019$ | ${\mathsf{E}}_{\bm{z}}(M_{\lambda})\doteq 47.018388$ | ${\mathsf{E}}_{\tilde{z}}(M_{\lambda})\doteq 49.754495$
If the remaining (small) bias is still an issue, a further recalibration can
be carried out by a suitable refinement of the solution $\bm{z}=(z_{1},z_{2})$
to the equations (3.27) and (3.30), for instance, by using a two-dimensional
Newton–Raphson method. For a general approach to the multidimensional tuning
of parameters based on convex optimization, see Bendkowski et al. [10].
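As an illustration of such a refinement (a sketch only; the truncation index, tolerances and function names are ours), a two-dimensional Newton iteration on the truncated mean equations, started from the leading-term values (6.1), could look as follows.
```python
import math

def means_and_jacobian(z1, z2, q, j_max):
    """Truncated mean equations (cf. (3.27), (3.30)) and their Jacobian:
    E(N) = sum_l l*w/(1+w), E(M) = sum_l w/(1+w), with w = z1^l * z2, l = j^q."""
    EN = EM = dN1 = dN2 = dM1 = dM2 = 0.0
    for j in range(1, j_max + 1):
        l = j ** q
        w = z1 ** l * z2
        d = (1.0 + w) ** 2
        EN += l * w / (1.0 + w)
        EM += w / (1.0 + w)
        dN1 += l * l * w / (z1 * d)   # dE(N)/dz1
        dN2 += l * w / (z2 * d)       # dE(N)/dz2
        dM1 += l * w / (z1 * d)       # dE(M)/dz1
        dM2 += w / (z2 * d)           # dE(M)/dz2
    return EN, EM, dN1, dN2, dM1, dM2

def newton_calibrate(q, N_mean, M_mean, j_max, steps=20, rtol=1e-9):
    """Refine (z1, z2) so that the truncated mean equations match <N> and <M>;
    j_max should comfortably exceed the relevant truncation threshold L^(1/q)."""
    gamma0 = M_mean / (q * N_mean)
    z1 = math.exp(-gamma0)
    z2 = M_mean * gamma0 ** (1.0 / q) / math.gamma(1.0 + 1.0 / q)
    for _ in range(steps):
        EN, EM, a, b, c, d = means_and_jacobian(z1, z2, q, j_max)
        f1, f2 = EN - N_mean, EM - M_mean
        if abs(f1) < rtol * N_mean and abs(f2) < rtol * M_mean:
            break
        det = a * d - b * c
        z1 -= (f1 * d - f2 * b) / det
        z2 -= (f2 * a - f1 * c) / det
    return z1, z2
```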
#### 6.1.2 Truncation of the parts pipeline
We deal with a finitary computation, so the risk of indefinite processing must
be ruled out; that is, the algorithm must have a well-defined stopping rule
guaranteeing termination in finite time. In a free sampler, the
sequence of productive outcomes in successive Bernoulli trials (i.e., with
sample multiplicities $\nu_{\ell}=1$ corresponding to non-zero parts) is
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-a.s. finite (see Lemma 2.2). More
precisely, the last successful trial selects the largest part $\lambda_{\rm
max}$, after which the testing settles down to pure idling. The
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-distribution of $\lambda_{\rm max}$
is given by (cf. Sections 4.3 and 5.3)
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\rm max}\leq L)$
$\displaystyle={\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\nu_{\ell}\equiv 0\
\text{for all}\ \ell>L)$
$\displaystyle=\prod_{\ell>L}\frac{1}{1+z_{1}^{\ell}z_{2}}=\frac{1}{F(\bm{z})}\prod_{\ell\leq
L}(1+z_{1}^{\ell}z_{2}),$ (6.4)
where $F(\bm{z})=\prod_{\ell\in\mathbb{N}^{q}}(1+z_{1}^{\ell}z_{2})$ is the
generating function of the partition space $\check{\varLambda}^{p}$ (see
(3.2)). It is also easy to see that conditioning on $\lambda_{\rm
max}=j_{0}^{q}$ does not change the distribution of the preceding
multiplicities $\\{\nu_{j^{q}},1\leq j\leq j_{0}-1\\}$, that is, they remain
mutually independent and with Bernoulli distributions (3.1). Thus, if the
numerical value of $F(\bm{z})$ can be calculated in advance, a convention
commonly known in computing as an _oracle_ (see, e.g., [23, 30, 10]), then
we can sample the random value $\lambda_{\rm max}$ using formula (6.4) and
then sample independently the preceding candidate parts via the respective
Bernoulli trials. Unfortunately, this approach embeds a computational error
through the numerical calculation of $F(\bm{z})$, so it is not quite “exact”;
besides, convergence of the infinite product may not be fast, given that the
parameter $z_{1}$ is close to $1$ (see (3.25)). Specifically, using Lemma 3.5
one can check that the truncation error arising from a partial product up to
$\ell_{*}$ is of order
$\braket{M}(\gamma\mbox{$\>\\!$}\ell_{*})^{1/q-1}\mbox{$\>\\!$}\mathrm{e}^{-\gamma\mbox{$\>\\!$}\ell_{*}}$,
which dictates that $\ell_{*}$ be chosen much bigger than $\gamma^{-1}\sim
q\mbox{$\;\\!$}\langle N\rangle/\langle M\rangle$.
An alternative idea is to truncate the pipeline of potential parts
$\ell\in\mathbb{N}^{q}$ subject to testing at an appropriate threshold $L$
(see Section 2.5), so that the Bernoulli testing only runs over $\ell\leq L$.
A simple pragmatic solution is to choose the threshold $L$ so that the
probability of exceeding it in an indefinite free sampler is small enough,
that is,
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\mathrm{max}}>L)\leq\delta,$
(6.5)
where the confidence tolerance $\delta>0$ can be chosen in advance to be as
small as desired. Then the corresponding threshold $L=L(\delta)$ can be
determined from a suitable limit theorem for the largest part, namely, Theorem
4.4 if $\braket{M}>0$ is fixed, or Theorem 5.7 for slow growth of
$\braket{M}$. In the former case, threshold $L$ is determined by the
asymptotic equation (see (4.43))
$\Gamma(1/q,\gamma_{0}L)=\frac{\Gamma(1/q)}{\braket{M}}\cdot\log\frac{1}{1-\delta},$
(6.6)
where, as before,
$\gamma_{0}=\braket{M}\mbox{$\>\\!\\!$}/(q\mbox{$\;\\!\\!$}\braket{N})$. In
the latter case, we obtain from (5.44)
$L=\frac{1}{\gamma_{0}}\left(B_{q}-\log\log\frac{1}{1-\delta}\right)\\!,$
(6.7)
where (see (5.41))
$B_{q}=\log\mbox{$\;\\!\\!$}\langle
M\rangle-\bigl{(}1-\tfrac{1}{q}\bigr{)}\log\log\mbox{$\;\\!\\!$}\langle
M\rangle-\log\Gamma(1/q).$
Note that for $q=1$ the bounds (6.6) and (6.7) coincide, reducing to
$L=\frac{1}{\gamma_{0}}\left(\log\mbox{$\;\\!\\!$}\langle
M\rangle-\log\log\frac{1}{1-\delta}\right)\\!.$ (6.8)
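A minimal Python sketch of this calculation is given below; it relies on the closed-form expressions (6.7) and (6.8) (solving (6.6) exactly would additionally require a numerical root finder for the incomplete gamma function), and rounds the result down to the nearest $q$-th power, as in Table 2.
```python
import math

def threshold_L(q, N_mean, M_mean, delta):
    """Upper cutoff L for the largest part with confidence 1 - delta,
    via (6.8) for q = 1 and (6.7) with B_q from (5.41) for q >= 2."""
    gamma0 = M_mean / (q * N_mean)
    loglog = math.log(math.log(1.0 / (1.0 - delta)))
    if q == 1:
        L = (math.log(M_mean) - loglog) / gamma0
    else:
        B_q = (math.log(M_mean)
               - (1.0 - 1.0 / q) * math.log(math.log(M_mean))
               - math.log(math.gamma(1.0 / q)))
        L = (B_q - loglog) / gamma0
    return int(L ** (1.0 / q)) ** q   # round down to the nearest q-th power

# Example: threshold_L(1, 10**6, 100, 0.1) is close to 68,555 (cf. Table 2).
```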
An illustration of the evaluation of the threshold $L$ is presented in Table 2
for $q=1$ and $q=2$. The equation (6.6) was solved numerically using Maple. One
can observe from the table that while the value $\delta=10^{-k}$
($k=1,2,\dots$) decreases geometrically, the threshold $L$ grows only about
linearly. Intuitively, this is explained by the fact that the
confidence probability $1-\delta$ enters expressions (6.6) and (6.7) under the
logarithm, that is, as $\log\left(1-\delta\right)$. More precisely, it readily
follows from (6.7) that
$L-\frac{B_{q}}{\gamma_{0}}\sim\frac{1}{\gamma_{0}}\log\frac{1}{\delta}\qquad(\delta\to
0{+}).$
Likewise, equation (6.6) asymptotically solves to yield
$L-\frac{\log\braket{M}}{\gamma_{0}\,\Gamma(1/q)}\sim\frac{1}{\gamma_{0}}\log\frac{1}{\delta}\qquad(\delta\to
0{+}).$
Table 2: Threshold $L$ for the largest part $\lambda_{\rm{max}}$ with confidence probability $1-\delta$, calculated from expressions (6.8) ($q=1$) and (6.6) or (6.7) ($q=2$) and rounded down to the nearest $q$-th power. $\vphantom{\int^{t}}q=1$, $\braket{N}=10^{6}$, $\braket{M}=100$ | $q=2$, $\braket{N}=10^{7}$, $\braket{M}=50$
---|---
$\vphantom{\int^{t}}\delta$ | $L$ (6.8) | $\delta$ | $L$ (6.6) | $L$ (6.7)
$0.1$ | $68{,}\mbox{$\>\\!$}555$ | $0.1$ | $1{,}\mbox{$\>\\!$}890{,}\mbox{$\>\\!$}625=1375^{2}$ | $1{,}\mbox{$\>\\!$}962{,}\mbox{$\>\\!$}801=1401^{2}\vphantom{\int^{t}}$
$0.01$ | $92{,}\mbox{$\>\\!$}053$ | $0.01$ | $2{,}\mbox{$\>\\!$}762{,}\mbox{$\>\\!$}244=1662^{2}$ | $2{,}\mbox{$\>\\!$}900{,}\mbox{$\>\\!$}209=1703^{2}\vphantom{\int^{t}}$
$0.001$ | $115{,}\mbox{$\>\\!$}124$ | $0.001$ | $3{,}\mbox{$\>\\!$}636{,}\mbox{$\>\\!$}649=1907^{2}$ | $3{,}\mbox{$\>\\!$}825{,}\mbox{$\>\\!$}936=1956^{2}\vphantom{\int^{t}}$
$0.0001$ | $138{,}\mbox{$\>\\!$}154$ | $0.0001$ | $4{,}\mbox{$\>\\!$}515{,}\mbox{$\>\\!$}625=2125^{2}$ | $4{,}\mbox{$\>\\!$}743{,}\mbox{$\>\\!$}684=2178^{2}\vphantom{\int^{t}}$
According to Lemma 2.3 (with
$\tilde{\varLambda}^{\dagger}=\check{\varLambda}_{L}$), the output of a
truncated sampling algorithm follows the Boltzmann distribution with a smaller
source set $\mathbb{A}_{L}=\\{\ell\in\mathbb{N}^{q}\colon\ell\leq L\\}$, which
nonetheless approximates well the target Boltzmann distribution
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$ (see Lemma 2.6). One should be wary
though that truncation contributes to the negative bias (see (2.31) and
(2.32)), which may require a refined calibration through the parameters
$z_{1}$ and $z_{2}$.
Note that the confidence guarantee $1-\delta$ as discussed above is valid only
in the case of a single output instance. If the purpose of the free algorithm
is to produce an independent sample of, say, $k$ random Boltzmann partitions,
then the overall confidence probability is approximately given by
$(1-\delta)^{k}$, which may be exponentially small if $k$ is large while
$\delta$ stays fixed. A simple upper bound for the error probability is based
on the Bernoulli inequality, yielding $1-(1-\delta)^{k}\leq k\delta$. This
motivates the well-known Bonferroni correction, which amounts to choosing the
individual error probability $\delta_{0}=\delta/k$ so as to ensure that the
overall error probability does not exceed $\delta$. As an example, if
$k=1{,}\mbox{$\>\\!$}000$ and we would like to guarantee the overall error
probability bound $\delta=0.1$, then the individual error probability should
be taken as $\delta_{0}=0.0001$. The approximation is quite accurate here, as
the exact solution is $\delta_{0}\doteq 0.000105355$. Clearly, switching from
$\delta$ to $\delta_{0}$ leads to a higher threshold $L$. For instance, Table
2 shows that in the case $q=1$ the suitable threshold $L$ needs to double. In
general, the increase of $L$ due to multiple errors is not really dramatic
because of the logarithmic dependence on $\delta$ mentioned above.
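As a quick numerical check, the exact individual tolerance solves $1-(1-\delta_{0})^{k}=\delta$; a two-line Python computation with the values from the example above confirms the figures quoted.
```python
k, delta = 1000, 0.1
delta0_exact = 1.0 - (1.0 - delta) ** (1.0 / k)   # about 0.000105355
delta0_bonferroni = delta / k                      # 0.0001 (Bonferroni bound)
```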
Another unwelcome outcome of the Bernoulli testing is that it may not return
any parts at all, which has a positive
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-probability even in the infinite
sequence of tests (see Remark 3.3). This is not critical, as the sampling
cycle can be repeated if necessary. However, this may be wasteful; the issue
can be easily rectified by adopting a similar confidence-based approach.
Specifically, one can set a lower cutoff $L_{0}$ such that the run of the
sampler is terminated, and the cycle is repeated, if the Bernoulli tests fail
to select at least one (non-zero) part $\ell\leq L_{0}$. To this end, we
choose $L_{0}$ in such a way that
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{\rm min}>L_{0})\leq\delta$,
where $\delta>0$ is small enough (cf. (6.5)). Again referring to the limit
theorems regarding the smallest part, we obtain from Theorem 4.4 (cf. (6.6))
$\Gamma(1/q)-\Gamma(1/q,\gamma_{0}L_{0})=\frac{\Gamma(1/q)}{\braket{M}}\mbox{$\;\\!$}\log\frac{1}{\delta},$
(6.9)
and from Theorem 5.7 (cf. (6.7))
$L_{0}=\frac{1}{\gamma_{0}}\left(\frac{\Gamma(1/q)}{q\mbox{$\;\\!\\!$}\braket{M}}\log\frac{1}{\delta}\right)^{q}\\!.$
(6.10)
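A small Python helper for the lower cutoff (a sketch using the closed form (6.10) only; equation (6.9) would again require a numerical solution of an incomplete-gamma equation) may be set up as follows.
```python
import math

def lower_cutoff_L0(q, N_mean, M_mean, delta):
    """Lower cutoff L0 for the smallest part with confidence 1 - delta, per (6.10),
    rounded down to the nearest q-th power."""
    gamma0 = M_mean / (q * N_mean)
    L0 = (math.gamma(1.0 / q) / (q * M_mean) * math.log(1.0 / delta)) ** q / gamma0
    return int(L0 ** (1.0 / q)) ** q

# Example: lower_cutoff_L0(1, 10**6, 100, 0.1) gives 230 (cf. Table 3).
```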
A numerical illustration of the confident lower threshold $L_{0}$ is presented
in Table 3 for $q=1$ and $q=2$. The equation (6.9) was solved numerically
using Maple. The match between the results produced via equations (6.9) and
(6.10) is quite close, especially for $q=2$. One should also observe a
significant difference between the thresholds $L_{0}$ and $L$, which underpins
a considerable computational saving due to the lower cutoff, activated
whenever the sampler fails to produce at least one positive part up to
$L_{0}$.
Table 3: Asymptotic threshold $L_{0}$ for the smallest part $\lambda_{\rm{min}}$ with confidence probability $1-\delta$, calculated from expressions (6.9) or (6.10), and rounded down to the nearest $q$-th power. $\vphantom{\int^{t}}q=1$, $\braket{N}=10^{6}$, $\braket{M}=100$ | $q=2$, $\braket{N}=10^{7}$, $\braket{M}=50$
---|---
$\vphantom{\int^{t}}\delta$ | $L_{0}$ (6.9) | $L_{0}$ (6.10) | $\delta$ | $L_{0}$ (6.9) | $L_{0}$ (6.10)
$0.1$ | $232$ | $230$ | $0.1$ | $625=25^{2}$ | $625=25^{2}\vphantom{\int^{t}}$
$0.01$ | $471$ | $460$ | $0.01$ | $2{,}\mbox{$\>\\!$}601=51^{2}$ | $2{,}\mbox{$\>\\!$}601=51^{2}\vphantom{\int^{t}}$
$0.001$ | $715$ | $690$ | $0.001$ | $5{,}\mbox{$\>\\!$}929=77^{2}$ | $5{,}\mbox{$\>\\!$}929=77^{2}\vphantom{\int^{t}}$
$0.0001$ | $966$ | $921$ | $0.0001$ | $10{,}\mbox{$\>\\!$}816=104^{2}$ | $10{,}\mbox{$\>\\!$}609=103^{2}\vphantom{\int^{t}}$
Finally, if both cutoffs $L$ and $L_{0}$ are exercised as described above
then, due to the asymptotic independence of $\lambda_{\rm max}$ and
$\lambda_{\rm min}$, the overall confidence probability is (asymptotically)
given by $(1-\delta)^{2}=1-2\mbox{$\>\\!$}\delta+\delta^{2}$, hence the
resulting error probability is bounded by $2\mbox{$\>\\!$}\delta$.
#### 6.1.3 Free sampling algorithm
A free Boltzmann sampler is presented below in pseudocode as Algorithm 1. For
simplicity, the algorithm incorporates only the upper threshold $L$ selected
in advance for a given confidence probability $1-\delta$. As discussed in
Section 6.1.2, this is essential to ensure termination of the code, but for
the sake of optimization a lower cutoff $L_{0}$ can also be included without
difficulty. As explained above, the confidence probability should be chosen
carefully when multiple output instances are required.
Input: integer $q$, real $\langle N\rangle,\langle M\rangle,L$
Output: partition $\lambda\in\check{\varLambda}^{q}_{L}$, weight
$N_{\lambda}$, length $M_{\lambda}$
1 integer array $\lambda_{[\ ]}$;
2 real $z_{1},z_{2},\gamma_{0},p$;
3 $\gamma_{0}\leftarrow\langle M\rangle/(q\mbox{$\;\\!\\!$}\braket{N})$;
4 $z_{1}\leftarrow\mathrm{e}^{-\gamma_{0}}$, $z_{2}\leftarrow
q\mbox{$\;\\!\\!$}\braket{M}\mbox{$\;\\!\\!$}\smash{\gamma_{0}^{1/q}}\mbox{$\>\\!\\!$}/\mbox{$\>\\!$}\Gamma(1/q)$;
5 integer $j^{*}$, $j$, $N$, $M$;
6 $j^{*}\leftarrow\lfloor L^{1/q}\rfloor$;
7 $N\leftarrow 0$, $M\leftarrow 0$;
8 for _$j$ from $j^{*}$ to $1$ by $-1$_ do
9 $p\leftarrow z_{1}^{j^{q}}\\!z_{2}\mbox{$\;\\!$}\bigl(1+z_{1}^{j^{q}}\\!z_{2}\bigr)^{-1}$;
10 if _$\mathtt{Ber}(p)=1$_ then
11 $N\leftarrow N+j^{q}$;
12 $M\leftarrow M+1$;
13 $\lambda_{M}\leftarrow j^{q}$;
14
15 end if
16
17 end for
18$N_{\lambda}\leftarrow N$, $M_{\lambda}\leftarrow M$;
return _$(\lambda,N_{\lambda},M_{\lambda})$_
Algorithm 1 FreeSampler ($q,\langle N\rangle,\langle M\rangle,L$)
The code structure is fairly straightforward and consists in a single cycle of
sequential Bernoulli tests over potential parts $\ell\in\mathbb{N}^{q}$. It is
convenient to scan the parts downwards, in view of our convention to
enumerate the partition parts in decreasing order. Because the resulting
length of the output partition $\lambda=(\lambda_{i})$ is unknown in advance,
the space for the corresponding integer array is defined in the code as
$\lambda_{[\ ]}$, that is, through a dynamically allocated memory. Finally,
the calibration parameters $z_{1}$ and $z_{2}$ are specified using the
leading-term formulas (6.1); if desired, these can be replaced by the bias-
correcting values (6.3) or by any other, more refined choices.
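For readers who prefer a runnable illustration, the following is a direct Python transcription of Algorithm 1 (a sketch only: the function name, the use of Python's random module and the dynamic list in place of the array $\lambda_{[\ ]}$ are ours).
```python
import math
import random

def free_sampler(q, N_mean, M_mean, L, rng=random.random):
    """Free Boltzmann sampler (Algorithm 1) with leading-term calibration (6.1).
    Returns the partition (parts in decreasing order), its weight N and length M."""
    gamma0 = M_mean / (q * N_mean)
    z1 = math.exp(-gamma0)
    z2 = q * M_mean * gamma0 ** (1.0 / q) / math.gamma(1.0 / q)
    parts, N, M = [], 0, 0
    j_star = int(L ** (1.0 / q))
    for j in range(j_star, 0, -1):          # scan candidate parts j^q downwards
        w = z1 ** (j ** q) * z2
        p = w / (1.0 + w)                    # Bernoulli success probability (3.1)
        if rng() < p:                        # Ber(p) = 1: the part j^q is selected
            N += j ** q
            M += 1
            parts.append(j ** q)
    return parts, N, M
```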
By design, the output of Algorithm 1 is a random partition
$\lambda\in\check{\varLambda}^{q}_{L}=\check{\varLambda}^{q}\cap\\{(\lambda_{i})\colon\lambda_{\rm{max}}\leq
L\\}$. It has a Boltzmann distribution on the space
$\check{\varLambda}^{q}_{L}$, with expected values of weight $N_{\lambda}$ and
length $M_{\lambda}$ close to the predefined hyper-parameters $\braket{N}$ and
$\braket{M}$, respectively. As discussed in Section 2.5, this distribution
approximates (in total variation) the Boltzmann distribution on the infinite
partition space $\check{\varLambda}^{q}$, which may suffice for the sampling
purposes at hand.
#### 6.1.4 Validation of Algorithm 1
The output performance of the code in Algorithm 1 was visually monitored via
the marginal histograms for the sample weight $N_{\lambda}$ and length
$M_{\lambda}$ (Figure 4), as well as by the bivariate histograms and frequency
level plots of the sample pairs $(N_{\lambda},M_{\lambda})$ (Figure 5). The
numerical illustration was carried out in the case of square parts, $q=2$
(selected for computational convenience in order to reduce the completion
time), and in two different regimes with regard to the hyper-parameter
$\braket{M}$, that is, “fixed” and “slow growth”, illustrated by
$\braket{M}=5$ ($\braket{N}=12{,}\mbox{$\>\\!$}500$) and $\braket{M}=50$
($\braket{N}=10^{7}$), yielding for the parameter $\kappa=\langle
M\rangle^{3}\mbox{$\;\\!\\!$}/\langle N\rangle$ values $\kappa=0.01$ and
$\kappa=0.0125$, respectively (cf. Assumption 5.1). The algorithm was run at a
very low confidence tolerance (error probability) $\delta=10^{-8}$ and with
the corresponding truncation value $L$ calculated using formulas (6.6) or
(6.7) according to the regime at hand, yielding $L=299^{2}=89,401$ and
$L=2{,}\mbox{$\>\\!$}903^{2}=8{,}\mbox{$\>\\!$}427{,}\mbox{$\>\\!$}409$,
respectively (cf. Table 2).
(a) $\braket{M}=5$, $\braket{N}=12{,}\mbox{$\>\\!$}500$ ($\kappa=0.01$). A dot
at the origin on the left panel shows a discrete atom
$\pi_{0}=\mathrm{e}^{-\braket{M}}\doteq 0.00674$ (see (4.15)).
(b) $\braket{M}=50$, $\braket{N}=10^{7}$ ($\kappa=0.0125$). The atom
$\pi_{0}=\mathrm{e}^{-\braket{M}}\doteq 2\cdot 10^{-22}$ is too small to be
visible.
Figure 4: Marginal histograms for the weight $N_{\lambda}$ (left) and length
$M_{\lambda}$ (right) for random samples (of size $10^{5}$ each) from the
partition space $\check{\varLambda}^{q}$ ($q=2$), simulated using a free
Boltzmann sampler as set out in Algorithm 1. Colour coding (online version):
_blue_ designates the limiting distributions under the “fixed” regime, that
is, compound Poisson-Gamma (left) and Poisson (right); _red_ indicates a
normal approximation; _magenta_ depicts a mean-gamma approximation (see
Section 4.1). In the black-and-white version, the normal curves on the left
are identifiable by a noticeable positive shift.
The empirical results (with $10^{5}$ output partitions in both cases) were
compared with the theoretical predictions from Theorems 4.1 and 5.1. The
marginal histogram plots for $N_{\lambda}$ and $M_{\lambda}$ shown in Figure 4
depict bell-shaped unimodal empirical distributions with the sample modes
noticeably shifted to the left of the calibration hyper-parameters
$\braket{N}$ and $\braket{M}$, respectively. The discrepancy between the modes
and the means is observed especially well in the weight plots (the more so for
smaller $\braket{M}$), which is essentially due to the fact that the
underlying gamma and Poisson distributions are right-skewed; for example, the
mode of $\mathrm{Gamma}\mbox{$\;\\!$}(\alpha)$ is given by
$\max\\{\alpha-1,0\\}$ whereas the mean is $\alpha$. Another (minor) reason
for a negative bias is due to a certain miscalibration, as was pointed out in
Section 6.1.1. In line with theoretical predictions, this mismatch is
vanishing with the growth of the hyper-parameters $\braket{N}$ and
$\braket{M}$, together with the improving accuracy of the normal
approximations, both for $N_{\lambda}$ and $M_{\lambda}$. It is also
interesting to note that the mean-gamma approximation
$\mathrm{Gamma}\mbox{$\;\\!$}(\braket{M})$ (see Section 4.1) nearly perfectly
matches the exact compound Poisson-Gamma distribution for $\braket{M}=50$ (see
Figure 4(b), left); for smaller values of $\braket{M}$, this approximation is
rather crude; however, it still captures well the mode of the Poisson-Gamma
distribution and also its right shoulder (see Figure 4(a), left).
A remarkable exception to the unimodality of the plots in Figure 4 is the
compound Poisson-Gamma plot for the weight $N_{\lambda}$, with a relatively
small value of $\braket{M}=5$ (see Figure 4(a), left), where one can clearly
see a singularity at zero (cf. Section 4.1). This theoretical prediction is
supported by the empirical results, with an obvious excess of smaller weights.
With $\braket{M}=5$ and $\braket{N}=12{,}\mbox{$\>\\!$}500$, the local minimum
of the theoretical density $g(x)$ defined in (4.17) occurs at $x_{0}\doteq
0.10340$, with value $g(x_{0})\doteq 0.19632$ (note that the asymptotic formula
(4.18) gives a pretty accurate approximation, $g(x_{0})\approx 0.14334$); this
corresponds to weight $n_{0}=\lceil
x_{0}/\gamma_{0}\rceil=517$. If the density $g(x)$ continued to decay to the
left of $x_{0}$, this would predict the (asymptotic) probability of getting
weights smaller than $n_{0}$ (together with an empty partition) loosely
bounded by $x_{0}\mbox{$\;\\!$}g(x_{0})+\pi_{0}\doteq 0.02704$. But the actual
compound Poisson-Gamma probability is higher, $G(x_{0})\doteq 0.03120$ (see
(4.15)). The excess of “small” partitions is reminiscent of a partition
interpretation of the Bose–Einstein condensation (see [79]). As already
mentioned in Section 4.1, this is a truly finite-length phenomenon, which
vanishes as $\braket{M}\to\infty$ (cf. Figure 4(b), where $\braket{M}=50$).
(a) Bivariate frequency plot. (b) Frequency level sets.
Figure 5: Joint sampling distribution of weight $N_{\lambda}$ and length
$M_{\lambda}$ for $q=2$, $\braket{M}=50$ and $\braket{N}=10^{7}$ (cf. marginal
plots in Figure 4(b)). A random Boltzmann sample of partitions
$\lambda\in\check{\varLambda}^{q}$ (of size $10^{5}$) was simulated using
Algorithm 1.
The bivariate plots in Figure 5 (for $\braket{M}=50$) appear to be
approximately consistent with the asymptotically predicted (standardized)
confidence ellipses of the form
$\mathcal{L}_{\alpha}=\\{\bm{x}\in\mathbb{R}^{2}\colon\bm{x}\mbox{$\>\\!$}\bm{K}_{q}^{-1}\bm{x}^{\top}\leq\chi^{2}_{2}(1-\alpha)\\},$
(6.11)
where $\bm{K}_{q}^{-1}$ is the inverse covariance matrix (5.2), and
$\chi^{2}_{2}(1-\alpha)$ is the quantile of the chi-squared distribution with
two degrees of freedom, corresponding to confidence probability $1-\alpha$.
The latter distribution simplifies to an exponential distribution with mean
$2$, hence $\chi^{2}_{2}(1-\alpha)=2\log\left(1/\alpha\right)$. According to
Theorem 5.1, a sample point $(N_{\lambda}^{*},M_{\lambda}^{*})$ belongs to the
ellipse (6.11) approximately with probability $1-\alpha$, where the
standardized values $N_{\lambda}^{*},M_{\lambda}^{*}$ are defined in (5.1).
The inverse of $\bm{K}_{q}$ is easily computed,
$\bm{K}_{q}^{-1}=\frac{q+1}{q}\begin{pmatrix}1&\displaystyle\frac{-1}{\sqrt{q+1}}\\\
\displaystyle\frac{-1}{\sqrt{q+1}}&1\end{pmatrix}\\!,$
and the confidence ellipse (6.11) specializes as follows,
$N_{\lambda}^{*2}-\frac{2\mbox{$\>\\!$}N_{\lambda}^{*}M_{\lambda}^{*}}{\sqrt{q+1}}+M_{\lambda}^{*2}\leq\frac{2\mbox{$\>\\!$}q}{q+1}\log\frac{1}{\alpha}\mbox{$\>\\!$}.$
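For a numerical check of this criterion, a small Python helper (a sketch; the standardized values are assumed to be computed as in (5.1)) can test whether a standardized sample point falls inside the ellipse.
```python
import math

def in_confidence_ellipse(N_std, M_std, q, alpha):
    """Membership test for the asymptotic confidence ellipse (6.11),
    written out via the explicit quadratic form derived above."""
    lhs = N_std ** 2 - 2.0 * N_std * M_std / math.sqrt(q + 1) + M_std ** 2
    rhs = 2.0 * q / (q + 1) * math.log(1.0 / alpha)
    return lhs <= rhs
```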
A closer inspection of the level plots in Figure 5(b) reveals some elongation
of the frequency level sets towards bigger values of the weight $N_{\lambda}$,
thus indicating a bit of discrepancy with the predicted elliptical shape. This
observation is confirmed by comparison of the marginal histograms of
$N_{\lambda}$ and $M_{\lambda}$ in Figure 4, where the latter is reasonably
symmetric while the former is noticeably skewed to the right. A heuristic
explanation of such an effect may be based on noticing from formulas (2.7)
that, while the length $M_{\lambda}$ is built by summation of multiplicities
$\nu_{\ell}$, the weight $N_{\lambda}$ involves size-biased terms
$\ell\mbox{$\>\\!$}\nu_{\ell}$, which tend to skew the distribution to the
right.
A well-localized unimodal nature of the distributions behind the outputs
$N_{\lambda}$ and $M_{\lambda}$ is sometimes referred to as a _bumpy type_
[23], characterized by an asymptotically large signal-to-noise ratio (SNR) in
response to a large signal (the “bumpy” property is especially helpful
for sampling with rejection designed to achieve certain targets for weight and
length, due to a guaranteed asymptotically fast delivery of the output; see
Section 6.2),
$\mathrm{SNR}(X):=\frac{[{\mathsf{E}}(X)]^{2}}{{\mathsf{Var}}(X)}\to\infty,\qquad{\mathsf{E}}(X)\to\infty.$
(6.12)
Here, the notation $X$ designates a random output in question (such as the
size), which has a large expected value. Following definition (6.12) and
applying Theorem 3.9, we readily get
$\mathrm{SNR}(N_{\lambda})\sim\frac{\braket{N}^{2}}{(q+1)\braket{N}^{2}\\!/\\!\braket{M}}=\frac{\braket{M}}{q+1},\qquad\mathrm{SNR}(M_{\lambda})\sim\frac{\braket{M}^{2}}{\braket{M}}=\braket{M},$
so that under Assumption 5.1 (slow growth of $\braket{M}$) each of the
marginal SNRs tends to infinity.
In the multivariate case, the SNR is usually defined in the literature as a
scalar value,
$\mathrm{SNR}(\bm{X}):=\bm{\mu}\mbox{$\;\\!$}\bm{K}^{-1}\mbox{$\;\\!\\!$}\bm{\mu}^{\top},\qquad\bm{\mu}:={\mathsf{E}}(\bm{X}),\
\ \bm{K}:={\mathsf{Cov}}(\bm{X},\bm{X})$ (6.13)
(see, e.g., [65, Eq. (1), p. 511]). Again using Theorem 3.9, we find the
asymptotic inverse of the covariance matrix,
$\bm{K}^{-1}(\bm{z})\sim\frac{1}{q\mbox{$\;\\!\\!$}\braket{N}^{2}}\begin{pmatrix}\braket{M}&-\braket{N}\\\
-\braket{N}&\displaystyle\frac{(q+1)\mbox{$\>\\!\\!$}\braket{N}^{2}}{\braket{M}}\end{pmatrix}\\!,$
and hence
$\mathrm{SNR}(N_{\lambda},M_{\lambda})\sim\frac{(\langle N\rangle,\langle
M\rangle)}{q\mbox{$\;\\!\\!$}\braket{N}^{2}}\begin{pmatrix}\braket{M}&-\braket{N}\\\
-\braket{N}&\displaystyle\frac{(q+1)\mbox{$\>\\!\\!$}\braket{N}^{2}}{\braket{M}}\end{pmatrix}\begin{pmatrix}\braket{N}\\\\[4.79993pt]
\braket{M}\end{pmatrix}=\braket{M}\to\infty.$
However, a scalar definition (6.13) is not entirely satisfactory — for
instance, it cannot detect whether the individual components of $\bm{X}$ are
of bumpy type. As an alternative, we propose the following matrix definition,
$\mathbf{SNR}(\bm{X}):=(\bm{\mu}\mbox{$\>\\!$}\bm{K}^{-1/2})^{\mbox{$\>\\!\\!$}\top}(\bm{\mu}\mbox{$\>\\!$}\bm{K}^{-1/2})=\bm{K}^{-1/2}\mbox{$\>\\!$}(\bm{\mu}^{\mbox{$\;\\!\\!$}\top}\mbox{$\>\\!\\!$}\bm{\mu})\mbox{$\;\\!$}\bm{K}^{-1/2},$
(6.14)
where $\bm{K}^{-1/2}$ is the (unique) positive definite square root of the
inverse covariance matrix $\bm{K}^{-1}$, that is,
$\bm{K}^{-1/2}\bm{K}^{-1/2}=\bm{K}^{-1}$ [43, Theorem 7.2.6, p. 439]. In our
case, the exact expression for $\bm{K}^{-1/2}(\bm{z})$ is cumbersome (although
available), but its asymptotic version under Assumption 5.1 simplifies to
$\bm{K}^{-1/2}(\bm{z})\sim\frac{\sqrt{\braket{M}}}{\sqrt{q\mbox{$\;\\!$}(q+1)}\braket{N}}\begin{pmatrix}\sqrt{q}&-1\\\\[4.79993pt]
-1&\displaystyle\frac{(q+1)\braket{N}}{\braket{M}}\end{pmatrix}\\!.$
Hence, after straightforward calculations we obtain from (6.14)
$\mathbf{SNR}(N_{\lambda},M_{\lambda})\sim\frac{\braket{M}}{q+1}\begin{pmatrix}1&\sqrt{q}\\\\[4.79993pt]
\sqrt{q}&q\end{pmatrix}\\!,$
which tends to infinity in the matrix sense.
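For an empirical counterpart of definition (6.14), the matrix SNR can be evaluated directly from an estimated mean vector and covariance matrix; the following NumPy sketch (names are ours) obtains the symmetric square root $\bm{K}^{-1/2}$ via an eigendecomposition.
```python
import numpy as np

def snr_matrix(mu, K):
    """Matrix SNR (6.14): K^{-1/2} (mu^T mu) K^{-1/2}, where K^{-1/2} is the
    positive definite square root of the inverse covariance matrix."""
    w, V = np.linalg.eigh(np.asarray(K, dtype=float))   # K = V diag(w) V^T, w > 0
    K_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    mu = np.asarray(mu, dtype=float).reshape(1, -1)      # row vector
    return K_inv_sqrt @ (mu.T @ mu) @ K_inv_sqrt
```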
### 6.2 Rejection sampler
The idea of a rejection sampler discussed in this section is to run a free
sampler in a loop until a prescribed target is met. For example, if the target
is set in terms of the required partition length as $M_{\lambda}=m$, with a
fixed $m\in\mathbb{N}$, then the free sampler is iterated until a partition of
exact length $m$ is obtained. Likewise, if the target is set for the partition
weight, $N_{\lambda}=n$, with a fixed $n\in\mathbb{N}$, then the free sampling
loop runs until a partition of exact weight $n$ is found. These two targets
can be imposed simultaneously, $M_{\lambda}=m$ and $N_{\lambda}=n$; here, it
is natural to design the rejection algorithm as a juxtaposition of two loops
of the free sampler, such that the internal loop runs until the length target
is met and, every time this happens, the resulting partition is checked with
regards to the weight target and is either rejected, whereby the internal loop
starts afresh, or accepted, in which case the algorithm stops.
A more general approach, leading to the so-called approximate algorithms, is
to relax the exact targets to suitable intervals (brackets). In Algorithm 2
presented below in Section 6.2.3, we give an example of a Boltzmann rejection
sampler aiming to sample a partition $\lambda\in\check{\varLambda}^{q}$
satisfying two conditions, $M_{\lambda}=m$ and $n\leq
N_{\lambda}\leq\theta\kern 0.35999ptn$, for some predefined tolerance factor
$\theta\geq 1$. Of course, if $\theta=1$, the approximate algorithm is reduced
to an exact one. An approximate target can also be considered for length,
$m\leq M_{\lambda}\leq\theta^{\prime}m$, and furthermore, such approximate
targets can be combined if desired.
Before delineating Algorithm 2, we discuss a few implementation issues arising
therein.
#### 6.2.1 Parameter calibration and truncation of parts
To start with, the choice of the calibrating parameters $z_{1}$ and $z_{2}$
now follows a slightly different logic as compared to the case of a free
sampler. If only one target is set, such as $M_{\lambda}=m$, then according to
formula (2.25) the conditional Boltzmann measure
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\cdot\,|\mbox{$\;\\!$}\check{\varLambda}^{q}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m))$
does not depend on the parameter $z_{2}$, whereas the other parameter,
$z_{1}$, can be chosen with a view on a desired expected value $\braket{N}$ of
the output weight $N_{\lambda}$, as was the case with the free sampler.
Nonetheless, in order to maximize efficiency of the sampling algorithm, the
“free” parameter $z_{2}$ should still be chosen in line with the mean
conditions (3.21) subject to the specification $\braket{M}=m$, thus aiming to
benefit from the bumpy nature of the distribution
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\cdot\,|\mbox{$\;\\!$}\check{\varLambda}^{q}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m))$
(cf. Section 6.1.3). Similarly, if the condition $N_{\lambda}=n$ is targeted
then the conditional Boltzmann measure
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\cdot\,|\mbox{$\;\\!$}\check{\varLambda}^{q}(n,\mbox{$\;\\!\\!$}\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}))$
does not depend on $z_{1}$, however both parameters $z_{1}$ and $z_{2}$ are
chosen to match the mean conditions (3.21) with $\braket{N}=n$. Furthermore,
if both targets are imposed, $N_{\lambda}=n$ and $M_{\lambda}=m$, then by
Lemma 2.4 the measure
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\cdot\,|\mbox{$\>\\!$}\check{\varLambda}^{q}(n,m))$
is reduced to the uniform distribution on $\check{\varLambda}^{q}(n,m)$
regardless of the parameters $z_{1}$ and $z_{2}$. But, as explained above, it
is worthwhile to calibrate them in line with the mean conditions (3.21). With
these targets in mind, in what follows (including Algorithm 2) we use the
specific hyper-parameters $\braket{N}=n$ and $\braket{M}=m$. Moreover, since
avoiding bias is no longer a concern (unlike in the case of the free sampler),
using the leading-term expressions (6.1) is a perfectly satisfactory option.
Of course, the same recipe applies to the approximate sampling.
Another issue to be addressed is whether any truncation of the source of parts
is needed in the algorithm (cf. Section 6.1.2). As long as the weight target
is involved, $N_{\lambda}=n$, one can use a natural majorant $L^{*}=n$ as a
call parameter $L$ in Algorithm 1 (see Section 6.1.3), which clearly causes no
loss in confidence (i.e., $\delta=0$, see (6.5)). The same is true in the case
of an approximate target, $N_{\lambda}\in[n,\theta\kern 0.47992ptn]$, by
choosing the majorant $L^{*}=\theta\kern 0.71997ptn$. However, if only the
length target is in place then no such majorant is available and, therefore,
confidence considerations must be deployed, as discussed in Section 6.1.2.
#### 6.2.2 Censoring of iterations
As was pointed out in the Introduction, in contrast to the special value
$q=1$, in the general case with $q\geq 2$ and arbitrary $m\geq 1$ there is no
guarantee for a given natural number $n\in\mathbb{N}$ to be partitionable into
a required number $m$ of $q$-power parts (unless it is covered by a solution
of the Waring problem [76]). The requirement that the parts be distinct adds
to the complexity of the question. Therefore, the space
$\check{\varLambda}^{q}(n,m)$ may well be empty and, not knowing this in
advance, the task of sampling from such a space may be “mission impossible”.
To be specific, consider sampling subject to the joint targets $M_{\lambda}=m$
and $N_{\lambda}=n$. As already indicated at the beginning of Section 6, a
general design of the corresponding sampling algorithm is based on the two
nested loops according to the separated targets, internal for $M_{\lambda}$
and external for $N_{\lambda}$. While the internal loop is certain to produce
a random partition in $\check{\varLambda}^{q}$ with exactly $m$ parts (see
more about this below), the external loop contains an inherent loose end due
to its potential failure to satisfy the weight requirement, simply because
there may be no such partitions. That is to say, although the successful
completion of the external loop will take some time by repeatedly querying the
internal loop, it may be pointless to wait for too long, as there is no
certainty that the waiting is not in vain.
We propose to resolve this difficulty by an appropriate “censoring” of
processing time, that is, by setting a limit $t^{*}$ on the waiting time,
chosen to ensure sufficiently high confidence in the algorithm’s ability to
deliver a successful completion within the allocated time limit, of course
provided that the task is feasible (i.e., required partitions exist). More
precisely, given a confidence tolerance (significance level) $\delta\in(0,1)$,
the threshold $t^{*}$ should be such that, if the target space is non-empty,
the probability that the algorithm would not succeed by time $t^{*}$ does not
exceed $\delta$. Such an approach is akin to statistical testing of the null
hypothesis “the target is non-empty”. Under this hypothesis, the test
(implemented as a sampling algorithm) fails to produce a required partition
(Type I error) with rate bounded by $\delta$.
The choice of threshold $t^{*}$ is determined by the sampling task at hand. A
few examples of interest are as follows (assuming the number of parts $m$ to
be fixed):
* (T1)
_Exact sampling:_ For a given $n$, attempt to sample
$\lambda\in\check{\varLambda}^{q}(n,m)$ (in other words, check
partitionability of $n$).
* (T2)
_Multiple exact sampling:_ For a given $n$ and some $\theta>1$, attempt to
sample $\lambda\in\check{\varLambda}^{q}(k,m)$ for each integer $k$ in the
range $k\in[n,\theta\kern 0.24005ptn]$ (that is, test partitionability of each
of these numbers).
* (T3)
_Approximate sampling:_ Same as in task (T2) but attempting to sample
$\lambda\in\bigcup_{n\leq k\leq\theta\kern
0.24005ptn}\check{\varLambda}^{q}(k,m)$ (that is, to find at least one
partitionable number in the said range).
###### Remark 6.1.
If required, tasks (T2) and (T3) could be modified to a two-sided version,
such as $\theta^{-1}n\leq k\leq\theta\kern 0.35999ptn$ or, more generally,
$\theta_{1}n\leq k\leq\theta_{2}\mbox{$\>\\!$}n$, with some
$0<\theta_{1}<1<\theta_{2}$.
First, let us look at how the internal loop performs towards its task of
sampling a partition
$\lambda\in\check{\varLambda}^{q}_{L}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m)=\check{\varLambda}^{q}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m)\cap\\{\lambda\colon\lambda_{\rm
max}\leq L\\}$ (i.e., the source of parts is truncated to $\\{\ell\leq L\\}$,
see Section 2.5). According to Theorem 4.2(a), if $\langle M\rangle>0$ is
fixed and $L\sim\theta\mbox{$\>\\!$}\langle N\rangle$ then the distribution of
$M_{\lambda}$ conditional on $\lambda_{\rm max}\leq L$ converges to a Poisson
law with mean $\mu_{\theta}=\braket{M}G_{1/q}(a_{\theta})$, where
$a_{\theta}=\theta\mbox{$\>\\!$}\langle M\rangle\mbox{$\;\\!\\!$}/q$. A
stronger result concerns a Poisson approximation (cf. Remark 4.1) under a
suitable metric, such as the total variation distance between distributions.
Namely, the said conditional distribution of $M_{\lambda}$ can be replaced by
a Poisson distribution with mean
${\mathsf{E}}_{\bm{z}}(M_{\lambda}\,|\,\lambda_{\rm{max}}\leq
L)=\sum_{\ell\leq
L}\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\sim\mu_{\theta},$ (6.15)
with the error in total variation bounded by (see [8, Theorem 1, p. 474])
$\frac{1}{\mu_{\theta}}\sum_{\ell\leq
L}\left(\frac{z_{1}^{\ell}z_{2}}{1+z_{1}^{\ell}z_{2}}\right)^{\mbox{$\>\\!\\!$}2}=O(z_{2})=O(\kappa^{1/q})\to
0,$ (6.16)
according to (4.6). The advantage of such an approximation is that it holds
true even if $\braket{M}$ is slowly growing, whereby the error estimate (6.16)
is still valid.
Returning to the analysis of the internal loop, with $\braket{N}=n$ and
$\braket{M}=m$ we have
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(M_{\lambda}=m\,|\,\lambda_{\rm
max}\leq\theta\kern
0.35999ptn)\sim\frac{\mu_{\theta}^{m}\,\mathrm{e}^{-\mu_{\theta}}}{m!},\qquad\mu_{\theta}=m\,G_{1/q}(\theta\mbox{$\>\\!$}m/q).$
(6.17)
Hence, the probability (6.17) is bounded away from zero, and since the
attempts within the internal loop are independent, the number of internal runs
until success has geometric distribution, with the expected time to success
being bounded by a constant (depending on $m$). If $m\to\infty$ (with
$\kappa=m^{q+1}\mbox{$\>\\!\\!$}/n=o(1)$), then $\mu_{\theta}\sim m$ and,
according to (6.15) and (6.16), we have
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(M_{\lambda}=m\,|\,\lambda_{\rm
max}\leq\theta\kern
0.35999ptn)=\frac{\mu_{\theta}^{m}\,\mathrm{e}^{-\mu_{\theta}}}{m!}+O(\kappa^{1/q})\sim\frac{m^{m}\,\mathrm{e}^{-m}}{m!}\sim\frac{1}{\sqrt{2\pi\mbox{$\>\\!$}m}},$
(6.18)
with the help of the Stirling formula. In turn, formula (6.18) implies that
the expected number of runs of the internal loop is of order
$O\bigl{(}\sqrt{m}\mbox{$\>\\!$}\bigr{)}$, which is not particularly large for
practical implementation.
Let us now turn to tasks (T1) – (T3) and focus on probabilistic analysis of
runs of the external loop, taking for granted that $M_{\lambda}=m$ and,
automatically, $\lambda_{\rm max}\leq L^{*}$, where $L^{*}=\theta\kern
0.35999ptn$ is a majorant in $\bigcup_{n\leq k\leq\theta
n}\check{\varLambda}^{q}(k,m)$. As stipulated above, we use the hyper-
parameters $\braket{N}=n$ and $\braket{M}=m$, and specify the calibrating
parameters $z_{1}$ and $z_{2}$ according to the leading-term expressions
(6.1). Denote for short
$p_{k}^{*}:={\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(N_{\lambda}=k\,|\mbox{$\;\\!$}M_{\lambda}=m,\mbox{$\>\\!$}\lambda_{\rm
max}\leq\theta\kern 0.35999ptn),\qquad n\leq k\leq\theta\kern 0.35999ptn,$
(6.19)
for simplicity omitting reference to $n$ and $m$. Of course, if
$\check{\varLambda}^{q}(k,m)=\varnothing$ then $p_{k}^{*}=0$. The probability
(6.19) can be interpreted as the probability of successfully sampling a
partition of weight $k$ in a single run of the external loop. Due to
independence of successive runs, the number $T_{k}$ of attempts until success
for a targeted weight value $k$ follows geometric distribution,
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(T_{k}>t\,|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq n)=\left(1-p_{k}^{*}\right)^{t},\qquad t\in\mathbb{N}_{0}.$ (6.20)
This includes the case $p_{k}^{*}=0$, whereby $T_{k}=\infty$
(${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}$-a.s.). Also note that $(T_{k})$ are
mutually independent for different $k$.
* (T1)
Here, $\theta=1$, so $L^{*}=n$. Suppose that
$\check{\varLambda}^{q}(n,m)\neq\varnothing$ and let
$\lambda_{*}\in\check{\varLambda}^{q}(n,m)$, so that $N_{\lambda_{*}}=n$ and
$M_{\lambda_{*}}=m$. Then we can write
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{*}\mbox{$\;\\!$}|\mbox{$\;\\!$}M_{\lambda}=m,\mbox{$\>\\!$}\lambda_{\rm
max}\leq
n)=\frac{{\mathsf{P}\mbox{$\>\\!\\!$}}_{z}(\lambda_{*}\mbox{$\>\\!$}|\mbox{$\;\\!$}\lambda_{\rm
max}\leq n)}{{\mathsf{P}\mbox{$\>\\!\\!$}}_{z}(M_{\lambda}=m\,|\,\lambda_{\rm
max}\leq n)}.$ (6.21)
Starting with the numerator, we have
${\mathsf{P}\mbox{$\>\\!\\!$}}_{z}(\lambda_{*}\mbox{$\>\\!$}|\mbox{$\;\\!$}\lambda_{\rm
max}\leq n)=z_{1}^{n}z_{2}^{m}\prod_{\ell\leq
n}\frac{1}{1+z_{1}^{\ell}\mbox{$\;\\!\\!$}z_{2}}.$
Substituting formulas (6.1), we obtain
$z_{1}^{n}z_{2}^{m}\sim\mathrm{e}^{-m/q}\,\frac{q^{2m}(m/q)^{m+m/q}}{\bigl{(}\Gamma(1/q)\bigr{)}^{m}\mbox{$\;\\!$}n^{m/q}},$
while Lemma 3.5, with the help of the asymptotic relation (3.35), yields
$\prod_{\ell\leq
n}\frac{1}{1+z_{1}^{\ell}\mbox{$\;\\!\\!$}z_{2}}\sim\exp\bigl{(}-m\,G_{1/q}(m/q)\bigr{)}.$
Furthermore, by Theorem 4.2(a) the denominator in (6.21) is asymptotically
given by
${\mathsf{P}\mbox{$\>\\!\\!$}}_{z}(M_{\lambda}=m\,|\,\lambda_{\rm max}\leq
n)\sim\frac{m^{m}\mbox{$\>\\!$}\bigl{(}G_{1/q}(m/q)\bigr{)}^{m}\exp\bigl{(}-m\,G_{1/q}(m/q)\bigr{)}}{m!}.$
(6.22)
Hence, returning to (6.21) we obtain
$p_{n}^{*}\geq{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(\lambda_{*}\mbox{$\;\\!$}|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq n)\sim\frac{1}{C_{1}(m,q)\,n^{m/q}},$ (6.23)
where
$C_{1}(m,q):=\frac{1}{m!}\left(\frac{\mathrm{e}^{1/q}\,\Gamma(1/q)\,G_{1/q}(m/q)}{q^{1-1/q}\mbox{$\>\\!$}m^{1/q}}\right)^{\mbox{$\>\\!\\!$}m}\\!.$
(6.24)
Combining (6.20) and (6.23), we have, asymptotically,
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(T_{n}>t\,|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq
n)=\left(1-p_{n}^{*}\right)^{t}\leq\left(1-\frac{1+o(1)}{C_{1}(m,q)\,n^{m/q}}\right)^{\mbox{$\;\\!\\!$}t}\\!.$
(6.25)
Thus, for the probability (6.25) not to exceed a predefined (small) confidence
tolerance $\delta>0$, it suffices to choose the threshold $t=t_{n}^{*}$ as
follows,
$t_{n}^{*}\simeq\frac{\displaystyle\log\delta}{\displaystyle\log\left(1-\frac{1+o(1)}{C_{1}(m,q)\,n^{m/q}}\right)}\sim
C_{1}(m,q)\,n^{m/q}\log\frac{1}{\delta}\mbox{$\>\\!$}.$ (6.26)
###### Remark 6.2.
The bound (6.26) is very conservative due to a crude estimate (6.23)
leveraging just one instance $\lambda_{*}\in\check{\varLambda}^{q}(n,m)$. If
more information was available about the size of the space
$\check{\varLambda}^{q}(n,m)$, the bound (6.26) could be reduced accordingly.
For example, if $q=1$ then it is known that $\\#\check{\varLambda}(n,m)\sim
n^{m-1}(m!\,(m-1)!)^{-1}$ (see (4.40)). Hence, formula (6.26) for the time
threshold in the case $q=1$ is replaced by a much better and more realistic
estimate,
$\tilde{t}_{n}^{*}\simeq\frac{C_{1}(m,1)}{\\#\check{\varLambda}(n,m)}\,n^{m}\log\frac{1}{\delta}\sim\frac{\mathrm{e}^{m}\left(1-\mathrm{e}^{-m}\right)^{m}(m-1)!}{m^{m}}\,n\mbox{$\>\\!$}\log\frac{1}{\delta}\mbox{$\>\\!$}.$
(6.27)
Likewise, for $q=2$ we get, using (4.42) (with $C_{m}=1$),
$\tilde{t}_{n}^{*}\simeq\frac{C_{1}(m,2)}{\\#\check{\varLambda}^{2}(n,m)}\,n^{m/2}\log\frac{1}{\delta}\sim\left(\frac{2\mbox{$\;\\!$}\mathrm{e}}{m}\right)^{\\!m/2}\Gamma(m/2)\,\bigl{(}G_{1/2}(m/2)\bigr{)}^{m}\,n\mbox{$\>\\!$}\log\frac{1}{\delta}\mbox{$\>\\!$},$
(6.28)
which again grows only linearly in $n$.
* (T2)
For the multiple exact sampling in the range $k\in[n,\theta\kern 0.24005ptn]$,
we can just repeat the procedure in task (T1) for each $k$ in that range.
According to (6.20), the probability that the number of attempts until
success, $T_{k}$, exceeds a threshold $t_{k}^{*}$ is given by (cf. (6.20))
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\bigl{(}T_{k}>t_{k}^{*}\,|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq k\bigr{)}=(1-p_{k}^{*})^{t_{k}^{*}},\qquad n\leq k\leq\theta\kern
0.35999ptn.$
Taking into account only partitionable numbers $k\in[n,\theta\kern
0.24005ptn]$ (i.e., such that $\check{\varLambda}^{q}(k,m)\neq\varnothing$
and, therefore, $p_{k}^{*}>0$) and using a Bonferroni-type inequality, the
probability of Type I error for task (T2) (i.e., that the external loop fails
for at least one such $k$) is bounded as follows,
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\Biggl{(}\,\bigcup_{k=n}^{\lfloor\theta
n\rfloor}\\{t_{k}^{*}<T_{k}<\infty\\}\,\Bigl{|}\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq\theta\kern
0.35999ptn\Biggr{)}\leq\sum_{k\colon\mbox{$\>\\!\\!$}p_{k}^{*}>0}(1-p_{k}^{*})^{t_{k}^{*}}.$
(6.29)
Motivated by formula (6.26), we can look for the time limits $t_{k}^{*}$ in
the form
$t_{k}^{*}\sim c\mbox{$\;\\!$}k^{m/q}.$ (6.30)
Then from (6.29) using (6.23) we get, asymptotically,
$\displaystyle\sum_{k\colon\mbox{$\>\\!\\!$}p_{k}^{*}>0}(1-p_{k}^{*})^{t_{k}^{*}}$
$\displaystyle\lesssim\sum_{k=n}^{\lfloor\theta
n\rfloor}\left(1-\frac{1}{C_{1}(m,q)\,k^{m/q}}\right)^{c\mbox{$\>\\!$}k^{m/q}}$
$\displaystyle\leq(\theta-1)\,n\mbox{$\;\\!$}\exp\left(-\frac{c}{C_{1}(m,q)}\right)\leq\delta.$
Solving this inequality for $c$ and returning to (6.30) ultimately yields
$\displaystyle t_{k}^{*}\simeq
C_{1}(m,q)\,k^{m/q}\log\frac{(\theta-1)\mbox{$\>\\!$}n}{\delta},\qquad n\leq
k\leq\theta\kern 0.35999ptn.$ (6.31)
Thus, the time bound (6.31) follows the same formula as in a single test (cf.
(6.26)) but with a Bonferroni-type adjustment of the significance level in
order to offset the multiple testing.
###### Remark 6.3.
The same comment as in Remark 6.2 applies to task (T2). Specifically, for
$q=1$ and $q=2$ the improved formulas for the thresholds $t^{*}_{k}$ are
given, respectively, by
$\displaystyle\tilde{t}_{k}^{*}$
$\displaystyle\simeq\frac{\mathrm{e}^{m}\left(1-\mathrm{e}^{-m}\right)^{m}(m-1)!}{m^{m}}\,k\mbox{$\>\\!$}\log\frac{(\theta-1)\mbox{$\>\\!$}n}{\delta},$
(6.32) $\displaystyle\tilde{t}_{k}^{*}$
$\displaystyle\simeq\left(\frac{2\mbox{$\;\\!$}\mathrm{e}}{\pi
m}\right)^{\\!m/2}\Gamma(m/2)\,\biggl{(}\int_{0}^{m/2}\\!u^{-1/2}\mbox{$\;\\!$}\mathrm{e}^{-u}\,\mathrm{d}{u}\biggr{)}^{m}\,k\mbox{$\>\\!$}\log\frac{(\theta-1)\mbox{$\>\\!$}n}{\delta}.$
(6.33)
* (T3)
In a single attempt, the external loop gets a partition
$\lambda\in\bigcup_{n\leq k\leq\theta\kern
0.24005ptn}\check{\varLambda}^{q}(k,m)$ with probability
$\displaystyle{\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}\bigl{(}n\leq
N_{\lambda}\leq\theta\kern
0.24005ptn\>|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm max}\leq\theta\kern
0.24005ptn)$ $\displaystyle=\sum_{k=n}^{\lfloor\theta n\rfloor}p_{k}^{*}\to
G_{1/q}^{\star
m}(a_{\theta}\mbox{$\>\\!$}|\mbox{$\;\\!$}a_{\theta})-G_{1/q}^{\star
m}(a_{1}|\mbox{$\;\\!$}a_{\theta}).$ (6.34)
The limit (6.34) is due to Theorem 4.2(b), where
$a_{\theta}=\theta\mbox{$\>\\!$}m/q$ and $G_{1/q}^{\star
m}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a_{\theta})$ stands for the $m$-convolution
of the $a_{\theta}$-truncated gamma distribution
$G_{1/q}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a_{\theta})$.
To circumvent the trouble of computing such a convolution, observe that in the
range $0\leq x\leq a$ the distribution function $G^{\star
m}_{\alpha}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a)$ coincides with
$G_{m\mbox{$\>\\!$}\alpha}(x)$ up to the normalization factor
$G_{\alpha}(a)^{m}$. This is obvious for $m=1$, and the general case can be
seen by induction over $m$ using the convolution formula. Indeed, denoting the
corresponding densities by $g^{\star
m}_{\alpha}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a)$ and $g_{m\alpha}(x)$,
respectively, we have by definition
$g_{\alpha}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a)=g_{\alpha}(x)/G_{\alpha}(a)$
($0\leq x\leq a$), and the induction step is carried out as follows,
$\displaystyle g^{\star m}_{\alpha}(x\mbox{$\;\\!$}|\mbox{$\;\\!$}a)$
$\displaystyle=\int_{0}^{x}\\!g^{\star(m-1)}_{\alpha}(u\mbox{$\;\\!$}|\mbox{$\;\\!$}a)\,g_{\alpha}(x-u\mbox{$\;\\!$}|\mbox{$\;\\!$}a)\,\mathrm{d}{u}$
$\displaystyle=\frac{1}{G_{\alpha}(a)^{m}}\int_{0}^{x}\\!g_{(m-1)\mbox{$\>\\!$}\alpha}(u)\,g_{\alpha}(x-u)\,\mathrm{d}{u}=\frac{g_{m\mbox{$\>\\!$}\alpha}(x)}{\bigl{(}G_{\alpha}(a)\bigr{)}^{m}},\qquad
0\leq x\leq a,$
due to the convolution property of the gamma distribution. Thus, formula
(6.34) simplifies to
$\sum_{k=n}^{\lfloor\theta
n\rfloor}p_{k}^{*}\to\frac{G_{m/q}(\theta\mbox{$\>\\!$}m/q)-G_{m/q}(m/q)}{{\bigl{(}G_{1/q}(\theta\mbox{$\>\\!$}m/q)\bigr{)}^{m}}}=:C_{3}(m,q,\theta).$
(6.35)
Now, since individual runs of the external loop are independent, the
probability that the number of attempts until success, $T$, exceeds a
threshold $t$ is given by (cf. (6.20))
${\mathsf{P}\mbox{$\>\\!\\!$}}_{\bm{z}}(T>t\,|\mbox{$\;\\!$}M_{\lambda}=m,\lambda_{\rm
max}\leq\theta\kern 0.24005ptn)=\Biggl{(}1-\sum_{k=n}^{\lfloor\theta
n\rfloor}p_{k}^{*}\Biggr{)}^{\\!t}\simeq\bigl{(}1-C_{3}(m,q,\theta)\bigr{)}^{t},$
(6.36)
on account of (6.35). Hence, in order that this probability be bounded by a
confidence tolerance $\delta>0$, we may choose the threshold $t=t^{*}$ as
follows,
$t^{*}\simeq\frac{\log\delta}{\log\left(1-C_{3}(m,q,\theta)\right)}.$ (6.37)
###### Remark 6.4.
According to formula (6.37), the threshold $t^{*}$ does not depend on $n$. It
is of interest to look at how it depends on the growth of the power $q$. To
this end, by a direct analysis of the gamma distribution (see (4.1)) one can
verify that
$G_{\alpha}(\theta\mbox{$\>\\!$}\alpha)=(\theta\mbox{$\>\\!$}\alpha)^{\alpha}\bigl{(}1+O(\alpha)\bigr{)}\qquad(\alpha\to
0{+}).$
Hence, from formula (6.35) we get
$C_{3}(m,q,\theta)\sim
1-\theta^{-m/q}\sim\frac{m\log\theta}{q}\qquad(q\to\infty),$
and then (6.37) gives
$t^{*}\simeq\frac{q\mbox{$\;\\!$}\log\mbox{$\>\\!\\!$}(1/\delta)}{m\mbox{$\>\\!$}\log\theta}.$
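For task (T3), the constant $C_{3}(m,q,\theta)$ in (6.35) and the resulting threshold (6.37) are easy to evaluate numerically; the sketch below uses SciPy's regularized lower incomplete gamma function for $G_{\alpha}(x)$ (the function name and example values are ours).
```python
import math
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def censoring_threshold_T3(q, m, theta, delta):
    """Time limit t* for task (T3), per (6.35) and (6.37)."""
    a_theta = theta * m / q
    C3 = ((gammainc(m / q, a_theta) - gammainc(m / q, m / q))
          / gammainc(1.0 / q, a_theta) ** m)
    return math.log(delta) / math.log(1.0 - C3)

# Example: censoring_threshold_T3(2, 5, 1.1, 0.1) is roughly 35 (cf. Table 4).
```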
#### 6.2.3 Rejection sampling algorithm
A stylized example of rejection sampler is presented below in pseudocode as
Algorithm 2. It is set out in a flexible way so as to be usable in exact and
approximate sampling alike, as determined by the tasks (T1)–(T3) described in
Section 6.2.2. In particular, the range parameter $\theta$ is allowed to take
the value $\theta=1$, in which case the algorithm would work towards the exact
sampling task (T1) (i.e., with a specific weight target $N_{\lambda}=n$). As
explained in Section 6.2.1, the hyper-parameters are adapted to the desired
targets, $\braket{N}=n$ and $\braket{M}=m$, and the calibrating parameters
$z_{1}$ and $z_{2}$ are set according to the simplified expressions (6.1). A
predefined time bound $t^{*}$ for the external loop is selected according to
the task at hand, as discussed in Section 6.2.2, and on account of the
required confidence probability $1-\delta$.
As briefly indicated at the start of Section 6.2, Algorithm 2 comprises an
external loop that iterates the free sampler in an internal loop (i.e.,
Algorithm 1, with the majorant $L=\theta\kern 0.24005ptn$), which delivers, in
each productive cycle, a partition $\lambda$ that meets the length target
$M_{\lambda}=m$. This continues until the trial partition $\lambda$ meets the
weight target (e.g., $N_{\lambda}=n$ in task (T1) or $n\leq
N_{\lambda}\leq\theta\kern 0.35999ptn$ in task (T3)). However, if the limit of
attempts $t^{*}$ is reached with no success then the algorithm terminates,
returning a message ‘VOID’. It remains to add that for task (T2) involving
multiple exact sampling, the algorithm should be run in an additional loop to
scan all weight values in the range $k\in[n,\theta\kern 0.24005ptn]$.
Input: integer $q,n,m$, real $\theta\geq 1$, $t^{*}$
Output: partition
$\lambda\in\check{\varLambda}^{q}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m)$
with $N_{\lambda}\in[n,\theta\kern 0.24005ptn]$, otherwise ‘VOID’
1 integer array $\lambda_{[\ ]}$;
2 integer $N,M,t$;
3 real $L$;
4 $L\leftarrow\theta\kern 0.24005ptn$;
5 $N\leftarrow 0$, $M\leftarrow 0$, $t\leftarrow 0$;
6 while _$N\notin[n,\theta\kern 0.24005ptn]$ and $t\leq t^{*}$_ do
7 $M\leftarrow 0$;
8 while _$M\neq m$_ do
9 $(\lambda,N,M)\leftarrow{\tt FreeSampler}(q,n,m,L)$
10 end while
11 $t\leftarrow t+1$;
12 end while
13 if _$N\in[n,\theta\kern 0.24005ptn]$_ then
14 $N_{\lambda}\leftarrow N$;
15 return $(\lambda,N_{\lambda})$
16 else
17 return ‘VOID’
18 end if
Algorithm 2 ReSampler ($q,n,m,\theta,t^{*}$)
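A compact Python rendering of Algorithm 2 is sketched below (building on the `free_sampler` sketch given after Algorithm 1; each external attempt resamples afresh, and failure is reported by returning `None` in place of ‘VOID’).
```python
import random

def rejection_sampler(q, n, m, theta=1.0, t_max=10**6, rng=random.random):
    """Rejection sampler (Algorithm 2): iterate the free sampler until a partition
    with exactly m parts appears (internal loop), then accept it if its weight N
    lies in [n, theta*n] (external loop), giving up after t_max attempts."""
    L = theta * n                       # majorant for the largest part
    for _ in range(int(t_max)):
        while True:                     # internal loop: hit the length target
            # free_sampler: the sketch following Algorithm 1 above
            parts, N, M = free_sampler(q, n, m, L, rng)
            if M == m:
                break
        if n <= N <= theta * n:         # external check: weight target
            return parts, N
    return None                         # 'VOID'
```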
Algorithm 2 can be optimized in a number of ways. Since the weight of a valid
output $\lambda$ should not exceed $\theta\kern 0.35999ptn$, it is clear that
the run of the internal loop can be terminated prior to collecting the
required number of parts $m$ if the next candidate part is too large, so that
the incremented weight will certainly exceed the majorant. Furthermore, if the
number of collected parts has already reached the target value $m$ then there
is no need to keep scanning the remaining values in the range $\ell\leq L$ and
the current run of the internal loop may be stopped without any loss. However,
to avoid bias and maintain the Boltzmann distribution of the output, the
corresponding proposal
$\lambda\in\check{\varLambda}_{L}^{q}(\mathbin{\vbox{\hbox{\scalebox{0.5}{$\bullet$}}}}\mbox{$\>\\!$},m)$
must be accepted only if the remaining candidate parts in the range $\ell\leq
L$ were to be rejected by the respective Bernoulli checks. Since individual
such checks are mutually independent, their multitude can be replaced by a
single Bernoulli trial with the corresponding product probability of failure.
An additional benefit of such an aggregated Bernoulli check is that this will
reduce the number of calls of the random number generator and hence improve
the efficiency of the sampler.
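The aggregated check can be sketched as follows (a hypothetical helper, not part of Algorithm 2 as stated: `j_remaining` is the list of indices $j$ with $j^{q}\leq L$ that have not yet been tested when the length target is reached).
```python
import random

def accept_with_early_stop(q, z1, z2, j_remaining, rng=random.random):
    """Single aggregated Bernoulli trial replacing the individual checks of the
    remaining candidate parts: the proposal is accepted only if all of them
    would have been rejected, which happens with the product probability below."""
    p_all_reject = 1.0
    for j in j_remaining:
        w = z1 ** (j ** q) * z2
        p_all_reject *= 1.0 / (1.0 + w)   # rejection probability of part j^q
    return rng() < p_all_reject
```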
Another improvement of the code implementation in the multiple testing task
(T2) proceeds from the observation that the sequential procedure based on
separate testing of each target in the range $[n,\theta\kern 0.24005ptn]$ (see
Section 6.2.2) is apparently wasteful. Indeed, a partition of some
$k^{\prime}\in[n,\theta\kern 0.24005ptn]$ obtained whilst looking for
partitions of a different number $k$ would be discarded in that cycle of the
external loop, whereas keeping it would help to achieve success if an
earlier search with target $k^{\prime}$ has failed, or would save time on a
duplicate job when the algorithm moves to the new target $k^{\prime}$. In practice, all
partitions (at least, the new ones) obtained in every run of the external loop
should be stored as long as they fit into the range $[n,\theta\kern
0.24005ptn]$, thus leaving dynamically fewer targets to address.
For the sake of presentational clarity, Algorithm 2 embeds iterated calls of
the free sampler (Algorithm 1), but this means that the calibrating parameters
$z_{1}$ and $z_{2}$ are recalculated at every such call, which is of course
wasteful. This drawback can be easily amended by writing out the code
explicitly. Note, however, that such an improvement would have no significant
bearing on the asymptotic estimation of the code complexity.
#### 6.2.4 Complexity and performance
Building on the probabilistic analysis of the internal and external loops
carried out in Section 6.2.2, it is straightforward to estimate the _time
complexity_ of Algorithm 2, understood as the _expected number of elementary
runs to completion_.
Starting with the internal loop, in its crude (non-optimized) version each
internal run comprises $\lfloor(\theta\kern 0.35999ptn)^{1/q}\rfloor$ checks
of available parts $\ell\in\mathbb{N}^{q}$ not exceeding
$L^{*}\mbox{$\>\\!\\!$}=\theta\kern 0.35999ptn$. Combined with the estimate
(6.17) of the probability to collect $m$ parts in a single run and the
corresponding geometric distribution of the number of attempts, the complexity
of the internal loop is bounded by
$\mu_{\theta}^{-m}\mbox{$\>\\!$}m!\,\mathrm{e}^{\mbox{$\>\\!$}\mu_{\theta}}\mbox{$\>\\!$}(\theta\kern
0.35999ptn)^{1/q}.$ (6.38)
As for the external loop, its complexity depends on the task at hand. If
$\theta=1$ (which corresponds to task (T1) of exact sampling with the weight
target $N_{\lambda}=n$), then the time to completion, $T_{n}$, has geometric
distribution with parameter $p_{n}^{*}$ (see (6.20)). For simplicity dropping
a time bound $t^{*}$ (but still assuming that the space
$\check{\varLambda}^{q}(n,m)$ is non-empty, so that $p_{n}^{*}>0$), the
expected time to completion is given by
${\mathsf{E}}_{\bm{z}}(T_{n})=1/p_{n}^{*}$. With a time bound $t^{*}$, the
expectation is modified as follows,
$\displaystyle{\mathsf{E}}_{\bm{z}}(T_{n};T_{n}\leq t^{*})$
$\displaystyle=\sum_{t=1}^{\,t^{*}\\!}t\mbox{$\;\\!$}(1-p_{n}^{*})^{t-1}p_{n}^{*}+t^{*}(1-p_{n}^{*})^{t^{*}}=\frac{1-(1-p_{n}^{*})^{t^{*}}}{p_{n}^{*}}<\frac{1}{p_{n}^{*}}.$
(6.39)
However, the reduction in (6.39) is not significant, because under our
confidence-based choice of the time limit (see (6.23)), we always have
$(1-p_{n}^{*})^{t^{*}}\mbox{$\>\\!\\!$}\\!\leq\delta$. Thus, combining
formulas (6.38) and (6.39), the total complexity guarantee for task (T1) is
estimated by
$\frac{m!\,\mathrm{e}^{\mbox{$\>\\!$}\mu_{1}}}{\mu_{1}^{m}}\mbox{$\;\\!$}O\bigl{(}n^{1/q}\mbox{$\;\\!\\!$}/p_{n}^{*}\bigr{)},\qquad\mu_{1}=m\,G_{1/q}(m/q).$
(6.40)
Further specification depends on the informative lower bound for the
probability $p_{n}^{*}$. For example, a crude estimate (6.23) gives a more
explicit estimate for the complexity,
$\left(\frac{\exp\bigl{(}G_{1/q}(m/q)+1/q\bigr{)}\,\Gamma(1/q)}{q^{1-1/q}\mbox{$\>\\!$}m^{1+1/q}}\right)^{\mbox{$\>\\!\\!$}m}\mbox{$\;\\!$}O\bigl{(}n^{(m+1)/q}\bigr{)}.$
(6.41)
For $q=1$ and $q=2$, this estimate can be significantly improved by using the
asymptotically exact cardinalities (4.40) and (4.42), respectively, yielding
the estimates
$\frac{(m!)^{2}\,\mathrm{e}^{2m}}{m^{2m+1}}\,O(n^{2})=O(n^{2})$ (6.42)
and
$\frac{2^{m/2}\,m!\;\Gamma(m/2)\,\mathrm{e}^{3m/2}}{m^{3m/2}}\,O\bigl(n^{3/2}\bigr)=O(n^{3/2}).$ (6.43)
Interestingly, the asymptotic bounds (6.42) and (6.43) do not depend on the
number of parts $m$.
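One way to see this is that, by Stirling's formula, the prefactors in (6.42) and (6.43) converge to the constants $2\pi\approx 6.28$ and $2\sqrt{2}\,\pi\approx 8.89$, respectively; the following minimal check (Python) evaluates them for several $m$:

```python
from math import factorial, gamma, exp, sqrt, pi

def prefactor_642(m):
    # Prefactor in (6.42): (m!)^2 e^{2m} / m^{2m+1}
    return factorial(m) ** 2 * exp(2 * m) / m ** (2 * m + 1)

def prefactor_643(m):
    # Prefactor in (6.43): 2^{m/2} m! Gamma(m/2) e^{3m/2} / m^{3m/2}
    return 2 ** (m / 2) * factorial(m) * gamma(m / 2) * exp(1.5 * m) / m ** (1.5 * m)

for m in (5, 10, 20, 50):
    print(m, round(prefactor_642(m), 3), round(prefactor_643(m), 3))

print(2 * pi, 2 * sqrt(2) * pi)   # Stirling limits: ≈ 6.283 and ≈ 8.886
```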
For task (T2) (with some $\theta>1$), the above estimates just need to be
multiplied by the number of targeted weights, $\lfloor(\theta-1)\,n\rfloor+1=O(n)$.
Finally, for task (T3) we can use formula (6.40), but with $\mu_{1}$ changed to
$\mu_{\theta}$ (see (6.38)) and with the probability $p_{n}^{*}$ of success in a
single attempt replaced by the (asymptotic) probability (6.35) of at least one
success in the range $[n,\theta n]$, yielding
$\frac{m!\,\exp\bigl(m\,G_{1/q}(\theta m/q)\bigr)}{m^{m}\,\bigl(G_{m/q}(\theta m/q)-G_{m/q}(m/q)\bigr)}\,O(n^{1/q})=O\bigl(m\,n^{1/q}\bigr).$
Table 4: Confident time thresholds $t^{*}$ for the external loop in tasks
(T1), (T2) and (T3), calculated for $q=1$ and $q=2$ with various values of
confidence tolerance $\delta$ using formulas (6.26), (6.31) and (6.37). In
both cases, the chosen values of $n$ and $m$ yield $\kappa=m^{q+1}/n=0.01$.
For tasks (T2) and (T3), the testing range is set with the factor
$\theta=1.1$. For comparison, corrected values $\tilde{t}^{*}$ for tasks (T1)
and (T2) are calculated from formulas (6.27), (6.28) and (6.32), (6.33),
respectively.

$q=1$, $n=2{,}500$, $m=5$

$\delta$ | $t_{n}^{*}$ (T1) | $\tilde{t}_{n}^{*}$ (T1) | $t_{n}^{*}$ (T2) | $\tilde{t}_{n}^{*}$ (T2) | $t^{*}$ (T3)
---|---|---|---|---|---
$0.1$ | $8.603518\cdot 10^{13}$ | $6{,}343.202$ | $2.923424\cdot 10^{14}$ | $21{,}553.82$ | $26.01955$
$0.01$ | $1.720704\cdot 10^{14}$ | $12{,}686.40$ | $3.783776\cdot 10^{14}$ | $27{,}897.02$ | $52.03911$
$0.001$ | $2.581056\cdot 10^{14}$ | $19{,}029.61$ | $4.644128\cdot 10^{14}$ | $34{,}240.22$ | $78.05866$
$0.0001$ | $3.441407\cdot 10^{14}$ | $25{,}372.81$ | $5.504479\cdot 10^{14}$ | $40{,}583.43$ | $104.0782$

$q=2$, $n=12{,}500$, $m=5$

$\delta$ | $t_{n}^{*}$ (T1) | $\tilde{t}_{n}^{*}$ (T1) | $t_{n}^{*}$ (T2) | $\tilde{t}_{n}^{*}$ (T2) | $t^{*}$ (T3)
---|---|---|---|---|---
$0.1$ | $198{,}687{,}146$ | $41{,}485.61$ | $814{,}003{,}357$ | $169{,}962.8$ | $34.94282$
$0.01$ | $397{,}374{,}292$ | $82{,}971.22$ | $1{,}012{,}690{,}503$ | $211{,}448.4$ | $69.88564$
$0.001$ | $596{,}061{,}438$ | $124{,}456.8$ | $1{,}211{,}377{,}649$ | $252{,}934.0$ | $104.8285$
$0.0001$ | $794{,}748{,}583$ | $165{,}942.4$ | $1{,}410{,}064{,}795$ | $294{,}419.7$ | $139.7713$
To evaluate the real-time performance of the rejection sampler, we first need to
take a practical look at the censoring time limits $t^{*}$ in tasks (T1)–(T3)
proposed in Section 6.2.2. These are numerically illustrated in Table 4 for
$q=1$ and $q=2$, with various values of $n$ and $m$. Observe that the crude
bounds for tasks (T1) and (T2) calculated via formulas (6.26) and (6.31)
appear to be very high, especially for $q=1$ (of order $10^{14}$), casting
doubt on whether such limits are usable. In real terms, since each run of the
external loop is a simple check if $N_{\lambda}=n$ (see line 2 in Algorithm
2), we can assume for simplicity that it needs a single tick of the CPU clock.
If the algorithm is executed on a contemporary mid-range desktop PC (say, with
processor base frequency 3.30 GHz, which we used) then, under the estimate
(6.26) for task (T1) with $q=1$ and a fairly low confidence tolerance
$\delta=0.001$, the external loop alone may require up to $2.581056\cdot
10^{14}/(3.30\cdot 10^{9}\cdot 60\cdot 60)\doteq 21.72606\approx 22$ hours
until completion, which is unpleasantly long but not entirely unrealistic.
This estimate drops dramatically for $q=2$ to less than $1$ second. A steep
decreasing trend continues with larger powers: for example, for $q=3$,
$n=62{,}500$, $m=5$ and $\delta=0.001$, formula (6.26) gives
$t_{n}^{*}=3{,}358{,}531$, leading to a maximum execution time of up to
$0.001$ second. (In fact, keeping $m$ and $\kappa=m^{q+1}/n$ fixed, formulas
(6.24) and (6.26) yield $\lim_{q\to\infty}t_{n}^{*}=(m^{m}/m!)\log(1/\delta)$;
for $m=5$ as in Table 4 and $\delta=0.001$, this limiting value specializes to
$179.8895$.) Thus, the sampler becomes
progressively more efficient for larger $q$, even under a crude time bound. On
the other hand, as pointed out in Remarks 6.2 and 6.3, additional information
about the size of the corresponding partition spaces would allow a significant
reduction of the estimated bound as illustrated in Table 4 (by a factor
$10^{10}$ for $q=1$ and about $4{,}790$ for $q=2$).
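For concreteness, the tick-to-wall-time conversion used in these estimates can be reproduced in a few lines (Python; the 3.30 GHz base frequency and the iteration counts are those quoted above):

```python
# Convert an external-loop iteration bound into wall-clock time, assuming one
# CPU tick per iteration at a 3.30 GHz base frequency.
CLOCK_HZ = 3.30e9

def wall_time_hours(iterations):
    return iterations / CLOCK_HZ / 3600

print(wall_time_hours(2.581056e14))   # ≈ 21.7 hours (crude bound, q=1, delta=0.001)
print(wall_time_hours(596_061_438))   # ≈ 5.0e-5 hours, i.e. well under a second (q=2)
```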
Let us now look at the real-time computational cost due to the internal loop.
As mentioned before (cf. (6.38)), the expected number of internal runs until
collecting exactly $m$ parts is asymptotically given by
$\mu_{1}^{-m}\,m!\,\mathrm{e}^{\mu_{1}}$ (with $\theta=1$), where
$\mu_{1}=m\,G_{1/q}(m/q)$. Using for numerical illustration the same values of
$q$, $n$ and $m$ as in Table 4, this formula yields $5.6993$ ($q=1$) and
$5.7043$ ($q=2$). The average computing time for each such attempt is
inversely proportional to the CPU base frequency (such as
3.30 GHz), but it involves many other important aspects such as the
operational efficiency of a random number generator, design of memory
allocation and data storage, numerical precision, coding implementation and
compiler used, and the overall architecture of the computer (e.g., the number
of cores and whether or not parallel processing was utilized). Thus, it is
impossible to estimate the actual computing time without real benchmarking.
To test the performance of the internal loop, we implemented the algorithm on
a desktop CPU as described at the beginning of Section 6, for simplicity using
a single core. Since internal runs are independent of one another and the
computational overheads of a multi-core design are negligible, we can simply
divide the average execution time on a single core by the number of cores at
our disposal.
With the same values of $q$, $n$ and $m$ used above and in Table 4 (and with
$\theta=1$), the average number of sampling attempts (starting at line 2 of
Algorithm 2) was $5.6956$ for $q=1$ and $5.6404$ for $q=2$; note that these
sample averages match the expected values calculated above. Furthermore, the
program took on average $2.1036\cdot 10^{-3}$ seconds ($q=1$) and
$0.9780\cdot 10^{-4}$ seconds ($q=2$) per single successful completion of the
internal loop. The corresponding number of ticks of the CPU clock per
elementary check of a candidate part $\ell\leq n$ (see formula (6.38)) is
evaluated as $(2.1036\cdot 10^{-3}/(5.6956\cdot 2{,}500))\cdot 3.30\cdot
10^{9}\doteq 487.5258$ ($q=1$) and $(0.9780\cdot
10^{-4}/(5.6404\,\sqrt{12{,}500}))\cdot 3.30\cdot 10^{9}\doteq 511.7854$
($q=2$), so it stays in the range of about $450$–$550$.
However, there is a problem: if we combine the physical times benchmarked for
the internal loop with the time bounds $t^{*}_{n}$ for the external loop given
in Table 4 (say, with tolerance $\delta=0.001$), then for $q=1$ we obtain, by
converting seconds to minutes, hours, days and years, $2.1036\cdot
10^{-3}\cdot 2.581056\cdot 10^{14}/(60\cdot 60\cdot 24\cdot 365)\approx
17{,}217$ years (!), which is clearly impractical. For $q=2$, a similar
calculation gives a more reasonable estimate, $0.9780\cdot 10^{-4}\cdot
596{,}061{,}438/(60\cdot 60)\approx 16$ hours. But with the improved time
bounds $\tilde{t}_{n}^{*}$ (see Table 4), we obtain much more satisfactory
estimates, $2.1036\cdot 10^{-3}\cdot 19{,}029.61\approx 40$ seconds ($q=1$)
and $0.9780\cdot 10^{-4}\cdot 124{,}456.8\approx 12$ seconds ($q=2$).
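These back-of-the-envelope figures can be reproduced directly (Python; the per-loop timings are the benchmarked averages quoted above, and the iteration counts are taken from Table 4):

```python
# Total wall-clock time = (seconds per successful internal loop) x (number of external runs).
def total_time_seconds(sec_per_internal_loop, external_runs):
    return sec_per_internal_loop * external_runs

# Crude bounds t_n^* (delta = 0.001, Table 4):
print(total_time_seconds(2.1036e-3, 2.581056e14) / (3600 * 24 * 365))  # ≈ 17,217 years (q=1)
print(total_time_seconds(0.9780e-4, 596_061_438) / 3600)               # ≈ 16 hours (q=2)

# Improved bounds t~_n^*:
print(total_time_seconds(2.1036e-3, 19_029.61))   # ≈ 40 seconds (q=1)
print(total_time_seconds(0.9780e-4, 124_456.8))   # ≈ 12 seconds (q=2)
```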
As a concluding remark, Algorithm 2 could be used as an experimental tool for
finding suitable instances in additive problems of number theory, such as
variants of the Waring problem. Here, iterated sampling attempts to find an
instance subject to certain constraints, in the absence of prior knowledge of
whether such instances exist at all, may be interpreted as statistical testing
of existence as the null hypothesis, under which a suitable confident time
limit $t^{*}$ can be determined. The sampling approach based on bounded
(although high) confidence bears similarity to primality testing procedures
such as the Miller–Rabin algorithm [66]. It can also be helpful in the
verification of conjectures about the density of representable numbers via an
experimental analysis of the success rates. We will address such applications
in another paper.
## Acknowledgements
J.C.P. was supported by an EPSRC Doctoral Training Partnership scholarship at
the School of Mathematics, University of Leeds. The authors have benefited
from the useful discussions with Yuri V. Yakubovich (Saint Petersburg).
## References
* [3] Agarwala, B.K. and Auluck, F.C. Statistical mechanics and partitions into non-integral powers of integers. _Math. Proc. Cambridge Philos. Soc._ 47 (1951), 207–216. (doi:10.1017/S0305004100026505; MR0042998)
* [4] Andrews, G.E. The Theory of Partitions. Encyclopedia of Mathematics and its Applications, 2, G.-C. Rota, ed. Addison-Wesley, Reading, MA, 1976; reprinted by Cambridge University Press (Cambridge Mathematical Library), Cambridge, 1984. (doi:10.1017/CBO9780511608650; MR0557013)
* [5] Arratia, R., Barbour, A.D. and Tavaré, S. _Logarithmic Combinatorial Structures: A Probabilistic Approach_. EMS Monographs in Mathematics. European Mathematical Society, Zürich, 2003. (doi:10.4171/000; MR2032426)
* [6] Arratia, R. and Tavaré, S. Independent process approximations for random combinatorial structures. _Adv. Math._ 104 (1994), 90–154. (doi:10.1006/aima.1994.1022; MR1272071)
* [7] Auluck, F.C. and Kothari, D.S. Statistical mechanics and the partitions of numbers. Proc. Cambridge Philos. Soc. 42 (1946), 272–277. (doi:10.1017/S030500410002303; MR0017682)
* [8] Barbour, A. and Hall, P. On the rate of Poisson convergence. _Math. Proc. Cambridge Philos. Soc._ 95 (1984), 473–480. (doi:10.1017/S0305004100061806; MR0755837)
* [9] Barbour, A.D., Holst, L. and Janson, S. Poisson Approximation. Oxford Studies in Probability, 2. Oxford Science Publications. Clarendon Press, Oxford, 1992. (MR1163825)
* [10] Bendkowski, M., Bodini, O. and Dovgal, S. Tuning as convex optimisation: A polynomial tuner for multi-parametric combinatorial samplers. _Combin. Probab. Comput._ 31 (2022), 765–811. (doi:10.1017/S0963548321000547; MR4472289)
* [11] Bernstein, M., Fahrbach, M. and Randall, D. Analyzing Boltzmann samplers for Bose–Einstein condensates with Dirichlet generating functions. In _2018 Proceedings of the Meeting on Analytic Algorithmics and Combinatorics (ANALCO)_ , M. Nebel and S. Wagner, eds. SIAM, Philadelphia, PA, 2018, pp. 107–117. (doi:10.1137/1.9781611975062.10; MR3773639)
* [12] Bodini, O., Duchon, P., Jacquot, A. and Mutafchiev, L. Asymptotic analysis and random sampling of digitally convex polyominoes. In DGCI 2013: Discrete Geometry for Computer Imagery, Proceedings of the 17th IAPR International Conference (Seville, March 20–22, 2013), R. Gonzalez-Diaz et al., eds. Lecture Notes in Computer Science, 7749. Springer, Berlin, 2013, pp. 95–106. (doi:10.1007/978-3-642-37067-0_9; MR3155272)
* [13] Bogachev, L.V. Unified derivation of the limit shape for multiplicative ensembles of random integer partitions with equiweighted parts. _Random Struct. Algorithms_ 47 (2015), 227–266. (doi:10.1002/rsa.20540; MR3382672)
* [14] Bogachev, L.V. and Su, Z. Gaussian fluctuations of Young diagrams under the Plancherel measure. _Proc. Royal Soc. Ser. A_ 463 (2007), 1069–1080. (doi:10.1098/rspa.2006.1808; MR2310137)
* [15] Bogachev, L.V. and Yakubovich, Yu.V. Limit shape of minimal difference partitions and fractional statistics. _Comm. Math. Phys._ 373 (2020), 1085–1131. (doi:10.1007/s00220-019-03513-5; MR4061406)
* [16] Bogachev, L.V. and Zarbaliev, S.M. Universality of the limit shape of convex lattice polygonal lines. _Ann. Probab._ 39 (2011), 2271–2317. (doi:10.1214/10-AOP607; MR2932669)
* [17] Borodin, A., Okounkov, A. and Olshanski, G. Asymptotics of Plancherel measures for symmetric groups. _J. Amer. Math. Soc._ 13 (2000), 481–515. (doi:10.1090/S0894-0347-00-00337-4, MR1758751)
* [18] Bureaux, J. and Enriquez, N. Asymptotics of convex lattice polygonal lines with a constrained number of vertices. _Isr. J. Math._ 222 (2017), 515–549. (https://doi.org/10.1007/s11856-017-1599-3; MR3722260)
* [19] Comtet, A., Leboeuf, P. and Majumdar, S.N. Level density of a Bose gas and extreme value statistics. Phys. Rev. Lett. 98 (2007), 070404, 1–4. (doi:10.1103/PhysRevLett.98.070404)
* [20] Comtet, A., Majumdar, S.N., Ouvry, S. and Sabhapandit, S. Integer partitions and exclusion statistics: Limit shapes and the largest parts of Young diagrams. _J. Stat. Mech. Theory Exp._ (2007), no. 10, P10001, 1–13. (doi:10.1088/1742-5468/2007/10/P10001; MR2358050)
* [21] Cramér, H. Mathematical Methods of Statistics. Princeton Mathematical Series, 9. Princeton University Press, Princeton, NJ, 1946\. (https://www.jstor.org/stable/j.ctt1bpm9r4; MR0016588)
* [22] De Gregorio, P. and Rondoni, L. Microcanonical entropy, partitions of a natural number into squares and the Bose–Einstein gas in a box. _Entropy_ 20 (2018), 645\. (doi:10.3390/e20090645)
* [23] Duchon, P., Flajolet, P., Louchard, G. and Schaeffer, G. Boltzmann samplers for the random generation of combinatorial structures. _Combin. Probab. Comput._ 13 (2004), 577–625. (doi:10.1017/S0963548304006315; MR2095975)
* [24] Erdős, P. and Lehner, J. The distribution of the number of summands in the partitions of a positive integer. _Duke Math. J._ 8 (1941), 335–345. (doi:10.1215/S0012-7094-41-00826-8; MR0004841)
* [25] Erdős, P. and Szalay, M. On the statistical theory of partitions. In _Topics in Classical Number Theory, Vol. I (Budapest, 1981)_ , G. Halász, ed., pp. 397–450. Colloquia Mathematica Societatis János Bolyai, 34. North-Holland, Amsterdam, 1984. (MR0781149)
* [26] Erdős, P. and Turán, P. On some general problems in the theory of partitions, I. _Acta Arith._ 18 (1971), 53–62. (doi:10.4064/aa-18-1-53-62; MR0289446)
* [27] Erlihson, M.M. and Granovsky, B.L. Limit shapes of Gibbs distributions on the set of integer partitions: The expansive case. _Ann. Inst. Henri Poincaré Probab. Stat._ 44 (2008), 915–945. (https://doi.org/10.1214/07-AIHP129; MR2453776)
* [28] Ewens, W.J. The sampling theory of selectively neutral alleles. _Theoret. Population Biol._ 3 (1972), 87–112. (doi:10.1016/0040-5809(72)90035-4; MR0325177)
* [29] Feller, W. An Introduction to Probability Theory and Its Applications, Volume II, 2nd ed. Wiley Series in Probability and Mathematical Statistics. Wiley, New York, 1971. (MR0270403)
* [30] Flajolet, P., Fusy, É. and Pivoteau, C. Boltzmann sampling of unlabelled structures. In _2007 Proceedings of the Workshop on Analytic Algorithmics and Combinatorics (ANALCO)_ , D. Panario and R. Sedgewick, eds. SIAM, Philadelphia, PA, 2007, pp. 201–211. (doi:10.1137/1.9781611972979.5; MR2498128)
* [31] Freiman, G.A. and Granovsky, B.L. Clustering in coagulation-fragmentation processes, random combinatorial structures and additive number systems: asymptotic formulae and limiting laws. _Trans. Amer. Math. Soc._ 357 (2005), 2483–2507. (doi:10.1090/S0002-9947-04-03617-7; MR2140447)
* [32] Freiman, G.A. and Yudin, A.A. The interface between probability theory and additive number theory (local limit theorems and structure theory of set addition). In _Representation Theory, Dynamical Systems, and Asymptotic Combinatorics_ , V. Kaimanovich and A. Lodkin, eds. _Amer. Math. Soc. Transl. Ser. 2_ 217 (2006), 51–72. American Mathematical Society, Providence, RI. (doi:10.1090/trans2/217/05/trans2217-05.pdf; MR2276101)
* [33] Fristedt, B. The structure of random partitions of large integers. _Trans. Amer. Math. Soc._ 337 (1993), 703–735. (doi:10.1090/S0002-9947-1993-1094553-1; MR1094553)
* [34] Fulton, W. Young Diagrams, with Applications to Representation Theory and Geometry. London Mathematical Society Student Texts, 35. Cambridge University Press, Cambridge, 1997\. (doi:10.1017/CBO9780511626241, MR1464693)
* [35] Garibaldi, U. and Scalas, E. _Finitary Probabilistic Methods in Econophysics._ Cambridge University Press, Cambridge, 2010. (doi:10.1017/CBO9780511777585; MR2839542)
* [36] Goh, W.M.Y. and Hitczenko, P. Random partitions with restricted part sizes. _Random Struct. Algorithms_ 32 (2008), 440–462. (doi:10.1002/rsa.20191; MR2422389)
* [37] Greiner, W., Neise, L. and Stöcker, H. _Thermodynamics and Statistical Mechanics._ Classical Theoretical Physics. Springer, New York, 1995. (doi:10.1007/978-1-4612-0827-3)
* [38] Guy, R.K. _Unsolved Problems in Number Theory_ , 3rd ed. Problem Books in Mathematics. Springer, New York, 2003. (doi:10.1007/978-0-387-26677-0; MR2076335)
* [39] Hardy, G.H. and Ramanujan, S. Asymptotic formulæ in combinatory analysis. _Proc. London Math. Soc. (2)_ 17 (1918), 75–115. (doi:10.1112/plms/s2-17.1.75; MR1575586). Also available online at http://ramanujan.sirinudi.org/Volumes/published/ram36.pdf (accessed 26.11.2022).
* [40] Hardy, G.H. and Wright, E.M. _An Introduction to the Theory of Numbers_ , 6th ed. Revised by D. R. Heath-Brown and J. H. Silverman, with a foreword by A. Wiles. Oxford University Press, Oxford, 2008. (Publisher’s web page; MR2445243)
* [41] Haselgrove, C.B. and Temperley, H.N.V. Asymptotic formulae in the theory of partitions. _Proc. Cambridge Philos. Soc._ 50 (1954), 225–241. (doi:10.1017/S0305004100029273; MR0062782)
* [42] Hohloch, S. Optimal transport and integer partitions. _Discrete Appl. Math._ 190/191 (2015), 75–85. (doi:10.1016/j.dam.2015.04.002; MR3351722)
* [43] Horn, R.A. and Johnson, C.R. Matrix Analysis, 2nd ed. Cambridge University Press, Cambridge, 2013. (doi:10.1017/9781139020411; MR2978290)
* [44] Hua, L.-K. On the number of partitions of a number into unequal parts. _Trans. Amer. Math. Soc._ 51 (1942), 194–201. (doi:10.2307/1989985; MR0006195)
* [45] Huang, K. _Statistical Mechanics_ , 2nd ed. Wiley, New York, 1987. (MR1042093)
* [46] Hwang, H.-K. Limit theorems for the number of summands in integer partitions. _J. Combin. Theory Ser. A_ 96 (2001), 89–126. (doi:10.1006/jcta.2000.3170; MR1855788)
* [47] Ingham, A.E. A Tauberian theorem for partitions. _Ann. Math. (2)_ 42 (1941), 1075–1090. (doi:10.2307/1970462; MR0005522)
* [48] Khinchin, A.I. _Mathematical Foundations of Statistical Mechanics_. Translated from Russian by G. Gamow. Dover Publications, New York, 1949. Available online at https://archive.org/details/khinchin-mathematical-foundations-of-statistical-mechanics (accessed 05.12.2022). (MR0029808)
* [49] Khinchin, A.Y. _Mathematical Foundations of Quantum Statistics_. Translation from the Russian edition (1951), I. Shapiro, ed. Graylock Press, Albany, NY, 1960. Available online at https://archive.org/details/khinchin-mathematical-foundations-of-quantum-statistics (accessed 05.12.2022). (MR0111217)
* [50] Kingman, J.F.C. Random partitions in population genetics. _Proc. Royal Soc. London Ser. A_ 36 (1978), 11–20. (doi:10.1098/rspa.1978.0089; MR0526801)
* [51] Kolchin, V.F., Sevast’yanov, B.A. and Chistyakov, V.P. _Random Allocations._ Translated from the Russian edition (1976), A. V. Balakrishnan, ed. Scripta Series in Mathematics. V. H. Winston & Sons/Scripta Technica, Inc., Washington, D.C., 1978 [distributed by Halsted Press/Wiley, New York]. (MR0471016)
* [52] Landau, E. Über die Einteilung der positiven ganzen Zahlen in vier Klassen nach der Mindestzahl der zu ihrer additiven Zusammensetzung erforderlichen Quadrate. (German) [On the division of positive integers into four classes according to the minimum number of squares required for their additive composition]. Arch. Math. Phys. (3) 13 (1908), 305–312. Available online at https://ia600309.us.archive.org/31/items/archivdermathem37unkngoog/archivdermathem37unkngoog.pdf (accessed 28.10.2022).
* [53] Lee, D.V. The asymptotic distribution of the number of summands in unrestricted $\varLambda$-partitions. _Acta Arith._ 65 (1993), 29–43. (doi:10.4064/aa-65-1-29-43; MR1239241)
* [54] Lindsay, R.B. (XXII) A modification of Brillouin’s unified statistics. _Lond. Edinb. Dublin Philos. Mag. J. Sci. Ser. 7_ 17 (1934), 264–271. (doi:10.1080/14786443409462389)
* [55] Lipnik, G.F., Madritsch, M.G. and Tichy, R.F. A central limit theorem for integer partitions into small powers. Preprint (2022), arXiv:2204.05592. Available online at https://arxiv.org/abs/2204.05592 (accessed 08.12.2022).
* [56] Loève, M. _Probability Theory I_ , 4th ed. Graduate Texts in Mathematics, 45. Springer, New York, 1977. (doi:10.1007/978-1-4684-9464-8; MR0651017)
* [57] Logan, B.F. and Shepp, L.A. A variational problem for random Young tableaux. _Adv. in Math._ 26 (1977), 206–222. (doi:10.1016/0001-8708(77)90030-5; MR1417317)
* [58] Meinardus, G. Asymptotische Aussagen über Partitionen. (German) [Asymptotic statements about partitions] _Math. Z._ 59 (1954), 388–398. (doi:10.1007/BF01180268; MR0062781)
* [59] _NIST Handbook of Mathematical Functions_ , F. W. J. Olver et al., eds. National Institute of Standards and Technology, U.S. Department of Commerce, Washington, DC; Cambridge University Press, Cambridge, 2010. (https://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521192255; MR2723248) Online version: _NIST Digital Library of Mathematical Functions_ , Release 1.1.7 of 2022-10-15, https://dlmf.nist.gov (accessed 28.10.2022).
* [60] Novak, S.Y. Poisson approximation. _Probab. Surveys_ 16 (2019), 228–276. (doi:10.1214/18-PS318; MR3992498)
* [61] Nuermaimaiti, R., Bogachev, L.V. and Voss, J. A generalized power law model of citations. In _18th International Conference on Scientometrics & Informetrics, ISSI 2021 (12–15 July 2021, KU Leuven, Belgium), Proceedings_, W. Glänzel et al., eds. International Society for Scientometrics and Informetrics (I.S.S.I.), 2021, pp. 843–848. Available online at https://issi2021.org/proceedings/ (accessed 08.12.2022).
* [62] Pak, I. The nature of partition bijections I. Involutions. _Adv. in Appl. Math._ 33 (2004), 263–289. (doi:10.1016/j.aam.2003.08.007; MR2074399)
* [63] Pitman, J. _Combinatorial Stochastic Processes. (Ecole d’Eté de Probabilités de Saint-Flour XXXII – 2002)_ , J. Picard, ed. Lecture Notes in Mathematics, 1875. Springer, Berlin, 2006. (doi:10.1007/b11601500; MR2245368)
* [64] Pittel, B. On a likely shape of the random Ferrers diagram. Adv. in Appl. Math. 18 (1997), 432–488. (doi:10.1006/aama.1996.0523; MR1445358)
* [65] Polcari, J. An informative interpretation of decision theory: the information theoretic basis for signal-to-noise ratio and log likelihood ratio. _IEEE Access_ 1 (2013), 509–522. (doi:10.1109/ACCESS.2013.2277930)
* [66] Rabin, M.O. Probabilistic algorithm for testing primality. J. Number Theory 12 (1980), 128–138. (doi:10.1016/0022-314X(80)90084-0; MR0566880)
* [67] Resnick, S.I. Extreme Values, Regular Variation and Point Processes. Applied Probability. A Series of the Applied Probability Trust, 4. Springer, New York, 1987. (doi:10.1007/978-0-387-75953-1; MR0900810)
* [68] Roccia, J. and Leboeuf, P. Level density of a Fermi gas and integer partitions: A Gumbel-like finite-size correction. _Phys. Rev. C_ 81 (2010), 044301. (doi:10.1103/PhysRevC.81.044301)
* [69] Shiryaev, A.N. _Probability_ , 2nd ed. Graduate Texts in Mathematics, 95. Springer, New York, 1996. (doi:10.1007/978-1-4757-2539-1; MR1368405)
* [70] Sinaĭ, Ya.G. A probabilistic approach to the analysis of the statistics of convex polygonal lines. (Russian) _Funktsional. Anal. i Prilozhen._ 28 (1994), no. 2, 41–48. English translation: Probabilistic approach to the analysis of statistics for convex polygonal lines. _Funct. Anal. Appl._ 28 (1994), 108–113. (doi:10.1007/BF01076497; MR1283251)
* [71] Su, Z. _Random Matrices and Random Partitions: Normal Convergence._ World Scientific Series on Probability Theory and Its Applications, 1. World Scientific, Hackensack, NJ, 2015. (doi:10.1142/9197, MR3381296)
* [72] Temperley, H.N.V. Statistical mechanics and the partition of numbers, I. The transition of liquid helium. _Proc. Royal Soc. London Ser. A_ 199 (1949), 361–375. (doi:10.1098/rspa.1949.0143; MR0037247)
* [73] Temperley, H.N.V. Statistical mechanics and the partition of numbers, II. The form of crystal surfaces. Proc. Cambridge Philos. Soc. 48 (1952), 683–697. (doi:10.1017/S0305004100076453; MR0053036)
* [74] Uspensky, J.V. Asymptotic expressions of numerical functions occurring in problems of partition of numbers. (Russian) _Bull. Acad. Sci. Rus. Ser. VI_ 14 (1920), 199–218. Available online at https://mi.mathnet.ru/eng/izv/v14/p199 (accessed 08.12.2022). (zbMATH JFM 48.1162.01)
* [75] Vaughan, R.C. Squares: Additive questions and partitions. _Int. J. Number Theory_ 11 (2015), 1367–1409. (doi:10.1142/S1793042115400096; MR3376217)
* [76] Vaughan, R.C. and Wooley, T.D. Waring’s problem: A survey. _Number Theory for the Millennium, III (Urbana, IL, 2000)_ , M. A. Bennett et al., eds. A K Peters, Natick, MA, 2002, pp. 301–340. (MR1956283) Reprinted in _Surveys in Number Theory: Papers from the Millennial Conference on Number Theory_. CRC Press, Boca Raton, FL, 2003, pp. 285–324. (doi:10.1201/9780429258978)
* [77] Vershik, A.M. Asymptotic combinatorics and algebraic analysis. In _Proceedings of the International Congress of Mathematicians (Zürich, 1994)_ , Vol. 2, S. D. Chatterji, ed. Birkhäuser, Basel, 1995, pp. 1384–1394. (doi:10.1007/978-3-0348-9078-6_133; MR1404040)
* [78] Vershik, A.M. Statistical mechanics of combinatorial partitions, and their limit shapes. (Russian) _Funktsional. Anal. i Prilozhen._ 30 (1996), no. 2, 19–39. English translation: _Funct. Anal. Appl._ 30 (1996), 90–105. (doi:10.1007/BF02509449; MR1402079)
* [79] Vershik, A.M. Limit distribution of the energy of a quantum ideal gas from the viewpoint of the theory of partitions of natural numbers. (Russian) Uspekhi Mat. Nauk 52(2) (1997), 139–146; English translation: Russian Math. Surveys 52 (1997), 379–386. (doi:10.1070/RM1997v052n02ABEH001782; MR1480142)
* [80] Vershik, A.M., Freĭman, G.A. and Yakubovich, Yu.V. A local limit theorem for random partitions of natural numbers. (Russian) _Teor. Veroyatnost. i Primenen._ 44 (1999), 506–525; English translation: Freiman, G., Vershik, A.M. and Yakubovich, Yu.V. A local limit theorem for random strict partitions._Theory Probab. Appl._ 44 (2000), 453–468. (doi:10.1137/S0040585X97977719; MR1805818)
* [81] Vershik, A.M. and Kerov, S.V. Asymptotics of the Plancherel measure of the symmetric group and the limit form of Young tableaux. (Russian) Dokl. Akad. Nauk SSSR 233 (1977), 1024–1027; available online at http://mi.mathnet.ru/eng/dan/v233/i6/p1024 (accessed 08.12.2022); English translation: Soviet Math. Doklady 18 (1977), 527–531. (MR0480398)
* [82] Vershik, A.M. and Kerov, S.V. Asymptotic behavior of the maximum and generic dimensions of irreducible representations of the symmetric group. (Russian) Funktsional. Anal. i Prilozhen. 19 (1985), no. 1, 25–36; English translation: Funct. Anal. Appl. 19 (1985), 21–31. (doi:10.1007/BF01086021; MR0783703)
* [83] Vershik, A. and Yakubovich, Yu. The limit shape and fluctuations of random partitions of naturals with fixed number of summands. _Mosc. Math. J._ 1 (2001), 457–468. (doi:10.17323/1609-4514-2001-1-3-457-468; MR1877604)
* [84] Vershik, A. and Yakubovich, Yu. Fluctuations of the maximal particle energy of the quantum ideal gas and random partitions. _Comm. Math. Phys._ 261 (2006), 759–769. (doi:10.1007/s00220-005-1434-2; MR2197546)
* [85] Whittaker, E.T. and Watson, G.N. A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions, reprint of the 4th ed. (1927). Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1996. (doi:10.1017/CBO9780511608759; MR1424469)
* [86] Wright, E.M. Asymptotic partition formulae, III. Partitions into $k$-th powers. _Acta Math._ 63 (1934), 143–191. (doi:10.1007/BF02547353; MR1555393)
* [87] Wright, E.M. The asymptotic expansion of the generalized Bessel function. _Proc. London Math. Soc. (2)_ 38 (1935), 257–270. (doi:10.1112/plms/s2-38.1.257; MR1576315)
* [88] Yakubovich, Yu. Ergodicity of multiplicative statistics. J. Combin. Theory Ser. A 119 (2012), 1250–1279. (doi:10.1016/j.jcta.2012.03.002; MR2915644)
* [89] Yeh, J. Martingales and Stochastic Analysis. Series on Multivariate Analysis, 1. World Scientific, Singapore, 1995. (doi:10.1142/2948; MR1412800)
* [90] Yong, A. Critique of Hirsch’s citation index: A combinatorial Fermi problem. _Notices Amer. Math. Soc._ 61 (2014), 1040–1050. (doi:10.1090/noti1164; MR3241560)
# Optimizing Gravitational-Wave Detector Design for Squeezed Light
Jonathan W. Richardson <EMAIL_ADDRESS> (Department of Physics and Astronomy, University of California, Riverside, Riverside, CA 92521, USA)
Swadha Pandey (Department of Physics, Indian Institute of Technology Kanpur, Kanpur, UP 208016, India)
Edita Bytyqi (Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027, USA)
Tega Edo (LIGO Laboratory, California Institute of Technology, Pasadena, CA 91125, USA)
Rana X. Adhikari (LIGO Laboratory, California Institute of Technology, Pasadena, CA 91125, USA)
###### Abstract
Achieving the quantum noise targets of third-generation detectors will require
10 dB of squeezed-light enhancement as well as megawatt laser power in the
interferometer arms—both of which require unprecedented control of the
internal optical losses. In this work, we present a novel optimization
approach to gravitational-wave detector design aimed at maximizing the
robustness to common, yet unavoidable, optical fabrication and installation
errors, which have caused significant loss in Advanced LIGO. As a proof of
concept, we employ these techniques to perform a two-part optimization of the
LIGO A+ design. First, we optimize the arm cavities for reduced scattering
loss in the presence of point absorbers, as currently limit the operating
power of Advanced LIGO. Then, we optimize the signal recycling cavity for
maximum squeezing performance, accounting for realistic errors in the
positions and radii of curvature of the optics. Our findings suggest that
these techniques can be leveraged to achieve substantially greater quantum
noise performance in current and future gravitational-wave detectors.
## I Introduction
In the last six years, Advanced LIGO and Virgo have established gravitational
waves as a new observational probe of the Universe. With projected
improvements in gravitational-wave detector sensitivities, new tests of
gravity, cosmology, and dense nuclear matter will become possible within the
next decade. Higher sensitivity in the 200 Hz–1 kHz band will resolve the
ringdown radiation of newly coalesced black holes, detecting or constraining
potential quantum modifications at the event horizon [1, 2, 3]. Higher
sensitivity in the 1.5–5 kHz band will resolve binary neutron star mergers to
the moment of coalescence, illuminating the neutron star equation of state [4,
5]. More frequent detections of binary neutron star mergers will also enable
independent measurement of the Hubble constant to high precision [6],
addressing the growing tension between cosmic microwave background and local
distance ladder measurements [7].
The LIGO detectors are sensitive to gravitational waves in a broad frequency
band ranging from 20 Hz to 5 kHz. Across this band, the limiting source of
instrumental noise transitions from sensing and controls noise, below roughly
30 Hz, to Brownian noise of the dielectric optical coatings, up to roughly 200
Hz, and finally to quantum noise at higher frequencies (see [8] for a thorough
review of the LIGO noise budget). In laser interferometers, quantum noise
arises not from the positional uncertainties of the mirrors, but from the
quantization of the electromagnetic field used to interrogate their positions
[9, 10]. This effect, commonly described as “shot noise,” arises from ground-
state fluctuations of the vacuum field, which enter the interferometer and
beat with the circulating laser field. The interference of the two fields
produces intensity fluctuations which modulate the interferometer output
signal. These fluctuations also apply force to the mirrors via radiation
pressure, producing actual mirror displacements at low frequencies. Shot noise
can be reduced through two means: higher laser power in the interferometer,
which increases the number of photons incident on the beamsplitter, and the
injection of squeezed quantum states of light. Both are critical to improving
the high-frequency sensitivity of gravitational-wave detectors.
In the third observing run, the Advanced LIGO detectors operated with roughly
250 kW of resonating power inside the arm cavities [8]—still only one third of
their 750 kW design power. Recent tests in both detectors have shown that as
the injected laser power is increased, the arm cavity optical gain severely
decays due to increasing internal loss [8]. The source of this loss has been
identified as sub-millimeter, highly absorbing defects in the optical coatings
known as _point absorbers_. In situ wavefront sensors have detected their
presence on at least four of the eight currently installed test masses [8,
11]. Point absorbers appear to originate during the coating deposition
process, although it is still not understood how these contaminants enter the
coating nor to what extent they can be eliminated. Each point absorber absorbs
roughly 80 ppb of the total incident power, or 20 mW when exposed to 250 kW.
The extremely localized heating induces a sharply peaked thermoelastic
deformation of the mirror surface, which scatters power into higher-order
spatial modes [11]. To achieve higher operating power, point absorber losses
must be mitigated.
Beginning also in the third observing run, squeezed light was injected into
both Advanced LIGO detectors [12]. Squeezed light allows for the engineering
of the electromagnetic vacuum state that enters the interferometer. Quantum
fluctuations of the vacuum field, initially distributed uniformly between the
amplitude and phase quadratures, are redistributed so that they are suppressed
in the phase quadrature, containing the gravitational-wave signal, and
amplified in the unsensed amplitude quadrature. In Advanced LIGO, squeezed
vacuum field is generated via degenerate optical parametric amplification [13]
and injected into the interferometer output port. During the third observing
run, a shot noise reduction factor of roughly 3 dB was achieved in each
detector [12].
Although the injected level of squeezing can be high, the observed level of
squeezing at the interferometer output depends on the amount of entanglement
remaining in the squeezed field. Losses within the detector lead directly to
decoherence of the squeezed state, limiting the quantum noise reduction [14].
Losses arise from scattering, imperfect transmissivity or reflectivity of
optics, photodetector quantum efficiency, and spatial mode-mismatch between
the optical cavities. For the interferometer, the largest source of loss is
mode-mismatch between the coupled laser cavities. For example, in the Advanced
LIGO detectors, the mode-matching loss between the arm cavities and the output
mode cleaner cavity alone is measured to be 10%. It has been demonstrated that
this loss can be attributed to practical, and largely irreducible, limitations
in the fabrication and hand-positioning of the interferometer optics [15]. For
third-generation detectors [16, 17], reducing the internal mode-matching
losses to $\sim 1\%$ levels is imperative.
In this work, we present a novel optimization approach to gravitational-wave
detector design. It is aimed at maximizing the robustness to common optical
fabrication and installation errors, which introduce losses that degrade the
optical gain and squeezing performance. Under this approach, design
performance is assessed and improved statistically, over thousands of trials
in which realistic random errors are assumed in the surface figures and
positions of the optics. As a proof of concept, we employ these techniques to
perform a two-part optimization of the LIGO A+ interferometers, planned to
become operational in 2025 [18]. First, in §II we modify the arm cavities for
reduced scattering loss in the presence of point absorbers. Then, in §III we
optimize the signal recycling cavity (SRC) for maximum squeezing performance,
accounting for realistic errors in the positions and radii of curvature of the
optics. Our findings suggest that these techniques can be leveraged to achieve
substantially greater quantum noise performance in current and future
gravitational-wave detectors. Finally, in §IV we summarize and discuss future
extensions of this work.
## II Arm cavity design
Figure 1: Proposed surface profiles for the LIGO A+ input test masses (ITM;
left) and end test masses (ETM; right). In each panel, the total surface
figure (pink curve) is the sum of the polishing profile (blue curve) and the
optical coating nonuniformity (grey curve). Based on current fabrication
capabilities, the polishing slope is restricted to $\leq 2.5$ nm/mm, as shown
in the lower panels.
In the Advanced LIGO arm cavities, point absorbers on the mirror surfaces
disproportionately scatter power into 7th-order spatial modes. Although a
point-absorber-induced deformation scatters power into many higher-order modes
(HOM), the Fabry–Perot cavity resonantly enhances or suppresses each mode as a
function of the roundtrip phase it accumulates in the cavity. This effect was
first analyzed for static deformations by Vajente [19] and extended to power-
dependent surface deformations from point absorbers by Brooks et al. [11], who
showed that the power loss from the fundamental mode to the $mn$-th HOM is
approximately
$\mathcal{L}_{mn}=a_{00|mn}^{2}\,g_{mn}\;.$ (1)
The first term, $a_{00|mn}$, is the single-bounce amplitude scattering from
the fundamental mode to the $mn$-th HOM when reflected off the deformed mirror
surface. The second term, $g_{mn}$, is the optical gain of that HOM, which
depends on the cavity geometry and the actual (nonideal) surface profiles of
the two mirrors:
$g_{mn}=\frac{1-r_{1}^{\prime\,2}\,r_{2}^{\prime\,2}}{1+r_{1}^{\prime\,2}\,r_{2}^{\prime\,2}}\,\frac{1}{1-\frac{2\,r_{1}^{\prime}\,r_{2}^{\prime}}{1+r_{1}^{\prime\,2}\,r_{2}^{\prime\,2}}\,\cos\left[\Phi_{mn}\right]}\;.$
(2)
The factors $r_{1}^{\prime}$ and $r_{2}^{\prime}$ are the effective amplitude
reflectivities of the input test mass (ITM) and the end test mass (ETM),
respectively, accounting for mode-dependent clipping losses, and $\Phi_{mn}$
is the additional roundtrip phase that the HOM accumulates relative to the
fundamental mode. In the LIGO arm cavities, modes of order 7, by coincidence,
are nearly co-resonant with the fundamental mode, leading to optical gain
factors $g_{mn}$ up to 100 times larger than those for non-resonant modes.
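Eq. (2) is straightforward to evaluate numerically. The sketch below (Python) uses illustrative, roughly LIGO-like reflectivities, which are assumptions rather than the as-built values, and shows how a nearly co-resonant mode is strongly enhanced while an anti-resonant one is suppressed:

```python
from math import cos, sqrt, pi

def hom_optical_gain(r1, r2, phi_mn):
    """Optical gain g_mn of a higher-order mode, Eq. (2), given the effective
    amplitude reflectivities r1', r2' and the extra roundtrip phase Phi_mn."""
    rr = (r1 * r2) ** 2
    prefactor = (1.0 - rr) / (1.0 + rr)
    return prefactor / (1.0 - (2.0 * r1 * r2 / (1.0 + rr)) * cos(phi_mn))

# Illustrative reflectivities only (not the as-built LIGO values):
r1 = sqrt(1.0 - 0.014)   # ITM power transmission ~1.4%
r2 = sqrt(1.0 - 5e-6)    # ETM power transmission ~5 ppm
print(hom_optical_gain(r1, r2, phi_mn=0.01))   # near co-resonance: gain >> 1
print(hom_optical_gain(r1, r2, phi_mn=pi))     # anti-resonant: gain << 1
```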
Thus, to reduce point absorber losses and achieve higher operational power in
LIGO A+, our design objective is to fully eliminate mode co-resonances below
order 8 in the arm cavities. In principle, this could be achieved by adjusting
the arm cavity parameters (the arm length and the radii of curvature of the
test masses) for a more favorable transverse mode spacing. However, a
significant change of the cavity parameters is precluded by other operational
constraints. The 4 km arm length is constrained by the existing infrastructure
to approximately $\pm 2$ m of the current length. With only the radii of
curvature of the two mirrors free to vary, it is not possible to maintain the
current beam sizes on both optics. A smaller beam size results in increased
coating Brownian noise—unacceptable for the A+ design, which is already
thermal-noise-limited across its mid-frequency band [18]. A larger beam size,
on the other hand, results in unacceptably higher clipping losses inside the
arms and the signal recycling cavity. Thus, the problem is overconstrained
from the perspective of a standard cavity design approach using spherical
optics.
In this section, we demonstrate that by applying novel, nonspherical surface
profiles to the LIGO test masses, the mode 7 co-resonances can be eliminated
_without_ incurring any increase in coating thermal noise or clipping loss.
Our approach exploits the large difference in transverse spatial confinement
between the fundamental mode and 7th-order modes. Each mirror profile is
spherical in the central region, where fundamental mode power is concentrated,
but assumes a sharply nonspherical shape at the outermost radii, where the
incident power is almost purely in higher-order modes. We show that the outer
surface profile can be tailored to control the roundtrip phase $\Phi_{mn}$
(see Eq. 2) that an HOM accumulates relative to the fundamental mode. This
provides a means to suppress problematic higher-order modes while negligibly
altering the fundamental cavity mode, leading to a significant loss reduction
in the presence of scattering sources such as point absorbers.
### II.1 Nonspherical test mass profiles
For mirror fabrication in the A+ era, LIGO has the ability to specify an
arbitrary (nonspherical) polishing figure. Internal discussions with optics
manufacturers have indicated that an arbitrary radial profile, subject to a
maximum slope of 2.5 nm/mm, could be produced with high confidence, with a
possibility that an even steeper polishing slope could be achieved. Fig. 1
shows our proposed surface profiles for the LIGO input test masses (ITM) and
end test masses (ETM). In each panel, the polishing figure (blue curve) both
compensates the expected nonuniformity of the optical coating (grey curve) and
adds a nonspherical edge component to produce the total surface figure (pink
curve). To remain within demonstrated fabrication limits, we restrict the
polishing slope to 2.5 nm/mm, as shown in the lower panels of Fig. 1. In A+, a
new coating material with improved thermal noise performance, $\rm
TiO_{2}$-doped $\rm GeO_{2}$ [20], is expected to replace the $\rm
TiO_{2}$-doped $\rm Ta_{2}O_{5}$ coatings used in Advanced LIGO [21, 22].
Accordingly, we estimate the A+ coating nonuniformity as the measured
nonuniformity of the LIGO O4 coating plume, which will be reused to produce
the A+ optics, multiplied by the relative coating thickness required to
achieve the same reflectivity with the new material (1.2 for the ITM and 1.5
for the ETM).
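To illustrate the slope constraint only, the toy sketch below (Python) builds a radial edge component that is zero over the central region and ramps up toward the edge of a 0.17 m radius optic (the nominal LIGO test mass radius) with the slope capped at 2.5 nm/mm; the start radius and resulting edge height are arbitrary illustrative choices, not the proposed A+ polishing figure:

```python
import numpy as np

MAX_SLOPE = 2.5e-9 / 1e-3   # 2.5 nm/mm, expressed in m/m

def toy_edge_profile(r, r_start=0.12, max_slope=MAX_SLOPE):
    """Toy nonspherical edge component: zero inside r_start, then a linear ramp
    whose slope never exceeds the 2.5 nm/mm fabrication limit. Illustrative only."""
    return np.where(r > r_start, max_slope * (r - r_start), 0.0)

r = np.linspace(0.0, 0.17, 1000)   # radial coordinate across a 0.17 m radius optic
z = toy_edge_profile(r)
print(z.max() * 1e9)               # edge height in nm: (0.17 - 0.12) m * 2.5 nm/mm = 125 nm
print(np.max(np.gradient(z, r)) <= MAX_SLOPE + 1e-12)  # slope check: True
```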
Figure 2: Optical gain $g_{mn}$ of the 7th-order Laguerre-Gauss (LG) modes in
the LIGO A+ arm cavities, as a function of frequency detuning from the
fundamental mode resonance. _Top:_ The nominal resonance locations with a
spherical test mass polish. _Bottom:_ The new resonance locations after
including the compensation polish shown in Fig. 1 (blue curves). In both
panels, coating absorption of 120 mW per test mass is assumed.
Fig. 2 illustrates how these profiles eliminate the arm cavity modal
degeneracy. First, for comparison, the top panel shows the nominal locations
of the 7th-order Laguerre-Gauss (LG) mode resonances, assuming a spherical
mirror polish. With higher cavity power (or coating absorptivity), the
resonances shift toward higher frequency due to the increasing residual
thermal deformation of the test masses. Although ring heaters compensate the
central heating due to uniform coating absorption, the ring heaters
“overcorrect” the mirror surface at large radii, resulting in a net profile
that steeply rises near the edge of the optic [23]. Here, we assume 120 mW of
coating absorption per test mass, corresponding to a cavity power of 400 kW
for absorptivity at the level of the Advanced LIGO coatings. The bottom panel
shows the new locations of the 7th-order mode resonances after including the
compensation polish shown in Fig. 1 (blue curves). The polishing profiles are
designed to shift the 7th-order mode resonances rightward, toward higher
frequency, where any degree of thermal distortion now strictly shifts them
_further away_ from co-resonance. This has significant implications for the
loss performance of the arm cavities, as discussed in the following section.
### II.2 Loss performance improvement
We now assess the impact of our proposed compensation polish on the arm cavity
loss. For this, we consider two scenarios, with (“proposed”) and without
(“nominal”) our proposed modification. The “nominal” test mass profiles (with
spherical power subtracted) are equal to the grey curves in Fig. 1. The
“proposed” test mass profiles (again with spherical power subtracted) are
equal to the pink curves in Fig. 1. For each set of profiles, we perform
numerical simulations of an A+ arm cavity using SIS [24], an FFT-based optical
simulation package. The model includes all thermoelastic effects: (1) uniform
coating absorption and (2) optimal ring heater compensation, to maintain
constant mode-matching of the arm to the recycling cavities. Throughout, we
assume coating absorption at the average level of the Advanced LIGO test
masses, 0.3 ppm. We also assume a fixed high-angle scattering loss of 25 ppm
per optic. To account for realistic nonidealities, which could unequally
impact the two designs, all loss analyses are performed as Monte Carlo
simulations over 1000 trials with random beam miscenterings and surface
roughnesses. Beam miscenterings on the ITM and ETM are independently drawn
from a Gaussian distribution with zero mean and a standard deviation of 5 mm.
Surface roughness profiles are randomly generated by SIS with a power spectral
density chosen to match that of the current Advanced LIGO optics [25].
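The structure of the Monte Carlo loop can be sketched as follows (Python). The `roundtrip_loss_ppm` function is a dummy stand-in for the SIS cavity simulation, whose interface is not reproduced here; only the randomization of the beam miscenterings and the reported summary statistics follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 1000
MISCENTERING_SIGMA = 5e-3   # 5 mm (1-sigma), zero mean, per the text

def roundtrip_loss_ppm(itm_offset, etm_offset, roughness_noise):
    """Dummy stand-in for the SIS FFT cavity simulation, used only to make the
    sketch runnable (a baseline plus made-up miscentering and roughness terms)."""
    baseline = 50.0  # ppm, cf. the assumed 25 ppm high-angle scattering per optic
    return baseline + 1e3 * (np.sum(itm_offset**2) + np.sum(etm_offset**2)) + roughness_noise

losses = []
for _ in range(N_TRIALS):
    # Independent Gaussian beam miscenterings (x, y) on ITM and ETM, in metres.
    itm_offset = rng.normal(0.0, MISCENTERING_SIGMA, size=2)
    etm_offset = rng.normal(0.0, MISCENTERING_SIGMA, size=2)
    # A random contribution stands in for the generated surface roughness maps.
    losses.append(roundtrip_loss_ppm(itm_offset, etm_offset, abs(rng.normal(0.0, 2.0))))

# Report the statistics shown in Fig. 3: median with 16th/84th percentiles.
lo, med, hi = np.percentile(losses, [16, 50, 84])
print(f"roundtrip loss: {med:.1f} ppm (+{hi - med:.1f}/-{med - lo:.1f})")
```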
Figure 3: Arm cavity loss as a function of mode order. Each curve represents
the median loss in 1000 trials with random miscenterings and surface
roughnesses, averaged over all Laguerre-Gauss modes ${\rm LG}_{p,l}$ of order
$N=2p+|l|$. The shading represents the 16th and 84th percentiles of the loss
distributions over all trials and modes. The proposed mirror profiles achieve
significantly enhanced higher-order mode dissipation, with no increase in
fundamental mode loss.
First, we evaluate the baseline loss performance of both designs in the
absence of scattering sources. The aim of our compensated design is to achieve
the HOM frequency shifts outlined in §II.1 without worsening the roundtrip
loss of the fundamental mode. Fig. 3 shows the roundtrip arm loss under each
design as a function of mode order, at an arm power of 750 kW. The curves
represent the median loss values over all randomized trials, and averaged over
all modes ${\rm LG}_{p,l}$ of order $N=2p+|l|$. The shading represents the
16th and 84th percentiles of the loss distributions across all trials and
modes. Our results indicate that the proposed test mass profiles do not
increase loss in the fundamental mode, but they do significantly increase the
losses of HOMs above order 2. While the design objective in §II.1 was only to
shift resonance frequencies of HOMs, the larger dissipation of HOMs is an
added advantage that helps to further reduce their optical gain. Enhancing the
dissipation of certain HOMs may also be relevant for improving the damping of
parametric instabilities in gravitational-wave detectors [26]. For this
reason, we include a full breakdown of the dissipation per optical mode in
Appendix A.
Figure 4: Arm cavity loss distributions due to point absorbers, shown at three
different power levels. Each loss distribution represents 1000 trials with a
point absorber randomly positioned on each test mass. Uniform coating
absorption of 0.3 ppm per test mass, along with optimal ring heater
compensation, is assumed.
Next, we add random point absorbers to the Monte Carlo simulation and
reevaluate the loss performance of both designs. One point absorber is applied
to each test mass, randomly positioned in the central 150 mm diameter. The
radial and angular coordinates are drawn from uniform distributions, with the
radial distribution truncated at 75 mm and the angular distribution spanning
the full $360^{\circ}$. Point absorber phase maps are generated using the
analytic formalism for thermoelastic surface deformation from Brooks et al.
[11]. We assume a fixed absorptivity chosen so that, at a cavity power of 250
kW, a perfectly centered point absorber absorbs 20 mW of incident power. Fig.
4 shows the roundtrip loss distributions for the fundamental cavity mode under
each design, at three different arm power levels. We find the proposed
profiles statistically outperform the nominal profiles in all cases.
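The randomization of the point absorbers can be sketched as follows (Python); the absorptivity normalization simply encodes the stated 20 mW of absorption at 250 kW for a centered absorber:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_point_absorber(rng, r_max=0.075):
    """Random point-absorber location within the central 150 mm diameter:
    radius uniform on [0, 75 mm], angle uniform over the full circle, per the text."""
    r = rng.uniform(0.0, r_max)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return r * np.cos(phi), r * np.sin(phi)

def absorbed_power_watts(cavity_power_watts):
    """Fixed absorptivity such that a centered absorber takes 20 mW at 250 kW."""
    absorptivity = 20e-3 / 250e3     # = 80 ppb of the incident power
    return absorptivity * cavity_power_watts

print(random_point_absorber(rng))    # (x, y) in metres on one test mass
print(absorbed_power_watts(750e3))   # 0.06 W for a centered absorber at 750 kW
```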
## III Signal recycling cavity design
The single largest source of loss in the Advanced LIGO interferometers is
spatial mode-mismatch between optical cavities. Mode-mismatch arises from
unintended deviations of the as-built optical system from design. The two
folding mirrors of the signal recycling cavity (SRC) are known to be
especially sensitive to fabrication and installation errors. Even small
perturbations in the curvatures and positions of these optics can result in a
significant mode-mismatch with the arm cavities.
The impact of mode-mismatch internal to the interferometer on the observed
squeezing is difficult to model analytically. In LIGO, the fundamental optical
mode is squeezed in the modal basis defined by a parametric amplifier cavity,
which serves as the squeezing source. The cavities of the interferometer each
define their own basis of optical modes. As the squeezed state propagates
through the interferometer, it is transformed from the modal basis of the
squeezing source into the basis of each respective cavity. If the spatial
modes of the cavities are imperfectly matched, these basis transformations
must mix the optical modes. Since only the fundamental mode in the source
basis is squeezed, the higher-order modes carry standard vacuum. Thus, basis
mixing from mode-mismatch leads to loss. However, unlike dissipative losses,
each modal mixing is coherent and unitary—leading to complex interference
effects which can potentially increase squeezing losses. To date, the most
detailed analytical treatment of the coherent interactions of transverse modal
mixing on squeezed states is given by McCuller et al. [27].
In the present work, we use a numerical simulation to model the squeezing
degradation from internal mode-mismatches. The aim of this analysis is to
identify an SRC design in LIGO A+ that is maximally robust to common errors
which induce mode-mismatch. We identify the maximally error-tolerant design
through a numerical optimization procedure described in §III.1, which employs
an evolutionary search algorithm in a parameter space spanning all possible
SRC designs. The results of this optimization and a quantitative analysis of
the design performance are described in §III.2.
### III.1 Optimization procedure
Our objective is to find the optimal values of the radii of curvature and
positions of the SRC mirrors, such that deviations from these nominal values
minimally degrade the squeezing level observed at the interferometer output.
To perform this search, we employ a global-best particle swarm optimization
(PSO) algorithm provided by the Pyswarms optimization toolkit [28]. PSO is an
evolutionary search algorithm designed to efficiently explore high-dimensional
parameter spaces. Initially, many “particles,” each, in our case, representing
a candidate optical design, are scattered around the parameter space. Each
particle has an associated velocity which determines its position in the
parameter space at the following iteration. Its velocity is determined by its
best known local position as well as the best positions discovered by other
particles. In this way, the entire swarm is iteratively guided toward the
global optimum.
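A minimal driver in the spirit of this setup is sketched below using PySwarms' global-best optimizer. The swarm hyperparameters, parameter bounds, and the placeholder cost function are illustrative assumptions rather than the settings used in the actual optimization (a fuller cost sketch follows later in this subsection):

```python
import numpy as np
import pyswarms as ps

# Four free parameters after applying the constraints of Sec. III.1.2
# (illustrative bounds only, not the A+ design ranges).
lower = np.array([-0.5, -0.5, -1.0, -1.0])
upper = np.array([ 0.5,  0.5,  1.0,  1.0])

def sensitivity_cost(X):
    """Cost for a swarm of candidate SRC designs; X has shape (n_particles, 4).
    A dummy quadratic stands in here for the Finesse-based squeezing sensitivity."""
    return np.sum(X ** 2, axis=1)

optimizer = ps.single.GlobalBestPSO(
    n_particles=30,
    dimensions=4,
    options={"c1": 0.5, "c2": 0.3, "w": 0.9},   # cognitive, social, and inertia weights
    bounds=(lower, upper),
)
best_cost, best_design = optimizer.optimize(sensitivity_cost, iters=100)
```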
The relative “goodness” of positions within the parameter space is quantified
by a cost function whose value the algorithm seeks to minimize. Primarily, our
cost function is designed to penalize SRC designs in which the observed
squeezing level is strongly sensitive to perturbations of the SRC parameters.
To evaluate the cost function at each particle’s position, at each iteration,
a Finesse optical simulation [29] is used to compute the partial derivatives
of the observed squeezing level with respect to small detunings of each
candidate SRC parameter. We detail the Finesse simulation in §III.1.1. Then,
in §III.1.2 we describe the set of optical parameters which we optimize, as
well as relevant parameter constraints. Finally, in §III.1.3 we detail the
construction of the cost function used to define the optimization objective.
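The sensitivity-based cost described above can be sketched with finite differences as follows (Python); `observed_squeezing_db` is a hypothetical placeholder for the Finesse model of §III.1.1, and the detuning sizes are illustrative:

```python
import numpy as np

def observed_squeezing_db(src_params):
    """Hypothetical placeholder for the Finesse simulation: returns the observed
    squeezing level (in dB) at 1 kHz for a given set of SRC parameters."""
    # Dummy smooth function so the sketch runs; replace with the optical model.
    return -6.0 + 0.5 * np.sum((src_params - 1.0) ** 2)

def sensitivity_cost_single(src_params, detunings):
    """Sum of squared finite-difference derivatives of the observed squeezing
    with respect to each SRC parameter, one detuning size per parameter."""
    base = observed_squeezing_db(src_params)
    cost = 0.0
    for i, d in enumerate(detunings):
        perturbed = src_params.copy()
        perturbed[i] += d
        cost += ((observed_squeezing_db(perturbed) - base) / d) ** 2
    return cost

params = np.array([1.0, 1.0, 1.0, 1.0])
print(sensitivity_cost_single(params, detunings=np.full(4, 1e-3)))
```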
#### III.1.1 Optical simulation
Figure 5: Optical configuration used for simulating a LIGO-like
interferometer. All of the distances and radii of curvature are fixed to the
nominal LIGO A+ design values, except for the signal recycling cavity
parameters which are indicated in blue.
In our optimization routine, the core compute engine is a Finesse simulation
[29] used to analyze the performance of a given SRC design. Finesse is a
modal-based optical simulation package widely used for modeling laser
cavities, whose modern user interface is provided by the Pykat [30] package.
To illustrate the optimization procedure, we adopt a toy interferometer model
based on the LIGO A+ design. Its optical layout is shown in Fig. 5. However,
for the purpose of this illustration, several simplifying departures from the
A+ design are made to reduce the computational cost and complexity:
* •
Frequency-independent squeezing. Although A+ will use frequency-dependent
squeezing [18], for simplicity we assume a frequency-independent squeezing
angle. In principle, our routine can be extended to the frequency-dependent
case by jointly optimizing the error tolerance at multiple frequencies.
* •
DC readout. For signal detection, a bright carrier field must be present at the
interferometer output port. In Advanced LIGO, this is generated by offsetting
the differential arm length $\sim 1$ pm from a dark fringe. In A+, this
technique, known as “DC readout,” will be replaced by balanced homodyne readout
[31]. As balanced homodyne readout adds considerable complexity, we use DC
readout in this simulation.
* •
Output mode-matching. Inclusion of an output mode cleaner (OMC) adds
significant computational cost because, as SRC parameters are detuned, at
least two adaptive optics between the SRC and OMC must be continually re-
optimized to maintain the mode-matching of the OMC to the arm cavities.
Although such a re-tuning of the output mode-matching has been previously
demonstrated [15], for the present simulation we omit the OMC and instead
assume a fixed output loss ranging from $5\%$ to $20\%$.
The Finesse simulation starts from a “nominal” model (using a provided set of
SRC parameters), then individually detunes each SRC parameter from its design
value and computes the change in observed squeezing. The parameter detunings
introduce a spatial mode-mismatch between the SRC and the arm cavities.
Higher-order modes up to order 4 are tracked, which is sufficient given the
small size of the parameter detunings. In addition, the mode-mismatch shifts
the interferometer length degrees of freedom away from their nominal operating
points and rotates the squeezing quadrature away from the interferometer
readout quadrature. In a real detector, these offsets are zeroed by a
combination of control servos and manual optimizations. Thus, it is necessary
to implement servos within the Finesse simulation to zero all such
“artificial” detunings.
To prevent length detunings, we incorporate DC servos for all five length
degrees of freedom: the common arm length, differential arm length, power
recycling cavity length, Michelson length, and signal recycling cavity length
[see, e.g., 32]. Linear error signals are constructed by injecting 9 MHz and
45 MHz phase modulation sidebands at the interferometer input and measuring
the demodulated fields at the symmetric port, antisymmetric port, and a pick-
off port inside the power recycling cavity. Every time an SRC parameter is
varied, we re-orthogonalize the sensing matrix. Then, to verify the new servo
points, we individually detune each length degree of freedom and verify its
error signal to be at a zero crossing.
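To make the servo bookkeeping concrete, the sketch below illustrates how a sensing matrix could be re-estimated by finite differences, re-orthogonalized via a pseudo-inverse, and checked for error-signal zero crossings. The helpers `error_signals` and `with_detuning`, the DOF names, and the step sizes are placeholders for the corresponding Finesse/Pykat operations, not the actual simulation API.

```python
import numpy as np

# A minimal sketch of the length-servo bookkeeping described above; all helper
# functions are hypothetical stand-ins for Finesse/Pykat calls.

DOFS = ["CARM", "DARM", "PRCL", "MICH", "SRCL"]

def sensing_matrix(model, error_signals, with_detuning, step=1e-12):
    """Finite-difference estimate of d(error signal)/d(DOF) around the operating point."""
    S = np.zeros((len(DOFS), len(DOFS)))
    for j, dof in enumerate(DOFS):
        up = np.asarray(error_signals(with_detuning(model, dof, +step)))
        dn = np.asarray(error_signals(with_detuning(model, dof, -step)))
        S[:, j] = (up - dn) / (2 * step)
    return S

def input_matrix(S):
    """Re-orthogonalized input matrix: maps measured error signals back to DOF offsets."""
    return np.linalg.pinv(S)

def zero_crossings_ok(model, error_signals, with_detuning, span=1e-11):
    """Verify that each DOF's error signal changes sign across the operating point."""
    ok = {}
    for i, dof in enumerate(DOFS):
        e_minus = error_signals(with_detuning(model, dof, -span))[i]
        e_plus = error_signals(with_detuning(model, dof, +span))[i]
        ok[dof] = e_minus * e_plus < 0
    return ok
```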
To calculate the observed squeezing level for the detector, we inject a $-14$
dB squeezed vacuum source at the output of the SRC, as shown in Fig. 5. The
injected squeezing level is chosen to match that expected for LIGO A+. We then
rotate the squeezing angle so as to minimize the quantum (shot) noise level in
the interferometer readout channel at 1 kHz. The signal frequency of 1 kHz is
chosen to lie in LIGO’s high-frequency, shot-noise-dominated band, where
optomechanical interactions with the interferometer optics may be neglected.
The injected squeezed field is mode-matched to the interferometer arm
cavities, rather than to the low-finesse SRC, as is done for the real
detectors. Every time an SRC parameter is varied, we adjust
the input squeezing mode to recover the mode-matching to the arm cavities,
then retune the squeezing angle for maximum shot noise reduction.
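As an illustration of the squeezing-angle retuning step, the following sketch scans the angle and keeps the value that minimizes the shot-noise level. The helper `shot_noise_asd(model, angle_deg)`, which would run the optical model and return the quantum noise at 1 kHz, is an assumed stand-in rather than a real package function.

```python
import numpy as np

# Minimal sketch of the squeezing-angle retuning step, assuming a hypothetical
# shot_noise_asd(model, angle_deg) helper; the angle grid and refinement factor
# are arbitrary illustrative choices.

def retune_squeezing_angle(model, shot_noise_asd, coarse_step=1.0):
    angles = np.arange(0.0, 180.0, coarse_step)          # squeezing angle is pi-periodic
    noise = np.array([shot_noise_asd(model, a) for a in angles])
    best = angles[np.argmin(noise)]
    # refine around the coarse minimum
    fine = np.arange(best - coarse_step, best + coarse_step, coarse_step / 20.0)
    noise_fine = np.array([shot_noise_asd(model, a) for a in fine])
    return fine[np.argmin(noise_fine)]
```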
#### III.1.2 Parameters and constraints
During optimization, we allow the lengths and mirror curvatures defining the
SRC to vary, while keeping the arm cavities and the power recycling cavity
fixed. There are thus six degrees of freedom, as indicated in blue lettering
in Fig. 5: the radii of curvature of the SR3, SR2, and SRM mirrors ($R_{\rm
SR3}$, $R_{\rm SR2}$, and $R_{\rm SRM}$, respectively) and the distances
between ITMX/Y and SR3, SR3 and SR2, and SR2 and the SRM ($L_{\rm
ITM\mbox{-}SR3}$, $L_{\rm SR3\mbox{-}SR2}$, and $L_{\rm SR2\mbox{-}SRM}$,
respectively). As shown in Fig. 5, the distance between the ITMs and SR3 is
the sum of the distances between the ITMs and beamsplitter ($L_{\rm X1}$ and
$L_{\rm Y1}$) and the beamsplitter and SR3 ($L_{\rm BS\mbox{-}SR3}$). To avoid
changing the power recycling cavity mode, we allow only the $L_{\rm
BS\mbox{-}SR3}$ component to vary.
It is necessary to impose two constraints on the SRC parameters, as described
below. In effect, these constraints reduce the dimensionality of the
optimization problem from six to four.
##### Total length.
For length sensing and control of the SRC, Advanced LIGO relies on the
resonance of a 45 MHz phase modulation sideband in this cavity [33]. In order
to avoid requiring a major change in the control system, the 45 MHz sideband
must remain resonant in the redesigned cavity. Thus, we require the total SRC
length to remain fixed. This reduces the dimensionality of the optimization
problem from six to five, via the constraint
$L_{\rm SRC}=L_{\rm ITM\mbox{-}SR3}+L_{\rm SR3\mbox{-}SR2}+L_{\rm
SR2\mbox{-}SRM}$ (3)
where $L_{\rm SRC}=56.01$ m is the current SRC length.
##### Mode-matching.
In order to read out the interferometer signal field through the SRC, the SRC
must be mode-matched to the arm cavities. Thus, at the longitudinal location
of the ITM reflective surface, $z=z_{\rm ITM}$, we require that the beam
parameter of the SRC, $q_{\rm SRC}$, equal that of the arm cavities, $q_{\rm
arm}$:
$q_{\rm SRC}\left(z_{\rm ITM}\right)=q_{\rm arm}\left(z_{\rm ITM}\right)$ (4)
This mode-matching constraint implies that for one roundtrip traversal through
the SRC, starting from the ITM, the ABCD matrix [34] of the SRC must satisfy
$q_{\rm arm}\left(z_{\rm ITM}\right)=\frac{A\,q_{\rm arm}\left(z_{\rm
ITM}\right)+B}{C\,q_{\rm arm}\left(z_{\rm ITM}\right)+D}\;.$ (5)
Implicitly, the matrix elements $A$, $B$, $C$, and $D$ are functions of the
six SRC design parameters. We numerically solve Eq. 5 for $R_{\rm SR2}$ in
terms of the other five parameters, further reducing the dimensionality of the
optimization problem from five to four.
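A minimal sketch of how this constraint might be enforced numerically is shown below: the roundtrip ABCD matrix is applied to $q_{\rm arm}$ and $R_{\rm SR2}$ is adjusted until the beam parameter reproduces itself. The helper `src_roundtrip_abcd`, the parameter dictionary keys, and the search bounds are assumptions used only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def propagate_q(q, M):
    """Propagate a complex Gaussian beam parameter q through a 2x2 ABCD matrix."""
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (A * q + B) / (C * q + D)

def solve_R_SR2(params, q_arm, src_roundtrip_abcd, bounds=(-20.0, -1.0)):
    """Find R_SR2 such that q_arm reproduces itself after one SRC roundtrip (Eq. 5).
    src_roundtrip_abcd is a hypothetical helper composing the SRC propagation and
    mirror matrices; bounds is an assumed search window for R_SR2 in meters."""
    def mismatch(r_sr2):
        M = src_roundtrip_abcd({**params, "R_SR2": r_sr2})
        return abs(propagate_q(q_arm, M) - q_arm) ** 2
    res = minimize_scalar(mismatch, bounds=bounds, method="bounded")
    return res.x
```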
#### III.1.3 Cost function
The relative performance of competing optical designs is quantified by a cost
function, whose value the optimization procedure seeks to minimize. Unlike
classical optimization methods, PSO does not use the gradient of the cost
function and, thus, does not require it to be differentiable. This allows a
high degree of flexibility in construction of the cost function. Primarily,
our cost function is designed to penalize SRC designs in which the observed
squeezing level is strongly sensitive to perturbations of the SRC parameters
($C_{\rm SQZ}$). Several additional penalties are included to ensure that the
cavity is stable ($C_{\rm stable}$), low-loss (due to clipping on the optics;
$C_{\rm loss}$), and modally non-degenerate ($C_{\rm HOM}$). The total cost of
an SRC design is defined as
$\textsc{Cost}=C_{\rm SQZ}+C_{\rm stable}+C_{\rm loss}+C_{\rm HOM}\;.$ (6)
Each of the terms in Eq. 6 is described in detail below.
##### Squeezing sensitivity ($C_{\rm SQZ}$).
To quantify the sensitivity of the SRC design to real-world errors, we detune
each SRC parameter individually to estimate the partial derivatives of the
observed squeezing. Radii of curvature are detuned by $\Delta R=\pm 0.1\%$ and
lengths by $\Delta L=\pm 3$ mm. These detunings are chosen to reflect the best
achievable fabrication and hand-placement tolerances for LIGO optics,
respectively. The design is assigned a cost of
$C_{\rm SQZ}\propto\sum_{i}\left[\left|\frac{\Delta S_{i,+}}{\Delta
x_{i}}\right|+\left|\frac{\Delta S_{i,-}}{\Delta x_{i}}\right|\right]\;,$ (7)
where $\Delta S_{i,\pm}$ is the change in observed squeezing for a positive or
negative detuning $\pm\Delta x_{i}$ of the parameter $x_{i}$.
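The sketch below shows one way the finite-difference sensitivity cost of Eq. (7) could be assembled. The `observed_squeezing(params)` wrapper around a full simulation run, the parameter names, the normalization constant, and the treatment of $\Delta x_i$ as a fractional radius-of-curvature error or an absolute length error are assumptions for illustration.

```python
# Sketch of the squeezing-sensitivity penalty of Eq. (7); all names are placeholders.

K_SQZ = 1.0           # arbitrary normalization constant
ROC_DETUNING = 1e-3   # fractional radius-of-curvature detuning (0.1%)
LEN_DETUNING = 3e-3   # length detuning in meters (3 mm)

ROC_PARAMS = ("R_SR3", "R_SR2", "R_SRM")
LEN_PARAMS = ("L_BS_SR3", "L_SR3_SR2", "L_SR2_SRM")

def squeezing_sensitivity_cost(params, observed_squeezing):
    s0 = observed_squeezing(params)
    cost = 0.0
    for name in ROC_PARAMS + LEN_PARAMS:
        dx = ROC_DETUNING if name in ROC_PARAMS else LEN_DETUNING
        for sign in (+1.0, -1.0):
            p = dict(params)
            if name in ROC_PARAMS:
                p[name] = params[name] * (1.0 + sign * dx)   # relative RoC error
            else:
                p[name] = params[name] + sign * dx           # absolute length error
            cost += abs((observed_squeezing(p) - s0) / dx)   # |Delta S / Delta x|
    return K_SQZ * cost
```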
##### Cavity stability ($C_{\rm stable}$).
To ensure the SRC is a stable optical resonator, we penalize cavities whose
$g$ factor is close to the instability limit of $\pm 1$ [35]. Designs with
stability factors of $\left|g\right|>0.9$ are assigned a linearly increasing
cost of
$C_{\rm stable}\propto\left|g\right|\;.$ (8)
Designs with stability factors below this threshold are assigned a fixed cost
of $C_{\rm stable}=-1$.
##### Clipping losses ($C_{\rm loss}$).
A larger beam size on the SRM will result in higher clipping losses further
downstream on the output mode-matching optics, which are smaller in diameter.
We thus include a penalty to ensure the beam exiting the SRC does not become
significantly larger than its present size. At the longitudinal location of
the SRM, $z=z_{\rm SRM}$, designs with a Gaussian beam radius $w\left(z_{\rm
SRM}\right)>3$ mm are assigned a linearly increasing cost of
$C_{\rm loss}\propto w\left(z_{\rm SRM}\right)\;.$ (9)
Designs with beam sizes below this threshold are assigned a fixed cost of
$C_{\rm loss}=-1$.
##### Modal degeneracy ($C_{\rm HOM}$).
The presence of higher-order mode (HOM) co-resonances in the SRC can lead to a
resonant amplification of scattering losses through a process known as “mode
harming” [36]. To ensure the SRC is modally non-degenerate, we penalize HOM
co-resonances up to order 10. If the roundtrip Gouy phase of the cavity,
$\phi_{G}$, is within $\pm 10\%$ of $2\pi\,/\,n$ for any $n\in\{1,2,\ldots,10\}$,
the design is assigned a linearly increasing cost of
$C_{\rm HOM}\propto 1-\frac{\left|\phi_{G}-2\pi\,/\,n\right|}{2\pi\,/\,n}\;.$
(10)
Designs with no HOM co-resonances within this threshold are assigned a fixed
cost of $C_{\rm HOM}=-1$.
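For completeness, the sketch below encodes the three auxiliary penalties of Eqs. (8)-(10). The proportionality constants, the geometry quantities implied by the arguments, and the choice to keep the worst offending higher-order-mode co-resonance are assumptions, not values taken from the text.

```python
import numpy as np

def stability_cost(g, k_stable=10.0):
    # Eq. (8): linearly increasing penalty for |g| > 0.9, fixed value of -1 otherwise
    return k_stable * abs(g) if abs(g) > 0.9 else -1.0

def clipping_cost(w_srm, k_loss=1.0e3):
    # Eq. (9): penalize Gaussian beam radii at the SRM above 3 mm (w_srm in meters)
    return k_loss * w_srm if w_srm > 3e-3 else -1.0

def hom_cost(phi_g, k_hom=1.0, max_order=10, tol=0.10):
    # Eq. (10): penalize roundtrip Gouy phases within +/-10% of 2*pi/n,
    # keeping the worst offending co-resonance
    worst = -1.0
    for n in range(1, max_order + 1):
        res = 2.0 * np.pi / n
        if abs(phi_g - res) < tol * res:
            worst = max(worst, k_hom * (1.0 - abs(phi_g - res) / res))
    return worst
```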
### III.2 Squeezing performance improvement
Parameter | A+ Nominal | A+ Optimal
---|---|---
SR3 radius of curvature | 35.97 m | 60.24 m
SR2 radius of curvature | -6.41 m | -4.77 m
SRM radius of curvature | -5.69 m | -56.27 m
Beamsplitter to SR3 length | 19.37 m | 9.97 m
SR3 to SR2 length | 15.44 m | 28.56 m
SR2 to SRM length | 15.76 m | 12.04 m
Table 1: Nominal versus optimized signal recycling cavity parameters for LIGO A+.
Figure 6: Nominal versus optimized signal recycling cavity designs for LIGO A+. Shown is the Gaussian beam size (top) and the accumulated Gouy phase (bottom) along the cavity axis, from the ITM to the SRM. Of all the optics, only the positions of SR2 and SR3 are allowed to vary.
In this section, we present the most error-tolerant SRC design identified by
our optimization routine and characterize its optical performance. Table 1
lists the optimized SRC parameter values compared to those for the nominal A+
design. The design differences are visualized in Fig. 6. The top panel shows
the Gaussian beam diameter along the cavity axis, from the ITM to the SRM. The
bottom panel shows the accumulated Gouy phase along the same path. As shown,
the optimization favors converging the beam more slowly, which is achieved
largely by increasing the separation between the SR2 and SR3 telescope
mirrors. For the reasons discussed in §III.1, the total SRC length is
constrained to remain the same, which fixes the position of the SRM in Fig. 6,
and a larger beam size at the SRM position is also strongly penalized.
To assess the competitiveness of this candidate design, we analyze its
squeezing performance statistically using a Monte Carlo method. With a fixed
level of injected squeezing, small random errors are added to each of the SRC
parameters and the observed squeezing is computed for a large number of
trials. The resulting squeezing distributions provide a direct, quantitative
comparison of the performance of competing cavity designs. In detail, our
procedure is as follows:
1. 1.
Assume realistic uncertainties in the curvatures and positions of the SRC
optics. We assume the uncertainties to be normally distributed with zero mean
and a standard deviation of 0.1% for radius of curvature errors and 3 mm for
position errors.
2. 2.
Draw a set of random errors for all six SRC parameters listed in Table 1.
3. 3.
Using the Finesse model described in §III.1.1, simulate the interferometer
with these perturbed parameters and compute the observed squeezing level. A
fixed injected squeezing level of $-14$ dB is assumed.
4. 4.
Repeat the previous steps 2000 times, each time drawing a new set of random
parameter errors.
5. 5.
Calculate the probability distribution of observed squeezing across all
trials.
Convergence testing with varying numbers of trials has found 2000 to
adequately sample the distribution.
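A compact sketch of this Monte Carlo procedure is given below. The parameter names, the `observed_squeezing(params)` wrapper, and the use of the 5th percentile as the 95%-confidence worst-case proxy are assumptions made for illustration.

```python
import numpy as np

# Sketch of the Monte Carlo comparison, assuming a hypothetical
# observed_squeezing(params) wrapper around the simulation of Sec. III.1.1.

rng = np.random.default_rng(0)
SIGMA_ROC, SIGMA_LEN, N_TRIALS = 1e-3, 3e-3, 2000   # 0.1% RoC, 3 mm, 2000 trials
ROC_PARAMS = ("R_SR3", "R_SR2", "R_SRM")
LEN_PARAMS = ("L_BS_SR3", "L_SR3_SR2", "L_SR2_SRM")

def squeezing_distribution(nominal, observed_squeezing):
    samples = []
    for _ in range(N_TRIALS):
        p = dict(nominal)
        for k in ROC_PARAMS:
            p[k] *= 1.0 + rng.normal(0.0, SIGMA_ROC)   # fractional RoC errors
        for k in LEN_PARAMS:
            p[k] += rng.normal(0.0, SIGMA_LEN)          # length errors in meters
        samples.append(observed_squeezing(p))
    samples = np.asarray(samples)
    return {"median": float(np.median(samples)),
            "worst_case_95": float(np.percentile(samples, 5))}
```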
Fig. 7 shows the result of this comparative performance analysis for the
nominal and optimized SRC designs. The three panels show the probability
distributions of observed squeezing under varying levels of readout loss,
ranging from 5% (top panel) to 20% (bottom panel). This readout loss accounts
for attenuation losses due to output mode-mismatch, Faraday isolator insertion
loss, optical pick-offs for diagnostic and control purposes, and photodiode
quantum inefficiency. At the beginning of LIGO A+, the readout losses are
expected to be similar to the bottommost panel. We find that, in the presence
of random optical errors, our optimization procedure results in a significant
narrowing of the distribution of possible squeezing outcomes. As shown, this
narrowing leads to a modest improvement in the median squeezing level and, at
95% confidence, a dramatic improvement in the worst possible outcome (black
vertical lines in Fig. 7).
The squeezing distributions of the nominal design exhibit a bimodality which
arises from two distinct parameter regimes. In each panel, the left peak
(corresponding to higher squeezing) overwhelmingly consists of cases in which
the SR3 radius of curvature is smaller than intended ($\Delta R_{\rm
SR3}<0.0\%$). On the other hand, the right peak (corresponding to lower
squeezing) overwhelmingly consists of large, positive errors in the SR3 radius
of curvature ($\Delta R_{\rm SR3}>+0.1\%$). We find no strong correlation in
the values of the other five SRC parameters between the two peaks. The strong
dependency on $\Delta R_{\rm SR3}$ is reduced but not completely eliminated by
the optimization process. In the squeezing distributions of the optimal
design, nearly all of the low-squeezing outliers arise from cases of very
large, positive error in the SR3 radius of curvature ($\Delta R_{\rm
SR3}\geq+0.2\%$).
To understand the extreme sensitivity to errors in the SR3 mirror, and how our
optimization process reduces it, we generate a “corner plot” by detuning
individual pairs of SRC parameters, as shown in Fig. 8. In each panel, the
lines represent iso-squeezing contours at which errors in the two parameters
degrade the observed squeezing by 3 dB, compared to the unperturbed case
(located at the origin). The contours for the nominal SRC design (solid blue
lines) and the optimized design (dashed red lines) are overlaid to allow a
direct comparison of the parameter sensitivities. A greater error tolerance
appears as an increase in the area enclosed by the iso-squeezing contour (that
is, larger parameter errors are required to produce the same degradation in
squeezing). As shown, the single largest improvement is a dramatic reduction
of sensitivity to errors in $R_{\rm SR3}$.
Figure 7: Probability distributions of the squeezing achieved with two different signal recycling cavity (SRC) designs, in the presence of realistic random optical errors. The three panels assume various levels of readout loss ranging from 5% (top) to 20% (bottom). In each panel, the black vertical lines indicate the difference in worst possible outcome, at 95% confidence, between the two designs.
Figure 8: Corner plot comparing the sensitivity to error in pairs of parameters (see Table 1) between two signal recycling cavity (SRC) designs. In each panel, the lines represent iso-squeezing contours at which errors in the two parameters degrade the observed squeezing by 3 dB, compared to the unperturbed case (located at the origin). A larger area enclosed by the iso-squeezing contour of one design indicates a greater error tolerance.
## IV Conclusions
The aim of this paper has been to demonstrate the promise of two novel,
complementary techniques in optical experiment design:
1. 1.
Nonspherical mirror surfaces as solutions to otherwise overconstrained cavity
design problems.
2. 2.
Statistics-guided cavity design for optimal robustness to real-world optical
errors.
As a proof of concept, we have performed a two-part optimization of the LIGO
A+ design, first modifying the arm cavities for reduced point-absorber-induced
loss (see §II) and then optimizing the SRC for maximum squeezing performance
(see §III). Our findings strongly suggest that these techniques can be
leveraged to achieve greater performance in current and future gravitational-
wave detectors. Both act to minimize internal optical losses, as is critical
to achieving megawatt-scale power and high levels of squeezing in third-
generation detectors.
Several caveats apply to the results presented herein, which will be the
target of future studies. In §II, the optimal test mass profiles depend
critically on the thermal state of the optics. Central heating due to
(uniform) coating absorption and thermal compensation applied by the ring
heaters induce thermoelastic surface deformations of the same magnitude as the
static polish, even at the outer radii. For the purpose of illustration, we
assume coating absorption at the same level as for the Advanced LIGO optics.
However, the absorptivity of the new coating materials targeted for LIGO A+,
$\rm TiO_{2}$-doped $\rm GeO_{2}$ [20], is not currently known. Given this
uncertainty, an equivalent active solution is possible in which an annular
heating pattern is projected onto the front surface of the test mass near the
edge, providing a tunable means of generating surface profiles similar to
those shown in Fig. 1.
In §III, an analytical or semi-analytical squeezing model is desirable to
cross-validate the numerical results presented in Fig. 7. In future work, we
aim to develop a semi-analytic mode-scattering model that will enable
squeezing calculations to be performed using the FFT-based optical simulation
SIS [24]. This will provide an important cross-check of the Finesse-based
models. In future work, we also aim to extend the complexity of the
optimization in several key ways:
1. 1.
Include an OMC, to remove sideband power and non-artificially capture output
mode-matching losses.
2. 2.
Jointly optimize the quantum noise performance at multiple frequencies, to
extend our procedure to frequency-dependent squeezing.
3. 3.
Incorporate additional constraints to generate optical designs compatible with
the footprint of the existing LIGO vacuum chambers.
4. 4.
Jointly optimize the power and signal recycling cavities, enabling potential
searches for “mode-healing” designs [36] in the presence of point absorbers.
Each of these extensions incurs a considerably higher computational cost than
the present work, due either to an increase in the dimensionality of the
optimization itself, or to the need for additional sub-optimizations (or
servoing) to be performed within each iteration of the optimization. Improved
algorithmic efficiency, together with highly parallelized simulations, will be
essential to achieving them at the scale of a full gravitational-wave
interferometer.
## Acknowledgments
We are grateful to GariLynn Billingsley for providing test mass mirror maps,
Hiro Yamamoto for guidance on using SIS, and Lee McCuller for helpful comments
on the squeezing modeling. We would also like to thank the LIGO Laboratory for
providing the resources with which to conduct this research, as well as the
LIGO SURF program, the National Science Foundation, and the California
Institute of Technology for sponsoring the project in part. LIGO was
constructed by the California Institute of Technology and Massachusetts
Institute of Technology with funding from the National Science Foundation, and
operates under Cooperative Agreement No. PHY-1764464. Advanced LIGO was built
under Grant No. PHY-0823459. This paper has LIGO Document Number
LIGO-P2100184.
## Appendix A Arm Cavity Loss by Mode
Mode ${\rm LG}_{p,l}$ | Roundtrip arm loss
Order | $p$ | $|l|$ | A+ Nominal | A+ Proposed | Units
---|---|---|---|---|---
0 | 0 | 0 | $76^{+0.7}_{-1.8}$ | $62^{+0.5}_{-0.3}$ | ppm
1 | 0 | 1 | $159^{+4}_{-15}$ | $122^{+6}_{-7}$ | ppm
2 | 0 | 2 | $357^{+17}_{-66}$ | $649^{+48}_{-88}$ | ppm
2 | 1 | 0 | $597^{+32}_{-110}$ | $1092^{+81}_{-148}$ | ppm
3 | 0 | 3 | $793^{+47}_{-174}$ | $3165^{+220}_{-420}$ | ppm
3 | 1 | 1 | $1.7^{+0.1}_{-0.4}$ | $7.2^{+0.4}_{-0.9}$ | ppt
4 | 0 | 4 | $2.7^{+0.1}_{-0.4}$ | $15^{+0.6}_{-1.8}$ | ppt
4 | 1 | 2 | $7.6^{+0.3}_{-1.1}$ | $35^{+1}_{-4}$ | ppt
4 | 2 | 0 | $10^{+0.4}_{-1.5}$ | $44^{+1}_{-5}$ | ppt
5 | 0 | 5 | $10^{+0.3}_{-1.2}$ | $42^{+1}_{-5}$ | ppt
5 | 1 | 3 | $32^{+1}_{-4}$ | $92^{+2}_{-8}$ | ppt
5 | 2 | 1 | $49^{+2}_{-6}$ | $120^{+3}_{-10}$ | ppt
6 | 0 | 6 | $31^{+0.6}_{-2.8}$ | $80^{+2}_{-7}$ | ppt
6 | 1 | 4 | $94^{+2}_{-7}$ | $160^{+3}_{-11}$ | ppt
6 | 2 | 2 | $150^{+3}_{-10}$ | $209^{+3}_{-11}$ | ppt
6 | 3 | 0 | $171^{+3}_{-11}$ | $225^{+3}_{-11}$ | ppt
7 | 0 | 7 | $76^{+2}_{-6}$ | $134^{+2}_{-9}$ | ppt
7 | 1 | 5 | $210^{+4}_{-13}$ | $261^{+3}_{-9}$ | ppt
7 | 2 | 3 | $312^{+5}_{-13}$ | $342^{+3}_{-8}$ | ppt
7 | 3 | 1 | $360^{+6}_{-13}$ | $379^{+3}_{-9}$ | ppt
8 | 0 | 8 | $154^{+3}_{-10}$ | $224^{+2}_{-7}$ | ppt
8 | 1 | 6 | $348^{+5}_{-11}$ | $388^{+3}_{-8}$ | ppt
8 | 2 | 4 | $463^{+6}_{-12}$ | $472^{+3}_{-11}$ | ppt
8 | 3 | 2 | $512^{+8}_{-14}$ | $511^{+4}_{-12}$ | ppt
8 | 4 | 0 | $526^{+8}_{-15}$ | $522^{+4}_{-12}$ | ppt
Table 2: Roundtrip arm cavity loss for each Laguerre-Gauss mode up to order 8,
shown for both the nominal and proposed test mass profiles. The lower and
upper error bars represent the 16th and 84th percentiles of the loss
distributions, respectively (see §II.2).
Although the test mass profiles presented in §II.1 are designed to shift the
resonance frequencies of 7th-order modes (to reduce point absorber
scattering), they also achieve a significantly greater dissipation of mode
orders 2-7 as shown in Fig. 3. Enhancing the dissipation of certain modes may
be relevant for improving the damping of parametric instabilities in
gravitational-wave detectors [26]. Thus, in Table 2 we include a breakdown of
the roundtrip dissipation per optical mode.
## References
* [1] Kostas Skenderis and Marika Taylor. The fuzzball proposal for black holes. Physics Reports, 467(4):117 – 171, 2008.
* [2] Vitor Cardoso and Paolo Pani. Tests for the existence of black holes through gravitational wave echoes. Nature Astronomy, 1:586–591, September 2017.
* [3] Ram Brustein and A. J. M. Medved. Quantum hair of black holes out of equilibrium. Phys. Rev. D, 97:044035, February 2018.
* [4] C Markakis, J S Read, M Shibata, K Uryū, J D E Creighton, J L Friedman, and B D Lackey. Neutron star equation of state via gravitational wave observations. Journal of Physics: Conference Series, 189:012024, October 2009.
* [5] Jocelyn S. Read, Luca Baiotti, Jolien D. E. Creighton, John L. Friedman, Bruno Giacomazzo, Koutarou Kyutoku, Charalampos Markakis, Luciano Rezzolla, Masaru Shibata, and Keisuke Taniguchi. Matter effects on binary neutron star waveforms. Phys. Rev. D, 88:044042, August 2013.
* [6] Hsin-Yu Chen, Maya Fishbach, and Daniel E. Holz. A two per cent Hubble constant measurement from standard sirens within five years. Nature, 562(7728):545–547, October 2018.
* [7] Adam G. Riess, Stefano Casertano, Wenlong Yuan, Lucas M. Macri, and Dan Scolnic. Large Magellanic Cloud cepheid standards provide a 1% foundation for the determination of the Hubble constant and stronger evidence for physics beyond $\Lambda$CDM. The Astrophysical Journal, 876(1):85, May 2019.
* [8] A. Buikema, C. Cahillane, G. L. Mansell, C. D. Blair, R. Abbott, C. Adams, R. X. Adhikari, A. Ananyeva, S. Appert, K. Arai, J. S. Areeda, Y. Asali, S. M. Aston, C. Austin, A. M. Baer, M. Ball, S. W. Ballmer, S. Banagiri, D. Barker, L. Barsotti, J. Bartlett, B. K. Berger, J. Betzwieser, D. Bhattacharjee, G. Billingsley, S. Biscans, R. M. Blair, N. Bode, P. Booker, R. Bork, A. Bramley, A. F. Brooks, D. D. Brown, K. C. Cannon, X. Chen, A. A. Ciobanu, F. Clara, S. J. Cooper, K. R. Corley, S. T. Countryman, P. B. Covas, D. C. Coyne, L. E. H. Datrier, D. Davis, C. Di Fronzo, K. L. Dooley, J. C. Driggers, P. Dupej, S. E. Dwyer, A. Effler, T. Etzel, M. Evans, T. M. Evans, J. Feicht, A. Fernandez-Galiana, P. Fritschel, V. V. Frolov, P. Fulda, M. Fyffe, J. A. Giaime, K. D. Giardina, P. Godwin, E. Goetz, S. Gras, C. Gray, R. Gray, A. C. Green, E. K. Gustafson, R. Gustafson, J. Hanks, J. Hanson, T. Hardwick, R. K. Hasskew, M. C. Heintze, A. F. Helmling-Cornell, N. A. Holland, J. D. Jones, S. Kandhasamy, S. Karki, M. Kasprzack, K. Kawabe, N. Kijbunchoo, P. J. King, J. S. Kissel, Rahul Kumar, M. Landry, B. B. Lane, B. Lantz, M. Laxen, Y. K. Lecoeuche, J. Leviton, J. Liu, M. Lormand, A. P. Lundgren, R. Macas, M. MacInnis, D. M. Macleod, S. Márka, Z. Márka, D. V. Martynov, K. Mason, T. J. Massinger, F. Matichard, N. Mavalvala, R. McCarthy, D. E. McClelland, S. McCormick, L. McCuller, J. McIver, T. McRae, G. Mendell, K. Merfeld, E. L. Merilh, F. Meylahn, T. Mistry, R. Mittleman, G. Moreno, C. M. Mow-Lowry, S. Mozzon, A. Mullavey, T. J. N. Nelson, P. Nguyen, L. K. Nuttall, J. Oberling, Richard J. Oram, B. O’Reilly, C. Osthelder, D. J. Ottaway, H. Overmier, J. R. Palamos, W. Parker, E. Payne, A. Pele, R. Penhorwood, C. J. Perez, M. Pirello, H. Radkins, K. E. Ramirez, J. W. Richardson, K. Riles, N. A. Robertson, J. G. Rollins, C. L. Romel, J. H. Romie, M. P. Ross, K. Ryan, T. Sadecki, E. J. Sanchez, L. E. Sanchez, T. R. Saravanan, R. L. Savage, D. Schaetzl, R. Schnabel, R. M. S. Schofield, E. Schwartz, D. Sellers, T. Shaffer, D. Sigg, B. J. J. Slagmolen, J. R. Smith, S. Soni, B. Sorazu, A. P. Spencer, K. A. Strain, L. Sun, M. J. Szczepańczyk, M. Thomas, P. Thomas, K. A. Thorne, K. Toland, C. I. Torrie, G. Traylor, M. Tse, A. L. Urban, G. Vajente, G. Valdes, D. C. Vander-Hyde, P. J. Veitch, K. Venkateswara, G. Venugopalan, A. D. Viets, T. Vo, C. Vorvick, M. Wade, R. L. Ward, J. Warner, B. Weaver, R. Weiss, C. Whittle, B. Willke, C. C. Wipf, L. Xiao, H. Yamamoto, Hang Yu, Haocun Yu, L. Zhang, M. E. Zucker, and J. Zweizig. Sensitivity and performance of the Advanced LIGO detectors in the third observing run. Phys. Rev. D, 102:062003, September 2020.
* [9] Carlton Caves, Kip Thorne, Ronald Drever, Vernon Sandberg, and Mark Zimmermann. On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle. Reviews of Modern Physics, 52(2):341–392, April 1980.
* [10] Carlton M. Caves. Quantum-mechanical noise in an interferometer. Phys. Rev. D, 23:1693–1708, April 1981.
* [11] Aidan F. Brooks, Gabriele Vajente, Hiro Yamamoto, Rich Abbott, Carl Adams, Rana X. Adhikari, Alena Ananyeva, Stephen Appert, Koji Arai, Joseph S. Areeda, Yasmeen Asali, Stuart M. Aston, Corey Austin, Anne M. Baer, Matthew Ball, Stefan W. Ballmer, Sharan Banagiri, David Barker, Lisa Barsotti, Jeffrey Bartlett, Beverly K. Berger, Joseph Betzwieser, Dripta Bhattacharjee, Garilynn Billingsley, Sebastien Biscans, Carl D. Blair, Ryan M. Blair, Nina Bode, Phillip Booker, Rolf Bork, Alyssa Bramley, Daniel D. Brown, Aaron Buikema, Craig Cahillane, Kipp C. Cannon, Huy Tuong Cao, Xu Chen, Alexei A. Ciobanu, Filiberto Clara, Camilla Compton, Sam J. Cooper, Kenneth R. Corley, Stefan T. Countryman, Pep B. Covas, Dennis C. Coyne, Laurence E. Datrier, Derek Davis, Chiara D. Difronzo, Katherine L. Dooley, Jenne C. Driggers, Peter Dupej, Sheila E. Dwyer, Anamaria Effler, Todd Etzel, Matthew Evans, Tom M. Evans, Jon Feicht, Alvaro Fernandez-Galiana, Peter Fritschel, Valery V. Frolov, Paul Fulda, Michael Fyffe, Joe A. Giaime, Dwayne D. Giardina, Patrick Godwin, Evan Goetz, Slawomir Gras, Corey Gray, Rachel Gray, Anna C. Green, Anchal Gupta, Eric K. Gustafson, Dick Gustafson, Evan Hall, Jonathan Hanks, Joe Hanson, Terra Hardwick, Raine K. Hasskew, Matthew C. Heintze, Adrian F. Helmling-Cornell, Nathan A. Holland, Kiamu Izmui, Wenxuan Jia, Jeff D. Jones, Shivaraj Kandhasamy, Sudarshan Karki, Marie Kasprzack, Keita Kawabe, Nutsinee Kijbunchoo, Peter J. King, Jeffrey S. Kissel, Rahul Kumar, Michael Landry, Benjamin B. Lane, Brian Lantz, Michael Laxen, Yannick K. Lecoeuche, Jessica Leviton, Liu Jian, Marc Lormand, Andrew P. Lundgren, Ronaldas Macas, Myron Macinnis, Duncan M. Macleod, Georgia L. Mansell, Szabolcs Marka, Zsuzsanna Marka, Denis V. Martynov, Ken Mason, Thomas J. Massinger, Fabrice Matichard, Nergis Mavalvala, Richard McCarthy, David E. McClelland, Scott McCormick, Lee McCuller, Jessica McIver, Terry McRae, Gregory Mendell, Kara Merfeld, Edmond L. Merilh, Fabian Meylahn, Timesh Mistry, Richard Mittleman, Gerardo Moreno, Conor M. Mow-Lowry, Simone Mozzon, Adam Mullavey, Timothy J. Nelson, Philippe Nguyen, Laura K. Nuttall, Jason Oberling, Richard J. Oram, Charles Osthelder, David J. Ottaway, Harry Overmier, Jordan R. Palamos, William Parker, Ethan Payne, Arnaud Pele, Reilly Penhorwood, Carlos J. Perez, Marc Pirello, Hugh Radkins, Karla E. Ramirez, Jonathan W. Richardson, Keith Riles, Norna A. Robertson, Jameson G. Rollins, Chandra L. Romel, Janeen H. Romie, Michael P. Ross, Kyle Ryan, Travis Sadecki, Eduardo J. Sanchez, Luis E. Sanchez, Saravanan R. Tiruppatturrajamanikkam, Richard L. Savage, Dean Schaetzl, Roman Schnabel, Robert M. Schofield, Eyal Schwartz, Danny Sellers, Thomas Shaffer, Daniel Sigg, Bram J. Slagmolen, Joshua R. Smith, Siddharth Soni, Borja Sorazu, Andrew P. Spencer, Ken A. Strain, Ling Sun, Marek J. Szczepanczyk, Michael Thomas, Patrick Thomas, Keith A. Thorne, Karl Toland, Calum I. Torrie, Gary Traylor, Maggie Tse, Alexander L. Urban, Guillermo Valdes, Daniel C. Vander-Hyde, Peter J. Veitch, Krishna Venkateswara, Gautam Venugopalan, Aaron D. Viets, Thomas Vo, Cheryl Vorvick, Madeline Wade, Robert L. Ward, Jim Warner, Betsy Weaver, Rainer Weiss, Chris Whittle, Benno Willke, Christopher C. Wipf, Liting Xiao, Hang Yu, Haocun Yu, Liyuan Zhang, Michael E. Zucker, and John Zweizig. Point absorbers in Advanced LIGO. Appl. Opt., 60(13):4047–4063, May 2021.
* [12] M. Tse, Haocun Yu, N. Kijbunchoo, A. Fernandez-Galiana, P. Dupej, L. Barsotti, C. D. Blair, D. D. Brown, S. E. Dwyer, A. Effler, M. Evans, P. Fritschel, V. V. Frolov, A. C. Green, G. L. Mansell, F. Matichard, N. Mavalvala, D. E. McClelland, L. McCuller, T. McRae, J. Miller, A. Mullavey, E. Oelker, I. Y. Phinney, D. Sigg, B. J. J. Slagmolen, T. Vo, R. L. Ward, C. Whittle, R. Abbott, C. Adams, R. X. Adhikari, A. Ananyeva, S. Appert, K. Arai, J. S. Areeda, Y. Asali, S. M. Aston, C. Austin, A. M. Baer, M. Ball, S. W. Ballmer, S. Banagiri, D. Barker, J. Bartlett, B. K. Berger, J. Betzwieser, D. Bhattacharjee, G. Billingsley, S. Biscans, R. M. Blair, N. Bode, P. Booker, R. Bork, A. Bramley, A. F. Brooks, A. Buikema, C. Cahillane, K. C. Cannon, X. Chen, A. A. Ciobanu, F. Clara, S. J. Cooper, K. R. Corley, S. T. Countryman, P. B. Covas, D. C. Coyne, L. E. H. Datrier, D. Davis, C. Di Fronzo, J. C. Driggers, T. Etzel, T. M. Evans, J. Feicht, P. Fulda, M. Fyffe, J. A. Giaime, K. D. Giardina, P. Godwin, E. Goetz, S. Gras, C. Gray, R. Gray, Anchal Gupta, E. K. Gustafson, R. Gustafson, J. Hanks, J. Hanson, T. Hardwick, R. K. Hasskew, M. C. Heintze, A. F. Helmling-Cornell, N. A. Holland, J. D. Jones, S. Kandhasamy, S. Karki, M. Kasprzack, K. Kawabe, P. J. King, J. S. Kissel, Rahul Kumar, M. Landry, B. B. Lane, B. Lantz, M. Laxen, Y. K. Lecoeuche, J. Leviton, J. Liu, M. Lormand, A. P. Lundgren, R. Macas, M. MacInnis, D. M. Macleod, S. Márka, Z. Márka, D. V. Martynov, K. Mason, T. J. Massinger, R. McCarthy, S. McCormick, J. McIver, G. Mendell, K. Merfeld, E. L. Merilh, F. Meylahn, T. Mistry, R. Mittleman, G. Moreno, C. M. Mow-Lowry, S. Mozzon, T. J. N. Nelson, P. Nguyen, L. K. Nuttall, J. Oberling, R. J. Oram, B. O’Reilly, C. Osthelder, D. J. Ottaway, H. Overmier, J. R. Palamos, W. Parker, E. Payne, A. Pele, C. J. Perez, M. Pirello, H. Radkins, K. E. Ramirez, J. W. Richardson, K. Riles, N. A. Robertson, J. G. Rollins, C. L. Romel, J. H. Romie, M. P. Ross, K. Ryan, T. Sadecki, E. J. Sanchez, L. E. Sanchez, T. R. Saravanan, R. L. Savage, D. Schaetzl, R. Schnabel, R. M. S. Schofield, E. Schwartz, D. Sellers, T. J. Shaffer, J. R. Smith, S. Soni, B. Sorazu, A. P. Spencer, K. A. Strain, L. Sun, M. J. Szczepańczyk, M. Thomas, P. Thomas, K. A. Thorne, K. Toland, C. I. Torrie, G. Traylor, A. L. Urban, G. Vajente, G. Valdes, D. C. Vander-Hyde, P. J. Veitch, K. Venkateswara, G. Venugopalan, A. D. Viets, C. Vorvick, M. Wade, J. Warner, B. Weaver, R. Weiss, B. Willke, C. C. Wipf, L. Xiao, H. Yamamoto, M. J. Yap, Hang Yu, L. Zhang, M. E. Zucker, and J. Zweizig. Quantum-enhanced Advanced LIGO detectors in the era of gravitational-wave astronomy. Phys. Rev. Lett., 123:231107, December 2019.
* [13] Anna G. Ciriolo, Matteo Negro, Michele Devetta, Eugenio Cinquanta, Davide Faccialà, Aditya Pusala, Sandro De Silvestri, Salvatore Stagira, and Caterina Vozzi. Optical parametric amplification techniques for the generation of high-energy few-optical-cycles IR pulses for strong field applications. Applied Sciences, 7(3), 2017.
* [14] Haixing Miao, Nicolas D. Smith, and Matthew Evans. Quantum limit for laser interferometric gravitational-wave detectors from optical dissipation. Phys. Rev. X, 9:011053, March 2019.
* [15] Antonio Perreca, Aidan F. Brooks, Jonathan W. Richardson, Daniel Töyrä, and Rory Smith. Analysis and visualization of the output mode-matching requirements for squeezing in Advanced LIGO and future gravitational wave detectors. Phys. Rev. D, 101:102005, May 2020.
* [16] David Reitze, Rana X Adhikari, Stefan Ballmer, Barry Barish, Lisa Barsotti, GariLynn Billingsley, Duncan A. Brown, Yanbei Chen, Dennis Coyne, Robert Eisenstein, Matthew Evans, Peter Fritschel, Evan D. Hall, Albert Lazzarini, Geoffrey Lovelace, Jocelyn Read, B. S. Sathyaprakash, David Shoemaker, Joshua Smith, Calum Torrie, Salvatore Vitale, Rainer Weiss, Christopher Wipf, and Michael Zucker. Cosmic Explorer: The U.S. contribution to gravitational-wave astronomy beyond LIGO. Bulletin of the AAS, 51(7), September 2019.
* [17] M Abernathy, F Acernese, P Ajith, B Allen, P Amaro-Seoane, N Andersson, S Aoudia, P Astone, B Krishnan, L Barack, F Barone, B Barr, M Barsuglia, M Bassan, R Bassiri, M Beker, N Beveridge, M Bizouard, C Bond, S Bose, L Bosi, S Braccini, C Bradaschia, M Britzger, F Brueckner, T Bulik, HJ Bulten, O Burmeister, E Calloni, P Campsie, L Carbone, G Cella, E Chalkley, E Chassande-Mottin, S Chelkowski, A Chincarini, A DiCintio, J Clark, E Coccia, CN Colacino, J Colas, A Colla, A Corsi, A Cumming, L Cunningham, E Cuoco, S Danilishin, K Danzmann, E Daw, R De Salvo, W DelPozzo, T Dent, R DeRosa, L Di Fiore, M Di Paolo Emilio, A Di Virgilio, A Dietz, M Doets, J Dueck, M Edwards, V Fafone, S Fairhurst, P Falferi, M Favata, V Ferrari, F Ferrini, F Fidecaro, R Flaminio, J Franc, F Frasconi, A Freise, D Friedrich, P Fulda, J Gair, M Galimberti, G Gemme, E Genin, A Gennai, A Giazotto, K Glampedakis, R Gouaty, C Graef, W Graham, M Granata, H Grote, G Guidi, J Hallam, G Hammond, M Hannam, J Harms, K Haughian, I Hawke, D Heinert, M Hendry, I Heng, E Hennes, S Hild, J Hough, D Huet, S Husa, S Huttner, B Iyer, I Jones, G Jones, I Kamaretsos, C Kant Mishra, F Kawazoe, F Khalili, B Kley, K Kokeyama, K Kokkotas, S Kroker, R Kumar, K Kuroda, B Lagrange, N Lastzka, TGF Li, M Lorenzini, G Losurdo, H Luck, E Majorana, V Malvezzi, I Mandel, V Mandic, S Marka, F Marin, F Marion, J Marque, I Martin, D McLeod, D Mckechan, M Mehmet, C Michel, Y Minenkov, N Morgado, A Morgia, S Mosca, L Moscatelli, B Mours, H Muller-Ebhardt, P Murray, L Naticchioni, R Nawrodt, J Nelson, R O’Shaughnessy, CD Ott, C Palomba, A Paoli, G Parguez, A Pasqualetti, R Passaquieti, D Passuello, M Perciballi, F Piergiovanni, L Pinard, M Pitkin, W Plastino, M Plissi, R Poggiani, P Popolizio, E Porter, M Prato, G Prodi, M Punturo, P Puppo, D Rabeling, I Racz, P Rapagnani, V Re, J Read, T Regimbau, H Rehbein, S Reid, L Rezzolla, F Ricci, F Richard, A Rocchi, R Romano, S Rowan, A Rudiger, A Samblowski, L Santamaria, B Sassolas, B Sathyaprakash, R Schilling, P Schmidt, R Schnabel, B Schutz, C Schwarz, J Scott, P Seidel, AM Sintes, K Somiya, CF Sopuerta, B Sorazu, F Speirits, L Storchi, K Strain, S Strigin, P Sutton, S Tarabrin, B Taylor, A Thurin, K Tokmakov, M Tonelli, H Tourneer, R Vaccarone, H Vahlbruch, JFJ van den Brand, C Van Den Broeck, S van der Putten, M van Veggel, A Vecchio, J Veitch, F Vetrano, A Vicere, S Vyatchanin, P Webels, B Willke, W Winkler, G Woan, K Wojcik, A Woodcraft, and K Yamamoto. Einstein Gravitational Wave Telescope conceptual design study. ET Technical Report ET-0106C-10, June 2011.
* [18] Peter Fritschel, Stuart Reid, Gabriele Vajente, Giles Hammond, Daniel Brown, Haixing Miao, and Volker Quetschke. Instrument science white paper 2020. LIGO Technical Report LIGO-T2000407-v3, 2020.
* [19] G. Vajente. In situ correction of mirror surface to reduce round-trip losses in Fabry-Perot cavities. Appl. Opt., 53(7):1459–1465, March 2014.
* [20] Gabriele Vajente, Le Yang, Aaron Davenport, Mariana Fazio, Alena Ananyeva, Liyuan Zhang, Garilynn Billingsley, Kiran Prasai, Ashot Markosyan, Riccardo Bassiri, Martin M. Fejer, Martin Chicoine, Francois Schiettekatte, and Carmen S. Menoni. Low thermal noise TiO2-doped GeO2 coatings for high sensitivity gravitational wave interferometers. LIGO Technical Report LIGO-P2100075, 2021. (in preparation).
* [21] Alex Amato, Silvana Terreni, Vincent Dolique, Danièle Forest, Gianluca Gemme, Massimo Granata, Lorenzo Mereni, Christophe Michel, Laurent Pinard, Benoit Sassolas, Julien Teillon, Gianpietro Cagnoli, and Maurizio Canepa. Optical properties of high-quality oxide coating materials used in gravitational-wave advanced detectors. Journal of Physics: Materials, 2(3):035004, jun 2019.
* [22] M Granata, A Amato, L Balzarini, M Canepa, J Degallaix, D Forest, V Dolique, L Mereni, C Michel, L Pinard, B Sassolas, J Teillon, and G Cagnoli. Amorphous optical coatings of present gravitational-wave interferometers. Classical and Quantum Gravity, 37(9):095004, apr 2020.
* [23] Aidan F. Brooks, Benjamin Abbott, Muzammil A. Arain, Giacomo Ciani, Ayodele Cole, Greg Grabeel, Eric Gustafson, Chris Guido, Matthew Heintze, Alastair Heptonstall, Mindy Jacobson, Won Kim, Eleanor King, Alexander Lynch, Stephen O’Connor, David Ottaway, Ken Mailand, Guido Mueller, Jesper Munch, Virginio Sannibale, Zhenhua Shao, Michael Smith, Peter Veitch, Thomas Vo, Cheryl Vorvick, and Phil Willems. Overview of Advanced LIGO adaptive optics. Appl. Opt., 55(29):8256–8265, October 2016.
* [24] Hiro Yamamoto. SIS (Stationary Interferometer Simulation) manual. LIGO Technical Report LIGO-T070039, 2008.
* [25] L. Pinard, C. Michel, B. Sassolas, L. Balzarini, J. Degallaix, V. Dolique, R. Flaminio, D. Forest, M. Granata, B. Lagrange, N. Straniero, J. Teillon, and G. Cagnoli. Mirrors used in the LIGO interferometers for first detection of gravitational waves. Appl. Opt., 56(4):C11–C15, Feb 2017.
* [26] Matthew Evans, Slawek Gras, Peter Fritschel, John Miller, Lisa Barsotti, Denis Martynov, Aidan Brooks, Dennis Coyne, Rich Abbott, Rana X. Adhikari, Koji Arai, Rolf Bork, Bill Kells, Jameson Rollins, Nicolas Smith-Lefebvre, Gabriele Vajente, Hiroaki Yamamoto, Carl Adams, Stuart Aston, Joseph Betzweiser, Valera Frolov, Adam Mullavey, Arnaud Pele, Janeen Romie, Michael Thomas, Keith Thorne, Sheila Dwyer, Kiwamu Izumi, Keita Kawabe, Daniel Sigg, Ryan Derosa, Anamaria Effler, Keiko Kokeyama, Stefan Ballmer, Thomas J. Massinger, Alexa Staley, Matthew Heinze, Chris Mueller, Hartmut Grote, Robert Ward, Eleanor King, David Blair, Li Ju, and Chunnong Zhao. Observation of parametric instability in Advanced LIGO. Phys. Rev. Lett., 114:161102, Apr 2015.
* [27] L. McCuller, S. E. Dwyer, A. C. Green, Haocun Yu, K. Kuns, L. Barsotti, C. D. Blair, D. D. Brown, A. Effler, M. Evans, A. Fernandez-Galiana, P. Fritschel, V. V. Frolov, N. Kijbunchoo, G. L. Mansell, F. Matichard, N. Mavalvala, D. E. McClelland, T. McRae, A. Mullavey, D. Sigg, B. J. J. Slagmolen, M. Tse, T. Vo, R. L. Ward, C. Whittle, R. Abbott, C. Adams, R. X. Adhikari, A. Ananyeva, S. Appert, K. Arai, J. S. Areeda, Y. Asali, S. M. Aston, C. Austin, A. M. Baer, M. Ball, S. W. Ballmer, S. Banagiri, D. Barker, J. Bartlett, B. K. Berger, J. Betzwieser, D. Bhattacharjee, G. Billingsley, S. Biscans, R. M. Blair, N. Bode, P. Booker, R. Bork, A. Bramley, A. F. Brooks, A. Buikema, C. Cahillane, K. C. Cannon, X. Chen, A. A. Ciobanu, F. Clara, C. M. Compton, S. J. Cooper, K. R. Corley, S. T. Countryman, P. B. Covas, D. C. Coyne, L. E. H. Datrier, D. Davis, C. Di Fronzo, K. L. Dooley, J. C. Driggers, T. Etzel, T. M. Evans, J. Feicht, P. Fulda, M. Fyffe, J. A. Giaime, K. D. Giardina, P. Godwin, E. Goetz, S. Gras, C. Gray, R. Gray, E. K. Gustafson, R. Gustafson, J. Hanks, J. Hanson, T. Hardwick, R. K. Hasskew, M. C. Heintze, A. F. Helmling-Cornell, N. A. Holland, J. D. Jones, S. Kandhasamy, S. Karki, M. Kasprzack, K. Kawabe, P. J. King, J. S. Kissel, Rahul Kumar, M. Landry, B. B. Lane, B. Lantz, M. Laxen, Y. K. Lecoeuche, J. Leviton, J. Liu, M. Lormand, A. P. Lundgren, R. Macas, M. MacInnis, D. M. Macleod, S. Márka, Z. Márka, D. V. Martynov, K. Mason, T. J. Massinger, R. McCarthy, S. McCormick, J. McIver, G. Mendell, K. Merfeld, E. L. Merilh, F. Meylahn, T. Mistry, R. Mittleman, G. Moreno, C. M. Mow-Lowry, S. Mozzon, T. J. N. Nelson, P. Nguyen, L. K. Nuttall, J. Oberling, Richard J. Oram, C. Osthelder, D. J. Ottaway, H. Overmier, J. R. Palamos, W. Parker, E. Payne, A. Pele, R. Penhorwood, C. J. Perez, M. Pirello, H. Radkins, K. E. Ramirez, J. W. Richardson, K. Riles, N. A. Robertson, J. G. Rollins, C. L. Romel, J. H. Romie, M. P. Ross, K. Ryan, T. Sadecki, E. J. Sanchez, L. E. Sanchez, T. R. Saravanan, R. L. Savage, D. Schaetzl, R. Schnabel, R. M. S. Schofield, E. Schwartz, D. Sellers, T. Shaffer, J. R. Smith, S. Soni, B. Sorazu, A. P. Spencer, K. A. Strain, L. Sun, M. J. Szczepańczyk, M. Thomas, P. Thomas, K. A. Thorne, K. Toland, C. I. Torrie, G. Traylor, A. L. Urban, G. Vajente, G. Valdes, D. C. Vander-Hyde, P. J. Veitch, K. Venkateswara, G. Venugopalan, A. D. Viets, C. Vorvick, M. Wade, J. Warner, B. Weaver, R. Weiss, B. Willke, C. C. Wipf, L. Xiao, H. Yamamoto, Hang Yu, L. Zhang, M. E. Zucker, and J. Zweizig. Ligo’s quantum response to squeezed states. Phys. Rev. D, 104:062006, Sep 2021.
* [28] Lester James Miranda. Pyswarms: a research toolkit for particle swarm optimization in Python. Journal of Open Source Software, 3(21):433, 2018.
* [29] Daniel David Brown and Andreas Freise. Finesse, May 2014. You can download the binaries and source code at http://www.gwoptics.org/finesse.
* [30] Daniel D. Brown, Philip Jones, Samuel Rowlinson, Sean Leavey, Anna C. Green, Daniel Töyrä, and Andreas Freise. Pykat: Python package for modelling precision optical interferometers. SoftwareX, 12:100613, 2020.
* [31] Peter Fritschel, Matthew Evans, and Valery Frolov. Balanced homodyne readout for quantum limited gravitational wave detectors. Opt. Express, 22(4):4224–4234, February 2014.
* [32] D. V. Martynov, E. D. Hall, B. P. Abbott, R. Abbott, T. D. Abbott, C. Adams, R. X. Adhikari, R. A. Anderson, S. B. Anderson, K. Arai, M. A. Arain, S. M. Aston, L. Austin, S. W. Ballmer, M. Barbet, D. Barker, B. Barr, L. Barsotti, J. Bartlett, M. A. Barton, I. Bartos, J. C. Batch, A. S. Bell, I. Belopolski, J. Bergman, J. Betzwieser, G. Billingsley, J. Birch, S. Biscans, C. Biwer, E. Black, C. D. Blair, C. Bogan, C. Bond, R. Bork, D. O. Bridges, A. F. Brooks, D. D. Brown, L. Carbone, C. Celerier, G. Ciani, F. Clara, D. Cook, S. T. Countryman, M. J. Cowart, D. C. Coyne, A. Cumming, L. Cunningham, M. Damjanic, R. Dannenberg, K. Danzmann, C. F. Da Silva Costa, E. J. Daw, D. DeBra, R. T. DeRosa, R. DeSalvo, K. L. Dooley, S. Doravari, J. C. Driggers, S. E. Dwyer, A. Effler, T. Etzel, M. Evans, T. M. Evans, M. Factourovich, H. Fair, D. Feldbaum, R. P. Fisher, S. Foley, M. Frede, A. Freise, P. Fritschel, V. V. Frolov, P. Fulda, M. Fyffe, V. Galdi, J. A. Giaime, K. D. Giardina, J. R. Gleason, R. Goetz, S. Gras, C. Gray, R. J. S. Greenhalgh, H. Grote, C. J. Guido, K. E. Gushwa, E. K. Gustafson, R. Gustafson, G. Hammond, J. Hanks, J. Hanson, T. Hardwick, G. M. Harry, K. Haughian, J. Heefner, M. C. Heintze, A. W. Heptonstall, D. Hoak, J. Hough, A. Ivanov, K. Izumi, M. Jacobson, E. James, R. Jones, S. Kandhasamy, S. Karki, M. Kasprzack, S. Kaufer, K. Kawabe, W. Kells, N. Kijbunchoo, E. J. King, P. J. King, D. L. Kinzel, J. S. Kissel, K. Kokeyama, W. Z. Korth, G. Kuehn, P. Kwee, M. Landry, B. Lantz, A. Le Roux, B. M. Levine, J. B. Lewis, V. Lhuillier, N. A. Lockerbie, M. Lormand, M. J. Lubinski, A. P. Lundgren, T. MacDonald, M. MacInnis, D. M. Macleod, M. Mageswaran, K. Mailand, S. Márka, Z. Márka, A. S. Markosyan, E. Maros, I. W. Martin, R. M. Martin, J. N. Marx, K. Mason, T. J. Massinger, F. Matichard, N. Mavalvala, R. McCarthy, D. E. McClelland, S. McCormick, G. McIntyre, J. McIver, E. L. Merilh, M. S. Meyer, P. M. Meyers, J. Miller, R. Mittleman, G. Moreno, C. L. Mueller, G. Mueller, A. Mullavey, J. Munch, P. G. Murray, L. K. Nuttall, J. Oberling, J. O’Dell, P. Oppermann, Richard J. Oram, B. O’Reilly, C. Osthelder, D. J. Ottaway, H. Overmier, J. R. Palamos, H. R. Paris, W. Parker, Z. Patrick, A. Pele, S. Penn, M. Phelps, M. Pickenpack, V. Pierro, I. Pinto, J. Poeld, M. Principe, L. Prokhorov, O. Puncken, V. Quetschke, E. A. Quintero, F. J. Raab, H. Radkins, P. Raffai, C. R. Ramet, C. M. Reed, S. Reid, D. H. Reitze, N. A. Robertson, J. G. Rollins, V. J. Roma, J. H. Romie, S. Rowan, K. Ryan, T. Sadecki, E. J. Sanchez, V. Sandberg, V. Sannibale, R. L. Savage, R. M. S. Schofield, B. Schultz, P. Schwinberg, D. Sellers, A. Sevigny, D. A. Shaddock, Z. Shao, B. Shapiro, P. Shawhan, D. H. Shoemaker, D. Sigg, B. J. J. Slagmolen, J. R. Smith, M. R. Smith, N. D. Smith-Lefebvre, B. Sorazu, A. Staley, A. J. Stein, A. Stochino, K. A. Strain, R. Taylor, M. Thomas, P. Thomas, K. A. Thorne, E. Thrane, K. V. Tokmakov, C. I. Torrie, G. Traylor, G. Vajente, G. Valdes, A. A. van Veggel, M. Vargas, A. Vecchio, P. J. Veitch, K. Venkateswara, T. Vo, C. Vorvick, S. J. Waldman, M. Walker, R. L. Ward, J. Warner, B. Weaver, R. Weiss, T. Welborn, P. Weßels, C. Wilkinson, P. A. Willems, L. Williams, B. Willke, I. Wilmut, L. Winkelmann, C. C. Wipf, J. Worden, G. Wu, H. Yamamoto, C. C. Yancey, H. Yu, L. Zhang, M. E. Zucker, and J. Zweizig. Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy. Phys. Rev. D, 93:112004, Jun 2016.
* [33] A Staley, D Martynov, R Abbott, R X Adhikari, K Arai, S Ballmer, L Barsotti, A F Brooks, R T DeRosa, S Dwyer, A Effler, M Evans, P Fritschel, V V Frolov, C Gray, C J Guido, R Gustafson, M Heintze, D Hoak, K Izumi, K Kawabe, E J King, J S Kissel, K Kokeyama, M Landry, D E McClelland, J Miller, A Mullavey, B O’Reilly, J G Rollins, J R Sanders, R M S Schofield, D Sigg, B J J Slagmolen, N D Smith-Lefebvre, G Vajente, R L Ward, and C Wipf. Achieving resonance in the Advanced LIGO gravitational-wave interferometer. Classical and Quantum Gravity, 31(24):245010, nov 2014.
* [34] H. Kogelnik and T. Li. Laser beams and resonators. Appl. Opt., 5(10):1550–1567, Oct 1966.
* [35] Koji Arai. On the accumulated round-trip Gouy phase shift for a general optical cavity. LIGO Technical Report LIGO-T1300189-v1, 2013.
* [36] Brett Bochner. Simulating a dual-recycled gravitational wave interferometer with realistically imperfect optics. General Relativity and Gravitation, 35(6):1029–1057, June 2003.
# Deep Reinforcement Learning Based Power Allocation for Minimizing AoI and
Energy Consumption in MIMO-NOMA IoT Systems
Hongbiao Zhu, Qiong Wu, Qiang Fan, Pingyi Fan,
Jiangzhou Wang, and Zhengquan Li
This work was supported in part by the
National Natural Science Foundation of China under Grant No. 61701197, in part
by the open research fund of the State Key Laboratory of Integrated Services
Networks under Grant No. ISN23-11, in part by the 111 project under Grant No.
B12018, in part by the Future Network Scientific Research Fund Project
(FNSRFP-2021-YB-11). Hongbiao Zhu and Qiong Wu are with the School of Internet
of Things Engineering, Jiangnan University, Wuxi 214122, China, and also with
the State Key Laboratory of Integrated Services Networks (Xidian University),
Xi’an 710071, China (e-mail: <EMAIL_ADDRESS>; [email protected]). Qiang Fan is with Qualcomm, San Jose, CA 95110, USA
(e-mail: [email protected]). Pingyi Fan is with the Department of Electronic
Engineering, Beijing National Research Center for Information Science and
Technology, Tsinghua University, Beijing 100084, China (Email:
[email protected]). Jiangzhou Wang is with the School of Engineering,
University of Kent, CT2 7NT Canterbury, U.K. (Email: [email protected]).
Zhengquan Li is with the School of Internet of Things Engineering, Jiangnan
University, Wuxi 214122, China, and also with Jiangsu Future Networks
Innovation Institute, Nanjing 211111, China (Email: [email protected]).
###### Abstract
Multiple-input multiple-output and non-orthogonal multiple access (MIMO-NOMA)
internet-of-things (IoT) systems can markedly improve channel capacity and
spectrum efficiency to support real-time applications. Age of information
(AoI) is an important metric for real-time applications, but no existing work
has minimized the AoI of a MIMO-NOMA IoT system, which motivates this work. In
a MIMO-NOMA IoT system, the base station (BS) determines the sample collection
requirements and allocates the transmission power for each IoT device. Each
device decides whether to sample data according to the sample collection
requirements and uses the allocated power to transmit the sampled data to the
BS over the MIMO-NOMA channel. Afterwards, the BS employs the successive
interference cancellation (SIC) technique to decode the signal of the data
transmitted by each device. The sample collection requirements and power
allocation affect the AoI and energy consumption of the system. It is
therefore critical to determine the optimal policy, comprising the sample
collection requirements and power allocation, that minimizes the AoI and
energy consumption of the MIMO-NOMA IoT system, where the transmission rate is
not constant in the SIC process and the noise in the MIMO-NOMA channel is
stochastic. In this paper, we propose an optimal power allocation scheme,
based on deep reinforcement learning (DRL), to minimize the AoI and energy
consumption of the MIMO-NOMA IoT system. Extensive simulations are carried out
to demonstrate the superiority of the optimal power allocation.
###### Index Terms:
deep reinforcement learning, age of information, MIMO-NOMA, internet of things
## I Introduction
With the development of the Internet of Things (IoT)[1, 2, 3], the base station
(BS) can support real-time applications such as disaster management,
information recommendation, vehicular networks, smart cities, connected health
and the industrial internet by collecting the data sampled by IoT devices[4, 5,
6, 7, 8]. However, the amount of sampled data is enormous and the number of
IoT devices is usually large, so realizing these IoT applications requires
large bandwidth and spectrum access. Multiple-input multiple-output and
non-orthogonal multiple access (MIMO-NOMA) IoT systems can transmit data over
the MIMO-NOMA channel to address these problems, where multiple antennas are
deployed at the BS to improve the channel capacity and multiple IoT devices
access a common bandwidth simultaneously to improve the spectrum
efficiency[9, 10].
The BS collects data during discrete slots in the MIMO-NOMA IoT system. In
each slot, a BS first determines the sample collection requirements and
allocates the transmission power for each IoT device and then sends the
corresponding sample collection requirements and transmission power to each
IoT device. Afterwards, each IoT device determines whether to sample data from
the physical world according to its sample collection requirements. Each IoT
device then uses its allocated power to transmit the sampled data to the BS
over the MIMO-NOMA channel. In the transmission process, multiple IoT devices
transmit their data signals over a common bandwidth, so the signal of one IoT
device is interfered with by the signals of the other devices. To eliminate
this interference, the BS adopts the successive interference cancellation (SIC)
technique to decode the received signal transmitted by each device[11].
Specifically, the BS sorts the powers of all received signals in descending
order and decodes the signal with the highest received power while treating
the other signals as interference. The BS then removes the decoded signal from
the received signals and re-sorts the remaining signals to decode the next
signal. The process is repeated until all signals are decoded.
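As a simple illustration of this decoding order, the sketch below computes the SINR seen by each device when signals are decoded from strongest to weakest received power; the power and noise values in the example are arbitrary placeholders, not quantities from the system model.

```python
import numpy as np

def sic_sinrs(received_powers, noise_power):
    """Return the SINR of each device in the order the BS decodes it (strongest first)."""
    order = np.argsort(received_powers)[::-1]
    remaining = float(np.sum(received_powers))
    sinrs = {}
    for idx in order:
        remaining -= float(received_powers[idx])   # the decoded signal no longer interferes
        sinrs[int(idx)] = float(received_powers[idx]) / (remaining + noise_power)
    return sinrs

# Example with three devices and arbitrary received powers
print(sic_sinrs(np.array([1.0, 0.5, 0.1]), noise_power=0.01))
```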
The age of information (AoI) is a metric that measures the freshness of data;
it is defined as the time elapsed from the moment the data are sampled until
the moment the sampled data are received. In the MIMO-NOMA IoT system, the BS
needs to receive data, i.e., decode the data signals, promptly after they are
sampled in order to support real-time applications, so the MIMO-NOMA IoT
system should keep the AoI low [12][13]. Furthermore, the IoT devices are
energy-limited, so the MIMO-NOMA IoT system should also keep its energy
consumption low to prolong the working time of the IoT devices[9]. Hence, the
AoI and energy consumption are two important performance metrics of the
MIMO-NOMA IoT system. The sample collection requirements and power allocation
affect both of them[14]. Specifically, regarding the sample collection
requirements, if the BS selects more IoT devices to sample, the system
consumes more energy because more IoT devices spend energy on sampling data.
However, if the BS selects fewer IoT devices to sample, the data from the
unselected IoT devices become obsolete, which increases the AoI of the system.
Hence the sample collection requirements affect both the AoI and the energy
consumption of the MIMO-NOMA IoT system. Regarding the power allocation, if an
IoT device transmits with high power, its signal is decoded while many
lower-power signals still act as interference in the SIC process, which would
lead to a low signal-to-interference-plus-noise ratio (SINR). Conversely, if
an IoT device transmits with low power, the SINR also deteriorates due to the
low transmission power. A low SINR yields a low transmission rate, which
causes a long transmission delay and a high AoI of the MIMO-NOMA IoT system.
Hence the power allocation affects the AoI of the MIMO-NOMA IoT system.
Moreover, the power allocation affects the energy consumption directly. Thus,
the transmission power affects both the AoI and the energy consumption of the
MIMO-NOMA IoT system. As discussed above, it is critical to determine the
optimal policy, including the sample collection requirements and power
allocation, to minimize the AoI and energy consumption of the MIMO-NOMA IoT
system. To the best of our knowledge, no prior work has minimized the AoI in
the MIMO-NOMA IoT system, which motivates this work.
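As a generic illustration of how AoI evolves in a slotted system, the sketch below uses one common discrete-time convention (the age grows by one slot and resets when a fresh sample is delivered); it is not necessarily the exact AoI model formulated later in this paper, and the delivery-delay parameter is an assumption.

```python
def update_aoi(aoi, delivered_fresh_sample, delivery_delay_slots=1):
    """One slot of AoI bookkeeping.
    aoi: dict mapping device id -> current AoI in slots;
    delivered_fresh_sample: dict mapping device id -> True if a new sample arrived this slot."""
    for dev in aoi:
        if delivered_fresh_sample.get(dev, False):
            aoi[dev] = delivery_delay_slots   # age of the just-received sample
        else:
            aoi[dev] += 1                     # data at the BS grows one slot older
    return aoi

# Example: device 1 delivers a fresh sample, device 2 does not
print(update_aoi({1: 4, 2: 7}, {1: True, 2: False}))
```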
In the MIMO-NOMA IoT system, the allocated transmission powers affect the
transmission rates in the SIC process. Moreover, there is inevitable
stochastic noise in the MIMO-NOMA channel. Deep reinforcement learning (DRL)
can learn a near-optimal policy by treating the sample collection requirements
and power allocation as the action with which it interacts with the dynamic,
stochastic MIMO-NOMA IoT system[15]. In general, a DRL algorithm is suited to
problems whose action space is either purely continuous or purely discrete.
However, the space of the sample collection requirements is discrete while the
space of the transmission power is continuous, which makes the problem
challenging for DRL. In this paper, we establish the relationship between the
sample collection requirements and the transmission power, and propose a
DRL-based optimal power allocation to minimize the AoI and energy consumption
of the MIMO-NOMA IoT system. (The source code has been released at
https://github.com/qiongwu86/MIMO-NOMA_AoI_GA.git.) The main contributions are
summarized as follows.
* 1)
We formulate the joint optimization problem to minimize the AoI and energy
consumption of the MIMO-NOMA IoT system by determining the sample selection
and power allocation.
* 2)
In the formulated optimization problem, the sample selection is discrete and
the power allocation is continuous, so it cannot be solved directly by
traditional DRL methods. We substitute the energy model and the AoI model into
the optimization problem, merge the homogeneous terms containing the sample
selection, and simplify the formulated problem so that it can be solved by
deep deterministic policy gradient (DDPG).
* 3)
We design the DRL framework including the state, action and reward function,
then adopt the DDPG algorithm to obtain the optimal power allocation to
minimize the AoI and energy consumption of the MIMO-NOMA IoT system.
* 4)
Extensive simulations are carried out to demonstrate the superiority of the
optimal power allocation by the DDPG algorithm to the power allocation by the
baseline algorithm.
The rest of this paper is organized as follows. Section II reviews the related
work. Section III introduces the system model and formulates the optimization
problem. Section IV simplifies the formulated optimization problem and
presents the near-optimal solution by DRL. We carry out simulations to
demonstrate the effectiveness of our proposed DRL method in Section VI, and
conclude this paper in Section VII.
## II Related Work
In this section, we first review the studies on AoI in IoT systems, and then
survey the state of the art on MIMO-NOMA IoT systems.
### II-A AoI in IoT
In [16], Grybosi _et al._ proposed the SIC-aided age-independent random access
(AIRA-SIC) scheme (i.e., in a slotted ALOHA fashion) for IoT systems, where the
receiver performs SIC to resolve collisions among various devices. In [17],
Wang _et al._ focused on the problem of minimizing the weighted sum of AoI
cost and energy consumption in IoT systems by adjusting the sampling policy, and
proposed a distributed DRL algorithm based on the local observation of each
device. In [18], Elmagid _et al._ aimed to minimize the AoI at the BS and the
energy consumption of status generation at the IoT devices, and formulated an
optimization problem based on the Markov decision process (MDP), then proved
the monotonicity property of the value function associated with the MDP. In
[19], Li _et al._ designed a resource block (RB) allocation, modulation
selection and coding selection scheme for each IoT device based on its channel
condition to minimize the long-term AoI of the IoT system. In [20], Hatami _et
al._ employed reinforcement learning to minimize the average AoI for users
in an IoT system consisting of users, energy harvesting sensors, and a cache-
enabled edge node. In [21], Sun _et al._ aimed to minimize the weighted sum of
the expected average AoI of all IoT devices, propulsion energy of unmanned
aerial vehicle (UAV) and transmission energy of IoT devices by determining the
UAV flight speed, UAV placement and channel resource allocation in the UAV-
assisted IoT system. In [22], Hu _et al._ considered an IoT system where the
UAVs take off from a data center to deliver energy and collect data from
sensor nodes, and then fly back to the data center. They minimized the AoI of
the collected data by dynamic programming (DP) and ant colony (AC) heuristic
algorithms. In [23], Emara _et al._ developed a spatiotemporal framework to
evaluate the peak AoI (PAoI) of the IoT system, and compared the PAoI under
the time-triggered traffic with event-triggered traffic. In [24], Lyu _et al._
considered a marine IoT scenario, where AoI is utilized to represent the
impact of the packet loss and transmission delay. They investigated the
relationship between AoI and state estimation error, and minimized the state
estimation error by decomposition method. In [25], Wang _et al._ investigated
the impact of AoI on the system cost which consists of control cost and
communication energy consumption of the industrial-internet-of-things (IIoT)
system. They proved that the upper bound of cost is affected by AoI. In [26],
Hao _et al._ maximized the sum of the energy efficiency of the IoT devices
under the constraints of AoI by optimizing transmission power and channel
allocation in a cognitive radio based IoT system. However, none of these works
have taken the MIMO-NOMA channel into account.
### II-B MIMO-NOMA IoT System
In [27], Yilmaz _et al._ proposed a user selection algorithm for MIMO-NOMA IoT
system to improve the sum data rate, and adopted the physical layer network
coding (PNC) to improve the spectral efficiency. In [28], Shi _et al._
considered the downlink of the MIMO-NOMA IoT networks and studied the outage
probability and goodput of the system with the Kronecker model. In [29], Wang
_et al._ studied a resource allocation problem consisting of the beamforming
strategy and power allocation in the MIMO-NOMA IoT system, where the beamforming
optimization is solved by the zero-forcing method, after which the power
allocation is solved by convex optimization. In [30], Han _et al._
proposed a novel millimeter wave (mmWave) positioning MIMO-NOMA IoT system and
introduced the position error bound (PEB) as a novel performance evaluation
metric. In [31], Zhang _et al._ considered the massive MIMO and NOMA to study
the performance of the IoT system, and calculated the closed form function for
spectral and energy efficiencies. In [32], Chinnadurai _et al._ considered the
heterogeneous cellular network and formulated a problem to maximize the energy
efficiency of the MIMO-NOMA IoT system, where the non-convex problem was
solved based on the branch and reduced bound (BRB) approach. In [33], Gao _et
al._ considered the mmWave massive MIMO and NOMA IoT system to maximize the
weighted sum transmission rate through optimizing the power allocation, then
solved the problem by convex method. In [34], Feng _et al._ considered an UAV
aided MIMO-NOMA IoT system and regarded an UAV as BS. They formulated the
problem to maximize the sum transmission rate of the downlink through
optimizing the placement of UAVs, beam pattern and transmission power, then
solved the problem by convex methods. In [35], Ding _et al._ designed a novel
MIMO-NOMA system consisting of two different users, where user one should be
served with a strict quality-of-service (QoS) requirement, and user two accesses
the channel opportunistically in a non-orthogonal way, so that the requirement that
small packets of user one in the IoT system should be transmitted in time can
be met. However, these works have not considered the AoI of the MIMO-NOMA IoT
system.
As mentioned above, there is no work considering the AoI in the MIMO-NOMA IoT
system, which motivates us to conduct this work.
## III System Model And Problem Formulation
Figure 1: MIMO-NOMA IoT system.
TABLE I: Summary of notations.
Notation | Description
---|---
$B$ | Population size of genetic algorithm.
$C_{s}$ | The energy consumption for sample fresh information and generate upload packet.
$c_{m,t}$ | Complex data symbol with 1 as variance.
$d_{m}$ | The communication distance between device $m$ and BS.
$E$ | Number of episodes.
$F_{c}/F_{m}$ | Probabilities of crossover / mutation for offspring in the genetic algorithm.
$G_{P}/U_{P}$ | Complexity of the primary networks for computing gradients / updating parameters.
$\bm{h}_{m}(t)$ | The channel vector between device $m$ and BS in slot $t$.
$i$ | Index of transition tuples in mini-batch.
$I$ | The number of transition tuples in a mini-batch.
$\mathcal{I}_{m}$ | The set of devices of which the received power is weaker than device $m$.
$J(\mu)$ | The long-term discounted reward under policy $\mu$.
$K$ | The number of antennas equipped in BS.
$l_{m,t}$ | The transmission delay of device $m$ in slot $t$.
$L$ | Loss function.
$\bm{n}(t)$ | Additive white Gaussian noise.
$N_{GA}$ | Evolution times of genetic algorithm.
$m/M/\mathcal{M}$ | Index / number / set of devices.
$\bm{o}_{t}/\bm{o}_{m,t}$ | State in slot $t$ of all devices / device $m$.
$\bm{p}_{t}$ / $p_{m,t}$ | Transmission power of all devices / device $m$.
$P_{m,max}$ | Maximum transmission power of device $m$.
$Q(\bm{o}_{t},\bm{p}_{t})$ | Action-value function under $\bm{o}_{t}$ and $\bm{p}_{t}$.
$Q$ | Packet size.
$r_{t}$ | Reward function.
$\bm{s}_{t}$ / $s_{m,t}$ | Indicator of sample or not for all devices / device $m$.
$S_{d}$ | Complexity of calculating sample decisions based on power allocation.
$t$ / $\mathcal{T}$ | Index / set of slot.
$\mathcal{U}$ | The set of undecoded received power of BS.
$\bm{u}_{t}$ / $u_{m,t}$ | Indicator of transmission success for all devices / device $m$.
$W$ | Bandwidth of system.
$\alpha_{a}$ / $\alpha_{c}$ | Learning rate of actor network / critic network.
$\beta$ | Discounting factor.
$\gamma_{a}$ / $\gamma_{e}$ | Weighting factors of the reward function.
$\Gamma_{m,t}$ | Received power of BS for device $m$ in slot $t$.
$\Delta_{t}$ | Exploration noise.
$\varepsilon_{m,t}$ | The energy consumed by device $m$ in slot $t$.
$\overline{\varepsilon}$ | The average sum energy consumption in slot $t$.
$\zeta$ / $\zeta^{\prime}$ | Parameters of critic-network / target critic-network.
$\theta$ / $\theta^{\prime}$ / $\theta^{*}$ | Parameters of actor-network / target actor-network / optimal policy.
$\kappa$ | The constant for the update of target networks.
$\mu_{\theta}$ | Policy approximated by actor-network with $\theta$.
$\pi_{m,t}$ | Transmission rate of device $m$ in slot $t$.
$\rho_{m}$ | Normalized channel correlation coefficient.
$\sigma_{R}^{2}$ | Variance for the noise of received signal.
$\phi_{m,t}$ / $\Phi_{m,t}$ | AoI of device $m$ in slot $t$ on device / BS.
$\overline{\Phi}$ | The average sum AoI.
### III-A Scenario description
The network scenario is illustrated in Fig. 1. We consider a MIMO-NOMA IoT
system consisting of a BS with $K$ antennas and a set
$\mathcal{M}=\\{1,\cdots,m,\cdots,M\\}$ of the single-antenna IoT devices.
Here, each IoT device is embedded with a sensor and a transmitter. The time
duration is divided into $T$ slots, each of which lasts $\tau$. The set of slots
is denoted as $\mathcal{T}=\\{1,\cdots,t,\cdots,T\\}$. At the beginning of
each slot $t$, the BS determines the policy (including the sample collection
requirements of each device $m$, denoted as $s_{m,t}$, and transmission power
of each device $m$, denoted as $p_{m,t}$) and then sends $s_{m,t}$ and
$p_{m,t}$ to each device $m$. If $s_{m,t}=1$, device $m$ will sample data in
slot $t$, and transmit the data to the BS with transmission power $p_{m,t}$
over the MIMO-NOMA channel. Otherwise, it does not sample data in slot $t$.
The key notations are listed in Table I. Next we will construct the MIMO-NOMA
channel model.
### III-B MIMO-NOMA channel model
Let $c_{m,t}$ be the data symbol of device $m$ in slot $t$ with $1$ as
variance, thus the signal of the data transmitted by device $m$ is
$\sqrt{p_{m,t}}c_{m,t}$. Let $\bm{h}_{m}(t)\in\mathbb{C}^{K\times 1}$ be the
channel vector between the BS and device $m$ in slot $t$, thus the
corresponding signal received by BS is $\bm{h}_{m}(t)\sqrt{p_{m,t}}c_{m,t}$.
Note that the signals of all devices are superimposed at the BS and $c_{m,t}$ is
unknown to the BS, so the BS needs to adopt the SIC technology to decode the
signal transmitted by each device. The overall received signal is expressed
as
$\begin{aligned}
\bm{y}(t)=\sum_{m\in\mathcal{M}}\bm{h}_{m}(t)\sqrt{p_{m,t}}c_{m,t}+\bm{n}(t)\\\
p_{m,t}\in{[0,P_{m,max}]},\forall m\in\mathcal{M},\forall
t\in\mathcal{T}\end{aligned}\quad,$ (1)
where $\bm{n}(t)\in\mathbb{C}^{K\times 1}$ is the complex additive white
Gaussian noise (AWGN) with variance $\sigma_{R}^{2}$ and $P_{m,max}$ is the
maximum transmission power of device $m$.
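For concreteness, the following is a minimal NumPy sketch of the superimposed received signal in Eq. (1); the function name, the randomly drawn channels and symbols, and the default noise level are illustrative assumptions rather than part of the system model.

```python
import numpy as np

def received_signal(h, p, c, sigma_R=1e-3):
    """Superimposed uplink signal at the BS, Eq. (1): y(t) = sum_m h_m sqrt(p_m) c_m + n(t).

    h : (M, K) complex array, channel vector of each device.
    p : (M,) array, transmission powers within [0, P_{m,max}].
    c : (M,) complex array, unit-variance data symbols.
    """
    K = h.shape[1]
    # complex AWGN n(t) with variance sigma_R^2
    n = sigma_R * (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)
    return (h * (np.sqrt(p) * c)[:, None]).sum(axis=0) + n
```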
Similar to [36], it is assumed that $\bm{h}_{m}(t)$ is known by the BS. In
addition, the BS also knows $p_{m,t}$, thus the BS can calculate the power of
the received signal transmitted by device $m$ as
$\Gamma_{m,t}=p_{m,t}||\bm{h}_{m}(t)||^{2}.$ (2)
Then, the BS decodes the received signal transmitted by each device
sequentially with SIC mode of NOMA. For one iteration, the BS decodes the
signal with the highest received power from $\bm{y}(t)$ while considering the
other signals as interference, then removes the decoded signal from
$\bm{y}(t)$ and starts the next iteration until all signals are decoded.
For instance, suppose that in an iteration the received power of the signal
transmitted by device $m$ is the highest among the signals that have not yet
been decoded. Denote
$\mathcal{I}_{m}=\\{k\in\mathcal{M}\mid\Gamma_{k,t}<\Gamma_{m,t}\\}$ as the
set of devices whose received powers are less than that of device $m$. Thus
the signal transmitted by each device $k\in\mathcal{I}_{m}$ is deemed as
interference. In this case, $\bm{y}(t)$ is rewritten as
$\displaystyle\bm{y}(t)=\bm{h}_{m}(t)\sqrt{p_{m,t}}c_{m,t}+\sum_{k\in\mathcal{I}_{m}}\bm{h}_{k}(t)\sqrt{p_{k,t}}c_{k,t}+\bm{n}(t),$
(3)
where $\sum_{k\in\mathcal{I}_{m}}\bm{h}_{k}(t)\sqrt{p_{k,t}}c_{k,t}$ indicates
the interference, thus the signal-to-interference-plus-noise ratio (SINR) of
device $m$ is calculated as
$\begin{split}\gamma_{m,t}&=\frac{p_{m,t}||\bm{h}_{m}(t)||^{2}}{\sum\limits_{k\in\mathcal{I}_{m}}p_{k,t}||\bm{h}_{k}(t)||^{2}+\sigma_{R}^{2}}\\\
&=\frac{\Gamma_{m,t}}{\sum\limits_{k\in\mathcal{I}_{m}}\Gamma_{k,t}+\sigma_{R}^{2}}\\\
\end{split}\quad.$ (4)
The transmission rate of device $m$ in slot $t$ can be derived according to
Shannon capacity formula, i.e.,
$\pi_{m,t}=W\log_{2}(1+\gamma_{m,t}),$ (5)
where $W$ is the bandwidth of the MIMO-NOMA channel.
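As an illustration of the SIC decoding order and the resulting rates, a short NumPy sketch of Eqs. (2)-(5) follows; the array shapes, the randomly drawn Rayleigh channels and the default bandwidth and noise values are assumptions made for the example only.

```python
import numpy as np

def rates_with_sic(h, p, sigma2=1e-9, W=18e3):
    """Per-device transmission rates under NOMA with SIC, following Eqs. (2)-(5).

    h : (M, K) complex array, channel vectors.
    p : (M,) array, transmission powers.
    """
    gamma_recv = p * np.linalg.norm(h, axis=1) ** 2           # received powers, Eq. (2)
    rates = np.empty_like(gamma_recv)
    for m in range(len(p)):
        # devices with weaker received power than device m act as interference
        interference = gamma_recv[gamma_recv < gamma_recv[m]].sum()
        sinr = gamma_recv[m] / (interference + sigma2)        # Eq. (4)
        rates[m] = W * np.log2(1.0 + sinr)                    # Eq. (5)
    return rates

# illustrative usage with random Rayleigh channels
M, K = 4, 4
rng = np.random.default_rng(0)
h = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
p = rng.uniform(0.0, 2.0, size=M)                             # powers within [0, P_max]
print(rates_with_sic(h, p))
```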
### III-C AoI model
Denote $\phi_{m,t}$ as the AoI at device $m$ in slot $t$, which can be
calculated as
$\phi_{m,t}=\left\\{\begin{array}[]{l}{0,\qquad\qquad\qquad s_{m,t}=1}\\\
\phi_{m,t-1}+\tau,\qquad otherwise\end{array}.\right.\\\ $ (6)
According to Eq. (6), at the beginning of slot $t$, if device $m$ samples
data, i.e., $s_{m,t}=1$, $\phi_{m,t}$ will be reset to $0$. Otherwise,
$\phi_{m,t}$ will be increased by $\tau$.
Device $m$ will transmit data with transmission power $p_{m,t}$ after sampling
data. If the data volume transmitted within a slot is not smaller than the packet
size $Q$, i.e., $\pi_{m,t}\cdot\tau\geq Q$, device $m$ transmits the data
successfully; otherwise, the transmission fails. Denoting $u_{m,t}=1$ as a
successful transmission by device $m$ in slot $t$ and $u_{m,t}=0$ as an
unsuccessful transmission, we have
$u_{m,t}=\left\\{\begin{array}[]{l}{1,\qquad\pi_{m,t}\cdot\tau\geq Q}\\\
{0,\qquad otherwise}\end{array}.\right.\\\ $ (7)
If a transmission from device $m$ is successful, the AoI at the BS equals the
aggregation of AoI at device $m$ and the transmission delay. Otherwise, the
AoI at the BS is increased by a slot, thus we have
$\Phi_{m,t}=\left\\{\begin{array}[]{l}{\phi_{m,t}+l_{m,t},\qquad u_{m,t}=1}\\\
{\Phi_{m,t-1}+\tau,\qquad otherwise}\end{array},\right.\\\ $ (8)
where $l_{m,t}$ is the transmission delay of device $m$ in slot $t$, which is
calculated as
$l_{m,t}=\frac{Q}{\pi_{m,t}}.$ (9)
The AoI of the MIMO-NOMA IoT system is measured by averaging the AoI of all
devices at BS, i.e.,
$\overline{\Phi}=\frac{1}{T}\sum_{t\in\mathcal{T}}\sum_{m\in\mathcal{M}}\Phi_{m,t}.$
(10)
### III-D Energy consumption model
Since each device consumes energy in data sampling and transmission, the
energy consumption of device $m$ in slot $t$ can be calculated as
$\varepsilon_{m,t}=s_{m,t}C_{s}+p_{m,t}l_{m,t},$ (11)
where $C_{s}$ is the energy consumption for data sampling[17], and
$p_{m,t}l_{m,t}$ is the energy consumption for transmission.
The BS has a stable power supply, hence its energy consumption is not a concern
and is not taken into account in the system. Hence the
energy consumption of the MIMO-NOMA IoT system is measured by averaging the
energy consumption of all devices, i.e.,
$\overline{\varepsilon}=\frac{1}{T}\sum_{t\in\mathcal{T}}\sum_{m\in\mathcal{M}}\varepsilon_{m,t}.$
(12)
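To make the per-slot bookkeeping of Eqs. (6)-(12) concrete, a minimal sketch for a single device is given below; the function name and the default parameter values are illustrative, and the packet size $Q$ must be supplied by the caller.

```python
def step_device(phi_prev, Phi_prev, s, p, rate, Q, tau=0.1, C_s=0.5):
    """One-slot AoI and energy update of a device, following Eqs. (6)-(9) and (11).

    phi_prev : AoI at the device in the previous slot.
    Phi_prev : AoI at the BS in the previous slot.
    s        : sample decision s_{m,t} in {0, 1}.
    p        : transmission power p_{m,t}.
    rate     : transmission rate pi_{m,t} from Eq. (5) (assumed positive).
    """
    phi = 0.0 if s == 1 else phi_prev + tau          # Eq. (6): sampling resets the device AoI
    u = 1 if rate * tau >= Q else 0                  # Eq. (7): transmission success indicator
    l = Q / rate                                     # Eq. (9): transmission delay
    Phi = phi + l if u == 1 else Phi_prev + tau      # Eq. (8): AoI at the BS
    eps = s * C_s + p * l                            # Eq. (11): sampling + transmission energy
    return phi, Phi, eps

# The system metrics of Eqs. (10) and (12) are the averages of Phi and eps
# over all devices and all slots.
```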
### III-E Problem formulation
In this work, our target is to minimize the AoI and energy consumption of the
MIMO-NOMA IoT system, which is impacted by $p_{m,t}$ and $s_{m,t}$. Therefore
the optimization problem is formulated as
$\displaystyle\min_{\bm{s}_{t},\bm{p}_{t}}\left[\gamma_{a}\overline{\Phi}+\gamma_{e}\overline{\varepsilon}\right]$
(13) $\displaystyle s.t.\qquad$ $\displaystyle p_{m,t}\in[0,P_{m,max}],\forall
m\in\mathcal{M},\forall t\in\mathcal{T},$ (13a) $\displaystyle
s_{m,t}\in\\{0,1\\},\forall m\in\mathcal{M},\forall t\in\mathcal{T},$ (13b)
where $\bm{s}_{t}=\\{s_{1,t},\cdots,s_{m,t},\cdots,s_{M,t}\\}$ and
$\bm{p}_{t}=\\{p_{1,t},\cdots,p_{m,t},\cdots,p_{M,t}\\}$, $\gamma_{a}$ and
$\gamma_{e}$ are the non-negative weighted factors. Next we will present a
solution to the problem based on DRL.
## IV DRL Method for Optimization of Power Allocation
In this section, we solve the optimization problem based on the DRL. First, we
design the DRL framework including the state, action and reward function,
where the relationship between the sample collection requirements and
transmission power is derived to facilitate DRL algorithm to solve the
problem. Then we obtain a near optimal power allocation based on the DRL
algorithm.
### IV-A DRL framework
The DRL framework consists of three significant elements: state, action and
reward function. For each slot, the agent observes the current state and takes
the current action according to policy $\mu$, where policy $\mu$ yields the
action based on the state. Then the agent calculates its corresponding reward
under the current state and action according to the reward function, while the
current state in the environment transits to the next state. Next, we will
design agent, state, action and reward function, respectively.
* •
Agent: In each slot, the BS determines the transmission power and sample
collection requirements of each device based on its observation, thus we
consider the BS as the agent.
* •
State: In the system model, the state $\bm{o}_{t}$ observed by the BS in slot
$t$ is defined as
$\bm{o}_{t}=[\bm{o}_{1,t},\cdots,\bm{o}_{m,t},\cdots,\bm{o}_{M,t}],$ (14)
where $\bm{o}_{m,t}$ represents the observation of device $m$, which is
designed as
$\bm{o}_{m,t}=[u_{m,t-1},\gamma_{m,t-1},\Phi_{m,t-1}].$ (15)
Here, $u_{m,t-1}$, $\gamma_{m,t-1}$ and $\Phi_{m,t-1}$ can be calculated by
the BS from the historical data in slot $t-1$.
* •
Action: According to the problem formulated in Eq. (13), the action in slot
$t$ is set as
$\bm{a}_{t}=[\bm{s}_{t},\bm{p}_{t}].$ (16)
$\displaystyle\gamma_{a}\overline{\Phi}+\gamma_{e}\overline{\varepsilon}$ (17)
$\displaystyle=$
$\displaystyle\frac{1}{T}\sum_{t\in\mathcal{T}}\sum_{m\in\mathcal{M}}\Big{[}\gamma_{a}\Phi_{m,t}+\gamma_{e}\varepsilon_{m,t}\Big{]}$
(17a) $\displaystyle=$
$\displaystyle\frac{1}{T}\sum_{t\in\mathcal{T}}\sum_{m\in\mathcal{M}}\Big{[}\gamma_{a}\big{[}(1-u_{m,t})(\Phi_{m,t-1}+\tau)+u_{m,t}(\phi_{m,t}(s_{m,t})+l_{m,t})\big{]}+\gamma_{e}(s_{m,t}C_{s}+p_{m,t}l_{m,t})\Big{]}$
(17b) $\displaystyle=$
$\displaystyle\frac{1}{T}\sum_{t\in\mathcal{T}}\sum_{m\in\mathcal{M}}\Big{[}[\gamma_{a}u_{m,t}\phi_{m,t}(s_{m,t})+\gamma_{e}s_{m,t}C_{s}]+\gamma_{a}[(1-u_{m,t})(\Phi_{m,t-1}+\tau)+u_{m,t}l_{m,t}]+\gamma_{e}p_{m,t}l_{m,t}\Big{]}$
(17c)
$\displaystyle\gamma_{a}\Phi_{m,t}(s_{m,t},p_{m,t})+\gamma_{e}\varepsilon_{m,t}(s_{m,t},p_{m,t})$
(18) $\displaystyle=$
$\displaystyle\gamma_{a}u_{m,t}(1-s_{m,t})(\phi_{m,t-1}+\tau)+\gamma_{e}s_{m,t}C_{s}+\gamma_{a}[(1-u_{m,t})(\Phi_{m,t-1}+\tau)+u_{m,t}l_{m,t}]+\gamma_{e}p_{m,t}l_{m,t}$
(18a) $\displaystyle=$ $\displaystyle
s_{m,t}[\gamma_{e}C_{s}-\gamma_{a}u_{m,t}(\phi_{m,t-1}+\tau)]+\gamma_{a}[u_{m,t}(\phi_{m,t-1}+\tau)+(1-u_{m,t})(\Phi_{m,t-1}+\tau)+u_{m,t}l_{m,t}]+\gamma_{e}p_{m,t}l_{m,t}$
(18b) $\displaystyle=$ $\displaystyle s_{m,t}C_{m,t,1}+C_{m,t,2}$ (18c)
The two traditional DRL algorithms, DDPG and deep Q-network (DQN), are suitable
for continuous and discrete action spaces, respectively. However,
$s_{m,t}\in\\{0,1\\}$ and $p_{m,t}\in[0,P_{m,max}]$ in Eq. (16), thus the
space of $\bm{s}_{t}$ is discrete while the space of $\bm{p}_{t}$ is
continuous. Hence, the optimization problem can be solved directly by neither
DQN nor DDPG. Next, we investigate the relationship between $p_{m,t}$ and
$s_{m,t}$ to handle this dilemma.
Substituting Eqs. (10) and (12) into Eq. (13), the optimization objective is
rewritten as Eq. (17a). Then substituting Eqs. (8) and (11) into Eq. (17a), we
can obtain Eq. (17b), where $\phi_{m,t}$ is denoted as $\phi_{m,t}(s_{m,t})$
to indicate that it is the function of $s_{m,t}$. Then reorganizing Eq. (17b),
we have Eq. (17c). The first term of Eq. (17c) is related to $s_{m,t}$; next
we rewrite the first term of Eq. (17c) as Eq. (18) to investigate the
relationship between $s_{m,t}$ and $p_{m,t}$. Substituting Eq. (6) into Eq.
(18), we have Eq. (18a). Then, merging the homogeneous terms containing $s_{m,t}$
and $\gamma_{a}$ in Eq. (18a), respectively, we obtain Eq. (18b). Let
$C_{m,t,1}=\gamma_{e}C_{s}-\gamma_{a}u_{m,t}(\phi_{m,t-1}+\tau)$ and
$C_{m,t,2}=\gamma_{a}[u_{m,t}(\phi_{m,t-1}+\tau)+(1-u_{m,t})(\Phi_{m,t-1}+\tau)+u_{m,t}l_{m,t}]+\gamma_{e}p_{m,t}l_{m,t}$,
thus Eq. (18b) is rewritten as Eq. (18c), where $C_{m,t,1}$ is the coefficient
of the terms containing $s_{m,t}$ in Eq. (18b), and $C_{m,t,2}$
contains all terms without $s_{m,t}$ in Eq. (18b).
In $C_{m,t,1}$ and $C_{m,t,2}$, $\phi_{m,t-1}$ can be calculated by the BS
based on the historical data in slot $t-1$[17] and $\Phi_{m,t-1}$ is known for
the BS. In addition, BS can calculate $\gamma_{m,t}$ according to Eqs.
(4)-(5), thus $u_{m,t}$ and $l_{m,t}$ can be further calculated according to
Eqs. (7) and (9) given ${p}_{m,t}$, which means that $C_{m,t,1}$ and
$C_{m,t,2}$ depend on ${p}_{m,t}$ and are independent of $s_{m,t}$. Hence, the
optimal sample collection requirement to minimize
$s_{m,t}C_{m,t,1}+C_{m,t,2}$, denoted as $s^{*}_{m,t}$, is achieved when the
term $s_{m,t}C_{m,t,1}$ is minimized; thus we have
$s^{*}_{m,t}=\left\\{\begin{array}[]{l}{1,\qquad C_{m,t,1}<0}\\\ {0,\qquad
otherwise}\end{array}.\right.$ (19)
Hence, the optimal sample collection requirements can be determined according
to Eq. (19) when ${p}_{m,t}$ is given and Eq. (13) can be rewritten as
$\displaystyle\min_{\bm{p}_{t}}\left[\gamma_{a}\overline{\Phi}+\gamma_{e}\overline{\varepsilon}\right]$
(20) $\displaystyle s.t.\qquad$ $\displaystyle p_{m,t}\in[0,P_{m,max}],\forall
m\in\mathcal{M},\forall t\in\mathcal{T},$ (20a) $\displaystyle
s^{*}_{m,t}=\left\\{\begin{array}[]{l}{1,\qquad C_{m,t,1}<0}\\\ {0,\qquad
otherwise}\end{array}.\right.$ (20b)
According to Eq. (20), the action $\bm{a}_{t}$ is only reflected by
$\bm{p}_{t}$. Therefore, DDPG, which is suitable for continuous action spaces,
can be employed to solve the optimization problem in Eq. (20); a short code
sketch of this decision rule and the reward is given after the list below.
* •
Reward function: The BS aims to minimize the AoI and energy consumption of the
MIMO-NOMA IoT system, and the target of the DDPG algorithm is to maximize the
reward function. Therefore the reward function in slot $t$ can be defined as
${r}_{t}(\bm{o}_{t},\bm{p}_{t})=-\sum_{m\in\mathcal{M}}[\gamma_{a}\Phi_{m,t}+\gamma_{e}\varepsilon_{m,t}].$
(23)
Furthermore, the expected long-term discounted reward of the system can be
defined as
$J(\mu)=\mathbb{E}\left[\sum_{t=1}^{T}\beta^{t-1}r_{t}(\bm{o}_{t},\bm{p}_{t})|_{\bm{p}_{t}=\mu(\bm{o}_{t})}\right],$
(24)
where $\beta\in[0,1]$ is the discounting factor and $\bm{p}_{t}=\mu(\bm{o}_{t})$
indicates the action under the state $\bm{o}_{t}$, which is derived through
policy $\mu$. Thus, our objective in this paper becomes finding the optimal
policy to maximize $J(\mu)$.
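As a minimal sketch of how the discrete decision folds into the continuous action, the code below evaluates the coefficient $C_{m,t,1}$ of Eq. (18c), applies the decision rule of Eq. (19), and computes the slot reward of Eq. (23) and the discounted return of Eq. (24); the default weights follow Table II and all function names are our own.

```python
import numpy as np

def optimal_sample_decision(u, phi_prev, tau=0.1, C_s=0.5, gamma_a=0.5, gamma_e=0.5):
    """Optimal sample decision s*_{m,t} for a fixed power p_{m,t}, Eqs. (18c) and (19)."""
    C1 = gamma_e * C_s - gamma_a * u * (phi_prev + tau)   # coefficient of s_{m,t} in Eq. (18b)
    return 1 if C1 < 0 else 0

def slot_reward(Phi, eps, gamma_a=0.5, gamma_e=0.5):
    """Reward of Eq. (23): negative weighted sum of AoI and energy over all devices."""
    return -np.sum(gamma_a * np.asarray(Phi) + gamma_e * np.asarray(eps))

def discounted_return(rewards, beta=0.99):
    """Long-term discounted reward of Eq. (24) for one episode (slots t = 1, ..., T)."""
    return sum(beta ** t * r for t, r in enumerate(rewards))  # beta^(t-1) with 1-based slots
```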
### IV-B Optimizing power allocation based on DDPG
Figure 2: Flow diagram of DDPG
In this subsection, we will introduce the architecture of the DDPG algorithm
including the primary networks (an actor network and a critic network) and the
target networks (a target actor network and a target critic network) [37], where
the actor network is adopted for policy approximation and improvement, the
critic network is adopted for policy evaluation, and the target networks are
adopted to improve the stability of the algorithm. Both the primary and target
networks are deep neural networks (DNNs). Denote $\theta$, $\zeta$, $\theta^{\prime}$ and
$\zeta^{\prime}$ as parameters of the actor network, critic network, target
actor network and target critic network, respectively, $\mu_{\theta}$ as the
policy approximated by actor-network, and $\Delta_{t}$ as the noise added on
action for exploration in slot $t$. Next, we will present the training stage
of the DDPG algorithm in detail.
The parameters $\theta$ and $\zeta$ are first initialized randomly,
$\theta^{\prime}$ and $\zeta^{\prime}$ are set as $\theta$ and $\zeta$,
respectively. In addition, a replay experience buffer $\mathcal{R}$ is built
up to cache the state transitions (lines 1-3).
Next, the algorithm loops for $E$ episodes. At the beginning of each episode,
the simulation parameters of the system model are reset as $u_{m,0}=0$,
$p_{m,0}=1$ and $\Phi_{m,0}=0$ for each device $m$, $\bm{h}_{m}(0)$ is
initialized randomly. Given $p_{m,0}$ and $\bm{h}_{m}(0)$, the SINR
$\gamma_{m,0}$ is calculated according to Eqs. (2)-(4), then the state of each
device $m$, i.e., $\bm{o}_{m,1}=[u_{m,0},\gamma_{m,0},\Phi_{m,0}]$ is observed
by the agent (lines 4-6).
Afterwards, the algorithm iterates from slot $1$ to $T$. For slot $t$, the
actor network yields the output $\mu_{\theta}(\bm{o}_{t}|\theta)$ under the
observed state $\bm{o}_{t}$ and policy $\mu$ with parameters $\theta$. Then a
noise $\Delta_{t}$ is generated and the agent calculates the transmission
powers of all devices according to
$\bm{p}_{t}=\mu_{\theta}(\bm{o}_{t}|\theta)+\Delta_{t}$. After that the agent
calculates $u_{m,t}$, $s_{m,t}$ and $\gamma_{m,t}$ of each device $m$
according to Eqs. (7), (19) and (4), respectively. Afterwards, the agent
calculates $\Phi_{m,t}$ and $\varepsilon_{m,t}$ according to Eqs. (8) and
(11), respectively, and thus obtains the next state $\bm{o}_{t+1}$, then
calculates $r_{t}$ according to Eq. (23). The tuple
$[\bm{o}_{t},\bm{p}_{t},r_{t},\bm{o}_{t+1}]$ is stored in the replay buffer. Then
the agent inputs $\bm{o}_{t+1}$ into the actor network and starts the next
iteration if the number of tuples in the replay buffer is not larger than $I$
(lines 7-10).
If the number of tuples in the replay buffer exceeds $I$, the parameters
$\theta$, $\theta^{\prime}$, $\zeta$, and $\zeta^{\prime}$ will be updated to
maximize $J(\mu_{\theta})$. Here, $\theta$ is updated toward the direction of
the gradient $\nabla_{\theta}J(\mu_{\theta})$. Specifically, the agent
uniformly retrieves a mini-batch consisting of $I$ tuples from the replay
buffer. For each tuple $i$, i.e.,
$(\bm{o}_{i},\bm{p}_{i},r_{i},\bm{o}_{i}^{\prime})$
$(i\in\\{1,2,\cdots,I\\})$, the agent inputs $\bm{o^{\prime}}_{i}$ into the
target actor network and outputs
$\bm{p}_{i}^{\prime}=\mu_{\theta^{\prime}}({\bm{o^{\prime}}_{i}}|\theta^{\prime})$,
then inputs $\bm{o}_{i}^{\prime}$ and $\bm{p}_{i}^{\prime}$ into the target
critic network and outputs
$Q^{\zeta^{\prime}}({\bm{o}_{i}^{\prime}},\bm{p}_{i}^{\prime})$, then
calculates the target value as
$y_{i}=r_{i}+\beta
Q^{\zeta^{\prime}}({\bm{o}_{i}^{\prime}},\bm{p}_{i}^{\prime})|_{\bm{p}_{i}^{\prime}=\mu_{\theta^{\prime}}({\bm{o^{\prime}}_{i}}|\theta^{\prime})}.$
(25)
With $\bm{o}_{i}$ and $\bm{p}_{i}$ as the input and
$Q^{\zeta}(\bm{o}_{i},\bm{p}_{i})$ as the output of the critic network, the loss
function can be expressed as
$L(\zeta)=\frac{1}{I}\sum_{i=1}^{I}\left[y_{i}-Q^{\zeta}(\bm{o}_{i},\bm{p}_{i})\right]^{2}.$
(26)
Then the critic network is updated by gradient descent with the
gradient of the loss function $\nabla_{\zeta}L(\zeta)$ [38] (lines 11-13), i.e.,
$\zeta\leftarrow\zeta-\alpha_{c}\nabla_{\zeta}L(\zeta),$ (27)
where $\alpha_{c}$ is the learning rate of the critic network.
After that, the agent calculates the gradient $\nabla_{\theta}J(\mu_{\theta})$
as [39]
$\begin{split}&\nabla_{\theta}J(\mu_{\theta})\\\
&\approx\frac{1}{I}\sum_{i=1}^{I}\nabla_{\theta}Q^{\zeta}(\bm{o}_{i},\bm{p}_{\mu})|_{\bm{p}_{\mu}=\mu_{\theta}(\bm{o}_{i}|\theta)}\\\
&=\frac{1}{I}\sum_{i=1}^{I}\nabla_{\theta}\mu_{\theta}(\bm{o}_{i}|\theta)\cdot\nabla_{\bm{p}_{\mu}}Q^{\zeta}(\bm{o}_{i},\bm{p}_{\mu})|_{\bm{p}_{\mu}=\mu_{\theta}(\bm{o}_{i}|\theta)}\end{split},$
(28)
where chain rule is applied to derive the gradient of
$Q^{\zeta}(\bm{o}_{i},\bm{p}_{\mu})$ with respect to $\theta$ [39]. Given
$\nabla_{\theta}J(\mu_{\theta})$, the actor network can be updated by gradient
ascent to maximize $J(\mu_{\theta})$, i.e.,
$\theta\leftarrow\theta+\alpha_{a}\nabla_{\theta}J(\mu_{\theta}),$ (29)
where $\alpha_{a}$ is the learning rate of actor network.
After the parameters of the primary networks are updated, the parameters of
the target networks are updated based on the parameters of primary networks,
i.e.,
$\begin{split}\zeta^{\prime}&\leftarrow\kappa\zeta+(1-\kappa)\zeta^{\prime}\\\
\theta^{\prime}&\leftarrow\kappa\theta+(1-\kappa)\theta^{{\prime}}\end{split}\quad,$
(30)
where $\kappa$ is a constant much smaller than $1$, i.e., $\kappa\ll 1$ (line 15).
Up to now, the iteration for slot $t$ is finished and the agent starts the
next iteration until the number of slots reaches $T$. Then the agent starts
the next episode. When the number of episodes reaches $E$, the training stage
finishes and outputs the near optimal policy. The pseudocode of the
training stage is described in Algorithm 1.
Input: $\gamma$, $\tau$, $\theta$, $\zeta$
Output: optimized DNNs
1 Randomly initialize the $\theta$, $\zeta$;
2 Initialize target networks by $\zeta^{\prime}\leftarrow\zeta$,
$\theta^{\prime}\leftarrow\theta$;
3 Initialize replay experience buffer $\mathcal{R}$;
4 for _episode from $1$ to $E$ _ do
5 Reset simulation parameters for the system model;
6 Receive initial observation state $\bm{o}_{1}$;
7 for _slot $t$ from $1$ to $T$ _ do
8 Generate the transmission power of all devices according to the current
policy, state and exploration noise
$\bm{p}_{t}=\mu_{\theta}(\bm{o}_{t}|\theta)+\Delta_{t}$ ;
9 Execute action $\bm{p}_{t}$, observe reward $r_{t}$ and new state
$\bm{o}_{t+1}$ from the system model;
10 Store transition tuple $(\bm{o}_{t},\bm{p}_{t},r_{t},\bm{o}_{t+1})$ in
$\mathcal{R}$;
11 if _number of tuples in $\mathcal{R}$ is larger than $I$ _ then
12 Randomly sample a mini-batch of $I$ transitions tuples from $\mathcal{R}$;
13 Update the critic network by minimizing the loss function in Eq. (26)
according to Eq. (27);
14 Update the actor network according to Eq. (29);
15 Update the target networks according to Eq. (30).
16
17
Algorithm 1 Training stage of the DDPG algorithm
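For readers who prefer code to pseudocode, the following is a compact PyTorch-style sketch of the update step in lines 11-15 of Algorithm 1 (Eqs. (25)-(30)); the network classes, optimizers and tensor shapes are assumptions for illustration and do not reproduce the released implementation.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, beta=0.99, kappa=0.001):
    """One DDPG update on a mini-batch of transitions (Eqs. (25)-(30))."""
    o, p, r, o_next = batch                       # tensors of shape (I, ...)

    # target value, Eq. (25)
    with torch.no_grad():
        p_next = target_actor(o_next)
        y = r + beta * target_critic(o_next, p_next)

    # critic loss, Eq. (26), and gradient descent update, Eq. (27)
    critic_loss = F.mse_loss(critic(o, p), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # deterministic policy gradient, Eqs. (28)-(29): ascend J by descending -Q
    actor_loss = -critic(o, actor(o)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # soft update of the target networks, Eq. (30)
    for tgt, src in ((target_critic, critic), (target_actor, actor)):
        for tp, sp in zip(tgt.parameters(), src.parameters()):
            tp.data.mul_(1.0 - kappa).add_(kappa * sp.data)
```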
Next, the testing stage is initialized to test the performance under the near
optimal policy. Compared with the training stage, the parameter updating
process is omitted in testing process and actions in each slot are generated
by the near optimal policy. The corresponding pseudocode is shown in Algorithm
2, where $\theta^{*}$ is the parameter to achieve the near optimal policy in
the training stage.
1 for _episode from $1$ to $E$ _ do
2 Reset simulation parameters for the system model;
3 Receive initial observation state $\bm{o}_{1}$;
4 for _slot $t$ from $1$ to $T$ _ do
5 Generate the transmission power of all devices according to the near optimal
policy and current state, $\bm{p}_{t}=\mu_{\theta^{*}}(\bm{o}_{t}|\theta^{*})$;
6 Execute the action $\bm{p}_{t}$;
7 Observe the reward $r_{t}$ and new state $\bm{o}_{t+1}$.
8
Algorithm 2 Testing stage of the DDPG algorithm
### IV-C Complexity Investigation
In this subsection, we investigate the complexity of the proposed algorithm.
Denote $G_{P}$ and $U_{P}$ as the computational complexity of computing
gradients and updating parameters of the primary networks, respectively. Since
the architecture of the target networks is the same as that of the primary
networks, the computational complexity of updating the parameters of the target
networks is the same as that of the primary networks. The complexity of the
proposed algorithm is related to the number of slots in the training process. To
be specific, during each slot the primary networks compute gradients and update
parameters, while the target networks update their parameters with the
parameters of the primary networks according to Eq. (30). Moreover, denote the
complexity of calculating sample decisions based on the power allocation as
$S_{d}$. Thus the complexity of the proposed algorithm in a slot is
$O(G_{P}+2U_{P}+S_{d})$. Note that the gradient computation and parameter
updating are not processed until the number of tuples cached in the replay
buffer exceeds $I$, and the proposed algorithm loops over $E$ episodes, each of
which contains $T$ slots. Hence, the complexity of the proposed algorithm can be
expressed as $O((E\cdot T-I)(G_{P}+2U_{P}+S_{d}))$.
## V Simulation Results and Analysis
In this section, we conduct simulations to verify the effectiveness of our
proposed power allocation policy. The experiments consist of both the training
and testing stages. The simulation scenario is described in the system model
and the simulation tool is Python 3.6. In the simulation, both the actor network
and critic network are four-layer fully connected DNNs with two hidden
layers equipped with $400$ and $300$ neurons, respectively. The Adam
optimization method [40] is adopted to update the parameters of the critic
network and actor network. The noise $\Delta_{t}$ (for exploration) follows the
Ornstein-Uhlenbeck (OU) process with decay rate $0.15$ and variance $0.004$
[41]. The small scale fading of each device is initialized by white Gaussian
noise, and the Rayleigh block fading model is employed to simulate the
stochastic small scale fading [42]. The reference channel gain of each device is
$-30$ dB when the communication distance is $1$ meter, the path-loss exponent is
$2$, and the communication distances are randomly set within the range
$[50,100]$ meters. The parameters are summarized in TABLE II.
Figure 3: Learning curves under various numbers of devices.
TABLE II: Values of the parameters in the experiments.
Parameters of system model [43]
---
Parameter | Value | Parameter | Value
$\tau$ | $0.1$ s | $K$ | $4$
$W$ | $18$ kHz | $C_{s}$ | 0.5 J
$P_{m,max}$ | $2$ W | $T$ | $500$
Parameters of Agent[38]
Parameter | Value | Parameter | Value
$\kappa$ | $0.001$ | $I$ | $64$
$E$ | $800$ | $\beta$ | $0.99$
$|\mathcal{R}|$ | $2.5\times 10^{5}$ | $\gamma_{e}$ | $0.5$
$\gamma_{a}$ | $0.5$ | $\alpha_{a}$ | $10^{-3}$
$\alpha_{c}$ | $10^{-4}$ | $F_{c}/F_{m}$ | $0.8/0.5$
$B$ | $10$ | $N_{GA}$ | 50
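The exploration noise $\Delta_{t}$ follows an OU process; a minimal sketch is given below, where treating the reported variation $0.004$ as the per-step Gaussian variance is our own assumption.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise Delta_t added to the actor output."""
    def __init__(self, size, theta=0.15, sigma=np.sqrt(0.004)):
        self.theta, self.sigma = theta, sigma   # decay rate and noise scale (assumed)
        self.x = np.zeros(size)

    def sample(self):
        # mean-reverting update: x <- x - theta * x + sigma * N(0, 1)
        self.x += -self.theta * self.x + self.sigma * np.random.standard_normal(self.x.shape)
        return self.x
```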
### V-A Training Stage
Fig. 3 shows the learning curves in the training stage, i.e., the rewards in
different episodes, for different numbers of IoT devices. It can be seen that
the rewards of the different curves rise and fluctuate from episode $0$ to $150$,
which reflects that the agent is learning the policy to maximize the average
reward. After that the learning curves turn out to be stable, which indicates
that the near optimal policy has been learned by the agent. Note that there are
slight jitters after episode $150$; this is because the agent keeps adjusting
slightly, since the exploration noise prevents it from converging into a local
optimum. It can also be seen that a larger number of devices incurs a lower
reward. This is attributed to the fact that each device is affected by more
interference as the number of devices in the system increases, which leads to a
lower transmission rate. This prolongs the transmission delay and further
increases the AoI of the system. The BS then informs the devices to consume
more energy to sample more frequently and transmit faster, so that a lower AoI
can be guaranteed.
### V-B Testing Stage
In the testing stage, we check the performance of the near optimal policy
obtained in the training stage. A random power allocation policy and the Genetic
Algorithm (GA) [44] are adopted for comparison and are introduced as follows:
* •
Random policy: Randomly allocates the power of each device $m$ within
$[0,P_{m,max}]$, and the sample collection requirements are obtained according
to Eq. (19).
* •
Genetic Algorithm: In each time slot, the BS randomly generates a population
according to $P_{m,max}$ and the population size $B$, which means that there are
$B$ individuals in the population and each individual is a power allocation of
all devices. The BS selects the best individuals in the population as offspring
according to the fitness, i.e., the reward function, of each individual. Then
the offspring evolve for $N_{GA}$ times; in each evolution, the probabilities of
crossover and mutation for the offspring are $F_{c}$ and $F_{m}$, respectively,
where crossover means that two individuals in the offspring exchange the power
allocation of a random device, and mutation means that the power allocation of a
random device in the offspring is reset randomly within $[0,P_{m,max}]$. After
that, the best individuals are selected from the offspring that have experienced
crossover and mutation as the next offspring, and the next evolution starts.
After all evolutions, the best individual of the last offspring is selected as
the power allocation derived by GA. After that, the BS calculates the sample
collection based on this power allocation according to Eq. (19), then executes
the power allocation derived by GA together with the corresponding sample
collection. In the end, the BS moves on to the next time slot. A simplified
sketch of this baseline is given after the list.
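The following is a simplified sketch of this GA baseline as described above; the number of retained parents and the exact selection scheme are illustrative assumptions, and the fitness callable is expected to return the slot reward of Eq. (23) for a candidate power vector.

```python
import numpy as np

def ga_power_allocation(fitness, M, P_max=2.0, B=10, N_GA=50, F_c=0.8, F_m=0.5, n_parents=5):
    """Genetic-algorithm baseline: each individual is a power vector for all M devices."""
    rng = np.random.default_rng()
    pop = rng.uniform(0.0, P_max, size=(B, M))                # random initial population
    for _ in range(N_GA):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:n_parents]]   # keep the fittest individuals
        children = parents[rng.integers(n_parents, size=B)].copy()
        for child in children:
            if rng.random() < F_c:                            # crossover: swap one device's power
                other = parents[rng.integers(n_parents)]
                d = rng.integers(M)
                child[d] = other[d]
            if rng.random() < F_m:                            # mutation: redraw one device's power
                child[rng.integers(M)] = rng.uniform(0.0, P_max)
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]                        # best power vector found
```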
Figure 4: AoI of the system vs. number of devices.
Fig. 4 compares the AoI of the near optimal policy with that of the random policy
and GA. It can be seen that the AoI of the system under the three policies
increases as the number of devices increases. This is because each device
suffers from more interference as the number of devices increases, which lowers
its transmission rate and thus increases its transmission delay according to
Eq. (9), further increasing the AoI of the system. Meanwhile, the near optimal
policy and GA always outperform the random policy, because the near optimal
policy can adjust the power allocation adaptively according to the observed
state, and GA finds the power allocation with the best fitness in the evolutions
to ensure a low AoI, while the random policy just generates the power allocation
randomly. It can also be seen that the near optimal policy outperforms GA,
because GA may converge into local optima, while DDPG uses the exploration noise
to prevent the agent from converging into local optima.
Figure 5: Energy consumption of the system vs. number of devices. Figure 6:
Average reward vs. number of devices.
Fig. 5 compares the energy consumption of the three policies. It can be seen that
the energy consumption increases as the number of devices increases. This is
because the interference increases the AoI of the system, which forces the agent
to inform the devices to consume more energy to sample more frequently and
transmit faster. Moreover, the increasing number of devices contributes to the
increasing energy consumption according to Eq. (12). Meanwhile, the near optimal
policy and GA always outperform the random policy, because DDPG and GA can
allocate power adaptively to ensure a low energy consumption. Moreover, it can
also be seen that the near optimal policy always outperforms GA, due to the fact
that GA does not have a scheme for avoiding local optima like the exploration
noise of DDPG.
Figure 7: AoI of the system vs. packet size. Figure 8: Energy consumption vs.
packet size.
Fig. 6 compares the average reward under the three policies, where the reward
is obtained by averaging the test results over all slots. It can be seen that
the magnitude of the average reward grows as the number of devices increases.
This is because the reward function consists of the AoI and energy consumption
of the system according to Eq. (23), and both of them increase as the number of
devices increases. Moreover, the average reward under the near optimal policy
and GA is higher than that of the random policy. This is attributed to the fact
that the near optimal policy allocates power according to the observed state to
maximize the long-term discounted reward, and GA obtains its power allocation
according to the fitness to maximize the per-slot reward. It can also be seen
that the near optimal policy always outperforms GA; this is because GA aims to
find the power allocation based on the fitness, i.e., the reward in each slot,
while losing sight of long-term reward maximization.
Fig. 7 shows the relationship between the AoI of the system and the packet size,
i.e., $Q$, under the three policies. It can be seen that the AoI increases as the
packet size increases under all three policies. This is because, according
to Eq. (9), the transmission delay is long when the packet size is large,
which incurs a large AoI of the system. In addition, we can see that the AoI
of the system under the near optimal policy and GA is lower than the AoI
under the random policy. This is because the near optimal policy can adjust the
power allocation based on the observed state, and GA obtains its power
allocation according to the fitness, both of which can significantly reduce the
AoI of the system. The gap between the near optimal power allocation and GA is
caused by the local optima of GA.
Fig. 8 shows the relationship between the energy consumption of the system and
the packet size under the three policies. It can be seen that the energy
consumption increases for all three policies when the packet size increases. As
shown in Fig. 7, the transmission delay is long when the packet size is large,
thus incurring an increase of the energy consumption of the system. We can also
see that the energy consumption under the near optimal policy and GA is lower
than that of the random policy. This is because the near optimal policy can
adaptively allocate power and GA can obtain its power allocation according to
the fitness to ensure a lower energy consumption. However, GA may converge into
local optima, thus the energy consumption of GA is higher than that of the near
optimal policy obtained by DDPG.
## VI Conclusions
In this paper, we formulated a problem to minimize the AoI and energy
consumption of the MIMO-NOMA IoT system. To solve it, we simplified the
formulated problem and proposed the power allocation scheme based on DDPG to
maximize the long-term discounted reward. Extensive simulations have
demonstrated the superiority of the near optimal policy as compared with the
baseline policies. According to the theoretical analysis and simulation results,
we have obtained the following conclusions:
* •
A large number of devices would cause a high AoI of the system. To ensure a
lower AoI, the agent will inform the devices to consume more energy to sample
more frequently and transmit faster, thus incurring the increase of energy
consumption.
* •
A large packet size will lead to long transmission delay, which will further
cause high AoI of the system. In order to reduce the AoI of the system, the
agent would inform the devices to consume more energy to sample more
frequently and transmit faster.
* •
The near optimal policy trained by DDPG outperforms the baseline policies under
different numbers of devices and packet sizes, showing a good capability to
adapt to the dynamic variations of the system.
## References
* [1] Q. Wu, H. Liu, C. Zhang, Q. Fan, Z. Li, and K. Wang, “Trajectory protection schemes based on a gravity mobility model in iot,” _Electronics_ , vol. 8, no. 2, 2019. [Online]. Available: https://www.mdpi.com/2079-9292/8/2/148
* [2] Z. Yao, J. Jiang, P. Fan, Z. Cao, and V. Li, “A neighbor-table-based multipath routing in ad hoc networks,” in _The 57th IEEE Semiannual Vehicular Technology Conference, 2003. VTC 2003-Spring._ , vol. 3, 2003, pp. 1739–1743 vol.3.
* [3] J. Fan, S.-t. Yin, Q. Wu, and F. Gao, “Study on refined deployment of wireless mesh sensor network,” in _2010 6th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM)_ , 2010, pp. 1–5.
* [4] Q. Wu, H. Liu, R. Wang, P. Fan, Q. Fan, and Z. Li, “Delay-sensitive task offloading in the 802.11p-based vehicular fog computing systems,” _IEEE Internet of Things Journal_ , vol. 7, no. 1, pp. 773–785, 2020.
* [5] Q. Wang and Z. Wu, “Beamforming optimization and power allocation for user-centric mimo-noma iot networks,” _IEEE Access_ , vol. 9, pp. 339–348, 2021.
* [6] S. Wan, J. Lu, P. Fan, and K. B. Letaief, “To smart city: Public safety network design for emergency,” _IEEE Access_ , vol. 6, pp. 1451–1460, 2018\.
* [7] Q. Wu, Y. Zhao, and Q. Fan, “Time-dependent performance modeling for platooning communications at intersection,” _IEEE Internet of Things Journal_ , vol. 9, no. 19, pp. 18 500–18 513, 2022.
* [8] Q. Wu and J. Zheng, “Performance modeling and analysis of ieee 802.11 dcf based fair channel access for vehicle-to-roadside communication in a non-saturated state,” _Wireless Networks_ , vol. 21, pp. 1 – 11, 2014.
* [9] H. Zhu, Q. Wu, X.-J. Wu, Q. Fan, P. Fan, and J. Wang, “Decentralized power allocation for mimo-noma vehicular edge computing based on deep reinforcement learning,” _IEEE Internet Things J._ , pp. 1–1, 2021.
* [10] X. Chen, J. Lu, P. Fan, and K. B. Letaief, “Massive mimo beamforming with transmit diversity for high mobility wireless communications,” _IEEE Access_ , vol. 5, pp. 23 032–23 045, 2017.
* [11] Y. Gao, B. Xia, K. Xiao, Z. Chen, X. Li, and S. Zhang, “Theoretical analysis of the dynamic decode ordering sic receiver for uplink noma systems,” _IEEE Commun. Lett._ , vol. 21, no. 10, pp. 2246–2249, 2017.
* [12] B. Zhou and W. Saad, “Joint status sampling and updating for minimizing age of information in the internet of things,” _IEEE Trans. Commun._ , vol. 67, no. 11, pp. 7468–7482, 2019.
* [13] Q. Wu, Z. Wan, Q. Fan, P. Fan, and J. Wang, “Velocity-adaptive access scheme for mec-assisted platooning networks: Access fairness via data freshness,” _IEEE Internet Things J._ , vol. 9, no. 6, pp. 4229–4244, 2022.
* [14] K. Wang, F. R. Yu, L. Wang, J. Li, N. Zhao, Q. Guan, B. Li, and Q. Wu, “Interference alignment with adaptive power allocation in full-duplex-enabled small cell networks,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 3, pp. 3010–3015, 2019.
* [15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, and G. Ostrovski, “Human-level control through deep reinforcement learning,” _Nature_ , vol. 518, no. 7540, pp. 529–533, 2015.
* [16] J. F. Grybosi, J. L. Rebelatto, and G. L. Moritz, “Age-of-information of sic-aided massive iot networks with random access,” _IEEE Internet Things J._ , pp. 1–1, 2021.
* [17] S. Wang, M. Chen, Z. Yang, C. Yin, W. Saad, S. Cui, and H. V. Poor, “Distributed Reinforcement Learning for Age of Information Minimization in Real-Time IoT Systems,” _arXiv e-prints_ , p. arXiv:2104.01527, 2021.
* [18] M. A. Abd-Elmagid, H. S. Dhillon, and N. Pappas, “Aoi-optimal joint sampling and updating for wireless powered communication systems,” _IEEE Trans. Veh. Technol._ , vol. 69, no. 11, pp. 14 110–14 115, 2020.
* [19] C. Li, Y. Huang, S. Li, Y. Chen, B. A. Jalaian, Y. T. Hou, W. Lou, J. H. Reed, and S. Kompella, “Minimizing aoi in a 5g-based iot network under varying channel conditions,” _IEEE Internet Things J._ , vol. 8, no. 19, pp. 14 543–14 558, 2021.
* [20] M. Hatami, M. Leinonen, and M. Codreanu, “Aoi minimization in status update control with energy harvesting sensors,” _IEEE Trans. Wireless Commun._ , pp. 1–1, 2021.
* [21] M. Sun, X. Xu, X. Qin, and P. Zhang, “Aoi-energy-aware uav-assisted data collection for iot networks: A deep reinforcement learning method,” _IEEE Internet Things J._ , pp. 1–1, 2021.
* [22] H. Hu, K. Xiong, G. Qu, Q. Ni, P. Fan, and K. B. Letaief, “Aoi-minimal trajectory planning and data collection in uav-assisted wireless powered iot networks,” _IEEE Internet Things J._ , vol. 8, no. 2, pp. 1211–1223, 2021\.
* [23] M. Emara, H. Elsawy, and G. Bauch, “A spatiotemporal model for peak aoi in uplink iot networks: Time versus event-triggered traffic,” _IEEE Internet of Things Journal_ , vol. 7, no. 8, pp. 6762–6777, 2020.
* [24] L. Lyu, Y. Dai, N. Cheng, S. Zhu, X. Guan, B. Lin, and X. Shen, “Aoi-aware co-design of cooperative transmission and state estimation for marine iot systems,” _IEEE Internet of Things Journal_ , vol. 8, no. 10, pp. 7889–7901, 2021.
* [25] X. Wang, C. Chen, J. He, S. Zhu, and X. Guan, “Aoi-aware control and communication co-design for industrial iot systems,” _IEEE Internet of Things Journal_ , vol. 8, no. 10, pp. 8464–8473, 2021.
* [26] X. Hao, T. Yang, Y. Hu, H. Feng, and B. Hu, “An adaptive matching bridged resource allocation over correlated energy efficiency and aoi in cr-iot system,” _IEEE Transactions on Green Communications and Networking_ , vol. 6, no. 1, pp. 583–599, 2022.
* [27] S. S. Yilmaz, B. Özbek, M. İlgüy, B. Okyere, L. Musavian, and J. Gonzalez, “User selection for noma based mimo with physical layer network coding in internet of things applications,” _IEEE Internet Things J._ , pp. 1–1, 2021.
* [28] Z. Shi, H. Wang, Y. Fu, G. Yang, S. Ma, F. Hou, and T. A. Tsiftsis, “Zero-forcing-based downlink virtual mimo–noma communications in iot networks,” _IEEE Internet Things J._ , vol. 7, no. 4, pp. 2716–2737, 2020\.
* [29] Q. Wang and Z. Wu, “Beamforming optimization and power allocation for user-centric mimo-noma iot networks,” _IEEE Access_ , vol. 9, pp. 339–348, 2021.
* [30] L. Han, R. Liu, Z. Wang, X. Yue, and J. S. Thompson, “Millimeter-wave mimo-noma-based positioning system for internet-of-things applications,” _IEEE Internet Things J._ , vol. 7, no. 11, pp. 11 068–11 077, 2020.
* [31] S. Zhang, L. Wang, H. Luo, X. Ma, and S. Zhou, “Aoi-delay tradeoff in mobile edge caching with freshness-aware content refreshing,” _IEEE Trans. Wireless Commun._ , vol. 20, no. 8, pp. 5329–5342, 2021.
* [32] S. Chinnadurai and D. Yoon, “Energy efficient mimo-noma hcn with iot for wireless communication systems,” in _2018 International Conference on Information and Communication Technology Convergence (ICTC)_ , 2018, pp. 856–859.
* [33] J. Gao, X. Wang, R. Shen, and Y. Xu, “User clustering and power allocation for mmwave mimo-noma with iot devices,” in _2021 IEEE Wireless Communications and Networking Conference (WCNC)_ , 2021, pp. 1–6.
* [34] W. Feng, N. Zhao, S. Ao, J. Tang, X. Zhang, Y. Fu, D. K. C. So, and K.-K. Wong, “Joint 3d trajectory and power optimization for uav-aided mmwave mimo-noma networks,” _IEEE Transactions on Communications_ , vol. 69, no. 4, pp. 2346–2358, 2021.
* [35] Z. Ding, L. Dai, and H. V. Poor, “Mimo-noma design for small packet transmission in the internet of things,” _IEEE Access_ , vol. 4, pp. 1393–1405, 2016.
* [36] H. Huang, Y. Yang, Z. Ding, H. Wang, H. Sari, and F. Adachi, “Deep learning-based sum data rate and energy efficiency optimization for mimo-noma systems,” _IEEE Trans. Commun._ , vol. 19, no. 8, pp. 5373–5388, 2020\.
* [37] G. Qiao, S. Leng, S. Maharjan, Y. Zhang, and N. Ansari, “Deep reinforcement learning for cooperative content caching in vehicular edge computing and networks,” _IEEE Internet Things J._ , vol. 7, no. 1, pp. 247–257, 2020\.
* [38] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” _arXiv e-prints_ , p. arXiv:1509.02971, Sep. 2015\.
* [39] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in _2014 International Conference on Machine Learning(ICML)_ , 2014, pp. 387–395.
* [40] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , vol. 9, 2015.
* [41] G. E. Uhlenbeck and L. S. Ornstein, “On the theory of the brownian motion,” _Physical Review_ , vol. 36, no. 5, pp. 823–841, 1930.
* [42] H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, “Energy and spectral efficiency of very large multiuser mimo systems,” _IEEE Trans. Commun._ , vol. 61, no. 4, pp. 1436–1449, 2013.
* [43] S. Wang, M. Chen, Z. Yang, C. Yin, W. Saad, S. Cui, and H. V. Poor, “Distributed reinforcement learning for age of information minimization in real-time iot systems,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 16, no. 3, pp. 501–515, 2022.
* [44] S. Rahnamayan, H. R. Tizhoosh, and M. M. Salama, “A novel population initialization method for accelerating evolutionary algorithms,” _Computers & Mathematics with Applications_, vol. 53, no. 10, pp. 1605–1614, 2007.
11institutetext: Computer Vision Center & Computer Science Department, Universitat Autònoma de Barcelona, Spain
11email: {sbiswas, priba<EMAIL_ADDRESS>
22institutetext: CVPR Unit, Indian Statistical Institute, India
22email<EMAIL_ADDRESS>
# Graph-based Deep Generative Modelling for Document Layout Generation
Sanket Biswas 11 0000-0001-6648-8270 Pau Riba 11 0000-0002-4710-0864 Josep
Lladós 11 0000-0002-4533-4739 Umapada Pal 22 0000-0002-5426-2618
###### Abstract
One of the major prerequisites for any deep learning approach is the
availability of large-scale training data. When dealing with scanned document
images in real world scenarios, the principal information of its content is
stored in the layout itself. In this work, we have proposed an automated deep
generative model using Graph Neural Networks (GNNs) to generate synthetic data
with highly variable and plausible document layouts that can be used to train
document interpretation systems, especially in digital mailroom applications.
It is also the first graph-based approach for the document layout generation
task evaluated on administrative document images, in this case, invoices.
###### Keywords:
Document Synthesis Graph Neural Networks Document Layout Generation.
## 1 Introduction
The variability and diversity of complex layouts and graphical entities in
digital mailroom documents prevent us from tackling document understanding
problems separately, and such specificity has been a great barrier
towards deriving off-the-shelf document analysis solutions usable by
nonspecialists. Apparently, OCR-based engines are the most widely recognized
products in this research community. For instance, imagine a business firm
having thousands of documents to process, analyze, and transform to carry out
day-to-day operations. Examples of such documents might include receipts,
invoices, forms, statements, contracts, and many more pieces of data, which
are highly unstructured or semi-structured, and it is essential to be able to
quickly analyze and understand the information embedded within the unevenly
structured data in these cases. In most of these Document Image Analysis and
Recognition(DIAR) applications, the document content has been broadly
classified into two structural entities: (1) physical and (2) logical
structural entities. While the physical structure describes the visual aspect
of the document by representing the specific objects and their mutual
positions, the logical structure assigns a definite semantic meaning to each
of these objects.
In recent times, deep CNN-based methods have tried to detect the visual
differences between object classes: while the visual characteristics of
certain graphical elements (e.g., plots, charts, figures) differ conspicuously
from text elements, the same cannot be said for tables, where the major
differences from the surrounding content lie mostly in the layout
information and its context. Moreover, trying to train these deep CNN models
from scratch may be quite impractical due to the requirement of a large amount
of training examples and the need of precisely annotated document datasets,
which are scarcely available in the community. The key reason may be that most
of these documents (administrative documents, for example) contain sensitive
information (identity name, bank details, health information and so on) and
are not publicly released by government agencies or business firms to be used
in cloud services. Hence, as in many other applications requiring intensive
training, data augmentation through synthetic generation is a solution. In the
case of document structure recognition, there is an important need to generate
synthetic document layouts that can encode the structural information of the
real data and can be used during training to transfer enough knowledge to the
model. Patil et. al. [9] formulated this task as Document Layout Generation
(DLG), where they used a recursive neural network approach to map the
structured representation of semi-structured documents (in the form of tree-
level hierarchies) to a code representation, the space of which is
approximated by a Gaussian. New hierarchies representing plausible 2D document
layouts were sampled from such distributions. In this work, we tackle the
problem by encoding the structured hierarchies in the form of graph
representation.
Graphs possess the ability to represent two types of contextual knowledge: (1)
geometric/intrinsic, spatial structure of the document with positional
information of object categories like tables, and (2) semantic/extrinsic,
conceptual connections between the different object categories in a document.
Therefore, graphs emerge as a suitable model to represent document layouts.
The revolution of deep learning has also seen considerable progress in the
area of graph-based representation and learning. Graph Neural Networks (GNNs)
[14, 6] as deep learning approaches have extended the power of CNNs to non-
Euclidean geometries to capture long-distance and multi-level semantics
based on the relations between objects. GNNs eventually learn a state
embedding that contains the neighborhood information for each node (which can
represent different entities or objects). The embedding is constructed at
different graph convolution layers, so it encodes the information of a
subgraph centered at a node. In the scenario of document interpretation, GNNs
embed a description of a local layout as a context of a given document
element. A recent application example was the detection of tables in case of
administrative document images [21, 20].
In summary, the main contributions of this work are as follows: (1) a novel
approach has been proposed for the DLG task using GNNs to generate synthetic data
applied to administrative invoice documents where we render data in the form
of diverse graphs that can actually match the structural characteristics of
the target data. (2) The proposed graph-based generative modelling for such
administrative documents also helps to preserve the anonymity of the sensitive
information (e.g. names, addresses, billing information, total amount etc.) it
might contain in the document images. As shown in Fig. 1, the nodes in the
graph represent the different entities (e.g. header, table, supplier etc.) in
the document, while the edges represent the visibility relations (horizontal
or vertical) between the neighbouring nodes. (3) All experiments of our model
have been performed on an administrative invoice collection from the RVL-CDIP
[11] dataset. As a result, a new synthetic invoice dataset has been created
for augmenting the training data for table detection and layout analysis
tasks.
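As a rough illustration of contribution (2), the snippet below builds such a layout graph with networkx; the bounding-box overlap test used here is only a simplified stand-in for the actual visibility relation, and the example entities are invented.

```python
import networkx as nx

def build_layout_graph(entities):
    """Layout graph: nodes are layout entities (category + bounding box), edges connect
    pairs that share a horizontal or vertical band (a crude visibility proxy)."""
    g = nx.Graph()
    for i, (cat, box) in enumerate(entities):
        g.add_node(i, category=cat, bbox=box)
    for i, (_, a) in enumerate(entities):
        for j, (_, b) in enumerate(entities):
            if j <= i:
                continue
            share_row = min(a[3], b[3]) > max(a[1], b[1])   # overlapping y-ranges
            share_col = min(a[2], b[2]) > max(a[0], b[0])   # overlapping x-ranges
            if share_row:
                g.add_edge(i, j, relation="horizontal")
            elif share_col:
                g.add_edge(i, j, relation="vertical")
    return g

# illustrative invoice layout (category, (x0, y0, x1, y1))
g = build_layout_graph([("header", (0, 0, 600, 80)),
                        ("supplier", (0, 100, 250, 200)),
                        ("table", (0, 250, 600, 500)),
                        ("total", (400, 520, 600, 560))])
```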
The rest of this paper is organized as follows. Section 2 provides a review of
the relevant state of the art. In section 3 we describe the main contribution
of our work. Section 4 provides experimental validation with some relevant
results of our proposed approach, both qualitatively and quantitatively.
Finally, Section 5 concludes the work and sheds some light on its future scope
and benefits.
## 2 Related Work
Figure 1: Graph representation of the structure of an invoice image
### 2.1 Geometric Deep Learning
Geometric deep learning [6, 14] has emerged as an extension of deep learning
models to non-Euclidean domains, such as graphs and manifolds. To refer to
neural networks applied to graph-structured data, the term Graph Neural
Networks [23] was coined.
The GNN methods help to learn representations at the node, edge and graph
level considering the underlying topological information. Based on the
fundamental architecture, GNN methods can be aptly divided into two
categories: spatial and spectral methods. Spatial methods extend the idea of
Convolutional Neural Networks (CNNs) for images and define a set of operations
involving the local neighbourhood to compute a new representation [7, 17]. On
the other hand, spectral methods use the knowledge of spectral graph theory
[24] and consider graph Laplacians for defining convolution operations in
the graph domain [6, 14]. Gilmer et al. [10] generalized both families of GNNs
and defined their approach in terms of a Neural Message Passing (NMP)
pipeline. These fundamental architectures have been further extended to new
tasks involving graphs, such as the generative variational graph autoencoder
[15], learning graph edit distance between a pair of graphs [22], graph
matching [28], etc.
### 2.2 Document Layout Generation
The study and analysis of the structural properties and relations between
entities in documents is a fundamental challenge in the field of information
retrieval. Although local tasks like the Optical Character Recognition (OCR)
have been addressed with a considerably high model performance, the global and
highly variable nature of document layouts has made their analysis somewhat
more ambiguous. Previous works on structural document analysis mostly relied
on the different kinds of specifically devised methods and applications [1, 4,
12, 19]. Recent works have shown that deep learning based approaches have
significantly improved the performance of these models in quality. A very
standard approach in this regard was proposed by Yang et al. [26] which uses a
joint visual and textual representation in a multimodal way of understanding,
viewing the layout analysis as a pixel-wise segmentation task. But such modern
deep learning based approaches typically require a large amount of high-
quality training data, which often calls for suitable methods to synthetically
generate documents with real-looking layout [16] and content [18]. Our work
actually focuses on the direction of research on synthetic layout generation,
showing that our generated synthetic data can be extremely beneficial to
augment training data for document analysis tasks.
Preserving a reliable representation of layouts has been shown to be very useful
in various graphical design contexts, which typically involve highly
structured and content-rich objects. One such recent intuitive understanding
was established by Li et al. [16] in their LayoutGAN, which aims to generate
realistic document layouts using Generative Adversarial Networks (GANs) with a
wireframe rendering layer. Zheng et al. [29] used a GAN-based approach to
generate document layouts, but their work focused mainly on content-aware
generation, which primarily uses the content of the document as an additional
prior. Biswas et al. [2] devised a generative GAN-based model to synthesize
realistic document images, guided by a spatial layout (bounding boxes with
object categories) given as a reference by the user. However, to generate more
highly structured objects, it is important to operate on low-dimensional
vectors, unlike CNNs. Hence, in the most recent literature, Patil et al. [9]
exploited this highly structured positional
information along with content to generate document layouts. They have used
recursive neural networks which operate on the low dimensional vectors and
employ two-layer perceptrons to merge any two vectors, which make them
computationally cheaper and help them train with fewer samples. The recursive
neural networks are coupled with Variational Autoencoders (VAEs) in their
resulting model architecture and provide state-of-the-art results for
generating synthetic layouts for 2D documents. They have also introduced a
novel metric for measuring document similarity, called DocSim, and used this
metric to show the novelty and diversity of the generated layouts.
Using geometric relations between the different entities in documents can
actually help to preserve the structural information along with the content as
seen in the work by Riba et al. [21] on table detection in invoice documents
using GNNs. Figure 1 illustrates how such graph modelling of document images
captures the geometrical structure of an invoice, and using this knowledge
helps us generate more realistic synthetic samples for training. In this work
we have used a similar kind of graph modelling for exploiting the structural
information of an invoice image. Carbonell et al. [5] also used GNNs for the
recognition of structural components like named
entities in semi-structured administrative documents. Traditional generative
models for graphs [25] are usually hand‐crafted to model a particular family
of graphs, and thus they do not have the capacity to directly learn the
generative model from observed data. To address this, one such graph-based
generative model using GNNs was proposed by You et al. [27] for molecular data
generation. They used sequential generation with Recurrent Neural Networks
(RNNs) on top of graph-based representations and obtained state-of-the-art
results on molecular data generation. But there has not been any substantial work in
the literature which has applied such graph-based generative models for
document layout analysis tasks. In this context, it is indeed a challenging
problem which we tackle in this work. As a case study, we will work in the
context of administrative documents, primarily focusing on invoices. Automated
generation of synthetic document layouts will allow us to train document
interpretation systems in a more efficient way for all kinds of document
layout analysis tasks.
## 3 Method
Figure 2: The Graph Generation Process: Given figure illustrates the graph
generation process during inference time. The Node-level RNN update encodes
the graph hidden state $h$ , updated by the predicted adjacency vector
$S_{i}^{\pi}$ for every node. The Edge-level RNN updates the sequence of edges
when every new node is added. Figure 3: Qualitative analysis of our generative
model with figures (a), (c) and (e) representing the test graphs from real
invoice data and (b), (d) and (f) representing the generated graphs from our
model.
In this work, we have explored a new research direction in the DIAR domain
through the application of GNNs. A hierarchical and scalable framework has been
designed and implemented to exploit highly powerful graph representations in
semi-structured administrative documents like invoices. Every document image
has been modelled as a visibility graph which is fed to our generative model
to synthesize meaningful document layouts. Figure 1 depicts the structure of
an invoice document and how it has been modelled as a visibility graph. We
considered a document graph whose nodes are graphical or named entities such
as tables, figures, header, date, etc. while the edges represent their spatial
relationships. We aim to learn a distribution $p(G)$ over an undirected graph
$G=(V,E)$ defined by the node set $V=\left\\{v_{1},\ldots,v_{n}\right\\}$ and
edge set $E=\left\\{\left(v_{i},v_{j}\right)\mid v_{i},v_{j}\in V\right\\}$
that has a node ordering $\pi$ mapping nodes to rows/columns of the adjacency
matrix $A^{\pi}$. This node ordering scheme has been adopted to enhance the time
efficiency of training. An adjacency matrix entry $A_{i,j}^{\pi}$ under a
node ordering $\pi$ can be represented as
$A_{i,j}^{\pi}=\mathbb{1}\left[\left(\pi\left(v_{i}\right),\pi\left(v_{j}\right)\right)\in
E\right]$. So, in this work, we have proposed a graph generation framework
applied in context to document images, that learns to generate realistic
graphs by training on a representative set of graphs known as visibility
graphs modelled from documents.
The main idea is to represent graphs of different node orderings as sequences,
and then build a generative model on top of these sequences. As illustrated in
the graph generation framework in Fig. 2, we decomposed the entire process
into two parts: one that generates a sequence of nodes (Node-level update) and
then another process that generates a sequence of edges for every new
generated node (Edge-level update) which will be explained in more detail in
the below subsections in a step-wise manner.
### 3.1 Graph to Sequence Mapping
We aim to learn a distribution $p_{model}(G)$ over graphs, based on a set of
observed graphs $G={(G_{1},...,G_{s})}$ sampled from a data distribution
$p(G)$, where each graph may have a different set of nodes and edges. During
training, instead of learning $p(G)$ directly, whose sample space is
complex to define, we sample a node ordering $\pi$ to get a set
of sequences $S^{\pi}$ as our observations and learn $p(S^{\pi})$ instead.
The sequential nature of $S^{\pi}$ lets us learn the model autoregressively. At
inference time, we can sample $G$ directly without this mapping. The mapping
function $f_{S}$ from graphs to sequences, for a graph $G\sim p(G)$ with $n$
nodes under node ordering $\pi$, is given in equation 1.
$\begin{split}S^{\pi}=f_{S}(G,\pi)=\left(S_{1}^{\pi},\ldots,S_{n}^{\pi}\right)\end{split}$
(1)
Every element $S_{i}^{\pi}$ is actually an adjacency vector that represents
the edges between the present node $\pi(v_{i})$ and previous nodes
$\pi(v_{j})$ already in the graph. This is written explicitly in equation
2.
$\begin{split}S_{i}^{\pi}=\left(A_{1,i}^{\pi},\ldots,A_{i-1,i}^{\pi}\right)^{T},\forall
i\in\\{2,3,4\ldots,n\\}\end{split}$ (2)
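As an illustration, a minimal sketch of this mapping in plain NumPy is given below; the helper name `graph_to_sequence` and the toy graph are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def graph_to_sequence(A, order):
    """Map an adjacency matrix A and a node ordering to the sequence S^pi of
    Eq. (1), where S_i^pi is the adjacency vector of node pi(v_i) with respect
    to the previously placed nodes (Eq. (2))."""
    A = np.asarray(A)
    A_pi = A[np.ix_(order, order)]                    # reorder rows/columns by pi
    # S_i^pi = (A^pi_{1,i}, ..., A^pi_{i-1,i}) for i = 2, ..., n
    return [A_pi[:i, i].copy() for i in range(1, A_pi.shape[0])]

# Toy usage: a 4-node path graph 0-1-2-3 under the identity ordering.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
S = graph_to_sequence(A, order=[0, 1, 2, 3])
# S == [array([1]), array([0, 1]), array([0, 0, 1])]
```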
### 3.2 GRNN Framework
As a result of the graph to sequence mapping, our next step is to
generate the adjacency matrix of a graph $G$ by generating the adjacency
vectors $S_{i}^{\pi}$ of each node in a step-by-step sequential process.
This can output different networks with variable number of nodes, while
preserving the important topological properties of the generated graph. While
transforming the learning distribution from $p(G)$ to $p(S^{\pi})$, we can
further decompose $p(S^{\pi})$ as the product of conditional distributions
over the elements due to its sequential nature as shown in equation 3.
$\begin{split}p\left(S^{\pi}\right)=\prod_{i=1}^{n+1}p\left(S_{i}^{\pi}\mid
S_{1}^{\pi},\ldots,S_{i-1}^{\pi}\right)\end{split}$ (3)
Now to model this still complex distribution, we used recurrent neural
networks (RNNs) that consist of a state-transition function and an output-
function as shown in equations 4 and 5:
$\begin{split}h_{i}=f_{\text{trans
}}\left(h_{i-1},S_{i-1}^{\pi}\right)\end{split}$ (4)
$\begin{split}\theta_{i}=f_{\text{out }}\left(h_{i}\right)\end{split}$ (5)
where $h_{i}$ is a vector that encodes the updated generated graph
information, $S_{i-1}^{\pi}$ is the adjacency vector for the last updated node
$i-1$ and $\theta_{i}$ denotes the distribution of next node’s adjacency
vector i.e. $S_{i}^{\pi}\sim{P}_{\theta_{i}}$. So theoretically, the proposed
Graph Recurrent Neural Network (GRNN) framework utilizes a hierarchical RNN as
shown in figure 2, where the first (i.e. graph-level) RNN generates nodes and
updates the state of the graph. The second RNN (i.e. edge-level) generates the
edges of a given node. To achieve a scalable modeling, we let these networks
share their weights across all the time steps $i$ during the training phase.
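A minimal sketch of such a hierarchical model, assuming PyTorch and the hyper-parameters reported later in Section 4.3 (4-layer GRUs, 128/16-dimensional hidden states, and a 16→8→1 MLP head with ReLU and sigmoid), could look as follows; the `GRNNSketch` class, its padding convention, and the tensor shapes are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class GRNNSketch(nn.Module):
    """Hierarchical graph-level / edge-level RNN (a sketch, not the exact model)."""

    def __init__(self, max_prev_nodes, g_hidden=128, e_hidden=16):
        super().__init__()
        self.max_prev_nodes = max_prev_nodes
        # graph-level RNN: consumes the adjacency vector of the last added node (Eq. 4)
        self.node_rnn = nn.GRU(max_prev_nodes, g_hidden, num_layers=4, batch_first=True)
        # linear layer matching the graph state to the edge-level hidden dimension
        self.init_edge = nn.Linear(g_hidden, e_hidden)
        # edge-level RNN: emits one Bernoulli parameter per candidate edge (Eq. 5)
        self.edge_rnn = nn.GRU(1, e_hidden, num_layers=4, batch_first=True)
        self.edge_out = nn.Sequential(nn.Linear(e_hidden, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, S_prev, h=None):
        # S_prev: (batch, seq_len, max_prev_nodes), padded adjacency vectors S_1..S_{i-1}
        g_out, h = self.node_rnn(S_prev, h)                      # graph hidden state h_i
        e_h = self.init_edge(g_out[:, -1]).unsqueeze(0)          # initialize edge-level RNN
        e_h = e_h.repeat(self.edge_rnn.num_layers, 1, 1)
        edge_in = torch.zeros(S_prev.size(0), self.max_prev_nodes, 1)
        e_out, _ = self.edge_rnn(edge_in, e_h)
        theta = torch.sigmoid(self.edge_out(e_out)).squeeze(-1)  # edge probabilities
        return theta, h                                          # theta_i parameterizes S_i
```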
### 3.3 Learning via Breadth-First Search
A key insight in our proposed method is to learn to generate graphs using
Breadth-First Search (BFS) node orderings rather than arbitrary random node
permutations. The BFS function takes a random permutation $\pi$ as
input, picks $\pi(v_{1})$ as starting node and appends the node neighbours
into the BFS queue in an order defined by $\pi$. The modified mapping
function from graphs to sequences can be rewritten as shown in equation 6.
$S^{\pi}=f_{S}(G,\operatorname{BFS}(G,\pi))$ (6)
This technique helps the model to be trained on all possible BFS orderings,
instead of all possible node permutations. As the BFS function is
deterministic and many-to-one, i.e. multiple permutations can map to the same
ordering, it reduces the number of sequences we need to consider.
It also makes the learning much simpler by reducing the number of edge
predictions and adding possible edges only for the nodes that are considered
in the BFS queue itself.
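A minimal sketch of the BFS re-ordering of equation 6, assuming a connected graph given as an adjacency list (the helper name `bfs_order` is ours), is shown below.

```python
from collections import deque

def bfs_order(adj, pi):
    """BFS(G, pi) of Eq. (6): start from pi(v_1) and append neighbours in the
    order induced by the random permutation pi (connected graphs assumed)."""
    rank = {v: i for i, v in enumerate(pi)}
    start = pi[0]
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in sorted(adj[u], key=rank.get):   # neighbour order defined by pi
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order   # feed this ordering into the graph-to-sequence mapping

# For the path graph 0-1-2-3 (adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
# and pi = [2, 0, 3, 1], bfs_order returns [2, 3, 1, 0].
```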
## 4 Experimental Validation
For our experimental validation, we have used the RVL-CDIP dataset [11]. In
particular, the invoice subset has been split into 70 and 30 percent of the
samples for training and evaluation respectively. Additionally, two extra
datasets have been evaluated, namely, Protein [3] and Community[13], to
evaluate the robustness of our method on other domains.
### 4.1 Datasets
#### 4.1.1 RVL-CDIP Invoices [11].
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) is a
well-known document information database containing 16 classes of documents
with about 400,000 images in grayscale. For evaluating our synthetic graph
generation framework, we chose the 518 images from the Invoice class,
annotated with 5 different region classes such as header, table, address
and so on. Each invoice page is represented by a graph, with the nodes
corresponding to the entities of different classes in the invoice.
#### 4.1.2 Protein [3].
918 protein graphs have been used, with every node representing an amino acid
and two nodes connected if they are within a distance of 6 Angstroms.
#### 4.1.3 Community [13].
The community dataset contains a collection of 500 two-community graphs, where
each community is generated by the Erdős–Rényi (E-R) model [8].
### 4.2 Data Preparation for Graph Representation
In this stage, given the image of a document, we apply physical layout
techniques to detect the graphical regions. Given an invoice document, we
represent each detected entity by a 7-dimensional vector
containing the position of its bounding box and the histogram of its content
(digits, letters or symbols). This encoded information will be used to
generate a visibility graph in order to represent the structural information
of the document. We consider $G=(V,E)$ to be a visibility graph. The set of
edges $E$ represent visibility relations between nodes. Two entities are said
to be connected with an edge if and only if the bounding boxes are vertically
or horizontally visible, i.e. a straight horizontal or vertical line can be
traced between the bounding box of two entities without crossing any other.
Also, long edges covering more than a quarter of the page height are
discarded. An example of a visibility graph sample with corresponding node
embedding is shown in Figure 1.
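For illustration, a minimal sketch of the node embedding and of the vertical visibility test is given below; the exact 4+3 split of the 7-dimensional vector and the helper names are assumptions, and the horizontal test is symmetric.

```python
import numpy as np

def node_feature(box, text, page_w, page_h):
    """7-dim node vector: normalized bounding box + content histogram
    (digits / letters / other symbols). The exact 4+3 split is an assumption."""
    x0, y0, x1, y1 = box
    n = max(len(text), 1)
    hist = [sum(c.isdigit() for c in text) / n,
            sum(c.isalpha() for c in text) / n,
            sum(not c.isalnum() for c in text) / n]
    return np.array([x0 / page_w, y0 / page_h, x1 / page_w, y1 / page_h] + hist)

def vertically_visible(a, b, boxes, max_len):
    """True if boxes a, b (as (x0, y0, x1, y1)) overlap horizontally, no third box
    lies in the vertical gap between them, and the gap is at most max_len
    (a quarter of the page height)."""
    top, bot = sorted((a, b), key=lambda r: r[1])
    band_lo, band_hi = max(a[0], b[0]), min(a[2], b[2])
    if band_hi <= band_lo or bot[1] - top[3] > max_len:
        return False
    for c in boxes:
        if c is a or c is b:
            continue
        if c[0] < band_hi and c[2] > band_lo and c[1] < bot[1] and c[3] > top[3]:
            return False      # a third box blocks the vertical line of sight
    return True
```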
### 4.3 Training setup for Graph Recurrent Neural Network
Once the document has been processed and the visibility graph has been
generated, we feed it to our Graph Recurrent Neural Network (GRNN) framework,
where the 7-dimensional node input space is projected to a higher-order
encoding whose node features preserve the structural and content
information of the document. The graph-level RNN used in our work is a
4-layered GRU with a 128-dimensional hidden state. To output the adjacency
vector prediction, the edge-level RNN uses a 4-layered GRU with a
16-dimensional hidden state. To get the predicted adjacency vector in the
output, the edge-level RNN maps the 16-dimensional hidden state to an
8-dimensional vector through an MLP with ReLU activation, then another MLP
maps the vector to a scalar with sigmoid activation. We initialize the
edge-level RNN with the
output of the graph-level RNN when generating the start of sequences
$S_{i-1}^{\pi}$. We use the highest-layer hidden state of the graph-level RNN,
passed through a linear layer to match the dimensionality. During the
training time, ground truth has been used rather than the model’s own
predictions. During the inference time, the model is allowed to use its own
predicted graph samples at each time step to generate a graph. The Adam
optimizer has been used with a minibatch size of 32. We set the learning rate
to 0.001 and decay it by a factor of 0.2 every 100 epochs in all experiments.
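A minimal training-setup sketch consistent with these hyper-parameters, reusing the `GRNNSketch` module sketched in Section 3.2 above, is given below; the binary cross-entropy loss, the number of epochs, and the toy data loader are assumptions made only to keep the snippet self-contained.

```python
import torch

model = GRNNSketch(max_prev_nodes=25)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# decay the learning rate by a factor of 0.2 every 100 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.2)
criterion = torch.nn.BCELoss()            # edge-existence loss (an assumption)

num_epochs = 300                          # illustrative value
# toy stand-in for minibatches of 32 padded (input, target) sequences
train_loader = [(torch.zeros(32, 5, 25), torch.rand(32, 25).round())]

for epoch in range(num_epochs):
    for S_prev, S_target in train_loader:
        optimizer.zero_grad()
        theta, _ = model(S_prev)          # teacher forcing: ground-truth inputs
        loss = criterion(theta, S_target)
        loss.backward()
        optimizer.step()
    scheduler.step()
```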
### 4.4 Evaluation Schema
The quality of generated graphs is hard to evaluate, and a fair comparison
between the test graphs and the generated graphs is required. A qualitative
comparison can be made by visualizing the two sets of graphs; Figure 3 shows
such a comparison between the test and generated samples.
For a quantitative evaluation scheme, we have used the Maximum Mean
Discrepancy (MMD) measures to calculate the distance between the two sets of
graphs (in this case, the test sample and the generated sample). In our
experiments, the derived MMD scores between the graphs have been calculated
for degree and clustering coefficient distributions, along with the average
orbit count statistics as shown in Table 1. The lower the scores, the better
the real structure of the entities has been preserved.
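As an illustration, a simplified MMD computation over per-graph degree histograms with a Gaussian kernel is sketched below; the actual evaluation follows the MMD protocol of [27], which uses kernels tailored to graph statistics, so this sketch only approximates that setup.

```python
import numpy as np

def gaussian_mmd(X, Y, sigma=1.0):
    """Squared MMD between two samples of graph descriptors under a Gaussian kernel."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    def k(A, B):
        d2 = np.square(A[:, None, :] - B[None, :, :]).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def degree_histogram(adj, max_deg=10):
    """Normalized degree distribution of a single graph given by its adjacency matrix."""
    deg = np.asarray(adj).sum(axis=1).astype(int)
    hist = np.bincount(np.clip(deg, 0, max_deg), minlength=max_deg + 1)
    return hist / max(hist.sum(), 1)

# score = gaussian_mmd([degree_histogram(A) for A in test_graphs],
#                      [degree_histogram(A) for A in generated_graphs])
```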
### 4.5 Experiments on Administrative Invoice Documents
Experiments on the subset of administrative documents (invoices) taken from
RVL-CDIP [11] have been conducted, and we report the first baseline for document
layout generation using a Graph Neural Network (GNN) framework.
Figure 3 illustrates some qualitative results with the document graphs
generated by our proposed GRNN model. The visualizations of the graph samples
suggest that the generated graphs visually preserve the appearance of the
reference ones, so the model roughly learns to preserve both syntactic and
semantic information for the different entities. Eventually, we can create
synthetic samples of invoices by generating more and more graph samples at
inference time. To the best of our knowledge, this is the first baseline
approach to use graph generative models on document datasets.
Table 1 depicts the quantitative results we obtained for the RVL-CDIP
Invoice dataset, comparing our model performance with the benchmark graph
datasets Protein [3] and Community [13] from the graph literature.
The results clearly show that there is ample room for improvement in the graph
generative framework for documents when compared to the performance on the
above mentioned benchmark datasets. The ‘tables’ entity is a regularly
structured entity, and our model works well for generating table classes in
realistic positions. But the title, date and other entities in administrative
documents do not carry uniform information about their structural relations,
and it is quite difficult for the model to learn such semantic content.
Table 1: Summary of the final model results for Document Layout Generation

Dataset | Degree ($\downarrow$) | Clustering ($\downarrow$) | Orbit ($\downarrow$)
---|---|---|---
Protein [3] | 0.014 | 0.002 | 0.039
Community [13] | 0.034 | 0.102 | 0.037
RVL-CDIP Invoices [11] | 0.373 | 0.166 | 0.188
## 5 Conclusion
In this work, we have presented a novel approach to automatically synthesize
document layout structures. The proposed method is able to understand the
complex interactions among the different layout components and generate
plausible layouts for 2D documents. The graph-based generative approach also
explores the power of GNNs for learning and generating complex
structured layouts, with administrative invoices as a case study.
The future scope of this work will be mainly focused on two research lines.
Firstly, there is a requirement for a more efficient evaluation of
synthetically generated layouts when compared to real document layout samples
both quantitatively and qualitatively. Secondly, exploiting these generated
layout samples for supervision purposes can enhance the performance on well-
defined tasks such as table detection or document layout analysis.
## Acknowledgment
This work has been partially supported by the Spanish projects
RTI2018-095645-B-C21, and FCT-19-15244, and the Catalan projects
2017-SGR-1783, the CERCA Program / Generalitat de Catalunya and PhD
Scholarship from AGAUR (2021FIB-10010). We are also indebted to Dr. Joan Mas
Romeu for all the help and assistance provided during the data preparation
stage for the experiments.
## References
* [1] Baird, H.S., Bunke, H., Yamamoto, K.: Structured document image analysis. Springer Science & Business Media (2012)
* [2] Biswas, S., Riba, P., Lladós, J., Pal, U.: Docsynth: A layout guided approach for controllable document image synthesis. In: International Conference on Document Analysis and Recognition (ICDAR) (2021)
* [3] Borgwardt, K.M., Ong, C.S., Schönauer, S., Vishwanathan, S., Smola, A.J., Kriegel, H.P.: Protein function prediction via graph kernels. Bioinformatics 21(suppl_1), i47–i56 (2005)
* [4] Breuel, T.M.: High performance document layout analysis. In: Proceedings of the Symposium on Document Image Understanding Technology. pp. 209–218 (2003)
* [5] Carbonell, M., Riba, P., Villegas, M., Fornés, A., Lladós, J.: Named entity recognition and relation extraction with graph neural networks in semi structured documents. In: 2020 25th International Conference on Pattern Recognition (ICPR). pp. 9622–9627. IEEE (2021)
* [6] Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. In: Advances in neural information processing systems. pp. 3844–3852 (2016)
* [7] Duvenaud, D.K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., Adams, R.P.: Convolutional networks on graphs for learning molecular fingerprints. In: Advances in neural information processing systems (2015)
* [8] Erdős, P., Rényi, A.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci 5(1), 17–60 (1960)
* [9] Gadi Patil, A., Ben-Eliezer, O., Perel, O., Averbuch-Elor, H.: Read: Recursive autoencoders for document layout generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. pp. 544–545 (2020)
* [10] Gilmer, J., Schoenholz, S.S., Riley, P.F., Vinyals, O., Dahl, G.E.: Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212 (2017)
* [11] Harley, A.W., Ufkes, A., Derpanis, K.G.: Evaluation of deep convolutional nets for document image classification and retrieval. In: 2015 13th International Conference on Document Analysis and Recognition (ICDAR). pp. 991–995. IEEE (2015)
* [12] Kasturi, R., O’gorman, L., Govindaraju, V.: Document image analysis: A primer. Sadhana 27(1), 3–22 (2002)
* [13] Kim, J., Lee, J.G.: Community detection in multi-layer graphs: A survey. ACM SIGMOD Record 44(3), 37–48 (2015)
* [14] Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)
* [15] Kipf, T.N., Welling, M.: Variational graph auto-encoders. arXiv preprint arXiv:1611.07308 (2016)
* [16] Li, J., Yang, J., Hertzmann, A., Zhang, J., Xu, T.: Layoutgan: Generating graphic layouts with wireframe discriminators. arXiv preprint arXiv:1901.06767 (2019)
* [17] Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R.: Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493 (2015)
* [18] Liu, T.F., Craft, M., Situ, J., Yumer, E., Mech, R., Kumar, R.: Learning design semantics for mobile apps. In: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (2018)
* [19] O’Gorman, L.: The document spectrum for page layout analysis. IEEE Transactions on pattern analysis and machine intelligence 15(11), 1162–1173 (1993)
* [20] Qasim, S.R., Mahmood, H., Shafait, F.: Rethinking table recognition using graph neural networks. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 142–147. IEEE (2019)
* [21] Riba, P., Dutta, A., Goldmann, L., Fornés, A., Ramos, O., Lladós, J.: Table detection in invoice documents by graph neural networks. In: 2019 International Conference on Document Analysis and Recognition (ICDAR) (2019)
* [22] Riba, P., Fischer, A., Lladós, J., Fornés, A.: Learning graph distances with message passing neural networks. In: 2018 24th International Conference on Pattern Recognition (ICPR) (2018)
* [23] Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Transactions on Neural Networks (2008)
* [24] Shuman, D.I., Narang, S.K., Frossard, P., Ortega, A., Vandergheynst, P.: The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE signal processing magazine (2013)
* [25] White, D., Wilson, R.C.: Spectral generative models for graphs. In: 14th International Conference on Image Analysis and Processing (ICIAP 2007). pp. 35–42. IEEE (2007)
* [26] Yang, X., Yumer, E., Asente, P., Kraley, M., Kifer, D., Lee Giles, C.: Learning to extract semantic structure from documents using multimodal fully convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5315–5324 (2017)
* [27] You, J., Ying, R., Ren, X., Hamilton, W.L., Leskovec, J.: Graphrnn: Generating realistic graphs with deep auto-regressive models. arXiv preprint arXiv:1802.08773 (2018)
* [28] Zanfir, A., Sminchisescu, C.: Deep learning of graph matching. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2018)
* [29] Zheng, X., Qiao, X., Cao, Y., Lau, R.W.: Content-aware generative modeling of graphic design layouts. ACM Transactions on Graphics (TOG) 38(4), 1–15 (2019)
# On conic-line arrangements with nodes, tacnodes, and ordinary triple points
Alexandru Dimca Université Côte d’Azur, CNRS, LJAD, France and Simion Stoilow
Institute of Mathematics, P.O. Box 1-764, RO-014700 Bucharest, Romania
<EMAIL_ADDRESS>and Piotr Pokora Department of Mathematics, Pedagogical
University of Krakow, Podchora̧żych 2, PL-30-084 Kraków, Poland
<EMAIL_ADDRESS>
###### Abstract.
In the present paper, we study conic-line arrangements having nodes, tacnodes,
and ordinary triple points as singularities. We provide combinatorial
constraints on such arrangements and we give the complete classification of
free arrangements in this class.
###### Key words and phrases:
conic-line arrangements, nodes, tacnodes, freeness, nearly freeness
###### 2010 Mathematics Subject Classification:
Primary 14N20; Secondary 14C20, 32S22
## 1\. Introduction
In the present paper we study a class of conic-line arrangements in the
complex projective plane $\mathbb{P}^{2}_{\mathbb{C}}$, with special attention
to the free arrangements in this class. The theory of free line arrangements
is rather rich and we have many results which provide (at least) a partial
characterization of the freeness. In that subject, the ultimate goal is to
understand whether Terao’s Conjecture is true in its whole generality. On the
other hand, line arrangements show up naturally in algebraic geometry. For
example, Hirzebruch’s inequality, much appreciated in combinatorics, is
motivated by many extremal problems in algebraic geometry and it is obtained
with its methods. Based on that, it seems quite natural to extend this
set-up to higher degree curves. From our perspective it seems very natural to
start working on arrangements consisting of rational curves in the plane. Here
we study arrangements of smooth conics and lines in the plane. The first main
motivation is that conic-line arrangements admit non-ordinary singularities,
so we can study arrangements having, for instance, tacnodes as singularities.
Singularities of such arrangements are in general not quasi-
homogeneous, which makes their study quite complicated. By [14, Example 4.1],
we know that Terao’s conjecture does not hold in general for such
arrangements. Let us recall the following counterexample from the
aforementioned paper.
###### Example 1.1.
Consider the following conic-line arrangement
$\mathcal{CL}_{1}\,:\,xy\cdot(y^{2}+xz)\cdot(y^{2}+x^{2}+2xz)=0.$
The intersection point $P=(0:0:1)$ has multiplicity $4$ and it is quasi-
homogeneous (although it is not ordinary). One can show that
$\mathcal{CL}_{1}$ is free with the exponents $(2,3)$. If we perturb the line
$y=0$ a bit, taking for instance $x-13y=0$, we obtain a new conic-line
arrangement
$\mathcal{CL}_{2}\,:\,x\cdot(x-13y)\cdot(y^{2}+xz)\cdot(y^{2}+x^{2}+2xz)=0.$
In this new arrangement, the intersection point $P=(0:0:1)$ has multiplicity
$4$, but it is no longer quasi-homogeneous, and $\mathcal{CL}_{2}$ is not
free. In fact, the arrangement $\mathcal{CL}_{2}$ is nearly free, as defined
in [6]. Note that in many papers on arrangements of plane curves the
hypothesis that all the singularities are quasi-homogeneous plays a key role,
see for instance [4] and [15].
In the present paper we focus on conic-line arrangements in the plane such
that their singularities are nodes, tacnodes, and ordinary triple points. This
assumption is mostly related to our main goal, to verify Terao’s Conjecture
for as large a class of conic-line arrangements as possible. One of our results,
Proposition 4.7, tells us that if $C$ is a free reduced plane curve of degree
$m$ having only nodes, tacnodes, and ordinary triple points, then $m\leq 9$.
Based on that combinatorial restriction, we can perform a detailed search in
order to find conic-line arrangements with nodes, tacnodes, and ordinary
triple points that are free. Our main result, Corollary 5.10, tells us that
the so-called Numerical Terao’s Conjecture holds for our class of conic-line
arrangements. As it was mentioned at the beginning of this section, curve
arrangements attract researchers working both in algebraic geometry and
combinatorics. Due to these reasons, we provide combinatorial constraints on
the weak combinatorics of conic-line arrangements with nodes, tacnodes, and
ordinary triple points. We deliver a Hirzebruch-type inequality for such
arrangements, see Theorem 2.1, and this theorem is in the spirit of results
presented in [13]. Then, using the properties of spectra of singularities as
in the seminal paper by Varchenko [17], we provide bounds on the number of
tacnodes and ordinary triple points, see Theorem 3.1.
## 2\. Hirzebruch-type inequality for conic-line arrangements
We start with presenting our set-up. Let
$\mathcal{CL}=\\{\ell_{1},...,\ell_{d},C_{1},...,C_{k}\\}\subset\mathbb{P}^{2}_{\mathbb{C}}$
be an arrangement consisting of $d$ lines and $k$ smooth conics. We assume
that our conic-line arrangements have $n_{2}$ nodes, $t$ tacnodes, and $n_{3}$
ordinary triple points. We have the following combinatorial count
(2.1) $4\binom{k}{2}+2kd+\binom{d}{2}=n_{2}+2t+3n_{3}.$
###### Proof.
Observe that the left-hand side is the number of pairwise intersections of
curves contained in $\mathcal{CL}$. The right-hand side, according to Bézout’s
theorem, is based on the intersection indices. If $p$ is a node, then the
intersection index of curves meeting at that point is equal to $1$. If $p$ is
a tacnode, then the intersection index of curves meeting at that point is
equal to $2$. Finally, if $p$ is an ordinary triple point, then the
intersection index of the curves meeting at that point is equal to $3$. This
completes our justification. ∎
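For instance, for a smooth conic and a tangent line ($k=d=1$) the left-hand side of (2.1) equals $4\binom{1}{2}+2kd+\binom{1}{2}=2$, which matches the single tacnode $t=1$ (with $n_{2}=n_{3}=0$) on the right-hand side.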
The first result of the present paper is the following Hirzebruch-type
inequality.
###### Theorem 2.1.
Let
$\mathcal{CL}=\\{\ell_{1},...,\ell_{d},C_{1},...,C_{k}\\}\subset\mathbb{P}^{2}_{\mathbb{C}}$
be an arrangement of $d$ lines and $k$ smooth conics and such that $2k+d\geq
12$. Assume that $\mathcal{CL}$ has only $n_{2}$ nodes, $t$ tacnodes, and
$n_{3}$ ordinary triple points. Then
(2.2) $20k+n_{2}+\frac{3}{4}n_{3}\geq d+4t.$
We will prove the above theorem using Langer’s variation on the Miyaoka-Yau
inequality [10] which involves the local orbifold Euler numbers $e_{orb}$ of
singular points. We recall basics on them in a concise way. Let
$(\mathbb{P}^{2}_{\mathbb{C}},\alpha C)$ be an effective and log-canonical
pair, where $C$ is a boundary divisor having only nodes, tacnodes, and
ordinary triple points as singularities. Then
* •
if $q$ is a node, then the local orbifold Euler number is equal to
$e_{orb}(p,\mathbb{P}^{2}_{\mathbb{C}},\alpha C)=(1-\alpha)^{2}$ provided that
$0\leq\alpha\leq 1$,
* •
if $q$ is a tacnode, then $e_{orb}(q,\mathbb{P}^{2}_{\mathbb{C}},\alpha
C)=(1-2\alpha)$ provided that $0\leq\alpha\leq\frac{1}{4}$
* •
if $q$ is an ordinary triple point, then
$e_{orb}(q,\mathbb{P}^{2}_{\mathbb{C}},\alpha
C)\leq\bigg{(}1-\frac{3\alpha}{2}\bigg{)}^{2}$ provided that
$0\leq\alpha\leq\frac{2}{3}$.
Now we are ready to present our proof of Theorem 2.1.
###### Proof.
Let $C:=\ell_{1}+...+\ell_{d}+C_{1}+...+C_{k}$ be a divisor associated with
$\mathcal{CL}$ and such that $m={\rm deg}\,C=d+2k\geq 12$ – we will see in a
moment why the last assumption is crucial. First of all, we need to choose
$\alpha$ in such a way that $K_{\mathbb{P}^{2}_{\mathbb{C}}}+\alpha C$ is
effective and log canonical. In order to obtain the effectivity of the pair,
one needs to satisfy the condition that $-3+\alpha(2k+d)\geq 0$ which implies
$\alpha\geq 3/(2k+d)$. On the other hand, our pair is log-canonical if
$\alpha\leq{\rm min}\\{1,2/3,1/4\\}$, so $\alpha\leq 1/4$. Due to these two
reasons, we get $\alpha\in\bigg{[}3/(2k+d),1/4\bigg{]}$, and this condition is
non-empty provided that $2k+d\geq 12$. From now on we take
$\alpha=\frac{1}{4}$, and then apply the following inequality
(2.3) $\sum_{p\in{\rm
Sing}(C)}3\bigg{(}\alpha(\mu_{p}-1)+1-e_{orb}(p,\mathbb{P}^{2}_{\mathbb{C}},\alpha
C)\bigg{)}\leq(3\alpha-\alpha^{2})m^{2}-3\alpha m,$
where $\mu_{p}$ is the Milnor number of a singular point $p\in{\rm Sing}(C)$.
This gives us
$3n_{2}\bigg{(}\frac{1}{4}(1-1)+1-(1-1/4)^{2}\bigg{)}+3t\bigg{(}\frac{1}{4}(3-1)+1-(1-1/2)\bigg{)}+3n_{3}\bigg{(}\frac{1}{4}(4-1)+1-(1-3/8)^{2}\bigg{)}$
$\leq\sum_{p\in{\rm
Sing}(C)}3\bigg{(}\alpha(\mu_{p}-1)+1-e_{orb}(p,\mathbb{P}^{2}_{\mathbb{C}},\alpha
C)\bigg{)}.$
After some simple manipulations, we obtain
$\frac{21}{16}n_{2}+3t+\frac{261}{64}n_{3}\leq\frac{11}{16}m^{2}-\frac{3}{4}m.$
Using (2.1), we have $m^{2}=2n_{2}+4t+6n_{3}+4k+d$, and this implies
$11m^{2}-12m=22n_{2}+44t+66n_{3}+44k+11d-24k-12d=22n_{2}+44t+66n_{3}+20k-d.$
Multiplying the last inequality by $16$ and combining it with the above identity for $11m^{2}-12m$, we get
$21n_{2}+48t+\frac{261}{4}n_{3}\leq 22n_{2}+44t+66n_{3}+20k-d,$
which completes the proof. ∎
## 3\. Spectra of singularities and constraints on conic-line arrangements
Let $F_{\bullet}:\,H=F^{0}\supset...\supset F^{p}\supset F^{p+1}\supset...$ be
the Hodge filtration on the (reduced) vanishing cohomology
$H=H^{n}(X_{\infty})$ of an isolated singularity $f$, where $X_{\infty}$
denotes the canonical Milnor fibre. The filtration $F_{\bullet}$ is invariant
with respect to the action of the semisimple part of the monodromy $T_{s}$.
Hence $T_{s}$ acts on ${\rm Gr}_{F}^{p}H=F^{p}/F^{p+1}$ and ${\rm
Gr}_{F}^{p}H=\bigoplus_{\lambda}({\rm Gr}_{F}^{p})_{\lambda}$, where $({\rm
Gr}_{F}^{p})_{\lambda}={\rm Gr}_{F}^{p}H_{\lambda}$ is the eigensubspace
corresponding to $\lambda$. Denote by
$\mu_{p}={\rm dim}\,{\rm Gr}_{F}^{p},\,\quad\mu_{\lambda}^{p}={\rm dim}({\rm
Gr}_{F}^{p})_{\lambda}.$
Then $\sum_{p}\mu^{p}=\mu$ is the Milnor number,
$\sum_{\lambda}\mu_{\lambda}^{p}=\mu^{p}$, and
$\sum_{p}\mu_{\lambda}^{p}=\mu_{\lambda}$ is the multiplicity of an eigenvalue
$\lambda$, where $\mu_{\lambda}={\rm dim}H_{\lambda}$. Now to each eigenvalue
$\lambda$ one defines
$\alpha=-(1/2\pi\iota){\rm log}(\lambda),$
where $\iota^{2}=-1$. Since $\lambda$ is a root of the unity, $\alpha$ is a
rational number defined modulo an integer. We normalize $\alpha$ according to
the level $p$ of $\lambda$ with respect to $F_{\bullet}$ by the condition
$\alpha=-\frac{1}{2\pi\iota}{\rm log}\,\lambda,\quad n-p-1<\alpha\leq n-p,$
where this $\lambda$ comes from the action of $T_{s}$ on ${\rm Gr}_{F}^{p}H$.
In this way, one obtains an element of the group $\mathbb{Z}^{\mathbb{Q}}$ of
the form
${\rm
Sp}(f)=(\alpha_{1})+...+(\alpha_{\mu})=\sum_{\alpha}n_{\alpha}\cdot(\alpha),$
with $n_{\alpha}=\mu_{\lambda}^{p}$, which is called the spectrum of the
singularity. The numbers $\alpha$ are called spectral numbers, and the
coefficients $n_{\alpha}$ are the spectral multiplicities. Let us recall some
basic properties of spectral numbers for isolated singularities of
hypersurfaces.
1. (1)
$\alpha_{j}\in(0,n)$ if $n=\dim X$.
2. (2)
The spectrum is an invariant of a singularity.
3. (3)
Symmetry: $\alpha_{i}=\alpha_{\mu-i}$.
4. (4)
_Thom-Sebastiani Principle_ : If $f\in\mathbb{C}\\{x_{0},\ldots x_{m}\\}$ and
$g\in\mathbb{C}\\{y_{0},\ldots,y_{n}\\}$ are two series in separate sets of
variables, the expression
$f\oplus g=f(x_{0},\ldots
x_{m})+g(y_{0},\ldots,y_{n})\in\mathbb{C}\\{x_{0},\ldots
x_{m},y_{0},\ldots,y_{n}\\}$
is called the Thom-Sebastiani sum of $f$ and $g$. Then
${\rm Sp}(f\oplus g)=\\{\alpha+\beta\,:\,\alpha\in{\rm Sp}(f),\beta\in{\rm
Sp}(g)\\}.$
5. (5)
${\rm Sp}(x^{m})=\\{\frac{1}{m},\frac{2}{m},...,\frac{m-1}{m}\\}.$
Using the formulae above, we can compute spectral numbers for nodes $(A_{1})$,
tacnodes $(A_{3})$, and ordinary triple points $(D_{4})$.
1. $(A_{1})$:
This singular point can be locally described by $x^{2}+y^{2}=0$, so ${\rm
Sp}(x^{2})=\\{\frac{1}{2}\\}$, ${\rm Sp}(y^{2})=\\{\frac{1}{2}\\}$, and then
${\rm Sp}(A_{1})=1\cdot 1$.
2. $(A_{3})$:
This singular point can be locally described by $y^{2}+x^{4}=0$, so ${\rm
Sp}(y^{2})=\\{\frac{1}{2}\\}$, ${\rm
Sp}(x^{4})=\\{\frac{1}{4},\frac{1}{2},\frac{3}{4}\\}$, and we obtain ${\rm
Sp}(A_{3})=1\cdot\frac{3}{4}+1\cdot 1+1\cdot\frac{5}{4}$.
3. $(D_{4})$:
This singular point can be locally described by $x^{3}+y^{3}=0$, so ${\rm
Sp}(x^{3})={\rm Sp}(y^{3})=\\{\frac{1}{3},\frac{2}{3}\\}$, and ${\rm
Sp}(D_{4})=1\cdot\frac{2}{3}+2\cdot 1+1\cdot\frac{4}{3}$.
Now we present the main result of this section.
###### Theorem 3.1.
Let
$\mathcal{CL}=\\{\ell_{1},...,\ell_{d},C_{1},...,C_{k}\\}\subset\mathbb{P}^{2}_{\mathbb{C}}$
be an arrangement of $d\geq 0$ lines and $k\geq 0$ smooth conics. Assume that
$\mathcal{CL}$ has only $n_{2}$ nodes, $t$ tacnodes, and $n_{3}$ ordinary
triple points. Let $C=\ell_{1}+...+\ell_{d}+C_{1}+...+C_{k}$ and write
$m:={\rm deg}\,C=d+2k$ as $m=3m^{\prime}+\epsilon$ with
$\epsilon\in\\{1,2,3\\}$. Then one has
$t+n_{3}\leq\binom{m-1}{2}+k-\frac{m^{\prime}(5m^{\prime}-3)}{2}$
and
$n_{3}\leq(m^{\prime}+1)(2m^{\prime}+1).$
###### Proof.
We are going to use the theory of spectra of singularities. Recall that if
$(X,0)$ is the union of $m$ lines passing through the origin of
$\mathbb{C}^{2}$, then the corresponding spectrum is
${\rm
Sp}(X,0)=\sum_{j=1}^{m-1}j\cdot\frac{j+1}{m}+\sum_{j=2}^{m-1}(m-j)\cdot\frac{m+j-1}{m}.$
We apply the semicontinuity property of the spectrum in the form presented by
Steenbrink in [16] (see also [9, Theorem 8.9.8]) for the semicontinuity domain
$B=(\frac{1}{3},\frac{4}{3}]$. If $L$ is a generic line in
$\mathbb{P}^{2}_{\mathbb{C}}$, then the trace of the arrangement
$\mathcal{CL}$ on the complement
$\mathbb{C}^{2}=\mathbb{P}^{2}_{\mathbb{C}}\setminus L$ can be identified with
a deformation $X_{s}$ of a singularity of type $(X,0)$ introduced above.
Therefore we get the following equation
(3.1) ${\rm deg}_{B}\sum_{y}{\rm Sp}(X_{s},y)=n_{2}+3t+4n_{3},$
where the above sum is over singular points $y\in X_{s}$, and ${\rm
deg}_{B}\,\sum_{y}{\rm Sp}(X_{s},y)$ denotes the sum of all spectral
multiplicities for spectral numbers that are contained in domain $B$.
On the other hand, the total degree of the spectrum ${\rm Sp}(X,0)$ is equal
to the Milnor number $\mu(X,0)=(m-1)^{2}$. To get the degree for the
restriction of the spectrum ${\rm Sp}(X,0)$ to the interval
$B=(\frac{1}{3},\frac{4}{3}]$, we have to subtract the sum $S_{1}$ of the
multiplicities of the spectral numbers $\alpha$ such that
$\alpha\leq\frac{1}{3}$, and the sum $S_{2}$ of the multiplicities of the
spectral numbers $\alpha>\frac{4}{3}$. By the symmetry property of the
spectrum, the last case can be replaced by $\alpha<\frac{2}{3}$.
The first sum $S_{1}$ is at least equal to
$S_{1}=1+2+\ldots+(m^{\prime}-1)=\frac{m^{\prime}(m^{\prime}-1)}{2}.$
The second sum $S_{2}$ is at least equal to
$S_{2}=1+2+\ldots+(2m^{\prime}-1)=m^{\prime}(2m^{\prime}-1).$
It follows that
(3.2) ${\rm deg}_{B}\,{\rm
Sp}(X,0)\leq(m-1)^{2}-S_{1}-S_{2}=(m-1)^{2}-\frac{m^{\prime}(5m^{\prime}-3)}{2}.$
Therefore the semicontinuity theorem implies that
(3.3) $n_{2}+3t+4n_{3}\leq(m-1)^{2}-\frac{m^{\prime}(5m^{\prime}-3)}{2}.$
Observe that the combinatorial count (2.1) can be rewritten as
(3.4) $n_{2}+2t+3n_{3}=\binom{m}{2}-k.$
By the above, we can conclude that
$t+n_{3}\leq(m-1)^{2}-\binom{m}{2}+k-\frac{m^{\prime}(5m^{\prime}-3)}{2}=$
$=\binom{m-1}{2}+k-\frac{m^{\prime}(5m^{\prime}-3)}{2}.$
For the second inequality, we choose the semicontinuity domain
$B=(-\frac{1}{3},\frac{2}{3}]$, and for this choice of $B$ we have
(3.5) ${\rm deg}_{B}\,\sum_{y}{\rm Sp}(X_{s},y)=n_{3},$
where the sum is taken over all the singular points $y\in X_{s}$. On the other
hand, we have
(3.6) ${\rm deg}_{B}\,{\rm Sp}(X,0)\leq
1+2+\ldots+(2m^{\prime}+1)=(m^{\prime}+1)(2m^{\prime}+1).$
This completes the proof. ∎
###### Example 3.2.
These bounds are rather good, at least in some cases. In order to see this for
the bound involving $t+n_{3}$, consider Figure 2 where we present a conic-line
arrangement with $d=3$ and $k=2$ having $t=5$ and $n_{3}=3$. In this case
$m=7$, hence $m^{\prime}=2$ and the first inequality in Theorem 3.1 is
$8=t+n_{3}\leq 10.$
Next, consider the dual Hesse arrangement given by
$(x^{3}-y^{3})(y^{3}-z^{3})(x^{3}-z^{3})=0,$
which has $n_{3}=12$ triple points. In this case $m=9$, $m^{\prime}=2$, and
the second inequality in Theorem 3.1 gives us
$12=n_{3}\leq 15.$
###### Remark 3.3.
Since $m^{\prime}=(m-\epsilon)/3$ and $k\leq m/2$, it follows that we have
$t+n_{3}\leq\frac{1}{18}\bigg{(}4m^{2}+m(10\epsilon-9)-5\epsilon^{2}-9\epsilon+18\bigg{)}\approx\frac{2}{9}m^{2}+O(m).$
## 4\. Combinatorial constraints on the freeness of reduced curves
We begin with a general introduction to the subject. Let $C$ be a reduced
curve in $\mathbb{P}^{2}_{\mathbb{C}}$ of degree $m$ given by $f\in
S:=\mathbb{C}[x,y,z]$. We denote by $J_{f}$ the Jacobian ideal generated by
the partial derivatives $\partial_{x}f,\,\partial_{y}f,\,\partial_{z}f$.
Moreover, we denote by $r:={\rm mdr}(f)$ the minimal degree of a relation
among the partial derivatives, i.e., the minimal degree $r$ of a triple
$(a,b,c)\in S_{r}^{3}$ such that
$a\cdot\partial_{x}f+b\cdot\partial_{y}f+c\cdot\partial_{z}f=0.$
We denote by $\mathfrak{m}=\langle x,y,z\rangle$ the irrelevant ideal.
Consider the graded $S$-module $N(f)=I_{f}/J_{f}$, where $I_{f}$ is the
saturation of $J_{f}$ with respect to $\mathfrak{m}=\langle x,y,z\rangle$.
###### Definition 4.1.
We say that a reduced plane curve $C$ is _free_ if $N(f)=0$.
Let us recall that for a reduced curve $C:f=0$ we define the Arnold exponent
$\alpha_{C}$ which is the minimum of the Arnold exponents of the singular
points $p$ in $C$. Using the modern language, the Arnold exponents of singular
points are nothing else than the log canonical thresholds of singularities.
###### Definition 4.2.
Let $C\,:\,f=0$ be a reduced curve in $\mathbb{C}^{2}$ which is singular at
$0\in\mathbb{C}^{2}$. Denote by $\phi:Y\rightarrow\mathbb{C}^{2}$ the standard
minimal resolution of singularities, i.e., the smallest resolution that has
simple normal crossings (which exists and is unique). We then write
$K_{Y}=\phi^{*}K_{\mathbb{C}^{2}}+\sum_{i}a_{i}E_{i}$ and
$\phi^{*}C=\phi_{*}^{-1}C+\sum_{i}b_{i}E_{i}$, where $=$ means the linear
equivalence. Then the log canonical threshold of $C$ in $\mathbb{C}^{2}$ is
defined as
$c_{0}(f)={\rm min}_{i}\bigg{\\{}\frac{a_{i}+1}{b_{i}}\bigg{\\}}.$
Using this local (analytical) description, the Arnold exponent $\alpha_{C}$ of
$C$ is then the minimum over all log canonical thresholds of singular points.
In order to compute the actual values of the log canonical thresholds, we can
use the following result – see for instance [2, Theorem 4.1].
###### Theorem 4.3.
Let $C$ be a reduced curve in $\mathbb{C}^{2}$ which has degree $m$. Then
$c_{0}(f)\geq\frac{2}{m}$, and the equality holds if and only if $C$ is a
union of $m$ lines passing through $0$.
###### Remark 4.4.
If $p=(0,0)\in\mathbb{C}^{2}$ is an ordinary singularity of multiplicity $r$
determined by $C\,:\,f=0$, then $c_{0}(f)=\frac{2}{r}$.
Now we need to compute the log canonical threshold for tacnodes. Since
tacnodes are quasi-homogeneous singularities, then we can use the following
pattern (cf. [4, Formula 2.1]).
Recall that the germ $(C,p)$ is weighted homogeneous of type $(w_{1},w_{2};1)$
with $0<w_{j}\leq 1/2$ if there are local analytic coordinates $y_{1},y_{2}$
centered at $p=(0,0)$ and a polynomial
$g(y_{1},y_{2})=\sum_{u,v}c_{u,v}y_{1}^{u}y_{2}^{v}$ with
$c_{u,v}\in\mathbb{C}$, where the sum is taken over all pairs
$(u,v)\in\mathbb{N}^{2}$ with $uw_{1}+vw_{2}=1$. In this case, we have
$c_{0}(g)=w_{1}+w_{2}.$
###### Remark 4.5.
Let $g=y^{2}+x^{4}$, so $g$ defines a tacnode at $p=(0,0)$. Then
$w_{1}=\frac{1}{2},w_{2}=\frac{1}{4}$, and hence we have
$c_{0}(g)=\frac{3}{4}.$
In order to show our main result for this section, recall the following [4,
Theorem 2.1].
###### Theorem 4.6 (Dimca-Sernesi).
Let $C\,:\,f=0$ be a reduced curve of degree $m$ in
$\mathbb{P}^{2}_{\mathbb{C}}$ having only quasi-homogeneous singularities.
Then
${\rm mdr}(f)\geq\alpha_{C}\cdot m-2.$
Since nodes, tacnodes, and ordinary triple points are quasi-homogeneous
singularities, then we can prove the following result.
###### Proposition 4.7.
Let $C\,:\,f=0$ be a reduced curve of degree $m$ in
$\mathbb{P}^{2}_{\mathbb{C}}$ having only nodes, tacnodes, and ordinary triple
points as singularities. Then
${\rm mdr}(f)\geq\frac{2}{3}m-2.$
In particular, if $C$ is free, then $m\leq 9$.
###### Proof.
Since $C$ has nodes, tacnodes, and ordinary triple points as singularities,
then
$\alpha_{C}={\rm
min}\bigg{\\{}1,\frac{3}{4},\frac{2}{3}\bigg{\\}}=\frac{2}{3},$
so by the above result we have
${\rm mdr}(f)\geq\frac{2}{3}m-2.$
If $C$ is a free curve, then
$\frac{2}{3}m-2\leq{\rm mdr}(f)\leq\frac{m-1}{2},$
which gives $m\leq 9$. ∎
###### Remark 4.8.
If we restrict our attention to reduced free curves with nodes and tacnodes,
then analogous computations as above give $m\leq 5$, and this bound is sharp
according to what we shall see in Example 4.14 below. In
fact, we can show that for every $m\in\\{3,4,5\\}$ there exists a conic-line
arrangement having nodes and tacnodes with $k\geq 1$ that is free.
###### Remark 4.9.
Observe that Proposition 4.7 is sharp in the class of reduced free curves –
the dual Hesse arrangement of $9$ lines and $12$ triple points considered in
Example 3.2 is free.
Let
$\mathcal{CL}=\\{\ell_{1},...,\ell_{d},C_{1},...,C_{k}\\}\subset\mathbb{P}^{2}_{\mathbb{C}}$
be a free arrangement of $d\geq 1$ lines and $k\geq 1$ conics. We are going to
use the following homological characterization of the freeness, see for
instance [6], which can be checked on specific examples using `Singular` [3],
or other computer algebra software.
###### Theorem 4.10.
Let $C\subset\mathbb{P}^{2}_{\mathbb{C}}$ be a reduced curve of degree $m$ and
let $f=0$ be its defining equation. Then $C$ is free if and only if the
minimal free resolution of the Milnor algebra $M(f)=S/J_{f}$ has the following
form:
$0\rightarrow S(-d_{1}-(m-1))\oplus S(-d_{2}-(m-1))\rightarrow
S^{3}(-m+1)\rightarrow S\rightarrow M(f)\rightarrow 0$
with $d_{1}+d_{2}=m-1$. In particular, if $d_{1}\leq d_{2}$, then ${\rm
mdr}(f)=d_{1}\leq\frac{m-1}{2}$.
We will need additionally the following lemma, see for instance [4, Lemma
4.4].
###### Lemma 4.11.
If $C$ is a free plane curve of degree $m$, then the exponents $(d_{1},d_{2})$
are positive integers satisfying the following system of equations:
$d_{1}+d_{2}=m-1,\quad\quad d_{1}d_{2}=(m-1)^{2}-\tau(C),$
where $\tau(C)$ denotes the total Tjurina number.
If now $\mathcal{CL}$ is a conic-line arrangement having degree $m=2k+d$ with
$n_{2}$ nodes, $t$ tacnodes, and $n_{3}$ ordinary triple points, then the
above lemma can be rewritten as
(4.1) $d_{1}+d_{2}=m-1,\quad\quad
d_{1}^{2}+d_{2}^{2}+d_{1}d_{2}=n_{2}+3t+4n_{3},\quad\quad d_{1}\leq d_{2}.$
So our problem reduces to possible geometrical realizations of some positive
integer solutions to (4.1).
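For small degrees, the admissible numerical data can be enumerated by brute force; the following minimal sketch (not part of the original argument) lists the non-negative integer solutions of (4.1) compatible with the combinatorial count (2.1) for given $d$ and $k$.

```python
from itertools import product
from math import comb

def numerical_candidates(d, k):
    """Non-negative integer solutions (d1, d2, n2, t, n3) of (4.1) together with
    the combinatorial count (2.1) for an arrangement of d lines and k conics."""
    m = 2 * k + d
    pairs = 4 * comb(k, 2) + 2 * k * d + comb(d, 2)     # left-hand side of (2.1)
    sols = []
    for d1 in range(1, (m - 1) // 2 + 1):
        d2 = m - 1 - d1
        tau = d1 * d1 + d1 * d2 + d2 * d2               # equals n2 + 3t + 4n3
        for t, n3 in product(range(pairs // 2 + 1), range(pairs // 3 + 1)):
            n2 = pairs - 2 * t - 3 * n3
            if n2 >= 0 and n2 + 3 * t + 4 * n3 == tau:
                sols.append((d1, d2, n2, t, n3))
    return sols

# For m = 3 (one conic, one line) this returns [(1, 1, 0, 1, 0)],
# i.e. the tangent line/conic pair of Example 4.12 below.
```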
###### Example 4.12.
Let $m=3$, so we have one conic and one line. An easy inspection tells us that
$d_{1}=d_{2}=1$, $n_{2}=n_{3}=0$, and $t=1$ satisfies (4.1). Let us consider
$\mathcal{CL}_{3}:\quad(x-z)\cdot(x^{2}+y^{2}-z^{2})=0.$
Observe that $\mathcal{CL}_{3}$ is exactly a line tangent to a conic. We can
compute the minimal free resolution of the Milnor algebra $M(f)=S/J_{f}$ which
has the following form:
$0\rightarrow S^{2}(-3)\rightarrow S^{3}(-2)\rightarrow S\rightarrow
M(f)\rightarrow 0,$
so $\mathcal{CL}_{3}$ is free with the exponents $(1,1)$.
###### Example 4.13.
Consider now the case with $m=4$, and note that there are many solutions to
(4.1). Take $d_{1}=1$ and $d_{2}=2$, then we can find the following
Diophantine solution, namely $t=2$ and $n_{2}=1$. Consider
$\mathcal{CL}_{4}:\quad(x^{2}-z^{2})\cdot(x^{2}+y^{2}-z^{2})=0,$
hence two lines tangent to the same conic. Then the minimal free resolution of
the Milnor algebra $M(f)=S/J_{f}$ has the following form:
$0\rightarrow S(-5)\oplus S(-4)\rightarrow S^{3}(-3)\rightarrow S\rightarrow
M(f)\rightarrow 0,$
so $\mathcal{CL}_{4}$ is indeed free with the exponents $(1,2)$.
###### Example 4.14.
Consider now the case with $m=5$. One positive integer solution to (4.1) is
$d_{1}=d_{2}=2$ and $n_{2}=t=3$, $n_{3}=0$. Consider the following conic-line
arrangement
$\mathcal{CL}_{5}:(y-z)\cdot(x^{2}-z^{2})\cdot(x^{2}+y^{2}-z^{2})=0.$
The minimal free resolution of the Milnor algebra $M(f)=S/J_{f}$ has the
following form:
$0\rightarrow S^{2}(-6)\rightarrow S^{3}(-4)\rightarrow S\rightarrow
M(f)\rightarrow 0,$
so $\mathcal{CL}_{5}$ is free.
Another positive integer solution to (4.1) is $d_{1}=d_{2}=2$ and $n_{2}=t=0$,
$n_{3}=3$. Consider now the second case and the following arrangement
$\mathcal{CL}_{5}^{{}^{\prime}}:y\cdot(x+y-4z)\cdot(x-y+4z)\cdot(x^{2}+y^{2}-16z^{2})=0.$
Observe that $\mathcal{CL}_{5}^{\prime}$ has $n_{2}=t=0$ and $n_{3}=3$. The
minimal free resolution of the Milnor algebra $M(f)=S/J_{f}$ has the same form
as above, namely
$0\rightarrow S^{2}(-6)\rightarrow S^{3}(-4)\rightarrow S\rightarrow
M(f)\rightarrow 0,$
so $\mathcal{CL}_{5}^{{}^{\prime}}$ is free.
Figure 1. Free conic-line arrangements with $m=5$ and $k=1$.
## 5\. Classification of free conic-line arrangements with nodes, tacnodes
and triple points
After the above warm-up, we are ready to present the first classification
result. It gives a complete characterization of free arrangements having
$d\in\\{1,2,3\\}$ lines and $k\geq 1$ smooth conics. We start with a general
discussion about free conic-line arrangements with nodes, tacnodes, and
ordinary triple points.
If $\mathcal{CL}$ is free with the exponents $(d_{1},d_{2})$, then we must
have
$d_{1}+d_{2}=2k+d-1$
and
$d_{1}d_{2}=(2k+d-1)^{2}-n_{2}-3t-4n_{3}=(2k+d-1)^{2}-(n_{2}+2t+3n_{3})-t-n_{3}.$
Using the combinatorial count (2.1), we obtain
(5.1) $d_{1}d_{2}=2k^{2}-2k+1+2kd+\frac{d^{2}-3d}{2}-t-n_{3}.$
If we multiply equation (5.1) by $2$, add it to (2.1), then we obtain
$2d_{1}d_{2}=2k^{2}+2k(d-1)+\frac{(d-1)(d-4)}{2}+n_{2}+n_{3}.$
Since $d_{1}d_{2}\leq(2k+d-1)^{2}/4$, we get
(5.2) $n_{2}+n_{3}\leq\frac{3}{2}(d-1).$
Using once again $(\ref{comb:naiv})$, we obtain
(5.3) $t\geq k^{2}-k+kd+\frac{(d-1)(d-3)}{4}-n_{3}.$
If $k\geq 2$, two conics $C_{1}$ and $C_{2}$ from the arrangement
$\mathcal{CL}$ can be in one of the following $3$ situations:
1. (1)
$|C_{1}\cap C_{2}|=4$, and then all the intersection points are nodes.
2. (2)
$|C_{1}\cap C_{2}|=3$, and then one intersection point produces a tacnode in
$\mathcal{CL}$, and the other two intersection points will give nodes.
3. (3)
$|C_{1}\cap C_{2}|=2$, and then two intersection points produce two tacnodes
in $\mathcal{CL}$.
Let $m_{j}$ with $j\in\\{2,3,4\\}$ be the number of pairs of conics in
$\mathcal{CL}$ such that $|C_{1}\cap C_{2}|=j$. Then the number of tacnodes
coming from the contact of $2$ conics is
(5.4) $t^{\prime}=2m_{2}+m_{3}.$
We also have, by counting pairs of conics in two different ways,
(5.5) $m_{2}+m_{3}+m_{4}=\binom{k}{2}$
and
(5.6) $2m_{3}+4m_{4}\leq n_{2}+n_{3}.$
The last inequality follows from the fact that the nodes coming from
$C_{1}\cap C_{2}$ will give either nodes or triple points in $\mathcal{CL}$.
To evaluate the number of tacnodes created by a line in $\mathcal{CL}$, we
need the following.
###### Lemma 5.1.
Let $C_{1}$ and $C_{2}$ be two smooth conics in $\mathcal{CL}$ such that
$|C_{1}\cap C_{2}|=2$. Then any line $L$ in $\mathcal{CL}$ is tangent to at
most one of the conics $C_{1}$ and $C_{2}$.
###### Proof.
Since the pair of conics $C_{1}$ and $C_{2}$ gives rise to two tacnodes, then
up-to a linear change of coordinates one can take
$C_{1}:x^{2}+y^{2}-z^{2}=0\text{ and }C_{2}:x^{2}+y^{2}-r^{2}z^{2}=0$
where $r\in\mathbb{C}$, $r\neq 0,\pm 1$ – please consult [12, Proposition 3].
It follows that the dual $C_{1}^{\vee}$ is given by $x^{2}+y^{2}-z^{2}=0$,
while the dual $C_{2}^{\vee}$ is given by
$x^{2}+y^{2}-\frac{1}{r^{2}}z^{2}=0$. A common tangent for $C_{1}$ and $C_{2}$
corresponds to a point in the intersection $C_{1}^{\vee}\cap C_{2}^{\vee}$,
and hence it is given by the equation $T_{\pm}:x\pm iy=0$. However, both these
lines pass through one of the two tacnodes situated at $(1:\pm i:0)$. Hence
$L$ cannot be tangent to two conics, since the curve ${\mathcal{C}}$ has only
nodes, ordinary triple points and tacnodes as singularities. ∎
###### Theorem 5.2.
Let $\mathcal{CL}$ be an arrangement of $0\leq d\leq 3$ lines and $k\geq 1$
smooth conics with $n_{2}$ nodes, $t$ tacnodes, and $n_{3}$ ordinary triple
points. Assume that $\mathcal{CL}$ is free, then the following pairs are
admissible:
$(d,k)\in\\{(1,1),(2,1),(3,1),(3,2)\\}.$
###### Proof.
We need to consider some cases.
Case $d=0$. Then (5.2) implies $n_{2}+n_{3}<0$, which is clearly impossible.
Case $d=1$. Then $n_{2}+n_{3}\leq 0$, and hence $n_{2}=n_{3}=0$. Using the
combinatorial count (2.1) we get $t=k^{2}$. The case $k=1$ is clearly
possible, i.e., a conic plus a tangent line form a free curve, see Example
4.12. We show now that the case $k>1$ is impossible. Using (5.5), we see that
$m_{3}=m_{4}=0$, and hence any two conics in $\mathcal{CL}$ meet in two
points, as in Lemma 5.1 above.
There are $\binom{k}{2}$ pairs of conics, hence the number of tacnodes
obtained as intersection of two conics is $t^{\prime}=k^{2}-k$. Recall also
that by [7] one has $k\in\\{2,3,4\\}$. The only possibility to have $t=k^{2}$
tacnodes is that the unique line, call it $L$, is tangent to all the conics
simultaneously. In the light of Lemma 5.1, $L$ cannot be tangent to each conic
in $\mathcal{CL}$. This completes the proof in this case.
Case $d=2$. The case $k=1$ is possible, see Example 4.13. We show that the
cases $k\geq 2$ are impossible. One has $n_{2}+n_{3}\leq 1$ and using (5.3)
above, we get
$t\geq k^{2}-k+(2k-\frac{1}{4}-n_{3}).$
When $k\geq 2$, one has $2k-\frac{1}{4}-n_{3}>2$, and hence
$t\geq k^{2}-k+3.$
Combining the condition $n_{2}+n_{3}\leq 1$ and inequality (5.6), we get
$m_{3}=m_{4}=0$. It means, in particular, that $m_{2}=\binom{k}{2}$,
$t^{\prime}=k^{2}-k$, and $k\in\\{2,3\\}$. An easy inspection, performed along
the lines of Lemma 5.1, shows that a line can add at most one tacnode, and in
this case one must have
$t\leq k^{2}-k+2,$
hence we have a contradiction.
Case $d=3$. Observe that by Proposition 4.7 we have $k\in\\{1,2,3\\}$. The
case $k=1$ is possible, see Example 4.14. We show now that $k=3$ is
impossible. In this case one has $n_{2}+n_{3}\leq 3$ and using formula (5.3)
above, we get
(5.7) $t\geq k^{2}-k+(3k-n_{3}).$
Then (5.6) and $n_{2}+n_{3}\leq 3$ lead us to $m_{3}\leq 1$ and $m_{4}=0$. If
$m_{3}=0$, then we use Lemma 5.1 to get a contradiction since $3k-n_{3}\geq
6>d=3$. Assume that $m_{3}=1$, so let’s say that $|C_{1}\cap C_{2}|=3$. The
maximal number of tacnodes coming from the contact of $3$ lines and conics in
such an arrangement is $5$. Indeed, two lines can be tangents to both $C_{1}$
and $C_{2}$, giving $4$ tacnodes, and the third line can be tangent to at most
one conic in $\mathcal{CL}$. It follows that
$t\leq 2m_{2}+m_{3}+5=k(k-1)+4.$
Since $k=3$ one has $9-n_{3}\geq 6>4$, and this contradiction proves the
claim.
Consider the remaining case $k=2$. First we assume that $m_{3}=0$. In order
to have $2d-n_{3}\leq d$ (by Lemma 5.1 each of the $d=3$ lines adds at most one tacnode
to the $t^{\prime}=k^{2}-k$ tacnodes coming from the conics, while (5.7) gives
$t\geq k^{2}-k+2d-n_{3}$), we must have $n_{3}=3$ and hence $n_{2}=0$.
However, any line, say $L_{1}$, is tangent to one of the two conics, say to
$C_{1}$, and it is secant to the other, so $L_{1}\cap C_{2}=\\{p,q\\}$. The point
$p$ cannot be a node, so one of the remaining lines, say $L_{2}$, passes
through $p$ and is tangent to $C_{1}$, and the other line, say $L_{3}$,
passes through $q$ and is again tangent to $C_{1}$. The third triple point
should be
$r=C_{2}\cap L_{2}\cap L_{3}.$
This configuration is geometrically realizable; for instance, we can take
$\mathcal{CL}_{7}:(x^{2}+y^{2}-z^{2})\cdot(x^{2}+y^{2}-4z^{2})\cdot(x-z)\cdot\bigg{(}y+\frac{\sqrt{3}}{3}x+\frac{2\sqrt{3}}{3}z\bigg{)}\cdot\bigg{(}y-\frac{\sqrt{3}}{3}x-\frac{2\sqrt{3}}{3}z\bigg{)}=0.$
Moreover, it is unique up to a projective transformation. Indeed, we use again
[12, Proposition 3] which implies that one may assume that the two conics
$C_{1}$ and $C_{2}$ are concentric circles, $C_{1}$ with radius $r_{1}=1$, and
$C_{2}$ with radius $r_{2}\in\mathbb{C}$, $r_{2}\neq 0,\pm 1$. Then we note
that a triangle in which the center of the inscribed circle coincides with the
center of the circumscribed circle is necessarily equilateral. This implies
that $r_{2}=2$, and the corresponding equation is given above; in the affine
plane $z=1$, the inscribed circle touches the sides of the equilateral triangle at
the points $(1,0)$ and $(-\frac{1}{2},\pm\frac{\sqrt{3}}{2})$. Using `Singular` we can compute the
minimal free resolution of the Milnor algebra of $\mathcal{CL}_{7}$; it has
the following form
$0\rightarrow S^{2}(-9)\rightarrow S^{3}(-6)\rightarrow S\rightarrow
M(f)\rightarrow 0,$
so $\mathcal{CL}_{7}$ is free.
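As a quick cross-check (ours, not part of the original argument), one can compare the total Tjurina number computed from the singularities with the value forced by freeness. Assuming the standard local Tjurina numbers ($\tau=1$ for a node, $\tau=3$ for a tacnode, $\tau=4$ for an ordinary triple point) and the fact that a free curve of degree $m$ with exponents $(d_{1},d_{2})$ has $\tau=(m-1)^{2}-d_{1}d_{2}$, the resolution above gives exponents $(3,3)$ and
$\tau(\mathcal{CL}_{7})=n_{2}+3t+4n_{3}=0+3\cdot 5+4\cdot 3=27=(7-1)^{2}-3\cdot 3,$
as expected.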
Finally, we consider the case $k=2$ and $m_{3}=1$, i.e., the two conics are
tangent at one point (and meet transversally at two further points). In order to
satisfy (5.7), we have to use the $3$ lines to create many tacnodes and many triple
points; in fact we need $t+n_{3}\geq 8$ such singular points. We can use the first
two lines to get two tangents to both conics, hence $4$ new tacnodes. The third line
can create either $2$ triple points and $2$ double points, one triple point and $4$
nodes, or a tacnode and $4$ nodes. None of these possibilities satisfies
$t+n_{3}\geq 8$, and this completes the proof. ∎
Figure 2. The unique free arrangement of $2$ conics and $3$ lines with $t=5$
and $n_{3}=3$.
Now we are going to discuss non-freeness for conic-line arrangements with
$d\in\\{4,5,6,7\\}$ – please bear in mind that the upper bound on the number
of lines follows from Proposition 4.7.
###### Proposition 5.3.
Let $\mathcal{CL}$ be an arrangement of $d=4$ lines and $k\geq 1$ smooth
conics having only nodes, tacnodes, and ordinary triple points as
singularities. Then $\mathcal{CL}$ is never free.
###### Proof.
If we assume that ${\mathcal{C}}$ is free, then formula (5.2) implies that
(5.8) $n_{2}+n_{3}\leq 4$
and by (5.3) we obtain
(5.9) $t\geq k^{2}-k+4k+\frac{3}{4}-n_{3}.$
Using (5.6), it follows that
(5.10) $2m_{3}+4m_{4}\leq 4.$
In particular, we have $m_{3}+m_{4}\leq 2$. Consider the graph
$\Gamma({\mathcal{C}})$ associated to the conics in ${\mathcal{C}}$, whose
vertices are these conics, and two vertices $C_{i}$ and $C_{j}$ are connected
by an edge (resp. by a double edge) if and only if $|C_{i}\cap C_{j}|=3$
(resp. $|C_{i}\cap C_{j}|=4$) – please consult [12] for more details. It
follows that this graph has at most two edges. To get the maximum number of
tacnodes, Lemma 5.1 implies that the $4$ lines have to be tangent to conics that
are connected by an edge to another conic. There are either $3$ conics in a
chain connected by edges, or two pairs of conics, each pair connected by an
edge. A simple analysis shows that the maximum number of tacnodes created in
this way is $8$. Hence the number of tacnodes satisfies
(5.11) $t\leq t^{\prime}+8=k^{2}-k-m_{3}-2m_{4}+8\leq k^{2}-k+8.$
Combining this with inequality (5.9) we get
$4k-3\leq 4k+1-n_{3}\leq 8,$
which implies $k\leq 2$.
Case: $k=2$.
If $m_{2}=1$, then any line is tangent to at most one conic, by Lemma 5.1,
hence we have $t\leq 6$ and (5.9) yields a contradiction.
If $m_{3}=1$, the inequality (5.9) implies $t+n_{3}\geq 11$. We can use $2$
lines, say $L_{1}$ and $L_{2}$ to create $4$ new tacnodes ($5$ in total) and a
new node ($3$ in total). The third line $L_{3}$ can produce $2$ triple points
if it passes through the $2$ nodes in the intersection $C_{1}\cap C_{2}$, but
it will create $2$ new nodes as the intersections with $L_{1}$ and $L_{2}$.
This stands in contradiction with (5.8). If $L_{3}$ passes through the node
$L_{1}\cap L_{2}$, then it will be secant to both $C_{1}$ and $C_{2}$, thus
creating $4$ new nodes, again it stands in contradiction with (5.8). Finally,
if $L_{3}$ is tangent to one of the conics, it would be secant to the other,
creating $4$ new nodes, $2$ on that second conic and $2$ at the intersections with
$L_{1}$ and $L_{2}$, a contradiction with (5.8).
If $m_{4}=1$, $C_{1}\cap C_{2}$ gives rise to $4$ nodes. As soon as we add $2$
lines, a new node will be created (and some old nodes transformed into triple
points), but in any case we get a contradiction with (5.8).
Hence the case $k=2$ is impossible.
Case: $k=1$.
Since $k=1$, we clearly have $t\leq 4$. The only possibility to have (5.9)
satisfied is that $n_{3}\geq 1$. If we look at the (sub)arrangement
$\mathcal{A}$ of the $4$ lines in $\mathcal{CL}$, we have $2$ cases to
discuss.
Case 1: $\mathcal{A}$ has $3$ concurrent lines.
Let $L_{1},L_{2}$ and $L_{3}$ be the lines meeting at a point $p$, and $L_{4}$
is a secant line, meeting $L_{j}$ at $q_{j}$ for $j=1,2,3$.
If $n_{3}=1$, we get from (5.9) that $t\geq 4$, which is impossible. Indeed,
only two lines among $L_{1},L_{2}$ and $L_{3}$ can be tangent to the conic, so
we get $t\leq 3$.
If $n_{3}=2$, the second triple point is at the intersection of a secant line
to the conic passing through $p$, say $L_{2}$, the conic and the line $L_{4}$.
In this case, we get from (5.9) that $t\geq 3$, which is impossible, since
$L_{2}$ and $L_{4}$ cannot be tangent to the conic.
If $n_{3}=3$, the new triple point is at the intersection of a new secant line
to the conic passing through $p$, say $L_{1}$, the conic and the line $L_{4}$.
In this case, we get from (5.9) that $t\geq 2$, which is impossible, since
$L_{1}$, $L_{2}$ and $L_{4}$ cannot be tangent to the conic.
Finally, if $n_{3}=4$, then the 3 triple points distinct from $p$ are all
situated on the conic and on the line $L_{4}$, which is impossible.
Case 2: $\mathcal{A}$ is a nodal arrangement.
It follows that now all the triple points are on the conic. More precisely,
the conic passes through $n_{3}$ nodes of the line arrangement $\mathcal{A}$.
Hence $\mathcal{CL}$ has $n_{3}$ triple points and at least $6-n_{3}$ double
points. It follows that $n_{2}+n_{3}\geq 6$, a contradiction with (5.8). ∎
###### Proposition 5.4.
Let $\mathcal{CL}$ be an arrangement of $d=5$ lines and $k\geq 1$ smooth
conics having only nodes, tacnodes, and ordinary triple points. Then
$\mathcal{CL}$ is never free.
###### Proof.
Assume that $\mathcal{CL}$ is free. Using Proposition 4.7, we get
$m=2k+5\leq 9$, and hence $k\leq 2$. Then formula (5.2) implies that
(5.12) $n_{2}+n_{3}\leq 6$
and by (5.3) we get
(5.13) $t\geq k^{2}-k+5k+2-n_{3}.$
Case $k=2$.
If $m_{2}=1$, then any line is tangent to at most one conic, which follows
from Lemma 5.1, so we have $t\leq 7$ and (5.13) yields a contradiction.
If $m_{3}=1$, the inequality (5.13) implies $t+n_{3}\geq 14$. We can use $2$
lines, say $L_{1}$ and $L_{2}$, to create $4$ new tacnodes ($5$ in total) and
a new node ($3$ in total). The third line $L_{3}$ can produce $2$ triple
points if it passes through the $2$ nodes in the intersection $C_{1}\cap
C_{2}$, but will create $2$ new nodes as intersections with $L_{1}$ and
$L_{2}$. This means that the remaining $2$ lines should not create any new node
or triple point, which is impossible. If $L_{3}$ passes through the node
$L_{1}\cap L_{2}$, then it will be secant to both $C_{1}$ and $C_{2}$, thus
creating $4$ new nodes, a contradiction with (5.12). Finally, if $L_{3}$ is
tangent to one of the conics, it would be secant to the other, creating $4$
new nodes, $2$ on that second conic and $2$ at the intersections with $L_{1}$ and
$L_{2}$. This contradicts (5.12).
If $m_{4}=1$, $C_{1}\cap C_{2}$ gives rise to 4 nodes. As soon as we add $3$
lines, at least $3$ new nodes will be created (and some old nodes transformed
into triple points), but in any case we get a contradiction with (5.12).
Hence the case $k=2$ is impossible.
Case: $k=1$.
The above shows that the only possibility to have (5.12) and (5.13) satisfied
is that $k=1$ and $n_{3}\geq 2$. If we look at the (sub)arrangement
$\mathcal{A}$ of the $5$ lines in $\mathcal{CL}$, there are $3$ cases to
discuss.
Case 1: $\mathcal{A}$ has $2$ triple points.
If we denote by $p_{1}$ and $p_{2}$ the two triple points, then the line
determined by these points must be in $\mathcal{A}$. We denote this line by
$L$. The other $2$ lines passing through $p_{1}$ ($p_{2}$, respectively) are
denoted by $L_{1}$ and $L_{1}^{\prime}$ ($L_{2}$ and $L_{2}^{\prime}$,
respectively). There are $4$ nodes in the arrangement $\mathcal{A}$, the
intersections $L_{1}\cap L_{2}$, $L_{1}\cap L_{2}^{\prime}$,
$L_{1}^{\prime}\cap L_{2}$, and $L_{1}^{\prime}\cap L_{2}^{\prime}$.
If $n_{3}=2$, we get from (5.13) that $t\geq 5$, which is impossible since
$L$, $L_{1}$ and $L_{1}^{\prime}$ cannot all be tangent to the conic.
If $n_{3}=3$, the new triple point $q_{1}$ is at the intersection of two
secant lines with the conic. In this case, we get from (5.13) that $t\geq 4$,
which is impossible, since the $2$ lines meeting at $q_{1}$ cannot be tangent
to the conic.
If $n_{3}=4$, then there are $2$ triple points $q_{1}$ and $q_{2}$ situated on
the conic, coming from the intersection of at least $3$ lines from
$\mathcal{A}$ with the conic. In this case, we get from (5.13) that $t\geq 3$,
which is impossible since the lines passing through $q_{1}$ or $q_{2}$ cannot
be tangent to the conic.
If $n_{3}=5$, then there are $3$ triple points $q_{1}$, $q_{2}$ and $q_{3}$
situated on the conic, coming from the intersection of at least $4$ lines from
$\mathcal{A}$ with the conic. In this case, we get from (5.13) that $t\geq 2$,
which is impossible since the lines passing through $q_{1}$, $q_{2}$ or
$q_{3}$ cannot be tangent to the conic.
Finally, if $n_{3}=6$, then $n_{2}=0$, the conic, call it $Q$, passes through
all the $4$ nodes of $\mathcal{A}$ and it is tangent to the line $L$. The
conics passing through the $4$ nodes form a pencil of conics, determined by
the degenerate conics $Q_{1}$ and $Q_{2}$, where $Q_{1}$ (resp. $Q_{2}$) is
the union $L_{1}\cup L_{1}^{\prime}$ (resp. $L_{2}\cup L_{2}^{\prime}$). Note
that these two degenerate conics $Q_{1}$ and $Q_{2}$ meet the line $L$ in one
point, namely $Q_{1}\cap L=p_{1}$ and $Q_{2}\cap L=p_{2}$. The conic $Q$ is in
this pencil ${\alpha}Q_{1}+{\beta}Q_{2}$, and meets the line $L$ in one point
as well. This is a contradiction, since in a pencil only two members can meet
a given line in a single point. To check this claim, we assume that $L$ is
given by $x=0$. The restriction of ${\alpha}Q_{1}+{\beta}Q_{2}$ to $L$ is a
quadratic form in $y,z$ whose coefficients are linear forms in ${\alpha},{\beta}$;
the corresponding member of the pencil meets $L$ in a single point exactly when the
discriminant of this quadratic form vanishes, and the discriminant is itself a
quadratic form in ${\alpha},{\beta}$, hence vanishes for at most two members of the pencil.
Case 2: $\mathcal{A}$ has a triple point.
It follows that $\mathcal{A}$ has $7$ nodes. To create $n_{3}$ triple points
in $\mathcal{CL}$, the conic passes through $n_{3}-1$ nodes of the line
arrangement ${\mathcal{A}}$. Hence $\mathcal{CL}$ has $n_{3}$ triple points
and at least $7-(n_{3}-1)$ double points. It follows that $n_{2}+n_{3}\geq 8$,
a contradiction with respect to (5.12).
Case 3: $\mathcal{A}$ is a nodal arrangement.
It follows that now $\mathcal{A}$ has $10$ nodes. To create $n_{3}$ triple
points in $\mathcal{CL}$, the conic passes through $n_{3}$ nodes of the line
arrangement $\mathcal{A}$. Hence $\mathcal{CL}$ has $n_{3}$ triple points and
at least $10-n_{3}$ double points. It follows that $n_{2}+n_{3}\geq 10$, a
contradiction with (5.12).
∎
###### Proposition 5.5.
Let $\mathcal{CL}$ be an arrangement of $d=6$ lines and $k\geq 1$ smooth
conics having only nodes, tacnodes, and ordinary triple points. Then
$\mathcal{CL}$ is never free.
###### Proof.
If we assume that $\mathcal{CL}$ is free, then using Proposition 4.7 we get
$m=2k+6\leq 9$, and hence $k=1$. Then the formula (5.2) implies that
(5.14) $n_{2}+n_{3}\leq 7$
and by (5.3) we have
(5.15) $t\geq 10-n_{3}.$
Since $t\leq 6$, the only possibility to have (5.14) and (5.15) satisfied
is $n_{3}\geq 4$. Consider the (sub)arrangement $\mathcal{A}$ of $6$ lines in
$\mathcal{CL}$. Then $\mathcal{A}$ has only double and triple points, and let
us denote by $n_{2}^{\prime}$ and $n_{3}^{\prime}$ their respective numbers.
It is known that
(5.16) $n_{2}^{\prime}+3n_{3}^{\prime}=\binom{6}{2}=15.$
The conic in $\mathcal{CL}$ has to pass through $n_{3}-n_{3}^{\prime}$ nodes
of $\mathcal{A}$, and hence $n_{2}\geq n_{2}^{\prime}-(n_{3}-n_{3}^{\prime})$.
It follows that
$n_{2}+n_{3}\geq n_{2}^{\prime}+n_{3}^{\prime}=15-2n_{3}^{\prime}.$
It is well-known that the maximal number of triple points $n_{3}^{\prime}$ in
this case is $4$, and the arrangement is projectively equivalent to
${\mathcal{A}}_{0}:(x^{2}-y^{2})(x^{2}-z^{2})(y^{2}-z^{2})=0.$
Combining this fact with the inequality (5.14), it follows that we are exactly
in this case, that is $n_{2}^{\prime}=3$ and $n_{3}^{\prime}=4$. Moreover, the
intersection between the conic and the lines in $\mathcal{A}$ should not add
any new point to the set of seven multiple points of $\mathcal{A}$. Note that
each line contains exactly one double point. This would imply that the conic
is tangent to the $6$ lines (at new points, giving rise to tacnodes), but this is
clearly impossible as we have seen above ($3$ concurrent lines cannot all be
tangent to the same conic). ∎
###### Proposition 5.6.
Let $\mathcal{CL}$ be an arrangement of $d=7$ lines and $k\geq 1$ smooth
conics having only nodes, tacnodes, and ordinary triple points as
singularities. Then $\mathcal{CL}$ cannot be free.
###### Proof.
If we assume that $\mathcal{CL}$ is free, then using Proposition 4.7 we get
$m=2k+7\leq 9$, and hence $k=1$. Then the formula (5.2) implies that
(5.17) $n_{2}+n_{3}\leq 9$
and the formula (5.3) gives
(5.18) $t\geq 13-n_{3}.$
Since $t\leq 7$, the only possibility to have (5.17) and (5.18) satisfied
is $n_{3}\geq 6$. Consider the (sub)arrangement $\mathcal{A}$ of $7$ lines in
$\mathcal{CL}$. Then $\mathcal{A}$ has only double and triple points, and let
us denote by $n_{2}^{\prime}$ and $n_{3}^{\prime}$ their respective numbers.
By the combinatorial count, we have
(5.19) $n_{2}^{\prime}+3n_{3}^{\prime}=\binom{7}{2}=21.$
The conic in $\mathcal{CL}$ has to pass through $n_{3}-n_{3}^{\prime}$ nodes
of $\mathcal{A}$, and hence $n_{2}\geq n_{2}^{\prime}-(n_{3}-n_{3}^{\prime})$.
It follows that
$n_{2}+n_{3}\geq n_{2}^{\prime}+n_{3}^{\prime}=21-2n_{3}^{\prime}.$
Hence $n_{3}^{\prime}\geq 6$. Using the classification of line arrangements
with ${\rm mdr}(f)\leq 2$ presented in [1], it follows that our arrangement
$\mathcal{A}$ satisfies ${\rm mdr}(f)\geq 3$. A calculation of the total
Tjurina number, under the assumption that $n_{3}^{\prime}\geq 6$, shows us
that $n_{3}^{\prime}=6$, $n_{2}^{\prime}=3$, ${\rm mdr}(f)=3$, and the
arrangement $\mathcal{A}$ has to be free. It follows that the arrangement
$\mathcal{A}$ is projectively equivalent to
${\mathcal{A}}_{0}:xyz(x+y)(x+z)(y-z)(x+y+z)=0,$
see [1, Theorem 2.6], where this arrangement occurs as IIIc. Moreover, exactly
as in the previous proof, the intersections between the conic and the lines in
$\mathcal{A}$ should not add any new point to the set of nine multiple points
of ${\mathcal{A}}$. Note that $4$ of the $7$ lines, denoted here by
$L_{2},L_{3},L_{4}$, and $L_{5}$, where $L_{j}$ means the line given by the
$j$-th factor in the equation of $\mathcal{A}$, contain each $3$ triple
points, while the remaining $3$ lines, $L_{1},L_{6}$ and $L_{7}$, contain each
$2$ triple points and $2$ double points. This implies that the conic is
tangent to the $4$ lines $L_{2},L_{3},L_{4}$, and $L_{5}$ (at new points, giving
rise to tacnodes), and passes through the remaining $3$ double points, located
at
$(0:1:1),\ (0:-1:1)\text{ and }(-2:1:1).$
A direct computation shows that such a conic does not exist. ∎
In conclusion, we can state the following complete classification result.
###### Theorem 5.7.
Let $\mathcal{CL}$ be an arrangement of $d\geq 1$ lines and $k\geq 1$ smooth
conics having only nodes, tacnodes, and ordinary triple points as
singularities. Then $\mathcal{CL}$ is free if and only if one of the following
cases occur: In each case we list the numbers $n_{2}$, $t$, and $n_{3}$ of
nodes, tacnodes, and ordinary triple points, respectively.
1. (1)
$d=k=1$ and $\mathcal{CL}$ consists of a smooth conic and a tangent line. In
this case, $n_{2}=n_{3}=0$, $t=1$.
2. (2)
$d=2$, $k=1$ and $\mathcal{CL}$ consists of a smooth conic and two tangent
lines. In this case $n_{2}=1$, $n_{3}=0$, $t=2$.
3. (3)
$d=3$, $k=1$ and either $\mathcal{CL}$ is a smooth conic inscribed in a
triangle, or $\mathcal{CL}$ is a smooth conic circumscribed about a triangle. In
the first case we have $n_{2}=3$, $n_{3}=0$, $t=3$, and in the second case we
have $n_{2}=t=0$, $n_{3}=3$.
4. (4)
$d=3$, $k=2$ and $\mathcal{CL}$ consists of a triangle $\Delta$, a smooth
conic inscribed in $\Delta$, and another smooth conic circumscribed about
$\Delta$. In this case, $n_{2}=0$, $n_{3}=3$, $t=5$.
In particular, a free conic-line arrangement having only nodes, tacnodes, and
ordinary triple points is determined up to projective equivalence by the
numerical data $n_{2}$, $n_{3}$ and $t$.
Before we formulate the final corollary for this section, we need the
following notations inspired by [11].
###### Definition 5.8.
We say that two conic-line arrangements in $\mathbb{P}^{2}_{\mathbb{C}}$ with
nodes, tacnodes, and ordinary triple points have the same weak combinatorics
if these arrangements have the same list of invariants $(m;n_{2},t,n_{3})$
with $m$ being the degree of the arrangements.
###### Conjecture 5.9 (Numerical Terao’s Conjecture).
Let $\mathcal{CL}_{1},\mathcal{CL}_{2}\subset\mathbb{P}^{2}_{\mathbb{C}}$ be
two conic-line arrangements with nodes, tacnodes, and ordinary triple points.
If $\mathcal{CL}_{1}$ is free and $\mathcal{CL}_{1}$,
$\mathcal{CL}_{2}$ have the same weak combinatorics, then $\mathcal{CL}_{2}$
is also free.
###### Corollary 5.10.
Numerical Terao’s Conjecture holds for conic-line arrangements with nodes,
tacnodes, and ordinary triple points.
###### Proof.
Note that the equation (3.4) implies that the list of invariants
$(m;n_{2},t,n_{3})$ determines the number $k$ of conics and the number
$d=m-2k$ of lines in the arrangements $\mathcal{CL}_{1}$ and
$\mathcal{CL}_{2}$. Then Theorem 5.7 implies that $k=1$ or $k=2$. In each
case, using the fact that $k$ and $d$ are very small, it is easy to see that
up to a projective transformation, the possibilities for $\mathcal{CL}_{2}$
are exactly those listed in Theorem 5.7, and hence the arrangement
$\mathcal{CL}_{2}$ is free as well. ∎
###### Remark 5.11.
Numerical Terao’s Conjecture can be formulated, in principle, for all reduced
singular plane curves. As was shown in [11], Numerical Terao’s Conjecture
fails for some (triangular) line arrangements. On the other hand, it holds for
line arrangements having only points of multiplicity $\leq 3$. Indeed,
Proposition 4.7 shows that such a free line arrangement $\mathcal{A}:f=0$ has
to satisfy $m=\deg f\leq 9$. Then Theorem 4.10 implies that either $d_{1}={\rm
mdr}(f)\leq 3$ or $d_{1}={\rm mdr}(f)=4$ and $d=9$. Note that if
$\mathcal{A}^{\prime}:f^{\prime}=0$ has the same weak combinatorics as
${\mathcal{A}}:f=0$, then $\tau({\mathcal{A}})=\tau({\mathcal{A}}^{\prime})$,
which implies that $r^{\prime}={\rm mdr}(f^{\prime})\leq{\rm mdr}(f)$ using
the maximality of the Tjurina number for free reduced curves, see [8]. In the
first case, one concludes using the complete classification of line
arrangements with ${\rm mdr}(f)\leq 3$, see [1]. In the second case, we use
again the maximality of the Tjurina number of free curves according to [8] and
we conclude that $\tau({\mathcal{A}})=\tau({\mathcal{A}}^{\prime})=48$,
$n_{2}=0$ and $n_{3}=12$. The only line arrangement with these invariants is
the line arrangement in Example 3.2 above, which is indeed free with the
exponents $(4,4)$.
## Funding
The first author was partially supported by the Romanian Ministry of Research
and Innovation, CNCS - UEFISCDI, Grant PN-III-P4-ID-PCE-2020-0029, within
PNCDI III. The second author was partially supported by the National Science
Center (Poland) Sonata Grant Nr 2018/31/D/ST1/00177. We want to thank an
anonymous referee for comments on this paper.
## Data availability
Not applicable as the results presented in this manuscript rely on no external
sources of data or code.
## References
* [1] R. Burity and S. Tohaneanu, Logarithmic derivations associated to line arrangements. J. Algebra 581: 327 – 352 (2021).
* [2] I. Cheltsov, Log canonical thresholds on hypersurfaces. Sb. Math. 192(7–8): 1241 – 1257 (2001).
* [3] W. Decker, G.-M. Greuel, G. Pfister, and H. Schönemann, Singular 4-1-1 — A computer algebra system for polynomial computations. http://www.singular.uni-kl.de, 2018.
* [4] A. Dimca and E. Sernesi, Syzygies and logarithmic vector fields along plane curves. (Syzygies et champs de vecteurs logarithmiques le long de courbes planes.) J. Éc. Polytech., Math. 1: 247 – 267 (2014).
* [5] A. Dimca, Freeness versus Maximal Global Tjurina Number for Plane Curves. Math. Proc. Camb. Philos. Soc. 163(1): 161 – 172 (2017).
* [6] A. Dimca and G. Sticlaru, Free and Nearly Free Curves vs. Rational Cuspidal Plane Curves. Publ. Res. Inst. Math. Sci. 54(1): 163 – 179 (2018).
* [7] A. Dimca, M. Janasz, P. Pokora, On plane conic arrangements with nodes and tacnodes. arXiv:2108.04004.
* [8] A. A. du Plessis and C.T.C. Wall, Application of the theory of the discriminant to highly singular plane curves, Math. Proc. Camb. Phil. Soc. 126: 256 – 266 (1999).
* [9] V. Kulikov, Mixed Hodge Structures and Singularities. Cambridge Tracts in Math. 132, Cambridge University Press, Cambridge (1998).
* [10] A. Langer, Logarithmic orbifold Euler numbers with applications. Proc. London Math. Soc. 86: 358 – 396 (2003).
* [11] S. Marchesi and J. Vallès, Triangular arrangements on the projective plane. arXiv:1903.08885.
* [12] G. Megyesi, Configurations of conics with many tacnodes. Tohoku Math. J., II. Ser. 52(4): 555 – 577 (2000).
* [13] P. Pokora and T. Szemberg, Conic-line arrangements in the complex projective plane. arXiv:2002.01760.
* [14] H. Schenck and S. Tohaneanu, Freeness of conic-line arrangements in $\mathbb{P}^{2}$. Comment. Math. Helv. 84(2): 235 – 258 (2009).
* [15] H. Schenck, H. Terao, M. Yoshinaga, Logarithmic vector fields for curve configurations in $\mathbb{P}^{2}$ with quasihomogeneous singularities. Math. Res. Lett. 25: 1977–1992 (2018).
* [16] J. H. M. Steenbrink, Semicontinuity of the singularity spectrum. Invent. Math. 79: 557 – 565 (1985).
* [17] A. Varchenko, Semicontinuity of the spectrum and an upper bound for the number of singular points of the projective hypersurface. (Russian) Dokl. Akad. Nauk SSSR 270(6): 1294 – 1297 (1983).
Dept. of Informatics, University of Oslo
# Ready, set, Go!
Data-race detection and the Go language
Supported by the bilateral project UTF-2018-CAPES-Diku/10001 “Modern Refactoring”.
Daniel Schnetzer Fava and Martin Steffen
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Data races are often discussed in the context of lock acquisition and release,
with race-detection algorithms routinely relying on _vector clocks_ as a means
of capturing the relative ordering of events from different threads. In this
paper, we present a data-race detector for a language with channel
communication as its sole synchronization primitive, and provide a semantics
directly tied to the _happens-before_ relation, thus forgoing the notion of
vector clocks.
## 1 Introduction
One way of dealing with complexity is by partitioning a system into
cooperating subcomponents. When these subcomponents compete for resources,
coordination becomes a prominent goal. One common programming paradigm is to
have threads cooperating around a pool of shared memory. In this case,
coordination involves, for example, avoiding conflicting accesses to memory.
Two concurrent accesses constitute a _data race_ if they reference the same
memory location and at least one of the accesses is a write. Because data
races can lead to counterintuitive behavior, it is important to detect them.
The problem of data-race detection in shared memory systems is well studied in
the context of lock acquisition and release. When it comes to message passing,
the problem of concurrent accesses to _channels_ , in the absence of shared
memory, is also well studied—the goal, in these cases, is to achieve
determinism rather than race-freedom [6, 7, 38]. What is less prominent in the
race-detection literature is the study of channel communication as the
synchronization primitive for shared memory systems. In this paper, we present
exactly that; a dynamic data-race detector for a language in the style of Go,
featuring channel communication as means of coordinating accesses to shared
memory.
We fix the syntax of our calculus in Section 3 and present a corresponding
operational semantics. The configurations of the semantics keep track of
memory events (i.e. of read and write accesses to shared variables) such that
the semantics can be used to detect races. A proper book-keeping of events
also involves tracking _happens-before_ information. In the absence of a
global clock, the happens-before relation is a vehicle for reasoning about the
relative order of execution of different threads [18]. We describe the race
detection task and present a framework, called Grace [9], that is based on
what we call _happens-before sets_. Unlike other race detectors, which
often employ vector clocks (VCs) as a mechanism for capturing the happens-before
relation, we tie our formalization more closely to the concept of
happens-before. The proposed approach, based on what we call happens-before
sets, allows for garbage collection of “stale” memory access information that
would otherwise be tracked. Although, in the worst case, the proposed detector
requires a larger foot-print when compared to VC-based implementations, we
conjecture the existence of a hybrid approach that can offer benefits from
both worlds.
Our race detector is built upon a previous result [10], where we formalize a
weak memory model inspired by the Go specification [12]. The core of that paper
was a proof of the DRF-SC guarantee, meaning we proved that the proposed
relaxed memory model behaves Sequentially Consistently (SC) when running Data-
Race Free (DRF) programs. The proof hinges on the fact that, in the absence of
races, all threads agree on the contents of memory. The scaffolding used in
the proof contains the ingredients for the race detector presented in this
paper. We should point out, however, that the operational semantics presented
here and used for race detection is _not_ a weak semantics.111Note that while
the mentioned semantics of [10] differs from the one presented here, both
share some commonalities. Both representations are based on appropriately
recording information of previous read and write events in their run-time
configuration. In both versions, a crucial ingredient of the book-keeping is
connecting events in happens-before relation. The purpose of the book-keeping
of events, however, is different: in [10], the happens-before relation serves
to operationally formalize the weak memory model (corresponding roughly to
PSO) in the presence of channel communication. In the current paper, the same
relation serves to obtain a race detector. Both versions of the semantics are
connected by the DRF-SC result, as mentioned. Apart from the additional
information for race detection, the semantics is “strong” in that it
formalizes a memory guaranteeing _sequential consistency._ To focus on a form
of strong memory is not a limitation. Since we have established that a
corresponding weak semantics enjoys the crucial DRF-SC property [10], the
strong and weak semantics agree up to the first encountered race condition.
Given that even a racy program behaves sequentially consistently up to the point
at which the first data-race is encountered, a complete race detector can
safely operate under the assumption of sequential consistency.
The remainder of the paper is organized as follows. Section 2 presents
background information on data races and synchronization via message passing
that are directly related to the formalization of our approach to race
detection. Section 3 formalizes race detection in the context of channel
communication as sole synchronization mechanism. We turn our attention to the
issue of efficiency in Section 4. Section 5 gives a detailed comparison of our
algorithm and VC-based algorithms for the acquire-release semantics. Section 6
puts our work in the perspective of trace theory. Section 7 examines related
work. Section 8 provides a conclusion and touches on future work.
## 2 Background
##### Read and write conflicts.
Memory accesses conflict if they target the same location and at least one of
the accesses is a _write_ —there are no read-read conflicts. A data race
consists of conflicting accesses that are unsynchronized.
Listing 1: Program with race condition. [12]
var a string

func main() {
	go func() { a = "hello" }()
	print(a)
}
Take the Go code of Listing 1 as an example. There, the main function invokes
an anonymous function; this anonymous function sets the global variable “a” to
“hello”. Note, however, that the call is prepended with the keyword go. When
this keyword is present in a function invocation, Go spawns a new thread (or
goroutine), and the caller continues execution without waiting for the callee
to return. The main and the anonymous functions access the same shared
variable in a conflicting manner (i.e. one of the accesses is a write). Since
both the main and the anonymous functions run in parallel and no
synchronization is used (as evidenced by the lack of channel communication),
the two accesses are also concurrent. This allows us to conclude that this
program has a race.
A data race _manifests_ itself when an execution step is immediately followed
by another and the two steps are conflicting. This definition is the closest
one can get to a notion of simultaneity in an operational semantics, where
memory interactions are modeled as instantaneous atomic steps. While manifest
races are obvious and easy to account for, races in general can involve
accesses that are arbitrarily far apart in a linear execution. A “memory-less”
detector can fail to report races, for example non-manifest races, that could
otherwise be flagged by more sophisticated race detectors. The ability to flag
non-manifest data-races is correlated with the amount of information kept and
how long this information is kept. In general, recording more
information and storing it for longer leads to higher degrees of
“completeness” at the expense of higher run-time overheads.222It should go
without saying that observing one execution as being race free is not enough
to assert race-freedom of the program, even if one has observed a complete
trace of a terminating run of a program. Completeness can at best be expected
with respect to alternative schedules or linearizations of a given execution.
We break down the notions of read-write and write-write conflicts into a more
fine-grained distinction. Inspired by the notion of data hazards in the
computer architecture literature, we break down read-write conflicts into
read-after-write (RaW) and write-after-read (WaR) conflicts. To keep
consistent with this nomenclature, we refer to write-write conflicts as write-
after-write (WaW).333The mentioned “temporal” ordering and the use of the word
“after” refers to the occurrence of events in the trace or execution of the
running program. It is incorrect to conflate the concept of happens-before
with the ordering of occurrences in a trace. For instance, in a RaW situation,
the read step occurs after a write in an execution, i.e., the read is
mentioned after the write in the linearization. This order of occurrence does
_not_ mean, however, that the read happens-after the write or, conversely, the
write happens-before the read. Actually, for a RaW race (same as for the other
kinds of races), the read occurs after the write but the accesses are
_concurrent_ , which means that they are _unordered_ as far as the happens-
before relation is concerned. Going back to the example in Listing 1, there
are two possible executions: one in which the spawned goroutine writes “hello”
to the shared variable after the main function prints it, and another
execution in which the print occurs after the writing of the variable. The
first execution illustrates a write-after-read race, while the first
illustrates a read-after-write. Note that this example does not contain a
write-after-write race.
We make the distinction between the detection of after-write races and the
detection of write-after-read ones. As we will see in Section 3.3, the
detection of after-write races can be done with little overhead. The detection
of after-read races, however, cannot.
When reading or writing a variable, it must be checked that conflicting
accesses _happened-before_ the current access. The check must happen from the
perspective of the thread attempting the access. In other words, the question
of whether an event occurred in the “definite past” (i.e., whether an event is
in happened-before relation with “now”) is _thread-local_ ; threads can have
different views on whether an event belongs to the past. This thread-local
nature is less surprising than it may sound: if one threads executes two steps
in sequence, the second step can safely assume that the first has taken
effect; after all, that is what the programmer must have intended by
sequentially composing instructions in the given program order. Such
guarantees hold locally, which is to say that the semantics _respects program
order within a thread_. It is possible, however, for steps to not take effect
in program order. A compiler or hardware may rearrange instructions, and it
often does so in practice. What must remain true is that these reorderings
cannot be observable from the perspective of a single thread. When it comes to
more than one thread, however, agreement on what constitutes the past cannot
be achieved without synchronization. Synchronization and consensus are
integrally related.444In the context of channel communication and weak memory,
the connection between synchronization and consensus is discussed in a precise
manner in our previous work; see the _consensus lemmas_ of [10]. Specifically,
given a thread $t$, events from a different thread $t^{\prime}$ are not in the
past of $t$ unless synchronization forces them to be.
##### Synchronization via bounded channels.
In the calculus presented here, channel communication is the only way in which
threads synchronize. Channels can be created dynamically and closed; they are
also first-class data, which means channel identifiers can be passed as
arguments, stored in variables, and sent over channels. Send and receive
operations are central to synchronization. Clearly, a receive statement is
synchronizing in that it is potentially blocking: a thread blocks when
attempting to receive from an empty channel until, if ever, a value is made
available by a sender. Since channels here are bounded, there is also
potential for blocking when sending, namely, when attempting to send on a
channel that is full.
We can use a channel c to eliminate the data race in Listing 1 as follows: the
anonymous function sends a message to communicate that the shared variable has
been set. Meanwhile, the main thread receives from the channel before printing
the shared variable.
Listing 2: Repaired program.
var a string
var c = make(chan bool, 1)

func main() {
	go func() { a = "hello"; c <- true }()
	<-c
	print(a)
}
The happens-before memory model stipulates, not surprisingly, a causal
relationship between the communicating partners [12]:
A send on $c$ happens-before the corresponding receive from $c$ completes. (1)
Given that channels have finite capacity, a thread remains blocked when
sending on a full channel until, if ever, another process frees a slot in the
channel’s buffer. In other words, the sender is blocked until another thread
receives from the channel. Correspondingly, there is a happens-before
relationship between a receive and a subsequent send on a channel with
capacity $k$ [12]:
The $i^{\mathit{th}}$ receive from $c$ happens-before the
$(i+k)^{\mathit{th}}$ send on $c$ completes. (2)
Interestingly, because of this rule, a causal connection is forged between the
sender and _some previous_ receiver who is otherwise unrelated to the current
send operation. When multiple senders and receivers share a channel, rule (2)
implies that it is possible for two threads to become related (via happens-
before) without ever directly exchanging a message.555Communication means
sending a message to or receiving a message from a channel; messages are not
addressed to or received from specific threads. Thus, sharing the channel by
performing sends and receives does not necessarily make two threads
“communication partners.” Two threads are partners when one receives a message
deposited by the other.
The indirect relation between a sender and a prior receiver, postulated by
rule (2), allows channels to be used as locks. In fact, free and taken binary
locks are analogous to empty and full channels of capacity one. A process
takes and releases locks for the purpose of synchronization (such as assuring
mutually exclusive access to shared data) without being aware of
“synchronization partners.” In the (mis-)use of channels as locks, there is
also no inter-process communication. Instead, a process “communicates” with
itself: In a proper lock protocol, the process holding a lock (i.e. having
performed a send onto a channel) is the only one supposed to release the lock
(i.e. performing the corresponding receive). Thus, a process using a channel
as lock receives its own previously sent message—there is no direct inter-
process exchange. Note, however, synchronization still occurs: subsequent
accesses to a critical region are denied by sending onto a channel and making
it full. See Section 3.5.2 for a more technical elaboration.
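To make the lock reading concrete, here is a small, hypothetical Go program of our own (not taken from the paper) that protects a shared variable with a capacity-one channel used as a binary lock: a send acquires the lock and a receive releases it.

```go
package main

var a string

// lock is a capacity-one channel used as a binary lock: a send takes the
// lock (the channel becomes full), a receive releases it (the channel
// becomes empty again).
var lock = make(chan bool, 1)

func main() {
	done := make(chan bool)

	go func() {
		lock <- true // acquire
		a = "hello from the goroutine"
		<-lock // release
		done <- true
	}()

	lock <- true // acquire
	a = "hello from main"
	<-lock // release

	<-done // wait for the goroutine before reading a
	print(a)
}
```

By rule (2) with capacity $k=1$, the receive that releases the lock happens-before the next send (the next acquisition) completes, so the two critical sections cannot overlap; the final read of the shared variable is additionally ordered after the goroutine's write via the done channel and rule (1).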
To establish a happens-before relation between sends and receives, note the
distinction between a channel operation and its _completion_ in the
formulation of rules (1) and (2). The order of events in a concurrent system
is partial; not only that, it is strictly partial since we don’t think of an
event as happening-before itself. A strict partial order is an irreflexive,
transitive, and asymmetric relation. In the case of synchronous channels, if
we were to ignore the distinction between an event and its completion,
according to rule (1), a send would then happen-before its corresponding
receive, and, according to rule (2), the receive would happen-before the send.
This cycle breaks asymmetry. Asymmetry can be repaired by interpreting a
send/receive pair on a synchronous channel as a single operation; indeed, it
can be interpreted as a _rendezvous_.
The distinction between a channel operation and its completion is arguably
more impactful when it comes to buffered channels. For one, it prevents sends
from being in happens-before with other sends, and receives from being in
happens-before with other receives. To illustrate, let
$\mathrel{\mathtt{sd}}^{i}$ and $\mathrel{\mathtt{rv}}^{i}$ represent the
$i^{\mathit{th}}$ send and receive on a channel. If we remove from rules (1)
and (2) the distinction between an operation and its completion, the
$i^{\mathit{th}}$ receive would then happens-before the $(i+k)^{\mathit{th}}$
send—based on rule (2)—and the $(i+k)^{\mathit{th}}$ send would happens-before
the $(i+k)^{\mathit{th}}$ receive—based on rule (1):
$\mathrel{\mathtt{rv}}^{i}~{}\rightarrow_{\mathsf{hb}}~{}\mathrel{\mathtt{sd}}^{i+k}~{}\rightarrow_{\mathsf{hb}}~{}\mathrel{\mathtt{rv}}^{i+k}$
By transitivity of the happens-before relation, we would then conclude that
the $i^{\mathit{th}}$ receive happens-before the $(i+k)^{\mathit{th}}$
receive, which would happen-before the $(i+2k)^{\mathit{th}}$ receive and so
on. As a consequence, a receive operation would have a lingering effect
throughout the execution of the program—similarly for send operations. This
accumulation of effects can be counter intuitive for the application
programmer, who would be forced to reason about arbitrarily long histories.
## 3 Data-race detection
We start in Section 3.1 by presenting the abstract syntax of our calculus and,
in Section 3.2, an overview of the operational semantics used for data-race
detection. The race detector itself is introduced incrementally. We start in
Section 3.3 with a simple detector that has a small footprint but that is
limited to detecting after-write races. We build onto this first iteration of
the detector in Section 3.4, making it capable of detecting after-write as
well as after-read races. The detector’s operation is illustrated by examples
in Section 3.5. Later, in Section 4, we turn to the issue of efficiency and
introduce “garbage collection” as a means to reduce the detector’s footprint.
These race detectors can be seen as augmented versions of an underlying
semantics without additional book-keeping related to race checking. This
“undecorated” semantics, including the definition of internal steps and a
notion of structural congruence, can be found in Appendix
LABEL:sec:gomm.race.semantics.strong.
### 3.1 A calculus with shared variables and channel communication
We formalize our ideas in terms of an idealized language shown in Figure 1 and
inspired by the Go programming language.
$\begin{array}[t]{rcl@{\quad}l}
v&::=&r\ \mid\ \underline{n}&\text{values}\\
e&::=&t\ \mid\ v\ \mid\ \mathrel{\mathtt{load}}z\ \mid\ z:=v\ \mid\ \mathrel{\mathtt{go}}t&\text{expressions}\\
&\mid&\mathrel{\mathtt{if}}v\mathrel{\mathtt{then}}t\mathrel{\mathtt{else}}t&\\
&\mid&\mathrel{\mathtt{make}}(\mathrel{\mathtt{chan}}T,v)\ \mid\ \mathop{\leftarrow v}\ \mid\ v\leftarrow v\ \mid\ \mathrel{\mathtt{close}}v&\\
g&::=&v\leftarrow v\ \mid\ \mathop{\leftarrow v}\ \mid\ \mathrel{\mathtt{default}}&\text{guards}\\
t&::=&\mathrel{\mathtt{let}}r=e\mathrel{\mathtt{in}}t\ \mid\ \sum_{i}\mathrel{\mathtt{let}}r_{i}=g_{i}\mathrel{\mathtt{in}}t_{i}&\text{threads}
\end{array}$
Figure 1: Abstract syntax
The syntax is basically unchanged from [10]. _Values_ $v$ can be of two forms:
$r$ denotes local variables or registers; $n$ is used to denote references or
names in general and, in particular, $p$ for processes or goroutines, $m$ for
memory events, and $c$ for channel names. We do not explicitly list values
such as the unit value, booleans, integers, etc. We also omit compound local
expressions like $e_{1}+e_{2}$. Shared variables are denoted by $x$, $z$,
etc.; $\mathrel{\mathtt{load}}z$ represents reading the shared variable $z$
into the thread, and $z:=v$ denotes writing to $z$. References are dynamically
created. A new channel is created by
$\mathrel{\mathtt{make}}(\mathrel{\mathtt{chan}}T,v)$, where $T$ represents
the type of values carried by the channel and $v$ a non-negative integer
specifying the channel’s capacity. Sending a value $v$ over a channel $c$ and
receiving a value as input from a channel are denoted respectively as
$c\leftarrow v$ and $\mathop{\leftarrow c}$. After the operation
$\mathrel{\mathtt{close}}$, no further values can be sent on the specified
channel. Attempting to send values on a closed channel leads to a panic.
Starting a new asynchronous activity, called goroutine in Go, is done using
the $\mathrel{\mathtt{go}}$-keyword. In Go, the
$\mathrel{\mathtt{go}}$-statement is applied to function calls only. We omit
function calls, asynchronous or otherwise, as they are orthogonal to the
memory model’s formalization. The select-statement, here written using the
$\sum$-symbol, consists of a finite set of branches (or communication clauses
in Go-terminology). These branches act as guarded threads. General expressions
in Go can serve as guards. Our syntax requires that only communication
statements (i.e., channel sending and receiving) and the
$\mathrel{\mathtt{default}}$-keyword can serve as guards. This does not reduce
expressivity and corresponds to an A-normal form representation [32]. At most
one branch is guarded by $\mathrel{\mathtt{default}}$ in each select-
statement. The same channel can be mentioned in more than one guard. “Mixed
choices” [27, 28] are also allowed, meaning that sending- and receiving-guards
can both be used in the same select-statement. We use
$\mathrel{\mathtt{stop}}$ as syntactic sugar for the empty select statement;
it represents a permanently blocked thread. The
$\mathrel{\mathtt{stop}}$-thread is also the only way to syntactically
“terminate” a thread, meaning that it is the only element of $t$ without
syntactic sub-terms.
The $\mathrel{\mathtt{let}}$-construct
$\mathrel{\mathtt{let}}r=e\mathrel{\mathtt{in}}t$ combines sequential
composition and scoping for local variables $r$. After evaluating $e$, the
rest $t$ is evaluated where the resulting value of $e$ is handed over using
$r$. The let-construct acts as a binder for variable $r$ in $t$. When $r$ does
not occur free in $t$, $\mathrel{\mathtt{let}}$ boils down to _sequential
composition_ and, therefore, is more conveniently written with a semicolon.
See also Figure LABEL:fig:race.syn.sugar in the appendix for syntactic sugar.
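For readers who prefer code to grammars, the following is a rough sketch of our own (not from the paper) of how the abstract syntax of Figure 1 could be encoded as Go data types; it deliberately simplifies the grammar, e.g., it omits the fact that threads and values are themselves expressions.

```go
package syntax

// Value: a register r or a name n (a process p, a memory event m, or a channel c).
type Value interface{ isValue() }

type Reg string  // local variable / register r
type Name string // reference n

func (Reg) isValue()  {}
func (Name) isValue() {}

// Expr: the expressions e of Figure 1 (simplified).
type Expr interface{ isExpr() }

type Load struct{ Z string } // load z
type Store struct {          // z := v
	Z string
	V Value
}
type Spawn struct{ T Thread } // go t
type If struct {              // if v then t else t
	Cond       Value
	Then, Else Thread
}
type MakeChan struct{ Cap Value } // make(chan T, v)
type Recv struct{ Ch Value }      // <- v
type Send struct{ Ch, Val Value } // v <- v
type Close struct{ Ch Value }     // close v

func (Load) isExpr()     {}
func (Store) isExpr()    {}
func (Spawn) isExpr()    {}
func (If) isExpr()       {}
func (MakeChan) isExpr() {}
func (Recv) isExpr()     {}
func (Send) isExpr()     {}
func (Close) isExpr()    {}

// Guard: a send, a receive, or default.
type Guard interface{ isGuard() }

type Default struct{}

func (Send) isGuard()    {}
func (Recv) isGuard()    {}
func (Default) isGuard() {}

// Thread: let r = e in t, or a select over guarded branches
// (the empty select plays the role of stop).
type Thread interface{ isThread() }

type Let struct { // let r = e in t
	R    Reg
	E    Expr
	Body Thread
}
type Branch struct { // let r_i = g_i in t_i
	R    Reg
	G    Guard
	Body Thread
}
type Select struct{ Branches []Branch } // Σ_i ...

func (Let) isThread()    {}
func (Select) isThread() {}
```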
### 3.2 Overview of the operational semantics
To capture the notion of ordering of events between threads, an otherwise
unadorned operational semantics (equation (LABEL:eq:gomm.race.naked.configs))
is equipped with additional information: each thread and memory location
tracks the events it is aware of as having happened-before—see the happens-
before set $E_{\mathit{hb}}$ in the run-time configurations of equation (3)
and (4), this set is present in terms corresponding to threads, $p\langle
E_{\mathit{hb}},t\rangle$, as well as memory locations,
$(\\!\\!|E_{\mathit{hb}},\,z{:=}v|\\!\\!)$ or
$m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$. Depending on the capabilities
of the race detector, slightly different information is tracked as having
happened-before (i.e. stored in a happens-before set).
#### 3.2.1 After-write races
When detecting after-write races (i.e. RaW and WaW), in order to know whether
a subsequent access to the same variable occurs without proper
synchronization, one has to remember additional information concerning past
write-events. Specifically, it must be checked that all write events to the
same variable _happened-before_ the current access. The happens-before set is
then used to store information pertaining to write events; read events are not
tracked. Also, terms representing a memory location have a different shape
when compared to the undecorated semantics. In the undecorated semantics, the
content $v$ of a variable $z$ is written as a pair $(\\!\\!|z{:=}v|\\!\\!)$.
When after-write races come into play, it is not enough to store the last
value written to each variable; we also need to identify write events
associated with the variable. Thus, an entry in memory takes the form
$(\\!\\!|E_{\mathit{hb}},\,z{:=}v|\\!\\!)$ where $E_{\mathit{hb}}$ holds
identifiers $m$, $m^{\prime}$, etc. that uniquely identify write events to
$z$—contrast the run-time configurations in equation
(LABEL:eq:gomm.race.naked.configs) and (3). The number of prior write events
that need to be tracked can be reduced for the sake of efficiency, in which
case the term representing a memory location takes the form
$m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$ where $m$ is the identifier of
the most recent write to $z$. See equation (4).
#### 3.2.2 Write-after-read races
Besides the detailed coverage of RaW and WaW races in Section 3.3, we describe
the detection of _write-after-read_ races in Section 3.4. When it comes to
WaR, the race checker needs to remember information about past reads in
addition to past write events. Abstractly, a read event represents the fact
that a load-statement has executed. Thus, the set $E_{\mathit{hb}}$ of an
entry $(\\!\\!|E_{\mathit{hb}},\,z{:=}v|\\!\\!)$ in memory holds identifiers
of both read and write events.
In the strong semantics, a read always observes one definite value which is
the result of one particular write event. Therefore, the configuration
contains entries of the form $m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$
where $m$ is the identifier of the “last” write event and
$E_{\mathit{hb}}^{r}$ is a set of identifiers of read events, namely those
that accumulated after $m$. Note that “records” of the form
$m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$ can be seen as $n+1$ recorded
events, one write event together with $n\geq 0$ read-events. This definition
of records with one write per variable stands in contrast to a weak semantics,
where many different write events may be observable by a given read [10].
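As a small illustration (ours, not the paper's), the strong-semantics record $m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$ could be represented by a Go structure along the following lines.

```go
package records

// eventID uniquely identifies a recorded read or write event.
type eventID int

// record models m(| E_hb^r , z := v |): the single most recent write event
// to a shared variable, the read events that accumulated after that write,
// and the variable's current value.
type record struct {
	lastWrite eventID              // identifier m of the last write event
	reads     map[eventID]struct{} // E_hb^r: read events observed since lastWrite
	val       int                  // current value v of the variable
}
```

Viewed this way, a record stands for the $n+1$ recorded events mentioned above: the write `lastWrite` together with the $n$ reads in `reads`.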
#### 3.2.3 Synchronization
Channel communication propagates happens-before information between threads,
and thus, affects synchronization. In the operational rules, each channel $c$
is actually realized with _two_ channels, which we refer to as _forward_ ,
$c_{f}$, and _backward_ , $c_{b}$—see Figure 4. The forward part serves to
communicate a value transmitted from a sender to a receiver; it also
stipulates a causal relationship between the communicating partners [12]—see
rule (1) of page 1. To capture this relationship in the context of race
checking, the sender also communicates its current information about the
happens-before relation to the receiver. The communication of happens-before
information is accomplished by the transmission of $E_{\mathit{hb}}$ over
channels; see rule R-Rec in Figure 4.
The memory model also stipulates a happens-before relationship between a
receive and a subsequent send on a channel with capacity $k$—see rule (2) of
page 2. While we refer to the _forward channel_ as carrying a message from a
sender to a receiver, the backward part of the channel is used to model the
indirect connection between some prior receiver and a current sender; see
R-Send in Figure 4.
The interplay between forward and backward channels can also be understood as
a form of flow control. Entries in the backward channel’s queue are not values
deposited by threads. Instead, they can be seen as tickets that grant senders
a free slot in the communication channel, i.e., the forward channel.666In the
case of lossy channels, backward channels are sometimes used for the purpose
of error control and regulating message retransmissions, where the receiver of
messages informs the sender about the successful or also non-successful
reception of a message. Here, channels are assumed non-lossy and there is no
need for error control. In that sense, the term “backward” should not be
interpreted as communication _back_ to the receiver in the form of an
acknowledgment. Thus, the number of “messages” in the backward channel capture
the notion of fullness: a channel is full if the backward channel is empty.
See rule R-Send in Figure 4 or Figure 0.A.3 for the underlying semantics
without race checking. When a channel of capacity $k$ is created, the forward
queue is empty and the backward queue is initialized so that it contains dummy
elements ${E_{\mathit{hb}}}_{\bot}$ (cf. rule R-Make). The dummy elements
represent the number of empty or free slots in the channel. Upon creation, the
number of dummy elements equals the capacity of the channel.
As discussed in Section 2, there is a distinction between a synchronization
operation and its completion. A send/receive pair on a synchronous channel can
be seen as a rendezvous operation; captured in our semantics by the R-Rend
reduction rule of Figure 4. When it comes to asynchronous communication, the
distinction between a channel operation and its completion is handled by the
fact that send and receive operations update a thread’s local state but do not
immediately transmit the updated state onto the channel—see rules R-Send and
R-Rec in Figure 4.
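To give a feel for the forward/backward pairing, the following is a rough executable sketch of our own (with simplifications: it assumes a capacity $k\geq 1$ and does not model rendezvous, closing, or select) of how the two queues and the transfer of happens-before sets could be mimicked with ordinary Go channels; the exact sets transmitted in the paper's rules R-Make, R-Send, and R-Rec (Figure 4) may differ in detail.

```go
package channels

// hbSet is the happens-before information carried by a thread or a message.
type hbSet map[int]struct{}

func clone(s hbSet) hbSet {
	c := hbSet{}
	for e := range s {
		c[e] = struct{}{}
	}
	return c
}

func merge(a, b hbSet) hbSet {
	u := clone(a)
	for e := range b {
		u[e] = struct{}{}
	}
	return u
}

// msg travels on the forward queue: a value plus the sender's
// happens-before set as of the send (rule (1)).
type msg struct {
	val int
	hb  hbSet
}

// channel models a capacity-k channel c as a forward queue c_f and a
// backward queue c_b whose entries act as tickets for free slots.
type channel struct {
	fwd chan msg
	bwd chan hbSet
}

// makeChan mimics R-Make: the backward queue starts with k dummy
// entries, one per free slot (this sketch assumes k >= 1).
func makeChan(k int) *channel {
	c := &channel{fwd: make(chan msg, k), bwd: make(chan hbSet, k)}
	for i := 0; i < k; i++ {
		c.bwd <- hbSet{}
	}
	return c
}

// send blocks while the channel is full (no ticket on the backward queue),
// transmits the sender's knowledge as of the send on the forward queue, and
// only then merges a prior receiver's knowledge into the sender's state.
func (c *channel) send(threadHB hbSet, v int) hbSet {
	ticket := <-c.bwd                         // rule (2): learn of a prior receive
	c.fwd <- msg{val: v, hb: clone(threadHB)} // rule (1): the send itself is transmitted
	return merge(threadHB, ticket)
}

// recv takes a message, returns a ticket carrying the receiver's knowledge
// as of this receive, and merges the sender's knowledge into the receiver.
func (c *channel) recv(threadHB hbSet) (int, hbSet) {
	m := <-c.fwd
	c.bwd <- clone(threadHB)
	return m.val, merge(threadHB, m.hb)
}
```

Note that the tickets carry the receiver's knowledge before the received message is merged in, and the forward message carries the sender's knowledge before the ticket is merged in; this mirrors the remark above that sends do not end up in happens-before with other sends, nor receives with other receives.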
### 3.3 Detecting read-after-write (RaW) and write-after-write (WaW) races
To detect “after-write” races, run-time configurations are given by the following
syntax:
$R::=p\langle E_{\mathit{hb}},t\rangle\ \mathrel{|}\
(\\!\\!|E_{\mathit{hb}}^{z},\,z{:=}v|\\!\\!)\ \mathrel{|}\ \bullet\
\mathrel{|}\ R\parallel R\ \ \mathrel{|}\ c[q]\ \mathrel{|}\ \nu n\ R\ .$ (3)
Configurations are considered up-to structural congruence, with the empty
configuration $\bullet$ as neutral element and $\parallel$ as associative and
commutative. The definition is standard and included in Appendix
LABEL:sec:gomm.race.cong. Likewise relegated to the appendix are _local_
reduction rules, i.e., those not referring to shared variables or channels
(see Appendix LABEL:sec:gomm.race.steps.local).
In the configurations, a triple $(\\!\\!|E_{\mathit{hb}}^{z},\,z{:=}v|\\!\\!)$
not only stores the current value of $z$ but also records the unique
identifiers $m$, $m^{\prime}$, etc. of every write _event_ to $z$ in
$E_{\mathit{hb}}^{z}$.777We will later use the term “event” also when talking
about histories or traces. There, events carry slightly different information.
For instance, being interested in the question whether a history contains
evidence of a race, it won’t be necessary to mention the actual value being
written in the write event in the history. Both notions of events, of course,
hang closely together. It should be clear from the context whether we are
referring to events as part of a linear history or recorded as part of the
configuration. When being precise, we refer to a configuration event as
_recorded_ event. Since recorded events in the semantics are uniquely labeled,
we also allow ourselves to use words like “event $m$” even if $m$ is just the
identifier for the recorded event $m(\\!\\!|z{:=}v|\\!\\!)$. A write to memory
updates a variable’s value and also generates a fresh identifier $m$. In order
to record the write event, the tuple
$(m,!z)$
is placed in the happens-before set of the term representing the memory
location that has been written to. The initial configuration starts with one
write-event per variable and the semantics maintains this uniqueness as an
invariant. In effect, the collection of recorded write events behave as a
mapping from variable to values.888The fact that memory behaves like a mapping
is consistent with the strong memory assumption.
A thread $t$ is represented as $p\langle E_{\mathit{hb}},t\rangle$ at run-
time, with $p$ serving as identifier. To be able to determine whether a next
action should be flagged as race or not, a goroutine keeps track of happens-
before information corresponding to past write events. An event mentioned in
$E_{\mathit{hb}}$ is an event of the past, as opposed to being an event that
simply occurred in a prior step. An event is “concurrent” if it occurred in a
prior step but is not in happens-before relation with the current thread
state. Concurrent memory events are potentially in conflict with a thread’s
next step. More precisely, if the memory record
$(\\!\\!|E_{\mathit{hb}}^{z},\,z{:=}v|\\!\\!)$ is part of the configuration,
then it is safe for thread $p\langle E_{\mathit{hb}},t\rangle$ to write to $z$
if $E_{\mathit{hb}}^{z}\subseteq E_{\mathit{hb}}$. Otherwise, there exists a write to $z$ that is not accounted for by thread $p$ and a WaW conflict is raised. The situation is analogous when reading from a variable.
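To make the check concrete, the following Go sketch models the write-time test; the encoding (names such as `hbSet` and `checkWrite`) is ours and not part of the formal semantics. A thread's happens-before set and a variable's recorded write events are sets of labels, and a WaW conflict is reported together with the unaccounted events, mirroring the exception argument $E_{\mathit{hb}}^{z}-E_{\mathit{hb}}$.

```go
package main

import "fmt"

// label models an event identifier such as (m,!z) or (m,?z).
type label string

// hbSet models a happens-before set of recorded events.
type hbSet map[label]bool

// checkWrite mirrors the write-time check: the write is safe iff every
// recorded write event on z is in the writer's happens-before set;
// otherwise the unaccounted events are returned, matching E_hb^z - E_hb.
func checkWrite(threadHB, varHB hbSet) (racy bool, conflicts []label) {
	for e := range varHB {
		if !threadHB[e] {
			conflicts = append(conflicts, e)
		}
	}
	return len(conflicts) > 0, conflicts
}

func main() {
	p := hbSet{"(m0,!z)": true}                  // p knows the initial write m0
	z := hbSet{"(m0,!z)": true, "(m1,!z)": true} // but m1 happened concurrently
	if racy, c := checkWrite(p, z); racy {
		fmt.Println("WaW race, unaccounted writes:", c)
	}
}
```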
Data-races are marked as a transition to an exception $\mathsf{E}$—see the
derivation rules of Figure 3, and, when write-after-read races are considered,
Figure 7. The exception takes as argument a set containing the prior memory
operations that conflict and are concurrent with the attempted memory access.
$\dfrac{E_{\mathit{hb}}^{z}\subseteq E_{\mathit{hb}}\qquad\mathit{fresh}(m^{\prime})\qquad E_{\mathit{hb}}^{\prime}=\{(m^{\prime},!z)\}\cup E_{\mathit{hb}}\qquad E_{\mathit{hb}}^{\prime z}=\{(m^{\prime},!z)\}\cup E_{\mathit{hb}}^{z}}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel(\!\!|E_{\mathit{hb}}^{z},\,z{:=}v|\!\!)\xrightarrow{}p\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel(\!\!|E_{\mathit{hb}}^{\prime z},\,z{:=}v^{\prime}|\!\!)}\ \textsc{R-Write}$
Figure 2: Operational semantics augmented for RaW and WaW race detection
$\dfrac{E_{\mathit{hb}}^{z}\not\subseteq E_{\mathit{hb}}}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel(\!\!|E_{\mathit{hb}}^{z},\,z{:=}v|\!\!)\xrightarrow{}\mathsf{E}\left(E_{\mathit{hb}}^{z}-E_{\mathit{hb}}\right)}\ \textsc{R-Write-}\mathsf{E}_{WaW}$
Figure 3: Exception conditions for RaW and WaW data-race detection
Goroutines synchronize via message passing, which means that channel
communication must transfer happens-before information between goroutines.
Suppose a goroutine $p$ has just updated variable $z$ thus generating the
unique label $m$. The tuple
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$
is placed in the happens-before set of both the thread $p$ and the memory
record associated with $z$. At this point, $p$ is the only goroutine whose
happens-before set contains the label $m$ associated with this write-record.
No other goroutine can read or write to $z$ without causing a data-race. When
$p$ sends a message onto a channel, the information about $m$ is also sent.
Suppose now that a thread $p^{\prime}$ reads from the channel and receives the
corresponding message before $p$ makes any further modifications to $z$. The
tuple
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$
is added to $p^{\prime}$’s happens-before set, so both $p$ and $p^{\prime}$ are aware of the most recent write to $z$. The existence of $m$ in both goroutines’ happens-before sets implies that either $p$ or $p^{\prime}$ is allowed to update $z$’s value. The rules for channel communication are given
in Figure 4. They will remain unchanged when we extend the treatment to RaW
conflicts. The exchange of happens-before information via channel
communication is also analogous to the treatment of the weak semantics in
[10].
$\dfrac{q=[{E_{\mathit{hb}}}_{\bot},\ldots,{E_{\mathit{hb}}}_{\bot}]\qquad|\,q\,|=v\qquad\mathit{fresh}(c)}{p\langle E_{\mathit{hb}},\mathtt{let}\ r=\mathtt{make}(\mathtt{chan}\ T,v)\ \mathtt{in}\ t\rangle\xrightarrow{}\nu c\ (p\langle E_{\mathit{hb}},\mathtt{let}\ r=c\ \mathtt{in}\ t\rangle\parallel c_{f}[]\parallel c_{b}[q])}\ \textsc{R-Make}$

$\dfrac{\lnot\mathit{closed}(c_{f}[q_{2}])\qquad E_{\mathit{hb}}^{\prime}=E_{\mathit{hb}}+E_{\mathit{hb}}^{\prime\prime}}{c_{b}[q_{1}::E_{\mathit{hb}}^{\prime\prime}]\parallel p\langle E_{\mathit{hb}},c\leftarrow v;t\rangle\parallel c_{f}[q_{2}]\xrightarrow{}c_{b}[q_{1}]\parallel p\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel c_{f}[(v,E_{\mathit{hb}})::q_{2}]}\ \textsc{R-Send}$

$\dfrac{v\not=\bot\qquad E_{\mathit{hb}}^{\prime}=E_{\mathit{hb}}+E_{\mathit{hb}}^{\prime\prime}}{c_{b}[q_{1}]\parallel p\langle E_{\mathit{hb}},\mathtt{let}\ r=\mathop{\leftarrow c}\ \mathtt{in}\ t\rangle\parallel c_{f}[q_{2}::(v,E_{\mathit{hb}}^{\prime\prime})]\xrightarrow{}c_{b}[E_{\mathit{hb}}::q_{1}]\parallel p\langle E_{\mathit{hb}}^{\prime},\mathtt{let}\ r=v\ \mathtt{in}\ t\rangle\parallel c_{f}[q_{2}]}\ \textsc{R-Rec}$

$\dfrac{E_{\mathit{hb}}^{\prime}=E_{\mathit{hb}}+E_{\mathit{hb}}^{\prime\prime}}{p\langle E_{\mathit{hb}},\mathtt{let}\ r=\mathop{\leftarrow c}\ \mathtt{in}\ t\rangle\parallel c_{f}[(\bot,E_{\mathit{hb}}^{\prime\prime})]\xrightarrow{}p\langle E_{\mathit{hb}}^{\prime},\mathtt{let}\ r=\bot\ \mathtt{in}\ t\rangle\parallel c_{f}[(\bot,E_{\mathit{hb}}^{\prime\prime})]}\ \textsc{R-Rec}{}_{\bot}$

$\dfrac{E_{\mathit{hb}}^{\prime}={E_{\mathit{hb}}}_{1}+{E_{\mathit{hb}}}_{2}}{c_{b}[]\parallel p_{1}\langle{E_{\mathit{hb}}}_{1},c\leftarrow v;t\rangle\parallel p_{2}\langle{E_{\mathit{hb}}}_{2},\mathtt{let}\ r=\mathop{\leftarrow c}\ \mathtt{in}\ t_{2}\rangle\parallel c_{f}[]\xrightarrow{}c_{b}[]\parallel p_{1}\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel p_{2}\langle E_{\mathit{hb}}^{\prime},\mathtt{let}\ r=v\ \mathtt{in}\ t_{2}\rangle\parallel c_{f}[]}\ \textsc{R-Rend}$
Figure 4: Operational semantics augmented for race detection: channel
communication
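The following Go sketch illustrates the essence of the happens-before transfer performed by R-Send and R-Rec: the channel carries the value together with the sender's happens-before set, and the receiver joins that set into its own. The encoding (a `message` struct sent over a Go channel) is ours; the backward queue and the capacity bookkeeping of the formal rules are omitted.

```go
package main

import "fmt"

type label string
type hbSet map[label]bool

// message pairs a value with the sender's happens-before set,
// as placed on the forward queue by rule R-Send.
type message struct {
	val int
	hb  hbSet
}

// join adds every event of src to dst, as in E_hb' = E_hb + E_hb''.
func join(dst, src hbSet) {
	for e := range src {
		dst[e] = true
	}
}

func main() {
	c := make(chan message, 1)

	// Sender p: it has observed the write event (m,!z).
	pHB := hbSet{"(m,!z)": true}
	c <- message{val: 0, hb: pHB} // R-Send: value plus hb set onto the channel

	// Receiver p': initially unaware of the write to z.
	qHB := hbSet{}
	msg := <-c        // R-Rec: take the value and the piggybacked hb set
	join(qHB, msg.hb) // the write to z is now in the receiver's past

	fmt.Println(qHB["(m,!z)"]) // true
}
```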
As in Fava et al., [10], “the R-Close rule closes both sync and async
channels. Executing a receive on a _closed_ channel results in receiving the
end-of-transmission marker $\bot$ (cf. rule R-Rec⊥) and updating the local
state $E_{\mathit{hb}}$ in the same way as when receiving a properly sent
value. The “value” $\bot$ is not removed from the queue, so that all clients
attempting to receive from the closed channel obtain the communicated happens-
before synchronization information.”
Finally, goroutine creation is a synchronizing operation where the child, who
is given a unique identifier $p^{\prime}$, inherits the happens-before set
from the parent—see the R-Go rule in Figure 5.
Figure 5: Operational semantics augmented for race detection: thread creation
### 3.4 Detecting write-after-read (WaR) races
In the previous section, the detection of read-after-write and write-after-
write races required happens-before sets to contain write labels only. The
detection of write-after-read races requires recording read labels, as well. A
successful read of variable $z$ causes a fresh read label, say $m^{\prime}$,
to be generated. The pair
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}$
is added to the reader’s happens-before set as well as to the record
associated with $z$ in memory—see rule R-Read of Figure 6.
$\dfrac{E_{\mathit{hb}}^{z}\subseteq E_{\mathit{hb}}\qquad\mathit{fresh}(m^{\prime})\qquad E_{\mathit{hb}}^{\prime}=\{(m^{\prime},!z)\}\cup E_{\mathit{hb}}\qquad E_{\mathit{hb}}^{\prime z}=\{(m^{\prime},!z)\}\cup E_{\mathit{hb}}^{z}}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel(\!\!|E_{\mathit{hb}}^{z},\,z{:=}v|\!\!)\xrightarrow{}p\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel(\!\!|E_{\mathit{hb}}^{\prime z},\,z{:=}v^{\prime}|\!\!)}\ \textsc{R-Write}$
Figure 6: Operational semantics augmented for data-race detection
$\dfrac{E_{\mathit{hb}}^{z}\not\subseteq E_{\mathit{hb}}\qquad E_{\mathit{hb}}^{z}\downarrow_{?}\subseteq E_{\mathit{hb}}}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel(\!\!|E_{\mathit{hb}}^{z},\,z{:=}v|\!\!)\xrightarrow{}\mathsf{E}\left(E_{\mathit{hb}}^{z}-E_{\mathit{hb}}\right)}\ \textsc{R-Write-}\mathsf{E}_{WaW}$

$\dfrac{E_{\mathit{hb}}^{z}\downarrow_{?}\nsubseteq E_{\mathit{hb}}}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel(\!\!|E_{\mathit{hb}}^{z},\,z{:=}v|\!\!)\xrightarrow{}\mathsf{E}\left(E_{\mathit{hb}}^{r}-E_{\mathit{hb}}\right)}\ \textsc{R-Write-}\mathsf{E}_{WaR}$
Figure 7: Exception conditions for WaR data-race detection
In order for a write to memory to be successful, the writing thread must not
only be aware of previous write events to a given shared variable, but must
also account for all accumulated reads to the variable. A write-after-read
data-race is raised when a write is attempted by a thread and the thread is
unaware of some previous reads to $z$. In other words, there exists some read label in the happens-before set associated with the variable’s record, say $r\in E_{\mathit{hb}}^{z}\downarrow_{?}$, that is not in the thread’s happens-before set, $r\notin E_{\mathit{hb}}$. The projection $\downarrow_{?}$
essentially filters out write events from the happens-before set. Under these
circumstances, the precondition $E_{\mathit{hb}}^{z}\downarrow_{?}\nsubseteq
E_{\mathit{hb}}$ of the R-Write-$\mathsf{E}_{WaR}$ rule is met and a race is
reported.
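A minimal Go sketch of how an implementation might order the two checks so that the exception rules of Figure 7 never fire on the same configuration (all helper names are ours): an unaccounted read is reported as a WaR race before an unaccounted write is reported as a WaW race.

```go
package main

import "fmt"

type label string
type hbSet map[label]bool

// subset reports whether every event in a is also in b.
func subset(a, b hbSet) bool {
	for e := range a {
		if !b[e] {
			return false
		}
	}
	return true
}

// classifyWrite sketches the preconditions of R-Write-E_WaW and
// R-Write-E_WaR: an unaccounted read takes priority.
func classifyWrite(threadHB, varWrites, varReads hbSet) string {
	switch {
	case !subset(varReads, threadHB):
		return "WaR race"
	case !subset(varWrites, threadHB):
		return "WaW race"
	default:
		return "ok"
	}
}

func main() {
	p := hbSet{"(m0,!z)": true}
	writes := hbSet{"(m0,!z)": true}
	reads := hbSet{"(r1,?z)": true} // a read p has not synchronized with
	fmt.Println(classifyWrite(p, writes, reads)) // WaR race
}
```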
Compared to the detector of Section 3.3, the reporting of WaW races in rule
R-Write-$\mathsf{E}_{WaW}$ is augmented with the precondition
$E_{\mathit{hb}}^{z}\downarrow_{?}\subseteq E_{\mathit{hb}}$. Without this
precondition, there would be non-determinism when reporting WaW and WaR
conflicts. (Consider the scenario in which $p$ writes to and then reads from the shared variable $z$. Say the write to $z$ generates a label $w$ and the read generates $r$. If a thread $p^{\prime}$ attempts to write to $z$ without first communicating with $p$, $p^{\prime}$ will be aware of neither the prior read nor the prior write: its happens-before set will contain neither $(w,!z)$ nor $(r,?z)$. Both rules R-Write-$\mathsf{E}_{WaW}$ and R-Write-$\mathsf{E}_{WaR}$ are enabled in this case, but the read $r$ happens-after the write that generated $(w,!z)$.) In short, when both WaW and WaR apply, the read in the WaR race happens-after the write involved in the WaW race. We resolve this non-determinism by reporting the most recent conflict.
The detector presented here can flag all conflicts: read-after-write, write-
after-write, and write-after-read. In Section 4 we also make the detector
efficient by “garbage collecting” stale information. But before then, let us
look at a couple of examples that illustrate the detector’s operation.
### 3.5 Examples
We will look at two examples of properly synchronized programs. The first is a
typical usage of channel communication; one in which an action is placed in
the past of another. The second example relies on mutual exclusion instead. In
this case, we know that actions are not concurrent, but we cannot infer an
order between them. By contrasting the two examples in Section 3.5.3, we
derive observations related to determinism and constructivism.
#### 3.5.1 Message passing
Message passing, depicted in Figure 8, involves a producer writing to a shared
variable and notifying another thread by sending a message onto a channel. A
consumer receives from the channel and reads from the shared variable.
$\displaystyle p_{1}\langle{E_{\mathit{hb}}}_{1},z:=42;\ c\leftarrow 0\rangle$
$\displaystyle p_{2}\langle{E_{\mathit{hb}}}_{2},\mathop{\leftarrow c};\
\mathrel{\mathtt{load}}z\rangle$ Figure 8: Message passing example.
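For reference, the program of Figure 8 corresponds to the following executable Go version; the send on `c` happens-before the corresponding receive completes, so the consumer's read of `z` is properly synchronized.

```go
package main

import "fmt"

var z int

func main() {
	c := make(chan int) // channel used for the notification
	done := make(chan struct{})

	// p1: write the shared variable, then notify.
	go func() {
		z = 42
		c <- 0 // this send happens-before the receive below completes
	}()

	// p2: wait for the notification, then read.
	go func() {
		<-c
		fmt.Println(z) // properly synchronized: prints 42
		close(done)
	}()

	<-done
}
```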
The access to the shared variable is properly synchronized. Given the
operational semantics presented in this chapter, we can arrive at this
conclusion as follows. A fresh label, say $m$, is generated when $p_{1}$
writes to $z$. The memory record involving $z$ is updated with this fresh
label, and the pair
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$
is placed into $p_{1}$’s happens-before set, thus yielding
${E_{\mathit{hb}}}_{1}^{\prime}$. A send onto $c$ sends not only the message
value, $0$ in this case, but also the happens-before set of the sender,
${E_{\mathit{hb}}}_{1}^{\prime}$, see rule R-Send. The act of receiving from
$c$ blocks until a message is available. When a message becomes available, the
receiving thread receives not only a value but also the happens-before set of
the sender at the time that the send took place, see rule R-Rec. Thus, upon
receiving from $c$, $p_{2}$’s happens-before set is updated to contain
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$.
Receiving from the channel places the writing to $z$ by $p_{1}$ into $p_{2}$’s
definite past. The race-checker makes sure of this fact by inspecting
$p_{2}$’s happens-before set when $p_{2}$ attempts to load from $z$. In other
words, the race-checker checks that the current labels associated with $z$ in
the configuration are also present in the happens-before set of the thread
performing the load.
The message passing example illustrates synchronization as the imposition of an order between events belonging to different threads. The message places the producer’s write in the past of the consumer’s read. Next, we will look into an example in which synchronization is achieved via mutual exclusion. Two
threads, $p_{1}$ and $p_{2}$, are competing to write to the same variable. We
will not be able to determine which write happens-before the other. Even
though we cannot infer the order, we can determine that a happens-before order
exists and, therefore, that the program is properly synchronized.
#### 3.5.2 Mutual exclusion
Figure 9 shows a typical mutual exclusion scenario. It involves two threads
writing to a shared variable $z$. Before writing, a thread sends a message
onto a channel $c$ whose capacity is $|\,{c}\,|=1$. After writing, it receives from $c$. (Note that the channel is being used as a semaphore [8]. Sending on the channel is analogous to a semaphore wait or P operation; receive is analogous to signal or V. The wait decrements the value of the semaphore and, if the new value is negative, the process executing the wait is blocked. A signal increments the value of the semaphore variable, thus allowing another process, potentially one coming from the pool of previously blocked processes, to resume. Similarly, a send operation decrements the number of available slots in the channel's queue, while a receive increments it. Sending on a channel with capacity 1 can only take place if the channel is empty, meaning all previous sends are matched with a corresponding receive.)
$\displaystyle p_{1}\langle c\leftarrow 0;\ z:=17;\ \mathop{\leftarrow
c}\rangle$ $\displaystyle p_{2}\langle c\leftarrow 0;\ z:=42;\
\mathop{\leftarrow c}\rangle$ Figure 9: Mutual exclusion example.
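In Go, the pattern of Figure 9 can be written as follows, with a buffered channel of capacity 1 playing the role of the semaphore. The program is race free even though the order of the two writes is not determined.

```go
package main

import "fmt"

var z int

func main() {
	c := make(chan int, 1) // capacity 1: acts as a binary semaphore
	done := make(chan struct{}, 2)

	write := func(v int) {
		c <- 0 // "wait": succeeds only when the channel is empty
		z = v  // critical section
		<-c    // "signal": free the slot for the other goroutine
		done <- struct{}{}
	}

	go write(17) // p1
	go write(42) // p2

	<-done
	<-done
	fmt.Println(z) // 17 or 42, but the two writes are never concurrent
}
```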
A send and its corresponding receive do not directly contribute to
synchronization in this example. The send is matched by a receive from the
same thread; nothing new is learned from this exchange. To illustrate this
point, which may come as a surprise, let us look at an execution. Say $p_{1}$
is the first to send $0$ onto $c$. Then $p_{1}$’s happens-before set
${E_{\mathit{hb}}}_{1}$ is placed onto the channel along with the value of
$0$. The thread then proceeds to write to $z$, which generates a fresh label,
say $m^{\prime}$; the pair
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m^{\prime},!z)}}$
is placed on $p_{1}$’s happens-before set. When receiving from $c$, $p_{1}$
does not learn anything new! It receives the message $0$ and a “stale”
happens-before set ${E_{\mathit{hb}}}_{1}$. The receiver’s happens-before set,
${E_{\mathit{hb}}}_{1}^{\prime}$, is updated to incorporate the stale happens-
before set, but this “update” causes no effective change:
$\displaystyle{E_{\mathit{hb}}}_{1}^{\prime}\cup{E_{\mathit{hb}}}_{1}$
$\displaystyle=({E_{\mathit{hb}}}_{1}\cup\\{{\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m^{\prime},!z)}}\\})\cup{E_{\mathit{hb}}}_{1}$
$\displaystyle={E_{\mathit{hb}}}_{1}\cup\\{{\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m^{\prime},!z)}}\\}$
$\displaystyle={E_{\mathit{hb}}}_{1}^{\prime}$
The explanation for why the program is synchronized, in this case, is more
subtle. It involves reasoning about the channel’s capacity. Recall that,
according to rule (2) on page 2, the $i^{\mathit{th}}$ receive from a channel
with capacity $k$ happens before the $(i+k)^{\mathit{th}}$ send onto the
channel completes. Since channel capacity is $1$ in our example, rule (2)
implies that the first receive from the channel happens-before the second send
completes. If $p_{1}$ is the first to write to $z$, then $p_{1}$ is also the
first to receive from $c$. Receiving from $c$ places $p_{1}$’s happens-before
set onto the backward channel (see rule R-Rec). This happens-before set
contains the entry
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m^{\prime},!z)}}$
registering $p_{1}$’s write to $z$. Upon _sending_ onto $c$, $p_{2}$ receives
from the backward channel and learns of $p_{1}$’s previous write. Thus, by the
time $p_{2}$ writes to $z$, the write by $p_{1}$ has been placed into
$p_{2}$’s definite past. Since no concurrent accesses exist, the race checker
does not flag this execution as racy.
Similarly, $p_{2}$ could first send onto $c$ and write to $z$. The argument
for the proper synchronization of this alternate run would proceed in the same
way. Therefore, even though it is not possible to infer who, among $p_{1}$ and
$p_{2}$, writes to $z$ first, we know that one of the writes is in a happens-
before relation with the other. This knowledge is enough for us to conclude
that the program is properly synchronized.
This example shows that channels are excessively powerful when it comes to
implementing mutual exclusion, as evidenced by the fact that the forward queue
associated with the channel is not utilized. When it comes to mutual
exclusion, a more parsimonious synchronization mechanism suffices. Indeed, the
_acquire_ and _release_ semantics associated with locks is a perfect fit. When
acquiring a lock, a thread learns about the memory operations that precede the
lock’s release. In other words, memory operations preceding a lock’s release
are put in happens-before with respect to a thread that acquires the lock.
Assuming a lock $l$ starts with empty happens-before information, say
$l[\emptyset]$, the rules Acquire and Release capture a lock’s behavior.
$\dfrac{E_{\mathit{hb}}^{\prime}=E_{\mathit{hb}}\cup E_{\mathit{hb}}^{\prime\prime}}{p\langle E_{\mathit{hb}},\mathtt{acq}(l);t\rangle\parallel l[E_{\mathit{hb}}^{\prime\prime}]\xrightarrow{}p\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel l[]}\ \textsc{Acquire}$

$\dfrac{E_{\mathit{hb}}^{\prime}=E_{\mathit{hb}}\cup E_{\mathit{hb}}^{\prime\prime}}{p\langle E_{\mathit{hb}},\mathtt{rel}(l);t\rangle\parallel l[]\xrightarrow{}p\langle E_{\mathit{hb}},t\rangle\parallel l[E_{\mathit{hb}}^{\prime}]}\ \textsc{Release}$
Note that an acquired lock, represented by $l[]$, cannot be re-acquired without a prior release, and that a released lock, written $l[E_{\mathit{hb}}]$, cannot be re-released without a prior acquire. (When releases are matched by a prior acquire from the same thread, happens-before information accumulates monotonically, meaning a thread learns about all previous releases, not just the most recent one.) Also note that while a thread’s happens-before set is updated on both sends and receives, with locks only the acquisition updates a thread’s happens-before information.
Surrounding code with a call to acquire at the beginning and release at the
end is sufficient for ensuring mutual exclusion. The full generality of
channels is not required.
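For comparison, the same discipline written with a lock in Go; acquiring the mutex brings the operations preceding the previous release into the acquirer's past, which is exactly the happens-before information needed for mutual exclusion.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu sync.Mutex
	z  int
)

func main() {
	var wg sync.WaitGroup
	write := func(v int) {
		defer wg.Done()
		mu.Lock()   // acquire: learn about accesses preceding the last release
		z = v       // critical section
		mu.Unlock() // release: publish this write to the next acquirer
	}
	wg.Add(2)
	go write(17)
	go write(42)
	wg.Wait()
	fmt.Println(z)
}
```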
#### 3.5.3 Determinism, confluence, and synchronization
In the message passing example of Section 3.5.1, we are able to give a
constructive proof-sketch of the synchronization between $p_{1}$ and $p_{2}$;
the “proof” puts an event from $p_{1}$ in the past of $p_{2}$. In the mutual
exclusion example of Section 3.5.2, no such guarantee is possible. Instead, we
give a non-constructive “proof” that $p_{1}$ and $p_{2}$ are synchronized by
arguing that either $p_{1}$’s actions are in the past of $p_{2}$’s or vice
versa. The _law of excluded middle_ is used in this non-constructive argument.
The absence of constructivism is tied to the absence of determinism. While in
the message passing example the program is deterministic, in the mutual
exclusion example it is not. There is no _data_ race in the mutual exclusion
example, but there is still a “race” insofar as the two threads compete for
access to a shared resource. The resource, in this case, is the channel, which
is being used as a lock. The two threads race towards acquiring the lock (i.e.
sending onto the channel) first. The initial configuration has two
transitions, one in which $p_{1}$ acquires the lock first and one in which
$p_{2}$ does. These transitions are non-confluent.
When it comes to reasoning about programs that model hardware, the lack of
constructivism and the non-confluence in the use of channels as locks is a
hindrance. Deterministic languages and constructive logics are needed in order
to rule out scenarios in which two logic gates attempt to drive the same _via_
with different logic values (i.e. a short circuit) [2]. In the case of channel
communication and in the absence of shared memory, determinism can be achieved
by enforcing ownership on channels; for example, by making sure a single
thread can read and a single thread can write on a given channel at any given
point in the execution [37]. It is possible for the ownership on channels to
be passed around the threads in a way that preserves determinism [38].
The examples show that the absence of data races is not enough to
ensure determinism. In general, however, determinism is not a requirement.
Many applications require “only” data-race freedom.
## 4 Efficient data-race detection
We have been gradually introducing a data-race checker. In Section 3.3, we
presented a simple checker that flags after-write races (WaW and RaW) but is
not equipped for write-after-read (WaR) detection. In Section 3.4, we
augmented the detector to handle WaR. Here, we discuss how these detectors can
be implemented efficiently; where efficiency is gained by employing “garbage
collection” to reduce the detector’s memory footprint. Note that keeping one
record per variable is already a form of efficiency gain. In a relaxed memory
model, since there may be more than one value associated with a variable at
any point in the execution, one might keep one record per memory event [10].
The first step towards a smaller footprint is to realize that, if the
underlying memory model supports the DRF-SC guarantee, a data-race detector
can be built assuming sequential consistency. The reason is that, when a data race is flagged, execution stops at the point at which the weak and strong memory models’ executions would diverge.
Knowing that memory events can overtake each other, in this section we discuss
how stale or redundant information can be garbage collected. More precisely,
we show how to garbage collect the data structures that hold happens-before
information, that is, the thread-local happens-before set and the per-memory-
location one.
### 4.1 Most recent write
Terms representing a memory location have taken different shapes when compared
to the undecorated semantics. In the undecorated semantics, the content $v$ of
a variable $z$ is written as a pair $(\\!\\!|z{:=}v|\\!\\!)$. For after-write
race detection, an entry in memory took the form of
$(\\!\\!|E_{\mathit{hb}},\,z{:=}v|\\!\\!)$ with $E_{\mathit{hb}}$ holding
information about prior write events. Our first optimization comes from
realizing that we do not need to keep a set of prior write events. We can
record only the most recent write and still be able to flag all after-write
racy _executions._ With this optimization, we may fail to report all
_accesses_ involved in the race, but we will still be able to report the
execution as racy and to flag the most recent conflicting write event. This
optimization is significant; it reduces the arbitrarily large set of prior
write events to a single point.
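In terms of an implementation's data structures, the optimization amounts to replacing a per-variable set of write labels by a single label. The following Go sketch contrasts the two shapes; the type and field names are ours and purely illustrative.

```go
// Package sketch contrasts the per-variable detector state before and
// after the "most recent write" optimization.
package sketch

// recordAllWrites keeps the value and the identifiers of every prior
// write event to the variable (the form (|E_hb, z:=v|)).
type recordAllWrites struct {
	val      int
	writeIDs map[string]bool
}

// recordMostRecent keeps the value and only the label of the most
// recent write event (the form m(|z:=v|)).
type recordMostRecent struct {
	val       int
	lastWrite string
}
```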
An intuitive argument for the correctness of the optimization comes from
noticing that a successful write to a variable can be interpreted as the
writing thread taking _ownership_ of the variable. Suppose a goroutine $p$ has
just updated variable $z$. At this point, $p$ is the only goroutine whose happens-before set contains the label, say $m$, associated with this write-record. The
placement of the new label into $p$’s happens-before set can be seen as
recording $p$’s _ownership_ of the variable: a data-race is flagged if any
other thread attempts to read or write to $z$ without first synchronizing with
$p$—see the check
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}\in
E_{\mathit{hb}}$ in the premise of the R-Write and R-Read rules of Figure 10.
When $p$ sends a message onto a channel, the information about $m$ is also
sent. Suppose now that a thread $p^{\prime}$ reads from the channel and
receives the corresponding message before $p$ makes any further modifications
to $z$. The tuple
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$
containing the write-record’s label is added to $p^{\prime}$’s happens-before
set. Now both $p$ and $p^{\prime}$ are aware of the most recent write to $z$. The existence of $m$ in both goroutines’ happens-before sets implies that either $p$ or $p^{\prime}$ is allowed to update $z$’s value. We can think of
the two goroutines as _sharing_ $z$. Among $p$ and $p^{\prime}$, whoever
updates $z$ first (re)gains the _exclusive_ rights to $z$.
It may be worth making a parallel with hardware and cache coherence protocols.
Given the derivation rules, we can write a race detector as a state machine.
Compared to the Modified-Exclusive-Shared-Invalid protocol (MESI), our
semantics does not have the _modified_ state: all changes to a variable are
immediately reflected in the configuration, there is no memory hierarchy in
the memory model. As hinted above, the other states can be interpreted as
follows: If the label of the most recent write to a variable is only recorded
in one goroutine’s happens-before set, then we can think of the goroutine as
having _exclusive_ rights to the variable. When a number of goroutines contain
the pair
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$
in their happen-before set with $m$ being the label of the most recent write,
then these goroutines can be thought to be _sharing_ the variable. Other
goroutines that are unaware of the most recent write can be said to hold
_invalid_ data.
### 4.2 Runtime configuration and memory related reduction rules
Given the “most recent write” optimization above, if we were satisfied with flagging only after-write conflicts, an entry in memory would take the form of
$m(\\!\\!|z{:=}v|\\!\\!)$, with the label $m$ uniquely identifying the event
associated with $v$ having been stored into $z$. Being able to flag after-
write but not write-after-read races may be an adequate trade-off between
completeness and efficiency. By not having to record read events, a simplified
detector tailored for after-write race detection has a much smaller footprint
than when read-after-write conflicts are also taken into account. Besides, a
write-after-read race that is not flagged in an execution may realize itself
as a read-after-write race in another run, and then be flagged by the
simplified detector. (Intuitively, say $S_{0}\xrightarrow{e_{0}}S_{1}\xrightarrow{e_{1}}\cdots\xrightarrow{e_{n-1}}S_{n}$ is a run starting from an initial configuration $S_{0}$. Let $\bowtie$ be an independence relation on events, meaning that, given $S_{i}\xrightarrow{e_{i}}S_{i+1}\xrightarrow{e_{i+1}}S_{i+2}$, we say that $e_{i}\bowtie e_{i+1}$ if there exists $S^{\prime}$ such that $S_{i}\xrightarrow{e_{i+1}}S^{\prime}\xrightarrow{e_{i}}S_{i+2}$. The independence relation induces an equivalence relation on traces, namely, traces are equivalent if they can be derived from one another via the permutation of independent events. It can be shown that if $S_{0}\xrightarrow{h}S_{n}$ is a run containing a write-after-read race, there exists an equivalent run in which the race materializes as a read-after-write race.)
In contrast, the detection of write-after-read races requires more book-
keeping: we need read- in addition to write-labels. This addition is required
because a WaR conflict can ensue between an attempted write and _any_ previous
unsynchronized read to the same variable. Therefore, the race-checker is made
to remember all such potentially troublesome reads. (Since, depending on scheduling, a WaR data-race can manifest itself as a RaW race, one option would be not to add instrumentation for WaR race detection and, instead, hope to flag the RaW manifestation. This practical consideration illustrates the trade-off between completeness and run-time overhead.) The runtime configuration is thus modified, this time to contain entries of the form $m(\!\!|E_{\mathit{hb}}^{r},\,z{:=}v|\!\!)$. The label $m$ identifies the most recent write event to $z$, and the set $E_{\mathit{hb}}^{r}$ holds read-event identifiers, namely, the identifiers of the reads that have accumulated after $m$.
$R::=p\langle E_{\mathit{hb}},t\rangle\ \mathrel{|}\ m(\!\!|E_{\mathit{hb}}^{r},\,z{:=}v|\!\!)\ \mathrel{|}\ \bullet\ \mathrel{|}\ R\parallel R\ \mathrel{|}\ c[q]\ \mathrel{|}\ \nu n\ R\ .$ (4)
Note that _records_ of the form
$m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$ can be seen as $n+1$ recorded
events: one write together with $n\geq 0$ read events.
The formal semantics maintains the following invariants. First, the happens-
before information $E_{\mathit{hb}}^{r}$ in
$m(\\!\\!|E_{\mathit{hb}}^{r},\,z{:=}v|\\!\\!)$ contains information of the
form
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}$
only, i.e., there are no write events and all read-events concern variable
$z$. Also, the event labels are unique for both reads and writes. In an abuse
of notation, we may refer to $m$ being in $E_{\mathit{hb}}^{r}$ and write
$m\in E_{\mathit{hb}}^{r}$ meaning, more precisely,
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m,?z)}\in
E_{\mathit{hb}}^{r}$.
$\dfrac{(m,!z)\in E_{\mathit{hb}}\qquad E_{\mathit{hb}}^{r}\subseteq E_{\mathit{hb}}\qquad\mathit{fresh}(m^{\prime})\qquad E_{\mathit{hb}}^{\prime}=\{(m^{\prime},!z)\}\cup\left(E_{\mathit{hb}}-E_{\mathit{hb}}\downarrow_{z}\right)}{p\langle E_{\mathit{hb}},z:=v^{\prime};t\rangle\parallel m(\!\!|E_{\mathit{hb}}^{r},\,z{:=}v|\!\!)\xrightarrow{}p\langle E_{\mathit{hb}}^{\prime},t\rangle\parallel m^{\prime}(\!\!|\emptyset,\,z{:=}v^{\prime}|\!\!)}\ \textsc{R-Write}$
Figure 10: Operational semantics augmented for efficient data-race detection
### 4.3 Garbage collection of happens-before sets
Knowledge of past events contained in a happens-before set $E_{\mathit{hb}}$
is naturally monotonically increasing. For example, each time a goroutine
learns about happens-before information, it adds to its pool of knowledge. In
particular, events that are known to have “happened-before” cannot, by
learning new information, become “concurrent.” An efficient semantics,
however, does not accumulate happens-before information indiscriminately; instead, it purges redundant information. We say “redundant” with respect to the purpose of flagging racy executions: purging may leave out of a race report conflicting accesses that have been overtaken by more recent memory events.
#### 4.3.1 Garbage collection on _writes_
For a thread $t$ to successfully write to $z$, all previously occurring
accesses to $z$ must be in happens-before with the thread’s current state. One
optimization comes from realizing that we can purge all information about prior accesses to the variable $z$ from the happens-before set of the writing thread $t$. We call these prior accesses _redundant_ from the point of view of flagging racy executions. The reason for the correctness of this optimization is as follows: all future accesses of $t$ to $z$ are synchronized with the redundant accesses; after all, those accesses are recorded in $t$’s happens-before set. Therefore, from the perspective of $t$, these accesses do not
affect data-race detection. For the same reason, if a thread $t^{\prime}$
synchronizes with $t$, there is no race to report if and when $t^{\prime}$
accesses memory—the absence of these redundant accesses from $t^{\prime}$’s
happens-before is, therefore, inconsequential. Finally, if $t^{\prime}$ does
not synchronize with $t$, then an access to $z$ is racy because it is
unsynchronized with $t$’s most recent write, regardless of the redundant prior
accesses. Note that this optimization allows us to flag all racy _executions_
even if we fail to report some of the accesses involved in the race.
Rule R-Write of Figure 10 embodies this discussion. Before writing, the rule
checks that the attempted write happens-after all previously occurring
accesses to $z$. This check is done by two premises: premise
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}\in
E_{\mathit{hb}}$ makes sure that the most recent write to $z$, namely, the one
that produced event
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$,
is in happens-before with the current thread state $E_{\mathit{hb}}$. As per
discussion in Section 4.1, being synchronized with the most recent write means
the thread is synchronized with all writes up to that point in the execution.
The other premise, $E_{\mathit{hb}}^{r}\subseteq E_{\mathit{hb}}$, makes sure that the attempted write happens-after all read accesses to $z$ that have accumulated since that most recent write. If these two premises are satisfied, the write can proceed and prior accesses to $z$
are garbage collected from the point of view of $t$. The filtering of
redundant accesses is done by subtracting $E_{\mathit{hb}}\downarrow_{z}$ in
$E_{\mathit{hb}}^{\prime}=\\{{\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m^{\prime},!z)}}\\}\cup\left(E_{\mathit{hb}}-E_{\mathit{hb}}\downarrow_{z}\right)$
where $\downarrow_{z}$ projects the happens-before set down to operations on
variable $z$. Finally, the write rule also garbage collects the in-memory record $E_{\mathit{hb}}^{r}$ by setting it to $\emptyset$, meaning that no read events have accumulated after the new write yet. (As per the discussion in Section 4.1, a term representing a memory location $m(\!\!|E_{\mathit{hb}}^{r},\,z{:=}v|\!\!)$ records in $E_{\mathit{hb}}^{r}$ all the reads to $z$ that have accumulated after the write that generated the label $m$; when a new write $m^{\prime}$ of $z:=v^{\prime}$ ensues, we update the memory term to record this new write and reset its corresponding $E_{\mathit{hb}}^{r}$ to $\emptyset$.)
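A Go sketch of rule R-Write of Figure 10, including the garbage collection of both the writer's happens-before set and the in-memory read set; the encoding of events and records is ours and deliberately simplified.

```go
package main

import "fmt"

// event is a recorded access: a unique label, a read/write flag,
// and the variable it concerns.
type event struct {
	id      string
	isWrite bool
	z       string
}

type hbSet map[event]bool

// record is the per-variable state of Figure 10: the most recent write,
// the reads accumulated since it, and the current value.
type record struct {
	lastWrite event
	reads     hbSet
	val       int
}

// write sketches R-Write with garbage collection: the thread must know the
// most recent write and every accumulated read; afterwards every entry on z
// in the thread's set is replaced by the fresh write label, and the record's
// read set is reset to empty.
func write(threadHB hbSet, rec *record, z string, v int, fresh string) bool {
	if !threadHB[rec.lastWrite] {
		return false // WaW race
	}
	for r := range rec.reads {
		if !threadHB[r] {
			return false // WaR race
		}
	}
	for e := range threadHB { // purge redundant entries on z
		if e.z == z {
			delete(threadHB, e)
		}
	}
	w := event{id: fresh, isWrite: true, z: z}
	threadHB[w] = true
	*rec = record{lastWrite: w, reads: hbSet{}, val: v}
	return true
}

func main() {
	m0 := event{id: "m0", isWrite: true, z: "z"}
	rec := &record{lastWrite: m0, reads: hbSet{}, val: 0}
	p := hbSet{m0: true}
	fmt.Println(write(p, rec, "z", 42, "m1")) // true: properly synchronized
	fmt.Println(len(p))                       // 1: only the fresh write label remains
}
```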
#### 4.3.2 Garbage collection on _reads_
We also garbage collect on load operations. Say $t$ reads from $z$, thus
generating event
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}$.
Let us call _redundant_ the memory accesses to $z$ in $t$’s happens-before set
at the time event
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}$
takes place, with the exception of
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}$.
A read operation can only conflict with a future write; there are no read-read conflicts. For a future write to take place, the writing thread will need to synchronize with a thread that “knows” about the read $m^{\prime}$. (“Knowing about the read $m^{\prime}$” is a necessary condition for a thread to successfully write to $z$, but it is not a sufficient one: there may exist other reads, say $m^{\prime\prime}$, $m^{\prime\prime\prime}$, etc., that are concurrent with $m^{\prime}$, and a thread needs to synchronize with all such concurrent reads before it can successfully write to $z$.) Any thread that knows of $m^{\prime}$ would also know about the redundant accesses to $z$ and about $(m,!z)$.
In other words, $m^{\prime}$ and $m$ subsume all happened-before accesses of
$z$ from the perspective of $t$. Therefore, we can garbage collect all such
accesses by filtering them out of the thread’s happens-before set, as in
$E_{\mathit{hb}}^{\prime}=\\{{\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}\\}\cup\left(E_{\mathit{hb}}-E_{\mathit{hb}}\downarrow_{z}\right)\cup\\{{\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{(m,!z)}}\\}\text{.}$
These redundant accesses are also filtered out of the in-memory happens-before
set:
$E_{\mathit{hb}}^{\prime
r}=\\{{\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}(m^{\prime},?z)}\\}\cup\left(E_{\mathit{hb}}^{r}-E_{\mathit{hb}}\downarrow_{z}\right)\text{.}$
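For concreteness, consider hypothetical labels and suppose $E_{\mathit{hb}}=\{(m,!z),\,(m_{0},?z),\,(k,!y)\}$ when $t$ reads $z$, generating the fresh label $m^{\prime}$. The filter then yields

$E_{\mathit{hb}}^{\prime}=\{(m^{\prime},?z)\}\cup\left(E_{\mathit{hb}}-E_{\mathit{hb}}\downarrow_{z}\right)\cup\{(m,!z)\}=\{(m^{\prime},?z),\,(k,!y),\,(m,!z)\}\text{,}$

so the older read $(m_{0},?z)$ is purged, while the most recent write $(m,!z)$ and the unrelated event on $y$ are retained.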
#### 4.3.3 Off-line garbage collection and channel communication
The garbage collector rules of Figure 11 can be run non-deterministically
during the execution of a program. Rule R-GC eliminates stale entries from the
happens-before set of a thread. It can be sensible to perform garbage
collection also _after_ a thread interacts with a channel, as happens-before
information communicated via channels is likely to become stale. For example, suppose a thread, whose happens-before set does not contain stale entries, sends onto a channel and continues executing. By the time a receive takes place, the happens-before set transmitted via the channel may have become stale. The same holds for happens-before information transmitted between receives and prior sends via the backward channel. Alternatively, we may choose an implementation in which the happens-before sets of in-flight messages are also garbage collected, in which case we would process the happens-before sets in a channel's forward and backward queues.
Figure 11: Off-line garbage collection
## 5 Comparison with vector-clock based race detection
Vector clocks (VCs) are a mechanism for capturing the happens-before relation over events emanating from a program’s execution [22]. A _vector clock_
$\mathds{V}$ is a function
$\mathrel{\mathtt{Tid}}\rightarrow\mathrel{\mathtt{Nat}}$ which records a
clock, represented by a natural number, for each thread in the system. “VCs
are partially-ordered ($\sqsubseteq$) in a pointwise manner, with an
associated join operation ($\sqcup$) and minimal element ($\bot_{V}$). In
addition, the helper function $inc_{t}$ increments the $t$-component of a VC”
[11].
$\displaystyle\mathds{V}_{1}\sqsubseteq\mathds{V}_{2}\ \text{ iff }\ \forall t.\ \mathds{V}_{1}(t)\leq\mathds{V}_{2}(t)$
$\displaystyle\mathds{V}_{1}\sqcup\mathds{V}_{2}\ =\ \lambda t.\ max(\mathds{V}_{1}(t),\mathds{V}_{2}(t))$
$\displaystyle\bot_{V}\ =\ \lambda t.\ 0$
$\displaystyle inc_{t}(\mathds{V})\ =\ \lambda u.\ \mathrel{\mathtt{if}}u=t\mathrel{\mathtt{then}}\mathds{V}(u)+1\mathrel{\mathtt{else}}\mathds{V}(u)$
Using vector clocks, Pozniansky and Schuster [29] proposed a data-race detection algorithm referred to as Djit$^{+}$.
Their algorithm works as follows. Each thread $t$ is associated with a vector
clock $\mathds{C}_{t}$. Entry $\mathds{C}_{t}(t)$ stores the current time at
$t$, while $\mathds{C}_{t}(u)$ for $u\neq t$ keeps track of the time of the
last operation “known” to $t$ as having been performed by thread $u$.
The algorithm also keeps track of memory operations. Each memory location $x$
has two vector clocks, one associated with reads, $\mathds{R}_{x}$, and
another with writes, $\mathds{W}_{x}$. The clock of the last read from variable $x$ by thread $t$ is recorded in $\mathds{R}_{x}(t)$; similarly for
$\mathds{W}_{x}(t)$ and writes to $x$ by $t$. When it comes to reading from
memory, a race is flagged when a thread $t$ attempts to read from $x$ while
being “unaware” of some write to $x$. Precisely, a race is flagged when $t$
attempts to read from $x$ and there exists a write to $x$ by thread $u$,
$\mathds{W}_{x}(u)$, that is not accounted for by $t$, meaning
$\mathds{W}_{x}(u)\geq\mathds{C}_{t}(u)$, or, equivalently,
$\mathds{W}_{x}\not\sqsubseteq\mathds{C}_{t}$. If $t$ succeeds in reading from
$x$, then $\mathds{R}_{x}(t)$ is updated to the value of $\mathds{C}_{t}(t)$.
When it comes to writing to memory, a race is flagged when $t$ attempts to
write to $x$ while being unaware of some read or write to $x$, meaning either
$\mathds{R}_{x}\not\sqsubseteq\mathds{C}_{t}$ or
$\mathds{W}_{x}\not\sqsubseteq\mathds{C}_{t}$. If $t$ succeeds in writing to
$x$, then $\mathds{W}_{x}(t)$ is updated to $\mathds{C}_{t}(t)$.
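The following Go sketch is our own rendering (not the original Djit$^{+}$ pseudo-code) of the vector-clock operations above together with the read and write checks just described.

```go
package main

import "fmt"

// VC maps a thread id to a clock; absent entries read as 0, matching the bottom element.
type VC map[string]int

// leq is the pointwise order on vector clocks.
func (a VC) leq(b VC) bool {
	for t, c := range a {
		if c > b[t] {
			return false
		}
	}
	return true
}

// varState holds the per-variable clocks: last reads and last writes.
type varState struct {
	R VC
	W VC
}

// read flags a race if some write to x is not accounted for by t,
// i.e. W_x is not below C_t; on success it records R_x(t) := C_t(t).
func read(t string, C VC, x *varState) bool {
	if !x.W.leq(C) {
		return false // race
	}
	x.R[t] = C[t]
	return true
}

// write additionally requires R_x below C_t; on success W_x(t) := C_t(t).
func write(t string, C VC, x *varState) bool {
	if !x.W.leq(C) || !x.R.leq(C) {
		return false // race
	}
	x.W[t] = C[t]
	return true
}

func main() {
	x := &varState{R: VC{}, W: VC{}}
	Cp := VC{"p": 1}
	Cq := VC{"q": 1} // q has not synchronized with p
	fmt.Println(write("p", Cp, x)) // true
	fmt.Println(read("q", Cq, x))  // false: q is unaware of p's write
}
```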
A thread’s clock is advanced when the thread executes synchronization
operations, which have bearing on the happens-before relation. The algorithm
was proposed in the setting of locks; each lock $m$ is associated with a
vector clock $\mathds{L}_{m}$. When a thread $t$ acquires $m$, then
$\mathds{C}_{t}$ is updated to $\mathds{C}_{t}\sqcup\mathds{L}_{m}$. Acquiring
a lock is analogous to receiving from a channel with buffer size one: the
receiving thread updates its vector clock by incorporating the VC previously
“stored” in the lock. When a thread $t$ releases a lock $m$, the vector clock
$\mathds{L}_{m}$ is updated to $\mathds{C}_{t}$ and the thread’s clock is advanced, meaning $\mathds{C}_{t}:=inc_{t}(\mathds{C}_{t})$. We can think of
lock release as placing a message, namely the vector clock associated with the
releasing thread, into a buffer of size one. Thus, in comparison with the
approach presented in our paper, lock operations are a special case of
buffered channel communication. Our paper deals with channels of arbitrary
size and their capacity limitations.
A significant difference between our approach and Djit+ is that we dispense
with the notion of vector clocks. Vector clocks are a conceptual vehicle for capturing the partial order of events. Instead of relying on VCs, our formalization is tied directly to the concept of happens-before. Vector clocks are expensive: VCs require $O(\tau)$ storage space, and common operations on VCs consume $O(\tau)$ time, where $\tau$ is the number of entries in the vector. In the case of race detection, $\tau$ is the number of threads spawned during the execution of a program. It turns out that not all uses of VCs in
Djit+ are strictly necessary. In fact, Flanagan and Freund [11] introduce the concept of _epoch_, which consists of a pair ${c}@{t}$ where $c$ is a
clock and $t$ a thread identifier. They then replace $\mathds{W}_{x}$, the
vector clock tracking writes to $x$, with a single epoch. This epoch captures
the clock and thread identity associated with the most recent write to $x$.
Similarly, in our approach, a memory location is associated with the
identifier of only the most-recent write to that location. Any thread who is
“aware” of this identifier is allowed to read from the corresponding variable.
FastTrack also reduces the dependency on vector clocks by replacing
$\mathds{R}_{x}$ with the epoch of the most recent read to $x$. However, since
reads are not totally ordered, FastTrack dynamically switches back to a vector
clock representation when needed. Similar to FastTrack, we record the most
recent (unordered) reads which, in the best case, involves an $O(1)$-memory
footprint and $O(\tau)$ at the worst.
When it comes to per-thread memory consumption, however, the two approaches look very different. While Djit+’s and FastTrack’s worst-case memory consumption per thread is $O(\tau)$, ours is $O(\nu\tau)$, where $\nu$ is the number of shared variables in a program. (We believe the worst case is a degenerate case unlikely to happen: it involves every thread reading from every shared variable and then exchanging messages so as to inform everyone else about their read events.) Vector clocks’ memory efficiency, when compared to happens-before sets, comes from the VCs’ ability to succinctly capture the per-thread accesses that take place between advances of a clock. A thread’s clock is advanced when the thread releases a lock. (If channels were used instead of locks, the advance would take place when a thread sends onto or receives from a channel.) All accesses made by a thread $t$ in a given clock $c$ are captured
by the clock: if another thread $u$ “knows” the value $c$ of $t$’s clock, then
$u$ is in happens-after with _all_ accesses made by $t$—that is, all accesses
up to when $t$’s clock was advanced to $c+1$. In contrast, the happens-before set representation is much more fine-grained: we keep track of individual accesses, as opposed to lumping them together into a clock number. This granularity explains the extra factor of $\nu$ in the worst-case analysis of the happens-before set solution. Although a disadvantage in the worst-case scenario, it also provides benefits, as we discuss next.
Note that the vector-clocks associated with threads and locks grow
monotonically. By _growing monotonically_ we do not mean that time marches
forward to increasing clock values. Instead, we mean that the number of clocks
in a vector grows without provisions for the removal of entries. This growth
can lead to the accumulation of “stale” information, where by stale we mean
information that is not useful from the point of view of race detection. This
growth stands in contrast to our approach to garbage collection. Stale
information is purged from happens-before sets, which means they can shrink
back to size zero after having grown in size.
Let us look at an example that illustrates this difference in treatment of
stale information. Consider the producer/consumer paradigm, where a thread
produces information to be consumed by other threads. Say $p_{0}$ produces
information by writing to the shared variable $z$. The thread then notifies
consumers, $p_{1}$ and $p_{2}$, by sending a message on channel $c$. The
consumers read from $z$ and signal the fact that they are done consuming by
sending onto channel $d$. The producer $p_{0}$ writes to $z$ again once it has
received the consumers’ messages.
Producer $p_{0}$: $\ z:=42;\ c\leftarrow 0;\ c\leftarrow 0;\ \mathop{\leftarrow d};\ \mathop{\leftarrow d};\ z:=43$
Consumer $p_{1}$: $\ \mathop{\leftarrow c};\ \mathrel{\mathtt{load}}z;\ d\leftarrow 0$
Consumer $p_{2}$: $\ \mathop{\leftarrow c};\ \mathrel{\mathtt{load}}z;\ d\leftarrow 0$
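As Go source, the producer/consumer program reads as follows; buffered channels of capacity 2 are one way to realize the two notifications and the two acknowledgements.

```go
package main

import "fmt"

var z int

func main() {
	c := make(chan int, 2) // producer -> consumers notifications
	d := make(chan int, 2) // consumers -> producer acknowledgements

	consumer := func() {
		<-c            // wait for the producer's notification
		fmt.Println(z) // consume the produced value
		d <- 0         // signal that consumption is done
	}

	go consumer() // p1
	go consumer() // p2

	// p0
	z = 42
	c <- 0
	c <- 0
	<-d
	<-d
	z = 43 // safe: both consumers' reads are now in p0's past
}
```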
Let us run this example against a prototype implementation [9] of our proposed
race detector, called Grace, and against FastTrack. Consider the point in the
execution after $p_{0}$ has written to $z$, the consumers have read from $z$
and notified $p_{0}$, and $p_{0}$ is about to write to $z$ again. Below is the
state of the detectors at this point. The information contained in the
happens-before sets and the vector-clocks is very similar. There are three
entries for $p_{0}$, and two entries for $p_{1}$ and $p_{2}$ each.
Happens-before sets | Vector clocks
---|---
$E_{\mathit{hb}}^{p_{0}}=\{(m_{0},!z),\,(m_{1},?z),\,(m_{2},?z)\}$ | $\mathds{C}_{p_{0}}=\bot[p_{0}\mapsto 6,p_{1}\mapsto 1,p_{2}\mapsto 1]$
$E_{\mathit{hb}}^{p_{1}}=\{(m_{0},!z),\,(m_{1},?z)\}$ | $\mathds{C}_{p_{1}}=\bot[p_{0}\mapsto 2,p_{1}\mapsto 2]$
$E_{\mathit{hb}}^{p_{2}}=\{(m_{0},!z),\,(m_{2},?z)\}$ | $\mathds{C}_{p_{2}}=\bot[p_{0}\mapsto 3,p_{2}\mapsto 2]$
The happens-before set $E_{\mathit{hb}}^{p_{0}}$ shows the reads by $p_{1}$ and $p_{2}$ as being in happens-before with respect to $p_{0}$, along with $p_{0}$’s own write to $z$. The sets of $p_{1}$ and $p_{2}$ show these goroutines as being “aware” of $p_{0}$’s write to $z$, as well as of their own reads to $z$.
The same information is captured by the vector clocks. Recall that the bottom
clock, $\bot$, maps every process-id to the clock value of $0$. Thus, the VC
associated with $p_{0}$ contains $p_{0}$’s clock (which happens to be 6) as
well as the clock associated with the reads to $z$ by $p_{1}$ and $p_{2}$. In
this execution, $p_{0}$’s clock was $2$ when the thread wrote to $z$. Thus,
the entry $p_{0}\mapsto 2$ in $\mathds{C}_{p_{1}}$ and the entry $p_{0}\mapsto
3$ in $\mathds{C}_{p_{2}}$ place the write to $z$ by $p_{0}$ in $p_{1}$’s and
$p_{2}$’s past.
The difference between our approach and the VC-based approach is evidenced in the next step of the execution, when $p_{0}$ writes to $z$ for the second time. This
write subsumes all previous memory interactions on $z$. In other words, this
write is in happens-after with respect to all reads and writes to $z$ up to
this point in the execution of the program. Therefore, it is sufficient for a
thread to synchronize with $p_{0}$ before issuing a new read or write to $z$;
also, it is no longer necessary to remember the original write to $z$ and the
reads from $z$ by $p_{1}$ and $p_{2}$. Here are the happens-before sets and
vector-clocks in the next step of execution, meaning, after $p_{0}$ writes to
$z$ the second time:
Happens-before sets | Vector clocks
---|---
$E_{\mathit{hb}}^{p_{0}}=\{(m_{3},!z)\}$ | $\mathds{C}_{p_{0}}=\bot[p_{0}\mapsto 6,p_{1}\mapsto 1,p_{2}\mapsto 1]$
$E_{\mathit{hb}}^{p_{1}}=\{\}$ | $\mathds{C}_{p_{1}}=\bot[p_{0}\mapsto 2,p_{1}\mapsto 2]$
$E_{\mathit{hb}}^{p_{2}}=\{\}$ | $\mathds{C}_{p_{2}}=\bot[p_{0}\mapsto 3,p_{2}\mapsto 2]$
The happens-before sets are mostly empty; the only entry corresponds to the
most recent write to $z$, which is known to $p_{0}$. Meanwhile, the vector
clocks are unchanged. Note, however, that every entry with the exception of
$p_{0}\mapsto 6$ in $\mathds{C}_{p_{0}}$ is stale. In other words, with the
exception of $p_{0}\mapsto 6$, the presence or absence of all other entries
does not alter a thread’s behavior. To illustrate this point, take entry
$p_{0}\mapsto 2$ in $\mathds{C}_{p_{1}}$ as an example: if $p_{1}$ were to
attempt to access $z$, a data race would ensue regardless of whether or not the
entry $p_{0}\mapsto 2$ is in $p_{1}$’s vector clock. Therefore, ideally, we
would want these stale entries purged from the vector-clocks of $p_{0}$,
$p_{1}$, and $p_{2}$. Concretely, we would want
$\mathds{C}_{p_{0}}=\bot[p_{0}\mapsto 6]$ and
$\mathds{C}_{p_{1}}=\mathds{C}_{p_{2}}=\bot$.
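To make the purging step concrete, the following Go sketch (the types and helper names are ours, for illustration only, and are not the detector’s formal syntax) shows a thread-local happens-before set dropping all entries for a variable once a new write to that variable subsumes them:

```go
package main

import "fmt"

// Event is a memory event: a write (!) or read (?) of a variable,
// tagged with a fresh identifier such as m0, m1, ...
type Event struct {
	ID  string // e.g. "m3"
	Op  rune   // '!' for a write, '?' for a read
	Var string // e.g. "z"
}

// HBSet plays the role of a thread-local happens-before set E_hb.
type HBSet map[Event]bool

// Write records a new write to variable v. Since the new write is in
// happens-after with every read and write of v already known to this
// thread, those entries are purged (the "garbage collection" step) and
// only the new write is kept for v.
func (e HBSet) Write(id, v string) {
	for ev := range e {
		if ev.Var == v {
			delete(e, ev) // stale: subsumed by the new write
		}
	}
	e[Event{ID: id, Op: '!', Var: v}] = true
}

func main() {
	ehb := HBSet{
		{ID: "m0", Op: '!', Var: "z"}: true,
		{ID: "m1", Op: '?', Var: "z"}: true,
		{ID: "m2", Op: '?', Var: "z"}: true,
	}
	ehb.Write("m3", "z")
	fmt.Println(len(ehb)) // prints 1: only (m3, !z) remains
}
```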
Similar unbounded growth occurs in the VCs associated with locks (the
_acquire_ grows the VC associated with the acquiring thread; the _release_
sets the VC of the corresponding lock to the VC of the acquiring thread), thus
also leading to the accumulation of stale information. We conjecture that an
approach that purges stale information from VCs, similar to our notion of
garbage collection, would be highly beneficial. VC-based implementations
are very efficient in managing the memory overhead associated with variables.
For example, TSan, a popular race-detection library based on vector clocks
that comes with the Go tool chain, stores one write and a small number of
reads per memory location (the number of reads stored is 4 in the current
implementation) [15]. Capping the number of tracked read events leads to false
negatives; the cap is a fair compromise between _recall_ and memory consumption.
In order to further reduce the memory footprint of modern race detection
implementations, we are thus left with devising approaches to managing
threads’ and locks’ memory overhead.
Unfortunately, reducing memory pressure on vector-clocks associated with
threads and locks is arguably more difficult than reducing memory pressure on
VCs associated with shared variables. On the one hand, if a variable does not
“remember” a read or write to itself as having happened-before, then the
variable becomes more permissive from the point of view of race detection;
meaning, more threads would be able to interact with this variable without
raising a data race, even when races should have been reported. On the other
hand, if a _thread_ “forgets” about some prior read or write access that has
taken place on a variable, a spurious data race may be raised. Thus, while
dropping clock entries in the VCs associated with variables can introduce
false negatives, dropping clock entries from VCs associated with threads and
locks introduces false positives. From a practical perspective, false negatives
are acceptable and can even be mitigated (provided we run a program
enough times, we can randomly evict entries from a VC or happens-before set
associated with a variable so that we eventually flag all existing races of
the program); however, being warned of non-existing races is overwhelming to
the application programmer, which means false positives are generally not
tolerated.
## 6 Connections with trace theory
Our operational semantics mimics the Go memory model in defining
synchronization in terms of channel communication. Specifically, we abide by
rules (1) and (2), which establish a happens-before relation between a send
and the completion of its corresponding receive, and, due to the boundedness
of channels, between a receive and the completion of a future send. However,
these are not the only constraints the semantics imposes on the order of events.
Channels act as FIFO queues in both Go [4] as well as in our operational
semantics. However, neither Go nor our operational semantics establish a
happens-before relation between consecutive sends or consecutive receives. For
example, the $i^{\mathit{th}}$ send on a channel $c$ does not happen-before
the $(i+1)^{\mathit{th}}$ send on $c$. Therefore, there exist events that are
necessarily ordered, but that are not in happens-before relation.
It is tempting to think of happens-before in terms of observations, where $a$
and $b$ are in happens-before if and only if we observe $a$ followed by $b$,
and never the other way around. This intuition is captured by the following
tentative definition:
Let $\mathrel{\mathtt{idx}}({a},{h})$ be the index of event $a$ in a run $h$.
Given the set of runs $H$ starting from an initial configuration, we say that
event $a$ _happens-before_ $b$ if-and-only-if, for all runs $h\in H$ such that
$a,b\in h$, $\mathrel{\mathtt{idx}}({a},{h})<\mathrel{\mathtt{idx}}({b},{h})$.
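Read operationally, and purely as a sketch of the tentative definition above (the run representation and helper function are ours), the check amounts to:

```go
// tentativeHB reports whether event a occurs before event b in every run
// that contains both: the tentative reading of happens-before as
// "necessarily-occurring-before" discussed in this section.
func tentativeHB(a, b string, runs [][]string) bool {
	idx := func(e string, h []string) int {
		for i, x := range h {
			if x == e {
				return i
			}
		}
		return -1 // e does not occur in h
	}
	for _, h := range runs {
		ia, ib := idx(a, h), idx(b, h)
		if ia >= 0 && ib >= 0 && ia >= ib {
			return false // some run observes b at or before a
		}
	}
	return true
}
```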
When it comes to weak memory systems, there exist events that are ordered
according to the above tentative definition but that are not in happens-before
relation. Take the improperly synchronized message-passing example of Figure
12 as an example. In this example, a thread $p_{0}$ writes to a shared
variable `z` and sets a flag; another thread, $p_{1}$, checks the flag and reads
from `z` if the flag has been set.
$p_{0}$ | | $p_{1}$ |
---|---|---|---
z := 42; | $(A)$ | r = load done; | $(C)$
done := true; | $(B)$ | if r then |
| | load z | $(D)$
Figure 12: Message passing example.
If $A$ and $B$ are the first and second instructions in thread $p_{0}$, and
$C$ and $D$ are the loads of the flag and of the shared variable $z$ in
$p_{1}$, then program order gives rise to $A\rightarrow_{\mathsf{hb}}B$ and
$C\rightarrow_{\mathsf{hb}}D$. We also have that the load of $z$ in $D$ only
occurs if the value of the flag observed by thread $p_{1}$ is `true`, which
means it was previously set by thread $p_{0}$ in $B$. Therefore, in all runs
in which $D$ is observed, $B$ necessarily occurs earlier in the execution.
This necessity does not, however, place $B$ and $D$ in happens-before
relation. Under many flavors of weak memory, the memory accesses between the
two threads are not synchronized. As the example shows, our tentative
definition of happens-before as always-occurring-before or necessarily-
occurring-before does not work for weak memory systems. How about for
sequentially consistent ones?
In the program of Figure 13, thread $p_{0}$ sends values $0$ and $1$ into
channel $c$ consecutively. Concurrently, thread $p_{1}$ writes $42$ to a
shared variable $z$ and receives from the channel, while thread $p_{2}$ first
receives from the channel and conditionally reads from $z$. From this program,
we construct an example in which events are necessarily ordered but are not in
happens-before—even if we assume sequential consistency.
$\displaystyle p_{0}\langle c\leftarrow 0;c\leftarrow 1\rangle$ $\displaystyle
p_{1}\langle z:=42;\ \mathop{\leftarrow c}\rangle$ $\displaystyle
p_{2}\langle\mathrel{\mathtt{let}}r:=\mathop{\leftarrow c}\
\mathrel{\mathtt{in}}\ \mathrel{\mathtt{if}}r=1\ \mathrel{\mathtt{then}}\
\mathrel{\mathtt{load}}z\rangle$ Figure 13: Conditional race example.
To illustrate this point, let us consider an execution of the program. Let
$({o})_{p}$ be a trace event capturing the execution of operation $o$ by
thread $p$. Let also ${z}!{}$ and ${z}?{}$ represent a write and read
operation on the shared variable $z$, and $\mathrel{\mathtt{sd}}\ {c}\ {}$ and
$\mathrel{\mathtt{rv}}\ {c}\ {}$ represent send and receive operations on
channel $c$. Assuming channel capacity $|\,{c}\,|\geq 2$, one possible trace of
the program is an interleaving in which $p_{1}$ receives from $c$ before
$p_{2}$ does; the if-statement’s reduction is interpreted as an internal or
silent transition. Given that $p_{1}$ receives from $c$ first, the value received
by $p_{2}$ must be $1$ as opposed to $0$. Therefore, $p_{2}$ takes the branch
and reads from the shared variable $z$. Figure 14 shows the partial order on
events for this execution.
Figure 14: Partial order on events in the conditional-race example.
Program order is captured by the vertical arrows in the diagram; channel
communication is captured by the solid diagonal arrows. As per discussion in
Section 3.2.3, we make the distinction between a channel operation and its
completion. A channel operation is depicted as two half-circles; the
operation’s completion is captured by the bottom half-circle. That way, a send
(top of the half-circle) happens-before its corresponding receive completes
(bottom half).
Now, given that the send operations are in happens-before, meaning
$({\mathrel{\mathtt{sd}}\ {c}\
{0}})_{p_{0}}\rightarrow_{\mathsf{hb}}({\mathrel{\mathtt{sd}}\ {c}\
{1}})_{p_{0}}$, and that channels are First-In-First-Out (FIFO), the reception
of value $0$ from $c$ must occur before the reception of $1$. This requirement
is captured by the dotted arrow in the diagram. However, according to the
semantics of channel communication (i.e. rules (1) and (2) of page 1), this
order does not impose a happens-before relation between the receiving events.
In other words, there exist events that are necessarily ordered, but not in
happens-before relation to one another.
The failure of our tentative definition of happens-before as necessarily-
occurring-before, given earlier in this section, has subtle implications as
discussed next.
### 6.1 Happens-before, traces, and commutativity of operations
Traces come from observing the execution of a program and are expressed as
strings of events. In a concurrent system, however, events may not be causally
related, which means that the order of some events is not pre-imposed. In
reality, instead of sequences, events in a concurrent system form a partially
ordered set (see Figure 14 for an example). As advocated by Mazurkiewicz,
[23], it is useful to combine sequential observations with a dependency
relation for studying “the nonsequential behaviour of systems via their
sequential observations.” By defining an _independence relation_ on events, it
is possible to derive a notion of equivalence on traces: two traces are
equivalent if it is possible to transform one into the other “by repeatedly
commuting adjacent pairs of independent operations” [17].
One way to define independence is as follows: Given a run
$R_{i}\xrightarrow{a}\cdot\xrightarrow{b}R$, we say that $a$ and $b$ are
independent if $R_{i}\xrightarrow{b}\cdot\xrightarrow{a}R$, meaning,
* •
$b$ is enabled at $R_{i}$,
* •
$a$ is enabled at $R_{i}\xrightarrow{b}\cdot$, and
* •
there exists an $R^{\prime}$ such that
$R_{i}\xrightarrow{b}R^{\prime}\xrightarrow{a}R$.
Clearly, if $a$ happens-before $b$, then $a$ and $b$ cannot be swapped in a
trace. So, independence between two events means (at least) the absence of
happens-before relation between them. But happens-before is not all that needs
to be considered in the definition of independence.
When translating a partial order of events to a trace, not every linearization
that respects the happens-before relation is a valid trace. Some
linearizations of the partial order may not be “realizable” by the operational
semantics. In other words, there can be traces that abide by the happens-
before relation but that cannot be generated from the execution of a program.
For example, from the partial order of Figure 14 we can obtain a linearization
in which $p_{2}$ receives from the channel before $p_{1}$ does, yet still goes
on to read from $z$. This linearization respects the partial order based on the
happens-before relation: program order is respected, and so is the relation
between sends and their corresponding receives. However, this linearization
breaks the first-in-first-out assumption on channels. FIFO is broken because,
in order for $p_{2}$ to read from $z$, it must be that it received the value
$1$ from the channel. But $p_{2}$ is the first thread to receive from the
channel and, since $0$ was the first value into the channel, it must also have
been the first value read from the channel. Therefore, this linearization is
not “realizable” by the
operational semantics. While happens-before restricts the commutation of trace
operations, there exist other operations that are ordered (though not ordered
by happens-before) and that, consequently, must not commute.
The difficulty in reconciling the commutativity of trace events with the
happens-before relation remains counterintuitive today, even though its
origins are related to an observation made years ago in a seminal paper by
Lamport, [18]. In the paper, Lamport points out that “anomalies” can arise
when there exist orderings that are external to the definition of happens-
before—see the “Anomalous Behavior” section of [18]. In order to avoid these
anomalies, one suggestion from the paper is to expand the notion of happens-
before so that, if $a$ and $b$ are necessarily ordered, then $a$ and $b$ are
also in happens-before.
Let us analyze the consequences of rolling FIFO notions into the definition of
happens-before. Given the example of Figure 13, since the sends are ordered in
a happens-before relation, and the channel is FIFO, one can argue that the
receive events should also be ordered by happens-before. According to this
argument, we ought to promote the dotted line in Figure 14 to a solid
$\rightarrow_{\mathsf{hb}}$ arrow. This modification would make the example
well-synchronized. On the one hand, given that the write to $z$ by $p_{1}$ and the
read from $z$ by $p_{2}$ are _always_ separated by events (specifically, by the
two receive events), interpreting the two memory accesses as being
synchronized seems rather fitting: the two memory accesses cannot happen
simultaneously, nor can they exist side-by-side in a trace.
There are downsides to this approach. For one, the resulting semantics
deviates from Go’s, but, more importantly, such a change does impact
synchronization in counterintuitive ways. Specifically, making the dotted
arrow a happens-before arrow would imply that a receiver (in this case
$p_{2}$) can learn about prior events that are not known by the corresponding
sender. If the dotted arrow is promoted to a synchronization arrow, the write
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{({{z}!{}})_{p_{1}}}}$
is communicated to $p_{2}$ via $p_{0}$ without $p_{0}$ itself being “aware” of
the write. In other words, the write identifier is transmitted via $p_{0}$ but
is not present in $p_{0}$’s happens-before set.
We follow Go and allow for some events to always occur in order without
affecting synchronization. Consequently, such ordered events are not
considered to be in _happens-before_ order. A less clear consequence, however,
is that races can no longer be defined as simultaneous (or side-by-side) accesses
to a shared variable. This point is explored next.
### 6.2 Manifest data races
Section 2 mentioned the concept of manifest data race; below we give a
concrete definition.
###### Definition 1 (Manifest data race)
A well-formed configuration $R$ contains a _manifest data race_ if either of
the following holds:
$\displaystyle R\xrightarrow{({{z}!{}})_{p_{1}}}\text{~{}and~{}}R\xrightarrow{({{z}!{}})_{p_{2}}}$ (manifest write-write race on $z$)
$\displaystyle R\xrightarrow{({{z}?{}})_{p_{1}}}\text{~{}and~{}}R\xrightarrow{({{z}!{}})_{p_{2}}}$ (manifest read-write race on $z$)
for some $p_{1}\not=p_{2}$.
Manifest data races can also be defined on traces.
###### Definition 2 (Manifest data race)
A well-formed trace $h$ contains a _manifest data race_ if any of
$\displaystyle({{z}!{}})_{p_{1}}~{}({{z}!{}})_{p_{2}}$ (manifest write-after-write)
$\displaystyle({{z}!{}})_{p_{1}}~{}({{z}?{}})_{p_{2}}$ (manifest read-after-write)
$\displaystyle({{z}?{}})_{p_{1}}~{}({{z}!{}})_{p_{2}}$ (manifest write-after-read)
is a sub-sequence of $h$, where $p_{1}\not=p_{2}$.
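As an illustration only (the event representation is ours, and “sub-sequence” is read here as adjacency in the trace, i.e. side-by-side events), a Go sketch of scanning a trace for a manifest data race in the sense of Definition 2:

```go
// MemEvent is a trace event (o)_p restricted to memory operations:
// Thread is the executing thread, Var the shared variable, and Write
// distinguishes z! from z?.
type MemEvent struct {
	Thread string
	Var    string
	Write  bool
}

// manifestRace reports whether h contains two side-by-side conflicting
// accesses to the same variable by different threads: write-after-write,
// read-after-write, or write-after-read.
func manifestRace(h []MemEvent) bool {
	for i := 0; i+1 < len(h); i++ {
		a, b := h[i], h[i+1]
		if a.Thread != b.Thread && a.Var == b.Var && (a.Write || b.Write) {
			return true
		}
	}
	return false
}
```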
While manifest races are obvious, races in general may involve accesses that
are arbitrarily “far apart” in a linear execution. By bringing conflicting
accesses side-by-side, we can show irrefutable evidence of a race that may
otherwise be obscured in a trace. Let $h\sqsubseteq h^{\prime}$ represent
the fact that $h^{\prime}$ is derivable from $h$ by the repeated commutation
of adjacent pairs of independent operations. If $h\sqsubseteq h^{\prime}$ and
$h^{\prime}$ contains a manifest data race, then we say $h$ contains a data
race. This definition of races seems unequivocal. From here, soundness and
completeness of a race detector may be defined as follows:
###### Theorem 6.1
(Soundness) If $S_{0}\xrightarrow{h}$ is a run flagged by a data-race
detector, then $h\sqsubseteq h_{dr}$ with $h_{dr}$ containing a manifest data-
race.
###### Theorem 6.2
(Completeness) Let $S_{0}\xrightarrow{h}$ be a run such that $h\sqsubseteq
h_{dr}$ and $h_{dr}$ contains a manifest race. Then $S_{0}\xrightarrow{h}$ is
flagged by the data-race detector.
Theorems 6.1 and 6.2 are also clear and unequivocal. More importantly, they
link two world views: the view of races as unsynchronized accesses with
respect to the happens-before relation and a view of races in terms of
commutativity of trace events à la Mazurkiewicz. The problem with the concept
of manifest data race and Theorems 6.1 and 6.2, however, is that when the
definition of independence is made to respect FIFO order as well as the
happens-before relation, the notion of manifest data race is no longer
attainable. In other words, given a definition of independence which respects
FIFO and happens-before, there exist racy traces from which a manifest data
race is not derivable.
The program of Figure 13 gives rise to such an example. The access to $z$ by
$p_{2}$ only occurs if $p_{2}$ receives the second message sent on the
channel. In other words, the existence of event
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}({{z}?{}})_{p_{2}}}$
in a trace is predicated on the order of execution of channel operations:
$p_{2}$ only reads from $z$ if the other thread, $p_{1}$, receives from $c$
before $p_{2}$ does. (In this example, we use the value of the message
received on a channel to branch upon. But since a receive from a channel
changes a thread’s “visibility” of what is in memory, it is possible to craft
a similar example in which all message values are unit but in which a thread’s
behavior changes due to a change in the ordering of the receives.) This
requirement places the receive operations between the memory operations.
Therefore, a trace in which
${\color[rgb]{0.7,0.2,0.2}\definecolor[named]{pgfstrokecolor}{rgb}{0.7,0.2,0.2}{({{z}!{}})_{p_{1}}}}$
and
${\color[rgb]{0.2,0.2,0.7}\definecolor[named]{pgfstrokecolor}{rgb}{0.2,0.2,0.7}({{z}?{}})_{p_{2}}}$
are side-by-side is not attainable. Yet, as discussed previously, the accesses
to $z$ are not ordered by happens-before, and, therefore, are concurrent.
Since the accesses are also conflicting, they constitute a data race.
It seems that Mazurkiewicz traces are “more compatible” with confluence
checking than data-race checking. In data-race checking, there are non-
confluent runs that do not exhibit data races; these runs are non-confluent
because they have “races on channels.” In our example, the two receives from
$p_{1}$ and $p_{2}$ are in competition for access to the channel. These
receive operations are concurrent and non-confluent. Finally, the example also
hints at a perhaps more fundamental observation: that races have little to
do with _simultaneous_ accesses to a shared variable but instead with
_unsynchronized_ accesses. While simultaneous accesses are clearly
unsynchronized, not all unsynchronized accesses may be made
simultaneous. (There may not exist a configuration from which two transitions
involving conflicting memory accesses are possible; yet, it is possible for
two accesses separated “in time” to be unsynchronized.)
## 7 Related work
Race detection via the analysis of source code is an undecidable problem.
Regardless, race detectors based on the static analysis of source code [24, 39, 3]
exist and have found application in industry. More recently, Blackshear et
al., [3] implement a static analysis tool called RacerD to help the
parallelization of previously sequential Java source code. The tool
over-approximates the behavior of programs and can, thereby, reject programs that
turn out to be data-race free. This over-approximation was not a hindrance, as
even conservative parallelization efforts can lead to gains over purely
sequential code.
By and large, however, instead of flagging races in a program as a whole, race
detectors have resorted to the analysis of particular _runs_ of a program. To
that end, detectors instrument the program so that races are either flagged
during execution, in what is called _on-line_ or _on-the-fly_ race detection,
or flagged on logs captured during execution and analyzed postmortem. Even still,
dynamic race detection is NP-hard [25] and many techniques have been proposed
for detection at scale. Broadly, these techniques involve static analysis used
to reduce the number of runtime checks [11, 31], and heuristics that trade
false-positive [33, 30, 5] or false-negative rates [21] for better space/time
utilization. For example, by allowing races to sometimes go undetected,
_sampling_ race detectors let go of completeness in favor of lower overheads.
One common heuristic, called the _cold region hypothesis_ , is to sample more
frequently from less executed regions of the program. This rule-of-thumb
hinges on the assumption that faults are more likely to already have been
identified and fixed if they occur in the hot regions of a program [21].
Alternatively, by going after a proxy instead of an actual race, imprecise
race detectors let go of soundness. The prominent examples here are Eraser’s
LockSet [33] and Locksmith [30], which enforce a lock-based synchronization
discipline. A violation of the discipline is a _code smell_ but not
necessarily a race. The amalgamation of different approaches has also been
investigated, leading to _hybrid race detectors_. For example, O’Callahan and
Choi, [26] combined LockSet-based detection with happens-before information
reconstructed from vector clocks; Choi et al., [5] extended LockSet to
incorporate static analyses.
Another avenue of inquiry has led to _predictive race detection_ [36, 16],
which attempts to achieve higher detection capabilities by extrapolating
beyond individual runs. Huang et al., [16] incorporate abstracted control
flow information and formulate race detection as a constraint solving problem.
With the goal of observing more races per run, Smaragdakis et al., [36]
introduce a new relation, called _causally-precedes_ , which is a
generalization of the happens-before relation.
A number of papers address race detection in the context of channel
communication [6, 7, 38]. Some of the papers, however, do not speak of shared
memory but, instead, define races as conflicting channel accesses. In that
setting, the lack of conflicting accesses to channels implies determinacy. A
different angle is taken by Terauchi and Aiken, [38], who, among different
kinds of channels, define a buffered channel whose buffer is overwritten by
every write (i.e. send) but never modified by a read (i.e. receive). This kind
of channel, referred to as a _cell_ , behaves, in essence, as shared memory.
The goal of Terauchi and Aiken, [38] is, still, determinacy. Having conflated
the concept of shared memory as a channel, determinacy is then achieved by
ensuring the absence of conflicting accesses to channels. Our goal, however,
is different: we aim to detect data-races but do not want to go as far as
ensuring determinacy. Therefore, our approach allows “races” on channel
accesses. From a different perspective, however, the work of Terauchi and
Aiken, [38] can be seen as complementary to ours: We conjecture that their
type system can serve as the basis for a static data-race detector.
Among the dynamic data-race detection tools from industry, Banerjee et al.,
[1] discuss different race detection algorithms including one used by the
Intel Thread Checker. The authors describe _adjacent conflicts_, a notion
similar to our side-by-side or manifest data races. The paper also
classifies races similarly to our WaR, RaW, and WaW classification.
Go has a race detector integrated to its tool chain [13]. The `-race` command-
line flag instructs the Go compiler to instrument memory accesses and
synchronization events. The race detector is built on top of Google’s
sanitizer project [14] and TSan in particular [34, 15]. TSan is part of the
LLVM’s runtime libraries [35, 20]; it works by instrumenting memory accesses
and monitoring locks acquisition and release as well as thread forks and
joins. Note, however, that channel communication is the vehicle for achieving
synchronization in Go. Even though locks exist, they are part of a package,
while channels are built into the language. Yet, the race detector for Go sits at
a layer underneath. In this paper we study race detection with channel
communication taking a central role. Also, different from TSan, we
propose a technique based on what we call _happens-before sets_ as opposed to
vector clocks. The consequences of this decision are discussed in detail in
Section 5.
It is also relevant to point out that, in the absence of the DRF-SC guarantee,
one may resort to finding data races involving weak memory behavior. Since the
full C/C++11 memory model can harbor such races, and with the goal of finding
data races in production level code, Lidbury and Donaldson, [19] extend the
ThreadSanitizer (TSan) [34, 15] to support a class of non-sequentially
consistent executions.
## 8 Conclusion
We presented a dynamic data-race detector for a language in the style of Go:
featuring channel communication as sole synchronization primitive. The
proposed detector records and analyzes information locally and is well-suited
for online detection.
Our race detector is built upon a previous result [10], where we formalize a
weak memory model inspired by the Go specification [12]. In that setting, we
recorded memory read- and write-events that were in happens-before relation
with respect to a thread’s present operation. This information was stored in a
set called $E_{\mathit{hb}}$ or the happens-before set of a thread, and it was
used to regulate a thread’s visibility of memory events. The core of the paper
was a proof of the DRF-SC guarantee, meaning, we proved that the proposed
relaxed memory model behaves sequentially consistently in the absence of data
races. The proof hinges on the fact that, in the absence of races, all threads
agree on the contents of memory; see the _consensus lemma_ in [10]. The
scaffolding used in the proof of the consensus lemma contains the ingredients
used in the race detectors presented in this paper. Based on our experience,
we conjecture that one may automatically derive a race detector given a weak
memory model and its corresponding proof of the DRF-SC guarantee.
In the DRF-SC proof of [10], we show that if a program is racy, it behaves
sequentially consistently up to the point at which the first data race is
encountered. In other words, this first point of divergence sets in motion all
behavior that is not sequentially consistent and which arises from the weakness
in the memory model. With this observation, we argue that a race detector can
operate under the assumption of sequential consistency. This is a useful
simplification, as sequential consistent memory is conceptually much simpler
than relaxed memories. If the data-race detector flags the first evidence of a
data-race, then program behavior is sequentially consistent up to that point.
Avenues for future work abound. In contrast to data-race detectors based on
vector clocks, our approach using happens-before sets does not provide as
terse of a representation for the collection of memory events performed by a
thread in between synchronization points. In effect, our approach has a larger
footprint, which ought to be mitigated. On the other hand, our thorough
expunging of stale information can serve as inspiration to vector clock based
approaches, which allow for the accumulation of stale information—see Section
6. Another extension would be to statically analyze a target program with the
goal of removing dynamic checks or ameliorating the detector’s memory
consumption. Here, we may be able to borrow from the research on static
analysis for dynamic race-detection in the context of lock-based
synchronization disciplines.
## References
* Banerjee et al., [2006] Banerjee, U., Bliss, B., Ma, Z., and Petersen, P. (2006). A theory of data race detection. In Proceedings of the 4th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, held in conjunction with the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2006), PADTAD 2006, Portland, Maine, USA, July 17, 2006, pages 69–78.
* Benveniste et al., [2003] Benveniste, A., Caspi, P., Edwards, S. A., Halbwachs, N., Guernic, P. L., and de Simone, R. (2003). The synchronous languages 12 years later. Proceedings of the IEEE, 91(1):64–83.
* Blackshear et al., [2018] Blackshear, S., Gorogiannis, N., O’Hearn, P. W., and Sergey, I. (2018). Racerd: compositional static race detection. PACMPL, 2(OOPSLA):144:1–144:28.
* Channel types, Go language specification, [2016] Channel types, Go language specification (2016). Channel types, the Go programming language specification. https://golang.org/ref/spec#Channel_types.
* Choi et al., [2002] Choi, J., Lee, K., Loginov, A., O’Callahan, R., Sarkar, V., and Sridharan, M. (2002). Efficient and precise datarace detection for multithreaded object-oriented programs. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) Berlin, Germany, pages 258–269. ACM.
* Cypher and Leu, [1995] Cypher, R. and Leu, E. (1995). Efficient race detection for message-passing programs with nonblocking sends and receives. In Proceedings. Seventh IEEE Symposium on Parallel and Distributed Processing, pages 534–541. IEEE.
* Damodaran-Kamal and Francioni, [1993] Damodaran-Kamal, S. K. and Francioni, J. M. (1993). Nondeterminancy: testing and debugging in message passing parallel programs. ACM SIGPLAN Notices, 28(12):118–128.
* [8] Dijkstra, E. W. (n.d.). Over de sequentialiteit van procesbeschrijvingen. Circulated privately.
* Fava, [2020] Fava, D. (2020). Grace: a race detector based on happens-before sets. https://github.com/dfava/grace.
* Fava et al., [2018] Fava, D., Steffen, M., and Stolz, V. (2018). Operational semantics of a weak memory model with channel synchronization. Journal of Logic and Algebraic Methods in Programming. An extended version of the FM’18 publication with the same title.
* Flanagan and Freund, [2009] Flanagan, C. and Freund, S. N. (2009). FastTrack: Efficient and precise dynamic race detection. In Hind, M. and Diwan, A., editors, ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 121–133. ACM.
* Go memory model, [2014] Go memory model (2014). The Go memory model. https://golang.org/ref/mem. Version of May 31, 2014, covering Go version 1.9.1.
* golang.race.detector, [2013] golang.race.detector (2013). https://blog.golang.org/race-detector.
* google.sanitizer, [2014] google.sanitizer (2014). https://github.com/google/sanitizers.
* google.thread.sanitizer, [2015] google.thread.sanitizer (2015). https://github.com/google/sanitizers/wiki/ThreadSanitizerAlgorithm.
* Huang et al., [2014] Huang, J., Meredith, P. O., and Rosu, G. (2014). Maximal sound predictive race detection with control flow abstraction. In ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’14, Edinburgh, United Kingdom - June 09 - 11, 2014, pages 337–348. ACM.
* Katz and Peled, [1992] Katz, S. and Peled, D. (1992). Defining conditional independence using collapses. Theoretical Computer Science, 101.
* Lamport, [1978] Lamport, L. (1978). Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558–565.
* Lidbury and Donaldson, [2017] Lidbury, C. and Donaldson, A. F. (2017). Dynamic race detection for C++11. In Castagna, G. and Gordon, A. D., editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 443–457. ACM.
* llvm.thread.sanitizer, [2011] llvm.thread.sanitizer (2011). https://clang.llvm.org/docs/ThreadSanitizer.html.
* Marino et al., [2009] Marino, D., Musuvathi, M., and Narayanasamy, S. (2009). Literace: effective sampling for lightweight data-race detection. In ACM Sigplan notices, pages 134–143.
* Mattern, [1988] Mattern, F. (1988). Virtual time and global states in distributed systems. In Proceedings of the International Conference on Parallel and Distributed Algorithms, pages 215–226.
* Mazurkiewicz, [1987] Mazurkiewicz, A. (1987). Trace theory. In Brauer, W., Reisig, W., and Rozenberg, G., editors, Petri Nets: Applications and Relationships to Other Models of Concurrency, (Advances in Petri Nets 1986) Part II, volume 255 of Lecture Notes in Computer Science, pages 279–324. Springer Verlag.
* Naik et al., [2006] Naik, M., Aiken, A., and Whaley, J. (2006). Effective static race detection for Java. In Proceedings of the 27th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 308–319. ACM.
* Netzer and Miller, [1990] Netzer, R. H. B. and Miller, B. P. (1990). On the complexity of event ordering for shared-memory parallel program executions. In Proceedings of the 1990 International Conference on Parallel Processing, Urbana-Champaign, IL, USA, August 1990. Volume 2: Software., pages 93–97.
* O’Callahan and Choi, [2003] O’Callahan, R. and Choi, J.-D. (2003). Hybrid dynamic data race detection. In Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP 2003, June 11-13, 2003, San Diego, CA, USA, pages 167–178.
* Palamidessi, [1997] Palamidessi, C. (1997). Comparing the expressive power of the synchronous and the asynchronous $\pi$-calculus. In Proceedings of POPL ’97, pages 256–265. ACM.
* Peters and Nestmann, [2012] Peters, K. and Nestmann, U. (2012). Is it a “good” encoding of mixed choice? In Proceedings of the International Conference on Foundations of Software Science and Computation Structures (FoSSaCS ’12), volume 7213 of Lecture Notes in Computer Science, pages 210–224. Springer Verlag.
* Pozniansky and Schuster, [2003] Pozniansky, E. and Schuster, A. (2003). Efficient on-the-fly data race detection in multi-threaded C++ programs. In Proceedings of the 9th ACM Symposium on Principles and Practice of Parallel Programming (PPoPP’03).
* Pratikakis et al., [2006] Pratikakis, P., Foster, J. S., and Hicks, M. W. (2006). LOCKSMITH: Context-sensitive correlation analysis for race detection. In Proceedings of the 27th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 320–331. ACM.
* Rhodes et al., [2017] Rhodes, D., Flanagan, C., and Freund, S. N. (2017). Bigfoot: Static check placement for dynamic race detection. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2017, Barcelona, Spain, June 18-23, 2017, pages 141–156.
* Sabry and Felleisen, [1992] Sabry, A. and Felleisen, M. (1992). Reasoning about programs in continuation-passing style. In Clinger, W., editor, Conference on Lisp and Functional Programming (San Francisco, California), pages 288–298. ACM.
* Savage et al., [1997] Savage, S., Burrows, M., Nelson, G., Sobalvarro, P., and Anderson, T. (1997). Eraser: A dynamic data race detector for multithreaded programs. ACM Transactions on Computer Systems, 15(4):391–411.
* Serebryany and Iskhodzhanov, [2009] Serebryany, K. and Iskhodzhanov, T. (2009). Threadsanitizer: data race detection in practice. In Proceedings of the Workshop on Binary Instrumentation and Applications, pages 62–71. ACM.
* Serebryany et al., [2011] Serebryany, K., Potapenko, A., Iskhodzhanov, T., and Vyukov, D. (2011). Dynamic race detection with llvm compiler. In International Conference on Runtime Verification, pages 110–114. Springer.
* Smaragdakis et al., [2012] Smaragdakis, Y., Evans, J., Sadowski, C., Yi, J., and Flanagan, C. (2012). Sound predictive race detection in polynomial time. In Proceedings of POPL ’12, pages 387–400. ACM.
* Steffen and Nestmann, [1995] Steffen, M. and Nestmann, U. (1995). Typing confluence. Interner Bericht IMMD7-xx/95, Informatik VII, Universität Erlangen-Nürnberg.
* Terauchi and Aiken, [2008] Terauchi, T. and Aiken, A. (2008). A capability calculus for concurrency and determinism. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(5):27.
* Voung et al., [2007] Voung, J. W., Jhala, R., and Lerner, S. (2007). RELAY: Static race detection on millions of lines of code. In Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 205–214.
Figure 16: Local steps
### 0.A.3 Memory interactions and channel communication
Reading and writing, the two basic memory interactions, are covered in Figure
0.A.3, and channel communication in Figure 0.A.3. Compared to the semantics for
race detection (cf. Figures 6 and 4), the semantics here is given without the
extra book-keeping of happens-before information.
# Simulating Chiral Magnetic and Separation Effects with Spin-Orbit Coupled
Atomic Gases
Xu-Guang Huang Physics Department and Center for Particle Physics and Field
Theory, Fudan University, Shanghai 200433, China.
###### Abstract
The chiral magnetic and chiral separation effects—quantum-anomaly-induced
electric current and chiral current along an external magnetic field in
parity-odd quark-gluon plasma—have received intense studies in the community
of heavy-ion collision physics. We show that analogous effects occur in
rotating trapped Fermi gases with Weyl-Zeeman spin-orbit coupling where the
rotation plays the role of an external magnetic field. These effects can
induce a mass quadrupole in the atomic cloud along the rotation axis which may
be tested in future experiments. Our results suggest that the spin-orbit
coupled atomic gases are potential simulators of the chiral magnetic and
separation effects.
The recent experimental breakthroughs in generating synthetic spin-orbit
coupling (SOC) in both bosonic Lin:2011 and fermionic gases Wang:2012 ;
Cheuk:2012 have opened a new era for cold atomic physics. In these
experiments, a pair of Raman lasers induced an equal mixture of Rashba and
Dresselhaus SOCs between the (pseudo)spin-$1/2$ internal degree of freedom and
the orbital motion of the atoms. Very promisingly, other types of SOC, e.g.,
the Weyl SOC Anderson:2012 ; Anderson:2013 ; Li:2012 , could also be realized.
The presence of the SOC modifies the dynamics on both single-particle and
many-body levels and a variety of novel properties have been explored.
Furthermore, it is very suggestive that the spin-orbit coupled atomic gases
may provide ideal platforms to simulate intriguing phenomena that have
topological origins, e.g., the topological insulators or superfluid and
Majorana fermions Jiang:2011 ; Liu:2012 ; Zhang:2013 ; Ruhman:2015 , the spin
Hall effect Beeler:2013 ; Kennedy:2013 , and the Berezinskii-Kosterlitz-
Thouless transition He:2011 ; Xu:2015 . See Refs. review:2013 ; review:2014 ;
review:2015 for reviews.
In this article, we demonstrate that yet another topological phenomenon, the
quantum anomaly, can also be realized in a special setup for the spin-orbit
coupled atomic gases, namely the trapped rotating atomic gases with Weyl-
Zeeman SOC. As a consequence of this quantum anomaly, the currents of opposite
chiralities (see below for definition) are generated in parallel or anti-
parallel to the rotation axis. These currents mimic the chiral magnetic effect
(CME) Kharzeev:2008 ; Fukushima:2008 and chiral separation effect (CSE)
Son:2004 ; Met:2005 that are intensively studied in the context of quark-
gluon plasma (QGP) produced in heavy-ion collisions, with now the rotation
playing the role of an external magnetic field.
The QGP version of the CME and CSE is expressed as
$\displaystyle{\bf j}_{V}=\frac{e_{f}N_{c}\mu_{A}}{2\pi^{2}\hbar^{2}}{\bf
B},\;\;\;\;{\bf j}_{A}=\frac{e_{f}N_{c}\mu}{2\pi^{2}\hbar^{2}}{\bf B},$ (1)
for each flavor of light quarks, where $e_{f}$ is the electric charge of quark
with flavor $f$, ${\bf j}_{V}=\langle{\bar{\psi}}{\bm{\gamma}}\psi\rangle$ and
${\bf j}_{A}=\langle{\bar{\psi}}{\bm{\gamma}}\gamma_{5}\psi\rangle$ are
electric and chiral currents, $N_{c}=3$ is the color degeneracy, and $\mu$ and
$\mu_{A}$ are vector and chiral chemical potentials. Experimentally, signals
consistent with CME and CSE have been observed in heavy-ion collisions at
Relativistic Heavy Ion Collider (RHIC) STAR and Large Hadron Collider (LHC)
ALICE . In these collisions, extremely strong magnetic fields arise due to the
fast motion of the ions Skokov ; Huang ; Huang2015 , and these magnetic fields
induce charge separation and chirality separation in the QGP via CME and CSE
which in turn lead to special azimuthal distributions of the charged hadrons
that are finally measured by the detectors.
It is worth noting that the CME was also discussed in astrophysical context
Vilenkin:1980 and more recently in Weyl and Dirac semimetals Burkov:2012 ;
Vaz:2013 ; Basar:2014 ; Land:2014 ; weyl:2014 . In Weyl and Dirac semimetals,
the low energy excitations are Weyl and Dirac fermions. These materials can
exhibit a finite chiral chemical potential $\mu_{5}$ (the $\mu_{A}$ of Eq. (1)) due to their special band structure or by applying
parallel electric and magnetic fields, and thus open the possibility of
realizing the CME. Compared to these previously explored systems, the atomic
gases with Weyl-Zeeman SOC enable greater flexibility in controlling the
parameters and thus provide not only simulators but also a unique means to
exploit new features (e.g., those raised by the presence of the Zeeman
splitting field) of the CME and CSE.
Semiclassical approach.— We begin by considering the following single-particle
Hamiltonian for spin-$1/2$ atoms (either bosons or fermions) in three
dimensions (3D),
$\displaystyle{\cal H}=\frac{{\mathbf{p}}^{2}}{2m}-\mu-{\bf
W}({\mathbf{p}})\cdot{\bm{\sigma}},$ (2)
where $m$ is the mass, $\mu$ is the chemical potential, and ${\mathbf{p}}$ is
the canonical momentum. The third term expresses a generic SOC where
${\bm{\sigma}}$ is the Pauli matrix and ${\bf W}$ represents a momentum-
dependent magnetic field. Its form will be given when necessary. The
Hamiltonian (2) represents a two-band system with band dispersions
$\varepsilon_{c}({\mathbf{p}})=p^{2}/(2m)-\mu-cW({\mathbf{p}}),c=\pm$.
Correspondingly, we define the chirality of band $c$ to be right-handed (left-
handed) if $c=+$ ($c=-$). We note that the chirality we defined here is
commonly called helicity in the literature. We choose the term “chirality” to keep
consistency with the terminologies “chiral anomaly”, “chiral magnetic
effect”, etc.
Now consider that the atoms are trapped by an external potential
$V({\mathbf{x}})$ and at the same time are subject to rotation with angular
velocity ${\bm{\omega}}$ which we assume to be a constant. The effect of the
trapping and rotation can be described by a gauge potential
$A^{\mu}=(A_{0},{\bf A})$ with
$A_{0}({\mathbf{x}})=V({\mathbf{x}})-(m/2)({\bm{\omega}}\times{\mathbf{x}})^{2}-\mu$
and ${\bf A}({\mathbf{x}})=m{\bm{\omega}}\times{\mathbf{x}}$, and then the
Hamiltonian becomes
$\displaystyle{\cal H}=\frac{[{\mathbf{p}}-{\bf
A}({\mathbf{x}})]^{2}}{2m}-{\bf W}[{\mathbf{p}}-{\bf
A}({\mathbf{x}})]\cdot{\bm{\sigma}}+A_{0}({\mathbf{x}}).$ (3)
Note that the Hamiltonian of atoms with SOC in the rotating frame is generally
time dependent, and the minimal substitution adopted in going from Eq. (2) to
Eq. (3) may not be applicable. However, for situations in which, e.g., both the
trap and the lasers inducing the SOC are rotating and the SOC is linear in
momentum, one can find a time-dependent unitary transformation to eliminate
the time dependence from the Hamiltonian; see Ref. timedep and the references
therein. The present study is restricted to such a situation, and the use of
the time-independent Hamiltonian (3) is therefore justified.
We now derive a set of semiclassical equations of motion (EOM) for the orbital
variables ${\mathbf{x}}$ and ${\mathbf{p}}$. (For non-rotating spin-orbit
coupled atomic gases, a similar derivation is given in Ref. bijl .) At
semiclassical level, we treat ${\mathbf{x}},{\mathbf{p}}$, and ${\bm{\sigma}}$
as classical variables, and their EOM are easily derived from Hamiltonian (3):
$\displaystyle\dot{{\mathbf{x}}}$ $\displaystyle=$
$\displaystyle\nabla_{\mathbf{p}}\xi-\nabla_{\mathbf{p}}{\bf
W}\cdot{\bm{\sigma}},$ (4) $\displaystyle\dot{{\mathbf{p}}}$ $\displaystyle=$
$\displaystyle-\nabla_{\mathbf{x}}\xi+\nabla_{\mathbf{x}}{\bf
W}\cdot{\bm{\sigma}},$ (5) $\displaystyle\dot{{\bm{\sigma}}}$ $\displaystyle=$
$\displaystyle\frac{2}{\hbar}{\bm{\sigma}}\times{\bf W},$ (6)
where $\xi({\mathbf{x}},{\mathbf{p}})=[{\mathbf{p}}-{\bf
A}({\mathbf{x}})]^{2}/(2m)+A_{0}({\mathbf{x}})$ and ${\bf
W}({\mathbf{x}},{\mathbf{p}})={\bf W}[{\mathbf{p}}-{\bf A}({\mathbf{x}})]$. To
proceed, we make an adiabatic approximation to the spin dynamics, that is, we
treat the orbital degrees of freedom ${\mathbf{x}},{\mathbf{p}}$ as slow
variables while the spin ${\bm{\sigma}}$ as fast variable and solve Eq. (6) up
to first order in time derivatives of the orbital variables. This gives
$\displaystyle{\bm{\sigma}}\approx c{\bf w}+c\frac{\hbar}{2W}{\bf
w}\times\dot{{\bf w}},$ (7)
where ${\bf w}={\bf W}/W$. This procedure is essentially equivalent to solving
${\bm{\sigma}}$ up to first order in $\hbar$. Inserting Eq. (7) to Eqs.
(4)-(5) we obtain
$\displaystyle\dot{{\mathbf{x}}}$ $\displaystyle=$
$\displaystyle\nabla_{\mathbf{p}}(\xi-
cW)+c\hbar{\bm{\Omega}}_{px}\cdot\dot{{\mathbf{x}}}+c\hbar{\bm{\Omega}}_{pp}\cdot\dot{{\mathbf{p}}},$
(8) $\displaystyle\dot{{\mathbf{p}}}$ $\displaystyle=$
$\displaystyle-\nabla_{\mathbf{x}}(\xi-
cW)-c\hbar{\bm{\Omega}}_{xx}\cdot\dot{{\mathbf{x}}}-c\hbar{\bm{\Omega}}_{xp}\cdot\dot{{\mathbf{p}}},$
(9)
where the tensors $\Omega_{AB}^{ij}$ with $A,B=x,p$ are defined as
$\displaystyle\Omega_{AB}^{ij}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(\frac{\partial{\bf w}}{\partial
A_{i}}\times\frac{\partial{\bf w}}{\partial B_{j}}\right)\cdot{\bf w}.$ (10)
In terms of the kinetic momentum, ${\mathbf{k}}\equiv{\mathbf{p}}-{\bf
A}({\mathbf{x}})$, Eqs. (8)-(9) can be recast to more compact and
transparently gauge invariant forms Sundaram ; Xiao:2010 ,
$\displaystyle\dot{{\mathbf{x}}}$ $\displaystyle=$
$\displaystyle\nabla_{\mathbf{k}}\varepsilon_{c}+c\hbar\dot{{\mathbf{k}}}\times{\bm{\Omega}},$
(11) $\displaystyle\dot{{\mathbf{k}}}$ $\displaystyle=$ $\displaystyle{\bf
E}+\dot{{\mathbf{x}}}\times{\bf B},$ (12)
where $\varepsilon_{c}({\mathbf{k}})=k^{2}/(2m)-\mu-cW({\mathbf{k}})$ and
$\displaystyle E_{i}$ $\displaystyle=$ $\displaystyle-\frac{\partial
A_{0}}{\partial x_{i}},$ (13) $\displaystyle B_{i}$ $\displaystyle=$
$\displaystyle\epsilon_{ijk}\frac{\partial A_{k}}{\partial
x_{j}}=2m\omega_{i},$ (14) $\displaystyle\Omega_{i}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\epsilon_{ijk}\Omega_{pp}^{jk},$ (15)
are the effective electric field, magnetic field, and Berry curvature,
respectively. Equation (14) exhibits the equivalence between ${\bf B}$ and
${\bm{\omega}}$. Note that for relativistic massless particles under rotation,
the effective magnetic field would be momentum-dependent Stephanov , ${\bf
B}\sim 2|{\mathbf{k}}|{\bm{\omega}}$, and thus the rotation is no longer
equivalent to a magnetic field. In that case, the rotation can induce an
independent current other than the CME/CSE which is called chiral vortical
effect (CVE) chialve ; chialve2 ; chialve3 ; Son2 ; cvw . In the non-
relativistic case, the CVE is equivalent to CME/CSE.
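As a quick check of Eq. (14) (using only the constancy of ${\bm{\omega}}$ and the identity $\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$), the curl of ${\bf A}=m{\bm{\omega}}\times{\mathbf{x}}$ is
$\displaystyle B_{i}=\epsilon_{ijk}\frac{\partial A_{k}}{\partial x_{j}}=m\,\epsilon_{ijk}\epsilon_{klm}\omega_{l}\frac{\partial x_{m}}{\partial x_{j}}=m(\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\omega_{l}\delta_{mj}=m(3\omega_{i}-\omega_{i})=2m\omega_{i},$
so the rotation enters exactly as a uniform effective magnetic field ${\bf B}=2m{\bm{\omega}}$.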
From Eq. (11) and Eq. (12) we obtain
$\displaystyle\sqrt{G_{c}}\dot{{\mathbf{x}}}$ $\displaystyle=$
$\displaystyle\nabla_{\mathbf{k}}\varepsilon_{c}+c\hbar{\bf
E}\times{\bm{\Omega}}+c\hbar({\bm{\Omega}}\cdot\nabla_{\mathbf{k}}\varepsilon_{c}){\bf
B},$ (16) $\displaystyle\sqrt{G_{c}}\dot{{\mathbf{k}}}$ $\displaystyle=$
$\displaystyle{\bf E}+\nabla_{\mathbf{k}}\varepsilon_{c}\times{\bf
B}+c\hbar({\bf E}\cdot{\bf B}){\bm{\Omega}},$ (17)
where $\sqrt{G_{c}}=1+c\hbar{\bf B}\cdot{\bm{\Omega}}$, its physical meaning
will be clear soon. These are the semiclassical equations for the orbital
motion of atoms of chirality $c$. In these equations, the quantum effects are
reflected in the Berry curvature terms.
We will hereafter focus on Fermi gases and we will use the natural units
$\hbar=k_{B}=1$.
In the presence of the Berry curvature, the invariant measure of the phase
space integration for atoms of chirality $c$ needs to be modified to
$\sqrt{G_{c}}d^{3}{\mathbf{x}}d^{3}{\mathbf{k}}/(2\pi)^{3}$ Xiao ; dual:2006 .
With this modification, one is able to write down a kinetic equation for the
distribution function $f_{c}$ of chirality $c$,
$\displaystyle\partial_{t}f_{c}+\dot{{\mathbf{x}}}\cdot\nabla_{\mathbf{x}}f_{c}+\dot{{\mathbf{k}}}\cdot\nabla_{\mathbf{k}}f_{c}=I[f_{c}],$
(18)
where $\dot{{\mathbf{x}}}$ and $\dot{{\mathbf{k}}}$ are given by Eqs.
(16)-(17). In the context of relativistic chiral medium like QGP, similar
kinetic equation has been derived recently Stephanov ; Yamamoto ; Wangqun ;
Wangqun2 ; Mannuel . The density and current of chirality $c$ are defined as
$\displaystyle n_{c}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}\sqrt{G_{c}}f_{c},$ (19)
$\displaystyle{\bf j}_{c}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}\sqrt{G_{c}}\dot{{\mathbf{x}}}_{c}f_{c},$
(20)
and the continuity equation, following Eq. (18), reads
$\displaystyle\partial_{t}n_{c}+\nabla_{\mathbf{x}}\cdot{\bf j}_{c}$
$\displaystyle=$ $\displaystyle c({\bf E}\cdot{\bf
B})\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}f_{c}\nabla_{\mathbf{k}}\cdot{\bm{\Omega}}$
(21) $\displaystyle=$ $\displaystyle
cf_{c}({\mathbf{k}}_{0})\frac{F}{4\pi^{2}}{\bf E}\cdot{\bf B},$
where we suppose that the collision kernel conserves the particle number for
each chirality. If not, there would be a term
$\int{d^{3}{\mathbf{k}}}/{(2\pi)^{3}}\sqrt{G_{c}}I[f_{c}]$ on the right-hand
side. The ${\mathbf{k}}_{0}$ in the second line specifies the location of the
Berry monopole which coincides with the band-crossing point determined by
${\bf W}({\mathbf{k}}_{0})={\bf 0}$. The $F$ is the total Berry curvature flux
associated with the Berry monopole. Its explicit expression is
$\displaystyle F$ $\displaystyle=$
$\displaystyle\frac{\epsilon_{ijm}}{8\pi}\int_{S^{2}}d^{2}\Sigma_{i}{\bf
w}\cdot\left(\frac{\partial{\bf w}}{\partial k_{j}}\times\frac{\partial{\bf
w}}{\partial k_{m}}\right).$ (22)
This counts the winding number of the map ${\bf w}:S^{2}\rightarrow S^{2}$ and
thus $F\in\pi_{2}(S^{2})=\mathbb{Z}$.
If the right-hand side does not vanish, Eq. (21) represents a quantum anomaly
for the current of chirality $c$ in the form analogous to the chiral anomaly
in gauge field theory. This can be seen more clearly if we consider a Fermi
gas at zero temperature with pure Weyl SOC, ${\bf W}=\lambda{\mathbf{k}}$,
where $\lambda$ is the strength of the SOC. In this case,
${\bm{\Omega}}={\mathbf{k}}/(2k^{3})$,
$\nabla_{\mathbf{k}}\cdot{\bm{\Omega}}=2\pi\delta^{(3)}({\mathbf{k}})$, $F=1$,
${\mathbf{k}}_{0}={\bf 0}$, and $f_{c}({\mathbf{k}}_{0})=1$. Thus the right-
hand side of Eq. (21) reads $c({\bf E}\cdot{\bf B})/(4\pi^{2})$, which
coincides exactly with the $U(1)$ chiral anomaly. In this case, the
conservation of particle numbers of chirality $c$ which is proportional to the
volume of its corresponding Fermi sphere is violated by the flux of the Berry
curvature across the Fermi surface. For a Weyl-Zeeman SOC, ${\bf
W}({\mathbf{k}})=\lambda{\mathbf{k}}+h\hat{\bf z}$ with $h>0$ being a constant
Zeeman field, the Berry curvature is
$\displaystyle{\bm{\Omega}}$ $\displaystyle=$
$\displaystyle\lambda^{2}\frac{{\bf W}({\mathbf{k}})}{2W^{3}},$ (23)
the Berry monopole is located at ${\mathbf{k}}_{0}=-(h/\lambda)\hat{\bf z}$, and
the winding number $F=1$. At zero temperature, we have
$f_{c}({\mathbf{k}}_{0})=\theta[\mu-h^{2}/(2m\lambda^{2})]$ which vanishes if
the Zeeman field $h>\lambda\sqrt{2m\mu}$. Thus the system has quantum anomaly
only when $h<\lambda\sqrt{2m\mu}$. The absence of the quantum anomaly when
$h>\lambda\sqrt{2m\mu}$ reflects a change of the Fermi-surface topology as
shown in Fig. 1.
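For the pure Weyl case the Berry curvature quoted above can be checked directly from Eqs. (10) and (15), with the derivatives taken with respect to the kinetic momentum as in Eq. (22): for ${\bf w}=\hat{\mathbf{k}}$ one has $\partial\hat{k}_{a}/\partial k_{j}=(\delta_{aj}-\hat{k}_{a}\hat{k}_{j})/k$, so that
$\displaystyle\Omega_{pp}^{jm}=\frac{1}{2}\left(\frac{\partial{\bf w}}{\partial k_{j}}\times\frac{\partial{\bf w}}{\partial k_{m}}\right)\cdot{\bf w}=\frac{1}{2k^{2}}\epsilon_{ajm}\hat{k}_{a},\qquad\Omega_{i}=\frac{1}{2}\epsilon_{ijm}\Omega_{pp}^{jm}=\frac{1}{4k^{2}}\epsilon_{ijm}\epsilon_{ajm}\hat{k}_{a}=\frac{k_{i}}{2k^{3}},$
and, since ${\bf w}$ is then the identity map on $S^{2}$, Eq. (22) gives $F=1$.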
We note here that there is no quantum anomaly for Rashba-Dresselhaus SOC ${\bf
W}({\mathbf{k}})=(\lambda k_{y}-\eta k_{x},-\lambda k_{x}+\eta k_{y},h)$ with
$\lambda,\eta,h>0$ being constants, because its Berry curvature,
$\displaystyle{\bm{\Omega}}$ $\displaystyle=$
$\displaystyle\frac{(\lambda^{2}-\eta^{2})h}{2[h^{2}+(\lambda^{2}+\eta^{2})(k_{x}^{2}+k_{y}^{2})-2\lambda\eta
k_{x}k_{y}]^{3/2}}\hat{\bf z},$ (24)
leads to zero winding number.
Figure 1: (Color online) Upper panels: The dispersion relations
$\varepsilon_{c}({\mathbf{k}}_{\perp}=0,k_{z})$. Lower panels: The Fermi-
surface topologies in $(k_{z},k_{x})$ plane. The green dashed line represents
the chemical potential. Blue (red) lines are for right-(left-)handed fermions.
When $h<\lambda\sqrt{2m\mu}$, the Berry monopole (the black point) lies
inside both Fermi surfaces; at $h=\lambda\sqrt{2m\mu}$ the two Fermi
surfaces touch; when $h>\lambda\sqrt{2m\mu}$, the Berry monopole moves out of
both Fermi surfaces.
Before we proceed, let us comment on the validity regime of the semiclassical
approach. To arrive at Eqs. (16)-(17), inter-band transitions have been
neglected, which means that the force acting on the atoms cannot be strong. In
particular, this requires that $\sqrt{|{\bf B}|}\ll W({\mathbf{p}})/\lambda$;
see Refs. Sundaram ; Xiao:2010 ; Stephanov for more discussions. Obviously,
this condition is violated if ${\mathbf{k}}$ is close to the Berry monopole
where $W({\mathbf{p}})$ is small. Thus the phase space integral in Eq. (21)
should be understood to exclude the region
$|{\mathbf{k}}-{\mathbf{k}}_{0}|<\Delta$ around the Berry monopole with
$\Delta$ large enough so that we can apply the classical description to
particles outside of it. The value of $\Delta$ depends on ${\bf B}$ and ${\bf
E}$. For example, for pure Weyl SOC, $\Delta$ should be larger than
$\sqrt{|{\bf B}|}$; this actually constrains the magnitude of ${\bf B}$:
because $\Delta$ should not exceed $k_{F}$, the maximum $|{\bf B}|$ should not
exceed $k_{F}^{2}$. This implies that the rotation frequency should not exceed
$\varepsilon_{F}$ in order to guarantee the validity of the semiclassical
approach. Our numerical simulations will always be within the validity regime
of the semiclassical approach.
Chiral magnetic and chiral separation effects.— A consequence of this quantum
anomaly is the appearance of CME and CSE. To see this, we substitute Eq. (16)
into Eq. (20),
$\displaystyle{\bf j}_{c}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}f_{c}\nabla_{\mathbf{k}}\varepsilon_{c}+c{\bf
E}\times\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}{\bm{\Omega}}f_{c}$ (25)
$\displaystyle+c{\bf
B}\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}({\bm{\Omega}}\cdot\nabla_{\mathbf{k}}\varepsilon_{c})f_{c}.$
The first term on the right-hand side is the normal number current, the second
term is the (intrinsic) anomalous Hall effect and the last term represents a
${\bf B}$-induced current which we denote by ${\bf j}_{c}^{\rm{\bf B}-ind}$:
$\displaystyle{\bf j}^{\rm{\bf B}-ind}_{c}$ $\displaystyle=$
$\displaystyle\chi_{c}{\bf B},$ (26) $\displaystyle\chi_{c}$ $\displaystyle=$
$\displaystyle
c\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}({\bm{\Omega}}\cdot\nabla_{\mathbf{k}}\varepsilon_{c})f_{c},$
(27)
where $\chi_{c}$ is the ${\bf B}$-induced conductivity (BIC) of chirality $c$.
Let us consider $f_{c}$ to be the Fermi-Dirac distribution,
$\displaystyle f_{c}=f_{c}^{0}=\frac{1}{e^{\beta\varepsilon_{c}}+1}\;\;\;{\rm
for\;fermions}.$ (28)
In this case, one finds that
$\displaystyle\chi_{R}=-\chi_{L}=\frac{T}{4\pi^{2}}\ln\left[1+\exp{(-\beta\varepsilon_{0})}\right]$
(29)
with $\varepsilon_{0}=\mu[h^{2}/(2m\mu\lambda^{2})-1]$. Note that the
$\lambda$ and $h$ dependence of $\chi_{c}$ is through their ratio $h/\lambda$,
and thus when $h=0$ the BIC is independent of $\lambda$ as long as it is
nonzero. We present the numerical results for $\chi_{R}$ in Fig. 2.
Figure 2: (Color online) The ${\bf B}$-induced conductivity, $\chi_{R}$, for
Fermi gases with Weyl-Zeeman SOC as a function of the temperature (left panel)
and $h/(\lambda\sqrt{2m\mu})$ (right panel). Note that $\chi_{L}=-\chi_{R}$.
It is seen that the BIC is enhanced at finite temperature and suppressed by
large Zeeman field or small SOC strength. The latter effect is more
transparent at zero temperature at which $f_{c}^{0}=\theta(-\varepsilon_{c})$
and $\chi_{c}$ can be analytically obtained:
$\displaystyle\chi_{c}(T=0)=\frac{c}{4\pi^{2}}\left(\mu-\frac{h^{2}}{2m\lambda^{2}}\right)\theta\left(\mu-\frac{h^{2}}{2m\lambda^{2}}\right).$ (30)
Thus, the ${\bf B}$-induced currents disappear when $h>\lambda\sqrt{2m\mu}$
which again reflects a Fermi-surface topology transition (see Fig. 1), in
parallel to the absence of the quantum anomaly.
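For orientation, the following small Python sketch (ours, not part of the original paper) evaluates Eqs. (29)-(30) in natural units ($\hbar=k_{B}=1$); the parameter values are arbitrary examples chosen only to illustrate the thermal enhancement of the BIC and its vanishing at $T=0$ once $h>\lambda\sqrt{2m\mu}$.

```python
import numpy as np

def chi_R(T, mu, m, lam, h):
    """B-induced conductivity of the right-handed branch, Eq. (29);
    natural units (hbar = k_B = 1).  chi_L = -chi_R."""
    eps0 = mu * (h**2 / (2.0 * m * mu * lam**2) - 1.0)
    return T / (4.0 * np.pi**2) * np.log1p(np.exp(-eps0 / T))

def chi_R_T0(mu, m, lam, h):
    """Zero-temperature limit, Eq. (30), for the right-handed branch."""
    x = mu - h**2 / (2.0 * m * lam**2)
    return x / (4.0 * np.pi**2) if x > 0 else 0.0

# Example parameters (arbitrary): the BIC vanishes at T = 0 once
# h exceeds lambda * sqrt(2 m mu) = sqrt(2) for the values below.
mu, m, lam = 1.0, 1.0, 1.0
for h in (0.0, 1.0, 1.5):
    print(h, chi_R(0.05, mu, m, lam, h), chi_R_T0(mu, m, lam, h))
```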
The above result $\chi_{R}=-\chi_{L}$ reflects the fact that the right-handed
and left-handed atoms have the same chemical potential. If the atomic cloud
contains domains (a possible realization will be presented in the next section) in
which the right-handed and left-handed chemical potentials differ, say,
$\displaystyle\mu_{R}=\mu+\mu_{A},\;\;\mu_{L}=\mu-\mu_{A},$ (31)
the two BICs, $\chi_{R}$ and $\chi_{L}$, in these domains will also differ in
magnitude and Eq. (30) becomes
$\displaystyle\chi_{c}(T=0)=\frac{c}{4\pi^{2}}\left(\mu_{c}-\frac{h^{2}}{2m\lambda^{2}}\right)\theta\left(\mu_{c}-\frac{h^{2}}{2m\lambda^{2}}\right).$ (32)
At zero Zeeman field, by inserting $\chi_{c}(T=0)$ into Eq. (26) we obtain the
${\bf B}$-induced vector and chiral currents:
$\displaystyle{\bf j}_{V}^{\rm{\bf B}-ind}\equiv{\bf j}_{R}^{\rm{\bf B}-ind}+{\bf j}_{L}^{\rm{\bf B}-ind}=\frac{\mu_{A}}{2\pi^{2}}{\bf B},$ (33)
$\displaystyle{\bf j}_{A}^{\rm{\bf B}-ind}\equiv{\bf j}_{R}^{\rm{\bf B}-ind}-{\bf j}_{L}^{\rm{\bf B}-ind}=\frac{\mu}{2\pi^{2}}{\bf B}.$ (34)
These equations express the CME and the CSE in forms consistent with Eq. (1).
Finally, we note that the BICs do not receive perturbative corrections from
scattering Son2 ; Hou ; Banerjee ; Jensen ; Satow (See the Method.). This
originates from the fact that the chiral anomaly is free of renormalization
(the Adler-Bardeen theorem).
Figure 3: (Color online) A schematic illustration of CME and CSE induced
chiral dipole and mass quadrupole in Fermi gases with Weyl SOC. First, the CSE
drives a chirality separation along the rotation axis with $\mu_{A}>0$ in the
upper tip and $\mu_{A}<0$ in the lower tip (the left panel). Then the CME in
turn drives particle number or equivalently the mass to flow away from the
center and the atomic cloud acquires a mass quadrupole along the rotation axis
(the right panel).
Chiral dipole and mass quadrupole.— Now we turn to the phenomenology of the
CME and CSE. We consider a Weyl-Zeeman spin-orbit coupled Fermi gas in normal
phase with $\mu>0$. Once it is rotating, the CSE (34) will drive the chirality
to move along ${\bf B}$ and causes a macroscopic separation of the chirality,
see Fig. 3 (left). This chiral dipolar distribution naturally forms two
separated domains, one with $\mu_{A}>0$ in the upper space while another with
$\mu_{A}<0$ in the lower space. Then in these domains the CME (33) in turn
drives particle number or equivalently the mass to flow away from the center
and the atomic cloud acquires a mass quadrupole along ${\bf B}$, see Fig. 3
(right). In the context of heavy-ion collisions, a similar mechanism was proposed to generate a charge quadrupole in the QGP, which may induce a difference
between the elliptic flows of $\pi^{+}$ and $\pi^{-}$ shov ; cmvexp that has
been detected at RHIC Wangcmv .
The above argument holds only at the qualitative level; to reveal the real dynamical process, one needs to solve the coupled evolution problem of the right- and left-handed currents. Let us consider a pure Weyl SOC and the $T=0$
case. The general forms of the right-handed and left-handed currents should
contain diffusion and normal conducting terms, so they read (the anomalous
Hall terms vanish for pure Weyl SOC because of the time-reversal symmetry)
$\displaystyle{\bf j}_{R}=\frac{\mu_{R}}{4\pi^{2}}{\bf B}-D_{R}\nabla n_{R}+\sigma{\bf E},$ (35)
$\displaystyle{\bf j}_{L}=-\frac{\mu_{L}}{4\pi^{2}}{\bf B}-D_{L}\nabla n_{L}+\sigma{\bf E},$ (36)
where $D_{c}$ is the diffusion constant and $\sigma$ is the normal conductivity; they are linked by the Einstein relations $\sigma=D_{R}(\partial n_{R}/\partial\mu_{R})=D_{L}(\partial n_{L}/\partial\mu_{L})$. Consider a small fluctuation $\delta n_{c}$ in the density $n_{c}$. This induces a small departure of the chemical potential from the background value, $\mu_{c}=\mu_{\rm bg}+\delta\mu_{c}$, where $\mu_{\rm bg}=-A_{0}$ is linked to ${\bf E}$ by ${\bf E}=\nabla\mu_{\rm bg}$. Substituting ${\bf j}_{c}$ into Eq. (21) and keeping linear-order terms in $\delta n_{c}$, we obtain
$\displaystyle\frac{\partial\delta n_{R}}{\partial
t}+\frac{1}{4\pi^{2}}\frac{\partial\mu_{R}}{\partial n_{R}}{\bf
B}\cdot\nabla\delta n_{R}-D_{R}\nabla^{2}\delta n_{R}=0,$ (37)
$\displaystyle\frac{\partial\delta n_{L}}{\partial
t}-\frac{1}{4\pi^{2}}\frac{\partial\mu_{L}}{\partial n_{L}}{\bf
B}\cdot\nabla\delta n_{L}-D_{L}\nabla^{2}\delta n_{L}=0.$ (38)
These two equations represent two wave modes with dispersions
$E_{R}({\mathbf{k}})=v_{R}k-iD_{R}k^{2}$,
$E_{L}({\mathbf{k}})=v_{L}k-iD_{L}k^{2}$, one propagating along ${\bf B}$ with
velocity $v_{R}=B({\partial\mu_{R}}/{\partial n_{R}})/(4\pi^{2})$ and another
opposite to ${\bf B}$ with velocity $v_{L}=B({\partial\mu_{L}}/{\partial
n_{L}})/(4\pi^{2})$. We call them chiral magnetic waves (CMWs) in accordance
with the same wave modes found in the context of the QGP cmv . It is the CMWs that develop the chiral dipole and the mass quadrupole. Unlike the situation in
QGP, in our setup the trapping potential will finally balance the driving
force due to the CMWs and establish a new mechanical equilibrium. The new
equilibrium will be characterized by the position-dependent chemical potential
$\displaystyle\mu_{c}({\mathbf{x}})=\mu-\tilde{V}({\mathbf{x}})+\Delta\mu_{c}({\mathbf{x}}),$ (39)
where
$\tilde{V}=(m/2)\left(\tilde{\omega}_{\perp}^{2}(x^{2}+y^{2})+\omega_{z}^{2}z^{2}\right)$
is the effective trapping potential and
$\tilde{\omega}_{\perp}^{2}=\omega_{\perp}^{2}-\omega^{2}$. The chemical
potential shift $\Delta\mu_{c}({\mathbf{x}})$ is determined by the mechanical
equilibration condition ${\bf j}_{R}={\bf j}_{L}=0$. The numerical results are
shown in Fig. 4 where we simulate $5\times 10^{5}$ fermions in the harmonic
trap. Length is in units of $1/\sqrt{m\omega_{z}}$, and we assume that the effective transverse trapping frequency $\tilde{\omega}_{\perp}$ is kept equal to $\omega_{z}$ as the ${\bf B}$-field changes. Experimentally, the chiral
dipole may be hard to detect but the mass quadrupole profile can be easily
detected by, e.g., light absorption images.
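As a minimal sketch of ours (not from the original paper), the CMW dispersions of Eqs. (37)-(38) can be evaluated once $B$, $\partial\mu_{c}/\partial n_{c}$ and $D_{c}$ are specified; the numbers below are arbitrary placeholders in natural units.

```python
import numpy as np

def cmw_dispersion(k, B, dmu_dn, D, chirality=+1):
    """E(k) = v k - i D k^2 with v = chirality * B * (dmu/dn) / (4 pi^2),
    cf. Eqs. (37)-(38); k is measured along B, natural units."""
    v = chirality * B * dmu_dn / (4.0 * np.pi**2)
    return v * k - 1j * D * k**2

# Right- and left-handed modes propagate in opposite directions along B.
k = np.linspace(0.0, 1.0, 5)
print(cmw_dispersion(k, B=0.1, dmu_dn=2.0, D=0.05, chirality=+1))
print(cmw_dispersion(k, B=0.1, dmu_dn=2.0, D=0.05, chirality=-1))
```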
Figure 4: (Color online) Mass quadrupole (upper panels) and chiral dipole
(lower panels) induced by CME and CSE at different rotating frequencies.
Length is in units of $1/\sqrt{m\omega_{z}}$.
Conclusion.— In summary, we have demonstrated that if an atomic gas with Weyl-Zeeman SOC is 1) trapped by an external potential and 2) under rotation, a quantum anomaly can appear in the chiral currents. A consequence of
this chiral anomaly is the chiral magnetic effect and chiral separation
effect. The CME and CSE cause macroscopic separation of chirality and a mass
quadrupole along the rotation axis in the fermionic atomic cloud which may
possibly be detected in cold atomic experiments.
In the context of the QGP, other transport phenomena that stem from the quantum anomaly have been found, e.g., the chiral electric separation effect Huang2 ; Jiang , which may also be realized in spin-orbit coupled atomic gases in a setup similar to the one discussed in this Report. In addition, it is clear from the derivation that the chiral anomaly may also exist in Bose gases. How the quantum-anomaly-induced transports manifest in bosonic gases also deserves detailed exploration in future works.
Method.— Now we show the robustness of ${\bf B}$-induced currents against
scattering. Let us consider the collision kernel $I[f_{c}]$ in the relaxation
time approximation,
$\displaystyle I[f_{c}]=-\frac{\delta f_{c}}{\tau},$ (40)
where $\delta f_{c}=f_{c}-f_{c}^{0}\propto{\bf B}$ (which is assumed to be
small in this calculation) with $f_{c}^{0}$ the equilibrium distribution. We
now show that $\chi_{c}$ is independent of $\tau$. The relaxation time $\tau$
is assumed to be independent of the chirality, and we use it to characterize
the interaction strength among atoms. Concretely, the relaxation time can be
expressed as $\tau=1/(n\bar{v}\sigma_{T})$ with $n$ the density, $\bar{v}$ the
average velocity, $\sigma_{T}=4\pi a^{2}/(1+\bar{k}^{2}a^{2})$ the total cross
section, and $a$ the scattering length. For fermions, at low temperature,
$\tau\sim 3\pi m(1+(k_{F}a)^{2})/(4k_{F}^{2}(k_{F}a)^{2})$. At high
temperature, $\tau\sim m^{3/2}\sqrt{T}/n$.
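Purely for orientation (our own sketch, not part of the paper), the quoted expressions for the cross section and the low-temperature relaxation time can be evaluated directly in natural units; the parameter values are arbitrary.

```python
import numpy as np

def sigma_T(a, kbar):
    """Total cross section sigma_T = 4 pi a^2 / (1 + kbar^2 a^2)."""
    return 4.0 * np.pi * a**2 / (1.0 + (kbar * a)**2)

def tau_low_T(m, kF, a):
    """Low-temperature relaxation time,
    tau ~ 3 pi m (1 + (kF a)^2) / (4 kF^2 (kF a)^2), natural units."""
    x = kF * a
    return 3.0 * np.pi * m * (1.0 + x**2) / (4.0 * kF**2 * x**2)

# Weak scattering (small kF*a) gives a long relaxation time.
print(sigma_T(a=0.1, kbar=1.0), tau_low_T(m=1.0, kF=1.0, a=0.1))
```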
To proceed, let us turn the ${\bf E}$-field off and assume a steady state with
no spatial dependence of $f_{c}$ and ${\bf B}$. Substituting $f_{c}$ into the
kinetic equation (18) in the main text, at linear order in ${\bf B}$, we
obtain
$\displaystyle\delta
f_{c}\approx-\frac{\tau}{\sqrt{G_{c}}}(\nabla_{\mathbf{k}}\varepsilon_{c}\times{\bf
B})\cdot\nabla_{\mathbf{k}}f_{c}^{0}.$ (41)
The ${\bf B}$-field induced current of chirality $c$ is given by
$\displaystyle{\bf j}_{c}^{{\bf B}-{\rm
ind}}=\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}\sqrt{G_{c}}\dot{{\mathbf{x}}}(f_{c}^{0}+\delta
f_{c}),$ (42)
where the first term gives the result (26) in the main text and the second
term
$\displaystyle\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}\sqrt{G_{c}}\dot{{\mathbf{x}}}\delta f_{c}=-\tau\int\frac{d^{3}{\mathbf{k}}}{(2\pi)^{3}}\dot{{\mathbf{x}}}(\nabla_{\mathbf{k}}\varepsilon_{c}\times{\bf B})\cdot\nabla_{\mathbf{k}}\varepsilon_{c}\frac{\partial f_{c}^{0}}{\partial\varepsilon_{c}}=0,$ (43)
where the last equality holds because the factor $(\nabla_{\mathbf{k}}\varepsilon_{c}\times{\bf B})\cdot\nabla_{\mathbf{k}}\varepsilon_{c}$ in the integrand vanishes identically. Thus the ${\bf B}$-induced currents are solely given by the CME and CSE and are free of perturbative corrections.
## References
* (1) Lin, Y.-J., Jimenez-Garcia, K. & Spielman, I. B. Spin-orbit-coupled Bose-Einstein condensates. Nature 471, 83 (2011).
* (2) Wang, P. _et al._ Spin-orbit coupled degenerate Fermi gases. Phys. Rev. Lett. 109, 095301 (2012).
* (3) Cheuk, L. W. _et al._ Spin-injection spectroscopy of a spin-orbit coupled Fermi gas. Phys. Rev. Lett. 109, 095302 (2012).
* (4) Anderson, B. M., Juzeliūnas, G., Galitski, V. M. & Spielman, I. B. Synthetic 3D spin-orbit coupling. Phys. Rev. Lett. 108, 235301 (2012).
* (5) Anderson, B. M., Spielman, I. B. & Juzeliūnas, G. Magnetically generated spin-orbit coupling for ultracold atoms. Phys. Rev. Lett. 111, 125301 (2013).
* (6) Li, Y., Zhou, X. & Wu, C. Two- and three-dimensional topological insulators with isotropic and parity-breaking Landau levels. Phys. Rev. B 85, 125122 (2012).
* (7) Jiang, L. _et al._ Majorana fermions in equilibrium and in driven cold-atom quantum wires. Phys. Rev. Lett. 106, 220402 (2011).
* (8) Liu, X.-J. & Hu, H. Topological superfluid in one-dimensional spin-orbit-coupled atomic Fermi gases. Phys. Rev. A 85, 033622 (2012).
* (9) Zhang, W. & Yi, W. Topological Fulde-Ferrell-Larkin-Ovchinnikov states in spin-orbit-coupled Fermi gases. Nature Communications 4, 2711 (2013).
* (10) Ruhman, J., Berg, E., & Altman, E. Topological states in a one-dimensional Fermi gas with attractive interaction. Phys. Rev. Lett. 114, 100401 (2015).
* (11) Beeler, M. C. _et al._ The spin Hall effect in a quantum gas. Nature 498, 201 (2013).
* (12) Kennedy, C. J., Siviloglou, G. A., Miyake, H., Burton, W. C. & Ketterle, W. Spin-orbit coupling and quantum spin Hall effect for neutral atoms without spin flips. Phys. Rev. Lett. 111, 225301 (2013).
* (13) He, L. & Huang, X.-G. BCS-BEC crossover in 2D Fermi gases with Rashba spin-orbit coupling. Phys. Rev. Lett. 108, 145302 (2012).
* (14) Xu, Y. & Zhang, C. Berezinskii-Kosterlitz-Thouless phase transition in 2D spin-orbit-coupled Fulde-Ferrell superfluids. Phys. Rev. Lett. 114, 110401 (2015).
* (15) Galitski, V. & Spielman, I. B. Spin-orbit coupling in quantum gases. Nature 494, 49 (2013).
* (16) Goldman, N., Juzeliūnas, G., Öhberg, P. & Spielman, I. B. Light-induced gauge fields for ultracold atoms. Rep. Prog. Phys. 77, 126401 (2014).
* (17) Zhai, H. Degenerate quantum gases with spin-orbit coupling: a review. Rep. Prog. Phys. 78, 026001 (2015).
* (18) Kharzeev, D. E., McLerran, L. D. & Warringa, H. J. The effects of topological charge change in heavy ion collisions: “Event by event P and CP violation”. Nucl. Phys. A 803, 227 (2008).
* (19) Fukushima, K., Kharzeev, D. E. & Warringa, H. J. Chiral magnetic effect. Phys. Rev. D 78, 074033 (2008).
* (20) Son, D. T. & Zhitnitsky, A. R. Quantum anomalies in dense matter. Phys. Rev. D 70, 074018 (2004).
* (21) Metlitski, M. A. & Zhitnitsky, A. R. Anomalous axion interactions and topological currents in dense matter. Phys. Rev. D 72, 045011 (2005).
* (22) Vilenkin, A. Equilibrium parity-violating current in a magnetic field. Phys. Rev. D 22, 3080 (1980).
* (23) Zyuzin, A. A. & Burkov, A. A. Topological response in Weyl semimetals and the chiral anomaly. Phys. Rev. B 86, 115133 (2012).
* (24) Vazifeh, M. M. & Franz, M. Electromagnetic response of Weyl semimetals. Phys. Rev. Lett. 111, 027201 (2013).
* (25) Basar, G., Kharzeev, D. E. & Yee, H.-U. Triangle anomaly in Weyl semimetals. Phys. Rev. B 89, 035142 (2014).
* (26) Landsteiner, K. Anomalous transport of Weyl fermions in Weyl semimetals. Phys. Rev. B 89, 075124 (2014).
* (27) Li, Q. _et al._ Observation of the chiral magnetic effect in ZrTe5. arXiv: 1412.6543.
* (28) Abelev, B. I. _et al._(STAR Collaboration) Azimuthal charged-particle correlations and possible local strong parity violation. Phys. Rev. Lett. 103, 251601 (2009).
* (29) Abelev, B. _et al._(ALICE Collaboration) Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV. Phys. Rev. Lett. 110, 012301 (2013).
* (30) Skokov, V., Illarionov, A. Yu. & Toneev, V. Estimate of the magnetic field strength in heavy-ion collisions. Int. J. Mod. Phys. A 24, 5925 (2009).
* (31) Deng, W.-T. & Huang, X.-G. Event-by-event generation of electromagnetic fields in heavy-ion collisions. Phys. Rev. C 85, 044907 (2012).
* (32) Deng, W.-T. & Huang, X.-G. Electric fields and chiral magnetic effect in Cu + Au collisions. Phys. Lett. B 742, 296 (2015).
* (33) Zhou, X., Li, Y., Cai, Z. & Wu, C. Unconventional states of bosons with the synthetic spin-orbit coupling. J. Phys. B: At. Mol. Opt. Phys. 46, 134001 (2013).
* (34) van der Bijl, E. & Duine, R. A. Anomalous Hall conductivity from the dipole mode of spin-orbit-coupled cold-atom systems. Phys. Rev. Lett. 107, 195302 (2011).
* (35) Sundaram, G. & Niu, Q. Wave-packet dynamics in slowly perturbed crystals: Gradient corrections and Berry-phase effects. Phys. Rev. B 59, 14915 (1999).
* (36) Xiao, D., Chang, M.-C. & Niu, Q. Berry phase effects on electronic properties. Rev. Mod. Phys. 82, 1959 (2010).
* (37) Stephanov, M. A. & Yin, Y. Chiral kinetic theory. Phys. Rev. Lett. 109, 162001 (2012).
* (38) Kharzeev, D. E. Topologically induced local P and CP violation in QCD$\times$QED. Ann. Phys. (N.Y.) 325, 205 (2010).
* (39) Landsteiner, K., Megias, E. & Pena-Benitez, F. Gravitational anomaly and transport phenomena. Phys. Rev. Lett. 107, 021601 (2011).
* (40) Vilenkin, A. Parity nonconservation and rotating black holes. Phys. Rev. Lett. 41, 1575 (1978).
* (41) Son, D. T. & Surowka, P. Hydrodynamics with triangle anomalies. Phys. Rev. Lett. 103, 191601 (2009).
* (42) Jiang, Y., Huang, X.-G. & Liao, J. Chiral vortical wave and induced flavor charge transport in a rotating quark-gluon plasma. Phys. Rev. D 92, 071501 (2015).
* (43) Xiao, D., Shi, J. & Niu, Q. Berry phase correction to electron density of states in solids. Phys. Rev. Lett. 95, 137204 (2005).
* (44) Duval, C., Horvath, Z., Horvathy, P., Martina, L. & Stichel, P. Berry phase correction to electron density in solids and “exotic” dynamics. Mod. Phys. Lett. B 20, 373 (2006).
* (45) Son, D. T. & Yamamoto, N. Berry curvature, triangle anomalies, and the chiral magnetic effect in Fermi liquids. Phys. Rev. Lett. 109, 181602 (2012).
* (46) Gao, J.-H., Liang, Z.-T., Pu, S., Wang, Q. & Wang, X.-N. Chiral anomaly and local polarization effect from the quantum kinetic approach. Phys. Rev. Lett. 109, 232301 (2012).
* (47) Chen, J.-W., Pu, S., Wang, Q. & Wang, X.-N. Berry curvature and four-dimensional monopoles in the relativistic chiral kinetic equation. Phys. Rev. Lett. 110, 262301 (2013).
* (48) Manuel, C. & Torres-Rincon, J. M. Kinetic theory of chiral relativistic plasmas and energy density of their gauge collective excitations. Phys. Rev. D 89, 096002 (2014).
* (49) Hou, D., Liu, H. & Ren, H.-C. Some field theoretic issues regarding the chiral magnetic effect. JHEP 1105, 046 (2011).
* (50) Banerjee, N. _et al._ Constraints on fluid dynamics from equilibrium partition functions. JHEP 1209, 046 (2012).
* (51) Jensen,K. Triangle anomalies, thermodynamics, and hydrodynamics. Phys. Rev. D 85, 125017 (2012).
* (52) Satow, D. & Yee, H. U. Chiral magnetic effect at weak coupling with relaxation dynamics. Phys. Rev. D 90, 014027 (2014).
* (53) Gorbar, E. V., Miransky, V. A. & Shovkovy, I. A. Normal ground state of dense relativistic matter in a magnetic field. Phys. Rev. D 83, 085003 (2011).
* (54) Burnier, Y., Kharzeev, D. E., Liao, J. & Yee, H.-U. Chiral magnetic wave at finite baryon density and the electric quadrupole moment of the quark-gluon plasma. Phys. Rev. Lett. 107, 052303 (2011).
* (55) Wang, G. _et al._(STAR Collaboration) Search for Chiral magnetic effects in high-energy nuclear collisions. Nucl.Phys. A 904, 248c(2013).
* (56) Kharzeev, D. E. & Yee, H.-U. Chiral magnetic wave. Phys. Rev. D 83, 085007 (2011).
* (57) Huang, X.-G. & Liao, J. Axial current generation from electric field: chiral electric separation effect. Phys. Rev. Lett. 110, 232302 (2013).
* (58) Jiang, Y., Huang, X.-G. & Liao, J. Chiral electric separation effect in the quark-gluon plasma. Phys. Rev. D 91, 045001 (2015).
Acknowledgments We acknowledge the support from Fudan University Grant No.
EZH1512519, Shanghai Natural Science Foundation No. 14ZR1403000, the Key
Laboratory of Quark and Lepton Physics (MOE) of CCNU (Grant No. QLPL20122),
the Young 1000 Talents Program of China, and Scientific Research Foundation of
State Education Ministry for Returned Scholars.
Author Contributions
X.G.H conceived and conducted the research and wrote the manuscript.
Correspondence and requests for materials should be addressed to X.G.H.
([email protected]).
Competing Interests
The author declares that he has no competing financial interests.
|
# Thermal Creep on Mars: Visualizing a Soil Layer under Tension
Tetyana Bila University of Duisburg-Essen, Faculty of Physics, Lotharstr.
1-21, 47057 Duisburg, Germany Jonathan Kollmer University of Duisburg-Essen,
Faculty of Physics, Lotharstr. 1-21, 47057 Duisburg, Germany Jens Teiser
University of Duisburg-Essen, Faculty of Physics, Lotharstr. 1-21, 47057
Duisburg, Germany Gerhard Wurm University of Duisburg-Essen, Faculty of
Physics, Lotharstr. 1-21, 47057 Duisburg, Germany
###### Abstract
At low ambient pressure, temperature gradients in porous soil lead to a gas flow called thermal creep. In this regard, Mars is unique, as it is the only planet in the solar system where the conditions for thermal creep in natural soil are met. Acting as a Knudsen compressor, thermal creep induces
pressure variations. In the case of Mars, there might be a pressure maximum
below the very top dust particle layers of the soil, which would support
particle lift and might decrease threshold wind velocities necessary to
trigger saltation or reduce angles of repose on certain slopes. In laboratory
experiments, we applied diffusing wave spectroscopy (DWS) to trace minute
motions of grains on the nm-scale in an illuminated simulated soil. This way,
DWS visualizes pressure variations. We observe a minimum of motion which we
attribute to the pressure maximum $\sim 2$ mm below the surface. The motion
above but especially below that depth characteristically depends on the
ambient pressure with a peak at an ambient pressure of about 3 mbar for our
sample. This is consistent with earlier work on ejection of particle layers
and is in agreement with a thermal creep origin. It underlines the supporting
nature of thermal creep for particle lift which might be especially important
on Mars.
Mars, thermal creep, particle lift, diffusing wave spectroscopy
††journal: Planetary Science Journal
## 1 Introduction
Sand and dust are mobile on the surface of Mars, readily observable from small
scale dust devils over sand dune motion to global dust storms (Fenton, 2020;
Viúdez-Moreiras et al., 2020; Toigo et al., 2018; Lorenz et al., 2021; Heyer
et al., 2020; Chojnacki et al., 2019). This requires some lifting mechanism, and gas drag related to wind and eolian transport is undoubtedly a main driver (Rasmussen et al., 2015). There is, however, a long tradition of arguing that winds on Mars are regularly just slightly too weak to account for particle lift (Greeley et al., 1980; Kok et al., 2012).
There are ways in which rather low shear stress might suffice. E.g., the continuation threshold for saltation is much lower than the initial threshold (Kok, 2010), aggregates are easier to pick up (Merrison et al., 2007), settling at low gravity might make the soil more prone to wind drag (Musiolik et al., 2018), sporadic motion by gusts might also contribute (Swann et al., 2020), and electrostatic forces might play a role as well (Esposito et al., 2016; Kruss et al., 2021). So it might be debatable whether the wind on Mars has a general problem initiating particle motion but, in any case, if even at its best the wind speed is only ever close to the entrainment threshold velocity, then every other effect might play a very important role in providing conditions in favor of particle mobility (Neakrase et al., 2016).
Among the supporting mechanisms for particle lift or downhill flow might also
be sub-soil processes. Resurfacing water in selected locations has been
discussed as a sediment transport mechanism (Raack et al., 2017); also, pressure excursions coming along with the vortex of a dust devil might provide
lift to surface material (Balme & Hagermann, 2006; Koester & Wurm, 2017; Bila
et al., 2020). Especially in this latter case, the pressure within the soil is
higher than above the soil.
The mbar-range ambient pressure allows one more lifting mechanism on Mars, which also, eventually, leads to a subsurface overpressure. This is based on a Knudsen compressor induced by thermal creep within the soil as the surface is insolated. We explain this in detail in the following section. The effect can be large but occurs on a small scale and is therefore intrinsically difficult to measure directly. Hence, a visualization is the focus of our work here.
## 2 Thermal creep induced sub soil overpressure
The underlying physical principles of thermal creep and Knudsen compressors are not widely known and to some degree counterintuitive, so we review the basics here along the lines of fig. 1. Part ”a” shows the general gas flow situation in a single open tube with a temperature gradient along its wall. In a thin layer on the order of the mean free path of the molecules $\lambda_{g}$, gas molecules then get a net component of motion in the direction from cold to warm. This is owed to the departure from thermal equilibrium and is called thermal creep. Simply put, at a given constant pressure, the rate of molecules traveling from the cold side to the warm side is higher than the other way round. As one usually associates heating a gas reservoir with expansion or a pressure increase, and therefore with a motion from warm to cold, this flow is not very intuitive. In large tubes, it also essentially goes unnoticed, without larger effects outside the tube, as there is ample space in the center of the tube for a backflow without much pressure difference needed.
This changes for small tubes with an overall diameter on the order of the mean free path, as pictured in fig. 1 b. Thermal creep now dominates over the total cross section. There can certainly still be a backflow, but now this would need to be driven by a significant pressure gradient. Therefore, if, e.g., the ends of the tube were closed, this would lead to a pressure increase on the warm side until the thermal creep flow and the pressure-driven backflow balance. This would then be called a Knudsen compressor (s. fig. 1 d) according to Knudsen, who studied the pressure conditions at the end of small tubes in the early twentieth century (Knudsen, 1910). Under ideal conditions, he could realize situations where the ratio between the pressures at the warm and the cold side was on the order of a factor of 10.
Figure 1: The basic principle of thermal creep and Knudsen compressor. a) Gas
flow in an open tube of large cross section with a temperature gradient
between the tube ends. In a tube with a large cross section, there are two
opposing gas flows: a gas flow from the colder end to the warmer end in a thin
layer on the order of the mean free path of molecules $\lambda_{g}$, called
thermal creep, and a backflow from the warmer to the colder end. b) In a tube
with a cross section on the order of $\lambda_{g}$, the thermal creep gas flow
dominates. c-d) A tube with a cross section on the order of $\lambda_{g}$ with
a temperature gradient between its ends. On the warmer side the tube is
followed by a smaller tube of the same cross section having a constant
temperature. The thermal creep gas flow cannot proceed into the smaller tube
and the pressure increases at the junction. This leads to a pressure-driven
gas flow through the smaller tube. The pressure adjusts itself until the
incoming temperature gradient-driven gas flow equals the pressure-driven gas
flow. e) In-depth basic pressure distribution in a dust bed heated from above.
$y_{0}$ denotes the surface of a particle bed. In the near-surface layer from
$y_{0}$ to $y_{1}$ the temperature is constant. Below $y_{1}$ down to $y_{2}$
the temperature gradient is present.
So at its extremes, thermal creep in a tube can either lead to a gas flow of maximum rate, if the ends of the tube are open, or to a maximum pressure increase, if the flow cannot proceed, i.e. if it is blocked by walls or if the adjacent volumes are negligible in size. The situation relevant here is somewhere in between and pictured in
fig. 1 c. If we just add a (small) tube on the warm side which is at a
constant temperature, then there is no thermal creep that can transport the
incoming flow further on, so the pressure increases and the flow speed
decreases until the pressure gradient along the small tube transports the flow
onward and the temperature-driven thermal creep flow equals the pressure-
driven gas flow.
In more detail, thermal creep effects depend on the ratio $Kn=\lambda_{g}/d$ between the mean free path of the gas molecules $\lambda_{g}$ and the tube diameter $d$, known as the Knudsen number. If the Knudsen number is small, the tubes are relatively large and no pressure can build up. If the Knudsen number is high, the tube is very small and the flow rate eventually gets too small to sustain pressure differences. Therefore, thermal creep effects are most pronounced under conditions where the Knudsen number is around $Kn\sim 1$.
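To give a feeling for the numbers, a short sketch of ours (not from the original work) estimates $\lambda_{g}$ from hard-sphere kinetic theory, $\lambda_{g}=k_{B}T/(\sqrt{2}\pi d_{m}^{2}p)$, assuming a molecular diameter of $d_{m}\approx 3.3\times 10^{-10}$ m for CO2; for typical Martian surface conditions (about 6 mbar and 210 K) this yields $\lambda_{g}$ of roughly 10 $\upmu$m, i.e. $Kn\sim 1$ for pores of dust-grain size.

```python
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant [J/K]

def mean_free_path(p_pa, T_k, d_mol):
    """Hard-sphere mean free path lambda_g = k_B T / (sqrt(2) pi d_mol^2 p)."""
    return K_B * T_k / (np.sqrt(2.0) * np.pi * d_mol**2 * p_pa)

def knudsen_number(p_pa, T_k, d_pore, d_mol):
    """Kn = lambda_g / d_pore for a pore of diameter d_pore."""
    return mean_free_path(p_pa, T_k, d_mol) / d_pore

# Assumed values: CO2 molecular diameter ~3.3 Angstrom, Martian surface
# conditions ~6 mbar (600 Pa) and ~210 K, pore sizes of 10 and 100 micrometres.
d_co2 = 3.3e-10
for d_pore in (10e-6, 100e-6):
    print(d_pore, knudsen_number(600.0, 210.0, d_pore, d_co2))
```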
To make the connection to Martian soil, the question is how the tube picture
can be applied to a porous granular bed. While the idea would be the same,
namely that thermal creep transports gas through the pores, this situation is
more complex. There are pores of varying size. So for one thing, the Knudsen
number is not well defined a priori. A simple analogy is shown in fig. 2 from
Steinpilz et al. (2017), where the granular bed is idealized by a regular
cubic structure. Koester et al. (2017) studied various real particle beds
experimentally and showed that indeed a granular medium can very well be
described by a set of tubes where the diameter of the tube essentially equals
the average particle (pore) diameter, which would be slightly larger than the
sketch suggests.
Finally, as with the two tubes in fig. 1 c, we can have different parts of a
particle bed or soil. Thermal creep can pump gas without the need for a
pressure difference in a region with a temperature gradient. If this connects
e.g. to a top layer at a constant temperature, pressure builds up until gas is
driven further in the usual way by a pressure difference which is known as
Darcy flow. More detailed equations for calculating thermal creep for granular
matter can be found in Koester et al. (2017). The top layer under tension, driven by this pressure gradient, has a certain thickness, which we visualize below: the overpressure will always lead to at least minute particle motions upwards and downwards, with a minimum of particle motion close to the pressure maximum.
Figure 2: Model of a granular medium as a collection of pore-size tubes (from
Steinpilz et al. (2017)).
There have been observations of particle motions before but at such large
levels of incident light flux that the pressure gradients were large enough to
eject grains directly against gravity and cohesion. First experiments on
particle lifting at low pressure when illuminated were carried out by Wurm &
Krauss (2006) but among the possible drivers, all related to non-equilibrium
low pressure gas physics, thermal creep generated overpressure was only
recognized by de Beule et al. (2014), supported by a number of works in
between (Wurm et al., 2008; Kelling & Wurm, 2009; Kelling et al., 2011;
Kocifaj et al., 2010, 2011).
Kelling et al. (2011) find aggregates on the order of 100 $\rm\mu m$ being ejected in experiments. Numerical simulations in Kocifaj et al. (2011) show that the temperature profile changes from flat to strongly decreasing at a few mm depth. The layer under tension was probed in laboratory measurements by de Beule et al. (2015), who inferred from the thickness of an ejected layer that it is up to a few 100 $\rm\mu m$ thick.
This overpressure might not be strong enough on its own under Martian sunlight
but can reduce the threshold shear velocity for particle lift by about 20 %
(Küpper & Wurm, 2015). The effect can be amplified under certain conditions of
illumination, i.e. at a shadow boundary (Kuepper & Wurm, 2016). This might help explain Recurring Slope Lineae on Mars (Schmidt et al., 2017).
In any case, the active top layer is rather thin. While this is consistent with modeled temperature profiles, it is only on the order of the grain size or slightly larger. Therefore, the pressure profile cannot be probed directly.
In the spirit of evaluating different methods for tracing the pressure, or at least the maximum pressure line, we employed a new method here, called diffusing wave spectroscopy, and we present first measurements which qualify this technique as a suitable visualization tool for thermal creep pressure variations.
Figure 3: Top: The experimental setup used. 1) Vacuum chamber, 2) polarizing
filter, 3) camera lens, 4) camera, 5) red filter, 6) lens with f=300 mm, 7)
lens with f=10.8 mm, 8) polarizing filter, 9) laser, 10) optical fiber, 11)
digital thermometer, 12) analog-to-digital converter, 13) pressure display
unit and 14) vacuum gauge. Middle and bottom figures show the sketch of the
setup in top view and side view respectively. A granular soil simulant sample
within a vessel is placed in a vacuum chamber. Heating from the top (IR
radiation) simulates insolation. The speckle pattern of a laser backscattered
from the sample from the side is imaged by a camera. Figure 4: Volume size
distribution of the basalt sample used. From this distribution the mean particle diameter is calculated to be $51\pm 28\,\mathrm{\upmu m}$.
## 3 Experiment
Figure 3 middle and bottom presents a sketch of the experimental setup. A
vessel filled with sample soil material is placed inside a vacuum chamber. The
volume of the vessel holding the sample is $(9.5\times 9\times
1)\,\mathrm{cm^{3}}$, where the sample takes a volume of $(3.8\times 9\times
1)\,\mathrm{cm^{3}}$. One side of the vessel is transparent with a 2 mm flat
glass plate.
Heating of the surface by infrared radiation is induced by a heated wire
placed about $3\,\mathrm{cm}$ above the sample surface. We did not quantify
the radiative flux here as our focus is on the DWS technique and what it might
provide as information on particle motion. As sample we use basalt with a
broad size distribution (Figure 4) and a mean diameter of $51\pm
28\,\mathrm{\upmu m}$. The bulk density of the sample is about
$1.7\,\mathrm{g/cm^{3}}$, which results in a porosity of about 0.44.
A red (633 nm), 2 mW laser with its beam shaped to $\sim$ 3 cm diameter illuminates a large part of the soil from the side, especially including the surface region. The scattered light is recorded by a video camera. The optical pathways of the laser and the camera hold crossed linear polarizers to exclude geometrically reflected light and to limit the detection to light scattered by the sample, which changes the polarization of the incoming radiation. The camera is focused on
the front glass plate plane and has a resolution of $10\,\mathrm{\upmu
m/pixel}$. Due to constructive and destructive interference it records a
speckle image that changes with slight movements of the sample particles.
The experiment was carried out at 8 different pressures $1.9\pm 0.1$ mbar,
$2.5\pm 0.1$ mbar, $3.6\pm 0.1$ mbar, $5.2\pm 0.1$ mbar, $8.2\pm 0.1$ mbar,
$148.9\pm 0.4$ mbar, $293.9\pm 0.6$ mbar and $780\pm 2$ mbar. The initial
pump-down took tens of hours (a weekend) in order to ensure that most of the water within the pore space had evaporated and was not influencing the measurements. Final adjustments were made before each measurement using a
precision valve. The temperature before the heater was switched on was
$296.1\pm 0.5$ K for all measurements. The maximum temperature at the end of
the measurement depends on the pressure. Figure 9 (bottom) shows the pressure
dependence of the absolute temperature increase ($\Delta T$) of the top soil
layers.
Figure 5: Example of a time evolution of the correlation for the measurement
at $5.2\,\mathrm{mbar}$; times after the heater is switched on are imprinted
in each image. The dashed green line indicates the position of the sample
surface. $y$ denotes downward direction and $x$ is the horizontal. The size of
one metapixel is 20 pixels.
## 4 Diffusing wave spectroscopy
A speckle pattern is the essential part of diffusing wave spectroscopy, a technique which visualizes very small displacements in granular materials (Crassous, 2007; Erpelding et al., 2010; Amon et al., 2017; Weitz & Pine, 1993). In the backscattering geometry used here, the laser beam illuminates the granular bed from the side. The light enters the granular medium, is scattered several times on various grains, and part of the radiation leaves the sample again on the same side. As mentioned, due to the coherent nature of the light source and the interference of the various scattered beams, a focused image results in a speckle image with bright spots for constructive interference and dark ones where the pathways result in destructive interference.
The scattering medium, in our case the soil simulant, is characterized by its
transport mean free path $\lambda$. On their way through the sample, the
photons scan a volume on average in the order of $\lambda^{3}$ (Erpelding et
al., 2008). Hence, the scattering process limits the spatial resolution of the
method to the transport mean free path. Several pixels of an image are grouped
into a _metapixel_. In our case, the speckles have a size of about $2$ pixels $(20\rm\upmu m)$ and we take 20 by 20 pixels as a metapixel. $\langle I\rangle$ is the average of the brightness $I$ over all pixels inside a metapixel.
If grains shift on the size scale of the wavelength, the speckle pattern
changes. Therefore, areas where displacements take place can be visualised in
a correlation plot comparing corresponding metapixels from images taken at
different times $t$. The intensity correlation function $G_{l}$ is calculated
for the corresponding $l$-th metapixel between the initial frame at $t=0\,$s
and one of the following frames as
$G_{l}=\frac{\langle I_{\textit{0l}}\cdot I_{\textit{tl}}\rangle-\langle
I_{\textit{0l}}\rangle\langle I_{\textit{tl}}\rangle}{\sqrt{\langle
I_{\textit{0l}}^{2}\rangle-\langle
I_{\textit{0l}}\rangle^{2}}\cdot\sqrt{\langle
I_{\textit{tl}}^{2}\rangle-\langle I_{\textit{tl}}\rangle^{2}}}.$ (1)
In eq. 1, each metapixel $l$ of a frame is represented by its brightness $I_{\textit{tl}}$, where the subscript $t$ denotes the frame at time $t$. $\langle I_{\textit{0l}}\cdot I_{\textit{tl}}\rangle$ denotes the average of the element-wise multiplication of the respective pixels within the two metapixels.
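As an illustrative sketch (our own, not code from the original work), the per-metapixel correlation of eq. 1 can be computed from two grayscale speckle frames with a few lines of NumPy; the frame shape, the metapixel size of 20 pixels and the small regularisation constant are assumptions made only for this example.

```python
import numpy as np

def correlation_map(frame0, frame_t, mp=20, eps=1e-12):
    """Correlation G_l of eq. (1) for every (mp x mp) metapixel of two frames.

    frame0, frame_t : 2D arrays of equal shape (grayscale speckle images).
    Returns one correlation value per metapixel.
    """
    ny, nx = frame0.shape[0] // mp, frame0.shape[1] // mp
    G = np.empty((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            a = frame0[iy*mp:(iy+1)*mp, ix*mp:(ix+1)*mp].astype(float)
            b = frame_t[iy*mp:(iy+1)*mp, ix*mp:(ix+1)*mp].astype(float)
            cov = (a * b).mean() - a.mean() * b.mean()
            G[iy, ix] = cov / (a.std() * b.std() + eps)
    return G

# Example with synthetic data: two identical frames give G close to 1 everywhere.
rng = np.random.default_rng(0)
frame = rng.random((200, 400))
print(correlation_map(frame, frame).min())
```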
Figure 6: Same as fig. 5 but without heating at $\sim 1190$ mbar ambient
pressure.
At $t=0\,$s the sample is undeformed. The deformation develops with time and
therefore can be seen as a decrease of the correlation value. A correlation
value close to 1 indicates that no or very weak displacement of scattering
particles has taken place. The correlation values can be plotted spatially
resolved, building a correlation map where each correlation value is a measure
for the deformation occurring in a cell of $\lambda^{3}$-size.
Fig. 5 shows an example of the correlation plot for the heated sample in the
illuminated part of the vessel over time. Fig. 6 shows the same as fig. 5, but
for the sample at room temperature without heating.
## 5 Results and discussion
To characterize the depth dependence of the particle motion in more detail, we average the data horizontally. Fig. 7 then shows the time evolution of the depth profile, now in absolute spatial units. We reversed the colors here for visibility, i.e. bright colors refer to large motions. Especially the deeper layers down to 10 mm lose correlation over time, i.e. show large particle motion on timescales of minutes. This is consistent with heat conduction timescales. We did not follow the evolution longer than 400 s here. While heat is transported further downwards, the correlations in the top layers eventually no longer change.
To discuss the depth profiles we therefore then choose the profile of the
latest time at 395 s for each of the 8 pressures sampled. These are shown in
fig. 8.
To show the characteristics of the depth profiles we divide the data into
different pressure ranges. Fig. 8 top shows the depth profile for the three
highest pressures from about 149 mbar to 780 mbar. No thermal creep should be present at these high pressures and, indeed, the curves are all essentially the same. Here we focus on depths down to $5\,$mm. These motion depth profiles
at higher pressure can clearly be split into two parts. Down to 2 mm, the
motion decreases roughly like a power law (linear in log-log plot). Below 2 mm
the motion profile is flat. As thermal creep is not acting, the motion close
to the surface has to have a different origin. We cannot pin down the reason
but in general, the topmost grains are bound the least, are not compressed by
the grains close by and therefore are the most mobile. So they are moved the
easiest by any disturbance. The further down, the more confined are the grains
until disturbances are too faint to result in any motion. Disturbances might
be simple gas flow or expansion as the sample heats up but, again, we do not
know the exact reason. In any case, these profiles provide a firm base for
comparison to the lower pressure range.
Figure 7: Strength of the motion of grains with depth y over time at 5.2 mbar
ambient pressure. Dark color marks little motion (reversed color from
correlation plot fig. 5).
To make the difference clear, we now overplot one example of a depth profile
for the lower pressure range (5.2 mbar) in fig. 8 middle where we expect
thermal creep to act. For completeness, we plot all low pressure data in fig.
8 bottom. In comparison to the higher pressure range, the low pressure data
have a simple but very distinct shape, with 3 points to note.
* •
The topmost decrease of motion is still a power law and essentially one with
the same power as for higher pressures but the absolute value is shifted
upwards. This implies more motion in the top layer. This is consistent with a
subsurface overpressure with a pressure gradient above that moves all grains
up to the very top somewhat more.
* •
There is a clear minimum for all low pressure data at about 2 mm. The position
of this minimum does not change significantly.
* •
Finally, the motion now increases again deeper within the particle bed. This
is also consistent with a subsurface overpressure that works in both
directions upwards but also downwards.
The motion of grains requires forces acting on them. These can be provided by
the sub-soil pressure gradients induced by thermal creep. As indicated in fig.
1 e, there is a pressure maximum close to the surface, as the upward thermal
creep gas flow can only be maintained by a pressure-driven gas flow in the
upper layers with little temperature gradient. Further down below the surface,
along the flat temperature distribution in the near-surface layer, there will
be another transition to a flat temperature distribution, which might result
in pressure variations, though this is not shown in fig. 1 d which was focused
on the top layer.
Figure 8: Motion depth profiles at the latest time (395 s); top: high
pressure data with clear split in two parts, a downward decreasing and then
nearly constant part; middle: overplotted example of low pressure data with an
additional increase at larger depths. The orange dashed line is a fit with
$r_{high}=const.$ on the flat parts of all three of the higher pressure depth
profiles. $r_{low}$ is the value of the depth profile of lowest pressure at
$y=5\,$mm. $\Delta r$ is the difference between $r_{low}$ and $r_{high}$;
bottom: all low pressure data.
It seems reasonable to assume that these pressure variations drive the motion
of grains seen in fig. 7. It has to be noted though that the location of
particle motion and the location of gas pressure extrema might not necessarily
coincide 1 to 1 a priori. The gas pressure gradient implies certain local forces on the grains, but forces on grains will in addition be transmitted to the other grains by force chains. In other words, the minimum of particle motion
traces an equilibrium spot where upward and downward directed forces would
balance but even a bulk motion, i.e. large scale lift or compression would be
seen as motion in DWS as it always comes with small variations. Large scale
motions are not present in the data though. The sample further below the
surface only moves significantly once the temperature rises in that region as
the heat is conducted downwards. Any motion further down in the particle bed
occurring then is not significantly influencing the layers above, so pressure
gradients and particle motion can be considered to act only locally.
Figure 9: Top: pressure dependence of $\Delta r$; Center: dependence of
$\Delta r$ on Knudsen number; Bottom: Temperature difference at the end of the
measurement over the ambient pressure at the beginning of the measurement.
The most pronounced differences with ambient pressure are the strengths of the
motion increase below the surface. We therefore studied the differences
$\Delta r=r_{low}-r_{high}$ over pressure and, indeed, we observe a systematic
variation as shown in fig. 9 (top). The effect is strongest at an ambient pressure of about 3 mbar, rapidly decreasing to the offset value as the pressure is increased by a few mbar. In terms of the Knudsen number $\rm{Kn}=\lambda_{g}/d$, with the mean free path of the gas molecules $\lambda_{g}$ and an effective pore diameter of $d=20\upmu\mathrm{m}$, the maximum is located at about Kn = 1 (fig. 9 (center)).
This trend is consistent with earlier work by de Beule et al. (2015). They
find that the layer removed in their experiments at three pressures of 0.1, 1,
and 10 mbar is largest at 1 mbar. So we clearly trace the same effect. More
specifically, we find that the maximum is situated at a Knudsen number of 1. As
thermal creep is most efficient at Kn=1 (Chambers, 2004) and does not work at
large ambient pressure and less at lower ambient pressure (Koester et al.,
2017), this strongly supports the idea that thermal creep is the underlying
mechanism for the particle motion.
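For reference, a small sketch of ours (not from the original work) evaluates $\rm{Kn}$ at the laboratory pressures quoted in Section 3, using the hard-sphere mean free path with an assumed effective molecular diameter of $3.7\times 10^{-10}$ m for air at 296 K and the effective pore diameter $d=20\,\upmu\mathrm{m}$; it places $\rm{Kn}\approx 1$ near 3 mbar, consistent with fig. 9.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]
D_AIR = 3.7e-10      # assumed effective molecular diameter of air [m]
D_PORE = 20e-6       # effective pore diameter used above [m]
T_LAB = 296.0        # laboratory temperature [K]

def knudsen(p_mbar):
    """Kn = lambda_g / d with lambda_g = k_B T / (sqrt(2) pi d_mol^2 p)."""
    p_pa = 100.0 * p_mbar
    lam = K_B * T_LAB / (np.sqrt(2.0) * np.pi * D_AIR**2 * p_pa)
    return lam / D_PORE

for p in (1.9, 2.5, 3.6, 5.2, 8.2, 148.9, 293.9, 780.0):
    print(f"{p:7.1f} mbar  Kn = {knudsen(p):.3f}")
```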
## 6 Caveats
### 6.1 Boundary effects
The vessel holding the sample has a relatively small thickness. This is intended as it e.g. allows a faster cooling after one measurement before the next is started. However, it also means that the boundaries, i.e. the glass windows on the sides, have a significant influence on the temperature distribution. This cannot be avoided, as we need a transparent wall to allow DWS at all and the particle columns need a stabilizing wall. With an order of magnitude of 1 W/(m K), the thermal conductivity of the walls is orders of magnitude larger than the thermal conductivity of the sample. Therefore, the temperature of the window next to the relevant top few mm can be considered to be constant. So this is a heat source for the sample but, overlaying as a constant temperature, it will not change the principal vertical stratification of the temperature differences, and therefore the position of the pressure maximum will only change slightly.
### 6.2 Other motion
The maximum in motion at around 10 mm (s. fig. 7) is also interesting but is
not the focus of this work. We currently have no sophisticated model to
explain these data.
### 6.3 Temperature dependence on pressure
Fig. 9 (bottom) shows that the overall temperature increases at low pressure
but since it reaches a rather constant plateau, we do not consider this to
influence our thermal creep measurements.
## 7 Conclusion
Thermal creep gas flow can significantly support the lifting of grains on
Mars. However, the pressure variations are small and change on sub-mm spatial
scales. Implementing arrays of commercial pressure sensors on a sub-mm scale within a dust bed, with resolutions on the Pa level and without influencing the temperature profile when the dust bed is illuminated, is currently virtually impossible. Therefore, the pressure gradient itself is hard to measure directly. Given this lack of direct confirmation, even if the effect of thermal
creep on larger scales is well known, every indirect method can be an
important verification. After all, thermal creep is not something encountered
on Earth in natural settings and it does not come intuitively (we think).
Tracing the gas motion in and out of a dust bed was a milestone to attribute
particle lifting to thermal creep gas flow (de Beule et al., 2014). Observing
the direct lifting of whole layers of grains in a work by de Beule et al.
(2015) could pin down an active layer of $\sim 0.2$ mm within a dust bed,
where we refer to activity as being under tension.
Here, this pressure maximum is visualized by subtle particle motions made
visible by diffusing wave spectroscopy. The depth of about 2 mm is slightly
larger than these earlier results but noting the different methodology and
observing the same typical dependence of particle motion on ambient pressure,
these findings are in agreement. Our results therefore provide one more
confirmation of the effect of thermal creep under Martian conditions making
thermal creep an ever more likely effect to influence the motion of grains on
the Martian surface quite generally.
## acknowledgements
This project is supported by DLR Space Administration with funds provided by
the Federal Ministry for Economic Affairs and Climate Action (BMWK) under
grant numbers 50WM1943 and 50WM2049. This project also has received funding
from the European Union’s Horizon 2020 research and innovation program under
grant agreement No 101004052. The manuscript gained significantly by the
reviews of Frédéric Schmidt and an anonymous reviewer.
## References
* Amon et al. (2017) Amon, A., Mikhailovskaya, A., & Crassous, J. 2017, Review of Scientific Instruments, 88, 051804, doi: 10.1063/1.4983048
* Balme & Hagermann (2006) Balme, M., & Hagermann, A. 2006, Geophys. Res. Lett., 33, L19S01, doi: 10.1029/2006GL026819
* Bila et al. (2020) Bila, T., Wurm, G., Onyeagusi, F. C., & Teiser, J. 2020, Icarus, 339, 113569, doi: 10.1016/j.icarus.2019.113569
* Chambers (2004) Chambers, A. 2004, Modern Vacuum Physics, Masters Series in Physics and Astronomy (CRC Press), 25–48
* Chojnacki et al. (2019) Chojnacki, M., Banks, M. E., Fenton, L. K., & Urso, A. C. 2019, Geology, 47, 427, doi: 10.1130/G45793.1
* Crassous (2007) Crassous, J. 2007, The European Physical Journal E, 23, 145, doi: 10.1140/epje/i2006-10079-y
* de Beule et al. (2015) de Beule, C., Wurm, G., Kelling, T., Koester, M., & Kocifaj, M. 2015, Icarus, 260, 23, doi: 10.1016/j.icarus.2015.06.002
* de Beule et al. (2014) de Beule, C., Wurm, G., Kelling, T., et al. 2014, Nature Physics, 10, 17, doi: 10.1038/nphys2821
* Erpelding et al. (2008) Erpelding, M., Amon, A., & Crassous, J. 2008, Physical Review E, 78, doi: 10.1103/physreve.78.046104
* Erpelding et al. (2010) —. 2010, EPL (Europhysics Letters), 91, 18002, doi: 10.1209/0295-5075/91/18002
* Esposito et al. (2016) Esposito, F., Molinaro, R., Popa, C. I., et al. 2016, Geophysical Research Letters, 43, 5501, doi: 10.1002/2016GL068463
* Fenton (2020) Fenton, L. K. 2020, Icarus, 352, 114018, doi: 10.1016/j.icarus.2020.114018
* Greeley et al. (1980) Greeley, R., Leach, R., White, B., Iversen, J., & Pollack, J. B. 1980, Geophys. Res. Lett., 7, 121, doi: 10.1029/GL007i002p00121
* Heyer et al. (2020) Heyer, T., Raack, J., Hiesinger, H., & Jaumann, R. 2020, Icarus, 351, 113951, doi: 10.1016/j.icarus.2020.113951
* Kelling & Wurm (2009) Kelling, T., & Wurm, G. 2009, Phys. Rev. Lett., 103, 215502, doi: 10.1103/PhysRevLett.103.215502
* Kelling et al. (2011) Kelling, T., Wurm, G., Kocifaj, M., Klačka, J., & Reiss, D. 2011, Icarus, 212, 935, doi: 10.1016/j.icarus.2011.01.010
* Knudsen (1910) Knudsen, M. 1910, Annalen der Physik, 336, 633, doi: 10.1002/andp.19103360310
* Kocifaj et al. (2011) Kocifaj, M., Klačka, J., Kelling, T., & Wurm, G. 2011, Icarus, 211, 832, doi: 10.1016/j.icarus.2010.10.006
* Kocifaj et al. (2010) Kocifaj, M., Klačka, J., Wurm, G., Kelling, T., & Kohút, I. 2010, MNRAS, 404, 1512, doi: 10.1111/j.1365-2966.2010.16370.x
* Koester et al. (2017) Koester, M., Kelling, T., Teiser, J., & Wurm, G. 2017, Ap&SS, 362, 171, doi: 10.1007/s10509-017-3154-4
* Koester & Wurm (2017) Koester, M., & Wurm, G. 2017, Planet. Space Sci., 145, 9, doi: 10.1016/j.pss.2017.07.005
* Kok (2010) Kok, J. F. 2010, Phys. Rev. Lett., 104, 074502, doi: 10.1103/PhysRevLett.104.074502
* Kok et al. (2012) Kok, J. F., Parteli, E. J. R., Michaels, T. I., & Karam, D. B. 2012, Reports on Progress in Physics, 75, 106901, doi: 10.1088/0034-4885/75/10/106901
* Kruss et al. (2021) Kruss, M., Salzmann, T., Parteli, E., et al. 2021, Planetary Science Journal, 2, 238, doi: 10.3847/PSJ/ac38a4
* Kuepper & Wurm (2016) Kuepper, M., & Wurm, G. 2016, Icarus, 274, 249, doi: 10.1016/j.icarus.2016.02.049
* Küpper & Wurm (2015) Küpper, M., & Wurm, G. 2015, Journal of Geophysical Research (Planets), 120, 1346, doi: 10.1002/2015JE004848
* Lorenz et al. (2021) Lorenz, R. D., Lemmon, M. T., & Maki, J. 2021, Icarus, 364, 114468, doi: 10.1016/j.icarus.2021.114468
* Merrison et al. (2007) Merrison, J. P., Gunnlaugsson, H. P., Nørnberg, P., Jensen, A. E., & Rasmussen, K. R. 2007, Icarus, 191, 568, doi: 10.1016/j.icarus.2007.04.035
* Musiolik et al. (2018) Musiolik, G., Kruss, M., Demirci, T., et al. 2018, Icarus, 306, 25, doi: 10.1016/j.icarus.2018.01.007
* Neakrase et al. (2016) Neakrase, L. D. V., Balme, M. R., Esposito, F., et al. 2016, Space Sci. Rev., 203, 347, doi: 10.1007/s11214-016-0296-6
* Raack et al. (2017) Raack, J., Conway, S. J., Herny, C., et al. 2017, Nature Communications, 8, 1151, doi: 10.1038/s41467-017-01213-z
* Rasmussen et al. (2015) Rasmussen, K. R., Valance, A., & Merrison, J. 2015, Geomorphology, 244, 74, doi: 10.1016/j.geomorph.2015.03.041
* Schmidt et al. (2017) Schmidt, F., Andrieu, F., Costard, F., Kocifaj, M., & Meresescu, A. G. 2017, Nature Geoscience, 10, 270, doi: 10.1038/ngeo2917
* Steinpilz et al. (2017) Steinpilz, T., Teiser, J., Koester, M., Schywek, M., & Wurm, G. 2017, MicST, 29, 325, doi: 10.1007/s12217-017-9550-0
* Swann et al. (2020) Swann, C., Sherman, D. J., & Ewing, R. C. 2020, Geophys. Res. Lett., 47, e84484, doi: 10.1029/2019GL084484
* Toigo et al. (2018) Toigo, A. D., Richardson, M. I., Wang, H., Guzewich, S. D., & Newman, C. E. 2018, Icarus, 302, 514, doi: 10.1016/j.icarus.2017.11.032
* Viúdez-Moreiras et al. (2020) Viúdez-Moreiras, D., Newman, C. E., Forget, F., et al. 2020, Journal of Geophysical Research (Planets), 125, e06493, doi: 10.1029/2020JE006493
* Weitz & Pine (1993) Weitz, D. A., & Pine, D. J. 1993, in Monographs on the physics and chemistry of materials, Vol. 49, Dynamic Light Scattering: The Method and Some Applications, ed. W. Brown (Clarendon Press), 652 – 720
* Wurm & Krauss (2006) Wurm, G., & Krauss, O. 2006, Phys. Rev. Lett., 96, 134301, doi: 10.1103/PhysRevLett.96.134301
* Wurm et al. (2008) Wurm, G., Teiser, J., & Reiss, D. 2008, Geophys. Res. Lett., 35, L10201, doi: 10.1029/2008GL033799
|
# On The Persona-based Summarization of Domain-Specific Documents
1Ankan Mullick 1Sombit Bose 1Rounak Saha∗ 2Ayan Kumar Bhowmick
1Pawan Goyal 1 Niloy Ganguly 2Prasenjit Dey 2Ravi Kokku
{ankanm, sbcs.sombit<EMAIL_ADDRESS>
{pawang<EMAIL_ADDRESS>{ayan, prasenjit<EMAIL_ADDRESS>
1Computer Science and Engineering Department, IIT Kharagpur, India. 2Emergence
AI. ∗Authors contributed equally
###### Abstract
In an ever-expanding world of domain-specific knowledge, the increasing complexity of consuming and storing information necessitates the generation of summaries from large information repositories. However, every persona within a domain has different information requirements and hence different summarization needs. For example, in the healthcare domain, a persona-based (such as
Doctor, Nurse, Patient etc.) approach is imperative to deliver targeted
medical information efficiently. Persona-based summarization of domain-
specific information by humans is a high cognitive load task and is generally
not preferred. The summaries generated by two different humans have high
variability and do not scale in cost and subject matter expertise as domains
and personas grow. Further, AI-generated summaries using generic Large
Language Models (LLMs) may not necessarily offer satisfactory accuracy for
different domains unless they have been specifically trained on domain-
specific data and can also be very expensive to use in day-to-day operations.
Our contribution in this paper is two-fold: 1) We present an approach to
efficiently fine-tune a domain-specific small foundation LLM using a
healthcare corpus and also show that we can effectively evaluate the
summarization quality using AI-based critiquing. 2) We further show that AI-
based critiquing has good concordance with Human-based critiquing of the
summaries. Hence, such AI-based pipelines to generate domain-specific persona-
based summaries can be easily scaled to other domains such as legal,
enterprise documents, education etc. in a very efficient and cost-effective
manner.
## 1 Introduction
In the rapidly expanding digital world, the exponential growth of domain-
specific knowledge has posed unprecedented challenges in efficiently storing
and consuming vast information repositories. With the increasing complexity of
managing such information, the need for generation of precise and specific
summaries becomes important. This need becomes particularly evident for
domain-specific data as there exist different personas within a domain who
have different information requirements which should be reflected in generated
summaries. For instance, if we consider the healthcare domain, there exist
diverse personas (http://tiny.cc/x1guwz) ranging from healthcare professionals
like doctors and nurses to patients who require targeted information
customized to their specific roles and comprehension levels.
Traditional generic approaches to summarization have often relied on humans to
perform this high cognitive load task. However, as the volume and diversity of
information burgeon with growing number of domains and personas, human
generated persona-based summaries encounter limitations in scalability, cost-
effectiveness, and consistency. The inherent subjectivity and variability
among different human summarizers hinder the reliability and efficiency of
such an approach. There have been several approaches in prior literature
focusing on generic summaries through extractive and abstractive methods
Paulus et al. (2017); Erkan and Radev (2004) as well as goal-oriented
summaries Hayashi et al. (2021); Zhu et al. (2022), but none of them has
focused on persona-based summarization of domain-specific information.
Goldsack et al. (2023); Luo et al. (2022) focus on building lay summaries
comprehensible to non-technical audiences but do not differentiate the various
technical summaries based on persona, and they also do not use LLMs as an
alternative evaluator. Our work differs in that we develop a pipelined
approach that generates persona-specific training summaries (doctor, patient,
normal person), fine-tunes small-size LLMs on this data, and uses GPT-4 to
efficiently evaluate summary quality.
One possible solution is to harness the power of generic large language models
(LLMs) such as GPT-4 to automate the generation of persona-based summaries as
such models have been used to generate data for other NLP tasks Sun et al.
(2023); Yu et al. (2023). ChatGPT (https://chat.openai.com/) is also used in
educational data generation Kieser et al. (2023); Maddigan and Susnjak (2023).
However, AI-generated summaries using generic LLMs may not be guaranteed to
achieve optimal accuracy across different domains unless they are trained on
domain-specific data and they can also be very expensive to use for daily
repeated inferences. In this paper, we take a step towards introducing a two-
fold contribution aimed at overcoming these challenges of generating domain-
specific persona-based summaries. Firstly, we present an efficient approach
towards the training of domain-specific, small-sized Large Language Models
(LLMs) on a corpus related to healthcare domain. Though data distillation from
a stronger model for supervised fine-tuning is a standard method, the novelty
of our work lies in the fact that we effectively employ the approach in this
context to build a cost-optimised summarization framework catering to
different healthcare persona given the scarcity of domain-specific data. This
approach addresses the limitations of generic LLMs by aligning the trained
model specifically to the intricacies of summaries in the healthcare domain.
Moreover, we showcase the effectiveness of utilizing AI-based critiquing for
the evaluation of summarization quality, providing a more automated and
scalable solution.
Secondly, we demonstrate the strong agreement between AI-based and human-based
critiquing of generated summaries, establishing the reliability of our
proposed approach. This not only validates the effectiveness of domain-
specific small LLM-based models in generating accurate summaries but also
opens up avenues for scalability across diverse domains. The implications of
our findings extend beyond healthcare, as the proposed AI pipeline can be
seamlessly adapted to other domains, including legal, corporate documents,
education, and more, offering a versatile and cost-effective solution for
generating persona-based summaries.
## 2 Proposed Framework
We describe here our framework for training our small-size domain-specific
LLM, the generation of data for finetuning and evaluation, and the baseline
models against which we compare our model.
### 2.1 Dataset
We create a persona-based dataset (named ‘Persona-Data’) utilizing GPT-4
(https://openai.com/gpt-4) with specific prompts on 1455 articles from the
publicly available WebMD (https://www.webmd.com/) website. The mean ratio
between summary length and document length is $0.2:1$. After data generation
using GPT-4, we did a step-by-step validation of the generated summaries using
an automated approach followed by manual verification to filter out the bad
generations. Around $4.89\%$ of document-summary pairs were filtered out. We
provide the detailed filtering steps with criteria and the share of removed
pairs (in %) in Table 1; a sketch of these checks follows the table.
Filtering Step with Criteria | Removed (%)
---|---
Step 1: Too many special characters and other strings (HTML tags and #) | 1.52
Step 2: Incomplete summary (by checking punctuation) | 0.86
Step 3: Conflict identification - very similar summaries for different personas | 1.12
Step 4: Summary contains medical terms or numbers not present in the document (using QuickUMLS - https://github.com/Georgetown-IR-Lab/QuickUMLS/) | 1.39
Overall summaries filtered out | 4.89
Table 1: Step-by-Step Data Filtering
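A minimal sketch of how the first three filtering steps in Table 1 could be automated; the thresholds, heuristics and function names are illustrative assumptions, and the QuickUMLS-based Step 4 is only indicated in a comment since it requires a local UMLS installation.

```python
import re
import string
from difflib import SequenceMatcher

def too_many_special_chars(summary, threshold=0.05):
    # Step 1: flag summaries containing HTML tags or a high share of special characters.
    if re.search(r"<[^>]+>", summary):
        return True
    specials = sum(ch in "#<>{}|" or ch not in string.printable for ch in summary)
    return specials / max(len(summary), 1) > threshold

def looks_incomplete(summary):
    # Step 2: a summary not ending in sentence-final punctuation is treated as truncated.
    return not summary.rstrip().endswith((".", "!", "?"))

def personas_conflict(summaries, threshold=0.9):
    # Step 3: flag the article if summaries for different personas are nearly identical.
    texts = list(summaries.values())
    return any(
        SequenceMatcher(None, texts[i], texts[j]).ratio() > threshold
        for i in range(len(texts)) for j in range(i + 1, len(texts))
    )

def keep_pair(document, summaries):
    # summaries: dict mapping persona -> generated summary for one WebMD article.
    # Step 4 (checking medical terms/numbers against the document, e.g. with QuickUMLS) is omitted.
    bad = any(too_many_special_chars(s) or looks_incomplete(s) for s in summaries.values())
    return not bad and not personas_conflict(summaries)
```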
These healthcare-related articles form the basis for the creation of their
summaries for three distinct personas: (a) Doctor: Summaries focus on
medical terminology, guidelines and provide detailed technical information
suitable for medical professionals. (b) Patient: Summaries are easily
understandable, addressing patient concerns without excessive technical
jargon, focusing on top-level information. (c) Normal Person: Summaries are
tailored for a general audience without medical background, presented in
simple language and engaging for laypersons while avoiding technical terms.
The dataset comprises 1091, 73 and 291 articles for training, validation, and
testing respectively. Additionally, we select 50 WebMD articles and generate
manual summaries for the three personas (termed Annotated-Data) using the
Prolific (https://www.prolific.com/) annotation platform and doctors, in order
to evaluate the GPT-4 generated summaries against human-curated summaries
(code/dataset details are in https://github.com/ankan2/persona-healthcare;
further details are in Appendix F).
Model | Rouge1 | Rouge2 | RougeL | Meteor | Bleu | BERT-Prec | BERT-Rec | BERT-F1
---|---|---|---|---|---|---|---|---
Falcon | 23.3 | 5.0 | 12.8 | 12.4 | 1.3 | 78.6 | 73.2 | 75.8
BART | 23.6 | 8.1 | 15.0 | 11.3 | 0.8 | 83.8 | 74.9 | 79.1
Pegasus | 27.8 | 8.3 | 16.2 | 14.3 | 2.0 | 80.7 | 75.0 | 77.8
T5-FT | 42.6 | 15.6 | 23.7 | 30.2 | 11.4 | 84.2 | 80.7 | 82.4
FT5-FT | 41.2 | 15.5 | 23.3 | 29.0 | 11.2 | 83.9 | 79.9 | 81.9
LED-B | 33.3 | 9.3 | 17.4 | 17.3 | 3.3 | 80.7 | 76.7 | 78.6
LED-L | 39.8 | 12.3 | 25.6 | 25.3 | 9.3 | 82.5 | 78.7 | 80.6
L-V-7b | 14.2 | 4.0 | 8.5 | 6.7 | 0.6 | 72.0 | 65.7 | 68.7
L-V-13b | 24.1 | 7.3 | 14.3 | 11.5 | 1.1 | 80.8 | 73.5 | 77.0
L-V-70b | 25.1 | 7.2 | 13.7 | 16.2 | 3.7 | 65.7 | 64.6 | 65.1
L-F-7b | 45.9 | 17.2 | 25.2 | 32.6 | 12.5 | 83.6 | 82.6 | 83.1
L-F-13b | 53.7 | 24.3 | 33.8 | 38.3 | 18.4 | 87.4 | 85.3 | 86.3
Table 2: Traditional Metrics’ based evaluation results (all are in %)
Persona | Rouge1 | Rouge2 | RougeL | Meteor | Bleu | BERT-Prec | BERT-Rec | BERT-F1
---|---|---|---|---|---|---|---|---
Doctor | 53.9 | 24.7 | 34.0 | 37.0 | 18.7 | 88.0 | 85.1 | 86.5
Patient | 53.5 | 24.2 | 33.5 | 36.7 | 18.3 | 87.2 | 84.8 | 86.0
Nor-Per | 53.6 | 23.9 | 33.9 | 41.0 | 18.1 | 86.9 | 86.1 | 86.5
Average | 53.7 | 24.3 | 33.8 | 38.3 | 18.4 | 87.4 | 85.3 | 86.3
Table 3: Traditional Metrics’ based evaluation on different persona using
Llama2-13b Finetuning model (in %)
### 2.2 Model Architecture
Our training process consists of employing small foundation LLMs such as
Llama2 and finetuning such models on the training set of the WebMD data
described above. Specifically, we use a supervised fine-tuning approach Li et
al. (2023) on the pre-trained vanilla model versions of Llama2.
Llama2 (https://ai.meta.com/llama/): We perform supervised fine-tuning (SFT)
on the pretrained vanilla Llama2-7b and Llama2-13b models using the training
data that consists of prompt-completion pairs where the prompt comprises a
WebMD article in the training set and the instruction to generate a summary
based on a persona (doctor/patient/normal person) and the completion is the
corresponding persona-based summary generated using GPT-4. We use a parameter
efficient finetuning approach i.e. Quantized Low-Rank Adaptation (QLoRA)
Dettmers et al. (2023) to optimize the training process. After training, the
finetuned Llama2-7b and Llama2-13b models (referred to as L-F-7b and L-F-13b
respectively) acquire the ability to generate a persona-based summary for a
given medical article depending on the persona specified in the prompt.
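A minimal sketch of how such QLoRA-based supervised fine-tuning could be wired up with the Hugging Face transformers/peft/trl stack; the checkpoint name, hyperparameters, data file and field names are illustrative assumptions (and the exact SFTTrainer arguments depend on the trl version), not the exact configuration used here.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

base = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# 4-bit quantization (the "Q" in QLoRA) so the 13b model fits on a single 80GB A100.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")

# Train low-rank adapters instead of the full weight matrices.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Each record is assumed to contain a "text" field joining the prompt
# (WebMD article + persona instruction) and the GPT-4 persona-based summary.
train_data = load_dataset("json", data_files="persona_data_train.json")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_data,
    peft_config=lora,
    dataset_text_field="text",
)
trainer.train()
```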
Baselines: For comparison with our finetuned models on the persona-based
summary generation task, we use different state-of-the-art models as baselines
such as Falcon 7b-instruction tuned model Penedo et al. (2023), BART-large
Lewis et al. (2019), instruction-tuned Pegasus Zhang et al. (2020) and
Longformer Beltagy et al. (2020) Base (LED-B) and Large (LED-L) variants
(https://huggingface.co/allenai/led-base-16384). Besides these, we
also use finetuned versions of T5-Large Raffel et al. (2020) (T5-FT) and
Flan-T5-Large Chung et al. (2022) (FT5-FT) on our training data as baselines.
Further, we also compare the performance with the different vanilla Llama2
model variants (7b, 13b, 70b referred to as L-V-7b, L-V-13b and L-V-70b
respectively).
## 3 Evaluation and Results
We evaluate the performance of our finetuned models (L-F-7b and L-F-13b) in
terms of generating high quality persona-based summaries for medical articles
and compare against the baseline models.
Evaluation metrics: Our evaluation relies on two different approaches:
(i) Traditional \- Here we use traditional metrics such as Rouge [1, 2 and L]
Lin (2004), Meteor Banerjee and Lavie (2005), Bleu Papineni et al. (2002),
BERTScore Zhang et al. (2019) [Precision (BERT-Prec), Recall (BERT-Rec) and
F1-score (BERT-F1)] to assess the quality of generated summaries.
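A minimal sketch of how these traditional metrics could be computed with the Hugging Face evaluate library; the library choice and the averaging of BERTScore are illustrative assumptions.

```python
import evaluate

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

def score_summaries(predictions, references):
    # predictions / references: parallel lists of generated and gold persona-based summaries.
    out = {}
    out.update(rouge.compute(predictions=predictions, references=references))
    out.update(meteor.compute(predictions=predictions, references=references))
    out["bleu"] = bleu.compute(predictions=predictions, references=references)["bleu"]
    bs = bertscore.compute(predictions=predictions, references=references, lang="en")
    for key in ("precision", "recall", "f1"):
        out[f"bertscore_{key}"] = sum(bs[key]) / len(bs[key])
    return out
```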
(ii) GPT-4 critique \- Here we use the GPT-4 LLM as a critic to evaluate the
quality of the model-generated summaries against the gold standard GPT-4
generated summary (Section 2) along different dimensions. Specifically, we
provide suitable critique-based prompts to GPT-4 and evaluate the summaries
based on a set of five predefined criteria (termed GPT-4 criteria), defined
below (a sketch of such a critique call follows the list):
Criteria 1: Relevance (Rel): The extent to which the generated persona-based
summary is relevant to the intended persona (doctor/patient/normal person)
given the document.
Criteria 2: Coverage (Cov): The extent to which the generated persona-based
summary correctly covers important key points described in the gold standard
persona-based summary of the document.
Criteria 3: Impurity (Imp): The extent to which the persona-based summary does
not contain information specific to all other possible personas
{persona_set - persona}.
Criteria 4: Quality (Qlt): The extent to which the persona-based summary is of
overall good quality from the perspective of the intended persona.
Criteria 5: Goodness (Gds): Extending from 4, we manually verify the goodness
of the summary.
(Details of these criteria with prompts are provided in Appendix D).
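A minimal sketch of what such a critique call could look like with the openai 0.28-style API mentioned in Appendix E.2; the prompt strings are abridged from Appendix D.3, and the score parsing is an illustrative assumption.

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

SYSTEM = ("You are an AI assistant who is to evaluate the summary of a medical document "
          "specific to a certain persona (doctor, patient or normal person). "
          "Return a score between 0 and 1 reflecting the quality of the generated summary.")

def critique_score(document, label_summary, generated_summary, persona, criterion_text):
    user = (f"Document: {document}\n"
            f"Ground truth summary: {label_summary}\n"
            f"Summary from the perspective of a {persona}: {generated_summary}\n"
            "Evaluate the summary in terms of the following criterion and return only a "
            f"score between 0 and 1 without any explanation:\n{criterion_text}")
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": user}],
    )
    return float(resp["choices"][0]["message"]["content"].strip())
```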
Results: We provide a comparison of our finetuned models against the baselines
in terms of both traditional metrics and GPT-4 criteria in Tables 2 and 4
respectively (all values are in $\%$) on the WebMD test set of size $873$
(prompts specific to three different personas, each for $291$ articles). Table
2 shows that both our finetuned models (L-F-7b and L-F-13b) achieve superior
performance compared to the baseline methods in terms of traditional metrics.
In fact, finetuned Llama2-13b (L-F-13b) outperforms the baselines in terms of
all the traditional metrics, demonstrating the superiority of our finetuning
approach, which helps to adapt the model to the healthcare domain and to
perform better on specific applications such as persona-based summarization. A
similar observation holds when we compare the values of the GPT-4 critique
based criteria shown in Table 4. Here we also compare the quality of the gold
standard GPT-4 generated summaries in terms of the GPT-4 critique based
criteria. We find that the finetuned Llama2-13b model (L-F-13b) can generate
summaries quite close in quality to the gold standard, while being much faster
in terms of training and inference time and cheaper in terms of memory
requirements.
Model | Rel | Cov | Imp | Qlt | Gds
---|---|---|---|---|---
Falcon | 56.3 | 45.4 | 82.0 | 50.6 | 46.8
BART | 65.4 | 42.6 | 84.8 | 49.7 | 25.8
Pegasus | 47.7 | 33.0 | 74.0 | 36.1 | 11.5
T5-FT | 72.4 | 70.1 | 84.9 | 67.6 | 78.1
FT5-FT | 72.2 | 70.2 | 88.0 | 68.3 | 80.3
LED-B | 36.1 | 19.2 | 79.3 | 31.3 | 17.6
LED-L | 69.1 | 56.2 | 82.3 | 59.3 | 56.6
L-V-7b | 19.1 | 18.4 | 41.3 | 16.6 | 15.2
L-V-13b | 32.1 | 29.1 | 73.1 | 28.5 | 23.4
L-V-70b | 49.7 | 45.1 | 78.0 | 46.4 | 47.8
L-F-7b | 75.8 | 58.7 | 85.8 | 63.8 | 58.6
L-F-13b | 93.5 | 90.1 | 91.7 | 88.5 | 99.1
GPT-4 | 98.2 | 96.3 | 98.6 | 98.5 | 99.7
Table 4: GPT-4 Critique evaluation results (in %)
Framework: We use an 80GB A100 GPU (210 MHz GPU clock) along with the NLTK and
SpaCy Python packages for all experiments. For 6 epochs, Llama2-13b takes 20
hrs for finetuning and 3 hrs for inference, while Llama2-7b takes 8 hrs for
finetuning and 2.5 hrs for inference (details in Appendix E.3).
Ablation study: Here we investigate the quality of persona-based summaries
generated by different variations of our best performing finetuned Llama2-13b
(L-F-13b) model on WebMD test set:
(A) Performance specific to different personas: Tables 3 and 5 show the
performance in terms of standard evaluation metrics and the outcomes of the
GPT-4 critique based criteria for each of the three personas [Doctor, Patient
and Normal Person (Nor-Per)] for the best performing Llama2-13b finetuned
model (L-F-13b). We observe that the model performs uniformly across the three
personas, which confirms that our finetuned model generalizes well across
multiple personas, generating distinct persona-based summaries for the same
medical article.
Persona | Rel | Cov | Imp | Qlt | Gds
---|---|---|---|---|---
Doctor | 90.0 | 89.1 | 91.0 | 86.2 | 98.8
Patient | 94.4 | 90.4 | 92.2 | 88.5 | 99.0
Nor-Per | 93.2 | 91.0 | 91.8 | 87.7 | 99.3
Average | 93.5 | 90.1 | 91.7 | 88.5 | 99.1
Table 5: GPT-4 critique on different persona using
Llama2-13b Finetuning model (in %)
(B) Validation of GPT-4 generated gold standard summaries: To verify the
robustness of the GPT-4 generated summaries for the WebMD articles and to
mitigate the inherent bias introduced by GPT-4 in the generated summaries, we
perform different types of human annotation experiments:
(i) Persona-based Summary of GPT-4: We randomly select $50$ different WebMD
articles and provide three different persona-based (doctor, patient, normal
person) summaries (without their actual labels) to three different doctors
with domain knowledge expertise along with a good working proficiency in
English, and ask them to identify the intended persona, i.e., which summary
belongs to which specific persona. Initial human labeling is done by two
doctors, and any annotation discrepancy is checked and resolved by the third
doctor after discussion with the others. The inter-annotator agreement is
found to be $0.91$. On comparing with the actual persona labels, we find that
the human labels have $86.67\%$ accuracy in correctly identifying the actual
persona, which shows the reliability of the GPT-4 generated persona-based summaries.
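A minimal sketch of how the inter-annotator agreement between the two doctors' persona labels could be computed; treating it as Cohen's kappa over the three persona classes is an assumption, since the agreement measure is not named above.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical persona labels assigned by the two annotating doctors to the same summaries
# (in practice, 50 articles x 3 persona-based summaries each).
labels_annotator_1 = ["doctor", "patient", "normal", "doctor", "patient"]
labels_annotator_2 = ["doctor", "patient", "patient", "doctor", "patient"]

kappa = cohen_kappa_score(labels_annotator_1, labels_annotator_2)
print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")
```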
(ii) Content Quality Check: We ask human annotators (doctors) to annotate
summaries on the basis of whether the persona-based summary is relevant while
correctly covering appropriate key points based on information need of
different persona and its overall usefulness. 96% of the GPT-4 generated
summaries for different personas are found to be useful by human annotators
(doctors).
(iii) GPT-4 Generated and Ground Truth Summary Check: Both doctors with domain
expertise and GPT-4 evaluate 50 document-summary pairs in terms of whether the
persona-based summary is relevant and correctly covers appropriate key points
based on information need of different persona, each for GPT-4 generated
summaries and ground truth summaries generated by annotators. We obtain an
inter-annotator agreement of $0.893$ which signifies strong consensus between
human and GPT-4 based evaluation. We separately test our best fine-tuned
(Llama2-13b-FT) model on the human-generated summaries (50 articles) and
obtain the following scores for different metrics - Traditional: R1-52.9,
R2-24.1, RL-33.2, Meteor-38.7, Bleu-18.0, Bert-P-87.7, Bert-R-84.9,
Bert-F1-86.3; GPT-4 criteria: Rel-91.2, Cov-90.8, Imp-90.4, Qlt-88.7,
Gds-98.5. This shows that there is strong alignment between the results on
human-generated and GPT-4 generated summaries, signifying the high quality of
the GPT-4 generated summaries.
(C) Validation of finetuned model generated summaries: To further investigate
the reliability of our finetuned model generated summaries, we choose $50$
different WebMD articles and provide persona-based summaries for each persona
(generated by GPT-4 in ground truth v/s Llama2-13b finetuned model generated)
to two doctors to annotate: (i) whether finetuned generated summary is better,
(ii) Both are Good, (iii) Ground Truth/GPT-4 summary is better and (iv) Both
are bad. We find that in 20% of cases the Llama2-13b finetuned model summaries
are better (i), in 50% of cases both the finetuned and ground-truth summaries
are good (ii), in the remaining 30% of cases the ground-truth summaries are
better (iii), and in no instance do both perform badly (iv).
(D) Different LLM Evaluators: We evaluate the fine-tuned model generated
summaries in the test set with the Gemini model (https://gemini.google.com/),
keeping the same prompts and criteria as used earlier for GPT-4, and the
obtained values of the same LLM-based criteria are: Rel - 95.2, Cov - 92.4,
Imp - 87.6, Qlt - 90.7, Gds - 99.4. Thus, the Gemini scores are aligned with
the GPT-4 scores, with a correlation coefficient of 0.808 (Gemini provides
higher scores for all criteria except Criteria 3 - ‘Imp’). This verifies that
the GPT-4 based evaluation is impartial and robust.
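A minimal sketch of how the reported agreement between the two LLM evaluators could be quantified; assuming the 0.808 value is a Pearson correlation between the five GPT-4 criterion averages (Table 5, Average row) and the Gemini scores above.

```python
import numpy as np

# Scores (in %) for the five criteria Rel, Cov, Imp, Qlt, Gds.
gpt4_scores = np.array([93.5, 90.1, 91.7, 88.5, 99.1])    # Table 5, Average row
gemini_scores = np.array([95.2, 92.4, 87.6, 90.7, 99.4])  # Gemini evaluation above

r = np.corrcoef(gpt4_scores, gemini_scores)[0, 1]
print(f"Correlation between GPT-4 and Gemini criterion scores: {r:.3f}")
```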
(E) Llama2-13b performance on Other data: We test our best performing
Llama2-13b finetuned model on healthcare domain articles of OASUM Yang et al.
(2022) dataset which is publicly available. We select OASUM articles with
aspects related to healthcare [Death, Diagnosis, Differential Diagnosis and
Diagnosis Classification] and obtain $234$ such documents. We perform GPT-4
critique based evaluation and observe that $82.77\%$ of the summaries are
labeled as good, which signifies the robustness of our model in terms of
generating high quality summaries.
## 4 Conclusion
In this paper, we propose a framework for the efficient training of a small
foundation LLM on AI-generated datasets to obtain high quality domain-specific
persona-based summaries. Our focus is on training a finetuned version of
Llama2 on a corpus related to healthcare domain such that the trained model
captures the intricacies of persona-based summaries in healthcare domain. We
also demonstrate the effectiveness of using AI-based critiquing for the
evaluation of the model generated summaries, providing a more automated and
scalable solution. Our experiments also reveal the superior quality of
persona-based summaries generated by our finetuned model compared to
contemporary baselines. Further, AI-based critiquing of the summaries shows
high agreement with human-based critiquing, further confirming the
effectiveness of our proposed approach. As future work, we plan to extend our
approach to generating accurate persona-based summaries for documents in other
domains such as legal, enterprise documents, and education.
## Acknowledgements
The work was supported in part by a research grant from IIT KHARAGPUR AI4ICPS
I HUB FOUNDATION.
## Limitation and Discussion
There are a few limitations in our work: (i) Not all LLMs are as useful as
GPT-4 for generating personalized content properly; for example, GPT-2,
GPT-3.5 and the vanilla Llama2 models do not perform very well, mostly due to
hallucination and missing important information. (ii) We only explore data
from the healthcare domain, but we plan to extend our work to other domains
such as legal, corporate and education, among others. (iii) Our experimental
dataset is English-only and restricted to the healthcare domain; we wish to
extend the work to a multilingual setup, specifically for low-resource
settings in diverse domains. (iv) The prompt wording is very important: unless
we specifically describe the different personas (Doctor/Patient/Normal Person)
in the prompt, GPT-4 does not generate appropriate summaries.
## Ethical Concerns
We use the publicly available content of the WebMD platform for non-commercial
and academic purposes only, without violating any ethical concerns. The
dataset neither reveals any personal sensitive information of patients nor
contains any toxic statements. Consent has been taken from all annotators,
including the doctors. For experiments, we use publicly available free
frameworks: Llama2, Falcon, BART, Pegasus, T5 (T5-FT), Flan-T5 (FT5-FT),
LED-Base, LED-Large, and the Llama2-7b, 13b and 70b vanilla and finetuned variants.
## References
* Akhtar et al. (2017) Nadeem Akhtar, Nashez Zubair, Abhishek Kumar, and Tameem Ahmad. 2017. Aspect based sentiment oriented summarization of hotel reviews. _Procedia computer science_ , 115:563–571.
* Banerjee and Lavie (2005) Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In _Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization_ , pages 65–72.
* Beltagy et al. (2020) Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. _arXiv preprint arXiv:2004.05150_.
* Chan et al. (2023) Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. _arXiv preprint arXiv:2308.07201_.
* Chintagunta et al. (2021) Bharath Chintagunta, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2021. Medically aware gpt-3 as a data generator for medical dialogue summarization. In _Machine Learning for Healthcare Conference_ , pages 354–372. PMLR.
* Chopra et al. (2016) Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In _Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies_ , pages 93–98.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_.
* Coavoux et al. (2019) Maximin Coavoux, Hady Elsahar, and Matthias Gallé. 2019. Unsupervised aspect-based multi-document abstractive summarization. In _Proceedings of the 2nd Workshop on New Frontiers in Summarization_ , pages 42–47.
* Dettmers et al. (2023) Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. _arXiv preprint arXiv:2305.14314_.
* Erkan and Radev (2004) Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. _Journal of artificial intelligence research_ , 22:457–479.
* Gao et al. (2024) Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and Xiaojun Wan. 2024. Llm-based nlg evaluation: Current status and challenges. _arXiv preprint arXiv:2402.01383_.
* Goldsack et al. (2023) Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023. Overview of the biolaysumm 2023 shared task on lay summarization of biomedical research articles. _arXiv preprint arXiv:2309.17332_.
* Guha et al. (2021) Souradip Guha, Ankan Mullick, Jatin Agrawal, Swetarekha Ram, Samir Ghui, Seung-Cheol Lee, Satadeep Bhattacharjee, and Pawan Goyal. 2021. Matscie: An automated tool for the generation of databases of methods and parameters used in the computational materials science literature. _Computational Materials Science (Comput. Mater. Sci.)_ , 192:110325.
* Hayashi et al. (2021) Hiroaki Hayashi, Prashant Budania, Peng Wang, Chris Ackerson, Raj Neervannan, and Graham Neubig. 2021. Wikiasp: A dataset for multi-domain aspect-based summarization. _Transactions of the Association for Computational Linguistics_ , 9:211–225.
* Kieser et al. (2023) Fabian Kieser, Peter Wulff, Jochen Kuhn, and Stefan Küchemann. 2023. Educational data augmentation in physics education research using chatgpt. _Physical Review Physics Education Research_ , 19(2):020150.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_.
* Li et al. (2023) Zongxi Li, Xianming Li, Yuzhang Liu, Haoran Xie, Jing Li, Fu-lee Wang, Qing Li, and Xiaoqin Zhong. 2023. Label supervised llama finetuning. _arXiv preprint arXiv:2310.01208_.
* Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In _Text summarization branches out_ , pages 74–81.
* Liu et al. (2023) Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. 2023. Calibrating llm-based evaluator. _arXiv preprint arXiv:2309.13308_.
* Luo et al. (2022) Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In _Findings of the Association for Computational Linguistics: EMNLP 2022_ , pages 4667–4680.
* Maddigan and Susnjak (2023) Paula Maddigan and Teo Susnjak. 2023. Chat2vis: Generating data visualisations via natural language using chatgpt, codex and gpt-3 large language models. _IEEE Access_.
* Meng et al. (2022) Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understanding. _Advances in Neural Information Processing Systems_ , 35:462–477.
* Mihalcea and Tarau (2004) Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In _Proceedings of the 2004 conference on empirical methods in natural language processing_ , pages 404–411.
* Mukherjee et al. (2020) Rajdeep Mukherjee, Hari Chandana Peruri, Uppada Vishnu, Pawan Goyal, Sourangshu Bhattacharya, and Niloy Ganguly. 2020. Read what you need: Controllable aspect-based opinion summarization of tourist reviews. In _Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval_ , pages 1825–1828.
* Mullick (2023a) Ankan Mullick. 2023a. Exploring multilingual intent dynamics and applications. _IJCAI Doctoral Consortium_.
* Mullick (2023b) Ankan Mullick. 2023b. Novel intent detection and active learning based classification (student abstract). _arXiv e-prints_ , pages arXiv–2304.
* Mullick et al. (2024) Ankan Mullick, Akash Ghosh, G Sai Chaitanya, Samir Ghui, Tapas Nayak, Seung-Cheol Lee, Satadeep Bhattacharjee, and Pawan Goyal. 2024. Matscire: Leveraging pointer networks to automate entity and relation extraction for material science knowledge-base construction. _Computational Materials Science_ , 233:112659.
* Mullick et al. (2018a) Ankan Mullick, Surjodoy Ghosh D, Shivam Maheswari, Srotaswini Sahoo, Suman Kalyan Maity, and Pawan Goyal. 2018a. Identifying opinion and fact subcategories from the social web. In _Proceedings of the 2018 ACM International Conference on Supporting Group Work_ , pages 145–149.
* Mullick et al. (2016) Ankan Mullick, Pawan Goyal, and Niloy Ganguly. 2016. A graphical framework to detect and categorize diverse opinions from online news. In _Proceedings of the Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media (PEOPLES)_ , pages 40–49.
* Mullick et al. (2017a) Ankan Mullick, Pawan Goyal, Niloy Ganguly, and Manish Gupta. 2017a. Extracting social lists from twitter. In _Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017_ , pages 391–394.
* Mullick et al. (2018b) Ankan Mullick, Pawan Goyal, Niloy Ganguly, and Manish Gupta. 2018b. Harnessing twitter for answering opinion list queries. _IEEE Transactions on Computational Social Systems_ , 5(4):1083–1095.
* Mullick et al. (2017b) Ankan Mullick, Shivam Maheshwari, Pawan Goyal, and Niloy Ganguly. 2017b. A generic opinion-fact classifier with application in understanding opinionatedness in various news section. In _Proceedings of the 26th International Conference on World Wide Web Companion_ , pages 827–828.
* Mullick et al. (2023) Ankan Mullick, Ishani Mondal, Sourjyadip Ray, R Raghav, G Chaitanya, and Pawan Goyal. 2023. Intent identification and entity extraction for healthcare queries in indic languages. In _Findings of the Association for Computational Linguistics: EACL 2023_ , pages 1825–1836.
* Mullick et al. (2022a) Ankan Mullick, Abhilash Nandy, Manav Kapadnis, Sohan Patnaik, R Raghav, and Roshni Kar. 2022a. An evaluation framework for legal document summarization. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 4747–4753.
* Mullick et al. (2022b) Ankan Mullick, Abhilash Nandy, Manav Nitin Kapadnis, Sohan Patnaik, and R Raghav. 2022b. Fine-grained intent classification in the legal domain. _arXiv preprint arXiv:2205.03509_.
* Mullick et al. (2022c) Ankan Mullick, Shubhraneel Pal, Tapas Nayak, Seung-Cheol Lee, Satadeep Bhattacharjee, and Pawan Goyal. 2022c. Using sentence-level classification helps entity extraction from material science literature. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 4540–4545.
* Mullick et al. (2019) Ankan Mullick, Sourav Pal, Projjal Chanda, Arijit Panigrahy, Anurag Bharadwaj, Siddhant Singh, and Tanmoy Dam. 2019. D-fj: Deep neural network based factuality judgment. _Technology_ , 50:173.
* Mullick et al. (2022d) Ankan Mullick, Sukannya Purkayastha, Pawan Goyal, and Niloy Ganguly. 2022d. A framework to generate high-quality datapoints for multiple novel intent detection. _arXiv preprint arXiv:2205.02005_.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_ , pages 311–318.
* Paulus et al. (2017) Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. _arXiv preprint arXiv:1705.04304_.
* Penedo et al. (2023) Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. _arXiv preprint arXiv:2306.01116_.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551.
* See et al. (2017) Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. _arXiv preprint arXiv:1704.04368_.
* Sun et al. (2023) Zhaoyi Sun, Hanley Ong, Patrick Kennedy, Liyan Tang, Shirley Chen, Jonathan Elias, Eugene Lucas, George Shih, and Yifan Peng. 2023. Evaluating gpt-4 on impressions generation in radiology reports. _Radiology_ , 307(5):e231259.
* Vig et al. (2021) Jesse Vig, Alexander R Fabbri, Wojciech Kryściński, Chien-Sheng Wu, and Wenhao Liu. 2021. Exploring neural models for query-focused summarization. _arXiv preprint arXiv:2112.07637_.
* Xu and Lapata (2020) Yumo Xu and Mirella Lapata. 2020. Query focused multi-document summarization with distant supervision. _arXiv preprint arXiv:2004.03027_.
* Yang et al. (2022) Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, and Dong Yu. 2022. Oasum: Large-scale open domain aspect-based summarization. _arXiv preprint arXiv:2212.09233_.
* Ye et al. (2022) Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. Progen: Progressive zero-shot dataset generation via in-context feedback. _arXiv preprint arXiv:2210.12329_.
* Yu et al. (2023) Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. 2023. Large language model as attributed training data generator: A tale of diversity and bias. _arXiv preprint arXiv:2306.15895_.
* Zhang et al. (2020) Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In _International Conference on Machine Learning_ , pages 11328–11339. PMLR.
* Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. _arXiv preprint arXiv:1904.09675_.
* Zhou et al. (2023) Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, and Jiawei Han. 2023. Don’t make your llm an evaluation benchmark cheater. _arXiv preprint arXiv:2311.01964_.
* Zhu et al. (2022) Haichao Zhu, Li Dong, Furu Wei, Bing Qin, and Ting Liu. 2022. Transforming wikipedia into augmented data for query-focused summarization. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 30:2357–2367.
## Appendix
## Appendix A Related Work
In this section, we conduct a survey of state-of-the-art literature on closely
related topics.
Summarization: We survey various summarization techniques, encompassing
generic approaches such as abstractive Chopra et al. (2016); See et al.
(2017); Paulus et al. (2017) and extractive summarization like TextRank
Mihalcea and Tarau (2004) and LexRank Erkan and Radev (2004). Subsequent
research has explored summarization in specific directions, including aspect-
based summaries Hayashi et al. (2021); Coavoux et al. (2019); Mukherjee et al.
(2020); Akhtar et al. (2017) and query-focused summaries Vig et al. (2021);
Zhu et al. (2022); Xu and Lapata (2020), but none focuses on domain-specific
persona-based summarization.
Persona Concept: Goldsack et al. (2023); Luo et al. (2022) focus on building
lay summarization comprehensible to non-technical audiences, but they do not
use the persona concept to distinguish different technical summaries, nor do
they use LLMs as an alternative metric to evaluate the summaries. The concept
of persona is also different from the notion of intent Mullick et al. (2022d,
2023, b); Mullick (2023b, a); Mullick et al. (2022a), entity-relation Mullick
et al. (2024); Guha et al. (2021); Mullick et al. (2022c), or opinion/fact
Mullick et al. (2017b, 2016, 2018a, 2018b, 2019, a). Our work differs in that
we present a pipelined approach - data generation, persona-specific (doctor
vs. patient vs. normal person) summarization with key points, and GPT-4 based
evaluation - to save time and help different personas augment their knowledge
and reach conclusions more efficiently.
Data Generation using LLMs: Large Language Models (LLMs), such as the GPT
family, have been utilized to generate training datasets for NLP tasks,
addressing data scarcity issues in a cost-effective manner Yu et al. (2023);
Meng et al. (2022); Ye et al. (2022). ChatGPT (https://chat.openai.com/) aids
in educational data augmentation and data visualization Kieser et al. (2023);
Maddigan and Susnjak (2023), while in the healthcare domain, GPT-3 and GPT-4
have been employed for generating medical dialogue summarization data
Chintagunta et al. (2021) and radiology reports Sun et al. (2023). However, no
prior work focuses on benchmark domain-specific persona-based summary data
generation with appropriate human validation.
LLMs as Evaluation Metrics: The rise of LLMs presents a potential, cost-
effective alternative for evaluating various NLP tasks. Existing efforts
include a taxonomy of LLM-based NLG evaluation methods Gao et al. (2024), the
development of ‘ChatEval’ for assessing response quality Chan et al. (2023),
and proposed guidelines for LLM-based evaluation Zhou et al. (2023). While
some studies explore LLM-based assessments with human alignment Liu et al.
(2023), there is a lack of work utilizing LLMs to evaluate solution
architectures comprehensively with a critique scoring system.
Our paper takes a significant step towards addressing the shortcomings of
prior literature in terms of persona-based summary data generation followed by
human validation, as well as LLM based critique evaluation.
## Appendix B Dataset Requirement
Access to robust, comprehensive datasets is crucial for training NLP models to
understand and generate persona-specific content, emphasizing the challenges
of data scarcity and summary-making capabilities. Manual annotation of large
healthcare datasets is both costly and time-intensive, demanding domain
expertise and meticulous attention to detail. Consider the example where $n$
individuals generate summaries for $m$ personas, resulting in $n\times m$
distinct summary generations for a single article; this is a highly expensive
and time-consuming process. Moreover, acquiring suitable human resources for
labeling is challenging, as people are often hesitant to undertake the tedious
and difficult task of summary generation, even with a standard payment
agreement.
We provide healthcare data to the Prolific annotation platform with specific
criteria: annotators with a PhD / Graduate degree, a medical background,
approval rates of 90%-100%, and expertise in medicine. However, out of 157,341
potential annotators, only 189 meet these criteria, and even among them, there
is a high rejection rate of 71.43% for selecting documents for manual summary
generation, despite offering more than standard payment. Further, eligible
annotators tend to heavily rely on ChatGPT and similar automated approaches,
leading to the need for extensive re-evaluations and revisions, along with
issues related to annotator rejection, even to assess a limited number of
documents. Hence, obtaining a high quality dataset remains a formidable
challenge.
## Appendix C GPT-4 Bias
While it is acknowledged that using summaries generated exclusively by GPT-4
could introduce biases inherent in its summarization capabilities, it may also
be noted that alternatives, such as human evaluation, carry their own biases.
Despite the potential for bias, leveraging GPT-4 for summarization may
still be a pragmatic choice, especially in scenarios where access to diverse
datasets or sophisticated validation methods is limited. However, in this
work, we remain vigilant, recognizing the limitations inherent in both
automated and human-generated summaries, and take proactive steps such as
human intervention to validate and contextualise the results to mitigate
biases to the best extent possible within the given constraints.
## Appendix D Prompts
We use prompting in three stages - data generation, finetune-inference and
critique. There are two kinds of prompts - system prompt and user prompt.
### D.1 Data Generation
system : You are an AI assistant who are to generate a summary of a medical
document specific to a certain persona which can be doctor, patient, normal
person. The summary of a medical document should be generated from the
perspective of the respective persona.
user : Summarize the medical document given below from the perspective of a
{persona} [doctor/patient/normal person] and return the summary only. The
medical document is as follows: Document: {document}
### D.2 Finetune and Inference prompt
user prompt \- Summarize the medical document given below from the perspective
of a persona:
$\#\#\#$ Document: {document}
### D.3 Critique
system: You are an AI assistant who is to evaluate the summary of a medical
document specific to a certain persona which can be doctor, patient or a
normal person. A doctor requires a detailed and technical summary about the
medical document. Patients require a layman’s summary about the medical
document, with information about things like causes, effects, treatment etc.
that may be helpful to them. A normal person has no medical knowledge and
requires a generic summary about the medical document. You need to return a
score between 0 and 1 reflecting the quality of the generated summary based on
some criteria.
user: You are given a medical document and the corresponding summary of the
document generated from the perspective of a {persona} predicted by a language
model as follows.
Document: {document}
Ground truth summary : {label summary}
Summary from the perspective of a {persona} [doctor/patient/normal person]:
{model generated summary}
Evaluate the above persona based summary for the document in terms of each of
the following criteria and return only a score between 0 and 1 without any
explanation:
* •
The extent to which the generated summary is relevant to the specific persona
{persona}[doctor/patient/normal person] based summary of the document.
* •
The extent to which the generated persona-based summary correctly covers all
the important key points described in the persona
{persona}[doctor/patient/normal person] based summary of the document.
* •
The extent to which the summary does not contain information specific to all
other possible personas {persona_set - persona}[doctor/patient/normal person]
based summary.
* •
Rate the summary from the point of view of the persona – whether the summary
is good, average, or bad. A good summary effectively captures the essential
points, presenting them clearly and concisely. It maintains accuracy,
encourages reader engagement, and serves as a compelling introduction to the
content. An average summary conveys the main points but may lack some clarity
or detail, presenting a decent overview without standing out in terms of
conciseness or precision. It provides a basic understanding but not from a
more refined focused summary and fails to accurately convey the main points,
containing inaccuracies or misinterpretations. It is either overly verbose or
lacks coherence, making it difficult for the reader to grasp the core
information effectively.
* •
Calculated summary from the point of view of the persona [Good/Bad/Average]
[Calculated from 4 with the help of manual annotation]
## Appendix E Experiments
### E.1 Varying Training Size Dataset
To understand the effect of training data size on performance, we vary the
WebMD training data for the Llama2-13b model - taking $10$-shot (the $k$-shot
setting with $k$=$10$), 10%, 40% and 70% of the initial training data - and
finetune the Llama2-13b model with the same parameter and hyper-parameter
settings; the five criteria of the GPT-4 critique outcome (in %) are shown in
Fig 1. We see that with increasing dataset size, the performance of Llama2-13b
improves in terms of both the GPT-4 critique and the traditional metrics. Even
with 40% of the dataset, the model achieves very good performance, which shows
the effectiveness of the Llama2-13b model. It also shows that even with a very
small amount of data ($10$-shot), Llama2-13b can generate appropriate persona-
and aspect-based summaries.
Figure 1: Different Training Data Sizes
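A minimal sketch of how the training subsets for these runs could be drawn; the seed, file names and record format are illustrative assumptions.

```python
import json
import random

random.seed(42)  # fixed seed so every run sees the same subset

with open("persona_data_train.json") as f:  # assumed file of prompt-completion pairs
    records = json.load(f)

subsets = {
    "10shot": random.sample(records, 10),
    "10pct": random.sample(records, int(0.10 * len(records))),
    "40pct": random.sample(records, int(0.40 * len(records))),
    "70pct": random.sample(records, int(0.70 * len(records))),
}

for name, subset in subsets.items():
    with open(f"train_{name}.json", "w") as out:
        json.dump(subset, out)
```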
### E.2 Tuning generation parameters during model inference:
We investigate the impact of tuning the max-new-token and temperature
generation parameters on the performance of our finetuned model during
inference. The variation in performance in terms of the five GPT-4 critique
based criteria is shown in Figures 2(a) and 2(b) respectively. We observe that
our model performs best for a temperature of $0$, and performance degrades
significantly as we increase the temperature beyond $0.4$. Similarly, the best
model performance is achieved for a max-new-token size of $350$. We have used
the NLTK, SpaCy, openai (version 0.28), huggingface_hub, torch and
transformers Python packages for all experiments.
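A minimal sketch of greedy inference with the best-performing generation settings reported above; the prompt template and argument names are illustrative assumptions.

```python
def generate_summary(model, tokenizer, document_text, persona="doctor"):
    # Greedy decoding, equivalent to temperature 0, with the best max-new-token size.
    prompt = (f"Summarize the medical document given below from the perspective of a {persona}:\n"
              f"### Document: {document_text}")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=350,  # best-performing value reported above
        do_sample=False,     # deterministic (temperature 0) decoding
    )
    # Strip the prompt tokens and return only the newly generated summary text.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```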
(a) w.r.t. max-new-token
(b) w.r.t. temperature
Figure 2: Variations of Llama2-13b-Finetune (in %)
### E.3 Time and GPU
We experiment on 80GB A100 GPU with GPU clock cycle 210 MHz. The finetuning
and inference time of our finetuned models are in Table 6.
Model | Finetune Time | Inference Time
---|---|---
Llama2-7b | 8 hrs | 2hrs 30 mins
Llama2-13b | 20 hrs | 2hrs 50 mins
Table 6: Model Training Time [using 80GB A100 GPU]
## Appendix F Human Annotations
Annotation Guidelines for Comparative Rating: We provide instructions with
explanations of the different personas and ask annotators to identify which
summary belongs to which persona, as shown in Fig 5. We also provide the link
to the document along with the distinct summaries of GPT-4 and the Llama2-13b
finetuned model for comparison, as shown in Fig 6. The instructions are the following -
“You are given a summary of a medical document specific to the perspective of
a certain group of people (doctor, patient and normal person).
A doctor requires a detailed and technical summary about the medical document.
A patient requires a layman summary about the medical document, with
information about things like causes, effects, treatment etc. that may be
helpful to him. A patient only requires a top level view of the extensive
medical details and not so much medical details like a doctor.
A normal person has no medical knowledge and requires a generic summary about
the medical document and does not require extensive medical details.”
Annotation Guidelines to Prolific and Doctors
Annotator selection is based on several criteria: ‘Degree subject’ in Health
and welfare, ‘Highest education level completed’ as Doctorate degree or
Graduate degree, ‘Fluent languages’ including English, ‘Approval Rate’ of
90–100, ‘Subject’ in Medicine, and ‘Employment Sector’ as Doctor. Further
annotations are conducted by graduate doctors (details in the Human Evaluation part).
Following is the annotation guideline to ‘Prolific’ annotation platform -
Objective: Generate Personified Summaries by Prolific
Introduction: In this study, you are tasked with generating a summary tailored
to three different personas: a Doctor, a Patient, and a Normal Person. You
will be provided with a document link containing a Source Document (SD), which
can be a medical research document or a general article related to health from
https://www.webmd.com/. Additionally, a Persona (P) will be present.
Your Task: Read the Source Document (and Persona) and craft three summaries,
each targeted towards one of the personas mentioned. Use your understanding
and perspective to tailor the information in a way that is most relevant and
comprehensible to each persona.
Summary Persona:
Doctor Persona: Craft a summary that focuses on medical terminology,
guidelines, and provides information suitable for a medical professional.
Emphasize technical accuracy and relevance to medical practice. A doctor
requires a detailed and technical summary about the medical document.
Patient Persona: Generate a summary with a patient-centric approach, avoiding
excessive technical jargon. Ensure that the information is clear, easily
understandable, and addresses concerns that a patient might have. A patient
requires a non-technical summary about the medical document, with information
about things like causes, effects, treatment etc. that may be helpful to him.
A patient only requires a top level view of the extensive medical details and
not so much medical details like a doctor.
Normal Person Persona: Tailor a summary for a general audience without a
medical background. Use simple language, avoid technical terms, and present
the information in a way that is accessible and engaging to a layperson. A
normal person has no medical knowledge and requires a generic summary about
the medical document and does not require extensive medical details.
Instructions: 1\. Carefully review the Source Document and the Persona.
Consider the specific needs and understanding level of each persona while
generating the summaries.
2\. No additional software download is required. Use a browser, preferably
Google Chrome, and ensure a stable internet connection.
3\. Allocate time judiciously for crafting each of the three summaries based
on the provided 2 SD instances.
4\. After completion, you will be asked to provide feedback on the generation
exercise, platform interaction, and details about your academic background,
age, country of birth, and any medical background or experience with model-
generated summaries.
Payment Requirements: Upon completing the study, click on the provided link
containing the completion code to redirect you to the Prolific platform.
Payment will be processed within one to two weeks.
Ethical Considerations:
Adhere to strict confidentiality and data protection standards to ensure the
privacy of medical information. If you have concerns or questions, feel free
to reach out, as this study aligns with ethical guidelines.
This study aims to harness diverse perspectives, including those of medical
professionals, to refine the generation of personified summaries for enhanced
utility in various contexts.
Next, the details while providing the documents - “You will be given 2
documents in the next 2 pages and you need to write the summaries with respect
to Doctor, Patient and Normal Person (as in example).
Please do not use ChatGPT/GPT-4 or any Large Language Models - all summaries
should be generated by human properly. It is a strict instruction and will be
checked manually - if found any issue: it will be rejected and re-doing will
be required.
Your summary length (word count) is approximately 15% - 20% of the document
length (word count) for three different types.”
## Appendix G Examples
Two human-annotated examples for the doctor persona are shown in Fig 3, where
the GPT-4 generated summary is better, and Fig 4, where the LLAMA2-13b
generated summary is better. Two examples of the human annotation interface
are shown in Fig 5 and Fig 6 respectively.
Figure 3: GPT-4 generated summary better than LLAMA2-13b model generated summary [persona: doctor]
Figure 4: LLAMA2-13b model generated summary better than GPT-4 generated summary [persona: doctor]
Figure 5: Persona identification experiment example snapshot
Figure 6: Llama2-13b finetune and GPT-4 summary comparison experiment example snapshot
# On hyperbolic characteristic functions from an analytic and a free-
probability point of view
Zbigniew J. Jurek (University of Wrocław)
(May 25, 2020)
> Abstract. For free-probability Voiculescu transforms, analogous to
> hyperbolic characteristic functions, we show how to get their representing
> measures in an integral form. For that purpose it is enough to know those
> transforms only on the imaginary axis. This is in contrast to complex
> analysis, where one needs to know them on some domains in the complex plane.
>
> _Mathematics Subject Classifications_(2020): Primary 60E10, 60E07, 44A20.
>
> _Key words and phrases:_ a characteristic function; hyperbolic
> characteristic functions; infinite divisibility; Lévy measure; Voiculescu
> free-infinite divisibility; Laplace transform; convolution.
>
> _Abbreviated title: On hyperbolic characteristic function._
Author’s address:
Institute of Mathematics, University of Wrocław, Pl. Grunwaldzki 2/4,
50-384 Wrocław, Poland;
Email<EMAIL_ADDRESS>; www.math.uni.wroc.pl/$\sim$zjjurek
In classical complex analysis one of the fundamental results is the integral
representation of _analytic functions_ mapping the upper complex half-plane to
the lower one. Those functions, say $H$, admit the unique canonical form
$H(z)=a+\int_{\mathbb{R}}\frac{1+zx}{z-x}\rho(dx)=a-\int_{\mathbb{R}}\big{[}\frac{1}{x-z}-\frac{x}{1+x^{2}}\big{]}(1+x^{2})\rho(dx),\
(\star)$
where $a\in\mathbb{R}$ is a constant and $\rho$ is a finite (Borel) measure on
the real line $\mathbb{R}$; (in the literature such $H$ are called Pick
functions and the representation $(\star)$ is called the Nevanlinna Theorem.)
One simply notes that the constant $a=\Re H(i)$ (the real part), and the total
mass $\rho(\mathbb{R})=-\Im H(i)$. Finally, for the measure $\rho$ we have the
inversion formula
$\rho([c,d])=\lim_{\epsilon\to 0^{+}}\frac{1}{\pi}\int_{c}^{d}\Im
H(x+i\epsilon)dx,\ \mbox{whenever }\ \rho(\\{c,d\\})=0;\qquad\ (\star\star)$
cf. Akhiezer (1965), p. 125, or Lang (1975), p. 380, or Bondesson (1992), p. 21.
However, what can be said about a measure $\rho$ if we only have the values
$H(it)$, for $t\neq 0$, and we do not know whether they are the restriction of
an analytic function to the imaginary axis? Note that in $(\star\star)$ we
need to know $H$ in some strips of the complex plane to retrieve the measure
$\rho$. Nevertheless, in Jankowski and Jurek (2012), Theorem 1, there is an
inversion procedure that allows us to identify the measure $\rho$, or more
precisely its characteristic function $\hat{\rho}$.
In the last couple of decades, representations of the form $(\star)$ have
appeared in so-called _free probability_ as free analogs of the classical
Lévy-Khintchine formula for infinitely divisible characteristic functions
(Fourier transforms).
In this note we show applications of the inversion formula from Jankowski and
Jurek (2012) for $\tilde{C},\tilde{S},\tilde{T}$ free-analogs of the classical
hyperbolic functions $C,S$ and $T$. Recall that $C,S$ and $T$ are defined by
their characteristic functions as follows
$\phi_{C}(t):=\frac{1}{\cosh(t)},\ \ \phi_{S}(t):=\frac{\sinh(t)}{t},\ \
\phi_{T}(t):=\frac{\tanh(t)}{t},\ t\in\mathbb{R}.$
In free probability the variables $\tilde{C},\tilde{S},\tilde{T}$ are given by
their Voiculescu transforms $V_{\tilde{C}}(z),V_{\tilde{S}}(z)$ and
$V_{\tilde{T}}(z),z\in\mathbb{C}^{+}$, although in our approach to free-
probability theory we consistently use only purely imaginary $z=it,t\neq 0$;
cf. Jurek (2019), Corollaries 3, 4 and 5, respectively.
Let us recall that hyperbolic characteristic functions, from an infinite
divisibility point of view, were studied in Pitman and Yor (2003) and from a
selfdecomposability point of view in Jurek (1997) (as infinite series of
independent exponentially distributed variables), and in Jurek-Yor (2004)
(from stochastic representations of their background driving processes). The
last description is possible because all hyperbolic characteristic functions
are selfdecomposable and therefore admit a representation by random integrals;
cf. Jurek and Mason (1993), Chapter 3, Theorem 3.6.8, or Jurek and Vervaat
(1983), Theorem 3.2.
1\. Introduction.
Let us for an index $X$ (where $X$ can be a random variable or a measure or a
characteristic function) define a function $V_{X}$ on the imaginary axis
$i(\mathbb{R}\setminus{\\{0\\}})$ as follows
$V_{X}(it):=a_{X}+\int_{\mathbb{R}}\frac{1+itx}{it-x}m_{X}(dx),\ \ t\neq 0,$
(1)
where $a_{X}\in\mathbb{R}$ and $m_{X}$ is a non-negative, finite Borel
measure. Then note that $V_{X}(i)=a_{X}-i\,m_{X}(\mathbb{R})$ and hence we get
$a_{X}=\Re V_{X}(i)\in\mathbb{R},\ \ \ m_{X}(\mathbb{R})=-\Im
V_{X}(i)\in[0,\infty).$ (2)
Furthermore, if $\hat{m}_{X}(s):=\int_{\mathbb{R}}e^{isx}m_{X}(dx)$ denotes _a
characteristic function of a measure $m_{X}$_ then its _Laplace transform
$\mathfrak{L}[\hat{m}_{X};w]$, for $w>0$_, satisfies equality
$\mathfrak{L}[\hat{m}_{X};w]=\int_{0}^{\infty}\hat{m}_{X}(x)e^{-wx}dx=\frac{iV_{X}(-iw)-i\Re
V_{X}(i)-w\Im V_{X}(i)}{w^{2}-1};$ (3)
cf. Theorem 1 in Jankowski and Jurek (2012). Equivalently,
$\mathfrak{L}[\hat{m_{X}}(s)+ia_{X}\sinh(s)-m_{X}(\mathbb{R})\cosh(s);w]=\frac{iV_{X}(-iw)}{w^{2}-1},\
\ \ \mbox{because}\\\ \mathfrak{L}\big{[}\sinh x;w]=\frac{1}{w^{2}-1},\
\mbox{and}\ \mathfrak{L}[\cosh x;w]=\frac{w}{w^{2}-1},\ \mbox{for}\ w>1.$ (4)
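Indeed, substituting $a_{X}=\Re V_{X}(i)$ and $m_{X}(\mathbb{R})=-\Im V_{X}(i)$ from (2) into (3) and using the two Laplace transforms listed above gives (4) directly:
$\mathfrak{L}[\hat{m}_{X}(s)+ia_{X}\sinh(s)-m_{X}(\mathbb{R})\cosh(s);w]=\frac{iV_{X}(-iw)-i\Re V_{X}(i)-w\Im V_{X}(i)}{w^{2}-1}+\frac{ia_{X}}{w^{2}-1}-\frac{w\,m_{X}(\mathbb{R})}{w^{2}-1}=\frac{iV_{X}(-iw)}{w^{2}-1},$
since $ia_{X}=i\Re V_{X}(i)$ and $w\,m_{X}(\mathbb{R})=-w\Im V_{X}(i)$.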
In the propositions below our aim is to show that the functions
$w\to\frac{iV_{X}(-iw)}{w^{2}-1}$
(on the right-hand side of (4)) are indeed Laplace transforms of some
functions or measures. This, in principle, enables us to identify the
representing measure $m_{X}$ in the canonical form (1).
2\. Results. In order to present our results we will need some special
functions. Therefore, before each proposition we recall their definitions
and/or characterizations. Many of those functions are derived from _Euler’s
$\Gamma$ gamma function_: $\Gamma(z):=\int_{0}^{\infty}x^{z-1}e^{-x}dx;\ \Re
z>0$ and _digamma function_ $\psi(z):=d/dz\log\Gamma(z)$. For more facts and
formulas we refer to the Appendix at the end of this article.
In the first proposition we need the _beta function $\beta$_ , which admits the
representation $\beta(z)=\int_{0}^{\infty}(1+e^{-x})^{-1}\,e^{-zx}dx,\ \ \Re
z>0$, and which was originally defined via the digamma function $\psi$; cf. Appendix
(A).
###### Proposition 1.
For a free-infinitely divisible Voiculescu transform
$V_{\tilde{C}}(it)=i[1-t\beta(t/2)],t\neq 0,$
we have that in its representation (1) the real parameter $a_{\tilde{C}}=0$ and
the measure $m_{\tilde{C}}$ is such that its total mass
$m_{\tilde{C}}(\mathbb{R})=\pi/2-1\approx 0.57079$ and its characteristic
function $\hat{m}_{\tilde{C}}$ is equal to
$\hat{m}_{\tilde{C}}(s)=2\sinh(s)\tan^{-1}(e^{-s})+\frac{\pi}{2}e^{-s}-1\\\
=\int_{0}^{\infty}\cos(sx)\frac{|x|}{1+x^{2}}\frac{1}{\sinh(\pi|x|/2)}dx$ (5)
($\tilde{C}$ indicates a free-probability analogue of a classical hyperbolic
cosine characteristic function $\phi_{C}(t)=1/\cosh(t).$)
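The two expressions for $\hat{m}_{\tilde{C}}$ in (5) can also be compared numerically. The following sketch (purely illustrative, assuming numpy and scipy; it plays no role in the proof given in Section 3) evaluates both sides at a few points $s>0$.

```python
# Numerical comparison of the two expressions for m-hat_{C~} in (5).
import numpy as np
from scipy.integrate import quad

def closed_form(s):
    # 2 sinh(s) arctan(e^{-s}) + (pi/2) e^{-s} - 1
    return 2.0 * np.sinh(s) * np.arctan(np.exp(-s)) + 0.5 * np.pi * np.exp(-s) - 1.0

def cosine_integral(s):
    # int_0^inf cos(sx) * x/(1+x^2) * 1/sinh(pi x/2) dx;
    # the integrand tends to 2/pi as x -> 0 and decays like e^{-pi x/2},
    # so truncating at x = 60 loses nothing numerically.
    f = lambda x: np.cos(s * x) * x / (1.0 + x**2) / np.sinh(0.5 * np.pi * x)
    value, _ = quad(f, 1e-12, 60.0, limit=200)
    return value

for s in (0.5, 1.0, 2.0):
    print(s, cosine_integral(s), closed_form(s))  # the two columns should agree
```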
To formulate the next proposition we need two special functions. Namely, _the
digamma function_ $\psi$, which is defined as
$\psi(z):=\frac{d}{dz}\log\Gamma(z)$, and _the exponential integral function_
$Ei(x):=-\int_{-x}^{\infty}\frac{e^{-t}}{t}dt$, for $x<0$; for more see
Appendix (B).
###### Proposition 2.
For a free-infinitely divisible Voiculescu transform
$V_{\tilde{S}}(it)=i[t\psi(t/2)-t\log(t/2)+1],t\neq 0,$
we have that in its representation (1) the parameter $a_{\tilde{S}}=0$ and the
measure $m_{\tilde{S}}$ is such that $m_{\tilde{S}}(\mathbb{R})=\gamma+\log
2-1\approx 0.270362$ and its characteristic function $\hat{m}_{\tilde{S}}$ is
of the form
$\hat{m}_{\tilde{S}}(s)=-1+\cosh(s)\big{(}\log(1+e^{-s})-\log(1-e^{-s})\big{)}+\frac{e^{-s}}{2}Ei(s)+\frac{e^{s}}{2}Ei(-s)\\\
=2\int_{0}^{\infty}\cos(sx)\frac{x}{1+x^{2}}\frac{1}{e^{\pi x}-1}dx,\ \
s>0,\qquad\qquad$ (6)
and $Ei(x)$ is an exponential integral function.
($\tilde{S}$ indicates a free-probability counterpart of the hyperbolic sine
characteristic function $\phi_{S}(t)=t/\sinh(t)$.)
In the next statement the special functions from the two previous propositions
appear because of the elementary relation $\phi_{C}(t)=\phi_{S}(t)\cdot\phi_{T}(t)$.
###### Proposition 3.
For a free-infinitely divisible Voiculescu transform
$V_{\tilde{T}}(it)=V_{\tilde{C}}(it)-V_{\tilde{S}}(it)=it\,[\,\log(t/2)-\beta(t/2)-\psi(t/2)\,],t\neq
0,$
we have that in its representation (1) the parameter $a_{\tilde{T}}=0$ and for
the measure $m_{\tilde{T}}$ we have that
$m_{\tilde{T}}(\mathbb{R})=\pi/2-\gamma-\log 2\,\,(\approx 0.3004)$, and its
characteristic function $\hat{m}_{\tilde{T}}$ has the form
$\hat{m}_{\tilde{T}}(s)=\frac{\pi}{2}e^{-s}+2\sinh(s)\tan^{-1}(e^{-s})-\cosh
s\log(1+e^{-s})\\\
-\frac{e^{-s}}{2}\big{(}Ei(s)-\log(1-e^{-s})\big{)}-\frac{e^{s}}{2}\big{(}Ei(-s)-\log(1-e^{-s})\big{)}\\\
=\int_{0}^{\infty}\cos(sx)\frac{|x|}{1+x^{2}}\frac{e^{-\pi|x|/4}}{\cosh(\pi|x|/4)}dx,\
\ s>0.\qquad\qquad$ (7)
($\tilde{T}$ indicates a free-probability analog of the classical hyperbolic tangent
characteristic function $\phi_{T}(t)=\tanh(t)/t$.)
All three Voiculescu transforms from Propositions 1, 2 and 3 are free-
probability analogs of selfdecomposable characteristic functions. Therefore
they have so called _background driving terms_ from corresponding random
integral representations. In particular, these are background driving
characteristic functions $\psi_{\tilde{C}},\psi_{\tilde{S}}$ and
$\psi_{\tilde{T}}$, and Lévy (spectral) measures $N_{C},N_{S}$ and $N_{T}$;
cf. Jurek (2019), Section 2.1 or Jurek-Yor (2004).
As in the previous propositions we have similar results for them as well,
although we computed them only for the background driving characteristic
function $\psi_{\tilde{C}}$; cf. Proposition 4 below.
For the following proposition we need another two special functions. Namely,
_Riemann’s zeta function_ $\zeta$, and the _polylogarithm functions $Li_{n}(z)$_
(in Wolframalpha.com language written as $PolyLog[n,z]$); cf. Appendix (C).
###### Proposition 4.
For a free-infinitely divisible Voiculescu transform
$V_{\psi_{\tilde{C}}}(it)=i\,[\frac{t^{2}}{2}\zeta(2,t/2)-\frac{t^{2}}{4}\zeta(2,t/4)+1]$
we have that in its representation (1) the parameter $a_{\psi_{\tilde{C}}}=0$, the
total mass $m_{\psi_{\tilde{C}}}(\mathbb{R})=2C-1\,(\approx 0.83193)$, and the
characteristic function of the measure $m_{\psi_{\tilde{C}}}$ is
$\hat{m}_{\psi_{\tilde{C}}}(t)=-1-t\tanh(t)-\cosh(t)\big{(}\,i(Li_{2}(ie^{t})-Li_{2}(-ie^{t}))+2t\arctan(e^{t})\,\big{)}\\\
=\frac{\pi}{2}\int_{0}^{\infty}\cos(tx)\,\frac{x^{2}}{1+x^{2}}\,\frac{\cosh(\pi
x/2)}{\sinh^{2}(\pi x/2)}dx.\qquad\qquad$
In particular, we have:
$i(Li_{2}(ie^{t})-Li_{2}(-ie^{t}))=2\sum_{k=1}^{\infty}(-1)^{k}\frac{e^{(2k-1)t}}{(2k-1)^{2}};\
\ i(Li_{2}(i)-Li_{2}(-i))=-2C.$
As a by-product of our Propositions 1 and 4, we have
###### Corollary 1.
For the hyperbolic cosine characteristic function $\phi_{C}(t)=1/\cosh t$ we have
$\hat{m}_{\psi_{\tilde{C}}}(t)+\hat{m}_{\tilde{C}}(t)=2\int_{0}^{\infty}\cos(tx)\frac{x^{3}}{1+x^{2}}\big{(}k_{C}(x)\big{)}^{\prime}dx$
where the function $k_{C}(x):=(2x\sinh(\pi x/2))^{-1}$ is the density of the Lévy
measure of the hyperbolic cosine characteristic function $\phi_{C}$.
3\. PROOFS
All boldface numbers appearing below refer to the corresponding formulae in
Gradshteyn-Ryzhik (1994).
_Proof of Proposition 1._
First, note that $V_{\tilde{C}}(i)=i(1-\beta(1/2))=-i(\pi/2-1)$ and therefore
$m_{\tilde{C}}(\mathbb{R})=\pi/2-1$. Second, since
$\beta(s)=\int_{0}^{\infty}\frac{1}{1+e^{-x}}e^{-sx}dx=\mathfrak{L}[(1+e^{-x})^{-1};s],\
\ \Re s>0,\ \textbf{8.371}(2)$
consequently
$\frac{iV_{\tilde{C}}(-iw)}{w^{2}-1}=\frac{1-w\beta(w/2)}{w^{2}-1}=\mathfrak{L}[\sinh
x;w]-\mathfrak{L}[\cosh x;w]\,\mathfrak{L}[2/(1+e^{-2x});w]\\\
=\mathfrak{L}[\sinh
x;w]-\mathfrak{L}[\,\underline{(\cosh(s)\ast(2/(1+e^{-2s})))}(x);w],\ \
\qquad$ (8)
where $\ast$ denotes a convolution of functions on positive half-line.
Third, one checks by a differentiation (or by WolframAlpha or Mathematica)
that for $x>0$ we have
$\big{(}\underline{\cosh(s)\ast(2/(1+e^{-2s}))}\big{)}(x):=\int_{0}^{x}\cosh(x-s)\,\frac{2}{1+e^{-2(x-s)}}ds\\\
=2\sinh(x)\arctan(e^{s-x})-e^{-s}+constant|_{s=0}^{s=x}\\\
=2\sinh(x)\arctan(e^{x-x})-e^{-x}-\big{(}2\sinh(x)\arctan(e^{-x})-1\big{)}\\\
=2\sinh(x)(\pi/4-\arctan(e^{-x}))-e^{-x}+1,\qquad\qquad$
and inserting it into (8) we get
$\frac{iV_{\tilde{C}}(-iw)}{w^{2}-1}=\mathfrak{L}[\sinh x-2\sinh
x(\pi/4-\arctan(e^{-x}))+e^{-x}-1;w]$ (9)
Finally, since $m_{C}(\mathbb{R})=\pi/2-1$ and taking into account (4) we have
$\hat{m}_{C}(x)=(\pi/2-1)\cosh x+\sinh x-2\sinh
x(\pi/4-\arctan(e^{-x}))+e^{-x}-1\\\ =2\sinh
x\arctan(e^{-x})+\frac{\pi}{2}e^{-x}-1,\qquad$
which gives the first equality in (5).
On the other hand, from Jurek (2019), Example 1 we know that
$V_{\tilde{C}}(it)$ is a free-probability analog of a classical hyperbolic
characteristic function $1/\cosh(t)$ whose (finite) Khintchine measure $m_{C}$
has a density $\frac{1}{2}\frac{|x|}{1+x^{2}}\frac{1}{\sinh(\pi|x|/2)}$, for
$x\in\mathbb{R}$. Thus
$\hat{m}_{C}(t)=\int_{\mathbb{R}}e^{itx}\frac{1}{2}\frac{|x|}{1+x^{2}}\frac{1}{\sinh(\pi|x|/2)}dx=\int_{0}^{\infty}\cos(tx)\frac{|x|}{1+x^{2}}\frac{1}{\sinh(\pi|x|/2)}dx,$
which completes the proof of Proposition 1.
###### Remark 1.
_From a formula in Proposition 1, we have an identity_
$\int_{0}^{\infty}\cos(sx)\,\frac{|x|}{1+x^{2}}\frac{1}{\sinh(\pi|x|/2)}dx=-1+\frac{\pi}{2}e^{-s}+2\sinh(s)\tan^{-1}(e^{-s}),\
s>0;$
_which is confirmed by 4.113(8). _
_Proof of Proposition 2._
Since $\psi(1/2)=-\gamma-2\log 2$ (8.366(2)), we get $V_{S}(i)=-i(\gamma+\log
2-1)$. Hence the parameter $a_{\tilde{S}}=0$ and the finite measure
$m_{\tilde{S}}$ in (1) has total mass
$m_{\tilde{S}}(\mathbb{R})=\gamma+\log 2-1$. Finally, using an integral
formula for the $\psi$ function,
$\psi(z)=\log
z+\int_{0}^{\infty}\big{(}\frac{1}{s}-\frac{1}{1-e^{-s}}\big{)}e^{-zs}ds,\ \Re
z>0;\ \ \textbf{8.361}(8),$
we have
$\frac{iV_{S}(-iw)}{w^{2}-1}=\frac{1+w(\psi(w/2)-\log(w/2))}{w^{2}-1}\\\
=\mathfrak{L}[\sinh x;w]+\mathfrak{L}[\cosh
x;w]\mathfrak{L}[\frac{1}{x}-\frac{2}{1-e^{-2x}};w]\\\ =\mathfrak{L}[\sinh
x+\underline{\big{(}\cosh
s\ast(\frac{1}{s}-\frac{2}{1-e^{-2s}})\big{)}}(x);w],$ (10)
where $\ast$ denotes the convolution of functions.
By a direct differentiation, using the properties quoted before the proof of this
proposition (or using www.Wolframalpha.com), one checks that:
$\int\cosh(x-s)\Big{[}\frac{1}{s}-\frac{2}{1-e^{-2s}}\Big{]}ds\\\
=\frac{e^{-x}}{2}Ei(s)+\frac{e^{x}}{2}Ei(-s)-e^{s-x}+\cosh(x)\,\log\frac{1+e^{-s}}{1-e^{-s}}+constant.$
Thus for $x>0$ we have
$\underline{\big{(}\cosh
s\ast(\frac{1}{s}-\frac{2}{1-e^{-2s}})\big{)}}(x)=\int_{0}^{x}\cosh(x-s)\Big{[}\frac{1}{s}-\frac{2}{1-e^{-2s}}\Big{]}ds\\\
=\big{[}\frac{e^{-x}}{2}Ei(s)+\frac{e^{x}}{2}Ei(-s)-e^{s-x}+\cosh(x)\,\log\frac{1+e^{-s}}{1-e^{-s}}\Big{]}\Big{|}^{s=x}_{s=0^{+}}\\\
=\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-e^{x-x}+\cosh(x)\,\log\frac{1+e^{-x}}{1-e^{-x}}\\\
-\lim_{s\to
0^{+}}\big{[}\frac{e^{-x}}{2}(Ei(s)-\log(1-e^{-s}))+\frac{e^{x}}{2}(Ei(-s)-\log(1-e^{-s}))\\\
-e^{s-x}+\cosh
x\log(1+e^{-s})\big{]}=\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-1\\\
+\cosh(x)\,\log\frac{1+e^{-x}}{1-e^{-x}}-\gamma\cosh x+e^{-x}-\cosh x\log 2\\\
=\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-1+e^{-x}+\cosh(x)\big{(}\log\frac{1+e^{-x}}{1-e^{-x}}-\gamma-\log
2\big{)}.$
Inserting the above into (10) and using (4) with
$m_{S}(\mathbb{R})=\gamma+\log 2-1$ we get
$\hat{m}_{S}(x)=(\gamma+\log 2-1)\cosh(x)+\sinh(x)\\\
+\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-1+e^{-x}+\cosh(x)\big{(}\log\frac{1+e^{-x}}{1-e^{-x}}-(\gamma+\log
2)\big{)}\\\
=-\cosh(x)+\sinh(x)+\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-1+e^{-x}+\cosh(x)\log\frac{1+e^{-x}}{1-e^{-x}}\\\
=\frac{e^{-x}}{2}Ei(x)+\frac{e^{x}}{2}Ei(-x)-1+\cosh(x)(\log(1+e^{-x})-\log(1-e^{-x})),$
(11)
which proves the first equality in Proposition 2.
Since from Jurek (2019), Corollary 4, we know that $V_{\tilde{S}}$ is the free-
analog of the classical hyperbolic sine characteristic function
$\phi_{S}(t)=t/\sinh(t)$, whose (finite) Khintchine measure is equal to
$m_{S}(dx)=\frac{1}{2}\frac{|x|}{1+x^{2}}\frac{e^{-\pi|x|/2}}{\sinh(\pi|x|/2)}dx=\frac{|x|}{1+x^{2}}\,\frac{1}{e^{\pi|x|}-1}dx,\
\mbox{on}\ \mathbb{R},$
we get the second equality in Proposition 2.
###### Remark 2.
_From Proposition 2 we get an integral identity_
$2\int_{0}^{\infty}\cos(sx)\frac{x}{1+x^{2}}\frac{1}{e^{\pi x}-1}dx\\\
=-1+\cosh(s)\big{(}\log(1+e^{-s})-\log(1-e^{-s})\big{)}+\frac{e^{-s}}{2}Ei(s)+\frac{e^{s}}{2}Ei(-s),\
\ s>0,$
_which might be of some interest and seems to be new._
Since for the hyperbolic characteristic functions $\phi_{C},\phi_{S}$ and
$\phi_{T}$ we have $\phi_{C}(t)=\phi_{S}(t)\cdot\phi_{T}(t)$, for their
Khintchine measures $m_{C},m_{S},m_{T}$ we have
$m_{C}(dx)=m_{S}(dx)+m_{T}(dx)$.
_Proof of Proposition 3._
Taking into account the discussion above and the fact that the free-infinitely
divisible transforms $V_{\tilde{C}},V_{\tilde{S}},V_{\tilde{T}}$ in Jurek
(2019) were defined via a one-to-one correspondence with classical infinite
divisibility, we get that
$V_{\tilde{C}}(it)=V_{\tilde{S}}(it)+V_{\tilde{T}}(it),t\neq 0$. Consequently,
the proof of Proposition 3 follows from the proofs of Propositions 1 and 2.
_Proof of Proposition 4._
Since $V_{\psi_{\tilde{C}}}(i)=-i(2C-1)\approx-i\,0.83193$ ($C$ is the Catalan
constant $\approx 0.9159$), in (1) we have $a_{\psi_{C}}=0$, and for the measure
$m_{\psi_{C}}$ we have $m_{\psi_{C}}(\mathbb{R})=2C-1$. Using (4) and the
integral representation for the $\zeta(2,s)$ function in Appendix (C), we have
$\mathfrak{L}[\hat{m_{\psi_{C}}}(x)-(2C-1)\cosh(x);t]=\frac{t^{2}/2(\zeta(2,t/2)-1/2\zeta(2,t/4))+1}{t^{2}-1}\\\
=\frac{1}{t^{2}-1}+\frac{1}{2}\frac{t^{2}}{t^{2}-1}\Big{[}8\int_{0}^{\infty}\frac{w}{1-e^{-2w}}e^{-tw}dw-16\int_{0}^{\infty}\frac{w}{1-e^{-4w}}e^{-tw}dw\Big{]}\\\
=\mathfrak{L}[\sinh
x;t]+2(1+\frac{1}{t^{2}-1})\int_{0}^{\infty}w\frac{e^{-2w}-1}{1-e^{-4w}}e^{-tw}dw=\\\
\mathfrak{L}[\sinh
x;t]+(1+\frac{1}{t^{2}-1})\int_{0}^{\infty}w\frac{1-e^{2w}}{\sinh(2w)}\,e^{-tw}dw\\\
=\mathfrak{L}[\sinh x+x\frac{1-e^{2x}}{\sinh(2x)};t]+\mathfrak{L}[\sinh
x;t]\,\mathfrak{L}[x\frac{1-e^{2x}}{\sinh(2x)}]\\\ =\mathfrak{L}[\sinh
x+x\frac{1-e^{2x}}{\sinh(2x)}+h_{\hat{\psi_{C}}}(x);t].\qquad\qquad$
where
$h_{\hat{\psi_{C}}}(x):=(\sinh u\ast
u\frac{1-e^{2u}}{\sinh(2u)})(x)=\int_{0}^{x}\sinh(x-u)u\frac{1-e^{2u}}{\sinh(2u)}du.$
(12)
Consequently from the above calculation we get
$\hat{m_{\psi_{C}}}(x)=(2C-1)\cosh x+\sinh
x+\frac{x(1-e^{2x})}{\sinh(2x)}+h_{\hat{\psi_{C}}}(x),\ \mbox{for $x\geq 0$.}$
(13)
Using the website www.Wolframalpha.com or just by elementary computations, knowing
that $d/dx\,Li_{2}(\pm ix)=-x^{-1}\log(1\mp ix)$ (cf. Appendix (C)), one checks
that
$\int\sinh(x-s)s\frac{1-e^{2s}}{\sinh(2s)}ds=e^{-x}(s-1)e^{s}\\\
-i\cosh(x)\big{(}-Li_{2}(-ie^{s})+Li_{2}(ie^{s})+s\log(1-ie^{s})-s\log(1+ie^{s})\big{)}+const.$
Since $\lim_{x\to 0^{+}}(Ei(\pm x)-\log x)=\gamma$ (Euler constant), from
(12), for $x>0$,
$h_{\hat{\psi_{C}}}(x)=e^{-x}(s-1)e^{s}\qquad\qquad\qquad\qquad\qquad\\\
-i\cosh(x)\big{(}-Li_{2}(-ie^{s})+Li_{2}(ie^{s})+s\log(1-ie^{s})-s\log(1+ie^{s})\big{)}|^{s=x}_{s=0}$
$=e^{-x}(x-1)e^{x}-i\cosh(x)\big{(}-Li_{2}(-ie^{x})+Li_{2}(ie^{x})+x\log(1-ie^{x})\\\
-x\log(1+ie^{x})\big{)}-[e^{-x}(-1)-i\cosh(x)\big{(}-Li_{2}(-i)+Li_{2}(i)]\\\
=(x-1)-i\cosh(x)\big{(}-Li_{2}(-ie^{x})+Li_{2}(ie^{x})+x\log\frac{1-ie^{x}}{1+ie^{x}}\big{)}+e^{-x}-2C\cosh(x)\\\
=-1+x+e^{-x}-2C\cosh
x-i\cosh(x)\big{(}-Li_{2}(-ie^{x})+Li_{2}(ie^{x})-2ix\arctan(e^{x})\big{)},$
where in the last line we use
$\log(1-ie^{x})-\log(1+ie^{x})=-2i\arctan(e^{x})$; cf. Appendix (D). And finally,
from (13) we arrive at
$\hat{m_{\psi_{C}}}(x)=(2C-1)\cosh x+\sinh
x+\frac{x(1-e^{2x})}{\sinh(2x)}-1+x+e^{-x}-2C\cosh x\\\
-i\cosh(x)\big{(}-Li_{2}(-ie^{x})+Li_{2}(ie^{x})-2ix\arctan(e^{x})\big{)}\\\
=-1+x+\frac{x(1-e^{2x})}{\sinh(2x)}-i\cosh(x)\big{(}-Li_{2}(-ie^{x})+Li_{2}(ie^{x})-2ix\arctan(e^{x})\,\big{)}\qquad\qquad$
and since $1+(1-e^{2x})/\sinh(2x)=-\tanh(x)$, this completes the first
part of Proposition 4.
For the second one, let us recall from Jurek and Yor (2004), Corollary 1
and formula (7) on p. 186, that the Khintchine (finite) measure corresponding to
the background driving characteristic function (BDCF) $\psi_{C}$ is given by
$m_{\psi{{}_{C}}}(dx)=\frac{\pi}{4}\,\frac{x^{2}}{1+x^{2}}\,\frac{\cosh(\pi
x/2)}{\sinh^{2}(\pi x/2)}dx,\ \mbox{on}\ \mathbb{R}.$
Hence we get that
$\hat{m}_{\psi_{\tilde{C}}}(t)=\frac{\pi}{2}\,\int_{0}^{\infty}\cos(tx)\frac{x^{2}}{1+x^{2}}\,\frac{\cosh(\pi
x/2)}{\sinh^{2}(\pi x/2)}dx=-1+t+\frac{t(1-e^{2t})}{\sinh(2t)}\\\
-i\cosh(t)\big{(}-Li_{2}(-ie^{t})+Li_{2}(ie^{t})+t\log\frac{1-ie^{t}}{1+ie^{t}}\,\big{)},\
\mbox{for}\ t\geq 0.\qquad\qquad$
which completes the proof of Proposition 4.
###### Remark 3.
_As a consequence of Proposition 4 we have a formula_
$\frac{\pi}{2}\,\int_{0}^{\infty}\cos(tx)\frac{x^{2}}{1+x^{2}}\,\frac{\cosh(\pi
x/2)}{\sinh^{2}(\pi x/2)}dx=-1+t+\frac{t(1-e^{2t})}{\sinh(2t)}\\\
-i\cosh(t)\big{(}-Li_{2}(-ie^{t})+Li_{2}(ie^{t})-2it\arctan(e^{t})\,\big{)},\
\mbox{for}\ t\in\mathbb{R},\qquad\qquad$
_that might be of some interest as well._
_Proof of Corollary 1._
In general, if $M(dx)=h(x)dx,\ h(x)>0$, is the spectral measure of a
selfdecomposable characteristic function, say $\phi$, then
$N(dx):=-(xh(x))^{\prime}dx$ is the spectral measure of the characteristic function
$\psi(t):=\exp[t\big{(}\log\phi(t)\big{)}^{\prime}]$, the so-called background
driving characteristic function; cf. Jurek (1997), Corollary 1.1, p. 97, or
Jurek and Yor (2004), p. 183, formulae (d) and (e).
Consequently, on the level of Khintchine (finite) measures we have
$n(dx):=\frac{x^{2}}{1+x^{2}}N(dx)\\\
=-\frac{x^{2}}{1+x^{2}}(h(x)+xh^{\prime}(x))dx=-m(dx)-\frac{x^{2}}{1+x^{2}}xh^{\prime}(x)dx,$
which gives the proof of Corollary 1 when applied to $h(x):=k_{C}(x)$.
###### Remark 4.
_Since the hyperbolic sine and the hyperbolic tangent characteristic functions are
selfdecomposable as well, we may have statements about $\tilde{S}$ and $\tilde{T}$
analogous to the one in Corollary 1 for the hyperbolic cosine function._
4\. Appendix.
For the reader's convenience we collect here some useful facts. Boldface
numbers refer to formulae from Gradshteyn and Ryzhik (1994).
(A) $\beta(x):=\frac{1}{2}[\ \psi(\frac{x+1}{2})-\psi(\frac{x}{2})\ ],\
\beta(x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{x+k},\ -x\notin\mathbb{N},\
\textbf{8.732(1)}.$
(B) For the exponential integral function $Ei$ we have:
$\ Ei(x)=-\int_{-x}^{\infty}\frac{e^{-t}}{t}\,dt,\ \mbox{for}\ \ x<0;\ \ \ \ \
\textbf{8.211}(1);\\\ \ Ei(x)=-\lim_{\epsilon\to
0}\Big{[}\int_{-x}^{-\epsilon}\frac{e^{-t}}{t}dt+\int_{\epsilon}^{\infty}\frac{e^{-t}}{t}dt\Big{]},\
\mbox{for}\ \ x>0;\ \ \ \textbf{8.211}(2)\qquad\\\ \ \ Ei(x)=\gamma+\log
x+\sum_{k=1}^{\infty}\frac{x^{k}}{k\cdot k!},\ \mbox{for }x>0;\ \
\textbf{8.214}(2);\ \ (\gamma\ \mbox{is Euler constant})$
From the above we get: $\ \frac{d}{dx}Ei(\pm x)=\frac{e^{\pm x}}{x}$ and
$\lim_{x\to 0^{+}}\,\big{(}Ei(\pm x)-\log x\big{)}=\gamma.$
(C) For Riemann $\zeta$ functions we have:
$\zeta(s,a):=\sum_{k=0}^{\infty}\frac{1}{(k+a)^{s}},\ \Re s>1,\
-a\not\in\mathbb{N};\ \ \textbf{9.521}(1)\,$
In particular, for $s=2$, we have $\zeta$ function as a Laplace transform:
$\zeta(2,a)=\int_{0}^{\infty}\frac{x}{1-e^{-x}}e^{-ax}dx,\ \mbox{for}\ \Re
a>0;\ $ $\mbox{because}\ \ \sum_{k=0}^{\infty}xe^{-kx}=\frac{x}{1-e^{-x}};\ \
\mbox{and}\ \mathfrak{L}[xe^{-kx},w]=\frac{1}{(w+k)^{2}}.$
Polylogarithm functions $Li_{n}(z)$ ($n$ is a fixed parameter) are defined
via the series
$Li_{n}(z):=\sum_{k=1}^{\infty}\frac{z^{k}}{k^{n}}\equiv z\Phi(z,n,1),\
z\in\mathbb{C};\ \ \mbox{($\Phi$ is the Lerch function)},\ \textbf{9.550}$
In particular, $\frac{d}{dz}Li_{2}(z)=-z^{-1}\log(1-z);\ \
Li_{2}(i)-Li_{2}(-i)=2iC,$ where $C$ stands for a Catalan constant.
(D) For the equality $i\log\frac{1-ie^{x}}{1+ie^{x}}=2\arctan(e^{x}),\
x\in\mathbb{R},$ note first that for $x=0$ we indeed have
$i\log\frac{1-i}{1+i}=\pi/2$, and second, note the equality of derivatives
$\frac{d}{dx}(i\log\frac{1-ie^{x}}{1+ie^{x}})=\frac{d}{dx}(2\arctan(e^{x}))$.
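The dilogarithm facts quoted in (C) can also be checked numerically; the sketch below (illustrative only, assuming the mpmath library) verifies $Li_{2}(i)-Li_{2}(-i)=2iC$ and the derivative formula at a sample point.

```python
# Numerical check of the dilogarithm facts quoted in Appendix (C).
import mpmath as mp

mp.mp.dps = 30

# Li_2(i) - Li_2(-i) should equal 2*i*C, with C the Catalan constant.
diff = mp.polylog(2, 1j) - mp.polylog(2, -1j)
print(diff, 2j * mp.catalan)

# d/dz Li_2(z) = -log(1 - z)/z, checked at a sample point inside the unit disk.
z0 = mp.mpc('0.3', '0.2')
numeric = mp.diff(lambda z: mp.polylog(2, z), z0)
print(numeric, -mp.log(1 - z0) / z0)
```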
REFERENCES.
[1] N.I. Akhiezer (1965), _The classical moment problem_ , Oliver $\&$ Boyd,
Edinburgh and London.
[2] L. Bondesson (1992), _Generalized gamma convolutions and related_
_classes of distributions and densities_ , Springer-Verlag, New York; Lecture
Notes in Statistics, vol. 76.
[3] I.S. Gradshteyn and I.M. Ryzhik (1994), _Tables of Integrals, Series,_
_and Products_ , Academic Press, New York.
[4] L. Jankowski and Z.J. Jurek (2012), Remarks on restricted Nevanlinna
transforms, _Demonstratio Math._ , vol. XLV, no. 2, pp. 297-307.
[5] Z.J. Jurek and J.D. Mason (1993), _Operator-limit distributions in probability
theory_ , J. Wiley, New York.
[6] Z.J. Jurek (1997), Selfdecomposability: an exception or a rule?, _Ann._
_Universitatis M. Curie-Sklodowska, Lublin-Polonia_ , vol. LI.1, Sectio A.
[7] Z.J. Jurek (2019), On a relation between classical and free infinitely
divisible transforms, _Probab. Math. Statist._ , to appear.
[Also: arXiv:1707.02540 [math.PR], 9 July 2017.]
[8] Z.J. Jurek and W. Vervaat (1983), An integral representation for
selfdecomposable Banach space valued random variables, _Z. Wahrscheinlichkeitstheorie_
_verw. Gebiete_ , vol. 62, pp. 247-262.
[9] Z.J. Jurek and M. Yor (2004), Selfdecomposable laws associated with
hyperbolic functions, _Probab. Math. Statist._ , vol. 24, Fasc. 1, pp. 181-190.
[10] S. Lang (1975), _$SL_{2}(\mathbb{R})$_ , Addison-Wesley, Reading,
Massachusetts.
[11] J. Pitman and M. Yor (2003), Infinitely divisible laws associated with
hyperbolic functions, _Canad. J. Math._ 55 (2), pp. 292-330.
Author’s address:
Institute of Mathematics, University of Wrocław, Pl. Grunwaldzki 2/4,
50-384 Wrocław, Poland;
Email<EMAIL_ADDRESS>; www.math.uni.wroc.pl/$\sim$zjjurek
# MEV in fixed gas price blockchains: Terra Classic as a case of study
Facundo Carrillo CONICET-Universidad de Buenos Aires, Instituto de
Investigación en Ciencias de la Computación (ICC), Buenos Aires, Argentina
Elaine Hu Flashbots
###### Abstract
Maximum extractable value (MEV) has been extensively studied. In most papers,
the researchers have worked with the Ethereum blockchain almost exclusively.
Even though Ethereum and other blockchains have dynamic gas prices, this is
not the case for all blockchains; many of them have fixed gas prices.
Extending the research to other blockchains with fixed gas prices could broaden
the scope of the existing studies on MEV. To our knowledge, there is no thorough
understanding of MEV in fixed gas price blockchains. Therefore, we
propose to study Terra Classic as an example to understand how MEV activities
affect blockchains with fixed gas price. We first analysed the data from Terra
Classic before the UST de-peg event in May 2022 and described the nature of
the exploited arbitrage opportunities. We found more than 188K successful
arbitrages, and most of them used UST as the initial token. The capital to
perform the arbitrage was less than 1K UST in 50% of the cases, and 80% of the
arbitrages had less than four swaps. Then, we explored the characteristics
that attribute to higher MEV. We found that searchers who use more complex
mechanisms, i.e. different contracts and accounts, made higher profits.
Finally, we concluded that the most profitable searchers used a strategy of
running bots in a multi-instance environment, i.e. running bots with different
virtual machines. We measured the importance of the geographic distribution of
the virtual machines that run the bots. We found that having good geographic
coverage makes the difference between winning or losing the arbitrage
opportunities. That is because, unlike MEV extraction in Ethereum, bots in
fixed gas price blockchains are not battling a gas war; they are fighting in a
latency war.
## 1 Introduction
MEV has been extensively studied, from mathematical models to data analysis
applications [1, 2, 3, 4]. In most studies, researchers have worked with the
Ethereum blockchain almost exclusively, so extending the studies to other
blockchains could help broaden the scope of MEV research. Understandably, one
of the most important aspects of MEV is gas bidding.
In Ethereum, the miners include transactions from the mempool and propose a
block with a subset of transactions in an arbitrary order. However, this
arbitrary order is not random; many miners use the gas price to sort transactions and
create new blocks, since this strategy maximises their rewards. This incentive
increases the gas price to a point that makes the network almost impossible to
use for normal users.
Although this incentive is common in Ethereum, it is not present in all
blockchains. Many blockchains based on Tendermint don’t take dynamic gas
prices into account to prioritise transactions. They use a fixed gas price and
process transactions in the same order in which the validators receive them
111 https://docs.tendermint.com/v0.34/tendermint-core/mempool.html . This
difference could heavily impact the role and behaviour of MEV searchers. In
blockchains with fixed gas prices, the only resource that searchers have is to
recognize an MEV opportunity before other actors and respond quickly. This
contrast could have enormous repercussions on the opportunities that MEV
searchers choose to pursue. First, non-privileged users would have no way of
positioning one transaction ahead of another once the first one has already been
broadcast. This restriction would forbid extracting value through front-running
and sandwich strategies, leaving only back-running as a resource for searchers.
Even with only this strategy available, searchers can still perform arbitrages.
Arbitrage is a particular way to extract value. It is a concept broadly used
in traditional finance that has presented a great opportunity in blockchains,
particularly since decentralised exchanges have become so popular. Arbitrage
consists of taking advantage of a difference in prices in two or more markets.
Users can buy an asset in one market at price A and sell it in another market
at price B. This action results in a profit equal to B minus A. This practice
is well established in many economic systems and allows different markets to
stay synchronised with each other. Several studies have reported profits of
hundreds of millions of dollars from arbitrage in Ethereum [5, 6, 7, 8, 9].
The community understands the importance of this practice. Every day more data
allows us to follow the progress in value extraction in different blockchains.
For example, the Flashbots community offers a dashboard that measures MEV
extraction on Ethereum in real-time222https://explore.flashbots.net/, Skip
Protocol provides data for Osmosis 333https://satellite.skip.money/, and
Marlin.Org for Polygon 444https://explore.marlin.org/, among others. Although
all these data allow us to quantify the amount of MEV extracted, they do not
provide the full picture. There isn't a complete understanding of the type of
extraction, the type of pairs exploited, or the cycles involved, among other
aspects. In particular, those sites do not give enough information about searchers'
behaviour and how they maximise profit.
We believe blockchains that don’t use gas to prioritise transactions present
MEV opportunities with their own dynamics. As we have previously mentioned,
even though MEV has been extensively studied (market designs and their
implications [10], MEV in other blockchains [11], among others), to our
knowledge, there is not a full understanding of MEV in blockchains with fixed
gas price mechanisms. Therefore, we propose the following general objective:
To study the MEV opportunities in a blockchain with fixed gas price. To
contribute to this general objective, we propose to study the Terra Classic
blockchain during the period from September 2021 until the de-peg event in May
2022, with the following specific objectives:
* •
Specific Objective 1: Study the characteristics of arbitrages in Terra Classic
in the defined period.
* •
Specific Objective 2: Understanding which strategies searchers perform to
improve their profit.
* •
Specific Objective 3: Understanding the timing-related characteristics of
arbitrage.
* •
Specific Objective 4: Create a dashboard to help with the analysis of this
data.
## 2 Methods
### 2.1 Dataset
Terra Classic exhibited particular and atypical behaviour during and after
the de-peg event in May 2022. Due to this phenomenon, we decided to use data from the
last version of Terra Classic (Columbus-5) before the de-peg event. This
dataset contains 2,801,944 blocks, from block 4734841 (2021-10-01 08:31:52
UTC) to block 7536784 (2022-05-07 00:00:04 UTC).
In order to get the data, we ran a Terra Classic node using the snapshot
available in ChainLayer 555https://quicksync.io/networks/terra.html. This
website offers different blockchain snapshots to speed up synchronisation. We
used the Terra Classic’s code node available on its repository
666https://github.com/terra-money/classic-core.
After we set up the node, we iterated over all the blocks using Terra
Classic’s API to consume data. For every block, we used the Tendermint RPC to
get the block with its transactions coded in base 64. Then we also used the
Terra RPC to get the block result. This entry returns the logs of every
transaction after the block is proposed. Having this information is very
useful as it helps us understand if the transactions succeeded and
characterise its nature. Information on these two APIs is available in the
official documentation 777https://classic-
docs.terra.money/docs/develop/endpoints.html.
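A minimal sketch of this data-collection loop is shown below. It assumes a locally synced Terra Classic node exposing the standard Tendermint RPC (the host, port and block range are placeholders); the actual pipeline in the repository may differ.

```python
# Minimal sketch of the block-download loop described above (illustrative).
import requests

RPC = "http://localhost:26657"  # placeholder: URL of your own synced node

def get_block(height: int) -> dict:
    # Raw block: header plus transactions encoded in base64.
    r = requests.get(f"{RPC}/block", params={"height": height}, timeout=10)
    r.raise_for_status()
    return r.json()["result"]

def get_block_results(height: int) -> dict:
    # Per-transaction execution logs produced after the block is proposed;
    # these are the logs used later to detect arbitrages.
    r = requests.get(f"{RPC}/block_results", params={"height": height}, timeout=10)
    r.raise_for_status()
    return r.json()["result"]

if __name__ == "__main__":
    for height in range(4734841, 4734851):  # a small sample of blocks
        block = get_block(height)
        results = get_block_results(height)
        txs = block["block"]["data"]["txs"] or []
        print(height, len(txs))
```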
### 2.2 Arbitrage identification
We created an algorithm to understand if a successful transaction performed an
arbitrage. To do this, we used the transaction logs. Terra transactions’ logs
have a lot of information. Of all the information available, we used only the
wasm[12] log to understand if a transaction performed an arbitrage. For this,
we define a function that identifies a series of actions. These actions try to
represent a swap between tokens. Typically, these actions are simple swaps,
but sometimes they are more difficult to understand, for example, burning a
token and minting a new one. For this purpose, we define an action with the
following fields: pair address, token in, token out, amount in and amount out.
A transaction then contains a list of actions. With all the information, we
define an entire arbitrage as a list of these actions with the following
constraints:
1. 1.
Successive tokens match
$\forall i\in[0\,...\,|actions|-1)\ \ actions[i].token_{out}=actions[i+1].token_{in}\
$ (1)
Note that $actions[i]$ represents the i-th action, $actions[i+1]$ represents
the following action, and $actions[i-1]$ represents the previous action.
2. 2.
Arbitrage generates profit
$actions[|actions|-1].amount_{out}>actions[0].amount_{in}$ (2)
Note that fees are not taken into account.
3. 3.
Arbitrage starts and ends with the same token
$actions[|actions|-1].token_{out}=actions[0].token_{in}$ (3)
For example, Figure 1 shows the actions list as a table for a particular
transaction.
Figure 1: Example of how the platform shows a list of actions (link to
dashboard).
If we can parse the logs and create a consistent list of actions that complies
with the restrictions, we flag this transaction as a successful arbitrage. In
summary, we deem an arbitrage successful if it generates profit (without
taking into account fees); and the transaction is included in a block; and its
execution ends without being reverted. The code for parsing logs and creating
the list of actions is available in the repository (see Data, Code and
Analysis notebooks).
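A simplified sketch of this check is shown below (illustrative only; the field names follow the definition above, and fees are ignored exactly as in the text; the repository code handles many more cases).

```python
# Illustrative sketch of the arbitrage-flagging rules (1)-(3) described above.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    pair: str          # pair (pool) address
    token_in: str
    token_out: str
    amount_in: float
    amount_out: float

def is_successful_arbitrage(actions: List[Action]) -> bool:
    if len(actions) < 2:
        return False
    # (1) successive tokens match: the output of step i feeds step i+1
    chained = all(actions[i].token_out == actions[i + 1].token_in
                  for i in range(len(actions) - 1))
    # (2) the cycle generates profit (fees are not taken into account)
    profitable = actions[-1].amount_out > actions[0].amount_in
    # (3) the cycle starts and ends with the same token
    cyclic = actions[-1].token_out == actions[0].token_in
    return chained and profitable and cyclic
```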
#### 2.2.1 Inference of arbitrage in failed transactions
Transactions revert for different reasons. In most cases, lack of profit is
the cause. When transactions revert, they don’t produce the same wasm logs we
use to understand if the transactions are arbitrage. Therefore, we developed a
heuristic to flag arbitrage attempts. We flag a reverted transaction as a failed
arbitrage attempt (a simplified sketch follows this list) if:
1. 1.
The sender address has signed at least one successful arbitrage
2. 2.
The contract address has executed at least one successful arbitrage
3. 3.
The execute-message resembles another one produced by the same contract in at
least one successful arbitrage.
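The sketch below is illustrative only: it reads the three criteria as jointly required, and it approximates "resembles" by exact equality of the execute message; the sets of known senders, contracts and messages are assumed to have been built from the successful arbitrages identified earlier.

```python
# Illustrative sketch of the failed-arbitrage heuristic described above.
def looks_like_failed_arbitrage(tx: dict,
                                arb_senders: set,
                                arb_contracts: set,
                                arb_msgs_by_contract: dict) -> bool:
    # Only reverted transactions are candidates for failed arbitrage attempts.
    if not tx.get("reverted", False):
        return False
    sender_known = tx.get("sender") in arb_senders
    contract_known = tx.get("contract") in arb_contracts
    # "Resembles" is simplified here to exact equality of the execute message
    # among messages previously seen for the same contract.
    msg_known = tx.get("execute_msg") in arb_msgs_by_contract.get(tx.get("contract"), set())
    return sender_known and contract_known and msg_known
```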
### 2.3 Dashboard
An interactive dashboard is developed using Django [13] to display the charts
and findings. The dashboard is available at: facuzeta.github.io/frp/dashboard.
#### 2.3.1 Time Analysis
In order to analyse the effect of latency, we need to record transactions in
different geographic locations. For this, we designed an experiment in which
we deployed 84 instances well distributed around the world using AWS cloud
services. We created instances in the following regions: US East (Ohio), US
East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape
Town), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific
(Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific
(Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe
(Paris), Europe (Stockholm), Europe (Zurich), Middle East (Bahrain), Middle
East (United Arab Emirates), South America (São Paulo).
After we deployed the instances, we recorded the transactions that arrived at
the mempool. Every node started with the same list of peers to connect (the
default peer list offered by the official documentation). But these peers are
not necessarily connected with the same set of nodes throughout the
experiment. This is due to the fact that Tendermint P2P layer connects,
disconnects, and asks for new peers depending on different statistics. Besides
the default configuration, we added two constraints : 1) our 84 nodes are not
connected to each other, and 2) Our nodes do not broadcast transactions from
their mempool.
Finally, we measured the operation system time differences among instances to
make sure that they had an error of less than 15 milliseconds.
### 2.4 Data, Code and Analysis notebooks
All the data and code used in this work are available in the following
repositories:
* •
https://github.com/facuzeta/frp-mev-fixed-gas-price-dashboard
* •
https://github.com/facuzeta/frp-mev-fixed-gas-price-analysis
## 3 Results
Blocks in Terra contain different types of transactions. Among them are:
transactions to initialise a contract, transactions to send tokens,
transactions to vote on governance decisions, and transactions to execute
smart contracts, and others. Arbitrages are usually carried out through
transactions that execute smart contract functions. Taking this into account,
we only use Execute-Msg type transactions to analyse the data.
Our dataset contains 2,801,944 blocks, from block 4734841 (2021-10-01 08:31:52
UTC) to block 7536784 (2022-05-07 00:00:04 UTC). These blocks contain more than
117M transactions of any type and more than 37M execute-message transactions.
The number of transactions in blocks varies depending on the transactions
available in the mempool at the time of the block creation. On average, there
are 28.10$\pm$ 22 (mean and standard deviation) transactions per block with a
median of 23. Even though there are blocks with more than 800 transactions,
90% of blocks contain less than 50 transactions. Figure 2 shows the
distribution of the number of transactions with execute contract message per
block.
Figure 2: Distribution of transactions with execute contract message per
block.
### 3.1 Successful arbitrages
We found 188,564 arbitrages that were mined by the validators. These
transactions are distributed in many blocks. However, only 4.48% (125K) of the
blocks presented at least one successful arbitrage.
Figure 3: Distribution of successful arbitrages per block with at least one
successful arbitrage.
In the blocks where there is at least one arbitrage, in 73% of the cases, only
one arbitrage occurs. In 16% of the blocks, two successful arbitrages happen,
and in 5% of the cases, three successful arbitrages occur. That is, in 94% of
the blocks with successful arbitrages, a maximum of 3 arbitrages per block
occur. Figure 3 illustrates the distribution of the number of arbitrages
in the blocks with at least one successful arbitrage. This variability is
probably due to different arbitrage opportunities occurring on certain blocks.
### 3.2 Identifying searchers
Identifying searchers is a difficult task because of the limits of on-chain data
availability. We can only see addresses signing transactions to execute smart
contracts. In the 188K arbitrages, we found 517 different sender addresses and
167 different smart contracts.
Grouping the arbitrages by the sender, only the top 10 generate 40% of the
successful arbitrages. (see Table 1). Grouping by the smart contract, the top
10 contracts used by searchers account for 49% of successful arbitrages (see
Table 2).
Sender | Successful arbitrages | Contracts | %
---|---|---|---
terra1fgef888tuj3g8tmpxp4klvthq4eqk8a2yw7vsw | 27197 | 6 | 14.42
terra1lywrs72lpsptu68usehlae5yq94wsuk2g6fa3m | 10232 | 13 | 5.42
terra1n3s52w97j599sq3k4wcr3j0027j0stnfttnjvg | 7385 | 7 | 3.91
terra1nyud8zzctrks6ltnhhn4q820umnvqg39hc6ydz | 7193 | 2 | 3.81
terra16kd4ucrkj3mu2xpt25075s3jsztkx9ltf3tj9y | 4921 | 1 | 2.60
terra1xjpy3hu7qzun8p3h3cuf4w2v4f98w376mjj74l | 4797 | 13 | 2.54
terra140d3kgus5nxnqextv6fahzncfvp90r7t0s6mjp | 3684 | 5 | 1.95
terra1lxyhvhhdjc3sxk0kjeljtqnxjf7ek6asxxyp7q | 3645 | 3 | 1.93
terra1cxspyydlp4qdu0pzetg9au549n29tnvs4t3kej | 3334 | 2 | 1.76
terra1qfg2exflsshg23gs5w4ml37hwnunzthtx2gpp6 | 3288 | 2 | 1.74
Table 1: Top 10 arbitrage senders Contract | Successful arbitrages | Senders | %
---|---|---|---
terra1p9zmnwqrfpzx0k585r3lhrqhlakmrnar2cljt9 | 17130 | 1 | 9.08
terra1kz93zt3ng7kags09qy06a079wjl3qpzfp0axsn | 14195 | 8 | 7.52
terra1nuzl3sppu0suws9zml5psu8tmuqk5dnr88n8kc | 10664 | 8 | 5.65
terra10j88evvssl4m0ztj2hcelf993j5a0xd3ezd7se | 10314 | 80 | 5.46
terra1myl5pk2a7qj37yu6dgmy68rxkznmq9nrk69nrj | 8367 | 50 | 4.43
terra1wswy9763nhugvphchcdc438r3ejvskjjz5h0rs | 7731 | 8 | 4.09
terra139y02s9urkkyesukndrqdjmqj7gkk5dltd05v8 | 7060 | 8 | 3.74
terra19rltg2vaurffa25cvtm5zey9muhaxflfxt5e2p | 6152 | 1 | 3.26
terra1egqmcaupc87sdg6sqmzypuv06ykxkek3j7s4w5 | 5927 | 1 | 3.14
terra1sg877xutwmyvsz45jrlwcdhx07jac33433499e | 5634 | 3 | 2.98
Table 2: Top 10 arbitrage contracts
A contract is not necessarily used by a single address and an address does not
necessarily use a single smart contract to carry out arbitrages. In fact, only
55% of the contracts are used by a single sender, while 61% of the senders use
a single contract.
Given this, we proposed a method to infer searchers. The method consists of
grouping sender addresses and smart contracts related to each other. We
defined that an address is related to a smart contract if the sender signed a
transaction that executed the smart contract. Then, we extended this using a
transitive property.
To compute this, we created a graph where each node represented a sender
address or a smart contract. Then, the two nodes had a common edge if there
was a transaction calling the smart contract signed by the address. Next, we
executed the analysis of connected components. Every connected component was a
searcher, and the addresses and smart contracts belonged to it.
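A sketch of this grouping step using the networkx library is shown below (illustrative; the repository implementation may differ in details).

```python
# Illustrative sketch: infer searchers as connected components of the
# sender/contract graph described above.
import networkx as nx

def infer_searchers(arbitrages):
    """arbitrages: iterable of (sender_address, contract_address) pairs."""
    graph = nx.Graph()
    for sender, contract in arbitrages:
        # Prefix node ids so a sender and a contract can never collide.
        graph.add_edge(("sender", sender), ("contract", contract))
    # Every connected component corresponds to one inferred searcher.
    return list(nx.connected_components(graph))
```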
Figure 4 shows the graph and the inferred searchers.
Figure 4: In this graph, each node represents a sender address or a smart
contract. Nodes have a common edge if there is a transaction calling the smart
contract and it is signed by the sender’s address. The largest nine connected
components are coloured. The rest of the connected components are grey. The
size of the node represents the node degree. The distance between nodes does
not add extra information.
Our method identified 56 connected components or searchers. 50% of them used
only one smart contract, and 57% of the searchers used only one address to
sign transactions. Figure 5 shows the distribution of contracts and senders by
searchers.
Figure 5: The left panel shows the histogram of the number of contracts per
searcher. The right panel shows the distribution of the number of sender
addresses per searcher.
There was a positive correlation (Pearson rho=0.3487, p-value=0.0084) between
the number of contracts and addresses that searchers used. The number of
contracts and addresses could describe the complexity of searchers’
strategies. Some searchers used only one smart contract and one wallet to sign
transactions. More sophisticated searchers could have multiple contracts and
wallets or sender addresses. Figure 6 shows this correlation.
Figure 6: Scatter plot of the number of different senders’ addresses and
contracts by searchers. The colour code follows the one presented in Figure 4.
### 3.3 Other characteristics of arbitrage
#### 3.3.1 Token In
Arbitrages start with different tokens. In our dataset, 87% (165K) of the
arbitrages started with UST. This seems reasonable since UST was the biggest
stable coin in Terra Classic before the de-peg event. Following UST, 10% (19K)
of the arbitrages started with LUNA, the native token of Terra Classic. Figure
7 shows the distribution of the token-in.
Figure 7: Number of arbitrages that start with different tokens.
#### 3.3.2 Arbitrage Amount in
Arbitrages had a wide range of starting amounts: those that started with the UST token
show a median of 998 UST, and the 25th percentile is 260 UST. For arbitrages that
started with LUNA, the median is 88 LUNA and the 25th percentile is 20 LUNA.
Figure 8 shows these two distributions.
Figure 8: The left panel shows the distribution of the number of arbitrages
that start with UST. The right panel shows the distribution of the number of
arbitrages that starts with LUNA.
#### 3.3.3 Path length
Arbitrage path lengths vary. 80% of the arbitrages use 2 or 3
swaps, and paths with more than five swaps are present in only 0.4% of
the arbitrages. Increasing the length of the cycles also increases the cost of
gas, which reduces profit, so it is reasonable that the length of the cycles
is mostly 2 or 3.
Figure 9: Percentage of arbitrages according to the cycle length.
#### 3.3.4 Pairs
Arbitrages used 300 different pairs/pools. However, the 10 most used pairs
cover 38% of the swaps; the 50 most used pairs cover 82% and the 100 most used
pairs cover 95%. The top 10 most used pairs are shown in Table 3.
Pair | Rate | Token A | Token B
---|---|---|---
terra1m6ywlgn6wrjuagcmmezzz2a029gtldhey5k552 | 0.07316 | UST | LUNA
terra1tndcaqxkpc5ce9qee5ggqf430mr2z3pefe5wj6 | 0.07297 | UST | LUNA
terra106a00unep7pvwvcck4wylt4fffjhgkf9a0u6eu | 0.04285 | UST | LOOPC
terra1jxazgm67et0ce260kvrpfv50acuushpjsz2y0p | 0.03370 | bLUNA | LUNA
terra163pkeeuwxzr0yhndf8xd2jprm9hrtk59xf7nqf | 0.03260 | UST | PSI
terra1cda4adzngjzcn8quvfu2229s8tedl5t306352x | 0.03056 | bLUNA | nLUNA
terra1c0afrdc5253tkp5wt7rxhuj42xwyf2lcre0s7c | 0.02925 | UST | bETH
terra1sgu6yca6yjk0a34l86u6ju4apjcd6refwuhgzv | 0.02562 | UST | LUNA
terra1v5ct2tuhfqd0tf8z0wwengh4fg77kaczgf6gtx | 0.02375 | UST | PSI
terra1j66jatn3k50hjtg2xemnjm8s7y8dws9xqa5y8w | 0.02339 | bLUNA | LUNA
Table 3: Top 10 most used pairs
It is worth mentioning that some of the top 10 pairs allow swapping the same
pair of tokens. This duplication is because different DEXes offer to swap the
same pair of tokens.
### 3.4 Profit
The distribution of arbitrages by searchers is very irregular. Some searchers
had good performance while others did not get high profits. 78% of searchers
had ten or more successful arbitrages, 59% had 100 or more successful
arbitrages, and only 35% had more than 1000 successful arbitrages.
As mentioned before, the token-in and the token of the profit can vary. Since
97% of arbitrages used UST or LUNA to start the arbitrage, in this section, we
focus only on the arbitrages that used these tokens as token-in.
For those using UST as a token-in, the total profit was 16.24M UST and for
LUNA it was 20,569. The value of LUNA can be volatile, unlike stablecoins. In
the time range of our dataset, the LUNA price ranged from 35 to 100 UST. Using
100 UST as the upper bound, we can bound this profit from above by roughly 2M UST
(although the actual value was likely lower).
One way to compare searchers’ performance is to use the profit rate. We
defined the profit rate as the percentage of profit out of the initial value
of the arbitrage - a ratio between profit and the starting value of the
arbitrage. For example, if an arbitration starts with 105 UST and 115 UST is
obtained, the profit rate is (115-105)/105=0.095. Taking this normalised value
into account, 99% of arbitrages have a profit rate of less than 1. Besides the
99% of the sample, there were arbitrages with outlier values. For example,
this transaction
$7ad339227b991c402efaa4c8e02cd7af3c76c5d820d1234cbb759f5f9968aa1e$
888https://facuzeta.github.io/frp/dashboard/7ad339227b991c402efaa4c8e02cd7af3c76c5d820d1234cbb759f5f9968aa1e/
reached a profit rate higher than 2000.
Discarding profit rate values greater than 1, Figure 10 shows the distribution
of this measure.
Figure 10: Distribution of Profit Rate for arbitrages with Profit rate less
or equal to 1.
Grouping by searchers, we saw that the profit is not uniformly distributed.
The three searchers with the highest profit cover more than 75% of the MEV.
Various characteristics can explain why some searchers have more profit than
others. For example, the number of arbitrages they conducted, the number of
instances they used to run the bots, and the number of contracts they used,
among others.
For arbitrages beginning with UST, the correlation between the profit and the
number of arbitrages was positive and statistically significant (Pearson
correlation rho=0.8987, p-value$<10^{-18}$). There were also significant and
positive correlations between the profit and the number of different contracts
(rho=0.6945, p-value$<10^{-7}$) and between the profit and the number of senders
(rho=0.5615, p-value$<10^{-5}$). It is relevant to
clarify that there was also a significant positive correlation between the
number of senders and the number of arbitrages, and also between the number of
contracts and the number of arbitrages. Therefore, the previous correlations
might only be the product of this phenomenon.
For arbitrages beginning with LUNA, the correlation between the profit and the
number of arbitrages is also positive and significant (Pearson correlation
rho=0.8551, p-value=0.0016). Correlations between the profit and the number of
different contracts (rho=0.6903, p-value=0.0271) and between the profit and the
number of senders (rho=0.6792, p-value=0.0307) are also significant and
positive. In this case, there is a significant positive
correlation between the number of arbitrages and the number of senders.
However, there is no correlation between the number of different contracts and
the number of arbitrages.
This information is not enough to explain everything, for example, whether
using more contracts promotes a higher profit. This question cannot be
answered because profit could also be affected by the number of arbitrages.
To identify which factors contribute to higher profits, it is useful to have a
history of the arbitrage attempts that did not work. In the next section, we
analyse this factor.
### 3.5 Failed arbitrages
Similar to other blockchains, arbitrages that fail or revert in Terra Classic
happened for various reasons. It could be due to estimation errors, or other
reasons that caused the swaps to not land on chain. But most likely the
transactions revert because another transaction won the opportunity and the
profit was already extracted.
In the section Inference of arbitrage in failed transactions, we introduced a
way to estimate reverted arbitrages. The number of arbitrages that reverted
was higher than the number that succeeded. There were 188,564 successful arbitrages and
670,258 failed ones. In other words, for each successful arbitrage, there were
3.55 that failed on average.
Considering only executing message-type transactions, successful arbitrages
represented 5% of these transactions and failed arbitrages represented 19%.
This difference generates negative effects for the network because, although
these transactions do not increase the gas price as in other chains, they fill the blocks
with transactions that revert and increase the size of the mempool, resulting
in network congestion. Analysing it by blocks, the number of failed
arbitrages correlates with the number of successful arbitrages (rho = 0.4551,
p$<10^{-100}$). The amount of gas used for successful arbitrages was 445
billion, and 1357 billion for failed arbitrages, which is approximately three
times the gas used for successful arbitrages.
In the following subsection, we use success rate to help understand the
different aspects that determine searchers’ profits.
### 3.6 Success rate
We defined the success rate of a searcher as the number of successful
arbitrages divided by its total number of arbitrage attempts. This measure
characterises searchers from a different angle. Then, we assessed whether there is a
correlation between the success rate and the total profit using the Spearman
correlation. We filtered out searchers with very few successful arbitrages
by experimenting with different minimum numbers of successful arbitrages as a
threshold. In all cases, we obtained a negative correlation between the success
rate and the profit. This result means that searchers with a lower success rate
had more profit. Table 4 summarises the correlations.
Threshold (min. successful arbitrages) | Percentile represented by threshold | Number of searchers after threshold | Rho | P-value
---|---|---|---|---
10 | 23 | 40 | -0.4869 | 0.0014
50 | 38 | 32 | -0.5821 | 0.0005
100 | 44 | 29 | -0.6084 | 0.0005
250 | 46 | 28 | -0.6349 | 0.0003
500 | 53 | 24 | -0.5983 | 0.0020
750 | 55 | 23 | -0.5652 | 0.0049
Table 4: Result of correlations for different thresholds
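A sketch of how the correlations in Table 4 can be recomputed is shown below (illustrative; it assumes a per-searcher table with columns n_successful, n_failed and profit, which are hypothetical names rather than the repository's actual schema).

```python
# Illustrative sketch: Spearman correlation between success rate and profit
# for several minimum-successful-arbitrage thresholds, as in Table 4.
import pandas as pd
from scipy.stats import spearmanr

def success_rate_vs_profit(searchers: pd.DataFrame) -> pd.DataFrame:
    searchers = searchers.copy()
    searchers["success_rate"] = searchers["n_successful"] / (
        searchers["n_successful"] + searchers["n_failed"])
    rows = []
    for threshold in (10, 50, 100, 250, 500, 750):
        subset = searchers[searchers["n_successful"] >= threshold]
        rho, p_value = spearmanr(subset["success_rate"], subset["profit"])
        rows.append({"threshold": threshold, "n_searchers": len(subset),
                     "rho": rho, "p_value": p_value})
    return pd.DataFrame(rows)
```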
Figure 11 shows, as an example, the scatter plot of the success rate against
the profit for a threshold of 50 successful arbitrages.
Figure 11: Correlation between success rate and profit by searchers. The
colour code follows the one presented in Figure 4.
We have already established a negative correlation between profit and the
success rate. Those searchers who have a high success rate are not the ones
who have a higher profit. The hypothesis behind this phenomenon is that
searchers that have many transactions with reverted arbitrages are running
bots in multiple instances. They probably have several instances running the
same code in different virtual machines. Every instance responds with the same
attempt to arbitrage, but one arrives before the others, causing the second
and the following to revert.
Multiple running instances could be an effect of the lack of variable gas
prices for this type of blockchain. The only mechanism that searchers have to
arrive earlier is by reducing the response time. Searchers achieved this by
having many instances competing to receive the transaction that generated the
opportunity earlier. In the following subsection, we test this hypothesis.
### 3.7 Searchers with multiple running instances
It was impossible to determine if a searcher was running on multiple instances
using exclusively on-chain data. However, we can address this problem by
studying the reverted transactions and trying to understand, in each case, if
they were reverted because another transaction from the same searcher with the
same execution message arrived first. If we have many of these transactions,
we can assume that this is an effect of the same searcher computing the
opportunity to generate a transaction and send it from many instances that are
running concurrently.
To test this hypothesis, we built the repeated transaction rate index. This
rate measures, on average, how many times a searcher repeated the same transaction
(in terms of the same execute message) within the same block. A high value of
this measure indicates that a searcher had more than one transaction with the
same execute message in the same block. For searchers with at least 50
successful arbitrages, the mean and standard deviation of this measure were
1.3125$\pm$0.5574.
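A sketch of this index is shown below (illustrative; it assumes a per-transaction table with columns searcher, block_height and execute_msg_hash, which are hypothetical names).

```python
# Illustrative sketch of the repeated transaction rate index: for each
# searcher, the average number of its transactions carrying the same
# execute message within the same block.
import pandas as pd

def repeated_transaction_rate(txs: pd.DataFrame) -> pd.Series:
    copies = (txs.groupby(["searcher", "block_height", "execute_msg_hash"])
                 .size()
                 .rename("copies")
                 .reset_index())
    # Average number of copies of the same message per block, by searcher.
    return copies.groupby("searcher")["copies"].mean()
```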
We also studied the correlation between the rate of repeated transactions and
the profit. We found a positive and significant correlation (Spearman
correlation rho=0.6022, p-value=0.0011). Figure 12 shows the scatter plot of
this correlation.
Figure 12: Correlation between repeated transaction rate and profit by
searchers. The colour code follows the one presented in Figure 4.
This positive and significant correlation indicates that searchers that, on
average, sent more transactions with the same execute message per block,
obtained a higher profit. This is probably explained by the concurrent
execution of instances running the same code.
In the next section, we look at the timing and how running multiple instances
can make bots see transactions earlier.
### 3.8 Time analysis
In the previous section, we showed that bots with higher profits had a higher
repeated transaction rate. This effect is probably due to the searchers having
more than one instance running the arbitrage bots. The motivation behind this
is strictly related to the nature of the fixed gas price blockchains. The only
recourse bots have to win an opportunity is to be the fastest.
We set up the experiment detailed in the Time Analysis Section and measured
how long it took for the 84 instances in different geographic locations
(running a Terra Classic node) to receive the 431K transactions. For each
transaction, we measured the time elapsed relative to the first region that
received the transaction. This analysis generated more than 36M different
times. Figure 13 shows the non-uniform distribution of how many times a region
saw a transaction before others.
Figure 13: Distribution of times a region saw transactions before others.
Figure 14 shows the distribution of the latency for transactions that took
less than 7 seconds (98% of the sample).
Figure 14: Distribution of latency in milliseconds of transactions with
latency less than 7 seconds.
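A sketch of the latency computation behind these figures is shown below (illustrative; it assumes a table of first-seen timestamps with columns tx_hash, region and first_seen_ms, which are hypothetical names).

```python
# Illustrative sketch: per-transaction latency of each region relative to the
# first region that saw the transaction.
import pandas as pd

def relative_latencies(seen: pd.DataFrame) -> pd.DataFrame:
    earliest = seen.groupby("tx_hash")["first_seen_ms"].transform("min")
    out = seen.copy()
    out["latency_ms"] = out["first_seen_ms"] - earliest  # 0 for the winning region
    return out

# Example: out.groupby("region")["latency_ms"].median() gives the kind of
# per-region summary used for the heat map discussed below.
```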
Although the sample distribution of all recorded times is informative, it may
be useful to analyse it by region because the effect of geographic proximity
may be relevant. We believe that the physically closer the instances are, the
faster the transaction arrives. Figure 15 shows the median time (in
milliseconds) that it takes to receive transactions taking into account the
time that elapsed since the first region received the transaction. For
example, if Asia Pacific Mumbai received a transaction first, Asia Pacific
Osaka waits 63 milliseconds (in median time) and then receives the
transaction. Asia Pacific Sydney has to wait for 102 milliseconds, and South
America Sao Paulo, 131 milliseconds.
Figure 15: Heat map of median latency in milliseconds. The median time (in
milliseconds) that it takes to receive transactions, taking into account the
time that elapsed since the first region received the transaction. For
example, if Asia Pacific Mumbai received a transaction first, Asia Pacific
Osaka waits 63 milliseconds (in median time) and then receives the
transaction. Asia Pacific Sydney has to wait for 102 milliseconds, and South
America Sao Paulo, 131 milliseconds.
Figure 15 shows the expected geographic grouping. We performed the following
procedure to quantify the effect: for every pair of regions, we correlated the
median latency between them with the distance in kilometres between the cities
hosting the data centres. We found a positive and
significant Pearson correlation (rho=0.7452 and p-value=1.6721e-33). Figure
16 shows this effect.
Figure 16: Correlation between median time latency (in milliseconds) and the
distance between regions (in Km).
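A sketch of this quantification is shown below (illustrative; the data-centre coordinates and the per-pair median latencies are assumed to be available, and the great-circle distance is computed with the haversine formula).

```python
# Illustrative sketch: correlate pairwise median latency with the great-circle
# distance between the cities hosting the two data centres.
import math
from itertools import combinations
from scipy.stats import pearsonr

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def distance_latency_correlation(coords, median_latency_ms):
    # coords: region -> (lat, lon); median_latency_ms: (region_a, region_b) -> ms
    distances, latencies = [], []
    for a, b in combinations(sorted(coords), 2):
        if (a, b) in median_latency_ms:
            distances.append(haversine_km(*coords[a], *coords[b]))
            latencies.append(median_latency_ms[(a, b)])
    return pearsonr(distances, latencies)  # (rho, p-value)
```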
We have already established that the time to receive a transaction is related
to geographical distances and these values range from 30 to 100 milliseconds
in most cases. These characteristics imply that the extra time a bot needs to
see a transaction when it is not in the region where the transaction originated
is relevant, and probably up to an order of magnitude larger than what it may
take to compute a solution for a possible arbitrage. Therefore, if another
searcher had a bot in the same region where the transaction that generates the
arbitrage opportunity originated, it had an advantage of at least 30
milliseconds. This difference suggests that it was relevant for searchers to
have many instances running in different regions of the world so that they
were always close to the node that first received the transaction that created
the opportunity. It is important to note that the time synchronisation error
in different instances is less than 10 milliseconds.
## 4 Discussion
In our work, we studied Terra Classic as an example of a blockchain with a
fixed gas price. We used Terra as a case study because of its enormous data
volume and the number of decentralised exchanges it has.
We have four specific objectives: Specific Objective 1: Study the
characteristics of arbitrages in Terra Classic in the defined period. Specific
Objective 2: Understanding which strategies improve searchers’ profit.
Specific Objective 3: Understanding the timing-related characteristics of the
arbitrage. Specific Objective 4: Create a dashboard to help with the analysis.
Regarding SO1: In the studied time period, we found more than 188 thousand
arbitrages, distributed in time proportionally to the network load. Typically,
they started and ended with one of two tokens: UST or LUNA. The distribution of
starting tokens across arbitrages is very uneven, and the length of the arbitrage
cycles is shorter than expected: 80% of the arbitrages had only 2 or 3 swaps, and
they mostly use the most exploited pairs - UST and LUNA. We also found some
extremely profitable arbitrages, with profit rates greater than 2000. This
phenomenon could be due to volatile movements in the token prices at
distressed times, breaking the synchronisation of two markets in an extreme
way.
Beyond the individual quantification of the arbitrages, we created a strategy
to identify searchers. This task allowed us to make a more precise analysis.
We found that a small number of searchers contributed to almost all the
profit.
We then proposed a mechanism to identify failed arbitrages, whose number is much
larger than the number of successful arbitrages: the ratio was 3.55 failed
arbitrages for each successful one. This rate can have negative consequences
on the network. Although the gas price cannot be increased, validators have to
create larger blocks, which increases computation time for all nodes. Also, the
amount of data transmitted is larger, congesting the network and increasing its
costs without much real benefit. This could result in transactions staying in the
mempool for a long time.
Regarding Specific Objective 2: We understood that several characteristics
correlate with high profits. We found that searchers with more complexity
(more senders and more contracts) had a higher profit. This complexity could
indicate that searchers who constantly changed their strategies with different
contracts and frequent improvements eventually generated more profits.
Furthermore, we found that searchers with a lower success rate had a higher profit. At first, this result may sound contradictory, but the fact that searchers sent simultaneous transactions with the same message could mean that they ran many instances from various places and tried to respond with the shortest possible delay. To measure this, we created the repeated-transaction rate index, which allowed us to quantify and corroborate this hypothesis.
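One plausible way to compute such an index per searcher is sketched below; the column names and the exact formula are our own illustration, not the definition used in this work.

```python
# Illustrative sketch of a repeated-transaction rate per searcher.
# The dataframe columns and the formula are assumptions made for this example,
# not the paper's exact definition.
import pandas as pd

txs = pd.DataFrame({
    "searcher": ["A", "A", "A", "B", "B", "C"],
    "msg_hash": ["m1", "m1", "m2", "m3", "m4", "m5"],  # identical hash = same message re-sent
})

# Fraction of each searcher's transactions that duplicate an already-seen message.
repeated_rate = txs.groupby("searcher")["msg_hash"].apply(
    lambda hashes: hashes.duplicated().sum() / len(hashes)
)
print(repeated_rate)  # searcher A re-sent 1 of its 3 transactions -> ~0.33
```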
Regarding Specific Objective 3: We carried out an observational experiment
where we recorded transactions in several instances. After reviewing more than
400K transactions, we found that the physical distance explained the latency
in receiving transactions. That means a node has to wait a time proportional
to its distance to the source node until it receives the transaction.
Regarding Specific Objective 4: we condensed all this analysis into an
interactive platform that allows users to explore the data in a new way. We
believe that making these tools available promotes the testing of new
hypotheses. The platform also allows for data download which could help the
community with further research.
This research study quantifies and characterizes the arbitrage opportunities
in Terra Classic before the de-peg event. We believe it is one of the first to
address the effects of arbitrages on a fixed gas price blockchain.
In some papers [14], authors propose that MEV on blockchains like Ethereum is
a crypto gas war because bots play a game of finding the right gas price. In
the case of blockchains with fixed price gas, we believe that bots are
competing in a different type of war: the bot latency war.
### 4.1 Limitations
Our work studied the effects of fixed gas prices on a single blockchain during
a particular time. We found features that maximize profit for searchers but
these features are not necessarily the same for all blockchains of this type.
For future studies, we could test these hypotheses in other blockchains to
understand if the same features apply.
Regarding the time experiment, we believe that two improvements could lead to more robust results. First, our experiment was purely observational; we could instead study timing by sending transactions ourselves, in order to know the origin of each transaction precisely. Secondly, although we controlled the synchronisation among the different nodes to within an error of less than 10 milliseconds, this error could be reduced further to make our findings more accurate.
## Acknowledgement
We thank Dr Enzo Tagliazucchi for providing the infrastructure to run the
analysis of this work. We thank the Flashbot team for supporting this project.
## Funding
This research was supported by the Flashbot Research Program.
## References
* [1] Kaihua Qin, Liyi Zhou, and Arthur Gervais. Quantifying blockchain extractable value: How dark is the forest? In 2022 IEEE Symposium on Security and Privacy (SP), pages 198–214. IEEE, 2022.
* [2] Ben Weintraub, Christof Ferreira Torres, Cristina Nita-Rotaru, and Radu State. A flash (bot) in the pan: measuring maximal extractable value in private pools. In Proceedings of the 22nd ACM Internet Measurement Conference, pages 458–471, 2022.
  * [3] Alex Obadia. Flashbots: Frontrunning the MEV Crisis. https://medium.com/flashbots/frontrunning-the-mev-crisis-40629a613752, 2022.
* [4] Philip Daian, Steven Goldfeder, Tyler Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, and Ari Juels. Flash boys 2.0: Frontrunning in decentralized exchanges, miner extractable value, and consensus instability. In 2020 IEEE Symposium on Security and Privacy (SP), pages 910–927. IEEE, 2020.
* [5] Ye Wang, Yan Chen, Haotian Wu, Liyi Zhou, Shuiguang Deng, and Roger Wattenhofer. Cyclic arbitrage in decentralized exchanges. In Companion Proceedings of the Web Conference 2022, pages 12–19, 2022.
* [6] Igor Makarov and Antoinette Schoar. Trading and arbitrage in cryptocurrency markets. Journal of Financial Economics, 135(2):293–319, 2020.
* [7] Hai Jin, Chenchen Li, Jiang Xiao, Teng Zhang, Xiaohai Dai, and Bo Li. Detecting arbitrage on ethereum through feature fusion and positive-unlabeled learning. IEEE Journal on Selected Areas in Communications, 40(12):3660–3671, 2022.
* [8] Magnus Hansson. Arbitrage in crypto markets: An analysis of primary ethereum blockchain data. Available at SSRN 4278272, 2022.
* [9] Liyi Zhou, Kaihua Qin, Antoine Cully, Benjamin Livshits, and Arthur Gervais. On the just-in-time discovery of profit-generating transactions in defi protocols. In 2021 IEEE Symposium on Security and Privacy (SP), pages 919–936. IEEE, 2021.
* [10] Flashbots. Why your blockchain needs an MEV solution. https://www.youtube.com/watch?v=sYFuFLe9kp0, 2022.
* [11] Alex Obadia. A brief Survey of MEV on Ethereum, BSC, Avalanche and Polygon in 2021 . https://www.youtube.com/watch?v=OYE9uAf_v18, 2022.
* [12] Andreas Haas, Andreas Rossberg, Derek L Schuff, Ben L Titzer, Michael Holman, Dan Gohman, Luke Wagner, Alon Zakai, and JF Bastien. Bringing the web up to speed with webassembly. In Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 185–200, 2017.
* [13] Jeff Forcier, Paul Bissex, and Wesley J Chun. Python web development with Django. Addison-Wesley Professional, 2008.
* [14] Kyungchan Ko, Taeyeol Jeong, Jongsoo Woo, and James Won-Ki Hong. An analysis of crypto gas wars in ethereum. In 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS), pages 1–6. IEEE, 2022.
The upcoming spectroscopic powerhouses at the Isaac Newton Group of Telescopes
Balcells, M.1,2,3
1 Isaac Newton Group, 38700 Santa Cruz de La Palma, Spain
2 Instituto de Astrofísica de Canarias, 38200 La Laguna, Tenerife, Spain
3 Universidad de La Laguna, 38200 La Laguna, Tenerife, Spain
## Abstract
The Isaac Newton Group of Telescopes is completing a strategic change for the
scientific use of its two telescopes, the 4.2-m William Herschel Telescope
(WHT) and the 2.5-m Isaac Newton Telescope (INT). After more than 30 years
operating as multi-purpose telescopes, both will soon complete their shift to nearly single-instrument operation dominated by large surveys.
At the WHT, the WEAVE multi-fibre spectrograph is being commissioned in late 2022. Science surveys are expected to launch in 2023. 30% of the available time will be offered as open time. For the INT, construction of HARPS-3, a
high-resolution ultra-stable spectrograph for extra-solar planet studies, is
underway, with deployment planned for late 2024. The INT itself is being
modernised and will operate as a robotic telescope. An average of 40% of the
time will be offered as open time.
The ING will maintain its student programme. Plans call for moving student
work from the INT to the WHT once the INT starts operating robotically.
## 1 Introduction
Since the mid 1980s, the Isaac Newton Group (ING; www.ing.iac.es) has operated the 4.2-m William Herschel Telescope (WHT) and the 2.5-m Isaac Newton Telescope (INT) at the Observatorio Roque de los Muchachos on the Canarian
island of La Palma. The telescopes have provided front-line multi-instrument
observing capabilities to the ING astronomical communities of the UK, Spain
and the Netherlands. Instrumentation comprised facility instruments (most
recently ISIS, LIRIS, ACAM, PF, AF2, WYFFOS, INGRID, LDSS, NAOMI, OASIS,
TAURUS-II, WFC, IDS), as well as powerful visiting instruments for science or
for technology demonstration (AOLI, CANARY, DIPOL, GASP, GHaFaS, HiPERCAM,
iQuEyE, LEXI, PAUCAM, PN.S, SPIFS, CIRPASS, CIRSI, ExPo, INTEGRAL, MAOJCam,
PLANETPOL, PWFS, SAURON, ULTRACAM, Stereo-SCIDAR, Sodium Profiler).
In 2010, following a strategic review and community consultation (https://www.ing.iac.es/about-ING/strategy/), and following the Astronet Roadmap for European Astronomy [5] and the European Telescope Strategy Review Committee 2010 report [15], ING began taking steps to allow the WHT to become a powerful spectroscopic survey facility. This transformation had been announced at previous SEA conferences [3].
This paper provides a much-needed update after the pandemic years, coinciding with the exciting time when the WHT is starting to collect sky light with its new instrumentation.
Figure 1: The WHT prime focus equipped with the WEAVE fibre positioner. Photo
credit: J. Méndez, ING.
## 2 The new WHT
With a strategic view focused on science from massive spectroscopic surveys
doable on a 4-m class telescope, there were clear design drivers for new
instrumentation[1]. The WEAVE instrument (Fig. 1) is the result of that
strategy. Three main elements comprise the new WHT:
Prime-focus corrector.
A new optical corrector delivers a corrected field of view (FOV) of 2 degrees (40 cm) in diameter, with good transmission from 360 nm to 1000 nm.
Fibre positioner.
A pick-and-place system, based on the 2dF system at the AAT, employs two
robots to place up to 960 fibres on the focal plane. In addition to the
standard single-fibre multi-object mode, the instrument comprises 20 fully-
deployable mini-integral-field units (mIFU) of 10 arcsec FOV, as well as a
monolithic integral-field unit (LIFU) with field of view 90 arcsec. A total of
3243 fibres transmit the light from the MOS system, the mIFU system and the
LIFU to the spectrograph.
Spectrograph.
A new two-arm spectrograph provides spectroscopy at resolving power of either
5,000 (LR) or 20,000 (HR). Wavelength coverage at LR is the entire optical
range, 366 – 959 nm.
### 2.1 WEAVE highlights
The WEAVE instrument has been described elsewhere [7, 8, 17, 2]. The definitive description of the as-built instrument, at the time of integration at the telescope and pre-commissioning, is given in [12]. An updated description is maintained at the ING web site (https://www.ing.iac.es/astronomy/instruments/weave/weaveinst.html).
We direct the reader to the above references, and here just note the three
input modes: multi-object mode (MOS), the large-integral field mode (LIFU) and
the mini-integral field mode (mIFU). We discuss WEAVE’s main highlights in the
global context of multi-object spectroscopy instruments.
Table 1: WEAVE highlights.
Parameter | Interest
---|---
MOS lowres $R=5,000$ | $\delta v\sim 3\,\mathrm{km\,s^{-1}}$ @ $V\sim 20$, match to Gaia $v_{\mathrm{T}}$
MOS highres $R=20,000$ | Improved continuum for line strengths
MOS, mIFU FOV $\sim 2\,\mathrm{deg}$ | High multiplex
LIFU highres $R=10,000$ | Resolving vertical vel. disp. of galaxy disks
LIFU FOV $\sim 90\,\mathrm{arcsec}$ | Evolution of PPAK
20 mIFU, FOV $\sim 11\,\mathrm{arcsec}$ | MANGA but on a 4-m, $R=5,000$
End-to-end pipeline | Science-ready data
Offered for surveys and open time | Accommodates both large and small projects
When compared to other MOS instruments on 4-m class telescopes, WEAVE shines
in a number of aspects. The more salient of them are noted in Table 1:
* •
In its default, low-resolution mode, WEAVE’s resolving power $R=5000$ is high among high-multiplex MOS instruments built for galactic and extra-galactic science, and makes WEAVE ideal for Milky Way dynamics and archaeology, as it provides stellar radial velocities with accuracies similar to those of the tangential velocity data from Gaia (a back-of-envelope estimate of this accuracy is given after this list).
* •
In its high-resolution mode, WEAVE’s resolving power $R=20,000$, when combined
with its high multiplex, represents a significant step forward for stellar
line strength determinations, as continua adjacent to the lines are less
affected by instrumental broadening.
* •
The MOS FOV, 2 degree diameter, is unique in its mIFU mode and in the high-
resolution MOS.
* •
The LIFU high-resolution configuration, which delivers R=10,000 spectra, can
resolve the vertical velocity structure of galaxy disks.
* •
The LIFU FOV, which was patterned after the PPAK LIFU [13], is 50% larger, with wider wavelength coverage ($3660$–$9590$ Å), and represents a true next-generation large IFU for the study of extended sources.
* •
The mIFU FOV matches the smaller end of the FOV distribution of the MANGA units [4]. With their wide wavelength coverage and spectral resolutions 2.5 (10) times higher than those of the SDSS-III spectrographs in the low (high) WEAVE resolution, the WEAVE mIFUs will be powerful tools for low-mass galaxies and for compact star-forming regions in our Galaxy.
* •
Data will be delivered fully reduced and containing a range of science-ready
products.
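To make the radial-velocity statement in the first item concrete, a rough, illustrative estimate (ours, not a WEAVE specification) is:

```latex
% Back-of-envelope estimate, not a WEAVE specification:
\[
  \Delta v_{\mathrm{res}} = \frac{c}{R}
  \approx \frac{3\times 10^{5}\ \mathrm{km\,s^{-1}}}{5000}
  = 60\ \mathrm{km\,s^{-1}},
  \qquad
  \delta v \sim \frac{\Delta v_{\mathrm{res}}}{20} \approx 3\ \mathrm{km\,s^{-1}},
\]
% assuming cross-correlation over many stellar lines localises the velocity to
% roughly 1/20 of a resolution element at the signal-to-noise reached for V ~ 20
% stars, consistent with the accuracy quoted in Table 1.
```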
### 2.2 WEAVE surveys
Eight surveys (Table 2) have been approved for execution over 5 years of WEAVE
operation; they will be allocated 70% of the available time on WEAVE. As of
this writing, the most comprehensive description of the surveys is given in
[12]. For updates and contact points for the surveys and for WEAVE science, the reader is directed to the WEAVE web site (https://ingconfluence.ing.iac.es/confluence/display/WEAVEDEV/WEAVE%3A+The+Science).
Table 2: The eight WEAVE surveys.
Code | Title | Code | Title
---|---|---|---
GA | Galactic Archaeology | SCIP | Stellar, Circumstellar and Interstellar Physics
WC | Galaxy Clusters[6] | StePS | Stellar Populations At Intermediate Redshift[11]
WA | WEAVE Apertif [10] | WL | WEAVE LOFAR[16]
WQ | WEAVE QSO[14] | WD | WEAVE White Dwarfs
### 2.3 Using the WHT: surveys and open time
ING is offering time on WEAVE for large surveys as well as through the ING
national time-allocation committees. The International Time programme of the Canarian Observatories (https://www.iac.es/en/observatorios-de-canarias/observing-time/observing-time/international-time-programme) provides an additional means of obtaining up to 15 WEAVE nights per year, in addition to time on any of the other ORM telescopes.
WEAVE is due to start commissioning in the fall of 2022. We anticipate LIFU
science verification observations in early 2023, and aim for starting science
in the middle of 2023.
## 3 The third life of the Isaac Newton Telescope
ING is in the middle of transforming the 2.5-m Isaac Newton Telescope (INT) by installing HARPS3 (https://www.terrahunting.org/harps3.html) [18], an enhanced version of the HARPS and HARPS-N spectrographs aimed at achieving 10 cm s$^{-1}$ radial velocity precision on nearby stars in order to search for Earth-like planets.
HARPS3 differs from its predecessors by a stabilised beam feed and a
polarimetric sub-unit[9] which will provide a powerful tool for characterising
stellar activity. The Terra Hunting Experiment (THE) consortium, P.I. Didier
Queloz, is building HARPS3 and making it available in exchange for $\sim$50%
of every night over 10 years; the remaining time will be offered as open time
through the usual national time allocation channels. The consortium is also
modernising the INT, which will become a robotic telescope. This will be the third incarnation of the venerable INT, after its first installation at Herstmonceux in 1967 and its re-deployment on La Palma in 1982.
With this transformation, we expect the INT will provide the UK, ES and NL
astronomical communities with a much needed tool for extra-solar planet
science.
Current plans call for the robotic INT to be commissioned during the summer of
2024, and for HARPS3 to start scientific operations before the end of 2024.
## 4 Opportunities for students
ING will continue to welcome 4 to 6 students for year-long stays at the ING.
Building on the success of this highly-demanded programme, we are hoping to
increase the number of student positions in the near future.
When the INT closes down for refurbishment and becomes a robotic telescope, students will predominantly work at the WHT, taking part in WEAVE survey and open-time observations. This will open up opportunities for the students to become
familiar with the execution of large projects and to develop expertise in
WEAVE data.
## Acknowledgments
The Isaac Newton Group of Telescopes is operated on behalf of the UK Science and Technology Facilities Council (STFC), the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), and the Instituto de Astrofísica de Canarias (IAC).
WEAVE construction was funded by generous grants from STFC, NWO, the Spanish Science Ministry, the French CNRS, and the Italian INAF. Additional contributions were received from Konkoly Observatory in Hungary, INAOE in Mexico, and PI grants from Lund Observatory, IAP Potsdam, and MPIA Heidelberg.
The HARPS3 instrument is being built by the Terra Hunting Experiment
Consortium led by University of Cambridge Cavendish Laboratory.
## References
* [1] Balcells, M., Benn, C. R., Carter, D., et al. 2010, in Ground-based and Airborne Instrumentation for Astronomy III, Proc. SPIE, 7735, 77357G
* [2] Balcells, M. 2014, EAS Pub. Ser., 67, 227
* [3] Balcells, M. 2015, in Highlights of Spanish Astrophysics VIII, 776
* [4] Bundy, K., Bershady, M. A., Law, M. R., et al. 2015, ApJ, 798, 1
* [5] ASTRONET infrastructure roadmap, 2009, http://www.astronet-eu.org/IMG/pdf/Astronet-Book_light.pdf
* [6] Cornwell, D. J., Kuchner, U., Aragón-Salamanca, A., et al. 2022, MNRAS, 517, 1678
* [7] Dalton, G., Trager, S. C., Abrams, D. C., et al. 2012, in Ground-based and Airborne Instrumentation for Astronomy IV, Proc. SPIE, 8446, 84460P
* [8] Dalton, G., Trager, S. C., Abrams, D. C., et al. 2014, in Ground-based and Airborne Instrumentation for Astronomy V, Proc. SPIE, 9147, 91470L
* [9] Dorval, P., Snik, F., Piskunov, N., et al. 2018, in Ground-based and Airborne Instrumentation for Astronomy VII, Proc. SPIE, 10702, 107026B
* [10] Hess, K. M., Falcón-Barroso, J., Ascasibar, Y., et al. 2020, AAS, 235, 459.03
* [11] Iovino, A., Poggianti, B. M., Mercurio, A., et al. 2023, A&A, accepted
* [12] Jin, S., Trager, S. C., Dalton, G., et al. 2022, MNRAS, accepted, arXiv:2212.03981
* [13] Kelz, A., Verheijen, M. A. W, Roth, M., et al. 2006, PASP, 118, 129
* [14] Kraljic, K., Laigle, C., Pichon, C., et al. 2022, MNRAS, 514, 1359
* [15] Report of the European Telescope Strategy Review Committee, 2010, http://www.astronet-eu.org/IMG/pdf/PlaquetteT2_4m-final-2.pdf
* [16] Smith, D. J. B., Best, P. N., Duncan, K. J., et al. 2016, in SF2A-2016: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, 271
* [17] Terrett, D. L., Lewis I. J., Dalton, G., Abrams, D. C., et al. 2014, SPIE, 9152, 216
* [18] Thompson, S. J., Queloz, D., Baraffe, I., et al. 2016, in Ground-based and Airborne Instrumentation for Astronomy VI, Proc. SPIE, 9908, 99086F
# HELPER-X: A Unified Instructable Embodied Agent to Tackle Four Interactive
Vision-Language Domains with Memory-Augmented Language Models
###### Abstract
Recent research on instructable agents has used memory-augmented Large
Language Models (LLMs) as task planners, a technique that retrieves language-
program examples relevant to the input instruction and uses them as in-context
examples in the LLM prompt to improve the performance of the LLM in inferring
the correct action and task plans. In this technical report, we extend the
capabilities of HELPER, by expanding its memory with a wider array of examples
and prompts, and by integrating additional APIs for asking questions. This
simple expansion of HELPER into a shared memory enables the agent to work
across the domains of executing plans from dialogue, natural language
instruction following, active question asking, and commonsense room
reorganization. We evaluate the agent on four diverse interactive visual-
language embodied agent benchmarks: ALFRED, TEACh, DialFRED, and the Tidy
Task. HELPER-X achieves few-shot, state-of-the-art performance across these
benchmarks using a single agent, without requiring in-domain training, and
remains competitive with agents that have undergone in-domain training.
Gabriel Sarch Sahil Somani∗ Raghav Kapoor∗
Michael J. Tarr Katerina Fragkiadaki
∗equal contribution
Carnegie Mellon University
helper-agent-llm.github.io
## 1 Introduction
A typical way to adapt LLMs to downstream applications is through prompting
(Brown, 2020; Alayrac, 2022; Liu, 2022b; Hongjin, 2022; Mishra, 2022; Wei,
2021; Song, 2022b), exploiting their strong in-context and few-shot learning
abilities. When the amount of in-context examples and task descriptions
necessary to cover the task constraints increases, inference costs
significantly rise due to additional attention operations. To handle
computational challenges and LLM context length, a growing body of research
explores the concept of “memory-augmented prompting” – a method that involves
retrieving a set of pertinent in-context examples to append to the prompt,
thereby broadening their applicability (Perez, 2021; Schick, 2021; Gao, 2020;
Liu, 2022c; Song, 2023; Sarch, 2023; Lewis, 2020; Mao, 2021). HELPER Sarch
(2023) retrieves a set of language-program examples based on the user’s input
instruction and adds them to the prompt to provide contextualized examples for
GPT-4 task planning.
Figure 1: TEACh-tailored HELPER Sarch (2023) demonstrates a 6.9% drop in
success when applied to ALFRED, despite sharing the same action space and
environments, due to variations in language inputs and tasks. HELPER-X
consistently performs well in both domains with one model.
Despite GPT-4’s robust generalization, memory-augmented prompting tailored for
one domain does not guarantee high performance in a similar yet distinct
domain. Applying HELPER, prompted with TEACh-specific examples and
descriptions, to ALFRED – a related domain that shares the same action space
and environments but differs in language inputs and task types – results in a
notable 6.9% decrease in accuracy compared to a HELPER model with a
specialized prompt and customized example memory specifically for ALFRED, and
vice versa when doing the same on TEACh (3.2% decrease), as shown in Figure 1
(right).
We report two simple extensions of HELPER that allow for strong performance across four domains: expanding its memory with a wider array of examples and prompts, and integrating an additional API for asking questions. Specifically, we introduce two HELPER-X variants: HELPER-XP, which
retrieves domain-specific prompt templates and related in-context examples for
the LLM, and HELPER-XS, which retrieves in-context examples from a shared
memory under a single prompt template.
We evaluate HELPER-X across four domains that include dialogue-based task
completion on TEACh (Padmakumar, 2021), following instructions from natural
language on ALFRED (Shridhar, 2020), engaging in instruction following with
active question asking on DialFRED (Gao, 2022), and organizing rooms using
spatial commonsense reasoning in the Tidy Task (Sarch, 2022). HELPER-X demonstrates state-of-the-art performance in the few-shot setting, that is, without in-domain training. Extending the language-program memory does not cause interference and does not hinder performance. In fact, HELPER-X matches, and sometimes even exceeds, the performance of agents prompted with a single domain in mind.
## 2 HELPER-X
We extend HELPER (Sarch, 2023) to work across four domains. We propose two
versions to extend the memory-augmented prompting of LLMs in HELPER: 1)
HELPER-XP that retrieves from a memory of domain-tailored prompt templates and
associated domain-specific examples (Section 2.2.1), and 2) HELPER-XS that
expands the memory of HELPER into a shared memory of in-context examples
across domains combined with a domain-agnostic prompt template (Section
2.2.2). Additionally, we extend the capabilities of HELPER for question
asking, by appending a question API with functions defining possible questions
and their arguments to the LLM prompt (Section 2.2.3). We use HELPER (Sarch,
2023) for execution of the generated program using standard perception
modules.
### 2.1 Background
Here, we give an account of HELPER to make the paper self-contained.
HELPER prompts an LLM, namely GPT-4 (gpt, 2023), to generate plans as Python
programs. It assumes that the agent has access to a set of action skills $S$
(e.g., go_to(X), pickup(X), etc.). HELPER adds these skills to the prompt in
the form of a Python API. The LLM is instructed only to call these pre-defined
skill functions in its generated programs. HELPER considers a key-value memory
of language inputs and successful program pairs. It retrieves a set of in-
context examples relevant to the current input language to add to the prompt
to assist program generation. Each key is encoded into a language embedding.
The top-$k$ language-program pairs are retrieved based on the Euclidean distance of their keys to the encoding of the language input $I$. The HELPER prompt also contains a role description ("You are a helpful assistant with expertise in…"), a task description ("Your task is to …") and guidelines for program generation ("You should only use functions in the API…"), all of which are commonly tailored to the domain of interest.
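A minimal sketch of this kind of key-value example memory follows; the `embed` callable stands in for the text-embedding model and is an assumption of the sketch, not the released HELPER code.

```python
# Minimal sketch of a key-value memory of (instruction, program) pairs with
# top-k retrieval by Euclidean distance. embed() stands in for a text-embedding
# model (e.g., text-embedding-ada-002) and is an assumption of this sketch.
import numpy as np

class ExampleMemory:
    def __init__(self, embed):
        self.embed = embed              # callable: str -> np.ndarray
        self.keys, self.values = [], []

    def add(self, instruction: str, program: str) -> None:
        self.keys.append(self.embed(instruction))
        self.values.append((instruction, program))

    def retrieve(self, instruction: str, k: int = 3):
        query = self.embed(instruction)
        dists = np.linalg.norm(np.stack(self.keys) - query, axis=1)
        return [self.values[i] for i in np.argsort(dists)[:k]]

# The retrieved pairs are pasted into the LLM prompt as in-context examples.
```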
HELPER (Sarch, 2023), using RGB input, estimates depth maps, object masks, and
agent egomotion at each timestep. This facilitates the creation and upkeep of
a 3D occupancy map and an object memory database, essential for obstacle
navigation and object tracking. Object detection in each frame leads to
instance aggregation based on 3D centroid proximity, with each instance
characterized by dynamic state attributes (e.g., cooked, sliced, dirty). When
an action fails, a Vision-Language Model (CLIP (Radford, 2021)) provides
feedback, prompting the LLM to re-plan. For objects not present in the map,
the LLM suggests areas for HELPER to search (e.g., “near the sink").
Figure 2: Illustration of the shared example memory (HELPER-XS; top) and the
prompt retrieval (HELPER-XP; bottom). The memory is shared across domains in
both versions, allowing language and task inputs from any of the domains.
### 2.2 Unified Memory-Augmented Prompting
We explore two ways to expand HELPER to work across four domains, either
through prompt retrieval (Section 2.2.1) or through a shared example memory
(Section 2.2.2).
#### 2.2.1 Prompt Retrieval
Given an input language instruction $I$, the prompt retrieval agent HELPER-XP
retrieves a specialized prompt template $P$ and an associated set of
specialized examples $E$. Each specialized prompt template contains role
descriptions, task instructions and guidelines tailored to each domain,
namely, dialogue-based task completion (based on TEACh (Padmakumar, 2021)),
instruction following from natural language (based on ALFRED (Shridhar,
2020)), instruction following with active question asking (based on DialFRED
(Gao, 2022)), or tidying up rooms (based on Tidy Task (Sarch, 2022)). For
retrieval, a query is generated from the input instruction by encoding the
instruction into an embedding vector using a pre-trained language model
(Greene, 2022). The query retrieves the closest key from memory, where each key is the language encoding of a prompt template and its example text, as shown in Figure 2. The top-$k$ in-context examples are then retrieved from the specialized set of examples associated with the retrieved prompt and added to the retrieved prompt template, and the resulting prompt is used for the LLM’s program generation.
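A sketch of this two-stage retrieval (our illustration, not the released code) could build on the `ExampleMemory` class above; the domain names, template strings, and `embed` function are placeholders.

```python
# Sketch of HELPER-XP style retrieval: pick the nearest domain prompt template,
# then retrieve top-k examples from that domain's memory (placeholders throughout).
import numpy as np

def nearest(query, keys):
    return int(np.argmin(np.linalg.norm(np.stack(keys) - query, axis=1)))

def build_prompt_xp(instruction, domains, embed, k=3):
    """domains: name -> {"template": str with {examples}/{instruction} slots,
                         "key": embedding of the template/example text,
                         "memory": ExampleMemory for that domain}."""
    query = embed(instruction)
    names = list(domains)
    picked = names[nearest(query, [domains[n]["key"] for n in names])]
    examples = domains[picked]["memory"].retrieve(instruction, k=k)
    shots = "\n\n".join(f"Instruction: {i}\nProgram:\n{p}" for i, p in examples)
    return domains[picked]["template"].format(examples=shots, instruction=instruction)
```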
#### 2.2.2 Shared Example Memory
Given an input language instruction $I$, the shared example memory agent
HELPER-XS retrieves a set of in-context examples from a shared memory that
includes in-context examples from all domains considered. These examples are
added to a domain-agnostic prompt template that does not have a specialized
role description, task instructions or guidelines for any single domain. A
query is generated from the input instruction by encoding it into an embedding
vector using a pre-trained language model (Greene, 2022). The keys are the encodings of the language of each in-context example in the shared memory. The query embedding retrieves the top-$k$ nearest-neighbor keys and their values, which are added to the prompt as relevant in-context examples for LLM program generation, as shown in Figure 2.
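The shared-memory variant is simpler: no template selection, just top-$k$ retrieval over one pooled memory. A sketch, again with placeholder template text, follows.

```python
# Sketch of HELPER-XS style retrieval: one shared memory pooled across domains
# and a single domain-agnostic template (the template text is a placeholder).
GENERIC_TEMPLATE = (
    "You are an embodied agent that writes Python programs using the given API.\n"
    "{examples}\n\nInstruction: {instruction}\nProgram:"
)

def build_prompt_xs(instruction, shared_memory, k=3):
    # shared_memory holds (instruction, program) pairs from all four domains.
    examples = shared_memory.retrieve(instruction, k=k)
    shots = "\n\n".join(f"Instruction: {i}\nProgram:\n{p}" for i, p in examples)
    return GENERIC_TEMPLATE.format(examples=shots, instruction=instruction)
```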
#### 2.2.3 Question Asking API
A natural limitation of asking questions in a simulator is that only certain
types of questions can be understood and answered. We constrained the set of questions the agent can ask by defining an API of the questions available in the DialFRED (Gao, 2022) benchmark, together with their arguments, which HELPER-X can call on to gather more information. These include questions in three categories—Location, Appearance, and Direction—pertaining to the agent’s next interaction object. Importantly, this API can be continuously expanded by adding additional functions to it.
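A sketch of how such a question-asking API might look when exposed to the LLM as Python functions; the function names, docstrings, and oracle stub are our own illustration, not the paper's actual API.

```python
# Illustrative sketch of a question-asking API exposed to the LLM as functions.
# Function names, docstrings, and the Oracle stub are assumptions of this sketch.
class Oracle:
    """Stand-in for the DialFRED oracle; a real agent would query the benchmark."""
    def answer(self, question: str) -> str:
        return f"(oracle reply to: {question})"

oracle = Oracle()

def ask_location(object_name: str) -> str:
    """Ask where the object is (e.g., 'Where is the mug?')."""
    return oracle.answer(f"Where is the {object_name}?")

def ask_appearance(object_name: str) -> str:
    """Ask what the object looks like, to help disambiguate between instances."""
    return oracle.answer(f"What does the {object_name} look like?")

def ask_direction(object_name: str) -> str:
    """Ask which direction to move to reach the object."""
    return oracle.answer(f"Which direction should I go to find the {object_name}?")

# The signatures of these functions are appended to the LLM prompt, so generated
# programs can call them when the agent lacks information (e.g., an unknown location).
```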
Implementation details. We follow the network implementation of HELPER (Sarch,
2023). We use GPT-4-0613 (gpt, 2023) for text generation and text-embedding-
ada-002 (Greene, 2022) for text embeddings. We use the SOLQ object detector
(Dong, 2021) and ZoeDepth network (Bhat, 2023) for depth estimation from RGB
input. We use $k=3$ for example retrieval.
## 3 Experiments
We test HELPER-X in the following benchmarks: 1. Inferring and executing
action plans from dialogue (TEACh (Padmakumar, 2021)), 2. Inferring and
executing action plans from instructions (ALFRED (Shridhar, 2020)), 3. Active
question asking for seeking help during instruction execution (DialFRED (Gao,
2022)), and 4. Tidying up rooms (Tidy Task (Sarch, 2022)).
### 3.1 Inferring and Executing Action Plans from Dialogue
Inferring and executing task plans from dialogue involves understanding
dialogue segments and executing related instructions using the provided
information in the dialogue. This task evaluates the agent’s ability in
understanding noisy and free-form conversations between two humans discussing
about a household task.
##### Dataset
We use the TEACh benchmark (Padmakumar, 2021), which consists of over 3,000
dialogues focused on household tasks in the AI2-THOR environment Kolve (2017).
We use the Trajectory from Dialogue (TfD) variant, where an agent, given a
dialogue segment, must infer action sequences to fulfill tasks like making a
coffee or preparing a salad. The training dataset contains 1482 expert
demonstrations with associated dialogues. The evaluation includes 181 ’seen’
and 612 ’unseen’ episodes, with ’seen’ having different object placements and
states than in training, and ’unseen’ also having different object instances
and room environments. The agent receives an egocentric image at each step and
selects actions to execute, such as pickup(X), turn_left(), etc.
Baselines We consider two kinds of baselines: 1. Methods that supervise low-
level or high-level action prediction from language and visual input using
expert demonstrations in the training set (1482 in number) (Pashevich, 2021;
Zheng, 2022; Min, 2021; Zhang, 2022), and 2. Methods that use a small number of expert demonstrations for prompting a pretrained LLM (Sarch, 2023).
Specifically, HELPER and HELPER-X use 11 domain-specific examples. We
additionally include a comparison to HELPER-ALF, which uses specialized
prompts and examples for ALFRED.
Evaluation Metrics We follow the TEACh evaluation metrics. Task success rate
(SR) is a binary metric of whether all subtasks were successfully completed.
Goal condition success rate (GC) quantifies the proportion of achieved goal
conditions across all sessions. Both SR and GC have path-length weighted
variants weighted by (path length of the expert trajectory) / (path length
taken by the agent).
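For concreteness, a one-function sketch of this path-length weighting (following the common convention of capping the weight at 1):

```python
# Path-length weighting of a metric, as described above; capping the weight at 1
# follows the common convention so shorter-than-expert paths are not over-rewarded.
def path_length_weighted(metric: float, expert_len: int, agent_len: int) -> float:
    weight = expert_len / max(expert_len, agent_len)
    return metric * weight

# e.g. a success (1.0) achieved in twice the expert's number of steps scores 0.5
assert path_length_weighted(1.0, 50, 100) == 0.5
```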
Results are reported in Table 1. On validation unseen, HELPER-XS and HELPER-XP
demonstrate performance on-par with HELPER, with HELPER-XS even slightly
outperforming HELPER, despite HELPER-X being shared across four domains.
HELPER-X also outperforms the best supervised baselines trained in-domain with
many demonstrations. On validation seen, both HELPER-X variants outperform
HELPER, with the best model, HELPER-XP, outperforming HELPER by 2.7% in success rate.
Table 1: Trajectory from Dialogue (TfD) evaluation on the TEACh validation set. Trajectory-length-weighted metrics are included in (parentheses). FS = few shot. Sup. = supervised. G = generalist; shared across benchmarks. GC = goal-condition success.
| | Unseen | Seen
---|---|---|---
| | Success | GC | Success | GC
Sup. | E.T. (Pashevich, 2021) | 0.5 (0.1) | 0.4 (0.6) | 1.0 (0.2) | 1.4 (4.8)
JARVIS (Zheng, 2022) | 1.8 (0.3) | 3.1 (1.6) | 1.7 (0.2) | 5.4 (4.5)
FILM (Min, 2022) | 2.9 (1.0) | 6.1 (2.5) | 5.5 (2.6) | 5.8 (11.6)
DANLI (Zhang, 2022) | 8.0 (3.2) | 6.8 (6.6) | 5.0 (1.9) | 10.5 (10.3)
ECLAIR (Kim, 2023) | 13.2 | – | – | –
FS | HELPER-ALF | 10.5 (1.8) | 10.3 (4.5) | 13.3 (2.8) | 14.2 (7.5)
HELPER (Sarch, 2023) | 13.7 (1.6) | 14.2 (4.6) | 12.2 (1.8) | 18.6 (9.3)
G+FS | HELPER-XP | 13.6 (2.0) | 13.6 (5.6) | 14.9 (3.6) | 20.3 (11.0)
HELPER-XS | 14.5 (2.1) | 14.0 (5.4) | 14.4 (3.5) | 19.9 (11.0)
### 3.2 Following Natural Language Instructions
Natural language instruction following evaluates the agent’s ability to carry
out high-level instructions (“Rinse off a mug and place it in the coffee
maker") and low-level ones (“Walk to the coffee maker on the right") provided
by a human user. Importantly, the language and tasks in this evaluation differ
from the ones in the TEACh benchmark (Section 3.1).
##### Dataset
ALFRED (Shridhar, 2020) is a vision-and-language navigation benchmark
designed for embodied agents to execute tasks in domestic settings from RGB
sensory input. It includes seven task types across 207 environments, that
involve 115 object types in 4,703 task instances, varying from simple object
relocation to placing a heated item in a receptacle. The dataset includes
detailed human-authored instructions and high-level goals, based on 21,023
expert demonstrations. It also comprises 820 ’seen’ and 821 ’unseen’
validation episodes. Agents receive egocentric RGB images at each step and
select actions from a predefined set to progress, such as pickup(X),
turn_left(), etc.
Baselines Again, we consider two sets of baselines: those that supervise low-
level or high-level action prediction using the expert demonstrations in the
training set (Pashevich, 2021; Zheng, 2022; Min, 2021; Zhang, 2022; 2021;
Song, 2022a; Blukis, 2022; Bhambri, 2023; Liu, 2022a; Murray, 2022; Inoue,
2022), and those that use a small number of demonstrations ($\leq 100$) (few-
shot) (Sarch, 2023; Song, 2023; Brohan, 2023; Liu, 2023). The SayCan (Brohan,
2023), FILM (Min, 2021) (FS), and HLSM (Blukis, 2022) (FS) few shot baselines
are adapted for the few-shot ALFRED setting by the authors of Song (2023),
where the planning modules in FILM and HLSM are re-trained using only 100
demonstrations. We adapt HELPER (Sarch, 2023) as a baseline with specialized
prompts and examples for ALFRED, as well as a comparison to HELPER-TEACh,
which uses specialized prompts and examples for TEACh. HELPER and our HELPER-X
model each use 7 domain-specific examples in their memory.
Evaluation Metrics We follow the ALFRED evaluation metrics: 1. Task success rate (SR) and 2. Goal condition success rate (GC). These are defined the same as in TEACh (Section 3.1).
Results are reported in Table 2. Our conclusions are similar to Section 3.1.
On validation unseen, HELPER-XS and HELPER-XP demonstrate performance on-par
with HELPER, with HELPER-XP marginally outperforming HELPER by 1.0%, despite being shared across four domains. HELPER-X is also competitive with the best supervised baselines, despite only requiring a few in-domain demonstrations. On validation seen, we observe both HELPER-X models marginally outperforming HELPER. We additionally show that using HELPER-TEACh, which has prompts and examples for a different domain (TEACh), causes a significant 6.9% drop in performance.
Table 2: Evaluation on the ALFRED validation unseen set. Trajectory-length-weighted metrics are included in (parentheses). FS = few shot. Sup. = supervised. G = generalist; shared across benchmarks.
| | Unseen | Seen
---|---|---|---
| | Success | GC | Success | GC
Sup. | E.T. (Pashevich, 2021) | 7.3 (3.3) | 20.9 (11.3) | 46.6 (32.3) | 52.9 (42.2)
HiTUT (Zhang, 2021) | 12.4 (6.9) | 23.7 (12.0) | 25.2 (12.2) | 34.9 (18.5)
M-TRACK (Song, 2022a) | 17.29 | 28.98 | 26.70 | 33.21
HLSM (Blukis, 2022) | 18.3 | 31.2 | 29.6 | 38.7
FILM (Min, 2021) | 20.1 | 32.5 | 24.6 | 37.2
MCR-Agent (Bhambri, 2023) | 20.1 (10.8) | – | 34.4 (23.0) | –
LEBP (Liu, 2022a) | 22.36 | 29.58 | 27.63 | 35.76
LGS-RPA (Murray, 2022) | 33.18 | 44.68 | 43.86 | 52.51
EPA (Liu, 2023) | 40.11 | 44.14 | 45.78 | 51.03
Prompter (Inoue, 2022) | 53.3 (19.6) | 63.0 (21.7) | – | –
FS | HLSM (Blukis, 2022) (FS) | 0.00 | 1.86 | 0.1 | 2.8
FILM (Min, 2021) (FS) | 0.00 | 9.65 | 0.0 | 0.0
SayCan (Brohan, 2023) | 9.9 | 22.5 | 12.3 | 24.5
LLM-Planner (Song, 2023) | 15.4 | 23.4 | 16.5 | 30.1
HELPER-TEACh | 27.5 (5.9) | 44.3 (10.1) | 24.5 (6.3) | 38.2 (11.2)
HELPER (Sarch, 2023) | 34.4 (7.6) | 51.5 (11.9) | 27.6 (7.4) | 42.0 (12.7)
G+FS | HELPER-XP | 35.4 (7.9) | 52.9 (12.3) | 28.2 (7.5) | 42.5 (12.9)
HELPER-XS | 34.0 (7.5) | 51.1 (11.9) | 28.0 (7.4) | 42.1 (12.8)
### 3.3 Instruction Following with Asking Questions
Question asking instruction following allows the agent to choose to ask
questions to an oracle to gain additional information to help it complete a
task defined by an initial natural language instruction.
##### Dataset
The DialFRED benchmark (Gao, 2022) enables an agent to query users while
executing language instructions, utilizing user responses for task
improvement. It features a human-annotated dataset with 53K relevant questions
and answers, plus an oracle for responding to agent queries. Agents can ask
questions in three categories—Location, Appearance, and Direction—pertaining
to their next interaction object. The dataset covers 25 task types across 207
environments, 115 object types, and includes ’seen’ and ’unseen’ episodes. The
agent receives egocentric RGB images at each step and selects actions from a
set, like pickup(X), turn_left(), etc. This benchmark’s instructions and tasks
are distinct from TEACh and partially overlap with ALFRED, with significant
modifications and 18 new task types.
Questioning Implementation To ask questions in the DialFRED task, we add functions to the question-asking API that query the DialFRED oracle (see Section 2.2.3). HELPER-X asks questions when it does not know the location of an
object required for the task at hand. Unlike ALFRED, success in DialFRED
requires interacting with a specific instance of an object class. To account
for this, HELPER-X also asks questions to help disambiguate when it has seen
multiple instances of the same object.
Baselines We compare with the baselines in the DialFRED paper (Gao, 2022),
which includes a sequence-to-sequence architecture for choosing to ask a
question, trained with reinforcement learning, and the Episodic Transformer
architecture (Pashevich, 2021), trained with behavioral cloning. We adapt
HELPER (Sarch, 2023) as a baseline with specialized prompts and examples for
DialFRED, as well as our question asking API. We consider a few shot setting
with each few shot model receiving 7 domain-specific examples.
Evaluation Metrics We follow the conventions of the DialFRED benchmark. We use
the Task success rate (SR) metric. This is defined the same as TEACh (Section
3.1).
Results are reported in Table 3. On validation unseen, we observe HELPER-XS
marginally outperforming HELPER by 0.38 points in success rate, despite
HELPER-X being shared across all domains. While HELPER-X is outperformed by
the best supervised baselines, HELPER-X only requires a few in-domain
demonstrations compared to the thousands of language-action demonstrations and
RL interactions needed to train the baseline models. Most importantly, we see
the addition of question-asking in HELPER-X improves success rate by 2.48
points, highlighting its efficiency in question selection and response
utilization.
Table 3: Evaluation on the DialFRED validation unseen set. FS = few shot. Sup. = supervised. G = generalist; shared across benchmarks.
| | Success |
---|---|---|---
Sup. | Instructions Only (Gao, 2022) | 18.3 |
All QAs (Gao, 2022) | 32.0 |
RL Anytime (Gao, 2022) | 33.6 |
FS | HELPER (Sarch, 2023) | 19.62 |
G+FS | HELPER-XP | 18.96 |
without QA | 16.48 |
HELPER-XS | 19.99 |
Table 4: Evaluation on the Tidy Task test set. Trajectory-length-weighted metrics are included in (parentheses). FS = few shot. S = supervised. G = generalist; shared across benchmarks. CM = Correctly Moved. IM = Incorrectly Moved.
| | CM $\uparrow$ | IM $\downarrow$ | Energy% $\downarrow$ | Steps
---|---|---|---|---|---
S | TIDEE (Sarch, 2022) | 2.7 | 0.3 | 64.9 | 437.6
FS | Random Receptacle | 2.0 | 0.3 | 95.5 | 329.3
HELPER (Sarch, 2023) | 2.1 | 0.2 | 83.9 | 348.3
G+FS | HELPER-XP | 2.1 | 0.3 | 86.9 | 368.2
HELPER-XS | 2.2 | 0.2 | 83.4 | 333.9
### 3.4 Tidying Up using Spatial Commonsense Reasoning
Tidying up involves figuring out where to place items without explicit
instructions, relying on spatial commonsense to infer a proper location for an
object. This task tests the agent’s ability to use commonsense reasoning
regarding contextual, object-object, and object-room spatial relations.
##### Dataset
We evaluate on the Tidy Task (Sarch, 2022) benchmark, where the agent is
spawned in a disorganized room, and must reposition objects to bring them to
an organized tidy state. The dataset consists of 8000 training, 200
validation, and 100 testing messy configurations in 120 distinct scenes of
bedrooms, living rooms, kitchens and bathrooms. At each time step, the agent
obtains an egocentric RGB and depth image and must choose an action from a
specified set to transition to the next step, such as pickup(X), turn_left(),
etc. In this setup, the models are prompted to tidy up the room, given a set
of objects that are out of place obtained using the visual detector from TIDEE
(Sarch, 2022).
Baselines We compare against TIDEE (Sarch, 2022), which includes a graph
neural network encoding common object arrangements. This is supervised in the
training set of the Tidy Task to predict where a target object should be re-
positioned to in the current scene. We adapt HELPER (Sarch, 2023) as a
baseline with specialized prompts and examples for the Tidy Task. We
additionally include a random receptacle baseline which chooses random
receptacle placement locations for the out of place objects. We consider a few
shot setting with each few shot model receiving 3 domain-specific examples.
Evaluation Metrics We use the following evaluation metrics for the Tidy Task: Correctly Moved (CM): the average number of objects that are out of place in the scene and are moved by the agent (higher is better). Incorrectly Moved (IM): the average number of objects that are not out of place but were moved by the agent (lower is better). Energy: the "cleanliness" energy, where lower energy represents a higher likelihood that the room’s object configuration aligns with the configurations of the organized AI2THOR rooms. Following ProcThor (Deitke, 2022), for each receptacle object, the probability that each object type appears on its surface is computed across the AI2THOR scenes. See the Appendix for more details.
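A minimal sketch of one plausible way to estimate such receptacle/object co-occurrence probabilities and turn them into an energy score is given below; the exact definition used for the benchmark is in the Appendix, so this is only an illustration.

```python
# Illustrative sketch (not the benchmark's exact definition, which is in the
# Appendix) of receptacle/object co-occurrence probabilities and an energy score.
from collections import Counter, defaultdict
import math

def cooccurrence_probs(placements):
    """placements: (receptacle_type, object_type) pairs observed in tidy rooms."""
    counts = defaultdict(Counter)
    for receptacle, obj in placements:
        counts[receptacle][obj] += 1
    return {r: {o: n / sum(c.values()) for o, n in c.items()} for r, c in counts.items()}

def energy(current_placements, probs, eps=1e-6):
    """Lower is better: sum of negative log-probabilities of the current placements."""
    return sum(-math.log(probs.get(r, {}).get(o, eps)) for r, o in current_placements)

probs = cooccurrence_probs([("CounterTop", "Mug"), ("CounterTop", "Mug"),
                            ("CounterTop", "Knife"), ("Sofa", "Pillow")])
print(energy([("CounterTop", "Mug"), ("Sofa", "Mug")], probs))  # mug on sofa is penalised
```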
Results are in Table 4. On the Tidy Task, HELPER-XS and HELPER-XP demonstrate
performance on-par with HELPER. HELPER-X does significantly better than if
object locations are randomly placed (Random Receptacle). We find that the
supervised baseline, TIDEE, outperforms HELPER-X, especially in the Energy
metric, revealing that in-domain training on this benchmark is helpful for
learning the common object configurations within the AI2THOR environments.
However, we find that HELPER-X accomplishes the task in significantly fewer
steps compared to TIDEE.
## 4 Conclusion
We introduce HELPER-X, an embodied agent that executes tasks from dialogue or
language instructions, asks questions, and tidies up rooms. HELPER-X has two
variants, HELPER-XP and HELPER-XS, enhancing HELPER’s memory capabilities.
HELPER-XP retrieves domain-specific templates and examples for large language
models, while HELPER-XS retrieves only examples for a domain-agnostic prompt
template through a shared memory. Evaluation of HELPER-X in four domains (TEACh, ALFRED, DialFRED, and the Tidy Task) yields state-of-the-art performance in the few-shot setting. The memory and API expansions we considered maintained or improved performance, highlighting the effectiveness of memory-enhanced LLMs for building versatile, instructable agents.
## 5 Related Work
### 5.1 Memory-Augmented Prompting of Large Language Models
Recently, external memories have been instrumental in scaling language models
Borgeaud (2021); Khandelwal (2019), overcoming the constraints of limited
context windows in parametric transformers Wu (2022). They also facilitate
knowledge storage in various forms such as entity mentions de Jong (2021),
knowledge graphs Das (2022), and question-answer pairs Chen (2022). Retrieval-
augmented generation (RAG) (Lewis, 2020; Mao, 2021) has been shown to
significantly improve response quality in large language models (LLMs) by
integrating external knowledge sources with the model’s internal
representations. In agent-based domains, memory-augmented prompting has
enhanced task planning in embodied instructional contexts (Song, 2023; Sarch,
2023) and open-world gaming (Wang, 2023b; a; Majumder, 2023). Our model,
HELPER-X, employs memory-augmented prompting across four benchmarks,
demonstrating that memory expansion across related domains can maintain
performance.
### 5.2 Instructable Embodied Agents that Interact with their Environments
Numerous benchmarks assess embodied vision-and-language tasks, with
significant advancements in learning-based embodied AI agents across tasks
like scene rearrangement Gan (2022); Weihs (2021); Batra (2020); Sarch (2022);
Trabucco (2022), object-goal navigation Anderson (2018); Yang (2019); Wortsman
(2019); Chaplot (2020); Gupta (2017); Chang (2020); Gervet (2023); Chang
(2023), point-goal navigation and exploration Anderson (2018); Savva (2019);
Wijmans (2020); Ramakrishnan (2020); Gupta (2017); Chen (2019); Chaplot
(2019); Kumar (2021), embodied question answering Gordon (2018); Das (2018);
Zhu (2023); Datta (2022); Das (2020); Gao (2022), instructional and image
navigation (Ku, 2020; Krantz, 2023), audio-visual navigation (Chen, 2020),
interactive dialogue and natural language instruction following (Yenamandra,
2023; Shridhar, 2020; Padmakumar, 2021; Gao, 2023), and embodied commonsense
reasoning (Kant, 2022; Sarch, 2022; Wu, 2023). Interactive instruction
benchmarks (e.g., ALFRED (Shridhar, 2020) and TEACh (Padmakumar, 2021))
require agents to follow natural language directives and dialogue, identifying
objects in scenes via interaction masks. Variants like DialFRED (Gao, 2022)
allow agent inquiries about objects and locations. Benchmarks such as TIDEE
(Sarch, 2022) and HouseKeep (Kant, 2022) test agents’ ability to tidy rooms
using commonsense, without explicit object placement directives. Unlike most
methods confined to a single domain, our work focuses on creating a multi-
domain agent adept in dialogue-based task planning, natural language
instruction following, asking questions for disambiguation of instructions,
and tidying up scenes. Our method shows competitive performance across the
four domains with a few task-specific demonstrations and without domain-
specific weights, beyond the single image object detector.
Interactive vision-language embodied agent methods train distinct agents for
each language-defined task, using large datasets from expert demonstrations
(Min, 2021; Inoue, 2022; Zhang, 2022; Kim, 2023; Pashevich, 2021). Some
approaches use these demonstrations for end-to-end network training to
directly predict actions from observations (Pashevich, 2021; Gao, 2022; Zhang,
2021). Others employ modular methods, training planners to generate subgoals
handled by specialized perception, manipulation, and obstacle avoidance
modules (Min, 2021; Inoue, 2022; Blukis, 2022; Zheng, 2022; Kim, 2023;
Bhambri, 2023; Liu, 2022a; Murray, 2022; Liu, 2023). However, these methods
often over-specialize to specific datasets and tasks, limited by the training
domain’s language and task structure. In contrast, our method performs
competitively across multiple benchmarks with minimal task-specific
demonstrations and without needing domain-specific networks.
## References
  * gpt (2023) OpenAI. GPT-4 technical report. _arXiv preprint arXiv:2303.08774_ , 2023.
* Alayrac (2022) Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. _Advances in Neural Information Processing Systems_ , 35:23716–23736, 2022.
* Anderson (2018) Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, et al. On evaluation of embodied navigation agents. _arXiv preprint arXiv:1807.06757_ , 2018.
* Batra (2020) Dhruv Batra, Angel Xuan Chang, S. Chernova, Andrew J. Davison, Jun Deng, Vladlen Koltun, Sergey Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, and Hao Su. Rearrangement: A challenge for embodied ai. _ArXiv_ , abs/2011.01975, 2020.
* Bhambri (2023) Suvaansh Bhambri, Byeonghwi Kim, and Jonghyun Choi. Multi-level compositional reasoning for interactive instruction following. _Interaction_ , 3:4, 2023.
* Bhat (2023) Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias Müller. Zoedepth: Zero-shot transfer by combining relative and metric depth. _arXiv preprint arXiv:2302.12288_ , 2023.
* Blukis (2022) Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, and Yoav Artzi. A persistent spatial semantic representation for high-level natural language instruction execution. In _Conference on Robot Learning_ , pp. 706–717. PMLR, 2022.
* Borgeaud (2021) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. _CoRR_ , abs/2112.04426, 2021. URL https://arxiv.org/abs/2112.04426.
* Brohan (2023) Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. Do as i can, not as i say: Grounding language in robotic affordances. In _Conference on Robot Learning_ , pp. 287–318. PMLR, 2023.
* Brown (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020.
* Chang (2020) Matthew Chang, Arjun Gupta, and Saurabh Gupta. Semantic visual navigation by watching youtube videos. _Advances in Neural Information Processing Systems_ , 33:4283–4294, 2020.
* Chang (2023) Matthew Chang, Theophile Gervet, Mukul Khanna, Sriram Yenamandra, Dhruv Shah, So Yeon Min, Kavit Shah, Chris Paxton, Saurabh Gupta, Dhruv Batra, Roozbeh Mottaghi, Jitendra Malik, and Devendra Singh Chaplot. Goat: Go to any thing, 2023.
* Chaplot (2019) Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, and Ruslan Salakhutdinov. Learning to explore using active neural slam. In _International Conference on Learning Representations_ , 2019.
* Chaplot (2020) Devendra Singh Chaplot, Dhiraj Prakashchand Gandhi, Abhinav Gupta, and Russ R Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Chen (2020) Changan Chen, Unnat Jain, Carl Schissler, Sebastia Vicenc Amengual Gari, Ziad Al-Halah, Vamsi Krishna Ithapu, Philip Robinson, and Kristen Grauman. Soundspaces: Audio-visual navigation in 3d environments. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16_ , pp. 17–36. Springer, 2020.
* Chen (2019) Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning exploration policies for navigation. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/pdf?id=SyMWn05F7.
* Chen (2022) Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, and William Cohen. Augmenting pre-trained language models with qa-memory for open-domain question answering, 2022. URL https://arxiv.org/abs/2204.04581.
* Das (2018) Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 1–10, 2018.
* Das (2020) Abhishek Das, Federico Carnevale, Hamza Merzic, Laura Rimell, Rosalia Schneider, Josh Abramson, Alden Hung, Arun Ahuja, Stephen Clark, Gregory Wayne, et al. Probing emergent semantics in predictive agents via question answering. In _Proceedings of the 37th International Conference on Machine Learning_ , pp. 2376–2391, 2020.
* Das (2022) Rajarshi Das, Ameya Godbole, Ankita Naik, Elliot Tower, Robin Jia, Manzil Zaheer, Hannaneh Hajishirzi, and Andrew McCallum. Knowledge base question answering by case-based reasoning over subgraphs, 2022. URL https://arxiv.org/abs/2202.10610.
* Datta (2022) Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh. Episodic memory question answering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 19119–19128, 2022.
* de Jong (2021) Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William Cohen. Mention memory: incorporating textual knowledge into transformers through entity mention attention. _CoRR_ , abs/2110.06176, 2021. URL https://arxiv.org/abs/2110.06176.
* Deitke (2022) Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Kiana Ehsani, Jordi Salvador, Winson Han, Eric Kolve, Aniruddha Kembhavi, and Roozbeh Mottaghi. Procthor: Large-scale embodied ai using procedural generation. _Advances in Neural Information Processing Systems_ , 35:5982–5994, 2022.
* Dong (2021) Bin Dong, Fangao Zeng, Tiancai Wang, Xiangyu Zhang, and Yichen Wei. Solq: Segmenting objects by learning queries. _Advances in Neural Information Processing Systems_ , 34:21898–21909, 2021.
* Gan (2022) Chuang Gan, Siyuan Zhou, Jeremy Schwartz, Seth Alter, Abhishek Bhandwaldar, Dan Gutfreund, Daniel L.K. Yamins, James J. DiCarlo, Josh McDermott, Antonio Torralba, and Joshua B. Tenenbaum. The threedworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai. In _2022 International Conference on Robotics and Automation (ICRA)_ , pp. 8847–8854, 2022. doi: 10.1109/ICRA46639.2022.9812329.
* Gao (2023) Qiaozi Gao, Govind Thattai, Xiaofeng Gao, Suhaila Shakiah, Shreyas Pansare, Vasu Sharma, Gaurav Sukhatme, Hangjie Shi, Bofei Yang, Desheng Zheng, et al. Alexa arena: A user-centric interactive platform for embodied ai. _arXiv preprint arXiv:2303.01586_ , 2023.
* Gao (2020) Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. _arXiv preprint arXiv:2012.15723_ , 2020.
* Gao (2022) Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, and Gaurav S Sukhatme. DialFRED: Dialogue-enabled agents for embodied instruction following. _IEEE Robotics and Automation Letters_ , 7(4):10049–10056, 2022.
* Gervet (2023) Theophile Gervet, Soumith Chintala, Dhruv Batra, Jitendra Malik, and Devendra Singh Chaplot. Navigating to objects in the real world. _Science Robotics_ , 8(79):eadf6991, 2023.
* Gordon (2018) Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. Iqa: Visual question answering in interactive environments. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 4089–4098, 2018.
  * Greene (2022) Ryan Greene, Ted Sanders, Lilian Weng, and Arvind Neelakantan. OpenAI: New and improved embedding model, 2022.
* Gupta (2017) Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive mapping and planning for visual navigation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017.
* Hongjin (2022) SU Hongjin, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, et al. Selective annotation makes language models better few-shot learners. In _The Eleventh International Conference on Learning Representations_ , 2022.
* Inoue (2022) Yuki Inoue and Hiroki Ohashi. Prompter: Utilizing large language model prompting for a data efficient embodied instruction following. _arXiv preprint arXiv:2211.03267_ , 2022.
* Kant (2022) Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski, Dhruv Batra, Andrew Szot, and Harsh Agrawal. Housekeep: Tidying virtual households using commonsense reasoning. In _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX_ , pp. 355–373. Springer, 2022.
* Khandelwal (2019) Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models, 2019. URL https://arxiv.org/abs/1911.00172.
* Kim (2023) Byeonghwi Kim, Jinyeon Kim, Yuyeong Kim, Cheolhong Min, and Jonghyun Choi. Context-aware planning and environment-aware memory for instruction following embodied agents. In _ICCV_ , 2023.
* Kolve (2017) Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. _arXiv_ , 2017.
* Krantz (2023) Jacob Krantz, Theophile Gervet, Karmesh Yadav, Austin Wang, Chris Paxton, Roozbeh Mottaghi, Dhruv Batra, Jitendra Malik, Stefan Lee, and Devendra Singh Chaplot. Navigating to objects specified by images. _arXiv preprint arXiv:2304.01192_ , 2023.
* Ku (2020) Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 4392–4412, 2020.
* Kumar (2021) Ashish Kumar, Zipeng Fu, Deepak Pathak, and Jitendra Malik. RMA: Rapid motor adaptation for legged robots, 2021.
* Lewis (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. _Advances in Neural Information Processing Systems_ , 33:9459–9474, 2020.
* Lin (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_ , pp. 740–755. Springer, 2014.
* Liu (2022a) Haoyu Liu, Yang Liu, Hongkai He, and Hangfang Yang. Lebp–language expectation & binding policy: A two-stream framework for embodied vision-and-language interaction task learning agents. _arXiv preprint arXiv:2203.04637_ , 2022a.
* Liu (2022b) Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? _DeeLIO 2022_ , pp. 100, 2022b.
* Liu (2022c) Jiachang Liu, Dinghan Shen, Yizhe Zhang, William B Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-3? In _Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures_ , pp. 100–114, 2022c.
* Liu (2023) Xiaotian Liu, Hector Palacios, and Christian Muise. Egocentric planning for scalable embodied task achievement. _arXiv preprint arXiv:2306.01295_ , 2023.
* Majumder (2023) Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Oyvind Tafjord, Niket Tandon, Li Zhang, Chris Callison-Burch, and Peter Clark. Clin: A continually learning language agent for rapid task adaptation and generalization. _arXiv_ , 2023.
* Mao (2021) Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. Generation-augmented retrieval for open-domain question answering. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pp. 4089–4100, 2021.
* Min (2021) So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and Ruslan Salakhutdinov. Film: Following instructions in language with modular methods, 2021.
* Min (2022) So Yeon Min, Hao Zhu, Ruslan Salakhutdinov, and Yonatan Bisk. Don’t copy the teacher: Data and model challenges in embodied dialogue. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pp. 9361–9368, 2022.
* Mishra (2022) Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pp. 3470–3487, 2022.
* Mu (2023) Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. Embodiedgpt: Vision-language pre-training via embodied chain of thought. _arXiv preprint arXiv:2305.15021_ , 2023.
* Murray (2022) Michael Murray and Maya Cakmak. Following natural language instructions for household tasks with landmark guided search and reinforced pose adjustment. _IEEE Robotics and Automation Letters_ , 7(3):6870–6877, 2022.
* Padmakumar (2021) Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. Teach: Task-driven embodied agents that chat, 2021.
* Pashevich (2021) Alexander Pashevich, Cordelia Schmid, and Chen Sun. Episodic transformer for vision-and-language navigation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 15942–15952, 2021.
* Perez (2021) Ethan Perez, Douwe Kiela, and Kyunghyun Cho. True few-shot learning with language models. _Advances in neural information processing systems_ , 34:11054–11070, 2021.
* Radford (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pp. 8748–8763. PMLR, 2021.
* Ramakrishnan (2020) Santhosh K Ramakrishnan, Ziad Al-Halah, and Kristen Grauman. Occupancy anticipation for efficient exploration and navigation. In _European Conference on Computer Vision_ , pp. 400–418. Springer, 2020.
* Sarch (2022) Gabriel Sarch, Zhaoyuan Fang, Adam W Harley, Paul Schydlo, Michael J Tarr, Saurabh Gupta, and Katerina Fragkiadaki. Tidee: Tidying up novel rooms using visuo-semantic commonsense priors. In _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX_ , pp. 480–496. Springer, 2022.
* Sarch (2023) Gabriel Sarch, Yue Wu, Michael Tarr, and Katerina Fragkiadaki. Open-ended instructable embodied agents with memory-augmented large language models. In _Findings of the Association for Computational Linguistics: EMNLP 2023_ , 2023.
* Savva (2019) Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied ai research. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 9339–9347, 2019.
* Schick (2021) Timo Schick and Hinrich Schütze. It’s not just size that matters: Small language models are also few-shot learners. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pp. 2339–2352, 2021.
* Shridhar (2020) Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pp. 10740–10749, 2020.
* Song (2022a) Chan Hee Song, Jihyung Kil, Tai-Yu Pan, Brian M Sadler, Wei-Lun Chao, and Yu Su. One step at a time: Long-horizon vision-and-language navigation with milestones. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 15482–15491, 2022a.
* Song (2023) Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. Llm-planner: Few-shot grounded planning for embodied agents with large language models, 2023.
* Song (2022b) Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, and Furu Wei. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pp. 6088–6100, 2022b.
* Trabucco (2022) Brandon Trabucco, Gunnar A Sigurdsson, Robinson Piramuthu, Gaurav S Sukhatme, and Ruslan Salakhutdinov. A simple approach for visual room rearrangement: 3d mapping and semantic search. In _The Eleventh International Conference on Learning Representations_ , 2022.
* Wang (2023a) Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. _arXiv preprint arXiv:2305.16291_ , 2023a.
* Wang (2023b) Zihao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, and Yitao Liang. Jarvis-1: Open-world multi-task agents with memory-augmented multimodal language models. _arXiv preprint arXiv:2311.05997_ , 2023b.
* Wei (2021) Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_ , 2021.
* Weihs (2021) Luca Weihs, Matt Deitke, Aniruddha Kembhavi, and Roozbeh Mottaghi. Visual room rearrangement. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2021.
* Wijmans (2020) Erik Wijmans, Abhishek Kadian, Ari S. Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames. In _ICLR_ , 2020.
* Wortsman (2019) Mitchell Wortsman, Kiana Ehsani, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Learning to learn how to learn: Self-adaptive visual navigation using meta-learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 6750–6759, 2019.
* Wu (2023) Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, and Thomas Funkhouser. Tidybot: Personalized robot assistance with large language models. _arXiv preprint arXiv:2305.05658_ , 2023.
* Wu (2022) Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers, 2022. URL https://arxiv.org/abs/2203.08913.
* Yang (2023) Jingkang Yang, Yuhao Dong, Shuai Liu, Bo Li, Ziyue Wang, Chencheng Jiang, Haoran Tan, Jiamu Kang, Yuanhan Zhang, Kaiyang Zhou, et al. Octopus: Embodied vision-language programmer from environmental feedback. _arXiv preprint arXiv:2310.08588_ , 2023.
* Yang (2019) Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, and Roozbeh Mottaghi. Visual semantic navigation using scene priors. In _Proceedings of (ICLR) International Conference on Learning Representations_ , May 2019.
* Yenamandra (2023) Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John Turner, et al. Homerobot: Open-vocabulary mobile manipulation. _arXiv preprint arXiv:2306.11565_ , 2023.
* Zhang (2021) Yichi Zhang and Joyce Chai. Hierarchical task learning from language instructions with unified transformers and self-monitoring. In _Findings of the Association for Computational Linguistics: ACL 2021_ , 2021.
* Zhang (2022) Yichi Zhang, Jianing Yang, Jiayi Pan, Shane Storks, Nikhil Devraj, Ziqiao Ma, Keunwoo Yu, Yuwei Bao, and Joyce Chai. Danli: Deliberative agent for following natural language instructions. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pp. 1280–1298, 2022.
* Zheng (2022) Kaizhi Zheng, Kaiwen Zhou, Jing Gu, Yue Fan, Jialu Wang, Zonglin Li, Xuehai He, and Xin Eric Wang. Jarvis: A neuro-symbolic commonsense reasoning framework for conversational embodied agents, 2022.
* Zhu (2023) Hao Zhu, Raghav Kapoor, So Yeon Min, Winson Han, Jiatai Li, Kaiwen Geng, Graham Neubig, Yonatan Bisk, Aniruddha Kembhavi, and Luca Weihs. Excalibur: Encouraging and evaluating embodied exploration. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 14931–14942, 2023.
## Appendix S1 Limitations
Our model has the following limitations:
1. Task planning from multimodal input: Currently, our LLM receives the environment's state only in the case of a failure, through VLM feedback. Integrating the visual state of the environment more directly may dramatically increase the accuracy of predicted plans. This direction aligns with recent work (Wang, 2023b; Mu, 2023; Yang, 2023) that uses visual features as input to language models.
2. Cost of GPT-4: While GPT-4 is the most accurate large language model, its high cost necessitates exploring alternatives, such as open-source models, hardware optimization, and model compression or distillation of its knowledge into smaller models, to reduce expenses.
3. Manual addition of domains: Our model supports four domains with shared examples and prompts, but manual intervention is needed to add significantly different domains and tasks. Future developments should focus on automating the detection and integration of out-of-domain inputs.
## Appendix S2 Prompts
### S2.1 Prompt templates for prompt retrieval
In the prompt retrieval experiments, we include four prompt templates to be
retrieved. These templates are shown for TEACh, ALFRED, Dialfred, and the Tidy
Task in Listing S1, Listing S2, Listing S3, Listing S4, respectively.
### S2.2 In-Context Examples
Samples of the in-context examples are shown for TEACh, ALFRED, Dialfred, and
the Tidy Task in Listing S5, Listing S6, Listing S7, Listing S8, respectively.
## Appendix S3 Question Asking
### S3.1 Overview
In the DialFRED benchmark, when HELPER-X is unable to find an object, it is
able to ask one of three question types in order to aid itself. In a real-
world scenario, HELPER-X could take advantage of the LLM’s capability to ask
many types of questions, but the DialFRED benchmark limits us to three:
direction, location, and appearance.
### S3.2 Question asking pipeline
When HELPER-X does not already have an object's location in its memory, or multiple instances of an object exist in the memory, it forms a prompt with its current context and the API of available questions, as in Listing S9. Based on the context, HELPER-X then chooses and asks the most appropriate question. The returned answer and an API of search-related actions, alongside the context and question, are then formed into another prompt, shown in Listing S10. Finally, this prompt is parsed by HELPER-X into an action script to search for the object. Examples of this full pipeline for the case where an object does not exist in the memory are given in Listing S11 and Listing S12.
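The control flow of this two-stage prompting loop can be sketched as follows. The prompt strings are abbreviated stand-ins for Listing S9 and Listing S10, and `llm` and `oracle` are placeholder callables (the LLM call and the benchmark's answer source), not the actual implementation.

```python
# Hypothetical sketch of the two-stage question-asking loop described above.
QUESTION_TYPES = ["direction", "location", "appearance"]

def ask_about_object(obj_category, known_locations, context, llm, oracle):
    """Ask a clarification question when an object's location is unknown or ambiguous."""
    if len(known_locations.get(obj_category, [])) == 1:
        return []  # location known unambiguously; no question needed
    # Stage 1: choose and ask the most informative question (cf. Listing S9).
    question_prompt = (
        f"Context: {context}\n"
        f"Available question types: {QUESTION_TYPES}\n"
        "Write a questioning script."
    )
    question = llm(question_prompt)      # e.g. askForLocation('ButterKnife')
    answer = oracle(question)            # natural-language answer from the benchmark
    # Stage 2: parse the answer into a search script (cf. Listing S10).
    answer_prompt = (
        f"Context: {context}\nQuestion Asked: {question}\nAnswer Returned: {answer}\n"
        "Write a search script using the available actions."
    )
    search_script = llm(answer_prompt)   # e.g. turn('left'); search_near_other_object(...)
    return search_script.splitlines()
```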
## Appendix S4 Pre-conditions
An example of a pre-condition check for a macro-action is provided in Listing
LABEL:precond_example.
## Appendix S5 Example LLM inputs & Outputs
We provide examples of dialogue input, retrieved examples, and LLM output for
a TEACh sample in Listing LABEL:example1, Listing LABEL:example2, and Listing
LABEL:example3.
## Appendix S6 Simulation environment
The TEACh dataset builds on the AI2-THOR simulation environment (Kolve, 2017). At each time step the agent may choose from the following actions: Forward(), Backward(), Turn Left(), Turn Right(), Look Up(), Look Down(), Strafe Left(), Strafe Right(), Pickup(X), Place(X), Open(X), Close(X), ToggleOn(X), ToggleOff(X), Slice(X), and Pour(X), where X refers to an object specified via a relative coordinate $(x,y)$ on the egocentric RGB frame. Navigation actions move the agent in discrete steps. The agent rotates by 90 degrees in the yaw direction and by 30 degrees in the pitch direction. The RGB and depth sensors have a resolution of 480x480 and a field of view of 90 degrees, and lie at a height of 0.9015 meters. The agent's coordinates are parameterized by a single $(x,y,z)$ coordinate triplet, with $x$ and $z$ corresponding to movement in the horizontal plane and $y$ reserved for the vertical direction. The TEACh benchmark allows a maximum of 1000 steps and 30 API failures per episode.
## Appendix S7 _Executor_ details
### S7.1 Semantic mapping and planning
##### Obstacle map
HELPER-X maintains a 2D overhead occupancy map of its environment
$\in\mathbb{R}^{H\times W}$ that it updates at each time step from the input
RGB-D stream. The map is used for exploration and navigation in the
environment.
At every time step $t$, we unproject the input depth maps using the intrinsic and extrinsic parameters of the camera to obtain a 3D occupancy map registered to the coordinate frame of the agent, similar to earlier navigation agents (Chaplot, 2019). The 2D overhead maps of obstacles and free space are computed by projecting the 3D occupancy along the height direction at multiple height levels and summing. For each input RGB image, we run a SOLQ object segmentor (Dong, 2021), pretrained on COCO (Lin, 2014) and finetuned on TEACh rooms, to localize each of 116 semantic object categories. For failure detection, we use a simple matching approach from Min (2021) that compares RGB pixel values before and after taking an action.
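As a rough illustration of the map update just described, the sketch below bins a registered 3D point cloud into a 2D overhead obstacle/free-space map by projecting along the height direction. The map size, cell resolution, and height thresholds are placeholder values, not the ones used by HELPER-X.

```python
import numpy as np

def update_overhead_map(points_xyz, map_size=240, cell_m=0.05,
                        floor_max=0.1, agent_height=1.8):
    """Project registered 3D points (N, 3) into 2D obstacle / free-space maps.

    Placeholder thresholds; the real agent accumulates this map over time.
    """
    obstacle = np.zeros((map_size, map_size), dtype=np.int32)
    free = np.zeros((map_size, map_size), dtype=np.int32)
    # the map is centered on the agent; x/z span the horizontal plane, y is height
    ix = np.clip((points_xyz[:, 0] / cell_m + map_size // 2).astype(int), 0, map_size - 1)
    iz = np.clip((points_xyz[:, 2] / cell_m + map_size // 2).astype(int), 0, map_size - 1)
    heights = points_xyz[:, 1]
    is_obstacle = (heights > floor_max) & (heights < agent_height)
    np.add.at(obstacle, (iz[is_obstacle], ix[is_obstacle]), 1)
    np.add.at(free, (iz[~is_obstacle], ix[~is_obstacle]), 1)
    # a cell counts as an obstacle (or free) if enough points fall into it
    return obstacle > 5, free > 5
```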
##### Object location and state tracking
We maintain an object memory as a list of object detection 3D centroids and their predicted semantic labels $\{[(X,Y,Z)_{i},\ell_{i}\in\{1,\dots,N\}],\ i=1,\dots,K\}$, where $K$ is the number of objects detected thus far. The object centroids are expressed with respect to the coordinate system of the agent and, similar to the semantic maps, are updated over time using egomotion. We track previously detected objects by their 3D centroid $C\in\mathbb{R}^{3}$. We estimate the centroid by taking the 3D point corresponding to the median depth within the segmentation mask and bringing it into a common coordinate frame. We perform a simple form of non-maximum suppression on the object memory by comparing the Euclidean distance of centroids in the memory to newly detected centroids of the same category, and keep the one with the highest score if they fall within a distance threshold.
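The centroid-based non-maximum suppression described above can be sketched as follows; the memory entries are simplified to dictionaries and the distance threshold is a placeholder, not the value used in our implementation.

```python
import numpy as np

def merge_detection(memory, centroid, label, score, dist_thresh=0.5):
    """Add a new detection to the object memory, suppressing near-duplicates.

    memory: list of dicts with keys 'centroid' (np.ndarray of shape (3,)), 'label', 'score'.
    The 0.5 m threshold is a placeholder value.
    """
    for entry in memory:
        if entry["label"] != label:
            continue
        if np.linalg.norm(entry["centroid"] - centroid) < dist_thresh:
            # same category within the distance threshold:
            # keep whichever detection has the higher score
            if score > entry["score"]:
                entry.update(centroid=centroid, score=score)
            return memory
    memory.append({"centroid": centroid, "label": label, "score": score})
    return memory
```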
For each object in the object memory, we maintain an object state dictionary with a pre-defined list of attributes. These attributes include: category label, centroid location, holding, detection score, can use, sliced, toasted, clean, and cooked. The binary attributes are initialized by sending the object crop, defined by the detector mask, to the VLM model and checking its match to each of [f"The {object_category} is {attribute}", f"The {object_category} is not {attribute}"]. We found that initializing these attributes with the VLM made only a marginal difference compared to initializing them to default values on the TEACh benchmark, so we do not use it for the TEACh evaluations. However, we anticipate that a general method, beyond the dataset biases of TEACh, would benefit greatly from such vision-based attribute classification.
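The attribute initialization can be sketched as a CLIP-style text-image matching step. The snippet below is illustrative only: the VLM checkpoint and exact prompt handling are assumptions, and, as noted above, this initialization is not used for the TEACh evaluations.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Illustrative only: the VLM used in the paper may differ from this checkpoint.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def init_binary_attribute(object_crop: Image.Image, category: str, attribute: str) -> bool:
    """Return True if the crop matches 'is {attribute}' better than 'is not {attribute}'."""
    texts = [f"The {category} is {attribute}", f"The {category} is not {attribute}"]
    inputs = processor(text=texts, images=object_crop, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return bool(probs[0] > probs[1])
```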
Table S1: Alternative TEACh Execution from Dialog History (EDH) evaluation split. Trajectory-length-weighted metrics are shown in parentheses. SR = success rate. GC = goal-condition success rate. Note that Test Seen and Unseen are not the true TEACh test sets, but an alternative split of the validation set used until the true test evaluation is released, as mentioned in the TEACh GitHub README and also reported by DANLI (Zhang, 2022).

Model | Val Unseen SR | Val Unseen GC | Val Seen SR | Val Seen GC | Test Unseen SR | Test Unseen GC | Test Seen SR | Test Seen GC
---|---|---|---|---|---|---|---|---
E.T. | 8.35 (0.86) | 6.34 (3.69) | 8.28 (1.13) | 8.72 (3.82) | 7.38 (0.97) | 6.06 (3.17) | 8.82 (0.29) | 9.46 (3.03)
DANLI | 17.25 (7.16) | 23.88 (19.38) | 16.89 (9.12) | 25.10 (22.56) | 16.71 (7.33) | 23.00 (20.55) | 18.63 (9.41) | 24.77 (21.90)
HELPER | 17.25 (3.22) | 25.24 (8.12) | 19.21 (4.72) | 33.54 (10.95) | 17.55 (2.59) | 26.49 (7.67) | 17.97 (3.44) | 30.81 (8.93)
##### Exploration and path planning
HELPER-X explores the scene using a classical mapping method. We take the initial position of the agent to be the center coordinate in the map. We first rotate the agent in place and use the observations to instantiate an initial map. The agent then incrementally completes the map by randomly sampling an unexplored, traversable location based on the 2D occupancy map built so far, navigating to the sampled location, and accumulating the new information into the maps at each time step. The number of observations collected at each point in the 2D occupancy map is thresholded to determine whether a given map location is explored or not. Unexplored positions are sampled until the environment has been fully explored, meaning that the number of unexplored points is fewer than a predefined threshold.
To navigate to a goal location, we compute the geodesic distance to the goal from all map locations using graph search (Inoue, 2022), given the top-down occupancy map and the goal location in the map. We then simulate action sequences and greedily take the action sequence which results in the largest reduction in geodesic distance.
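A rough sketch of this planner: geodesic distances are computed by breadth-first search over the free cells of the occupancy grid, and the agent greedily steps toward the neighboring cell with the smallest remaining distance. This simplifies the simulation of full action sequences; the 4-connected grid and unit step are placeholders.

```python
from collections import deque
import numpy as np

def geodesic_distance_map(occupancy, goal):
    """BFS over free cells; occupancy is a 2D bool array (True = obstacle)."""
    dist = np.full(occupancy.shape, np.inf)
    dist[goal] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < occupancy.shape[0] and 0 <= nc < occupancy.shape[1]
                    and not occupancy[nr, nc] and dist[nr, nc] == np.inf):
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist

def greedy_step(dist, pos):
    """Pick the neighboring cell with the largest reduction in geodesic distance."""
    candidates = [(pos[0] + dr, pos[1] + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    in_bounds = lambda p: 0 <= p[0] < dist.shape[0] and 0 <= p[1] < dist.shape[1]
    return min(candidates, key=lambda p: dist[p] if in_bounds(p) else np.inf)
```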
### S7.2 2D-to-3D unprojection
For the $i$-th view, a 2D pixel coordinate $(u,v)$ with depth $z$ is
unprojected and transformed to its coordinate $(X,Y,Z)^{T}$ in the reference
frame:
$(X,Y,Z,1)=\mathbf{G}_{i}^{-1}\left(z\frac{u-c_{x}}{f_{x}},z\frac{v-c_{y}}{f_{y}},z,1\right)^{T}$ (1)
where $(f_{x},f_{y})$ and $(c_{x},c_{y})$ are the focal lengths and center of
the pinhole camera model and $\mathbf{G}_{i}\in SE(3)$ is the camera pose for
view $i$ relative to the reference view. This module unprojects each depth
image $I_{i}\in\mathbb{R}^{H\times W\times 3}$ into a pointcloud in the
reference frame $P_{i}\in\mathbb{R}^{M_{i}\times 3}$ with $M_{i}$ being the
number of pixels with an associated depth value.
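A small sketch implementing Eq. (1), unprojecting a depth image into a point cloud expressed in the reference frame; the camera intrinsics and the pose $\mathbf{G}_{i}$ are passed in explicitly.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy, G_i):
    """Unproject a depth image (H, W) into a point cloud in the reference frame.

    G_i is the 4x4 camera pose of view i relative to the reference view (Eq. 1).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = z * (u[valid] - cx) / fx
    y = z * (v[valid] - cy) / fy
    points_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # (4, M), camera frame
    points_ref = np.linalg.inv(G_i) @ points_cam                # apply G_i^{-1}
    return points_ref[:3].T                                     # (M, 3)
```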
## Appendix S8 Additional details of the Tidy Task
### S8.1 Metric Definitions in the Tidy Task
The metrics in the original TIDEE paper (Sarch, 2022) require separate human
evaluations on Amazon Mechanical Turk. We define a new set of metrics that
does not require expensive annotations from humans for every evaluation. Below
are detailed descriptions of each of the new metrics:
1. Correctly Moved (CM): Average number of correctly moved objects, i.e. objects that are out of place in the scene and moved by the agent. Higher is better.
2. Incorrectly Moved (IM): Average number of incorrectly moved objects, i.e. objects that are not out of place but were moved by the agent. Lower is better.
3. Energy: Following ProcThor (Deitke, 2022), for each receptacle object, the probability that each object type appears on its surface is computed across the AI2-THOR scenes. Here, we compute the total number of times each object type is on the receptacle type and divide it by the total number of times the receptacle type appears across the scenes. The energy metric in the Tidy Task is then defined as follows (a code sketch of this computation is given after this list):
$(P_{cleanup}-P_{original})/(P_{dirty}-P_{original})$ (2)
where $P_{cleanup}$, $P_{dirty}$, and $P_{original}$ represent the sum of the object location probabilities for the cleaned-up state of the room, the dirty/messy state of the room, and the original state of the room with objects put in place by human designers, respectively. Lower is better.
4. Steps: Average number of steps taken by the agent per episode.
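The energy computation in item 3 can be sketched as follows, assuming the co-occurrence statistics are given as (object type, receptacle type) placement pairs aggregated over the AI2-THOR scenes, together with receptacle appearance counts.

```python
from collections import Counter

def receptacle_probs(placements, receptacle_appearances):
    """P(object type appears on receptacle type), following the ProcThor-style statistics.

    placements: (object_type, receptacle_type) pairs aggregated over the AI2-THOR scenes.
    receptacle_appearances: Counter of how often each receptacle type appears across scenes.
    """
    pair_counts = Counter(placements)
    return {(obj, rec): n / receptacle_appearances[rec]
            for (obj, rec), n in pair_counts.items()}

def total_probability(room_state, probs):
    """Sum of object-location probabilities for one room state (higher = more typical)."""
    return sum(probs.get((obj, rec), 0.0) for obj, rec in room_state)

def energy_metric(cleanup_state, dirty_state, original_state, probs):
    """Eq. (2): (P_cleanup - P_original) / (P_dirty - P_original); lower is better."""
    p_clean = total_probability(cleanup_state, probs)
    p_dirty = total_probability(dirty_state, probs)
    p_orig = total_probability(original_state, probs)
    return (p_clean - p_orig) / (p_dirty - p_orig)
```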
### S8.2 Language Instructions for the Tidy Task
Since the Tidy Task does not include natural language instruction annotations,
we formulate the language instruction as the following to give to the HELPER
baseline and HELPER-X: “Tidy up the house. These are the out of place objects:
{detected_out_of_place_objects}. These are the receptacles in the current
scene: {detected_receptacles}”, where {detected_out_of_place_objects} are the
objects classified as out of place, and {detected_receptacles} are any
receptacle detected in the scene by the agent.
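For concreteness, this instruction string can be assembled with a trivial helper; the function and argument names below are placeholders.

```python
def tidy_task_instruction(out_of_place_objects, receptacles):
    """Format the fixed language instruction given to HELPER / HELPER-X for the Tidy Task."""
    return (
        "Tidy up the house. "
        f"These are the out of place objects: {', '.join(out_of_place_objects)}. "
        f"These are the receptacles in the current scene: {', '.join(receptacles)}"
    )

# Example:
# tidy_task_instruction(["Potato", "Knife"], ["DiningTable", "Microwave", "CounterTop"])
```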
To obtain the list of out of place objects, we allow the agents to use the TIDEE (Sarch, 2022) visual detector to determine whether each object detected during the mapping phase is out of place. We found that out of place detection benefits significantly from visual detection in the Tidy Task, and thus we do not use an LLM for detecting the out of place attribute. Notably, the additional out of place attribute added to the objects in the object memory can be shared across all benchmarks.
Listing S1: Prompt template for TEACh
You are an adept at translating human dialogues into sequences of actions for
household robots. Given a dialogue between a <Driver> and a <Commander>, you
convert the conversation into a Python program to be executed by a robot.
{API}
Write a script using Python and the InteractionObject class and functions
defined above that could be executed by a household robot.
Here are a few examples of typical inputs and outputs (only for in-context
reference):
{RETRIEVED_EXAMPLES}
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure that you output a consistent plan. For example, opening of the same object should not occur in successive steps.
3. Make sure the output is consistent with the proper affordances of objects. For example, a couch cannot be opened, so your output should never include the open() function for this object, but a fridge can be opened.
4. The input is dialogue between <Driver> and <Commander>. Interpret the dialogue into robot actions. Do not output any dialogue.
5. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
6. You can only pick up one object at a time. If the agent is holding an object, the agent should place or put down the object before attempting to pick up a second object.
7. Each object instance should instantiate a different InteractionObject class even if two object instances are the same object category.
Follow the output format provided earlier. Think step by step to carry out the
instruction.
Write a Python script that could be executed by a household robot for the
following:
dialogue: {command}
Python script:
Listing S2: Prompt template for ALFRED
You are an excellent interpreter of instructions for household tasks. Given a
task overview <High Level Goal> and step to perform <Low Level Goal>, you
break the instructions down into a sequence of robotic actions.
{API}
Write a script using Python and the InteractionObject class and functions
defined above that could be executed by a household robot.
Here are a few examples of typical inputs and outputs (only for in-context
reference):
{RETRIEVED_EXAMPLES}
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure that you output a consistent plan. For example, opening of the same object should not occur in successive steps.
3. Make sure the output is consistent with the proper affordances of objects. For example, a couch cannot be opened, so your output should never include the open() function for this object, but a fridge can be opened.
4. The input is high level task description and low level subgoals to perform the high level task. Interpret the instructions into robot actions.
5. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
6. You can only pick up one object at a time. If the agent is holding an object, the agent should place or put down the object before attempting to pick up a second object.
7. Each object instance should instantiate a different InteractionObject class even if two object instances are the same object category.
8. Always focus on solving the high level goal. Low level instructions should only be used to guide and plan better.
9. Before performing each action, check if that action is allowed for a particular receptacle class. A few examples have been given in API documentation.
10. Check if the receptacle needs to be opened before placing the object. If yes, then open the receptacle before placing the object.
Follow the output format provided earlier. Think step by step to carry out the
instruction.
Write a Python script that could be executed by a household robot for the
following:
{command}
Python script:
Listing S3: Prompt template for the Dialfred
You are an excellent interpreter of instructions for household tasks. Given a
task overview <High Level Goal> and step to perform <Low Level Goal>, you
break the instructions down into a sequence of robotic actions.
{API}
Write a script using Python and the InteractionObject class and functions
defined above that could be executed by a household robot.
Here are a few examples of typical inputs and outputs (only for in-context
reference):
{RETRIEVED_EXAMPLES}
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure that you output a consistent plan. For example, opening of the same object should not occur in successive steps.
3. Make sure the output is consistent with the proper affordances of objects. For example, a couch cannot be opened, so your output should never include the open() function for this object, but a fridge can be opened.
4. The input is high level task description and low level subgoals to perform the high level task. Interpret the instructions into robot actions.
5. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
6. You can only pick up one object at a time. If the agent is holding an object, the agent should place or put down the object before attempting to pick up a second object.
7. Each object instance should instantiate a different InteractionObject class even if two object instances are the same object category.
8. Make sure that you are solving both the high level goal and the low level goals. Some instructions may only be present in one or the other, so address everything from both.
9. Before performing each action, check if that action is allowed for a particular receptacle class. A few examples have been given in API documentation.
Follow the output format provided earlier. Think step by step to carry out the
instruction.
Write a Python script that could be executed by a household robot for the
following:
{command}
Python script:
Listing S4: Prompt template for the Tidy Task
Task: As an AI-driven cleaning robot, you are tasked with employing
commonsense reasoning to identify where to place out of place objects that
aren’t situated appropriately. Given a list of out of place objects, you are
to write a Python program to be executed by a robot that will bring the out of
place objects to a suitable location.
{API}
Write a script using Python and the InteractionObject class and functions
defined above that could be executed by a household robot.
Here are a few examples of typical inputs and outputs (only for in-context
reference):
{RETRIEVED_EXAMPLES}
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure that you output a consistent plan. For example, opening of the same object should not occur in successive steps.
3. Make sure the output is consistent with the proper affordances of objects. For example, a couch cannot be opened, so your output should never include the open() function for this object, but a fridge can be opened.
4. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
5. You can only pick up one object at a time. If the agent is holding an object, the agent should place or put down the object before attempting to pick up a second object.
6. Each object instance should instantiate a different InteractionObject class even if two object instances are the same object category.
7. Address each item systematically, one by one.
8. Base your decisions on your ingrained knowledge about the typical placement of day-to-day objects.
Follow the output format provided earlier. Think step by step to carry out the
instruction.
Write a Python script that could be executed by a household robot for the
following:
input: {command}
Python script:
Listing S5: Sample in-context example for TEACh
Dialogue input:
<Driver> what shall I do today? <Commander> clean all the bowls. <Driver>
where are the bowls? <Commander> start with the one by the stove. <Commander>
left. <Commander> rinse it with water. <Commander> great. <Driver> what next?
<Commander> the next one is in the fridge. <Commander> you need to rinse it
with water also. <Commander> great job. we are finished.
Python script:
target_bowl1 = InteractionObject("Bowl", landmark = "Stove", attributes =
["clean"])
target_bowl1.go_to()
target_bowl1.pickup()
target_bowl1.clean()
target_bowl1.put_down()
target_bowl2 = InteractionObject("Bowl", landmark = "Fridge", attributes =
["clean"])
target_bowl2.go_to()
target_bowl2.pickup()
target_bowl2.clean()
target_bowl2.put_down()
Listing S6: Sample in-context example for ALFRED
High Level Goal: To heat an apple and place in the black bin.
Low Level Goal:
(1) Turn around and walk to the kitchen island.
(2) Pick up the apple in front of the gold colored plate.
(3) Walk around the kitchen island and to the stove on the right, look above
the stove to
face the microwave.
(4) Place the apple inside the microwave, heat up/cook the apple, take the
apple out of the
microwave.
(5) Turn left, turn left at the fridge, turn left to face the kitchen island,
and look down
at the black bin.
(6) Place the apple in the bin on the right side.
Python script:
target_apple = InteractionObject("Apple", landmark = "CounterTop")
target_apple.go_to()
target_apple.pickup()
target_microwave = InteractionObject("Microwave")
target_microwave.go_to()
target_microwave.open() # open microwave before placing
target_apple.place(target_microwave)
target_microwave.close() # close microwave before toggle on
target_microwave.toggle_on() # toggle on to heat up & cook
target_microwave.toggle_off() # Important! toggle off to end heating. Do not
try to open microwave without toggle off!
target_microwave.open() # open microwave before picking
target_apple.pickup()
target_trashcan = InteractionObject("GarbageCan")
target_trashcan.go_to()
target_apple.place(target_trashcan)
Listing S7: Sample in-context example for Dialfred
High Level Goal: Put the watch in the bowl on the nightstand.
Low Level Goal:
(1) Turn around, walk around the bookshelf, turn to the desk.
(2) Pick the watch up off of the desk.
(3) Put the watch in the bowl on the desk.
(4) Pick up the bowl off of the desk.
(5) Walk back around the bookshelf, walk over between the two beds.
(6) Put the bowl on the nightstand.
Python script:
target_watch = InteractionObject("Watch", landmark = "Desk")
target_watch.go_to()
target_watch.pickup()
target_bowl = InteractionObject("Bowl", landmark = "Desk")
target_bowl.go_to()
target_watch.place(target_bowl)
target_bowl.go_to()
target_bowl.pickup()
target_nightstand = InteractionObject("SideTable", landmark = "Bed")
target_nightstand.go_to()
target_bowl.place(target_nightstand)
Listing S8: Sample in-context example for the Tidy Task
input: Tidy up the house. These are the out of place objects: Potato, Knife.
These are
the receptacles in the current scene: DiningTable, Microwave, CoffeeMachine,
CounterTop.
Python script:
# initialize the out of place objects
target_potato = InteractionObject("Potato")
target_knife = InteractionObject("Knife")
# initialize the placement objects to place the out of place object on
target_countertop = InteractionObject("CounterTop") # The best, commonsense
location for both the potato and knife is on the countertop.
# re-position potato to the countertop to tidy it up
target_potato.go_to()
target_potato.pickup()
target_countertop.go_to()
target_potato.place(target_countertop)
# re-position knife to the countertop to tidy it up
target_knife.go_to()
target_knife.pickup()
target_countertop.go_to()
target_knife.place(target_countertop)
Listing S9: Prompt template for Question Selection
You are an excellent interpreter of human instructions for household tasks.
Given a list of questions you can ask and information about the current
environment and context, you provide a question that should be asked in order
to give the agent useful information.
{API}
Write a script using Python using the class and functions defined above that
could be executed by a household robot.
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure you choose the question that provides the most information and is most relevant for the situation at hand.
3. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
Follow the output format provided earlier. Think step by step to carry out the
instruction.
Write a Python script that asks questions to help a household robot in the
following situation:
{context}
Python script:
Listing S10: Prompt template for Answer Parsing
You are an excellent interpreter of human instructions for household tasks.
Given the current context of the agent, a question that was asked, and an
answer that was given, you must write code for actions the agent should take
based on the answer provided.
{API}
Write a script using Python using the class and functions defined above that
could be executed by a household robot.
Adhere to these stringent guidelines:
1. Use only the classes and functions defined previously. Do not create functions that are not provided above.
2. Make sure you plan the most simple and direct interpretation of the answer given.
3. Prioritize the most specific information given. For example, an actual object name should be deemed more important than a region.
4. If multiple pieces of information are given, ensure you incorporate all of them into the script.
5. Object categories should only be chosen from the following classes: {OBJECT_CLASSES}
Write a Python script that asks questions to help a household robot in the
following situation:
{context}
{question}
{answer}
Python script:
Listing S11: Sample in-context example 1 for Question Asking
Question Selection Input:
Context: The agent does not know where the ButterKnife is.
Questioning Script:
askForLocation(’ButterKnife’)
Answer Parsing Input:
Context: The agent does not know where the ButterKnife is.
Question Asked: askForLocation(’ButterKnife’)
Answer Returned: The ButterKnife is to your left on the countertop.
Parsed Answer:
# Turn to the left as per the instruction
turn(’left’)
# Search for the butterknife on the counter top
search_near_other_object(’ButterKnife’, ’CounterTop’)
Listing S12: Sample in-context example 2 for Question Asking
Question Selection Input:
Context: The agent does not know where the SoapBar is.
Questioning Script:
askForLocation(’SoapBar’)
Answer Parsing Input:
Context: The agent does not know where the SoapBar is.
Question Asked: askForLocation(’SoapBar’)
Answer Returned: The SoapBar is to your front right in the garbage can.
Parsed Answer:
# Turn right as the SoapBar is to the front right
turn(’right’)
# Move forward to reach the garbage can
move(’forward’)
# Search for the SoapBar near the garbage can
search_near_other_object(’SoapBar’, ’GarbageCan’)
# pyWATTS: Python Workflow Automation Tool for Time Series
a) Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany; b) Cluster of Excellence “Machine Learning: New Perspectives for Science”, University of Tübingen, Germany
###### Abstract
Time series data are fundamental for a variety of applications, ranging from
financial markets to energy systems. Due to their importance, the number and
complexity of tools and methods used for time series analysis is constantly
increasing. However, due to unclear APIs and a lack of documentation,
researchers struggle to integrate them into their research projects and
replicate results. Additionally, in time series analysis there exist many
repetitive tasks, which are often re-implemented for each project,
unnecessarily costing time. To solve these problems we present pyWATTS, an
open-source Python-based package that is a non-sequential workflow automation
tool for the analysis of time series data. pyWATTS includes modules with
clearly defined interfaces to enable seamless integration of new or existing
methods, subpipelining to easily reproduce repetitive tasks, load and save
functionality to simply replicate results, and native support for key Python
machine learning libraries such as scikit-learn, PyTorch, and Keras.
###### keywords:
Time Series Analysis; Python; Workflow Automation; Machine Learning; Pipeline
## 1 Introduction
In many areas, time series data are the most prominent form of data collected.
In contrast to other sequential data such as speech data, time series data are
not only ordered, but the time stamp associated with the observation might
also have explicit information. For example, looking at energy time series,
the demand at a specific time step depends on calendar-based information such
as the day of the week or the season. Generally, time series analysis uses
various algorithms from statistics to deep learning to answer questions about
time-dependent systems.
Although more and more code from researchers focusing on time-dependent data
is publicly available, there is still a need for respective tools. These tools
should allow automating the workflow in time series analysis and an easy
integration of new research approaches with third-party code. Automating the
workflow is necessary, since many preprocessing tasks are repetitive, such as
accounting for seasonality, adding calendar-based features, or detecting and
imputing missing values. As a result of the lacking tools, researchers often
re-implement these repetitive tasks at the unnecessary expense of time.
Moreover, it is challenging to integrate new or alternative approaches into
existing code workflows and, although the push towards open science increases
the importance of reproducibility, it is often difficult to replicate earlier
experimental results. Thus, any tool to aid researchers in automated time
series analysis needs to focus on two features: re-usability and
reproducibility of existing and new code.
Several factors currently prevent good integration and re-usability of
publicly available code for time series analysis. For example, most authors
only publish their proposed new algorithm or method, excluding any steps
necessary to prepare the data. Using their code then entails re-writing the
required preprocessing method. Additionally, interfaces are hardly ever
defined and basic unit testing is often non-existent, which regularly leads to
re-implementation being the only quick and attainable solution. Regarding
reproducibility, some of the issues include platform-dependent code, no
information on parameter settings, or insufficient description on the order in
which function or scripts need to be executed, making it almost impossible to
reproduce results.
A remedy to the issues mentioned is workflow automation using pipelines and
modules. In a pipeline, one can define the workflow, i. e. the exact order in
which several modules, each including a method, are run to achieve a specific
result. No matter if we wish to use or reproduce code with pipelines, the
steps needed to reach a specific result can be non-sequential. For example, we
might wish to run parts of the code in parallel, have branching and merging
pathways in the workflow, or even condition-dependent paths. Furthermore, the
modules have a clearly defined structure which allows simple integration of
new or alternative methods into an existing workflow.
Current Python tools which allow the realisation of pipelines are, for example, scikit-pipeline (https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) [8] and river (https://github.com/online-ml/river) [6]. While scikit-pipeline is part of the package scikit-learn [8], river is a merger of CremeML and Sk-multiflow. However, both tools only allow linear execution of modules, where neither parallel nor conditional execution is possible. Only the package baikal (https://github.com/alegonz/baikal/) provides non-sequential pipelines
inherited from scikit-learn. It is based on wrappers for scikit-learn modules,
where each module has to be wrapped individually. Therefore, it is rather
tedious. Furthermore, it does not allow integration of other libraries such as
PyTorch [7] and Keras [1], which are useful for deep learning-based time
series approaches. Additionally, baikal aims to combine several scikit-learn
modules such that they work as one module and thus focuses on the model
creation only.
In the present paper, we introduce pyWATTS, an open-source Python-based
package that provides a non-sequential workflow automation tool for the
analysis of time series data. In contrast to baikal, pyWATTS includes generic
wrappers for libraries such as scikit-learn, PyTorch, and Keras, allows the
pipelines to have conditions, and is able to visualise intermediate results.
Summarising the key features, pyWATTS
* •
is a platform-independent solution to implement workflows from start to finish
using pipelines. Thereby, time series experiments can be performed in an
organised manner and in any environment that supports pyWATTS.
* •
enables re-usability through subpipelining. Any useful part of a time series
experiment, e. g. preprocessing, can be defined as a subpipeline and
integrated into other pipelines without further adaption and independently of
the original experiment.
* •
allows saving and loading of any given pipeline configuration to reproduce
results at a later date.
* •
enables simple integration of new research approaches through a plug-and-play
style environment where modules implemented in pyWATTS can be exchanged
seamlessly between pipelines through a modular architecture with data handling
through xarray [4].
* •
includes a clear API of the modules, i. e. transform and fit methods, ensuring
that pipelines within pyWATTS are adaptable and that modules can easily run on
multiple data sets, at different points in a pipeline, and in various
pipelines.
* •
allows using different modules for the same part in the pipeline such that a
condition mechanism decides which module is executed depending on the
characteristics of the applied data.
* •
takes pandas DataFrame[5, 9] or xarray Dataset as input, allowing users to
flexibly read the data from any source (file, database, website) with their
method of choice.
* •
is able to use callbacks, e. g. for visualising, analysing, and writing the
intermediate results of modules.
To the best of our knowledge, pyWATTS is the first tool to automate time
series analysis workflows using non-sequential pipelines in this form. The
remainder of the paper is structured as follows. We first introduce the
implementation and architecture of pyWATTS before providing an overview on the
availability and re-usability. We conclude with an outlook on ongoing and
future projects that use pyWATTS.
## 2 Implementation and architecture
Implementing the features mentioned above requires a careful design of the
architecture. In this section, we, therefore, describe pyWATTS’ architecture
and implementation.
The pyWATTS package is written in the programming language
Python (https://www.python.org/). To realise non-sequential workflows, it uses
three classes as illustrated in Figure 1; a _pipeline_ , representing the
workflow as a graph, a _step_ which represents a node in the graph referencing
its dependencies with edges, and a _module_ , representing the algorithm
running in a step. We introduce these three classes in more detail in the
following.
Figure 1: pyWATTS uses the three classes _pipeline_ , _step_ , and _module_ to
realise non-sequential workflows.
Every algorithm used in pyWATTS is implemented with the module class. pyWATTS
distinguishes between algorithms requiring training and algorithms that can be
applied without training. For both types, the module class’s _transform_
method must be implemented to apply the algorithm. For algorithms requiring
training, we additionally have to implement the module class’s _fit_ method
that defines the training of the machine learning model. Modules must also
include methods to save and load all information necessary for executing the
module.
In general, the implementation of modules follows the concepts introduced by
scikit-learn [8]. pyWATTS itself provides a comprehensive library of
algorithms, as listed in Table 1. The currently available modules implement
utile algorithms for time series analytics and serve as a guideline to
implement further modules. pyWATTS also provides special modules, called
wrappers, to seamlessly integrate existing algorithms and models from scikit-
learn [8] or deep learning models implemented in Keras [1], or PyTorch [7],
Table 1: The library of pyWATTS contains several utile algorithms when dealing with time series. Module name | Description
---|---
Calendar extraction | Extracts or extends a time-series with calendar information such as weekdays or holidays
Change Direction | Extracts if the change is positive or negative for each time point in a time series
Clock Shift | Shifts the data with a certain offset
Differentiate | Calculates the n-th order difference of a time series
Linear Interpolator | Creates a linear interpolation
Missing Value Detector | Detects missing values such as ”NaN”
Resampler | Reduces or increases the temporal resolution of a given time series
Rolling Mean | Calculates a rolling mean over a specific window size
RMSE Calculator | Calculates the Root Mean Squared Error (RMSE)
Sampler | Creates samples with a specified sample size
Trend Extraction | Extracts a trend specified by a period and a length
Sklearn Wrapper | Wraps machine learning modules from the scikit-learn library
Keras Wrapper | Wraps deep learning neural networks implemented in the Keras library
PyTorch Wrapper | Wraps deep learning neural networks implemented in the PyTorch library
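The module interface can be illustrated with a minimal custom module. The sketch below follows the fit/transform contract and scikit-learn-style parameter handling described above; in pyWATTS proper a module would subclass the package's base module class, whose exact name we deliberately do not assume here.

```python
import xarray as xr

class RollingMeanModule:
    """Illustrative module following the fit/transform interface described above.

    A sketch only: real pyWATTS modules subclass the package's base module class
    and additionally provide save and load methods.
    """

    def __init__(self, window: int = 24):
        self.window = window

    def fit(self, x: xr.DataArray) -> None:
        # This module needs no training; modules wrapping ML models would train here.
        pass

    def transform(self, x: xr.DataArray) -> xr.DataArray:
        # Rolling mean over the time dimension, mirroring the "Rolling Mean" module in Table 1.
        return x.rolling({x.dims[0]: self.window}, min_periods=1).mean()

    def get_params(self) -> dict:
        return {"window": self.window}

    def set_params(self, window: int) -> None:
        self.window = window
```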
Given a module, pyWATTS creates one or multiple steps (a step contains zero or one module; a module can be used in multiple steps). The step class organises the execution of the pipeline. A step collects and merges the
results of its dependencies, calls the fit and transform method of its module,
and provides its output to the pipeline for the subsequent steps. Moreover, a
step can execute callbacks defined by the user, e. g. for visualising,
analysing, and writing the module’s intermediate result. Furthermore, a step
controls the execution of the module based on conditions defined by the user.
The pipeline class organises the steps in nodes and creates a graph, where
every step input is represented as an edge. This graph supports branching and
merging of paths and is used to define the execution order of the steps. This
way, all previous steps represented as dependencies have to be successfully
executed before the current step itself is executed. The pipeline also serves
as the interface to the user and provides control commands. These commands
include training and executing a pipeline, as well as saving and loading the
whole pipeline.
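The step/pipeline relationship can be pictured as a small dependency graph executed in topological order. The following is a conceptual sketch with illustrative step names, not pyWATTS code.

```python
from graphlib import TopologicalSorter

# Conceptual sketch: each step names its dependencies (the edges of the workflow graph).
# pyWATTS builds an equivalent graph internally; the step names here are illustrative only.
steps = {
    "calendar": set(),                      # extract calendar features from the raw input
    "imputation": set(),                    # impute missing values in the load series
    "sampler": {"calendar", "imputation"},  # merge both branches into training samples
    "forecast": {"sampler"},                # fit/transform the forecasting model
    "rmse": {"forecast"},                   # evaluate the result
}

for step in TopologicalSorter(steps).static_order():
    # a real step would collect its dependencies' outputs, call fit/transform,
    # run user-defined callbacks, and hand its output to the subsequent steps
    print(f"executing step: {step}")
```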
Based on the mentioned three classes, pyWATTS implements the following three
functionalities for an easy structuring and flexible application of pipelines:
Batch/online learning:
By specifying that the pipeline processes only one time step at a time, the
pipeline can be executed iteratively.
Conditional branching:
Depending on the applied data, condition steps with ”if-then-else” can be used
to select different paths of the pipeline for execution (see Figure 2).
Subpipelines:
Grouping steps of the pipeline in subpipelines allows an easy structuring and
naming of certain parts of the workflow (see Figure 3).
Figure 2: pyWATTS uses condition steps with ”if-then-else” for conditional branching in pipelines.
Figure 3: pyWATTS makes use of subpipelines to easily structure and name parts of the workflow.
## 3 Quality control
For quality control, we apply comprehensive testing to pyWATTS, use
programming guidelines and code reviews, as well as provide up-to-date
documentation and examples. For automated testing, we use GitHub Actions. All
core classes implementing the main functionality of the pipeline and the steps
are tested with unit tests. Furthermore, unit tests cover the wrappers and the
modules of the library. To ensure the correct interaction of steps and modules
as well as saving and loading of whole workflows, exemplary pipelines are
implemented as integration tests.
Furthermore, we follow programming guidelines to ensure high-quality source
code. These guidelines define the naming of branches and code conventions as
well as prescribe the use of linting software such as
pylint (https://www.pylint.org/). Additionally, the guidelines require
developers to implement tests for each module and to use loggers with
appropriate logging messages. Finally, the guidelines demand developers to use
type annotations for arguments, variables, and return values. In the GitHub
Actions, we automatically check whether code conventions are met using
flake8 (https://gitlab.com/pycqa/flake8). To ensure compliance with all
guidelines, we additionally perform manual code reviews on pull requests.
Maintainers review pull requests concerning their correct operation, coverage
through tests, and compliance to programming guidelines and code conventions
before the pull requests are merged into the master branch.
Lastly, we maintain up-to-date documentation. Based on the annotated source code and reStructuredText files, the documentation of pyWATTS (available at https://pywatts.readthedocs.io/en/latest/) is automatically generated using sphinx (https://www.sphinx-doc.org/en/master/) and Read the Docs (https://readthedocs.org/).
Besides serving as integration tests, the provided examples introduce new
users to pyWATTS and its features and support them in creating working
pipelines in pyWATTS. In the following, we briefly describe the provided
examples, which are detailed in the documentation.
* •
To prevent fundamental errors during the creation of a pipeline, a simple
example explains how one can create a pipeline for electrical load forecasting
and how one can add modules such as the Calendar Extraction to the pipeline.
* •
To test the functionality of the condition mechanism, we provide an example
that changes the method for electrical load forecasting depending on day-time
and night-time.
* •
Advanced examples aim to avoid mistakes in the application of deep learning
frameworks in pyWATTS. In the examples using Keras [1] or PyTorch [7], the
pipelines train simple deep learning models.
## 4 Availability
### Operating system
Platform independent
### Programming language
Python
### Additional system requirements
pyWATTS is designed to perform various time series analysis tasks on data sets
of arbitrary size. Therefore, hardware requirements depend on the size of the
data set and the task being performed.
### Dependencies
The core pyWATTS dependencies are the following:
* •
scikit-learn – 0.23.2
* •
cloudpickle – 1.6.0
* •
xarray – 0.16.1
* •
numpy – 1.19.2
* •
pandas – 1.1.5
* •
matplotlib – 3.3.2
* •
tensorflow – 2.3.1
* •
workalendar – 12.0.0
Dependencies required for development purposes comprise the following:
* •
pytest – 6.1.1
* •
sphinx – 3.2.1
* •
pylint – 2.6.0
* •
pytest-cov – 2.10.1
### Software location:
Archive
Name:
Zenodo
Persistent identifier:
https://doi.org/10.5281/zenodo.4637197
Licence:
MIT Licence (https://opensource.org/licenses/MIT)
Publisher:
Zenodo
Version published:
0.1.0
Date published:
25.03.2021
Code repository: GitHub
Name:
pyWATTS
Persistent identifier:
https://github.com/KIT-IAI/pyWATTS
Licence:
MIT Licence
Date published:
25/09/2020
### Language
English
## 5 Reuse potential
Due to the architecture and the modular structure of pyWATTS, anyone who
wishes to analyse time series can use pyWATTS out-of-the-box. It enables the
users to easily select the modules and determine the pipeline structure
relevant for their specific use case, such as forecasting. Additionally, the
possibility to save and load pipelines, together with the platform independence
of pyWATTS, allows for easy reproduction of research results. Moreover, common
Python-based machine learning libraries can be used within pyWATTS. For
example, we provide wrapper modules for scikit-learn [8], Keras [1], and
PyTorch [7] to allow the inclusion of the available functions.
Moreover, pyWATTS’ users are supported by comprehensive documentation for its
core structure and the individual modules as well as detailed examples. In
case of questions, the core developer team can also be contacted with the help
of GitHub issues or the pyWATTS contact email address. The generous MIT
license allows research, commercial and non-commercial use, and development
of the package as either an anonymous user, private developer or publicly
contributing developer. All users can stick to the existing modules and
pipelines, extend them based on known or unknown issues, or create new modules
and pipelines. Whether any changes to the modules are made locally or through
the public repository is up to the user to decide.
The developer team, for example, wants to use pyWATTS in various research
applications in the future. For preprocessing, we plan to extend pyWATTS with
the Copy Paste Imputation of missing values for energy time series as
described in [10]. We also plan to use pyWATTS for time series forecasting,
e.g., by using Profile Neural Networks [3]. Furthermore, we intend to extend
pyWATTS for the insertion of typical anomalies in energy time series to have
data sets with ground truth for anomaly detection and anomaly handling. To
generate realistic synthetic energy time series, we also aim to use pyWATTS.
An interface for pipeline tuning and selection will further assist in
automating the iterative design process. Lastly, we want to deploy pyWATTS as
an execution environment in the research infrastructure Energy Lab 2.0 [2].
In a nutshell, pyWATTS provides an extendable framework for
automating time series analysis workflows. It uses comprehensible pipelines
and is able to integrate established statistical, machine learning, and deep
learning frameworks. Thus, pyWATTS makes it easy to develop, adapt, and
reproduce pipeline-based experiments for energy time series analysis.
## Acknowledgements
We thank Simon Waczowicz for the valuable input on the concept of pyWATTS.
## Funding statement
This project is funded by the Helmholtz Association’s Initiative and
Networking Fund through Helmholtz AI, the Helmholtz Association under the
Program “Energy System Design”, the Joint Initiative “Energy System Design - A
Contribution of the Research Field Energy”, the Helmholtz Metadata
Collaboration, and the German Research Foundation (DFG) as part of the
Research Training Group 2153 “Energy Status Data: Informatics Methods for its
Collection, Analysis and Exploitation” and under Germany’s Excellence Strategy
– EXC number 2064/1 – Project number 390727645.
## References
* [1] François Chollet “Keras”, https://keras.io, 2015
* [2] Veit Hagenmeyer et al. “Information and Communication Technology in Energy Lab 2.0: Smart Energies System Simulation and Control Center with an Open-Street-Map-Based Power Flow Simulation Example” In _Energy Technology_ 4.1, 2016, pp. 145–162 DOI: 10.1002/ente.201500304
* [3] Benedikt Heidrich et al. “Forecasting Energy Time Series with Profile Neural Networks” In _Proceedings of the Eleventh ACM International Conference on Future Energy Systems_ Association for Computing Machinery, 2020, pp. 220–230 DOI: 10.1145/3396851.3397683
* [4] Stephan Hoyer and Joseph J. Hamman “xarray: N-D labeled Arrays and Datasets in Python” In _Journal of Open Research Software_ 5 Ubiquity Press, Ltd., 2017 DOI: 10.5334/jors.148
* [5] Wes McKinney “Data Structures for Statistical Computing in Python” In _Proceedings of the 9th Python in Science Conference_ , 2010, pp. 56–61 DOI: 10.25080/Majora-92bf1922-00a
* [6] Jacob Montiel et al. “River: machine learning for streaming data in Python” In _arXiv:2012.04740_ , 2020 arXiv:2012.04740 [cs.LG]
* [7] Adam Paszke et al. “PyTorch: An Imperative Style, High-Performance Deep Learning Library” In _Advances in Neural Information Processing Systems_ 32 Curran Associates, Inc., 2019, pp. 8026–8037 URL: https://proceedings.neurips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf
* [8] Fabian Pedregosa et al. “Scikit-learn: Machine Learning in Python” In _Journal of Machine Learning Research_ 12, 2011, pp. 2825–2830
* [9] The pandas development team “pandas-dev/pandas: Pandas” Zenodo, 2020 DOI: 10.5281/zenodo.3509134
* [10] Moritz Weber et al. “Data-Driven Copy-Paste Imputation for Energy Time Series” In _arXiv:2101.01423_ , 2021
where \( \Lambda_{j_0}\defeq \Lambda(j_0, \cdot,\cdot) \).
Moreover, for every \( A_0 \in \TBundle_{j_0} \SectionSpaceAbb{I}(V, \omega) \),
\begin{equation}
\label{kaehler_contraction_linear_second}
\int_0^1 \left(\bar{\Lambda}_j^* \, \Omega\right)_{(j_0, t)}
(\difp_t, A_0) \dif t = \frac{1}{4} \tr \bigl(\phi_{j_0}(j) A_0\bigr),
\end{equation}
where \( \bar{\Lambda}_j \defeq \Lambda(\cdot, j, \cdot) \).
Using the identities
\begin{align}
\tangent_j \phi_{j_0} (A) &= 2 (j+j_0)^{-1} A (j+j_0)^{-1} j_0,
\\
\tangent_S \phi_{j_0}^{-1} (C) &= 2 j_0 (1-S)^{-1} C (1-S)^{-1}
\end{align}
we find
\begin{equation}
\tangent_j \Lambda_{j_0} (\difp_t) = - 2 S (1+tS)^{-2} j_0
\end{equation}
\begin{equation}
\tangent_j \Lambda_{j_0}(A) =
4 t (1+tS)^{-1} (j+j_0)^{-1} A (j+j_0)^{-1} (1-tS)^{-1}.
\end{equation}
Here and in the following, we abbreviated \( S \defeq \phi_{j_0}(j) \).
Using these formulas
and (<ref>),
we calculate
\begin{equation}\begin{split}
\left(\Lambda_{j_0}^* \Omega\right)_{(j, t)} (\difp_t, A)
&= \Omega_{\Lambda_{j_0}(j,t)}\bigl(\tangent_j \Lambda_{j_0}
(\difp_t), \tangent_j \Lambda_{j_0}(A)\bigr)
\\
&= 2t \tr\left(S \bigl(1-t^2 S^2\bigr)^{-2} (j+j_0)^{-1} A (j+j_0)^{-1} \right).
\end{split}\end{equation}
Since
\begin{equation}
\difFrac{}{t} \bigl(1-t^2 S^2\bigr)^{-1} = 2 t S^2 \bigl(1-t^2 S^2\bigr)^{-2},
\end{equation}
we obtain (note that it is enough to verify this identity for invertible \( S \), by density of the invertible elements in \( \End(V) \))
\begin{equation}\begin{split}
\int_0^1 \left(\Lambda_{j_0}^* \Omega\right)_{(j, t)} (\difp_t, A) \dif t
&= \tr \left(S \bigl(1-S^2\bigr)^{-1} (j+j_0)^{-1} A (j+j_0)^{-1}\right) \\
&= \frac{1}{4} \tr \bigl(S A\bigr).
\end{split}\end{equation}
This establishes (<ref>).
The second identity (<ref>) follows
from a similar, but slightly more involved, calculation.
Indeed, the derivative of the map \( \bar{\Lambda}_{j}(j_0, t) \defeq
\Lambda(j_0, j, t) \) is given by
\begin{equation}
\tangent_{j_0} \bar{\Lambda}_j (A_0) =
(1-t) (1+tS)^{-1} (A_0 - t S A_0 S) (1-tS)^{-1}.
\end{equation}
Thus, we find
\begin{equation}
\left(\bar{\Lambda}_{j}^* \Omega\right)_{(j_0, t)} (\difp_t, A_0)
= \frac{1}{2} \tr \left(\bigl(1-t^2 S^2\bigr)^{-2} S \bigl(1-tS^2\bigr)
(1-t) A_0 \right).
\end{equation}
Since
\begin{equation}
\difFrac{}{t} \bigl(-2 S^2 t+S^2 + 1\bigr)\bigl(1-t^2 S^2\bigr)^{-1}
= -2 S^2 (1-t) \bigl(1-t S^2\bigr) \bigl(1-t^2 S^2\bigr)^{-2},
\end{equation}
this expression can easily be integrated over \( t \) to obtain
\begin{equation}
\int_0^1 \left(\bar{\Lambda}_{j}^* \Omega\right)_{(j_0, t)}
(\difp_t, A_0) \dif t
= \frac{1}{4} \tr \bigl(S A_0\bigr).
\end{equation}
This verifies (<ref>) and finishes
the proof.
As a direct application, let us compute the momentum map \( J \) for the
action of \( \SpGroup(V, \omega) \) on \( \SectionSpaceAbb{I}(V, \omega) \).
According to <ref>, the unique momentum map
vanishing at \( j_0 \) is given by
\begin{equation}
\kappa\bigl(J(j), \xi \bigr) =
\frac{1}{4} \tr \bigl(\phi_{j_0}(j) (\xi \ldot j +
\xi \ldot j_0)\bigr) = \frac{1}{2} \tr \bigl((j-j_0)\xi\bigr).
\end{equation}
Thus, identifying \( \SpAlgebra(V, \omega)^* \) with
\( \SpAlgebra(V, \omega) \) using the pairing \( \kappa(\alpha, \xi)
= \frac{1}{2} \tr(\alpha \xi) \), we find \( J(j) = j - j_0 \).
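As a quick illustration (not part of the original argument), this formula can be made completely explicit in the lowest-dimensional case: assume \( V = \R^2 \) with the standard symplectic form and consider the one-parameter family of compatible complex structures
\begin{equation}
j_0 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
\qquad
j_t = \begin{pmatrix} 0 & -1/t \\ t & 0 \end{pmatrix}, \quad t > 0,
\qquad
J(j_t) = j_t - j_0 = \begin{pmatrix} 0 & 1 - 1/t \\ t - 1 & 0 \end{pmatrix}.
\end{equation}
Each \( j_t \) squares to \( -1 \) and is \( \omega \)-compatible, \( J(j_t) \) is traceless and thus lies in \( \SpAlgebra(\R^2, \omega) \), and \( J(j_t) \) vanishes precisely at \( t = 1 \), that is, at the reference structure \( j_0 \), as required by the normalization.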
We now finish the proof of <ref>.
In the nonlinear setting, the momentum map for the action of the
symplectomorphism group \( \DiffGroup(M, \omega) \) on
\( \SectionSpaceAbb{I} \equiv \SectionSpaceAbb{I}(M, \omega) \)
is computed in a very similar way, with the twist that the final
dualization involves integration by parts.
Let us discuss the details.
First, we extend the definition of the generalized Cayley transform
\( \phi \) and the contraction \( \Lambda \) to \( \SectionSpaceAbb{I} \)
by applying these maps pointwise.
The resulting map \( \Lambda: \SectionSpaceAbb{I} \times
\SectionSpaceAbb{I} \times [0,1] \to \SectionSpaceAbb{I} \) is smooth
in the \( \sFunctionSpace \)-topology by
[Theorem II.2.2.6]Hamilton1982.
Since \( \DiffGroup(M, \omega) \) acts on \( \SectionSpaceAbb{I} \)
by push-forward, the infinitesimal action is given by
\( \xi \ldot j = - \difLie_{\xi} j \), where \(\xi
\in \VectorFieldSpace(M, \omega ) \defeq \set{\zeta \in \VectorFieldSpace(M) \given
\difLie_{\zeta} \omega =0} \).
By <ref>,
the momentum map \( \SectionSpaceAbb{J}: \SectionSpaceAbb{I} \to
\DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M) \)
vanishing at \( j_0 \in \SectionSpaceAbb{I} \) is uniquely characterized by
\begin{equation}
\label{eq:kaehler:momentumMapIntegralFormula}
\kappa\bigl(\SectionSpaceAbb{J}(j), \xi \bigr) = -
\frac{1}{4} \int_M \tr \bigl((j+j_0)^{-1}(j-j_0)
\difLie_\xi (j + j_0)\bigr) \, \mu_\omega.
\end{equation}
In order to eliminate the derivative in \( \xi \)-direction, following
Garcia-PradaSalamonTrautwein2018, we fix a torsion-free
connection \( \nabla \) on \( M \) with \( \nabla \mu_\omega = 0 \) and
introduce \( \tau(j, A) \in \DiffFormSpace^1(M) \), for \( j \in
\SectionSpaceAbb{I} \) and \( A \in \TBundle_j \SectionSpaceAbb{I} \),
\begin{equation}
\label{eq:kaehler:tau}
\tau^\nabla(j, A)(Y) = \tr \bigl((\nabla A) Y\bigr) +
\frac{1}{2} \tr \left(A j \, \nabla_Y j\right)
= Y^r \nabla_p \tensor{A}{_r^p} +
\frac{1}{2} Y^r \tensor{A}{_s^p}\tensor{j}{_q^s} \, \nabla_r \tensor{j}{_p^q},
\end{equation}
for \( Y \in \VectorFieldSpace(M) \).
By [Theorem 2.6]Garcia-PradaSalamonTrautwein2018,
\( \tau^\nabla(j, A) \) does not depend on the connection \( \nabla \).
However, for the Levi-Civita connection of \( g_j \), the expression
of \( \tau^\nabla \) simplifies considerably.
Let \( j \in \SectionSpaceAbb{I} \) and
\( A \in \TBundle_j \SectionSpaceAbb{I} \).
Then, for the Levi-Civita connection \( \nabla \) of \( g_j \),
we have
\begin{equation}
\label{eq:kaehler:tauForLeviCivita}
\tau^\nabla(j, A)(Y) = \tr \bigl((\nabla A)Y\bigr) =
Y^r \nabla_p \tensor{A}{_r^p}.
\end{equation}
By taking the covariant derivative of \( g_j(j \cdot, j \cdot)
= g_j(\cdot, \cdot) \), we see that \( \nabla_Y j \) is
antisymmetric with respect to \( g_j \) for every
\( Y \in \VectorFieldSpace(M) \).
Thus, also \( j \nabla_Y j \) is antisymmetric.
But \( A \in \TBundle_j \SectionSpaceAbb{I} \) is a \( g_j \)-symmetric
tensor, so \( A j \nabla_Y j \) is trace-free.
The importance of \( \tau \) lies in the following integration
by parts identity [Theorem 2.6]Garcia-PradaSalamonTrautwein2018:
\begin{equation}
\label{eq:kaehler:integrationByPartsTau}
\frac{1}{2} \int_M \tr(A \difLie_\xi j) \, \mu_\omega
= -\int_M \tau^\nabla(j, Aj) \wedge (\xi \contr \mu_\omega)
= - \kappa\bigl(\tau^\nabla(j, Aj) \wedge \omega^{n-1}, \xi\bigr),
\end{equation}
for \( \xi \in \VectorFieldSpace(M, \omega) \) and
\( A \in \TBundle_j \SectionSpaceAbb{I} \).
Using this fact, we read-off from (<ref>)
that the momentum map is given by
\begin{equation}
\SectionSpaceAbb{J}(j) =
\frac{1}{2} \Bigl(\tau^\nabla\bigl(j, (j+j_0)^{-1}(j-j_0) j\bigr)
+ \tau^\nabla\bigl(j_0, (j+j_0)^{-1}(j-j_0) j_0\bigr)\Bigr) \wedge \omega^{n-1}.
\end{equation}
Now, for \( Y \in \VectorFieldSpace(M) \), we have by (<ref>)
\begin{equation}\begin{split}
\tau^\nabla\bigl(j, &(j+j_0)^{-1}(j-j_0) j\bigr)(Y) +
\tau^\nabla\bigl(j_0, (j+j_0)^{-1}(j-j_0) j_0\bigr)(Y)
\\
&= \tr \left(\nabla \bigl((j+j_0)^{-1}(j-j_0) (j+j_0)\bigr) Y
- \frac{1}{2} (j+j_0)^{-1}(j-j_0) \, \nabla_Y (j+j_0)\right)
\\
&= -\tr \bigl(\nabla (j-j_0) Y\bigr) -
\frac{1}{2} \tr \left((j+j_0)^{-1}(j-j_0) \,\nabla_Y (j+j_0)\right).
\end{split}\end{equation}
This finishes the proof of <ref>.
Alternatively, one can directly verify <ref>
using the integration by parts relation (<ref>)
and the following expression for the variation of the \( 1 \)-form
\( J(j_0, j) \).
For every \( j_0, j \in \SectionSpaceAbb{I} \) and \( A \in \TBundle_j
\SectionSpaceAbb{I} \), we have
\begin{equation}
\label{eq:kaehler:variationJ}
\tangent_j J(j_0, \cdot)(A) = - \frac{1}{2} \tau^\nabla(j, A)
- \frac{1}{4} \dif \tr\bigl(A \phi_{j_0}(j)\bigr).
\end{equation}
Continuing to use the notation \( \phi_{j_0}(j) \) for the Cayley transform
introduced in (<ref>), we obtain from (<ref>)
\begin{equation}\begin{split}
\tangent_j J(j_0, \cdot)(A)(Y)
&= -\frac{1}{2} \tr\bigl((\nabla A)Y\bigr) -
\frac{1}{2} \tr \bigl((j+j_0)^{-1} A (j+j_0)^{-1} j_0
\nabla_Y (j + j_0)\bigr) \\
&\qquad- \frac{1}{4} \tr \bigl(\phi_{j_0}(j) \nabla_Y A\bigr).
\end{split}\end{equation}
On the other hand,
\begin{equation}
\phi_{j_0}(j) jA = Aj - 2 (j+j_0)^{-1} A (j+j_0)^{-1} (j_0 + j).
\end{equation}
Since \( \phi_{j_0}(j) \), \( A \), and \( \nabla_Y j \) all anti-commute
with \( j \), we obtain
\begin{equation}
0 = \tr(\phi_{j_0}(j) jA) = \tr\bigl(Aj \nabla_Y j\bigr) -
2 \tr\bigl((j+j_0)^{-1} A (j+j_0)^{-1} (j_0 + j) \nabla_Y j \bigr)
\end{equation}
and thus
\begin{equation}\begin{split}
\tr\bigl(Aj \nabla_Y j\bigr) &- 2 \tr\bigl((j+j_0)^{-1} A
(j+j_0)^{-1} j_0 \nabla_Y (j+j_0) \bigr) \\
&= 2 \tr\left((j+j_0)^{-1} A (j+j_0)^{-1} \bigl(j \nabla_Y j
- j_0 \nabla_Y j_0 \bigr)\right) \\
&= 2 \tr\left(A (j+j_0)^{-1} \bigl(j \nabla_Y j -
j_0 \nabla_Y j_0 \bigr) (j+j_0)^{-1}\right) \\
&= -\tr(A \nabla_Y \phi_{j_0}(j)).
\end{split}\end{equation}
Hence we conclude
\begin{equation}\begin{split}
\tangent_j J(j_0, \cdot)(A)(Y)
&= -\frac{1}{2} \tr\bigl((\nabla A)Y\bigr) -
\frac{1}{4} \tr\bigl(Aj \nabla_Y j\bigr) -
\frac{1}{4}\tr(A \nabla_Y \phi_{j_0}(j)) \\
&\qquad- \frac{1}{4} \tr \bigl(\phi_{j_0}(j) \nabla_Y A\bigr)\\
&= - \frac{1}{2} \tau^\nabla(j, A)(Y) -
\frac{1}{4} \nabla_Y \tr\bigl(A \phi_{j_0}(j)\bigr).
\end{split}\end{equation}
This finishes the proof.
In order to give a geometric interpretation of the \( 1 \)-form, and
thereby of the momentum map \( \SectionMapAbb{J} \), we recall a few
basic facts from almost complex geometry; see
GarciaPradaSalamon2020,Gauduchon2017 for more details.
For every almost complex structure \( j \), the Levi-Civita
connection \( \nabla \) associated with the Riemannian metric
\( g_j = \omega(\cdot, j \cdot) \) induces the Chern connection
(also called the second canonical Hermitian connection)
\begin{equation}
\tilde{\nabla}_X Y = \nabla_X Y - \frac{1}{2} j \, (\nabla_X j) Y.
\end{equation}
This is the unique connection that preserves the metric \( g_j \),
the complex structure \( j \), and the symplectic form \( \omega \),
and whose torsion is the Nijenhuis tensor \( N_j \).
The Chern–Ricci form is defined by
\begin{equation}
\mathrm{Ric}_j(X, Y) \defeq
\frac{1}{2} \tr \left(R^{\tilde{\nabla}}(X, Y) j\right),
\end{equation}
where \( R^{\tilde{\nabla}} \) is the curvature of the Chern connection
(on \( \TBundle M \)). With these conventions, \( \I \, \mathrm{Ric}_j \)
is the curvature of the induced Chern connection on the anti-canonical bundle
\( \KBundle^{-1}_j M = \ExtBundle^{0,n} (\CotBundle M) \) and
\( \frac{1}{2\pi}\mathrm{Ric}_j \) represents
\( c_1(M) = c_1 \bigl(\KBundle^{-1}_j M\bigr) \in \sCohomology^2(M, \Z) \).
Moreover, \( S_j \defeq \tr_\omega \mathrm{Ric}_j \) is
the Chern scalar curvature.
In the following, we also need the normalized version
\begin{equation}
\label{eq:kaehler:scalarCurvatureNormalized}
\bar{S}_{j} \defeq S_{j}
- \frac{1}{\vol_{\mu_\omega}(M)} \int_M S_{j} \, \mu_\omega \, .
\end{equation}
Since the space \( \SectionSpaceAbb{I} \) of almost complex structures
compatible with \( \omega \) is contractible, the anti-canonical bundles
\( \KBundle^{-1}_j M \) are all isomorphic as \( j \) varies
in \( \SectionSpaceAbb{I} \).
For every \( j_0, j \in \SectionSpaceAbb{I} \), choose an isomorphism
\( \KBundle^{-1}_{j_0} M \isomorph \KBundle^{-1}_j M \), and
let \( \widebar{J}(j_0, \cdot) \) be the difference of the
Chern connections on \( \KBundle^{-1}_{j_0} M \) and
\( \KBundle^{-1}_j M \) (under this isomorphism).
Choosing a different isomorphism of the anti-canonical bundles changes
\( \widebar{J}(j_0, \cdot) \) by an exact \( 1 \)-form. This gives
rise to a well-defined map
\begin{equation}
\widebar{J}(j_0, \cdot): \SectionSpaceAbb{I} \to
\DiffFormSpace^1(M) \slash \dif \DiffFormSpace^0(M).
\end{equation}
Mohsen2003 showed that the derivative of this map is given by
\begin{equation}
\tangent_j \bigl(\widebar{J}(j_0, \cdot)\bigr)(A) =
-\frac{1}{2} \tau^\nabla(j, A) \mod \dif \DiffFormSpace^0(M)
\end{equation}
for the Levi-Civita connection \( \nabla \); see also
[Proposition 9.5.1]Gauduchon2017[Proposition 9]Vernier2020.
Clearly, \( \widebar{J}(j_0, \cdot) \) vanishes at \( j_0 \), and
hence, by <ref>, the maps
\( j \mapsto \widebar{J}(j_0, j) \) and
\( j \mapsto J(j_0, j) \mod \dif \DiffFormSpace^0(M) \)
have to coincide.
In other words, the \( 1 \)-form \( J(j_0, j) \) is the difference of
the Chern connections on the anti-canonical bundles
\( \KBundle^{-1}_{j_0} M \) and \( \KBundle^{-1}_j M \) under the isomorphism
\( \KBundle^{-1}_{j_0} M \isomorph \KBundle^{-1}_j M \) induced by
the generalized Cayley transform \( \Lambda \).
Based on this discussion, an equivalent restatement of
<ref>, enlightening the geometric meaning,
is the following.
For every \( j_0, j \in \SectionSpaceAbb{I} \), let \( \widebar{J}(j_0, j)
\in \DiffFormSpace^{1}(M) \slash \dif \DiffFormSpace^{0}(M) \) be the
difference of the Chern connections on the anti-canonical
bundles \( \KBundle^{-1}_{j_0} M \)
and \( \KBundle^{-1}_j M \) under an arbitrary isomorphism
\( \KBundle^{-1}_{j_0} M \isomorph \KBundle^{-1}_j M \).
Then the unique momentum map \( \SectionMapAbb{J}: \SectionSpaceAbb{I}
\to \DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M) \)
for the action of \( \DiffGroup(M, \omega) \) on \( \SectionSpaceAbb{I} \)
satisfying \( \SectionMapAbb{J}(j_0) = 0 \) is given by
\( \SectionMapAbb{J}(j) = \widebar{J}(j_0, j) \wedge \omega^{n-1} \).
In DiezRatiuAutomorphisms, we have investigated the action
of \( \DiffGroup(M, \omega) \) on \( \SectionSpaceAbb{I} \) and showed
that it admits a so-called group-valued momentum map.
Let us briefly outline the construction. Assume that \( \omega \) has
integral periods so that there exists a prequantum bundle \( L \to M \).
Let \( \KBundle_j M \) be the canonical bundle induced by
\( j \in \SectionSpaceAbb{I} \), and consider the map
\begin{equation}
\skew{3}{\tilde}{\SectionMapAbb{J}}: \SectionSpaceAbb{I} \to
\csCohomology^{2n}(M, \UGroup(1)), \qquad j \mapsto \KBundle_j M \star L^{n-1},
\end{equation}
where \( \csCohomology^{k}(M, \UGroup(1)) \) is the group of Cheeger-Simons
differential characters, and \( \star: \csCohomology^{k}(M, \UGroup(1)) \times
\csCohomology^{l}(M, \UGroup(1)) \to \csCohomology^{k+l}(M, \UGroup(1)) \)
is the natural ring structure; see, e.g., BaerBecker2013.
By construction, \( \KBundle^{-1}_j M \star L^{n-1} \) can be viewed as a
higher bundle with connection whose curvature is
\( \mathrm{Ric}_j \wedge \omega^{n-1} = \frac{S_j}{2n} \omega^n \).
By [Theorem 4.10]DiezRatiuAutomorphisms,
\( \skew{3}{\tilde}{\SectionMapAbb{J}} \) is a group-valued momentum
map for the action of the group of symplectomorphisms in the sense
that the left logarithmic derivative
\( \difLog \skew{3}{\tilde}{\SectionMapAbb{J}} \in
\DiffFormSpace^1\bigl(\SectionSpaceAbb{I},
\DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M)\bigr) \) satisfies
\begin{equation}
\xi^* \contr \Omega + \kappa\bigl(\difLog \skew{3}{\tilde}{\SectionMapAbb{J}},
\xi\bigr) = 0,
\end{equation}
where \( \xi^* \) is the fundamental vector field on \( \SectionSpaceAbb{I} \)
induced by the action of \( \xi \in \VectorFieldSpace(M, \omega) \).
Choose \( j_0 \in \SectionSpaceAbb{I} \).
Since the Chern class of the anti-canonical bundle \( \KBundle^{-1}_j M \)
is independent
of the almost complex structure \( j \), there exists a map
\( \tilde{J}(j_0, \cdot): \SectionSpaceAbb{I} \to \DiffFormSpace^{1}(M) \slash
\clZDiffFormSpace^{1}(M) \) such that
\begin{equation}
\label{eq:kaehler:momentumMap:asDifferenceCanonicalBundle}
\KBundle^{-1}_j M = \KBundle^{-1}_{j_0} M -
\iota\bigl(\tilde{J}(j_0, j)\bigr),
\end{equation}
where \( \iota: \DiffFormSpace^{k}(M) \slash \clZDiffFormSpace^{k}(M)
\to \csCohomology^{k+1}(M, \UGroup(1)) \) is the inclusion of topologically
trivial characters and \( \clZDiffFormSpace^{k}(M) \) is the space of
closed forms with integral periods. This identity states that
\( \KBundle^{-1}_j M \) and \( \KBundle^{-1}_{j_0} M \) are isomorphic,
and \( \tilde{J}(j_0, j) \)
is the difference of the Chern connections on these bundles up to gauge
transformations. Since \( \SectionSpaceAbb{I} \) is contractible, there
exists a lift \( \widebar{J}(j_0, \cdot): \SectionSpaceAbb{I}
\to \DiffFormSpace^{1}(M) \slash \dif \DiffFormSpace^{0}(M) \) of
\( \tilde{J}(j_0, \cdot) \) covering the projection
\( \pr: \DiffFormSpace^{k}(M) \slash \dif \DiffFormSpace^{k-1}(M)
\to \DiffFormSpace^{k}(M) \slash \clZDiffFormSpace^{k}(M) \). Thus,
\begin{equation}
\skew{3}{\tilde}{\SectionMapAbb{J}}(j) =
\skew{3}{\tilde}{\SectionMapAbb{J}}(j_0) +
\iota \circ \pr \bigl(\widebar{J}(j_0, j) \wedge \omega^{n-1} \bigr).
\end{equation}
Hence, the logarithmic derivative is given by
\( \difLog \skew{3}{\tilde}{\SectionMapAbb{J}} =
\bigl(\tangent \widebar{J}(j_0, \cdot)\bigr) \wedge \omega^{n-1} \).
This shows that the map
\begin{equation}
\SectionSpaceAbb{I} \ni j \mapsto \widebar{J}(j_0, j) \wedge \omega^{n-1}
\in \DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M)
\end{equation}
is a momentum map for the action of \( \DiffGroup(M, \omega) \)
on \( \SectionSpaceAbb{I} \). Clearly, it vanishes at \( j_0 \) and
thus by uniqueness has to coincide with the momentum map
\( \SectionMapAbb{J} \).
In this way, we recover <ref>.
Note that the group-valued momentum map \( \skew{3}{\tilde}{\SectionMapAbb{J}} \)
is equivariant under the action of \( \DiffGroup(M, \omega) \), but this
equivariance is broken for \( \SectionMapAbb{J} \) by choosing a reference
complex structure. We study this non-equivariance in more detail below
and will see that it has a topological character.
<Ref> implies that the Chern scalar curvature is the momentum map for the
action of the subgroup of Hamiltonian diffeomorphisms. In this way, we
recover the result of Fujiki1992,Donaldson1997.
The action of the group of Hamiltonian diffeomorphisms on
\( \SectionSpaceAbb{I} \) has a momentum map
\begin{equation}
\SectionMapAbb{J}_{\HamDiffGroup}(j) =
\frac{1}{2} \bigl(S_{j} - S_{j_0}\bigr) \, \mu_\omega
\end{equation}
relative to the integration pairing of \( \sFunctionSpace_0(M) \)
and \( \dif \DiffFormSpace^{2n-1}(M) \).
The non-equivariance one-cocycle
\begin{equation}
\HamDiffGroup(M, \omega) \to \dif \DiffFormSpace^{2n-1}(M),
\qquad \phi \mapsto \frac{1}{2} \bigl(S_{j_0} \circ
\phi^{-1} - S_{j_0}\bigr) \, \mu_\omega
\end{equation}
is a coboundary.
Consider the isomorphism of the space
\(\HamVectorFields(M, \omega)\) of Hamiltonian vector fields with the
space \( \sFunctionSpace_0(M) \) of smooth functions on \( M \) with zero
mean given by the map \( f \mapsto X_f \).
By Hodge theory, the natural integration pairing gives a non-degenerate
pairing of \( \sFunctionSpace_0(M) \) and \( \dif \DiffFormSpace^{2n-1}(M) \).
The following calculation, for
\( \alpha \in \DiffFormSpace^{2n-1}(M) \) and
\( f \in \sFunctionSpace_0(M) \),
\begin{equation}
(n-1)! \, \kappa\bigl(\equivClass{\alpha}, X_f \bigr)
= \int_M \alpha \wedge (X_f \contr \omega)
= - \int_M \alpha \wedge \dif f
= -\int_M f \dif \alpha
= \dualPair{-\dif \alpha}{f},
\end{equation}
shows that the adjoint of the map \( f \mapsto X_f \) is essentially
given by the exterior differential \( \dif: \DiffFormSpace^{2n-1}(M)
\slash \dif \DiffFormSpace^{2n-2}(M) \to \dif \DiffFormSpace^{2n-1}(M) \).
Thus, the momentum map for the action of \( \HamDiffGroup(M, \omega) \)
on \( \SectionSpaceAbb{I} \) is given by
\begin{equation}
\SectionMapAbb{J}_{\HamDiffGroup}(j) =
\frac{-1}{(n-1)!} \dif \SectionMapAbb{J}(j) =
-\dif J(j_0, j) \wedge \frac{\omega^{n-1}}{(n-1)!}.
\end{equation}
We claim that
\begin{equation}
\label{eq:kaehler:difOfJForm}
\dif J(j_0, j) = \mathrm{Ric}_{j_0} - \mathrm{Ric}_{j}.
\end{equation}
This identity follows either from a direct calculation using identities
of [Proof of Theorem 2.6]Garcia-PradaSalamonTrautwein2018 or
from the identification of \( J(j_0, j) \) as the difference of the
Chern connections on the anti-canonical bundle by recalling
that \( \I \, \mathrm{Ric}_j \) is
the curvature of the Chern connection on \( K^{-1}_j M \).
Thus, invoking (<ref>) and the
definition of the scalar curvature in terms of the Ricci form, we find
\begin{equation}
\SectionMapAbb{J}_{\HamDiffGroup}(j) =
\bigl(\mathrm{Ric}_{j} - \mathrm{Ric}_{j_0}\bigr) \wedge
\frac{\omega^{n-1}}{(n-1)!} = \frac{1}{2} (S_{j} - S_{j_0}) \, \mu_\omega.
\end{equation}
The expression for the non-equivariance cocycle follows directly from
the definition (<ref>).
This finishes the proof.
### Central extension and quasimorphism of \( \DiffGroup(M, \omega) \)
In contrast to the momentum map \( \SectionMapAbb{J}_{\HamDiffGroup} \) for the
subgroup of Hamiltonian diffeomorphisms, the momentum map \( \SectionMapAbb{J} \)
for the full group of symplectomorphisms is not equivariant, in general.
The different equivariance properties of these momentum maps can be succinctly
captured using the work of Vizman2006, who used the exact sequence
\begin{equation}
0 \to \HamVectorFields(M, \omega) \to \VectorFieldSpace(M, \omega)
\to \deRCohomology^1(M) \to 0
\end{equation}
to show that the second continuous Lie algebra cohomology
of \( \VectorFieldSpace(M, \omega) \) consists of sums of extensions of
certain \( 2 \)-cocycles on \( \HamVectorFields(M, \omega) \) and pull-backs
of elements of \( \ExtBundle^2 {\deRCohomology^1(M)}^* \); see
[Corollary 4.4]Vizman2006.
In order to describe the corresponding decomposition of the non-equivariance
cocycle of \( \SectionMapAbb{J} \), we need the following description of a
cocycle associated with a closed \( 2 \)-form.
Let \( (M, \omega) \) be a closed \( 2n \)-dimensional symplectic manifold
with \( n \geq 2 \), and \( \lambda \) be a closed \( 2 \)-form on \( M \).
Then the associated Lichnerowicz \( 2 \)-cocycle (this is a slight abuse of
conventions since, usually, the name Lichnerowicz cocycle refers to the
cocycle defined on the Lie algebra of volume-preserving vector fields;
however, we are mainly interested in its restriction to the subalgebra of
symplectic vector fields)
\begin{equation}
\lambda_c (\xi, \eta) \defeq \int_M \lambda(\xi, \eta) \, \mu_\omega
\end{equation}
on \( \VectorFieldSpace(M, \omega) \) is cohomologous to the pull-back
by the map
\( \VectorFieldSpace(M, \omega) \ni \xi \mapsto [\xi \contr \omega]
\in \deRCohomology^1(M) \) of the bilinear form
\begin{equation}
\label{eq:kaehler:LichnerowiczCocyclePullback:onH1}
\lambda_c^H\bigl(\equivClass{\alpha}, \equivClass{\beta}\bigr)
= \int_M \Biggl(\frac{\mathrm{Av}_\omega (\lambda)}{2 (n-1)} \, \omega
- \lambda\Biggr) \wedge \alpha \wedge \beta \wedge
\frac{\omega^{n-2}}{(n-2)!}
\end{equation}
on \( \deRCohomology^1(M) \), where \( \mathrm{Av}_\omega (\lambda) \defeq
\frac{1}{\vol_{\mu_\omega}(M)} \int_M \tr_\omega (\lambda) \, \mu_\omega \) and
\( \tr_\omega (\lambda) = \tensor{\lambda}{_i^i} = \varpi^{ij} \lambda_{ij}\);
see (<ref>).
In particular, the restriction of \( \lambda_c \) to
\( \HamVectorFields(M, \omega) \) is trivial in Lie algebra cohomology.
Moreover, \( \lambda_c^H \) vanishes if \( \lambda \wedge \omega^{n-2} \)
is exact. If the cup product yields an isomorphism
\( \ExtBundle^2 {\deRCohomology^1(M)} = \deRCohomology^2(M) \), then
exactness of \( \lambda \wedge \omega^{n-2} \) is also necessary for
\( \lambda_c^H \) to vanish.
Note that the bilinear form \( \lambda_c^H \) factors through the
cup product \( \deRCohomology^1(M) \times \deRCohomology^1(M) \to
\deRCohomology^2(M) \), and as such is closely related to the
skew-structures on \( \pi_1(M) \) studied by JohnsonRees1991.
By construction, \( \bar{\lambda} =
\tr_\omega (\lambda) - \mathrm{Av}_\omega (\lambda) \)
has average value zero. Thus, there exists
\( \tau \in \DiffFormSpace^{2n-1}(M) \) such
that \( \dif \tau = \bar{\lambda} \mu_\omega \).
Now, the calculation
\begin{equation}
\int_M \bar{\lambda} \omega(\xi, \eta) \, \mu_\omega
= \int_M \omega(\xi, \eta) \dif \tau
= - \int_M \difLie_{\eta}(\xi \contr \omega) \wedge \tau
= \int_M (\commutator{\xi}{\eta} \contr \omega) \wedge \tau
\end{equation}
shows that \( \lambda_c \) is cohomologous to the cocycle
\begin{equation}
\tilde{\lambda}_c(\xi, \eta) = \int_M \left(\lambda -
\frac{\bar{\lambda}}{2} \omega\right) (\xi, \eta) \, \mu_\omega \, .
\end{equation}
On the other hand, using (<ref>),
we find for every \( 2 \)-form \( \alpha \) that
\begin{equation}\begin{split}
\alpha(\xi, \eta) \, \mu_\omega
&= (\xi \contr \omega) \wedge (\eta \contr \alpha)
\wedge \frac{\omega^{n-1}}{(n-1)!}
\\
&= \alpha \wedge \frac{\omega^{n-1}}{(n-1)!} \, \omega(\xi, \eta)
- \alpha \wedge (\xi \contr \omega) \wedge (\eta \contr \omega) \wedge
\frac{\omega^{n-2}}{(n-2)!}
\\
&= \frac{1}{2} \tr_\omega(\alpha) \, \omega(\xi, \eta) \, \mu_\omega -
\alpha \wedge (\xi \contr \omega) \wedge (\eta \contr \omega)
\wedge \frac{\omega^{n-2}}{(n-2)!}.
\end{split}\end{equation}
The first and second identities follow from contracting
\( (\xi \contr \alpha) \wedge \omega^n = 0 \) and
\( (\xi \contr \omega) \wedge \alpha \wedge \omega^{n-1} = 0 \)
with \( \eta \), respectively.
Hence, using this relation for \( \alpha = \lambda \)
and in the second line for \( \alpha = \omega \), we obtain
\begin{equation}\begin{split}
\tilde{\lambda}_c(\xi, \eta)
&= \int_M \frac{1}{2} \mathrm{Av}_\omega (\lambda) \,
\omega (\xi, \eta) \, \mu_\omega -
\lambda \wedge (\xi \contr \omega) \wedge
(\eta \contr \omega) \wedge \frac{\omega^{n-2}}{(n-2)!}
\\
&= \int_M \Biggl(\frac{1}{2 (n-1)} \mathrm{Av}_\omega(\lambda) \, \omega
- \lambda\Biggr) \wedge (\xi \contr \omega) \wedge
(\eta \contr \omega) \wedge \frac{\omega^{n-2}}{(n-2)!}\,.
\end{split}\end{equation}
From this expression, it is evident that \( \tilde{\lambda}_c \) is the
pull-back from \( \deRCohomology^1(M) \) of \( \lambda_c^H \).
Finally, if \( \lambda \wedge \omega^{n-2} \) is exact, then the average
of \( \lambda \) vanishes by (<ref>),
and thus also \( \lambda_c^H = 0 \). Conversely, assume that cup product
identifies \( \ExtBundle^2 {\deRCohomology^1(M)} \) with
\( \deRCohomology^2(M) \) and that \( \lambda_c^H = 0 \).
Then, the linear functional \( \deRCohomology^2(M) \to \R \) given by
integration against \( \sigma \wedge \omega^{n-2} \) with \( \sigma =
\frac{\mathrm{Av}_\omega(\lambda)}{2 (n-1)} \omega - \lambda \)
has to vanish, i.e., \( \sigma \wedge \omega^{n-2} \) has to be exact.
But then \( 0 = \mathrm{Av}_\omega(\sigma) =
\mathrm{Av}_\omega(\lambda) \bigl(\frac{n}{n-1} - 1\bigr) \),
and thus \( \mathrm{Av}_\omega(\lambda)
= 0 \). Hence, \( \sigma = \lambda \) and
\( \lambda \wedge \omega^{n-2} \) has to be exact.
Applied to the non-equivariance cocycle of \( \SectionMapAbb{J} \), we
find the following.
The class of the non-equivariance cocycle of \( \SectionMapAbb{J} \) in the second
continuous Lie algebra cohomology of \( \VectorFieldSpace(M, \omega) \)
coincides with the pull-back along the natural map
\( \VectorFieldSpace(M, \omega) \to \deRCohomology^1(M) \) of the
antisymmetric bilinear form
\begin{equation}
\Sigma^H_{j_0}\bigl(\equivClass{\alpha}, \equivClass{\beta}\bigr) \defeq \int_M
\left((\mathrm{Ric}_{j_0})_{rs} - \frac{1}{2}\bar{S}_{j_0}\omega_{rs}
\right) \alpha^r \beta^s \, \mu_\omega
\end{equation}
on \( \deRCohomology^1(M) \), where the indices of \( \alpha \) and
\( \beta \) are raised using \( \omega \).
Moreover, the non-equivariance cocycle is trivial in Lie algebra
cohomology if \( c_1(M) \cup \equivClass{\omega}^{n-2} = 0 \), and
this condition is necessary when the cup product yields an
isomorphism \( \ExtBundle^2 {\deRCohomology^1(M)} = \deRCohomology^2(M) \).
Note that the class of the non-equivariance cocycle \( \Sigma^H_{j_0} \)
is independent of the reference complex structure \( j_0 \) and thus is
a well-defined invariant of the symplectic manifold \( (M, \omega) \).
Recall that a Kähler manifold with vanishing first real Chern class is
called a Calabi–Yau manifold. Thus, for Calabi–Yau manifolds, the
non-equivariance cocycle is trivial in Lie algebra cohomology.
To emphasize the close relationship, we say that a symplectic manifold
\( (M, \omega) \) is weakly Calabi–Yau if \( \Sigma^H_j = 0 \)
for some compatible almost complex structure \( j \).
Let \( j_0 \in \SectionSpaceAbb{I} \) and let
\( X, Y \in \VectorFieldSpace(M, \omega) \).
By (<ref>), the non-equivariance 2-cocycle
\( \Sigma \) is given by \( \Sigma(X, Y) = -
\kappa\Bigl(\tangent_{j_0}\SectionMapAbb{J} \bigl(\difLie_X j_0\bigr),
Y\Bigr) \). On the other hand, by (<ref>), we find
\begin{equation}\begin{split}
\tangent_{j_0}J(j_0, \cdot) \bigl(\difLie_X j_0\bigr)
&= -\frac{1}{2} \tau^\nabla(j_0, \difLie_X j_0)
= - X \contr \mathrm{Ric}_{j_0} -\frac{1}{2} \dif \divergence(j_0X),
\end{split}\end{equation}
where the second equality follows from
[Theorem 2.7]Garcia-PradaSalamonTrautwein2018.
Thus, in summary,
\begin{equation}\begin{split}
\Sigma(X, Y)
&= \kappa\bigl((X \contr \mathrm{Ric}_{j_0})\wedge \omega^{n-1}, Y\bigr)
\\
&= \int_M (X \contr \mathrm{Ric}_{j_0}) \wedge \frac{\omega^{n-1}}{(n-1)!}
\wedge (Y \contr \omega)
\\
&= \int_M \mathrm{Ric}_{j_0}(X, Y) \, \mu_\omega.
\end{split}\end{equation}
Alternatively, this identity for the non-equivariance cocycle follows
directly from [Eq. (2.33)]Garcia-PradaSalamonTrautwein2018.
The claim now is a consequence of <ref> applied to \( \lambda = \mathrm{Ric}_{j_0} \).
From <ref> we know that the
non-equivariance cocycle of \( \SectionMapAbb{J} \) integrates to
a central extension of \( \DiffGroup(M, \omega) \).
In fact, the associated group \( 2 \)-cocycle \( c \) on
\( \DiffGroup(M, \omega) \) can be explicitly computed
using (<ref>), at least in principle.
The cocycle \( c \) also coincides with the cocycle given in
Reznikov1999 (where it appeared out of thin air).
Moreover, according to <ref>, the
cocycle \( c \) is bounded in the sense of Gromov.
This follows from the fact that \( \SectionSpaceAbb{I} \) is a
Domic-Toledo space, essentially because it is the space of sections
of a bundle whose typical fiber is a Domic-Toledo space; see
[Section 1.7]Shelukhin2014 for details.
Although this prescription yields a direct way to construct the
central extension of \( \DiffGroup(M, \omega) \), its geometric
interpretation still remains unclear.
This can be compared to the description of the momentum map above:
<ref> gives a concrete formula for the
momentum map \( \SectionMapAbb{J} \), but its geometric interpretation
in terms of the anti-canonical bundle is not obvious from this formula.
A first step towards a geometric interpretation of the central extension
is to better understand the prequantum bundle of \( \SectionSpaceAbb{I} \).
For example, realize it as a determinant line bundle of certain Dirac
operators or use the asymptotic prescription of FothUribe2007.
If \( \Sigma^H_{j} = 0 \) for some compatible almost complex structure
\( j \), i.e., \( (M, \omega) \) is weakly Calabi–Yau, then the momentum
map \( \SectionMapAbb{J} \) is infinitesimally equivariant.
We can thus apply the general construction of Shelukhin2014
to obtain a quasimorphism on the universal covering of
\( \DiffGroup(M, \omega)_0 \).
By construction, the restriction of this quasimorphism to the subgroup
of Hamiltonian diffeomorphisms is the (non-trivial) quasimorphism
constructed in Shelukhin2014.
For completeness, let us record this observation.
If \( (M, \omega) \) is weakly Calabi–Yau, then the universal
covering of \( \DiffGroup(M, \omega)_0 \) admits a non-trivial
quasimorphism and hence has infinite commutator length.
Under the stronger assumption that the first Chern class vanishes,
Entov2004 constructed a quasimorphism on the universal
covering of \( \DiffGroup(M, \omega)_0 \) that coincides with the
Shelukhin quasimorphism on Hamiltonian diffeomorphisms.
Hence, it is natural to conjecture that the quasimorphism constructed
above is a natural generalization of the Entov quasimorphism; see
also [Point 3.2]Shelukhin2014.
### Norm-squared momentum map
We identify the Lie algebra \( \HamVectorFields(M, \omega) \) of
Hamiltonian vector fields with the Lie algebra \( \sFunctionSpace_0(M) \)
of smooth functions with average zero by \( F \mapsto X_F \).
Using this identification, there is a natural inner product on
\( \HamVectorFields(M, \omega) \) defined by
\begin{equation}
\label{eq:kaehler:pairingOnHam}
\scalarProd{X_F}{X_G} = \frac{1}{2} \int_M F G \, \mu_\omega.
\end{equation}
To extend this inner product to \( \VectorFieldSpace(M, \omega) \),
consider the exact sequence
\begin{equation}
\label{eq:kaehler:symplecticVectorFieldsExactSequence}
0 \to \HamVectorFields(M, \omega) \to \VectorFieldSpace(M, \omega)
\to \deRCohomology^1(M) \to 0.
\end{equation}
We split this sequence by choosing a reference almost complex
structure \( j_0 \in \SectionSpaceAbb{I} \).
Let \( \mathfrak{har}_{j_0}(M) \isomorph \deRCohomology^1(M) \)
be the space of all vector fields \( \xi^h \) such that
\( \omega^\flat(\xi^h) \) is a \( g_{j_0} \)-harmonic
\( 1 \)-form. We call such infinitesimally
symplectic vector fields harmonic.
Then every \( \xi \in \VectorFieldSpace(M, \omega) \)
can be uniquely written as \( \xi = X_F + \xi^h \) for some
\( F \in \sFunctionSpace_0(M) \) and
\( \xi^h \in \mathfrak{har}_{j_0}(M) \). Define the following
inner product on \( \VectorFieldSpace(M, \omega) \):
\begin{equation}
\label{eq:kaehler:pairingOnLieAlgebra}
\scalarProd{\xi}{\eta}_{j_0} =
\int_M \frac{1}{2} F G \, \mu_\omega \, + \omega^\flat(\xi^h)
\wedge \hodgeStar_{g_{j_0}} \omega^\flat(\eta^h) =
\int_M \left(\frac{1}{2} F G +
g_{j_0}(\xi^h, \eta^h)\right) \, \mu_\omega
\end{equation}
for \( \xi = X_F + \xi^h \) and \( \eta = X_G + \eta^h \),
where \(\hodgeStar_{g_{j_0}}\) is the
Hodge star operator defined by the Riemannian metric \(g_{j_0}\).
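For completeness, we note the short verification (not spelled out in the text) that the two expressions in (<ref>) agree: since \( g_{j_0} = \omega(\cdot, j_0 \cdot) \), we have \( \omega^\flat = g_{j_0}^\flat \circ j_0 \), so \( \omega^\flat \) is a pointwise isometry for \( g_{j_0} \), and \( \mu_\omega \) is the Riemannian volume form of \( g_{j_0} \); hence the pointwise identity \( \alpha \wedge \hodgeStar_{g_{j_0}} \beta = g_{j_0}(\alpha, \beta) \, \mu_\omega \) for \( 1 \)-forms gives
\begin{equation}
\int_M \omega^\flat(\xi^h) \wedge \hodgeStar_{g_{j_0}} \omega^\flat(\eta^h)
= \int_M g_{j_0}\bigl(\omega^\flat(\xi^h), \omega^\flat(\eta^h)\bigr) \, \mu_\omega
= \int_M g_{j_0}(\xi^h, \eta^h) \, \mu_\omega.
\end{equation}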
From <ref>, we obtain
the momentum map relative to the inner
product \( \scalarProdDot_{j_0} \)
given in (<ref>).
For every \( j_0 \in \SectionSpaceAbb{I} \), the action
of \( \DiffGroup(M, \omega) \) on \( \SectionSpaceAbb{I} \)
has a momentum map
\( \SectionMapAbb{J}: \SectionSpaceAbb{I} \to
\VectorFieldSpace(M, \omega) \) relative to the inner
product \( \scalarProdDot_{j_0} \) given by assigning
to \( j \in \SectionSpaceAbb{I} \) the symplectic vector field
\begin{equation}
- X_{\bar{S}_j} + \omega^\sharp\bigl(j_0 J(j_0, j)\bigr)^h,
\end{equation}
where \( \bar{S}_j \) is the normalized Chern scalar curvature
given by (<ref>), the
\( 1 \)-form \( J(j_0, j) \) is defined
in (<ref>), and the
superscript \( h \) refers to taking the
\( g_{j_0} \)-harmonic part.
Moreover, \( \SectionMapAbb{J}_\HamDiffGroup:
\SectionSpaceAbb{I} \ni j \mapsto - X_{\bar{S}_j}
\in \HamVectorFields(M, \omega) \) is a momentum
map for the action of \( \HamDiffGroup(M, \omega) \),
relative to the inner product \( \scalarProdDot \)
defined in (<ref>).
With respect to the pairing \( \kappa \)
from (<ref>), we have for every
\( \xi = X_F + \xi^h \in \VectorFieldSpace(M, \omega) \)
and \( \alpha \in \DiffFormSpace^1(M) \):
\begin{equation}\begin{split}
\kappa\bigl(\equivClass{\alpha \wedge \omega^{n-1}}, \xi\bigr)
&= \frac{1}{(n-1)!} \int_M \alpha \wedge \omega^{n-1}
\wedge (\xi \contr \omega)
\\
&= \frac{1}{(n-1)!} \int_M F \, \dif \alpha \wedge \omega^{n-1}
+ \alpha \wedge (\xi^h \contr \omega) \wedge \omega^{n-1}
\\
&= \int_M \left(\frac{1}{2} F \tr_\omega(\dif \alpha)
+ \alpha(\xi^h)\right) \, \mu_\omega \,,
\end{split}\end{equation}
where the last equality follows
from (<ref>).
On the other hand, we have \( \alpha(\xi^h) =
g_{j_0}\bigl(\omega^\flat (\xi^h), j_0 \alpha\bigr) \)
with \( j_0 \alpha \defeq - \alpha(j_0 \cdot) \).
By assumption, \( \omega^\flat (\xi^h) \) is a
\( g_{j_0} \)-harmonic \( 1 \)-form.
Thus, \( \LFunctionSpace^2 \)-orthogonality of the
Hodge decomposition implies that only the harmonic
part \( (j_0\alpha)^h \) of \( j_0 \alpha \) contributes
to the integral, and we obtain
\begin{equation}\label{eq:kaehler:projectionDualSymp}\begin{split}
\kappa\bigl(\equivClass{\alpha \wedge \omega^{n-1}}, \xi\bigr)
&= \int_M \left(\frac{1}{2} F \tr_\omega(\dif \alpha) +
g_{j_0}\bigl(\omega^\flat(\xi^h), (j_0 \alpha)^h\bigr)\right) \, \mu_\omega
\\
&= \scalarProd{\eta_\alpha}{\xi}_{j_0},
\end{split}\end{equation}
for \( \eta_\alpha = X_G + \omega^\sharp\bigl((j_0 \alpha)^h\bigr) \)
with \( G = \tr_\omega(\dif \alpha) - \mathrm{Av}_\omega(\dif \alpha)\).
Note that if \( \alpha \) is exact, say \( \alpha = \dif f \), then the
Kähler identities, see [Proposition 1.14.1]Gauduchon2017,
imply \( j_0 \alpha = \codif (f \omega) \) so that \( (j_0 \alpha)^h = 0 \),
and hence \( \eta_\alpha = 0 \).
This verifies that \( \eta_\alpha \) depends only on the equivalence
class of \( \alpha \) modulo exact forms.
Finally, <ref> and (<ref>)
imply that \( \SectionMapAbb{J}(j) = X_F + \eta^h \) with
\begin{equation}
F = S_{j_0} - S_j \quad \text{and} \quad \omega^\flat (\eta^h)
= \bigl(j_0 J(j_0, j)\bigr)^h
\end{equation}
is a momentum map relative to the inner product
\( \scalarProdDot_{j_0} \).
Clearly, we can shift \( \SectionMapAbb{J} \) by a constant
and still obtain a momentum map.
The momentum map for the subgroup of Hamiltonian diffeomorphisms
can be calculated in a similar way.
This finishes the proof.
Assume that \( \mathrm{Ric}_{j_0} = \mathrm{Ric}_{j} \),
so that \( J(j_0, j) \) is closed by (<ref>).
In this situation, the harmonic form \( (j_0 J(j_0, j))^h \) in
<ref> has a nice
geometric interpretation.
To find it, choose an orthonormal basis
\( \set{\alpha_p} \) of \( g_{j_0} \)-harmonic \( 1 \)-forms,
and expand \( (j_0 J(j_0, j))^h = \sum_p c_p \alpha_p \).
The coefficients \(c_p\) are given by
\begin{equation}\begin{split}
c_p &= \dualPair{j_0 J(j_0, j)}{\alpha_p}_{j_0}
= \int_M g_{j_0}(j_0 J(j_0, j), \alpha_p) \, \mu_\omega
\\
&= \int_M \omega(J(j_0, j), \alpha_p) \, \mu_\omega
= \int_M J(j_0, j) \wedge \alpha_p \wedge \frac{\omega^{n-1}}{(n-1)!}\,,
\end{split}\end{equation}
where the last equality follows
from (<ref>).
Thus, in summary, \( c_p = \int_{\gamma_p} J(j_0, j) \)
for the Poincaré dual \( \gamma_p \) of the \( (2n-1) \)-form
\( \alpha_p \wedge \frac{\omega^{n-1}}{(n-1)!} \).
Now recall that \( J(j_0, j) \) is the difference of the Chern
connections of \( j_0 \) and \( j \) on \( \KBundle_{j_0} M \)
(relative to an identification of
\( \KBundle_{j_0} M \isomorph \KBundle_j M \)).
Thus, \( c_p \) is the difference of the holonomies of the
Chern connections of \( j_0 \) and \( j \) around the
loop \( \gamma_p \).
The norm-squared of the momentum map for the
action of Hamiltonian diffeomorphisms is the
\( \LFunctionSpace^2 \)-norm of the (normalized) scalar curvature,
that is, the Calabi energy functional on \( \SectionSpaceAbb{I} \):
\begin{equation}
\label{eq:kaehler:momentumMapHamSquared}
\norm{\SectionMapAbb{J}_\HamDiffGroup}^2(j) =
\frac{1}{2} \int_M {\bar{S}_j}^2 \mu_\omega \,.
\end{equation}
Critical points of \( \norm{\SectionMapAbb{J}_\HamDiffGroup}^2 \)
are called extremal almost-Kähler metrics; see
Calabi1985 in the Kähler setting and
Lejmi2010 without the integrability condition.
According to <ref>, these
are precisely the almost complex structures \( j \) for which
the Hamiltonian vector field \( X_{\bar{S}_j} \) is a real
holomorphic vector field, i.e., \( \difLie_{X_{\bar{S}_j}} j = 0 \).
Constant scalar curvature metrics constitute an important special
case of extremal almost-Kähler metrics, and they correspond
to zeros of \( \SectionMapAbb{J}_\HamDiffGroup \).
Similarly, the norm-squared of the momentum map
for the full group of symplectomorphisms (see <ref>)
yields the following functional on \( \SectionSpaceAbb{I} \):
\begin{equation}
\label{eq:kaehler:momentumMapSquared}
\norm{\SectionMapAbb{J}}_{j_0}^2(j) =
\frac{1}{2} \int_M \left({\bar{S}_j}^2 +
2 \norm{\bigl(j_0 J(j_0, j)\bigr)^h}_{j_0} \right) \mu_\omega \,,
\end{equation}
where on the right-hand side the norm of the one-form
\( \bigl(j_0 J(j_0, j)\bigr)^h \) is taken with respect
to the metric \( g_{j_0} \).
The first summand is again the Calabi energy.
The second summand penalizes the difference between the
Chern connections of \( j \) and \( j_0 \).
In other words, \( \norm{\SectionMapAbb{J}}_{j_0}^2 \)
can be viewed as a localized Calabi energy.
Zeros of \( \norm{\SectionMapAbb{J}}_{j_0}^2 \) are, if they exist,
almost complex structures \( j \) with constant Chern scalar curvature
and \( j_0 J(j_0, j) \) having no \( g_{j_0} \)-harmonic component.
In analogy with Calabi's extremal metrics, we say that an almost
complex structure \( j \) is a \( j_0 \)-extremal metric
if it is a critical point of \( \norm{\SectionMapAbb{J}}_{j_0} \).
By <ref>, this is equivalent
to \( \SectionMapAbb{J}(j) \) being a real holomorphic vector field.
Our general results in <ref> require a few technical properties.
Let us check that these are satisfied in the present situation.
The following holds:
* The group of symplectomorphisms \( \DiffGroup(M, \omega) \)
is an infinite-dimensional Fréchet Lie group and has a smooth
exponential map given by the flow.
* For every \( j \in \SectionSpaceAbb{I} \), the stabilizer
\( \DiffGroup(M, \omega)_j \) is a finite-dimensional Lie
subgroup of \( \DiffGroup(M, \omega) \) consisting of isometries
of \( g_j \).
* For every \( j \in \SectionSpaceAbb{I} \), the isotropy
representation of \( \DiffGroup(M, \omega)_j \) on
\( \TBundle_j \SectionSpaceAbb{J} \) is Hamiltonian with momentum
map given by
\begin{equation}
\label{eq:kaehler:momentumMapIsotropyRepresentation}
\widehat{\SectionMapAbb{J}_j}(A) = -\frac{1}{8} \pr_j
\left(X_{\tr_\omega(\dif \alpha)} +
\omega^\sharp \bigl((j_0 \alpha)^h\bigr)\right),
\end{equation}
where \( \alpha(Y) \defeq \tr(A j \nabla_Y A) -
2 \tr\bigl(\nabla (A j A) Y\bigr) \) for a torsion-free
connection \( \nabla \) preserving the volume form and
\( \pr_j \) is the orthogonal projection onto
\( \VectorFieldSpace(M, \omega)_j \).
* For every \( X \in \VectorFieldSpace(M, \omega) \),
the adjoint of \( \difLie_X: \VectorFieldSpace(M, \omega) \to
\VectorFieldSpace(M, \omega) \) with respect to
\( \dualPairDot_{j_0} \) is
\begin{equation}
\label{eq:kaehler:adjointLieDerivative}
X_F + \xi^h \mapsto - \difLie_X X_F + \frac{1}{2} (F j_0 X)^h,
\end{equation}
for \( F \in \sFunctionSpace_0(M) \) and
\( \xi^h \in \mathfrak{har}_{j_0}(M) \defeq \{\zeta \in
\mathfrak{X}(M, \omega) \mid
\omega^\flat(\zeta) \text{ is a } g_{j_0}\text{-harmonic }
1\text{-form}\} \).
Consequently,
\begin{equation}\begin{split}
\label{eq:kahler:adjointLieDerivative:pairing}
\dualPair*{\difLie_X (X_F + \xi^h)}{X_G + \eta^h}_{j_0}
&= - \dualPair*{X_F + \xi^h}{\difLie_X(X_G + \eta^h)}_{j_0}\\
&\qquad+ \frac{1}{2} \dualPair*{(Fj_0 X)^h}{\eta^h}_{j_0}\\
&\qquad+ \frac{1}{2} \dualPair*{\xi^h}{(Gj_0 X)^h}_{j_0}.
\end{split}\end{equation}
This shows that \( \dualPairDot_{j_0} \) is
not invariant under the Lie derivative by vector fields in
\(\VectorFieldSpace(M, \omega) \). However,
\( \dualPairDot_{j_0} \) is invariant under the Lie
derivative by elements of the stabilizer
\( \VectorFieldSpace(M, \omega)_{j_0} \).
* The almost complex structure \( \SectionMapAbb{j} \) is
equivariant with respect to the push-forward action of
\( \DiffGroup(M, \omega) \).
The group of symplectomorphisms is a Fréchet Lie group
by [Theorem 43.12]KrieglMichor1997 and the
automorphism group of an almost complex structure is a
finite-dimensional Lie group by
[Corollary I.4.2]Kobayashi1972.
Let \( \nabla \) be a torsion-free connection preserving the
volume form. Using \( \difLie_X A =
\nabla_X A + \commutator{A}{\nabla X} \), we find
\begin{equation}
\tr(A j \difLie_X A)
= \tr(A j \nabla_X A) + 2 \tr(A j A \nabla X)
= \alpha(X) + 2 \tr\bigl(\nabla(A j A X)\bigr).
\end{equation}
On the other hand, \( \tr(\nabla Y) \) is the divergence of the
vector field \( Y \) so that upon integration over \( M \),
the last term vanishes and we obtain
\begin{equation}\begin{split}
\dualPair{\widehat{\SectionMapAbb{J}_j}(A)}{X}
&= \frac{1}{2} \Omega_j(A, -\difLie_X A) \\
&= - \frac{1}{8} \int_M \tr(A j \difLie_X A) \, \mu_\omega \\
&= - \frac{1}{8} \int_M \alpha(X) \, \mu_\omega \\
&= - \frac{1}{8} \int_M \alpha \wedge (X \contr \mu_\omega) \\
&\stackrel{\eqref{eq:kaehler:pairing}}{=} - \frac{1}{8}
\kappa\bigl(\equivClass{\alpha \wedge \omega^{n-1}}, X\bigr) \\
&\stackrel{\eqref{eq:kaehler:projectionDualSymp}}{=} -\frac{1}{8}
\dualPair*{X_{\tr_\omega(\dif \alpha)} +
\omega^\sharp \bigl((j_0 \alpha)^h\bigr)}{X}_{j_0}.
\end{split}\end{equation}
From this identity we directly read
off (<ref>).
Let \( X \in \VectorFieldSpace(M, \omega) \),
\( F, G \in \sFunctionSpace_0(M) \) and
\( \xi^h, \eta^h \in \mathfrak{har}_{j_0}(M) \).
Then we find
\begin{equation}\begin{split}
&\dualPair*{\difLie_X (X_F + \xi^h)}{X_G + \eta^h}_{j_0}\\
&\qquad= \dualPair*{X_{\omega(X, X_F + \xi^h)}}{X_G + \eta^h}_{j_0}
\\
&\qquad= \frac{1}{2} \int_M \omega(X, X_F + \xi^h) G \, \mu_\omega
\\
&\qquad= \frac{1}{2} \int_M \left(\dif F (X) G -
G \omega^\flat(\xi^h)(X)\right) \mu_\omega
\\
&\qquad= \frac{1}{2} \int_M - \dif G (X) F \mu_\omega -
G \omega^\flat(\xi^h) \wedge (X \contr \mu_\omega)
\\
&\qquad= - \dualPair*{X_F + \xi^h}{\difLie_X X_G}_{j_0} -
\frac{1}{2} \int_M G \omega^\flat(\xi^h)
\wedge \hodgeStar_{g_{j_0}} g^\flat_{j_0}(X)
\\
&\qquad= - \dualPair*{X_F + \xi^h}{\difLie_X X_G}_{j_0} +
\frac{1}{2} \int_M G \omega^\flat(\xi^h)
\wedge \hodgeStar_{g_{j_0}} \omega^\flat(j_0X)
\\
&\qquad= \dualPair*{X_F + \xi^h}{- \difLie_X X_G +
\frac{1}{2} (G j_0 X)^h}_{j_0}.
\end{split}\end{equation}
This verifies (<ref>).
Using this equation, we obtain
\begin{equation}\begin{split}
\dualPair*{\difLie_X (X_F + \xi^h)}{X_G + \eta^h}_{j_0}
&= -\dualPair*{X_F + \xi^h}{\difLie_X(X_G + \eta^h)}_{j_0}\\
&\qquad+ \frac{1}{2} \dualPair*{X_F + \xi^h}{\difLie_X \eta^h}_{j_0}\\
&\qquad+ \frac{1}{2} \dualPair*{X_F + \xi^h}{(Gj_0X)^h}_{j_0}.
\end{split}\end{equation}
Applying again (<ref>) on the
second summand, we get (<ref>).
Now, if \( X \) is Killing, then \( 0 = \difLie_X \alpha =
\dif (X \contr \alpha) \) for every harmonic
\( 1 \)-form \( \alpha \). Applied to
\( \alpha = \omega^\flat(\xi^h) \) in the above
chain of equalities at step \( 3 \), we conclude
that then \( \dualPair*{\difLie_X (X_F + \xi^h)}{X_G + \eta^h}_{j_0}
= \frac{1}{2} \int_M (\dif F(X) G) \, \mu_\omega =
-\dualPair*{X_F + \xi^h}{\difLie_X(X_G + \eta^h)}_{j_0} \),
which shows that \( \dualPairDot_{j_0} \) is invariant under
the adjoint action of Killing vector fields.
For every \( \phi \in \DiffGroup(M, \omega) \), we have
\begin{equation}
\phi_* \, \SectionMapAbb{j}_j(A) = \phi_* (A j) =
(\phi_* A) (\phi_* j) =
\SectionMapAbb{j}_{\phi_* j} (\phi_* A).
\end{equation}
Thus, \( \SectionMapAbb{j} \) is equivariant.
Let us discuss the analogs of the Lichnerowicz and Calabi operators.
Relative to the splitting of the exact
sequence (<ref>), every
operator \( T: \VectorFieldSpace(M, \omega) \to
\VectorFieldSpace(M, \omega) \) gives rise to operators
\( T^{SS}: \sFunctionSpace(M) \to \sFunctionSpace(M) \),
\( T^{HS}: \mathfrak{har}_{j_0}(M) \to \sFunctionSpace(M) \),
\( T^{SH}: \sFunctionSpace(M) \to \mathfrak{har}_{j_0}(M) \) and
\( T^{HH}: \mathfrak{har}_{j_0}(M) \to \mathfrak{har}_{j_0}(M) \).
The defining equations for these operators are
\begin{equation}
T(X_f) = X_{T^{SS}(f)} + T^{SH}(f),
\qquad T(X^h) = X_{T^{HS}\bigl(X^h\bigr)}
+ T^{HH} X^h
\end{equation}
with \( f \in \sFunctionSpace(M) \) and
\( X^h \in \mathfrak{har}_{j_0}(M) \), and where
we keep identifying \( \deRCohomology^1(M) \) with
\( g_{j_0} \)-harmonic \( 1 \)-forms.
Using this notation, we calculate the
Lichnerowicz operator introduced in the general setting in (<ref>).
For a \( j_0 \)-extremal almost Kähler metric
\( j \in \SectionSpaceAbb{I} \), the operators
\( L_j \xi = \tangent_j \SectionMapAbb{J} (j \, \difLie_\xi j) \)
and \( Z_j \xi = -\tangent_j \SectionMapAbb{J} (\difLie_\xi j) \)
on \( \VectorFieldSpace(M, \omega) \) are given by:
\begin{equation}\label{eq:kaehler:Lj}\begin{split}
L_j^{SS}(f) &= - \frac{1}{2}\tr_\omega
(\dif \tau^\nabla(j, j \difLie_{X_f} j))\\
L_j^{SH}(f) &= - \frac{1}{2} \omega^\sharp (
j_0 \tau^\nabla(j, j \difLie_{X_f} j))^h\\
L_j^{HS}(X^h) &= - \frac{1}{2}\tr_\omega
(\dif \tau^\nabla(j, j \difLie_{X^h} j)) \\
L_j^{HH}(X^h) &= - \frac{1}{2} \omega^\sharp (j_0 \tau^\nabla(j,
j \difLie_{X^h} j))^h,
\end{split}\end{equation}
and
\begin{equation}\label{eq:kaehler:Zj}\begin{split}
Z_j^{SS}(f) &= \poisson{f}{\bar{S}_j}\\
Z_j^{SH}(f) &= \omega^\sharp (j_0 (X_f \contr \mathrm{Ric}_j))^h\\
Z_j^{HS}(X^h) &= \dif \bar{S}_j(X^h)\\
Z_j^{HH}(X^h) &= \omega^\sharp (j_0 (X^h
\contr \mathrm{Ric}_j))^h
\end{split}\end{equation}
for \( f \in \sFunctionSpace(M) \) and every
\( X^h \in \mathfrak{har}_{j_0}(M) \).
These follow from direct computations.
Calculation of \( Z_j \):
By <ref>, \( \tangent_j J(j_0, \cdot)(A) \)
equals \( - \frac{1}{2} \tau^\nabla(j, A) \) modulo an exact form.
On the other hand, from
[Theorem 2.7]Garcia-PradaSalamonTrautwein2018,
for every \( \xi \in \VectorFieldSpace(M, \omega) \), we have
\begin{equation}
\tau^\nabla(j, \difLie_\xi j) = 2 \xi \contr \mathrm{Ric}_j
+ \dif \divergence(j\xi).
\end{equation}
Hence, viewing \( \SectionMapAbb{J} \) as a map into
\( \DiffFormSpace^{2n-1} M \slash \dif \DiffFormSpace^{2n-2} M \),
we have
\begin{equation}
- \tangent_j \SectionMapAbb{J}(\difLie_\xi j) =
\frac{1}{2} \tau^\nabla(j, \difLie_\xi j) \wedge
\omega^{n-1} \textrm{ mod exact}
= (\xi \contr \mathrm{Ric}_j) \wedge \omega^{n-1}
\textrm{ mod exact}.
\end{equation}
Thus, composing with the projection (<ref>),
we obtain \( Z_j \xi = X_G + \omega^\sharp \beta^h \) for
\begin{equation}
G = \tr_\omega (\dif (\xi \contr \mathrm{Ric}_j))
= \tr_\omega (\difLie_\xi \mathrm{Ric}_j)
= \difLie_\xi S_j = \omega(\xi, X_{S_j})
\end{equation}
and
\begin{equation}
\beta^h = (j_0 (\xi \contr \mathrm{Ric}_j))^h.
\end{equation}
Using these equations for \( \xi = X_f \) or \( \xi = X^h \)
yields (<ref>).
Calculation of \( L_j \):
Using a similar argument as above, we obtain
\( L_j \xi = X_G + \omega^\sharp \beta^h \) for
\begin{equation}
G = - \frac{1}{2}\tr_\omega (\dif \tau^\nabla(j, j \difLie_\xi j))
\end{equation}
and
\begin{equation}
\beta^h = - \frac{1}{2} (j_0 \tau^\nabla(j, j \difLie_\xi j))^h.
\end{equation}
Using these equations for \( \xi = X_f \) or \( \xi = X^h \)
yields (<ref>).
In the integrable case, the operator \( L_j^{SS}: \sFunctionSpace_0(M)
\to \sFunctionSpace_0(M) \) recovers the classical Lichnerowicz
operator, and a few different ways of writing it down are known
in the literature, see Gauduchon2017.
For non-integrable \( j \), similar expressions for \( L_j^{SS} \)
are obtained in Vernier2020,HeZheng2023.
In both cases, one concludes from these explicit expressions
that \( L_j^{SS} \) is a fourth-order elliptic differential
operator. In particular, \( L_j \) is a Fredholm operator.
Following the general procedure
(<ref>), for
every \( j \in \SectionSpaceAbb{I} \), the Calabi operators
\( C^\pm_j: \VectorFieldSpace(M, \omega)_\C \to
\VectorFieldSpace(M, \omega)_\C \) are defined by
\begin{equation}
C^\pm_j = L_j \pm \I Z_j.
\end{equation}
Recall that a real vector field \( X \) is called holomorphic
if \( \difLie_X j = 0 \).
Using a similar argument as above, one sees that \( C^\pm_j \)
are Fredholm operators.
In particular, their kernels are finite-dimensional.
For every \( j \in \SectionSpaceAbb{I} \), the kernel of \( C^+_j \)
coincides with the stabilizer \( (\VectorFieldSpace(M, \omega)_\C)_j \)
under the complexified action. If \( j \) is integrable, then the
map \( \VectorFieldSpace(M, \omega)_\C \ni \xi +
\I \eta \mapsto \xi - j \eta \in \VectorFieldSpace(M) \)
restricts to a surjection from \( (\VectorFieldSpace(M, \omega)_\C)_j \)
onto the space of real holomorphic vector fields and it has kernel
\( \mathfrak{har}_{j}(M) \).
We do not know whether \( (\VectorFieldSpace(M, \omega)_\C)_j \)
is a Lie subalgebra of \( \VectorFieldSpace(M, \omega)_\C \).
The first statement follows directly from <ref>.
For the second statement, we observe that \( \xi + \I \eta \)
is in the stabilizer \( (\VectorFieldSpace(M, \omega)_\C)_j \)
if and only if \( 0 = \xi \ldot j + \SectionMapAbb{j}_j (\eta \ldot j)
= - \difLie_\xi j + j \difLie_\eta j =
- \difLie_\xi j + \difLie_{j \eta} j \),
where the last equality uses the integrability of \( j \),
see, e.g., [Lemma 1.1.1]Gauduchon2017.
In other words, \( \xi - j \eta \) is a real holomorphic vector
field. Conversely, by [Lemma 2.1.1]Gauduchon2017,
every real holomorphic vector field \( X \) on a compact Kähler
manifold can be uniquely written as the sum \( X = j X^h + j X_f
+ X_g \) for \( X^h \in \mathfrak{har}_j(M) \) and
\( f,g \in \sFunctionSpace_0(M) \). This shows that the map
\( \xi + \I \eta \mapsto \xi - j \eta \) is surjective.
Finally, if \( \xi = j \eta \) with \( \xi, \eta \in
\VectorFieldSpace(M, \omega) \), then both \( \omega^\flat(\xi) \)
and \( \omega^\flat(j\xi) = j \omega^\flat(\xi) \) are closed.
By [Proposition 2.3.1]Gauduchon2017, this is equivalent
to \( \omega^\flat(\xi) \) being harmonic.
As a direct application of <ref>,
we obtain the following result.
Let \( (M, \omega) \) be a compact symplectic manifold and
\( j_0 \) a compatible almost complex structure. For every
\( j_0 \)-extremal almost complex structure \( j \) satisfying
\( \SectionSpaceAbb{J}(j) \in \VectorFieldSpace(M, \omega)_{j_0} \),
the following decomposition holds:
\begin{equation}
(\VectorFieldSpace(M, \omega)_\C)_j = \LieA{c} \oplus
\bigoplus_{\lambda \neq 0} \LieA{k}_\lambda,
\end{equation}
where
* \( \LieA{c} \) is the Lie subalgebra of
\( (\VectorFieldSpace(M, \omega)_\C)_j \) consisting of all
elements that commute with \( \SectionSpaceAbb{J}(j) \);
* \( \C \SectionSpaceAbb{J}(j) \subseteq \LieA{c} \);
\( \mathfrak{har}_j \subseteq \LieA{c} \);
* \( \LieA{k}_\lambda \) are eigenspaces of
\( 2 \I \difLie_{\SectionSpaceAbb{J}(j)} \) with eigenvalue
\( \lambda \in \R \) (with the convention that
\( \LieA{k}_\lambda = \set{0} \) if \( \lambda \) is not an
eigenvalue); in particular, $\mathfrak{c} = \mathfrak{k}_0$;
* \( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu}
\intersect (\VectorFieldSpace(M, \omega)_\C)_j \subseteq
\LieA{k}_{\lambda + \mu} \) if $\lambda + \mu$ is an eigenvalue
of \( 2 \I \difLie_{\SectionSpaceAbb{J}(j)} \); otherwise
\( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu} \intersect
(\VectorFieldSpace(M, \omega)_\C)_j = 0 \).
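The bracket relation in the last item is the usual eigenvalue bookkeeping;
as a sketch, using only that \( 2 \I \difLie_{\SectionSpaceAbb{J}(j)} \)
acts as a derivation of the bracket, for \( x \in \LieA{k}_\lambda \) and
\( y \in \LieA{k}_\mu \) one has
\begin{equation}
2 \I \difLie_{\SectionSpaceAbb{J}(j)} \commutator{x}{y}
= \commutator{2 \I \difLie_{\SectionSpaceAbb{J}(j)} x}{y}
+ \commutator{x}{2 \I \difLie_{\SectionSpaceAbb{J}(j)} y}
= (\lambda + \mu) \commutator{x}{y},
\end{equation}
so any element of \( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu} \) that lies
in \( (\VectorFieldSpace(M, \omega)_\C)_j \) is an eigenvector with
eigenvalue \( \lambda + \mu \).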
The only statement that does not follow directly from
<ref> is the
inclusion \( \mathfrak{har}_j \subseteq \LieA{c} \).
But this follows from the fact that the Lie derivative with
respect to a symplectic vector field commutes with the musical
isomorphism \( \omega^\flat \): for every
\( X^h \in \mathfrak{har}_j \), we have
\begin{equation}
\omega^\flat \difLie_{\SectionSpaceAbb{J}(j)} X^h
= \difLie_{\SectionSpaceAbb{J}(j)} (\omega^\flat X^h)
= 0,
\end{equation}
where the last equality follows from the fact that
\( \omega^\flat X^h \) is a \( g_j \)-harmonic form
and \( \SectionSpaceAbb{J}(j) \) is a \( g_j \)-Killing
vector field.
The assumption that \( \SectionSpaceAbb{J}(j) \in
\VectorFieldSpace(M, \omega)_{j_0} \) is not essential
and only serves to ensure that \( 2 \I \difLie_{\SectionSpaceAbb{J}(j)} \)
is symmetric and thus diagonalizable.
Without this assumption, a similar statement
holds using generalized eigenspaces; see <ref>.
Similarly, for extremal metrics, we obtain the following theorem
generalizing the classical result of Calabi1985 which holds
in the integrable case.
Let \( (M, \omega) \) be a compact symplectic manifold. For every
extremal almost complex structure \( j \in \SectionMapAbb{I} \),
the following decomposition holds:
\begin{equation}
\label{eq:kaehler:extremal:decompositionComplexStab}
(\VectorFieldSpace(M, \omega)_\C)_j = \LieA{c} \oplus
\bigoplus_{\lambda \neq 0} \LieA{k}_\lambda,
\end{equation}
where
* \( \LieA{c} \) is the subset of
\( (\VectorFieldSpace(M, \omega)_\C)_j \) consisting of
all elements that commute with \( X_{S_j} \);
* \( \C X_{S_j} \subseteq \LieA{c} \);
\( \mathfrak{har}_j \subseteq \LieA{c} \);
* \( \LieA{k}_\lambda \) are eigenspaces of \( 2 \I \difLie_{X_{S_j}} \)
with eigenvalue \( \lambda \in \R \) (with the convention that
\( \LieA{k}_\lambda = \set{0} \) if \( \lambda \) is not an
eigenvalue); in particular, $\mathfrak{c} = \mathfrak{k}_0$;
* \( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu} \subseteq
\LieA{k}_{\lambda + \mu} \intersect
(\VectorFieldSpace(M, \omega)_\C)_j \) if
\( \lambda + \mu \) is an eigenvalue
of \( 2 \I \difLie_{X_{S_j}} \); otherwise
\( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu}
\intersect (\VectorFieldSpace(M, \omega)_\C)_j = 0 \).
Moreover, if \( j \) is integrable, then the Lie algebra
\( \LieA{h}(M, j) \) of real holomorphic vector fields
admits the following decomposition:
\begin{equation}
\label{eq:kaehler:extremal:decompositionHolomorphic}
\LieA{h}(M, j) = \mathrlap{\overbrace{\phantom{\LieA{a}(M, g_j)
\oplus \LieA{k}_{\textnormal{ham}}(M,g_j)}}^{\LieA{k}(M,g_j)}}
\LieA{a}(M,j) \oplus \underbrace{\LieA{k}_{\textnormal{ham}}(M,g_j)
\oplus j \LieA{k}_{\textnormal{ham}}(M, g_j) \oplus
\bigoplus_{\lambda \neq 0} \LieA{h}_\lambda(M,j)}_{\LieA{h}_{
\textnormal{red}}(M, j)},
\end{equation}
where \( \LieA{a}(M, g_j) \) is the complex Abelian Lie
subalgebra of \( \LieA{h}(M, j) \) consisting of vector
fields that are parallel with respect to the Levi-Civita
connection of \( g_j \), \( \LieA{k}(M, g_j) \) is the Lie
algebra of Killing vector fields,
\( \LieA{k}_{\textnormal{ham}}(M, g_j) \) the subalgebra
of Hamiltonian Killing vector fields, and
\( \LieA{h}_{\textnormal{red}}(M, j) \) is the Lie
algebra of the reduced automorphism group
(see [Section 2.4]Gauduchon2017), and
\( \LieA{h}_\lambda(M,j) \) are
\( \lambda \)-eigenspaces of \( - 2 j \difLie_{X_{S_j}} \).
This statement does not directly follow from
<ref> applied to
the action of \( \HamDiffGroup(M, \omega) \), since this would
only yield a decomposition of \( (\HamVectorFields(M, \omega)_\C)_j \).
Instead, we use the fact that
\( \SectionMapAbb{J}_{\HamDiffGroup} (j) \) is an
element of the stabilizer \( \VectorFieldSpace(M, \omega)_j \)
as \( j \) is extremal. Then the first part of the statement follows
from <ref> relative to
the \( \DiffGroup(M, \omega) \) action, applied to the one-dimensional
subalgebra \( \LieA{t} \subseteq \VectorFieldSpace(M, \omega)_j \)
spanned by \( \SectionMapAbb{J}_{\HamDiffGroup} (j) = X_{S_j} \).
The image of the
decomposition (<ref>)
under the map
\( \VectorFieldSpace(M, \omega)_\C \ni \xi + \I \eta \mapsto
\xi - j \eta \in \VectorFieldSpace(M) \) yields the decomposition
\( \LieA{h}(M, j) = \bigoplus_{\lambda} \LieA{h}_\lambda(M,j) \),
<ref> (this uses
the fact that the kernel of this map is \( \mathfrak{har}_j \),
which is completely included in \( \LieA{c} \)).
A direct calculation shows that \( 2 \I \difLie_{X_{S_j}} \) under
this map takes the form \( -2 j \difLie_{X_{S_j}} \), which identifies
\( \LieA{h}_\lambda(M,j) \) as eigenspaces of \( -2 j \difLie_{X_{S_j}} \).
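For the reader's convenience, this direct calculation is the following
one-line check, valid because \( \difLie_{X_{S_j}} j = 0 \) (recall that
\( X_{S_j} \) lies in the stabilizer of \( j \) since \( j \) is extremal):
\begin{equation}
2 \I \difLie_{X_{S_j}} (\xi + \I \eta)
= - 2 \difLie_{X_{S_j}} \eta + 2 \I \difLie_{X_{S_j}} \xi
\;\longmapsto\;
- 2 \difLie_{X_{S_j}} \eta - 2 j \difLie_{X_{S_j}} \xi
= - 2 j \difLie_{X_{S_j}} (\xi - j \eta).
\end{equation}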
Finally, the further decomposition of the zero eigenspace and the
identification of \( \LieA{k}(M,g_j) \) and
\( \LieA{h}_{\textnormal{red}}(M, j) \) in this decomposition
are standard; see [Theorem 3.4.1]Gauduchon2017.
This finishes the proof of the
decomposition (<ref>).
Let \( (M, \omega) \) be a compact symplectic manifold.
The Hessian of the Calabi functional
\( \norm{\SectionMapAbb{J}_\HamDiffGroup}^2 \) at an
extremal \( j \in \SectionMapAbb{I} \) is given by
\begin{equation}
\frac{1}{2} \Hessian_j \norm{\SectionMapAbb{J}_\HamDiffGroup}^2
(\zeta \ldot j, \gamma \ldot j) =
\Re \, \dualPair{\zeta}{C^+_j C^-_j \smallMatrix{0 & 0\\0 & 1} \gamma}_{\C}
\end{equation}
for \( \zeta, \gamma \in \HamVectorFields(M, \omega)_\C \).
Moreover, the restriction of \( \Hessian_j
\norm{\SectionMapAbb{J}_\HamDiffGroup}^2 \) to
\( \VectorFieldSpace(M, \omega)_\C \ldot j \subseteq
\TBundle_j \SectionSpaceAbb{J} \) is positive semi-definite.
Similarly, for every \( j_0 \)-extremal \( j \in \SectionMapAbb{I} \)
(with respect to a given almost complex structure
\( j_0 \in \SectionMapAbb{I} \)), the Hessian of
\( \norm{\SectionMapAbb{J}}_{j_0}^2 \) at \( j \) is given by
\begin{equation}
\frac{1}{2} \Hessian_j \norm{\SectionMapAbb{J}}^2_{j_0}
(\zeta \ldot j, \gamma \ldot j) =
\Re \, \dualPair{\zeta}{C^+_j R_j \gamma}_{j_0, \C},
\end{equation}
for \( \zeta, \gamma \in \VectorFieldSpace(M, \omega)_\C \) and
\begin{equation}
R_j = C^-_j \Matrix{0 & 0 \\ 0 & 1} +
\I \, \bigl(\difLie_{\SectionMapAbb{J}(j)} + Z_j\bigr)
\end{equation}
where \( Z_j \) is calculated in (<ref>).
The expression for the Hessian follows directly from
<ref> in both cases.
The fact that the restriction of
\( \Hessian_j \norm{\SectionMapAbb{J}_\HamDiffGroup}^2 \) to
\( \VectorFieldSpace(M, \omega)_\C \ldot j \subseteq
\TBundle_j \SectionSpaceAbb{J} \) is positive semi-definite
is a direct consequence of <ref>.
In fact, the only missing assumption to verify is that the
Calabi operators \( C^\pm_j \) are essentially self-adjoint.
But this is clearly the case as these operators are elliptic
operators on \( \sFunctionSpace(M)_\C \isomorph
\sFunctionSpace(M, \C) \).
The first part concerning the Hessian of the Calabi energy
recovers the classical work of [Theorem 2]Calabi1985
in the integrable case (in which case, the vector space
\( \VectorFieldSpace(M, \omega)_\C \ldot j \) is identified with
the tangent
space to the space of Kähler forms in a given cohomology class
up to automorphisms; see
[p. 408f]Donaldson1997 and [Proposition 9.1.1]Gauduchon2017)
and the recent result of
[Theorem 1.1]HeZheng2023 in the non-integrable case.
The second part of the theorem is thus a natural generalization
of these insights to the case of \( j_0 \)-extremal metrics.
In Donaldson2015a, Donaldson introduced another symplectic
form on the space \( \SectionSpaceAbb{I} \) on a Fano manifold.
This new symplectic form is induced from the space of differential
\( n \)-forms with values in the prequantum bundle \( (L, \theta) \)
over \( M \), essentially via the Plücker embedding.
With respect to this symplectic form, the action of the group
\( \AutGroup(L, \theta) \) is Hamiltonian whose momentum map is
the logarithm of the Ricci potential. This momentum map is equivariant.
Zeros of the momentum map are precisely the Kähler–Einstein metrics.
The norm-squared of the momentum map yields the Ricci–Calabi
functional, whose critical points are generalized Kähler–Einstein
metrics, also known as Mabuchi solitons after Mabuchi2001.
As an application of our general results in
<ref>, we recover the
Matsushima type decomposition theorem for holomorphic vector fields
in the presence of generalized Kähler–Einstein metrics of
[Theorem 4.1]Mabuchi2001.
Moreover, from
<ref>, we recover the Hessian of the Ricci–Calabi
functional which has been calculated in
[Theorem 1.1]Nakamura2019a.
As an alternative to the norm-squared of the momentum map, one
may also consider the composition of the momentum map with a
certain convex function on the Lie algebra \( \sFunctionSpace(M) \)
of \( \AutGroup(L, \theta) \).
The resulting functional on \( \SectionSpaceAbb{I} \) is the
\( \SectionMapAbb{H} \)-functional introduced in He2016,
whose critical points are Kähler–Ricci solitons.
We expect that our results can be used to study the
\( \SectionMapAbb{H} \)-functional as well.
In fact, the results about the Hessian of the
\( \SectionMapAbb{H} \)-functional and the decomposition of
holomorphic vector fields in the presence of Kähler–Ricci
solitons obtained in Fong2016,Nakamura2019 and
[Theorem A]TianZhu2000, respectively, should follow
from an extension of our results to allow arbitrary convex functions
on the Lie algebra along the lines of the finite-dimensional/formal
picture of LeeSturmWang2022.
A different application of our results is to the coupled
Kähler–Einstein equations.
Following DatarPingali2019, this setting fits into
our infinite-dimensional symplectic framework.
In this case, the Hessian and the Matsushima-type decomposition
recover the recent results of Nakamura2023.
Let \( f \) be a positive function on the symplectic
manifold \( (M, \omega) \), and denote its Hamiltonian vector field
by \( K = X_f \) (we are skipping over some technical
points here, like the assumption that \( K \) has to lie in a
certain torus). ApostolovMaschler2019 defined on the
space \( \SectionSpaceAbb{I}_K(M, \omega) \) of \( K \)-invariant
(almost) complex structures on \( M \) an \( f \)-deformed version
of the symplectic form (<ref>) as follows:
\begin{equation}
\Omega^f_j (A, B) = \frac{1}{4} \int_M \tr (A \, j \, B) \,
\frac{\mu_\omega}{f^{2n-1}}.
\end{equation}
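For orientation, if \( f \equiv 1 \) (so that \( K = 0 \)), the weight drops
out and one recovers the undeformed symplectic form (<ref>), assuming that
form carries the same normalization:
\begin{equation}
\Omega^1_j(A, B) = \frac{1}{4} \int_M \tr (A \, j \, B) \, \mu_\omega .
\end{equation}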
They showed that the momentum map for the action of
\( \HamDiffGroup_K(M, \omega) \) on
\( \SectionSpaceAbb{I}_K(M, \omega) \) is given by
assigning to \( j \) the Hermitian scalar curvature
of \( f^{-2} g_j \).
Thus, zeros of the momentum map correspond to conformally
Kähler–Einstein metrics (cKEM) and the norm-squared
of the momentum map is the \( f \)-weighted Calabi functional, whose
critical points are called \( f \)-extremal Kähler metrics.
The Calabi program for \( f \)-extremal Kähler metrics has
been initiated by FutakiOno2017,Lahdili2019.
Naturally, this setting fits into our infinite-dimensional
symplectic framework and we can recover these results using
our general results.
Moreover, based on our discussion above, it would be interesting
to study the momentum map of all symplectic diffeomorphisms
preserving \( K \) (and not just the subgroup of Hamiltonian diffeomorphisms).
In the odd-dimensional counterpart to Kähler geometry,
Sasakian metrics and their non-integrable pendant $K$-contact
structures are another important class of geometries that can
be studied using our results.
He2014,LejmiUpmeier2015 have shown that the space
of $K$-contact structures on a compact contact manifold is
an infinite-dimensional symplectic manifold and that the
action of the group of strict contactomorphisms is Hamiltonian
with momentum map given by the transverse Hermitian scalar
curvature. The critical points of the norm-squared of the
momentum map have been studied in BoyerGalickiSimanca2008
and are called extremal Sasakian metrics.
We expect that our results can be used to study the Hessian of
the norm-squared of the momentum map and the decomposition of
the complexified stabilizer of a $K$-contact structure.
In particular, the decomposition theorem
[Theorem 11.3.1]Boyer2008 of the space of transverse
holomorphic vector fields in the presence of an extremal
Sasakian metric should directly follow from our results.
Moreover, it would be interesting to study the action of the
whole group of contactomorphisms (and not just the subgroup of strict
contactomorphisms) on the space of $K$-contact structures.
In parallel to our discussion of the Kähler case, one would
expect that the momentum map is then non-equivariant and one
obtains a natural central extension of the group of contactomorphisms.
§ APPLICATION: SYMPLECTIC CONNECTIONS
§.§ Momentum map for the action of \( \DiffGroup(M, \omega) \)
First, we briefly review the necessary background on symplectic connections
summarizing definitions and conventions following Tondeur1961,
Hess1980, MarsdenRatiuEtAl1991, CahenGutt2005.
We make heavy use of the Penrose notation, which is reviewed in <ref>.
An affine connection \( \nabla \) on a symplectic manifold \( (M, \omega) \)
is a symplectic connection, if it is torsion-free and satisfies
\( \nabla \omega = 0 \), i.e., \( X[\omega(Y,Z)] = \omega(\nabla_X Y, Z) +
\omega(Y,\nabla_X Z) \), for all \( X, Y, Z \in \VectorFieldSpace(M) \).
This condition is equivalent to the parallel transport operator being a
symplectic isomorphism between the tangent spaces to \( M \).
In contrast to the Levi-Civita connection on a Riemannian manifold, there does
not exist a unique symplectic connection on a given
symplectic manifold.
The non-uniqueness of torsion-free symplectic connections cannot
be remedied even if \( M = \CotBundle Q \) is endowed with its standard exact
symplectic form and \( Q \) is a Riemannian manifold.
Then \( \TBundle Q\) has a naturally induced Riemannian metric.
Pull back this metric to \( \CotBundle Q\) using the given Riemannian metric
on \( Q \) to endow \( \CotBundle Q \) with a Riemannian metric.
So, we have now a Levi-Civita connection on \( \CotBundle Q \).
It turns out that it is symplectic if and only if the given Riemannian
metric on \( Q \) is flat.
If \( \nabla^1 \) and \( \nabla^2 \) are symplectic connections on
\( (M,\omega) \), then
\begin{equation}
\label{two_nablas}
\nabla^2_i X^k = \nabla^1_i X^k + \tensor{A}{_{ij}^k} X^j
\end{equation}
for some tensor \( \tensor{A}{_{ij}^k} \) such that
\( \tensor{A}{_i_j_k} \defeq \tensor{A}{_{ij}^l}\omega_{lk}\)
is symmetric in all indices, see CahenGutt2005.
The Penrose index notation in (<ref>) stands for
the intrinsic formula \(\nabla^2_Y X = \nabla^1_Y X + A(Y, X, \cdot) \).
We abbreviate (<ref>) by writing
\( \nabla^2 = \nabla^1 + A \).
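The symmetry of \( A \) can be verified directly; as a sketch, symmetry in
the first two indices follows because both connections are torsion-free,
while symmetry in the last two indices follows from
\( \nabla^1 \omega = \nabla^2 \omega = 0 \):
\begin{equation}
0 = \nabla^2_i \omega_{jk} - \nabla^1_i \omega_{jk}
= - \tensor{A}{_i_j^l} \omega_{lk} - \tensor{A}{_i_k^l} \omega_{jl}
= - \tensor{A}{_i_j_k} + \tensor{A}{_i_k_j} .
\end{equation}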
The important conclusion of these considerations is that the space
\(\ConnSpace_\omega(M) \) of symplectic connections on the symplectic
manifold \( (M, \omega) \) is an affine space whose linear model space is
isomorphic to the space \( \SymTensorFieldSpace_3(M) \) of symmetric
covariant \( 3 \)-tensor fields on \(M\).
In particular, \( \ConnSpace_\omega(M) \) is always non-empty.
In the following we assume \( M \) to be compact. We endow
\( \ConnSpace_\omega(M) \) with its natural \( \sFunctionSpace \)-Fréchet
topology. According to CahenGutt2005, the space
\( \ConnSpace_\omega(M) \) carries a natural affine (weak) symplectic
structure \( \Omega \) defined by
\begin{equation}
\label{symplectic_form_connections}
\Omega_\nabla(A, B) = \int_M \tensor{A}{_i_j_k} \tensor{B}{^i^j^k} \mu_\omega,
\end{equation}
where \( \nabla \in \ConnSpace_\omega(M) \), \( A, B \in
\SymTensorFieldSpace_3(M) \), and \( \mu_\omega = \frac{\omega^n}{n!} \)
is the Liouville volume form. Note that \( \Omega_\nabla \) does not depend
on \(\nabla \). The Fréchet Lie group \( \DiffGroup(M, \omega) \) of
symplectomorphisms of \( (M, \omega) \) acts on the left on
\( \ConnSpace_\omega(M) \) by push-forward according to
\begin{equation}
\label{eq:symplecticConnections:action}
(\phi \cdot \nabla)_X Y \defeq \phi_* \left(\nabla_{\phi^{-1}_* X} (\phi^{-1}_* Y)
\right)
\end{equation}
for \(\phi \in \DiffGroup(M, \omega)\) and \(\nabla \in \ConnSpace_\omega(M)\).
This action is clearly affine and the induced linear action is given by the
natural left action
\begin{equation}
(\phi \cdot A)(X, Y, Z) = A\bigl(\phi^{-1}_* X, \phi^{-1}_* Y, \phi^{-1}_*Z
\bigr)
\end{equation}
of \( \DiffGroup(M, \omega) \) on \( \SymTensorFieldSpace_3(M) \).
Using this expression for the linear action, it is straightforward to verify
that the \( \DiffGroup(M, \omega) \)-action on \( \ConnSpace_\omega(M) \)
preserves the symplectic form \( \Omega \). The infinitesimal action of a
symplectic vector field \( \xi \in \VectorFieldSpace(M, \omega) \)
on \( \ConnSpace_\omega(M) \) is given by
\begin{equation}
\label{eq:symplecticConnections:infAction}
\tensor{\bigl(\xi \ldot \nabla\bigr)}{_i_j^k}
= - \tensor{\bigl(\difLie_\xi \nabla\bigr)}{_{ij}^k}
= - \nabla_i \nabla_j \xi^k - \tensor{R}{_l_i_j^k} \xi^l.
\end{equation}
Similarly, the infinitesimal action of \( \VectorFieldSpace(M, \omega) \)
on \( \SymTensorFieldSpace_3(M) \) takes the form
\begin{equation}
\label{eq:symplecticConnections:infActionLinear}
\tensor{\bigl(\xi \ldot A\bigr)}{_i_j^k}
= - \tensor{\bigl(\difLie_\xi A\bigr)}{_{ij}^k}
= - \bigl(\xi^p \nabla_p \tensor{A}{_{ij}^k} + \tensor{A}{_{pj}^k}
\nabla_i \xi^p + \tensor{A}{_{ip}^k} \nabla_j \xi^p -
\tensor{A}{_i_j^q} \nabla_q \xi^k\bigr).
\end{equation}
As we are working in an infinite-dimensional setting, we have to pay
attention to functional analytic problems. We will be brief here and refer
the reader to DiezRatiuAutomorphisms,DiezThesis for background
information and further technical details.
For the construction of the momentum map, we need to clarify what we mean by
the dual space of \( \VectorFieldSpace(M, \omega) \). Note that the map
\( \xi \mapsto \xi \contr \omega \) identifies
\( \VectorFieldSpace(M, \omega) \) with the space of closed \( 1 \)-forms
on \( M \).
This suggests the choice \( {\VectorFieldSpace(M, \omega)}^* \defeq
\DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M) \) for the dual
space of \( \VectorFieldSpace(M, \omega) \) relative to the pairing
\begin{equation}
\label{eq:symplecticConnections:pairing}
\kappa\bigl(\equivClass{\alpha},\xi\bigr)
= \frac{1}{(n-1)!}\int_M \alpha \wedge (\xi \contr\omega),
\end{equation}
where \( \equivClass{\alpha} \in \DiffFormSpace^{2n-1}(M) \slash \dif
\DiffFormSpace^{2n-2}(M) \) and \( \xi \in \VectorFieldSpace(M, \omega) \).
This is the same pairing as (<ref>)
used above in the Kähler setting.
The following proposition shows that the action has a momentum map.
For \( \nabla \in \ConnSpace_\omega(M) \) and \( A \in
\SymTensorFieldSpace_3(M) \), let \( J(\nabla + A) \in \DiffFormSpace^1(M) \)
be given by
\begin{equation}\begin{split}
\label{eq:symplecticConnections:momentumMapStrange:oneForm}
J(\nabla + A)_p
= - \nabla_j \nabla_i \tensor{A}{^i^j_p} + R_{pijk}A^{ijk} +
\frac{1}{2} (\nabla_p A_{ijk}) A^{ijk} -
\frac{3}{2} \nabla_i \bigl(A^{ijk} A_{pjk} \bigr),
\end{split}\end{equation}
where \(R\) is the curvature operator of
\(\nabla \) and \( R_{pijk} = \tensor{R}{_{pij}^{s}} \omega_{sk} \).
For each \( \nabla \in \ConnSpace_\omega(M) \), the map
\begin{equation}
\label{eq:symplecticConnections:momentumMapStrange}
\SectionMapAbb{J}: \ConnSpace_\omega(M) \to \DiffFormSpace^{2n-1}(M) \slash
\dif \DiffFormSpace^{2n-2}(M), \qquad
\nabla + A \mapsto \equivClass*{J(\nabla + A) \wedge
\omega^{n-1}}
\end{equation}
is the unique momentum map for the \( \DiffGroup(M, \omega) \)-action
on \( \ConnSpace_\omega(M) \) relative to the
pairing (<ref>) that vanishes
at \( \nabla \).
According to <ref>, the momentum map is
given by
\begin{equation}
\label{eq:symplecticConnections:momentumMapStrange:fromAffine}
\kappa\bigl(\SectionMapAbb{J}(\nabla + A), \xi\bigr) = \Omega(A, \xi \ldot \nabla)
+ \frac{1}{2} \Omega(A, \xi \ldot A).
\end{equation}
Let us start by evaluating the first summand on the right-hand side.
Using (<ref>)
and (<ref>), integrating by parts yields
\begin{equation*}\begin{split}
\Omega(A, \xi \ldot \nabla)
&= \int_M A_{ijk} \bigl(\xi \ldot \nabla\bigr)^{ijk} \mu_\omega \\
&= - \int_M A_{ijk} \bigl(\nabla^i \nabla^j \xi^k +
\tensor{R}{_p^i^j^k} \xi^p\bigr) \mu_\omega \\
&= - \int_M \bigl( \nabla^j \nabla^i A_{ijp} +
A_{ijk}\tensor{R}{_p^i^j^k} \bigr) \xi^p \mu_\omega \\
&= \int_M \bigl( - \nabla_j \nabla_i \tensor{A}{^{ij}_p} +
\tensor{R}{_p_i_j_k}A^{ijk} \bigr) \xi^p \mu_\omega .
\end{split}\end{equation*}
Using (<ref>) and the symmetry
of \( A_{ijk} \) we find for the second summand
in (<ref>):
\begin{equation*}\begin{split}
\Omega(A, \xi \ldot A)
&= \int_M A_{ijk} \bigl(\xi \ldot A\bigr)^{ijk} \mu_{\omega} \\
&= - \int_M A_{ijk} \bigl(\xi^p \nabla_p \tensor{A}{^{ij}^k} +
\tensor{A}{_p^j^k} \nabla^i \xi^p + \tensor{A}{^i_p^k} \nabla^j \xi^p -
\tensor{A}{^i^j^q} \nabla_q \xi^k\bigr) \mu_{\omega} \\
&= \int_M \bigl(- A_{ijk} \nabla_p \tensor{A}{^{ij}^k} +
\nabla^i (A_{ijk} \tensor{A}{_p^j^k}) +
\nabla^j(A_{ijk} \tensor{A}{^i_p^k}) -
\nabla_q (A_{ijp}\tensor{A}{^i^j^q})\bigr) \xi^p \mu_{\omega} \\
&= \int_M \bigl(A^{ijk} \nabla_p \tensor{A}{_{ij}_k} -
3 \nabla_i (A^{ijk} \tensor{A}{_p_j_k})\bigr) \xi^p \mu_{\omega}.
\end{split}\end{equation*}
Thus, comparing
with (<ref>), we get
\begin{equation*}
\kappa\bigl(\SectionMapAbb{J}(\nabla + A),\xi\bigr)
= \Omega(A, \xi \ldot \nabla) + \frac{1}{2} \Omega(A, \xi \ldot A)
= \int_M J(\nabla + A)_p \xi^p \mu_\omega .
\end{equation*}
Finally, for every \( 1 \)-form \( \beta \), we have
\begin{equation*}
\int_M \beta_p \xi^p \mu_\omega
= \frac{1}{(n-1)!} \int_M \beta \wedge\omega^{n-1} \wedge (\xi \contr \omega)
= \kappa\bigl(\beta \wedge \omega^{n-1} , \xi\bigr),
\end{equation*}
which yields the expression (<ref>)
for the momentum map \( \SectionMapAbb{J} \).
Let us rewrite the momentum map \( \SectionMapAbb{J} \) in such a way that
its geometric meaning becomes apparent. For this purpose, recall that the
Ricci curvature is defined by \( R_{ij} = \tensor{R}{_{kij}^k} \) and the
curvature \( 1 \)-form by
\begin{equation}
\label{eq:symplecticConnections:curvature1Form}
\rho_i = 2 \nabla^j \tensor{R}{_i_j} = - 2 \nabla_j \tensor{R}{_i^j} \, .
\end{equation}
We sometimes write \( \rho(\nabla) \) to emphasize the dependency on the
symplectic connection \( \nabla \).
Moreover, let \( \pontryaginClass \equiv \pontryaginClass(\nabla) \) be
the \( 4 \)-form
\begin{equation}
\label{eq:symplecticConnections:pontryaginForm}
\pontryaginClass_{ijkl} =
\frac{1}{4 \pi^2}
\bigl(
\tensor{R}{_i_j^p^q}\tensor{R}{_k_l_p_q}
+ \tensor{R}{_i_k^p^q}\tensor{R}{_l_j_p_q}
+ \tensor{R}{_i_l^p^q}\tensor{R}{_j_k_p_q}
\bigr)
\end{equation}
representing the first Pontryagin class of \( (M, \nabla) \).
Chern–Weil theory entails that the \( 4 \)-forms \( \pontryaginClass(\nabla
+ A) \) and \( \pontryaginClass(\nabla) \) associated with the connections
\( \nabla + A \) and \( \nabla \), respectively, are cohomologous.
Indeed, a straightforward (but lengthy) calculation shows that
\begin{equation}
\label{eq:symplecticConnections:pontryaginRelationTwoConnections}
\pontryaginClass(\nabla + A) = \pontryaginClass(\nabla) -
\frac{1}{4 \pi^2} \dif \sigma,
\end{equation}
where the \( 3 \)-form \( \sigma \equiv \sigma(\nabla, A) \) is defined by
\begin{equation}\label{eq:symplecticConnections:pontryaginPreForm}
\begin{split}
\sigma_{ijk}
&= \tensor{A}{_i_p^q}\tensor{R}{_j_k_q^p} +
\tensor{A}{_j_p^q}\tensor{R}{_k_i_q^p} +
\tensor{A}{_k_p^q}\tensor{R}{_i_j_q^p}
\\
&\quad + \frac{1}{2} \bigl(
\tensor{A}{_k_p^q}\nabla_i\tensor{A}{_j_q^p}
+ \tensor{A}{_i_p^q}\nabla_j\tensor{A}{_k_q^p}
+ \tensor{A}{_j_p^q}\nabla_k\tensor{A}{_i_q^p}
\\
- \tensor{A}{_j_p^q}\nabla_i\tensor{A}{_k_q^p}
- \tensor{A}{_k_p^q}\nabla_j\tensor{A}{_i_q^p}
- \tensor{A}{_i_p^q}\nabla_k\tensor{A}{_j_q^p}
\bigr)
\\
&\quad - \tensor{A}{_i_a^b} (\tensor{A}{_j_b^c}\tensor{A}{_k_c^a}
- \tensor{A}{_k_b^c}\tensor{A}{_j_c^a}).
\end{split}
\end{equation}
Using these notions, the momentum map \( \SectionMapAbb{J} \) has the
following expression.
The \( \DiffGroup(M, \omega) \)-action on \( (\ConnSpace_\omega(M), \Omega) \)
defined in (<ref>) is symplectic and has a
momentum map, relative to the pairing (<ref>),
given by
\begin{equation}
\label{eq:symplecticConnections:momentumMap}
\SectionMapAbb{J}: \ConnSpace_\omega(M) \to \DiffFormSpace^{2n-1}(M) \slash
\dif \DiffFormSpace^{2n-2}(M),
\quad
\nabla + A \mapsto \equivClass*{J(\nabla + A) \wedge
\omega^{n-1}},
\end{equation}
where \( J(\nabla + A) \in \DiffFormSpace^1(M) \) is defined by
\begin{equation}
J(\nabla + A)_i
= \frac{1}{2} \bigl( \rho(\nabla + A)_i - \rho(\nabla)_i -
\tensor{\sigma(\nabla, A)}{_i_j^j}\bigr) \,.
\qedhere
\end{equation}
The momentum map \( \SectionSpaceAbb{J} \) involves two ingredients
that have a different flavor. First, it contains the curvature
\( 1 \)-form \( \rho(\nabla) \) which has a clear geometric meaning.
Second, the correction term \( \sigma(\nabla, A) \) is closely related
to the Pontryagin class of \( M \) and thus has a more topological origin.
This is another manifestation of the general principle that momentum maps
for diffeomorphism groups involve geometric as well as topological data
(see DiezRatiuAutomorphisms for more examples).
Recall that the Ricci curvature \( \bar{R}_{ij} \) of the connection
\( \bar{\nabla} = \nabla + A \) is given by
\begin{equation}
\bar{R}_{ij} = R_{ij} + \nabla_k \tensor{A}{_i_j^k} -
\tensor{A}{_i_k^l} \tensor{A}{_j_l^k}
\end{equation}
and, for every tensor \( T^{ij} \), we have
\begin{equation}
\bar{\nabla}_k T^{ij} = \nabla_k T^{ij} + \tensor{A}{_k_l^i} T^{lj} +
\tensor{A}{_k_l^j} T^{il} \, .
\end{equation}
We hence obtain
\begin{equation}\begin{split}
\rho(\bar{\nabla})_i
&= - 2 \bar{\nabla}_j \tensor{\bar{R}}{_i^j}
\\
&= - 2 (\nabla_j \tensor{\bar{R}}{_i^j} +
\tensor{A}{_j_l_i} \tensor{\bar{R}}{^{lj}})
\\
&= - 2 (
\nabla_j \tensor{R}{_i^j}
+ \nabla_j \nabla_k \tensor{A}{_i^j^k}
- \tensor{A}{^j_l^k} \nabla_j \tensor{A}{_i_k^l} -
\tensor{A}{_i_k^l} \nabla_j \tensor{A}{^j_l^k}
\\
&\qquad+ \tensor{A}{_j_l_i} R^{lj}
+ \tensor{A}{_j_l_i} \nabla_k \tensor{A}{^l^j^k}
- \tensor{A}{_j_l_i} \tensor{A}{^l_k^p} \tensor{A}{^j_p^k} )
\\
&= \rho(\nabla)_i - 2 (\nabla_j \nabla_k \tensor{A}{_i^j^k} +
\tensor{A}{_j_k_i} R^{kj} )
\\
&\qquad+ 2 \tensor{A}{^j_l^k} \nabla_j \tensor{A}{_i_k^l} -
4 \tensor{A}{_j_l_i} \nabla_k \tensor{A}{^l^j^k}
+ 2 \tensor{A}{_j_l_i} \tensor{A}{^l_k^p} \tensor{A}{^j_p^k} \, .
\end{split}\end{equation}
On the other hand, we have
\begin{equation}\label{eq:symplecticConnections:pontryaginPreFormContracted}
\begin{split}
\tensor{\sigma}{_{ij}^j}
&= 2 \tensor{A}{_i_p^q}\tensor{R}{_q^p}
+ 2 \tensor{A}{_j_p^q}\tensor{R}{^j_i_q^p}
\\
+ \tensor{A}{^j_p^q}\nabla_i\tensor{A}{_j_q^p}
+ \tensor{A}{_i_p^q}\nabla_j\tensor{A}{^j_q^p}
- \tensor{A}{^j_p^q}\nabla_j\tensor{A}{_i_q^p}
\\
&\quad - 2 \tensor{A}{_i_a^b} \tensor{A}{_j_b^c}\tensor{A}{^j_c^a} \, .
\end{split}
\end{equation}
Comparing these identities
with (<ref>) shows that
the momentum map \( \SectionMapAbb{J} \) can be indeed written in the
form (<ref>).
If \( (M, \omega) \) is a two-dimensional symplectic manifold, then the
\( 3 \)-form \( \sigma \) necessarily vanishes (every \( 3 \)-form on a
\( 2 \)-dimensional manifold is zero) and thus the momentum
map (<ref>) takes the simple form
\begin{equation}
\SectionMapAbb{J}: \ConnSpace_\omega(M) \to \DiffFormSpace^{1}(M)
\slash \dif \DiffFormSpace^{0}(M),
\qquad
\nabla + A \mapsto \frac{1}{2} \equivClass*{\bigl( \rho(\nabla + A) -
\rho(\nabla) \bigr)}.
\end{equation}
Thus, we recover the formula for the momentum map in this setting established
in [Theorem 1.2]Fox2014 (up to some constant).
Let us derive from the expression (<ref>)
the momentum map for the action of the subgroup \( \HamDiffGroup(M, \omega)
\subseteq \DiffGroup(M, \omega) \) of Hamiltonian diffeomorphisms.
As in <ref>, we identify the space
\( \HamVectorFields(M, \omega) \) of Hamiltonian vector fields
with \( \sFunctionSpace_0(M) \). Under this identification, the
space \( \dif \DiffFormSpace^{2n-1}(M) \) is dual to
\( \HamVectorFields(M, \omega) \) and the momentum map for the
action of \( \HamDiffGroup(M, \omega) \) is given by post-composition
of the momentum map for the action of \( \DiffGroup(M, \omega) \)
with the exterior differential.
Accordingly, the momentum map associated with the action of
\( \HamDiffGroup(M, \omega) \) on \( \ConnSpace_\omega(M) \) is given by
\begin{equation}
\label{eq:symplecticConnections:momentumMapHam}
\SectionMapAbb{J}_{\HamDiffGroup}: \ConnSpace_\omega(M) \to
\dif \DiffFormSpace^{2n-1}(M),
\qquad
\nabla + A \mapsto \dif J(\nabla + A) \wedge \frac{\omega^{n-1}}{(n-1)!} \, .
\end{equation}
Let \( \bar{\sigma}_i = \tensor{\sigma}{_i_j^j} \) be the contraction
of \( \sigma \). A straightforward calculation
using (<ref>) yields
\begin{equation}
\tensor{(\dif \bar{\sigma})}{_i^i}
= \frac{1}{2} \tensor{(\dif \sigma)}{_i^i_j^j}
= - 2 \pi^2 \tensor{\bigl(\pontryaginClass(\nabla + A) +
\pontryaginClass(\nabla)\bigr)}{_i^i_j^j} \, .
\end{equation}
Thus, using (<ref>), we can
rewrite the momentum map \( \SectionMapAbb{J}_{\HamDiffGroup} \) as
\begin{equation}\begin{split}
\SectionMapAbb{J}_{\HamDiffGroup}(\nabla + A)
&= \dif J(\nabla + A) \wedge \frac{\omega^{n-1}}{(n-1)!}
\\
&= \frac{1}{2} \tensor{\bigl(\dif J(\nabla + A)\bigr)}{_i^i}
\mu_\omega
\\
&= \frac{1}{4} \tensor{\bigl(\dif \rho(\nabla + A) -
\dif \rho(\nabla)\bigr)}{_i^i} \mu_\omega
+ \frac{\pi^2}{2} \tensor{\bigl(\pontryaginClass(\nabla + A) -
\pontryaginClass(\nabla)\bigr)}{_i^i_j^j} \mu_\omega
\\
&\equiv \bigl(\SectionMapAbb{K}(\nabla + A) -
\SectionMapAbb{K}(\nabla)\bigr) \mu_\omega,
\end{split}\end{equation}
where, in the last line, we introduced the map
\begin{equation}
\label{eq:symplecticConnections:cahenGuttMomentumMap}
\SectionMapAbb{K}: \ConnSpace_\omega(M) \to \sFunctionSpace(M),
\qquad
\nabla \mapsto \frac{1}{2} \left(\nabla_i \rho(\nabla)^i +
\pi^2 \tensor{\pontryaginClass(\nabla)}{_i^i_j^j}\right) \, .
\end{equation}
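On a surface this expression simplifies further: since every \( 4 \)-form
on a \( 2 \)-dimensional manifold vanishes, the Pontryagin term drops out
of (<ref>) and one is left with
\begin{equation}
\SectionMapAbb{K}(\nabla) = \frac{1}{2} \nabla_i \rho(\nabla)^i
\qquad (\dim M = 2),
\end{equation}
consistent with the simplified two-dimensional formula for
\( \SectionMapAbb{J} \) noted above.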
It was shown in [Theorem 1.1]Fox2014 that \( \SectionMapAbb{K} \)
coincides with the Cahen–Gutt momentum map
[Proposition 1.1]CahenGutt2005,Gutt2006 for the action of
the group of Hamiltonian diffeomorphisms on \( \ConnSpace_\omega(M) \).
In other words, \( \SectionMapAbb{J}_{\HamDiffGroup} \) recovers
the Cahen–Gutt momentum map (as a slight reformulation).
Let us record this observation.
The action of the subgroup of Hamiltonian diffeomorphisms on
\( \ConnSpace_\omega(M) \) has a momentum map \( \SectionMapAbb{J}_{\HamDiffGroup}:
\ConnSpace_\omega(M) \to \dif \DiffFormSpace^{2n-1}(M) \) given by
\begin{equation}
\SectionMapAbb{J}_{\HamDiffGroup}(\nabla + A)
= \dif J(\nabla + A) \wedge \frac{\omega^{n-1}}{(n-1)!}
= \bigl(\SectionMapAbb{K}(\nabla + A) - \SectionMapAbb{K}(\nabla)\bigr) \mu_\omega ,
\end{equation}
where \( \SectionMapAbb{K}: \ConnSpace_\omega(M) \to \sFunctionSpace(M) \) is the
Cahen–Gutt momentum map defined
in (<ref>).
Note that \( \SectionMapAbb{J}_{\HamDiffGroup} \) is
\( \HamDiffGroup(M, \omega) \)-equivariant.
Equivariance is, however, no longer the case for the
momentum map \( \SectionMapAbb{J} \)
for the full group of symplectomorphisms.
§.§ Central extension of \( \DiffGroup(M, \omega) \)
As we have seen in <ref>, the momentum map
for an affine symplectic action is, in general, not equivariant.
For the action of \( \DiffGroup(M, \omega) \) on the space of symplectic
connections, we obtain the following.
The non-equivariance \( 2 \)-cocycle \( \Sigma: \VectorFieldSpace(M, \omega)
\times \VectorFieldSpace(M, \omega) \to \R \) associated with the momentum
map \( \SectionMapAbb{J} \) (see
<ref> or
<ref>) is given by
\begin{equation}
\label{eq:symplecticConnections:nonEquivariance:cocycle}
\Sigma(\xi, \eta) =
\frac{1}{2} \kappa\bigl(\rho(\nabla), \commutator{\xi}{\eta}\bigr) -
2 \pi^2 \int_M \tensor{\pontryaginClass}{_k^k_i_j}(\nabla) \xi^i \eta^j
\mu_\omega \, .
\qedhere
\end{equation}
The part of \( \Sigma \) not cohomologous to \( 0 \) is determined by the
contracted Pontryagin class \( \tensor{\pontryaginClass}{_k^k_i_j} \in
\DiffFormSpace^2(M) \) and thus carries topological information of the
symplectic manifold \( (M, \omega) \).
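In particular, if \( \dim M = 2 \), the contracted Pontryagin form vanishes
identically and only the first summand of (<ref>) survives:
\begin{equation}
\Sigma(\xi, \eta)
= \frac{1}{2} \kappa\bigl(\rho(\nabla), \commutator{\xi}{\eta}\bigr)
\qquad (\dim M = 2).
\end{equation}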
It is straightforward to verify that the curvature \( 1 \)-form \( \rho \)
transforms naturally under the action of \( \DiffGroup(M, \omega) \),
that is, we have
\begin{equation}
\rho(\phi \cdot \nabla) = (\phi^{-1})^* \rho(\nabla)
\end{equation}
for every \( \phi \in \DiffGroup(M, \omega) \) and \( \nabla \in
\ConnSpace_\omega(M) \).
Thus, according to (<ref>)
and (<ref>),
the non-equivariance \( 1 \)-cocycle \( \lambda: \DiffGroup(M, \omega) \to
\DiffFormSpace^{2n-1}(M) \slash \dif \DiffFormSpace^{2n-2}(M) \) is given by
\begin{equation}
\lambda(\phi)
= \SectionMapAbb{J}(\phi \cdot \nabla)
= \frac{1}{2} \equivClass*{\Bigl((\phi^{-1})^* \rho(\nabla) -
\rho(\nabla) - \bar{\sigma}(\nabla, \phi \cdot \nabla - \nabla)
\Bigr) \wedge \omega^{n-1}},
\end{equation}
where we recall that \( \bar{\sigma}_i =
\tensor{\sigma}{_i_j^j} \). Differentiating this relation with respect
to \( \phi \), we find for the non-equivariance Lie algebra \( 2 \)-cocycle:
\begin{equation}\begin{split}
\Sigma(\xi, \eta)
&= \kappa\bigl(\tangent_e \lambda (\xi), \eta \bigr)
\\
&= - \frac{1}{2} \dualPair*{\difLie_\xi \rho}{\eta}
+ \dualPair*{\tensor{(\difLie_\xi \nabla)}{_i_p^q}\tensor{R}{_q^p}
+ \tensor{(\difLie_\xi \nabla)}{_j_p^q}\tensor{R}{^j_i_q^p}}{\eta}
\\
&= \frac{1}{2} \dualPair*{\rho}{\commutator{\xi}{\eta}}
\\
&\qquad+ \dualPair*{\bigl(\nabla_i \nabla_p \xi^q +
\tensor{R}{_l_i_p^q} \xi^l\bigr)\tensor{R}{_q^p} +
\bigl(\nabla_j \nabla_p \xi^q +
\tensor{R}{_l_j_p^q} \xi^l\bigr)\tensor{R}{^j_i_q^p}}{\eta},
\end{split}\end{equation}
where we used the
expression (<ref>)
for \( \bar{\sigma} \) and (<ref>)
for the Lie derivative of \( \nabla \); the brackets \( \dualPairDot \)
in the last two lines denote the natural
pairing of \( 1 \)-forms with vector fields by integration against
\( \mu_\omega \). The terms on the right-hand side
involving two partial derivatives cancel, which can be seen using
integration by parts:
\begin{equation}\begin{split}
&\dualPair*{\bigl(\nabla_i \nabla_p \xi^q\bigr)\tensor{R}{_q^p} +
\bigl(\nabla_j \nabla_p \xi^q\bigr)\tensor{R}{^j_i_q^p}}{\eta}
\\
= \int_M \bigl(\nabla_i \nabla_p \xi^q\bigr)\tensor{R}{_q^p} \eta^i
+ \bigl(\nabla_j \nabla_p \xi^q\bigr)\tensor{R}{^j_i_q^p} \eta^i
\\
= - \int_M (\nabla_p \xi^q) (\nabla_i \tensor{R}{_q^p}) \eta^i +
(\nabla_p \xi^q) (\nabla_j \tensor{R}{^j_i_q^p}) \eta^i
\\
= 0,
\end{split}\end{equation}
where we used the fact that \( \nabla_i \eta_j \) is symmetric in \( (i, j) \)
and thus the terms involving the covariant derivatives of \( \eta^i \) vanish.
Indeed, \(0 = (\difLie_\eta \omega)_{ij} =
(\nabla_\eta \omega)_{ij} + \nabla_i \eta_j - \nabla_j \eta_i\) and
\(\nabla_\eta \omega =0\).
Thus, we get
\begin{equation}\label{eq:symplecticConnections:nonEquivariance:cocyclePrelim}
\begin{split}
\Sigma(\xi, \eta)
&= \frac{1}{2} \dualPair*{\rho}{\commutator{\xi}{\eta}}
+ \dualPair*{\bigl(\tensor{R}{_i_j_p^q} \tensor{R}{_q^p} +
\tensor{R}{_i_r_p^q} \tensor{R}{^r_j_q^p}\bigr) \xi^i}{\eta}.
\end{split}\end{equation}
Finally, we find for the contraction of the Pontryagin form
(see (<ref>)):
\begin{equation}
4 \pi^2 \tensor{\pontryaginClass}{_k^k_i_j}
= \tensor{R}{_k^k^p^q}\tensor{R}{_i_j_p_q}
+ \tensor{R}{_k_i^p^q}\tensor{R}{_j^k_p_q}
+ \tensor{R}{_k_j^p^q}\tensor{R}{^k_i_p_q}
= 2 \tensor{R}{^p^q}\tensor{R}{_i_j_p_q}
+ 2 \tensor{R}{_k_i^p^q}\tensor{R}{_j^k_p_q} \, .
\end{equation}
Inserting this relation into the
expression (<ref>)
for the non-equivariance cocycle \( \Sigma \)
yields (<ref>).
Recall from the discussion in <ref>, that
\( 2 \)-cocycles on \( \VectorFieldSpace(M, \omega) \) are sums of
extensions of certain \( 2 \)-cocycles on \( \HamVectorFields(M, \omega) \)
and pull-backs of elements of \( \ExtBundle^2 {\deRCohomology^1(M)}^* \).
Applied to the non-equivariance cocycle \( \Sigma \), we obtain the following.
The class of the non-equivariance cocycle \( \Sigma \) in the second
continuous Lie algebra cohomology of \( \VectorFieldSpace(M, \omega) \)
coincides with the pull-back along the natural map
\( \VectorFieldSpace(M, \omega) \to \deRCohomology^1(M) \) of the
antisymmetric bilinear form
\begin{equation}
\bigl(\equivClass{\alpha}, \equivClass{\beta}\bigr) \mapsto \pi^2 \int_M
\left(\SectionMapAbb{p} (\nabla) \, \varpi^{ij} -
2 \tensor{\pontryaginClass}{_k^k^i^j} (\nabla)\right) \alpha_i \beta_j \,
\mu_\omega
\end{equation}
on \( \deRCohomology^1(M) \), where
\begin{equation}
\SectionMapAbb{p}(\nabla) \defeq \tensor{\pontryaginClass}{_i^i_j^j}(\nabla)
- \frac{1}{\vol_{\mu_\omega}(M)} \int_M \tensor{\pontryaginClass}{_i^i_j^j}
(\nabla) \, \mu_\omega \, .
\qedhere
\end{equation}
It should not come as a big surprise that there is no contribution from the
second Lie algebra cohomology of \( \HamVectorFields(M, \omega) \), because
the momentum map \( \SectionMapAbb{K} \) for the action of the group of
Hamiltonian diffeomorphisms is equivariant.
We conclude that the momentum map for the action of the full group of
symplectomorphisms contains topological information of \( (M, \omega) \)
in terms of the Pontryagin form
while \( \HamVectorFields(M, \omega) \) is not sensitive to these
topological properties. A similar dichotomy has also been observed in
DiezRatiuAutomorphisms for different actions of
symplectomorphism groups.
The prequantum bundle construction in
<ref> shows that the \( 2 \)-cocycle
\( \Sigma \) integrates to a central Lie group extension of
\( \DiffGroup(M, \omega) \).
There exists a central Lie group \( \UGroup(1) \)-extension of the group
\( \DiffGroup(M, \omega) \) of symplectomorphisms whose corresponding
Lie algebra \( 2 \)-cocycle is cohomologous to the non-equivariance
\( 2 \)-cocycle \( \Sigma \).
Note that the central group extension in
<ref> has
been obtained by means of the action of \( \DiffGroup(M, \omega) \)
on the infinite-dimensional space of symplectic connections.
On the other hand, we have seen in
<ref> that
the non-equivariance cocycle \( \Sigma \) is essentially the pull-back
of a cocycle on the finite-dimensional space \( \deRCohomology^1(M) \).
One may thus hope for a finite-dimensional construction of the central
extension of \( \DiffGroup(M, \omega) \).
This is an issue for future research to explore.
§.§ Norm-squared momentum map
In this section, we apply our general results concerning the norm-squared
of the momentum map to the action of the symplectomorphism
group on the space
of symplectic connections.
As discussed in <ref>, the action of the
group \( \DiffGroup(M, \omega) \) of symplectomorphisms leaves
\( \Omega \) invariant and has a momentum map \( \SectionMapAbb{J}:
\ConnSpace_\omega(M) \to \Omega^{2n - 1}(M) \slash \dif \Omega^{2n-2}(M) \)
as calculated in <ref>.
Here, the target of \( \SectionMapAbb{J} \) is identified with the dual
of \( {\VectorFieldSpace(M, \omega)}^* \) by the
pairing (<ref>).
In order to fit this setting into the general framework of
<ref>, we need to realize \( \SectionMapAbb{J} \)
as a map into the space of symplectic vector fields.
For this purpose, let \( j \) be a complex structure
on \( M \) compatible with \( \omega \),
i.e., \( \omega(j \,\cdot, j \,\cdot) =
\omega(\cdot, \cdot) \) and \( \omega(X, j X) > 0 \) for all non-zero
\( X \in \TBundle M \).
Most results of this section hold with minor modification
also when \( j \) is not integrable.
Denote the associated Riemannian metric by \( g(\cdot, \cdot )
= \omega (\cdot, j \cdot ) \).
Using this data, consider the following non-degenerate
pairing on \( \VectorFieldSpace(M, \omega) \):
\begin{equation}
\label{eq:symplecticConnections:pairingOnLieAlgebra}
\kappa(\xi, \eta) = \int_M g(\xi, \eta) \, \mu_\omega \, .
\end{equation}
Relative to this pairing, the momentum
map (<ref>) takes the form
\begin{equation}
\SectionMapAbb{J}: \ConnSpace_\omega(M) \to \VectorFieldSpace(M, \omega),
\quad \nabla + A \mapsto -
\frac{1}{2} \tensor{j}{^i_k} \bigl( \rho(\nabla + A)^k -
\rho(\nabla)^k - \tensor{\sigma(\nabla, A)}{^k_j^j}\bigr) \,,
\end{equation}
where \( \rho_i \) and \( \sigma_{ijk} \) have been defined
in (<ref>)
and (<ref>), respectively.
In the following, it is often convenient to work on the complexified
tangent bundle and use an abstract index notation that is adapted to
the decomposition of \( \TBundle M \tensorProd \C =
\TBundle^{(1,0)} M \oplus \TBundle^{(0,1)} M \) into
\( \pm i \)-eigenspaces of \( j \).
For this purpose, we use capital Latin letters
\( \mathsf{A, B}, \ldots \)
to denote elements of \( \TBundle M \tensorProd \C \), Greek
letters \( \alpha, \beta, \ldots \) for elements of
\( \TBundle^{(1,0)} M \) and overlined Greek letters
\( \bar{\alpha}, \bar{\beta}, \ldots \) for elements of
\( \TBundle^{(0,1)} M \). For example, \( X^\mathsf{A}\) is a complex
vector field and \( Y^{\bar{\alpha}} \) is a \( (0,1) \)-vector field.
Moreover, we use only the symplectic form and not the metric to lower and raise indices.
Using these conventions, the complex structure \( j \) on \( M \)
defines a constant almost complex structure \( \SectionMapAbb{j} \)
on \( \ConnSpace_\omega(M) \) by
\begin{equation}
\begin{array}{ l c }
\mathsf{ABC} & (\SectionMapAbb{j} A)_\mathsf{ABC} \\ \midrule
\alpha\beta\gamma & - \I A_{\alpha\beta\gamma} \\
\bar{\alpha}\beta\gamma & - \I A_{\bar{\alpha}\beta\gamma} \\
\alpha\bar{\beta}\bar{\gamma} & + \I A_{\alpha\bar{\beta}\bar{\gamma}} \\
\bar{\alpha}\bar{\beta}\bar{\gamma} & + \I A_{\bar{\alpha}\bar{\beta}\bar{\gamma}} \\
\end{array}
\end{equation}
and symmetric extension, where \( A \) is the symmetric covariant \( 3 \)-tensor
defined in the text following (<ref>). Here, the
possible components are listed in the first
column, and the corresponding value of \( \SectionMapAbb{j} A \) is the entry
in the second column; for example, the first row is equivalent to
\( (\SectionMapAbb{j} A)_{\alpha\beta\gamma} = - \I A_{\alpha\beta\gamma} \).
Note that this complex structure is not just precomposition of \( A \)
with \( j \), which would have a different sign in the second and third row.
A direct calculation yields \( \Omega(\SectionMapAbb{j} \cdot,
\SectionMapAbb{j} \cdot) = \Omega(\cdot, \cdot) \).
Moreover, in [Proposition 17 and Remark 20]Fuente-Gravy2015
(see also [Lemma 4.9]FutakiOno2018) it has been shown
that \( \Omega(\cdot, \SectionMapAbb{j} \cdot) \) is positive definite
on the complexified \( \DiffGroup(M, \omega) \)-orbit
if the Ricci curvature is non-negative.
In the general setting above, we only used the non-degeneracy of
\( \Omega(\cdot, \SectionMapAbb{j} \cdot) \) in
<ref> to determine
the kernel of the Calabi operators and, for this computation,
non-degeneracy along the
\( \VectorFieldSpace(M,\omega)_\C \)-orbit suffices.
The Levi-Civita connection \( \nabla^j \) associated with
the Riemannian metric defined by \( j \) and \( \omega \) is
a symplectic connection. We say that a compatible complex structure
\( j \) is Cahen–Gutt critical if its Levi-Civita
connection \( \nabla^j \) is a critical point of the norm-squared of the momentum
map \( \norm{\SectionMapAbb{J}}_\kappa^2:
\ConnSpace_\omega(M) \to \R \).
We need the following generalization of
[Lemma 2.2 and 4.9]FutakiOno2018 from
Hamiltonian vector fields to symplectic vector fields.
Let \( (M, \omega, j) \) be a compact Kähler manifold with
Levi-Civita connection \( \nabla \). The following holds:
* For every \( X \in \VectorFieldSpace(M, \omega) \), \( \difLie_X \nabla = 0 \)
if and only if \( X \) is real holomorphic.
* For every \( X + \I Y \in \VectorFieldSpace(M, \omega)_\C \),
\( \difLie_X \nabla + \SectionMapAbb{j} \difLie_Y \nabla = 0 \)
if and only if \( (X + \I Y)^{(1,0)} \) is holomorphic.
Moreover, the map \( X + \I Y \mapsto X - j Y \) yields a surjection
from \( \VectorFieldSpace(M, \omega)_\C \) onto the space of
holomorphic vector fields.
Let \( Z^\mathsf{A} \in \VectorFieldSpace(M, \omega)_\C \). Since
\( \DiffGroup(M, \omega) \) acts on the space of
symplectic connections, we know that
\( (\difLie_Z \nabla)_\mathsf{ABC} \) is a symmetric tensor.
The only independent components are given as follows:
\begin{equation}
\label{eq:symplecticConnections:difLieNabla}
\begin{array}{ l c }
\mathsf{ABC} & (\difLie_Z \nabla)_\mathsf{ABC} \\ \midrule
\alpha\beta\gamma & \nabla_\alpha \nabla_\beta Z_\gamma \\
\bar{\alpha}\beta\gamma & \nabla_{\bar{\alpha}} \nabla_\beta Z_\gamma \\
\alpha\bar{\beta}\bar{\gamma} &
\nabla_\alpha \nabla_{\bar{\beta}} Z_{\bar{\gamma}} \\
\bar{\alpha}\bar{\beta}\bar{\gamma} &
\nabla_{\bar{\alpha}} \nabla_{\bar{\beta}} Z_{\bar{\gamma}} \\
\end{array}
\end{equation}
Here, we used (<ref>) and
the fact that the Riemann curvature of a Kähler metric
has additional symmetry properties, so that, for example,
\( R_{\mathsf{D}\alpha\beta\gamma} \) vanishes. Thus,
\( \difLie_Z \nabla = 0 \) implies, using integration
by parts, that
\begin{equation}
\int_M g^{\alpha \bar{\gamma}} g^{\beta \bar{\delta}} \,
\bigl(\nabla_\alpha Z_\beta\bigr) \bigl(\nabla_{\bar{\gamma}}
\bar{Z}_{\bar{\delta}}\bigr) \, \mu_\omega = 0 \, ,
\end{equation}
hence \( \nabla_\alpha Z_\beta = 0 \). Similarly, we conclude
that \( \nabla_{\bar{\alpha}} Z_{\bar{\beta}} = 0 \).
Summarizing, \( \difLie_Z \nabla = 0 \) is equivalent to
\( \nabla_\alpha Z_\beta = 0 = \nabla_{\bar{\alpha}} Z_{\bar{\beta}} \).
On the other hand, by [Lemma 2.3]Futaki2006, we have
\begin{equation}
\tensor{(\difLie_Z j)}{_{\mathsf{A}}^{\mathsf{B}}} =
- 2 \I \tensor{\delta}{_{\bar{\beta}}^{\mathsf{B}}}
\tensor{\delta}{_{\mathsf{A}}^{\alpha}} \,
\nabla_\alpha Z^{\bar{\beta}}
+ 2 \I \tensor{\delta}{_\beta^{\mathsf{B}}}
\tensor{\delta}{_{\mathsf{A}}^{\bar{\alpha}}} \,
\nabla_{\bar{\alpha}} Z^{\beta} \, .
\end{equation}
Thus, upon lowering the last index, the only non-zero components
of \( \tensor{(\difLie_Z j)}{_{\mathsf{AB}}} \) are:
\begin{equation}
\begin{array}{ l c }
\mathsf{AB} & (\difLie_Z j)_\mathsf{AB} \\ \midrule
\alpha\beta & -2 \I \, \nabla_\alpha Z_\beta \\
\bar{\alpha}\bar{\beta} & 2 \I \, \nabla_{\bar{\alpha}} Z_{\bar{\beta}} \\
\end{array}
\end{equation}
Thus, we see that \( \difLie_Z \nabla = 0 \) if and only if
\( \difLie_Z j = 0 \). In particular, this holds for \( Z = X \)
being a real vector field. This proves (i).
Finally, let \( X, Y \in \VectorFieldSpace(M, \omega) \) be
such that \( \difLie_X \nabla + \SectionMapAbb{j} \difLie_Y \nabla = 0 \).
The definition of \( \SectionMapAbb{j} \)
and (<ref>) imply that
this is equivalent to
\begin{equation}\begin{split}
\nabla_\alpha \nabla_\beta (X_\gamma - \I Y_\gamma)
&= 0 = \nabla_{\bar{\alpha}} \nabla_\beta (X_\gamma - \I Y_\gamma), \\
\nabla_\alpha \nabla_{\bar{\beta}} (X_{\bar{\gamma}} + \I Y_{\bar{\gamma}})
&= 0 = \nabla_{\bar{\alpha}} \nabla_{\bar{\beta}} (X_{\bar{\gamma}}
+ \I Y_{\bar{\gamma}}).
\end{split}\end{equation}
Using integration by parts as above, we see that these equations
themselves are equivalent to \( \nabla_{\bar{\beta}} (X_{\bar{\gamma}}
+ \I Y_{\bar{\gamma}}) = 0 \), that is \( \nabla_{\bar{\beta}} (X^{\gamma}
+ \I Y^{\gamma}) = 0 \) upon lifting the second index, which
proves the first part of (ii).
The second part follows as in <ref>:
\begin{equation}
(\difLie_X j - \difLie_{jY} j)_{{\bar{\alpha}} {\bar{\beta}}}
= (\difLie_X j - j \difLie_Y j)_{{\bar{\alpha}} {\bar{\beta}}}
= 2 \I \nabla_{\bar{\alpha}} X_{\bar{\beta}} -
2 \nabla_{\bar{\alpha}} Y_{\bar{\beta}}
= 2 \I \nabla_{\bar{\alpha}}(X_{\bar{\beta}} +
\I Y_{\bar{\beta}}) = 0.
\qedhere
\end{equation}
This shows that every vector field \( X \) in the stabilizer
\( \VectorFieldSpace(M, \omega)_\nabla \) of the Levi-Civita
connection \( \nabla \) is real holomorphic, and thus Killing.
Hence, \( \kappa \) and \( \SectionMapAbb{j} \) are invariant
under \( \VectorFieldSpace(M, \omega)_\nabla \).
Moreover, the stabilizer of \( \nabla \) under the
\( \VectorFieldSpace(M,\omega)_\C \)-action
projects onto the Lie algebra of
holomorphic vector fields; in particular,
it is finite dimensional, too.
<Ref> implies that a compatible complex
structure \( j \) is Cahen–Gutt critical if and only if
\( \SectionMapAbb{J}(\nabla^j) \in \VectorFieldSpace(M, \omega) \) is
real holomorphic.
Note that our notion of extremality is hence slightly different
from FutakiOno2018,Fox2014.
As a consequence of <ref>
we obtain the following.
Let \( (M, \omega) \) be a compact symplectic manifold and let \( j \)
be a compatible Cahen–Gutt critical
complex structure on \( M \). Assume that the Ricci curvature of
the Levi-Civita connection \( \nabla \) associated with \( g_j \) is non-negative.
Then the Lie algebra of real holomorphic vector fields admits the
following decomposition:
\begin{equation}
\label{eq:symplecticConnections:decompositionAut}
\LieA{h}(M, j) = \LieA{c} \oplus
\bigoplus_{\lambda \neq 0} \LieA{k}_\lambda,
\end{equation}
where
* \( \LieA{c} \) is the Lie subalgebra of
\( \LieA{h}(M, j) \) consisting of
all elements that commute with \( \SectionMapAbb{J}(\nabla) \);
* \( \C \SectionMapAbb{J}(\nabla) \subseteq \LieA{c} \);
* \( \LieA{k}_\lambda \) are eigenspaces of
\( -2 j \difLie_{\SectionMapAbb{J}(\nabla)} \)
with eigenvalue \( \lambda \in \R \) (with the convention that
\( \LieA{k}_\lambda = \set{0} \) if \( \lambda \) is not an
eigenvalue); in particular, $\mathfrak{c} = \mathfrak{k}_0$;
* \( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu} \subseteq
\LieA{k}_{\lambda + \mu} \) if
\( \lambda + \mu \) is an eigenvalue
of \( -2 j \difLie_{\SectionMapAbb{J}(\nabla)} \); otherwise
\( \commutator{\LieA{k}_\lambda}{\LieA{k}_\mu} = 0 \).
Moreover, the Hessian of \( \norm{\SectionMapAbb{J}}^2 \) at
\( \nabla \) is given by
\begin{equation}
\frac{1}{2} \Hessian_\nabla \norm{\SectionMapAbb{J}}^2
(\zeta \ldot \nabla,
\gamma \ldot \nabla) =
\Re \, \dualPair{\zeta}{C^+_\nabla R_\nabla \gamma}_{\C},
\end{equation}
for \( \zeta, \gamma \in \VectorFieldSpace(M, \omega)_\C \) and
\begin{equation}
C_\nabla^\pm = L_\nabla \pm \I Z_\nabla, \qquad
R_\nabla = C^-_\nabla \Matrix{0 & 0 \\ 0 & 1} + \I \,
\bigl(\difLie_{\SectionMapAbb{J}(\nabla)} + Z_\nabla\bigr)
\end{equation}
where \( L_\nabla = - \tangent_\nabla \SectionMapAbb{J} (\SectionMapAbb{j} \,
\difLie_\xi \nabla) \) and \( Z_\nabla =
- \tangent_\nabla \SectionMapAbb{J} (\difLie_\xi \nabla) \).
We can apply <ref>
to obtain a decomposition of the stabilizer
\( (\VectorFieldSpace(M, \omega)_\C)_\nabla \), where \( \nabla \)
is the Levi-Civita connection associated with \( g_j \).
The claims concerning the
decomposition (<ref>)
follow then directly under the map \( (\VectorFieldSpace(M, \omega)_\C)_\nabla
\ni X + \I Y \mapsto X - j Y \in \LieA{h}(M, j) \),
<ref> and the
proof of <ref>.
The expression for the Hessian follows directly from <ref>.
* If one proceeds in an analogous way for the action of the
subgroup of Hamiltonian diffeomorphisms, then one recovers
[Theorem 4.7 and 4.11]FutakiOno2018 as a direct application of <ref>.
* A similar theorem holds for connections that are critical points
of the momentum map squared (seen as a functional on
\( \ConnSpace_\omega(M) \))
without being necessarily the Levi-Civita connection of some compatible
Riemannian metric. However, in this case, we do not know of a result
similar to <ref> that
allows us to identify the stabilizers.
* Instead of using the almost complex structure \( \SectionMapAbb{j} \)
on \( \ConnSpace_\omega(M) \) defined above, one could also work with
the almost complex structure that sends
\( A \in \SymTensorFieldSpace_3(M) \)
to \( A(j \cdot, j \cdot, j \cdot) \).
In this case, the stabilizer of the Levi-Civita connection under the
complexified action is a proper subalgebra of \( \LieA{h}(M, j) \).
* In FutakiOno2018,Fuente-Gravy2016 a slightly different
viewpoint is used: instead of working on the symplectic manifold
\( \ConnSpace_\omega(M) \) of symplectic connections as we do above,
the pull-back of the symplectic form \( \Omega \) along the Levi-Civita
map \(\SectionMapAbb{lc}: j \mapsto \nabla^j\) to the space
\( \SectionSpaceAbb{I}(M, \omega) \) of integrable complex structures
on \( M \) compatible with \( \omega \) is used.
In this setting, the condition of non-negative Ricci curvature is necessary
to guarantee the non-degeneracy of \( \SectionMapAbb{lc}^* \Omega \); see
[Proposition 17]Fuente-Gravy2016.
§ APPLICATION: YANG–MILLS
Let \( G \) be a compact connected Lie group and let \( P \to M \) be a
principal \( G \)-bundle over a closed connected Riemann surface \( M \).
Fix an \( \AdAction \)-invariant pairing on the Lie algebra \( \LieA{g} \)
of \( G \). The space \( \ConnSpace(P) \) of connections on \( P \) is an
affine space modeled on the tame Fréchet space
\( \DiffFormSpace^1(M, \AdBundle P) \)
of \( 1 \)-forms on \( M \) with values in the adjoint bundle
\( \AdBundle P \).
The \( 2 \)-form \( \omega \) on \( \ConnSpace(P) \) defined by the
integration pairing
\begin{equation}
\label{eq:yangMillsSurface:symplecticForm}
\omega_A (\alpha, \beta) = \int_M \wedgeDual{\alpha}{\beta}
\end{equation}
for \( \alpha, \beta \in \DiffFormSpace^1(M, \AdBundle P) \) is a
symplectic form, where \( \wedgeDualDot \) denotes the wedge
product relative to the
\( \AdAction \)-invariant pairing on \( \LieA{g} \).
For \( \alpha, \beta \in \DiffFormSpace^1(M, \AdBundle P) \)
and \( X, Y \in \VectorFieldSpace(M) \), we have
\( \wedgeDual{\alpha}{\beta}(X,Y) = \dualPair{\alpha(X)}{\beta(Y)} -
\dualPair{\alpha(Y)}{\beta(X)} \).
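As a quick check of (weak) nondegeneracy, and of the compatibility with the
Hodge star operator used further below, note that (assuming the fixed
\( \AdAction \)-invariant pairing on \( \LieA{g} \) is positive definite)
\begin{equation}
\omega_A(\alpha, \hodgeStar \alpha)
= \int_M \wedgeDual{\alpha}{\hodgeStar \alpha}
= \norm{\alpha}_{L^2}^2 > 0
\qquad \text{for } \alpha \neq 0 .
\end{equation}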
The natural action on \( \ConnSpace(P) \) of the group \( \GauGroup(P) \)
of gauge transformations of \( P \) is smooth and preserves the symplectic
structure \( \omega \). The \( \AdAction \)-invariant pairing on
\( \LieA{g} \) induces a natural pairing
\begin{equation}
\kappa: \sSectionSpace(\AdBundle P) \times
\sSectionSpace(\AdBundle P) \to \R, \qquad (\phi, \varrho) \mapsto
\int_M \dualPair{\phi}{\varrho} \, \vol_g \, .
\end{equation}
A straightforward calculation verifies that the map
\begin{equation}
\SectionMapAbb{J}: \ConnSpace(P) \to \sSectionSpace(\AdBundle P),
\qquad A \mapsto - \hodgeStar F_A
\end{equation}
is an equivariant momentum map for the \( \GauGroup(P) \)-action on
\( \ConnSpace(P) \), see AtiyahBott1983.
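The interpretation of the norm-squared of this momentum map as the
Yang–Mills action, used next, is a one-line consequence of the definitions
of \( \kappa \) and \( \SectionMapAbb{J} \) (a sketch, using
\( \dualPair{\hodgeStar F_A}{\hodgeStar F_A} \vol_g = F_A \wedge \hodgeStar F_A \)):
\begin{equation}
\norm{\SectionMapAbb{J}}_\kappa^2(A)
= \kappa\bigl(\hodgeStar F_A, \hodgeStar F_A\bigr)
= \int_M \dualPair{\hodgeStar F_A}{\hodgeStar F_A} \, \vol_g
= \int_M F_A \wedge \hodgeStar F_A .
\end{equation}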
The norm-squared of the momentum map
\( \norm{\SectionMapAbb{J}}_\kappa^2(A)
= \int_{M} F_A \wedge \hodgeStar F_A \) is the Yang–Mills action, whose
critical points are, according to <ref>,
precisely the Yang–Mills connections, i.e., connections \( A \)
satisfying \( \dif \hodgeStar F_A = 0 \). This observation goes back
at least to Proposition 4.6 of AtiyahBott1983. Moreover,
the Hodge star operator squares to minus the identity on \( 1 \)-forms
and so yields an almost complex structure
\( \hodgeStar: \DiffFormSpace^1(M, \AdBundle P ) \to
\DiffFormSpace^1(M, \AdBundle P ) \) that is compatible
with \( \omega \). Upon complexification, we obtain a decomposition
\begin{equation}
\DiffFormSpace^1(M, \AdBundle P \tensorProd \C)
= \DiffFormSpace^{1,0}(M,
\AdBundle P) \oplus \DiffFormSpace^{0,1}(M, \AdBundle P)
\end{equation}
in eigenspaces of \( \hodgeStar \) with eigenvalues \( -i \) and \( i \), respectively.
For a connection \( A \), the associated
exterior derivative \( \dif_A:
\DiffFormSpace^0(M, \AdBundle P \tensorProd \C) \to \DiffFormSpace^1(M,
\AdBundle P \tensorProd \C) \) decomposes accordingly into
\( \dif_A = \difp_A + \difpBar_A \). Under the
\( \hodgeStar \)-\( i \)-complex
linear identification \( \DiffFormSpace^1(M, \AdBundle P) \ni
\alpha \mapsto \I \alpha + \hodgeStar \alpha \in
\DiffFormSpace^{0,1}(M, \AdBundle P) \), the
complexified action on the Lie algebra level is the operator
\( - 2 \I \difpBar_A: \GauAlgebra(P)_\C \to
\DiffFormSpace^{0,1}(M, \AdBundle P) \).
Thus, the stabilizer \( \bigl(\GauAlgebra(P)_\C\bigr)_A \)
is identified with
the space \( \holSectionSpace_A(\AdBundle P \tensorProd \C) \) of
holomorphic sections of \( \AdBundle P \tensorProd \C \).
Moreover, on p. 556 of AtiyahBott1983, it was shown that
the eigenvalues of the endomorphism
\( - 2 \I \, \commutator{\hodgeStar F_A}{\cdot} \) on
\( \AdBundle P \) are locally constant, and that one thus
obtains an eigenspace decomposition
\begin{equation}
\label{eq:yangMillsSurface:decompositionAdBundle}
\AdBundle P \tensorProd \C = \bigoplus_{\lambda} \AdBundle_\lambda P \, ,
\end{equation}
where \( \AdBundle_\lambda P \) is the eigenspace of \( - 2 \I \,
\commutator{\hodgeStar F_A}{\cdot} \) corresponding to the eigenvalue
\( \lambda \). Since \( \bigl(\GauAlgebra(P)_\C\bigr)_A \) is
finite-dimensional, we can apply
<ref> to obtain the following result.
Let \( G \) be a compact connected Lie group and let \( P \to M \)
be a principal \( G \)-bundle over a closed connected Riemann
surface \( M \). For every Yang–Mills connection \( A \) on \( P \),
the complex Lie algebra of holomorphic sections of
\( \AdBundle P \tensorProd \C \) admits a decomposition
\begin{equation}
\holSectionSpace_A\bigl(\AdBundle P \tensorProd \C\bigr) =
\bigl(\GauAlgebra(P)_A\bigr)_\C \oplus \bigoplus_{\lambda < 0}
\holSectionSpace_A\bigl(\AdBundle_\lambda P\bigr)
\end{equation}
such that \( \hodgeStar F_A \) lies in the center of
\( \GauAlgebra(P)_A \) and
\begin{equation}
\commutator*{\holSectionSpace_A\bigl(\AdBundle_\lambda P\bigr)}
{\holSectionSpace_A\bigl(\AdBundle_\mu P\bigr)} \subseteq
\holSectionSpace_A\bigl(\AdBundle_{\lambda + \mu} P\bigr),
\end{equation}
with the convention that \(\holSectionSpace_A\bigl(
\AdBundle_{\lambda+\mu} P\bigr)\) is trivial if \( \lambda + \mu \)
is not an eigenvalue of \( - 2 \I \, \commutator{\hodgeStar F_A}{\cdot} \).
This follows from <ref> but for
completeness we give a sketch of a direct proof.
The decomposition (<ref>) of
\( \AdBundle P \tensorProd \C \) induces decompositions on the level of
differential forms:
\begin{equation}
\DiffFormSpace^k(M, \AdBundle P \tensorProd \C) =
\bigoplus_{\lambda} \DiffFormSpace^k(M, \AdBundle_\lambda P).
\end{equation}
As a consequence of the Yang–Mills equation, the operators \( \difpBar_A \)
and \( \commutator{\hodgeStar F_A}{\cdot} \) commute.
Hence, \( \difpBar_A \) decomposes into the sum of operators
\( \difpBar_{A, \lambda}: \DiffFormSpace^0(M, \AdBundle_\lambda P) \to
\DiffFormSpace^1(M, \AdBundle_\lambda P) \) and so
\begin{equation}
\holSectionSpace_A\bigl(\AdBundle P \tensorProd \C\bigr) =
\bigoplus_{\lambda} \holSectionSpace_A\bigl(\AdBundle_\lambda P\bigr) \, .
\end{equation}
By considering appropriate Laplacian operators, one can show that
\( \holSectionSpace_A\bigl(\AdBundle_\lambda P\bigr) \) is isomorphic
to \( \bigl(\GauAlgebra(P)_A\bigr)_\C \) for \( \lambda = 0 \) and
is trivial for \( \lambda > 0 \); see
Lemma 5.9 (iii) and p. 559 of AtiyahBott1983.
If \( P \) is a reduction of a \( G^\C \)-principal bundle
\( P^\C \) to \( G \subseteq G^\C \), the space
\( \holSectionSpace_A\bigl(\AdBundle P \tensorProd \C\bigr) \) is
naturally identified with the space of sections of \( \AdBundle P^\C \)
that are holomorphic with respect to the holomorphic structure
\( \difpBar_A \) on \( P^\C \) induced by the connection \( A \).
Hence, \( \holSectionSpace_A\bigl(\AdBundle P \tensorProd \C\bigr) \)
can be viewed as the stabilizer algebra of \( \difpBar_A \) under
the action of \( \GauGroup(P^\C) \) on the space of holomorphic
structures on \( P^\C \).
It is possible to extend the above results to the case when the
base \( M \) is a compact symplectic manifold of arbitrary dimension;
see Section 4 of Donaldson1985 for the setup of the
infinite-dimensional symplectic framework.
Then global minima and critical points of the norm-squared of the
momentum map correspond to Kähler–Einstein connections and
Hermitian Yang–Mills connections, respectively.
This extension is especially fruitful when coupled to other geometric
structures on the base, such as one of the special Kähler metrics
discussed in <ref>.
For example, we expect that our general results directly yield the
reductiveness obstruction of solutions of the
Kähler–Yang–Mills–Higgs equations, Theorem 3.6 of
AlvarezConsulGarciaFernandezGarciaPrada2019 (note, however, that
the assumption of vanishing first Betti number in that theorem calls
for a careful treatment, so one might expect to again encounter
central extensions of the symplectomorphism group).
§ NOTATION AND CONVENTIONS
Penrose Notation.
In <ref>, we shall make extensive use of
Penrose's abstract index notation. In this notation, indices are
used as labels indicating the type of a tensor and do not
denote the components of a tensor with respect to a local frame.
For example, a vector field is denoted by \( X^i \).
The superscript \( i \) in \( X^i \) does not refer to a particular
component in local coordinates but serves as a label telling us
that \( X \) is a vector field. Similarly, a \( 1 \)-form is
written as \( \alpha_j \). Contraction is indicated by labeling one
covariant index and one contravariant
index with the same letter, e.g., \( \alpha(X) \equiv \alpha_i X^i \).
Thus, the resulting calculus resembles the usual coordinate
expressions but has the important advantage of being completely intrinsic
and coordinate-free.
Indices are raised and lowered using the symplectic form
\( \tensor{\omega}{_i_j} \) as follows:
\begin{align}
\label{flat}
\omega^\flat: X^i &\mapsto X_i \equiv \omega_{ji} X^j \, , \\
\label{sharp}
\omega^\sharp: \alpha_j &\mapsto \alpha^j \equiv \varpi^{ji} \alpha_i \, ,
\end{align}
where \( \varpi^{ij} \) is the Poisson tensor associated with
\( \omega_{ij} \) according to \( \varpi^{ik}\omega_{kj} =
- \tensor{\delta}{^i_j} \).
For a symplectic form \( \omega \) with associated Poisson
tensor \( \varpi \), the Poisson bracket is given by \( \poisson{f}{g} =
\omega(X_f, X_g) = \varpi(\dif f, \dif g) \), where \( X_f \) is the
Hamiltonian vector field satisfying \( X_f \contr \omega = - \dif f \).
We thus have
\begin{equation*}
\poisson{f}{g}
= \varpi^{ij} (\dif f)_i (\dif g)_j
= \varpi^{ij} \omega_{ki} (X_f)^k \omega_{lj} (X_g)^l
= \omega_{kl} (X_f)^k (X_g)^l.
\end{equation*}
In other words, \( \varpi^{ij} \omega_{ki} \omega_{lj} = \omega_{kl} \),
which is equivalent to \( \varpi^{ik}\omega_{kj} =
- \tensor{\delta}{^i_j} \).
Note that \( \omega^\flat \) and \( \omega^\sharp \) are inverses of
each other. The minus sign in the definition of the Poisson tensor
is a consequence of the skew-symmetry of \( \omega_{ij} \) and leads
to some subtle consequences for the index calculus that are different
from the Riemannian context. In particular, the position of the
indices is important even if they are summed-over. For example,
we have \( A_{ij} = \tensor{A}{_i^l} \omega_{lj}\)
and \( B^{jk} = \varpi^{jp} \tensor{B}{_p^k} \) so that
\begin{equation}
A_{ij} \tensor{B}{^{jk}}
= \tensor{A}{_i^l} \omega_{lj} \varpi^{jp} \tensor{B}{_p^k}
= - \tensor{A}{_i^p}\tensor{B}{_p^k} \, .
\end{equation}
Moreover, lowering the index of the identity map \( \tensor{\delta}{_i^j}:
\TBundle M \to \TBundle M \) yields the skew-symmetric map
\( \delta_{ij} = \omega_{ij}: \TBundle M \times_M \TBundle M \to \R \).
Let \( \TensorFieldSpace^r_s(M) \) be the space of \( r \)-times
contravariant and \( s \)-times covariant tensor fields.
An affine connection on \( M \) is a linear map
\begin{equation}
\nabla: \VectorFieldSpace(M) \to \TensorFieldSpace^1_1(M),
\qquad
X^j \mapsto \nabla_i X^j,
\end{equation}
which satisfies the Leibniz rule \( \nabla_i (fX^j) =
{(\dif f)}_i X^j + f \, \nabla_i X^j \).
The covariant derivative extends uniquely to all tensor fields
by requiring \( \nabla _i \) to preserve the type of the tensor
and to be an \( \R \)-linear tensor derivation, i.e.,
\( \nabla_i (t \otimes s) = (\nabla_i t) \otimes s +
t \otimes (\nabla_i s) \) for any \( t, s \in \TensorFieldSpace(M) \),
and to commute with contractions. In abstract Penrose index notation,
the covariant derivative of a tensor field
\( t^{j_1 \dotso j_p}_{k_1 \dotso k_q} \) is
denoted by \( \nabla_i t^{j_1 \dotso j_p}_{k_1 \dotso k_q} \).
The Lie derivative of a connection is defined by the requirement
that it behaves like a derivation on all symbols, i.e., for each given
\( X \in \VectorFieldSpace(M) \), the formula
\begin{equation}
\difLie_X (\nabla_Y Z) = (\difLie_X \nabla)_Y Z + \nabla_{\difLie_X Y} Z
+ \nabla_Y \difLie_X Z,
\end{equation}
for all \( Y,Z \in \VectorFieldSpace(M) \), defines a new covariant
derivative \( (\difLie_X \nabla)_Y \) along the vector field \( Y \).
The torsion of \( \nabla \) is the \( 1 \)-contravariant,
\( 2 \)-covariant tensor field \( \tensor{T}{_i_j^k} \) defined by
\begin{equation}
\label{torsion_definition}
\tensor{T}{_i_j^k} X^i Y^j =
X^i \nabla_i Y^k - Y^j \nabla_j X^k - [X,Y]^k
\quad
\text{for all}\quad X^i,Y^j \in \VectorFieldSpace(M).
\end{equation}
The curvature \( \tensor{R}{_i_j_k^l} \) of the connection
\( \nabla \) is defined by
\begin{equation}
\label{curvature_definition}
\tensor{R}{_i_j_k^l} Z^k = \nabla_i \nabla_j Z^l - \nabla_j \nabla_i Z^l
+ \tensor{T}{_i_j^k} \, \nabla_k Z^l \, .
\end{equation}
Since we will rely heavily on the Penrose notation, this is a good place
to compare it with the standard coordinate free notation. Remember,
the indices are not coordinate components of tensors.
For example, in (<ref>), $X^i \nabla_i Y^k$ actually
means $\nabla_X Y$. So formula (<ref>), even though it
looks like the coordinate expression of the torsion tensor, really
means $T(X,Y)= \nabla_X Y - \nabla_Y X - [X,Y]$, the standard coordinate
free definition of the torsion. This brings us to the interpretation
of (<ref>), which would be a standard formula
had the sub- and superscripts been indices in a coordinate system. Note
that (<ref>) does not state that
$R(X,Y)Z=\nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z +
\nabla_{T(X,Y)}Z$, which is false, even though one is tempted
to interpret it in this manner. To
see how one can recover the standard definition of the curvature from
the Penrose index formula (<ref>), we multiply
both sides by $X^i Y^j$ and get (again, the indices and their position
only reflect what kind of tensor is considered, so the index $l$ in
the computation below is not a “free index” in Penrose notation;
it only tells us that the result is a vector field and, similarly,
we need to interpret $\nabla_X Y^j$ as $(\nabla_X Y)^j$ since the
upper index only indicates that the expression is a vector field):
\begin{align*}
R(X,Y)Z&= X^i Y^j\tensor{R}{_i_j_k^l} Z^k = X^i Y^j\nabla_i \nabla_j Z^l -
X^i Y^j\nabla_j \nabla_i Z^l + X^i Y^j\tensor{T}{_i_j^k} \, \nabla_k Z^l\\
&= Y^j \nabla_X \nabla_j Z^l - X^i \nabla_Y \nabla_i Z^l +
T(X,Y)^k \nabla_k Z^l \\
&= \nabla_X(Y^j \nabla_j Z^l) - (\nabla_X Y)^j \nabla_j Z^l -
\nabla_Y(X^i \nabla_i Z^l) + (\nabla_Y X)^i \nabla_i Z^l+
\nabla_{T(X,Y)}Z \\
&= \nabla_X \nabla_Y Z - \nabla_{\nabla_X Y} Z -\nabla_Y \nabla_X Z +
\nabla_{\nabla_Y X} Z + \nabla_{T(X,Y)}Z \\
&= \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z +
\nabla_{T(X,Y)-\nabla_X Y + \nabla_Y X}Z \\
&= \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z -
\nabla_{[X,Y]}Z\,,
\end{align*}
which is the definition of the curvature tensor. This simple computation
illustrates the power of the Penrose notation: (<ref>)
looks like the correct local formula in coordinates for the curvature
tensor, whereas, in reality, it gives an intrinsic expression of the
curvature tensor and one can recover the classical definition after a simple
computation. It is in this spirit that all the formulas that appear later
on should be interpreted; they have Penrose indices, which means that they
are intrinsic, and the index free expressions can be easily obtained
after a computation analogous to the one above.
Lie Group and Lie Algebra Actions. The
left (right) action of a Lie group $G$ on a manifold $M$ is denoted
by $(g,m) \mapsto g.m$ ($m.g$) for $g \in G$ and $m \in M$. The induced
left (right) Lie algebra action of $\mathfrak{g}$, the Lie algebra of $G$,
on $M$ is denoted by $(\xi,m) \mapsto \xi.m$ ($m.\xi$) for $\xi \in
\mathfrak{g}$ and $m \in M$, where
\[
\xi.m \defeq \xi^*(m) \defeq
\left.\frac{d}{dt}\right|_{t=0} \exp(t \xi).m
\]
is the value of the fundamental vector field (or infinitesimal
generator) \( \xi^* \) defined by $\xi$ at $m$; analogous notation
for a right action. Recall
that for left (right) Lie algebra actions we have $[\xi^*, \eta^*] =
- [\xi, \eta]^*$ ($[\xi^*, \eta^*] =[\xi, \eta]^*$).
Throughout the paper we think of $\operatorname{U}(1)
= S^1$ as $\mathbb{R}/\mathbb{Z}$ and hence write the group multiplication additively.
Conventions in Symplectic Geometry.
Since the sign conventions in symplectic geometry are not uniform,
we specify them at the outset. The canonical one-form on the cotangent
bundle is in canonical local cotangent bundle coordinates
$(q^i, p_i)$ equal to $\theta =p_i {\rm d}q^i$ and the symplectic
form is $\omega = {\rm d}\theta = {\rm d} p_i \wedge {\rm d}q^i$.
The Hamiltonian vector field $X_h$ of a function $h$ on a general
symplectic manifold $(M,\omega)$ is hence defined by
${\rm d}h = - X_h \contr \omega$, and Hamilton's equations
in Poisson bracket form are $\dot{f}=\{h,f\}$ for any smooth function $f$,
which, in local Darboux coordinates $(q^i, p_i)$ on $M$
(i.e., $\omega = {\rm d} p_i \wedge {\rm d}q^i$) are the standard
Hamilton equations $\frac{dq^i}{dt} = \frac{\partial h}{\partial p_i}$,
$\frac{dp_i}{dt} = -\frac{\partial h}{\partial q^i}$.
We have $[X_f, X_g] = X_{\{f,g\}}$ for any $f,g \in C^{\infty} (M)$.
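As a quick consistency check of these sign conventions (a standard computation,
spelled out here for convenience), write
$X_h = a^i \frac{\partial}{\partial q^i} + b_i \frac{\partial}{\partial p_i}$
in Darboux coordinates. Then
\begin{equation}
X_h \contr \omega = b_i \, {\rm d}q^i - a^i \, {\rm d}p_i,
\qquad\text{so}\qquad
{\rm d}h = - X_h \contr \omega
= a^i \, {\rm d}p_i - b_i \, {\rm d}q^i
= \frac{\partial h}{\partial p_i}\,{\rm d}p_i + \frac{\partial h}{\partial q^i}\,{\rm d}q^i ,
\end{equation}
and comparing coefficients gives $a^i = \partial h/\partial p_i$ and
$b_i = -\partial h/\partial q^i$, which are exactly the Hamilton equations stated above.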
The trace of a \( 2 \)-form \( \alpha \) with respect to \( \omega \)
is defined by first raising the second index of \( \alpha \)
with \( \omega \) and then taking the ordinary trace of the
resulting endomorphism of \( \TBundle M \), that is,
\( \tr_\omega (\alpha) \defeq \tensor{\alpha}{_i^i} \).
The following formula
\begin{equation}
\label{eq:symplectic:formWedgeOmegaNMinus1}
\alpha \wedge \frac{\omega^{n-1}}{(n-1)!} =
\frac{1}{2} \tr_\omega (\alpha)\, \mu_\omega \, ,
\end{equation}
where \( \mu_\omega = \omega^n/n! \) is the volume
form on the \(2n\)-dimensional manifold \( M \) induced by \( \omega \),
will often be used in <ref>;
it is checked using a canonical basis in each tangent space.
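As a quick sanity check of this formula in the lowest-dimensional case
$n = 1$ (so that $\omega^{n-1}/(n-1)! = 1$ and $\mu_\omega = \omega$): on a
surface every $2$-form can be written as $\alpha = f\,\omega$ for some
function $f$, and then
\begin{equation}
\tr_\omega(\alpha) = \tensor{\alpha}{_i^i} = \varpi^{ik}\alpha_{ik}
= f\,\varpi^{ik}\omega_{ik} = 2f
\end{equation}
by the relation $\varpi^{ik}\omega_{kj} = -\tensor{\delta}{^i_j}$ and the
skew-symmetry of $\omega_{ij}$, so that both sides of the formula equal $f\,\omega$.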
For all \( 1 \)-forms \( \sigma, \tau \), we have
\begin{equation}
\label{eq:symplectic:symplecticHodge1Form}
\omega(\sigma, \tau) \, \mu_\omega = \sigma \wedge \tau \wedge
\frac{\omega^{n-1}}{(n-1)!} \, .
\end{equation}
A Lie group action on the symplectic manifold $(M, \omega)$ is called
symplectic or canonical if the diffeomorphism on $M$ defined by each
$g \in G$ preserves the symplectic form $\omega$ on $M$. This implies
$\dif (\xi^*\contr \omega) = \difLie_{\xi^*} \omega = 0$, where $\difLie$
denotes the Lie derivative; this condition
is equivalent to the action being symplectic if the Lie group $G$ is connected.
We use a weakly nondegenerate pairing $\kappa: \mathfrak{g}^\ast \times
\mathfrak{g} \rightarrow \mathbb{R}$ and think of $\mathfrak{g}^\ast$
as the “dual” of $\mathfrak{g}$ (even though it is not the functional
analytic dual in infinite dimensions); nondegenerate always means weakly
nondegenerate. The momentum map $J:M \rightarrow
\mathfrak{g}^\ast$ is defined by the requirement $\xi^\ast = X_{J_\xi}$
for any $\xi \in \mathfrak{g}$, where $J_\xi (m):= \kappa (J(m), \xi)$
for any $m \in M$. Thus, $J$ is infinitesimally equivariant if and only
if it is an anti-Poisson map, i.e., $\{ J_\xi , J_\eta \} +
J_{[ \xi , \eta ]} = 0$ for any $\xi, \eta \in \mathfrak{g}$.
# Using Experimentally Calibrated Regularized Stokeslets to Assess Bacterial
Flagellar Motility Near a Surface
Orrin Shindell (Trinity University, San Antonio, TX, USA), Hoa Nguyen (Trinity University, San Antonio, TX, USA), Nicholas Coltharp (Trinity University, San Antonio, TX, USA), Frank Healy (Trinity University, San Antonio, TX, USA), Bruce Rodenborn (Centre College, Danville, KY, USA)
###### Abstract
The presence of a nearby boundary is likely to be important in the life cycle
and evolution of motile flagellate bacteria. This has led many authors to
employ numerical simulations to model near-surface bacterial motion and
compute hydrodynamic boundary effects. A common choice has been the method of
images for regularized Stokeslets (MIRS); however, the method requires
discretization sizes and regularization parameters that are not specified by
any theory. To determine appropriate regularization parameters for given
discretization choices in MIRS, we conducted dynamically similar macroscopic
experiments and fit the simulations to the data. In the experiments, we
measured the torque on cylinders and helices of different wavelengths as they
rotated in a viscous fluid at various distances to a boundary. We found that
differences between experiments and optimized simulations were less than 5%
when using surface discretizations for cylinders and centerline
discretizations for helices. Having determined optimal regularization
parameters, we used MIRS to simulate an idealized free-swimming bacterium
constructed of a cylindrical cell body and a helical flagellum moving near a
boundary. We assessed the swimming performance of many bacterial morphologies
by computing swimming speed, motor rotation rate, Purcell’s propulsive
efficiency, energy cost per distance, and a new metabolic energy cost defined
to be the energy cost per body mass per distance. All five measures predicted
the same optimal flagellar wavelength independently of body size and surface
proximity. Although the measures disagreed on the optimal body size, they all
predicted that body size is an important factor in the energy cost of
bacterial motility near and far from a surface.
## I Introduction
Living organisms emerge, evolve, and reside within habitats, and the physical
interactions among organisms and their environments impose selective forces on
their evolution. In their low Reynolds number surroundings, bacteria such as
Escherichia coli and Pseudomonas aeruginosa have evolved a mechanical motility
system to propel themselves through fluids. This system consists of one or
more helical flagella, and these flagellar organelles are attached to the cell
body by rotary nanomotors. Flagellar motor rotation is driven by an ion flow
through the motor, causing the flagellum and the bacterial cell body to rotate
in opposite directions [1]. A bacterium swimming through a fluid can be
described as a non-inertial system in which the mechanical power output by the
motor is instantaneously dissipated by fluid drag on the body and flagellar
filaments. The interaction between the bacterium and the fluid generates a
flow that results in the net motion of the bacterium. Different flows can be
more or less favorable to the survival of an organism [2]; and the presence of
a surface introduces boundary effects that modify how a swimming cell
interacts with the fluid. We consider here the example of a unicellular motile
flagellate bacterium swimming through a fluid near to a surface and how the
conformation of the bacterial cell body and the flagellar organelle may be
optimized for such an environment.

The efficiency of the bacterial motility
system has been the focus of numerous theoretical [3, 4, 5], computational [6,
7, 8, 9, 10, 11], and experimental works [12, 13, 14]. In an early paper on
swimming efficiency, E. Purcell discussed two measures: the propulsive
efficiency (Purcell efficiency) and the energy consumed during bacterial
motion per body mass [3]. The Purcell efficiency–a specialized form of the
Lighthill efficiency [15] for rotary motor-driven bacterial propulsion–is
defined as the ratio of the least power needed to translate a bacterial body
against fluid drag to the total power output by the motor during motion of the
bacterium. Most work has focused on the Purcell efficiency because it is a
scale-independent function of the geometries of the cell body and flagellum.
One shortcoming of this measure, however, is that it is independent of the
motor’s response to an external load imposed by the environment and therefore
cannot assess the biological fitness of the bacterial motor. Another measure
of bacterial performance used by a few authors is the distance traveled by a
bacterium per energy input by the motor [13, 16], which provides a different
means of evaluating fitness, as explained below.

In this work, we investigate
and compare predictions of the optimal bacterial motility system made by five
measures. The first two measures are related directly to the motion of a
bacterium: the swimming speed and the motor rotation frequency. Bacteria live
in an environment where nutrients diffuse on time and length scales comparable
to bacterial motion. To effectively achieve chemotaxis, bacteria must move
quickly enough to sample their chemical environment before it is randomized by
diffusion [3, 11]. The bacterial motor has a characteristic frequency response
that depends on the external torque load [17, 18, 19, 20]. At low frequencies,
small changes in applied load correspond to large changes in operating
frequency, whereas at high frequencies, small changes in load correspond
to smaller changes in frequency. In the low speed regime, the motion may be
unreliable because small changes in applied load that occur, for example, by
approaching a boundary could lead to the motor stalling. However, the low
speed regime is more thermodynamically efficient than the high speed regime.
These two competing effects must be balanced to achieve a strong swimming
performance.

The other three performance measures we studied are based on the
mechanical energy cost to achieve motility: the Purcell inefficiency (or the
inverse of the Purcell efficiency), the inverse of distance traveled per
energy input, and the metabolic energy cost, which we define to be the energy
output by the motor per body mass per distance traveled. Each of these
measures compares the ratio of the power output of the bacterial motor to the
performance of a particular task. The rationale for introducing the metabolic
cost function is that it measures the actual energetic cost to the organism to
perform a specific biologically relevant task, i.e., translation through the
fluid. Moreover, the metabolic energy cost depends upon the rotation speed of
the motor and, because the bacterial motor has a different responses to
different external conditions, predicts different optimal morphologies based
on the environment than the other measures. To determine the values of
performance measures attained by different bacterial geometries, we employed
the method of regularized Stokeslets [21] and the method of images for
regularized Stokeslets (MIRS) [22], which includes the effect of a solid
boundary. Employing MRS and MIRS requires determining values for two kinds of
free parameters: those associated with computation and those associated with
the biological system. As with any computational method, the bacterial
structure in the simulation is represented as a set of discrete points. The
body forces acting at those points are expressed as a vector force multiplied
by a regularized distribution function, whose width is specified by a
regularization, or “blob” parameter. Though other simulations have produced
numerical values for dynamical quantities like torque [23] that are within a
reasonable range for bacteria, precise numbers are not possible without an
accurately calibrated method. There is no known theory that predicts the
relationship between the discretization and regularization parameters, though
one benchmarking study showed that MRS simulations could be made to match the
results of other numerical methods [24]. To determine the optimal
regularization parameter for chosen discretization sizes, we performed
dynamically similar macroscopic experiments using the two objects from our
model bacterium: a cylinder and a helix, see Fig. 1. Such an approach was
previously used to evaluate the accuracy of various computational and
theoretical methods for a helix [25]. By measuring values of the fluid torque
acting on rotating cylinders near a boundary, we verified the theory of
Jeffrey and Onishi [26], which in turn we used to calibrate the ratio of
discretization to regularization size in MRS and MIRS simulations of rotating
cylindrical cell bodies. For helices there are no exact analytical results. To
determine regularization parameters for helices we discretized them along
their centerlines and fit simulation results directly to experimental
measurements. Calibrating our simulations of rotating cylinders and helices
with the experiments allowed us to build a bacterial model with a cylindrical
cell body and a helical flagellum whose discretization and regularization
parameter are optimized for each part. To impose motion on the bacterial
model, we needed only to specify the motor rotation – a consequence of there
being no body forces acting on the bacterium [23]. The motor rotation rate,
however, depends upon the external load [17, 18, 19, 13]. In our simulations,
we ensured the motor rotation rate and the total torque acting on the motor
match a point on the experimentally determined torque-speed response curve
reported in the literature [17, 20]. The dynamical quantities output from the
simulations were then used to compute performance measures for different
bacterial geometries at various distances from the boundary.

Our paper is
organized as follows: Sec. II discusses our implementation of the MRS and the
MIRS, our use of dynamically similar experiments to calibrate the simulations,
and our determination of the torque-speed response curve for the motor; Sec.
III compares our five fitness measures: free swimming speed, motor frequency,
inverse Purcell efficiency, energy per distance and metabolic cost per
distance; and Sec. IV discusses the predictions made by each fitness measure
and comments on future directions of our work.
## II Materials and Methods
### II.1 Numerical Methods
Bacterial motility driven by helical flagella often involves multiple flagella,
and cell bodies may be spherical, cylindrical, or helical [27]. We reduced the
complexity by considering a simpler biomechanical system of a regular
cylindrical body to which a single, uniform flagellum is attached, as shown in
Fig. 1. This simple system, however, contains the same essential geometric
factors as some real bacteria such as E. coli, which have a long rod-shaped
body and helical flagella that bundle together, forming a single helix. Our
goal was to assess how the performance of our model organism changes when its
geometrical parameters and distance to an infinite plane wall are varied in
numerical simulations. We quantified the performance of different models by
computing speed, motor rotation rate, and the three energy cost measures.
Figure 1: Schematic of our model bacterium with flagellar radius $R$,
wavelength $\lambda$, axial length $L$, and filament radius $a$. The body of
the bacterium was modeled as a cylinder with radius $r$ and length $\ell$.
Each flagellum was modeled as a regular helix that tapers to zero radius at
the point it attaches to the body. Our simulations used a surface
discretization of regularized Stokeslets to represent the cylinder and a
string of regularized Stokeslets along the centerline of the flagellum. The
inset represents a radially symmetric blob function described in Sec. II.1.1
that is used to spread the force at a given point on the flagellar centerline.
For the purpose of illustration, we show the blob function of two variables
whose width is controlled by the regularization parameter $\epsilon_{f}$.
Figure 2: Our model bacterium had a cylindrical cell body and a helical
flagellum, and 25 different cell body sizes and 18 different flagellar
wavelengths were used, as described in Table 1. Three cell bodies with the
smallest, average, and largest volumes, respectively, are shown on the right,
whereas the three flagella with the shortest, average, and longest wavelengths
are presented on the left. The middle shows an example of one such model,
which has the smallest body and the longest wavelength flagellum.
We composed our model of a bacterium with a cylindrical cell body and a
tapered left-handed helical flagellum as shown in Fig. 1 and Fig. 2. The
flagellar centerline is described by
$x(s)=(1-e^{-k^{2}s^{2}})R\sin(ks+\theta),\qquad
y(s)=(1-e^{-k^{2}s^{2}})R\cos(ks+\theta),\qquad z(s)=s$ (1)
where $0\leq s\leq L$, $L$ is the axial length in the $z$-direction, $k$ is
the wavenumber $2\pi/\lambda$ with $\lambda$ the wavelength, and $\theta$ is
the phase angle of the helical flagellum, which was sampled at 16 evenly spaced phases. The
parameter values used for the bacterium models shown in Fig. 2 are given in
Table 1.
Table 1: Parameters used in numerical simulations.
Parameter | Symbol | Value | Unit | Reference
---|---|---|---|---
Dynamic viscosity of the fluid | $\mu$ | $0.93$ | $cP$ |
Cell body (Cylinder) | | | |
length | $\ell$ | (a) | $\mu m$ | [20]
radius | $r$ | (b) | $\mu m$ | [20]
optimal discretization factor | $\gamma_{c}$ | $6.4$ | |
discretization size | $ds_{c}$ | $0.096$ | $\mu m$ |
regularization parameter | $\epsilon_{c}=ds_{c}/\gamma_{c}$ | $0.015$ | $\mu m$ |
Flagellum (Helix) | | | |
axial length | $L$ | $8.3$ | $\mu m$ | [20]
wavelength | $\lambda$ | (c) | $\mu m$ |
helix radius | $R$ | $0.2$ | $\mu m$ | [20]
filament radius | $a$ | $0.012$ | $\mu m$ | [20]
initial motor frequency | $\Omega_{m}/(2\pi)$ | $154$ | $Hz$ | [20]
optimal filament factor | $\gamma_{f}$ | $2.139$ | |
regularization parameter | $\epsilon_{f}=\gamma_{f}a$ | $0.026$ | $\mu m$ |
discretization size | $ds_{f}=\epsilon_{f}$ | $0.026$ | $\mu m$ |
distance from flagellar axis to wall | $d$ | (d) | $\mu m$ |
(a) $\ell\in$ {1.9, 2.2, 2.5, 2.8, 3.1} $(\mu m)$
(b) $r\in$ {0.395, 0.4175, 0.44, 0.4625, 0.485} $(\mu m)$
(c) $\lambda\in$ {0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.02, 2.22, 2.3, 2.42, 2.6,
2.9, 3.2, 3.6, 4.0, 5.0, 7.0, 9.0 } $(\mu m)$
(d) $d\in$ {0.55, 0.62, 0.71, 0.82, 0.96, 1.12, 1.32, 1.56, 1.85, 2.20, 2.26,
2.52, 2.81, 3.14, 3.5, 3.93, 4.4, 4.93, 5.53, 6.2, 8.2, 10.2} $(\mu m)$
#### II.1.1 Method of regularized Stokeslets
The microscopic length and velocity scales of bacteria ensure that fluid
motion at that scale can be described using the incompressible Stokes
equations. We used the MRS in three dimensions [21] to compute the fluid-
bacterium interactions due to the rotating flagellum in free space at steady
state:
$\mu\triangle\mathbf{u}(\mathbf{x})-\nabla p(\mathbf{x})=-\mathbf{F}(\mathbf{x}),\qquad
\nabla\cdot\mathbf{u}(\mathbf{x})=0$ (2)
$\mathbf{u}$ is the fluid velocity, $p$ is the fluid pressure, and $\mu$ is
the dynamic viscosity. $\mathbf{F}$ is the body force represented as
$\mathbf{f}_{k}\phi_{\epsilon}(\mathbf{x}-\mathbf{x}_{k})$ where
$\mathbf{f}_{k}$ is a point force at a discretized point $\mathbf{x}_{k}$ of
the bacterium model. In our simulations, we used the blob function
$\phi_{\epsilon}(\mathbf{x}-\mathbf{x}_{k})=\frac{15\epsilon^{4}}{8\pi(r_{k}^{2}+\epsilon^{2})^{\frac{7}{2}}}$
where $r_{k}=\left\lVert\mathbf{x}-\mathbf{x}_{k}\right\rVert$. This radially
symmetric smooth function depends on a regularization parameter $\epsilon$
which controls the spread of the point force $\mathbf{f}_{k}$. Given $N$ such
forces, the resulting velocity at any point $\mathbf{x}$ in the fluid can be
computed as
$\mathbf{u}(\mathbf{x})=\frac{1}{8\pi\mu}\sum_{k=1}^{N}\left[\frac{\mathbf{f}_{k}(r_{k}^{2}+2\epsilon^{2})}{(r_{k}^{2}+\epsilon^{2})^{\frac{3}{2}}}+\frac{(\mathbf{f}_{k}\cdot(\mathbf{x}-\mathbf{x}_{k}))(\mathbf{x}-\mathbf{x}_{k})}{(r_{k}^{2}+\epsilon^{2})^{\frac{3}{2}}}\right]=\frac{1}{8\pi\mu}\sum_{k=1}^{N}S_{\epsilon}(\mathbf{x},\mathbf{x}_{k})\mathbf{f}_{k}$
(3)
Evaluating Eq. 3 $N$ times, once for each $\mathbf{x}_{k}$, yields a $3N\times
3N$ linear system of equations for the velocities of the model points. In the
limit as $\epsilon$ approaches 0, the resulting velocity $\mathbf{u}$
approaches the classical singular Stokeslet solution. In practice, the
specific choice of $\epsilon$ may depend on the discretization or the physical
thickness of the structure. In our bacterium model, we discretized the cell
body as $N_{c}$ points on the surface of a cylinder, and we modeled the
flagellum as $N_{f}$ points distributed uniformly along the arclength of the
centerline. In Sec. III, we present the optimal regularization parameter for
the cylindrical cell body, which we obtained by calibrating the simulations against
experiments and theory. The regularization parameter for the helical flagellum
was found by calibrating simulations with experiments, since there is no exact
theory for rotating helices, as presented in Sec. II.3.
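To make the preceding concrete, the following minimal NumPy sketch (our own illustration with hypothetical function names, not the authors' code) evaluates the regularized-Stokeslet velocity of Eq. 3 for the blob function above and solves the $3N\times 3N$ system for the forces when velocities are prescribed at the model points.

```python
import numpy as np

def stokeslet_block(x, xk, eps, mu):
    """3x3 block (1/(8 pi mu)) S_eps(x, x_k) for the blob function used in Eq. 3."""
    dx = x - xk
    r2 = float(np.dot(dx, dx))
    denom = (r2 + eps**2) ** 1.5
    return ((r2 + 2.0 * eps**2) * np.eye(3) + np.outer(dx, dx)) / (8.0 * np.pi * mu * denom)

def mrs_velocity(x, points, forces, eps, mu):
    """Velocity at a field point x induced by N regularized Stokeslets (Eq. 3)."""
    return sum(stokeslet_block(x, xk, eps, mu) @ fk for xk, fk in zip(points, forces))

def solve_forces(points, velocities, eps, mu):
    """Assemble and invert the 3N x 3N system: prescribed velocities -> forces f_k."""
    N = len(points)
    A = np.zeros((3 * N, 3 * N))
    for j in range(N):
        for k in range(N):
            A[3 * j:3 * j + 3, 3 * k:3 * k + 3] = stokeslet_block(points[j], points[k], eps, mu)
    return np.linalg.solve(A, np.asarray(velocities).ravel()).reshape(N, 3)
```

The same building blocks carry over to the wall-bounded case by replacing `stokeslet_block` with the image-system kernel $S^{*}_{\epsilon}$ of the next subsection.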
#### II.1.2 Method of images for regularized Stokeslets
We used the method of images for regularized Stokeslets (MIRS)[22] to solve
the incompressible Stokes equations (Eq. 2) and simulate bacterial motility
near a surface. In the method, the no-slip boundary condition on an infinite
plane wall is satisfied by imposing a combination of a Stokeslet, a Stokeslet
doublet, a potential dipole, and rotlets at the image point
$\mathbf{x}^{*}_{k}$ of each discretized point $\mathbf{x}_{k}$. The image
point $\mathbf{x}^{*}_{k}$ is the point obtained by reflecting
$\mathbf{x}_{k}$ across the planar surface. The resulting velocity at any
point $\mathbf{x}$ in the fluid bounded by a plane can be found in Ref. [22]
and written in a compact form similar to Eq. 3:
$\mathbf{u}(\mathbf{x})=\frac{1}{8\pi\mu}\sum_{k=1}^{N}S^{*}_{\epsilon}(\mathbf{x},\mathbf{x}_{k})\mathbf{f}_{k}$
(4)
#### II.1.3 Force-free and torque-free models
On a free-swimming bacterium, the only external forces acting are due to the
fluid-structure interaction. A bacterium is a non-inertial system so the net
external force and net external torque acting on it must vanish. This means
that $\mathbf{F}_{c}+\mathbf{F}_{f}=\mathbf{0}$ and
$\bm{\tau}_{c}+\bm{\tau}_{f}=\mathbf{0}$, where $\mathbf{F}_{c}$ /
$\bm{\tau}_{c}$ and $\mathbf{F}_{f}$ / $\bm{\tau}_{f}$ represent,
respectively, the net fluid forces and torques acting on the cell body and
flagellum. These force-free and torque-free constraints require the cell body
and flagellum to counterrotate relative to each other. In our simulations, the
point connecting the cell body and the flagellum $\mathbf{x}_{r}$ represented
the motor location, and was used as the reference point for computing torque
and angular velocity. Given an angular velocity $\mathbf{\Omega}_{m}$ of the
motor, the relationship between the lab frame angular velocities of the
flagellum and the cell body is
$\mathbf{\Omega}_{f}=\mathbf{\Omega}_{c}+\mathbf{\Omega}_{m}$ [23]. Since
$\mathbf{\Omega}_{m}$ is the relative rotational velocity of the flagellum
with respect to the cell body, the resulting velocity
$\mathbf{\tilde{u}}(\mathbf{x}_{k})$ at a discretized point $\mathbf{x}_{k}$
on the flagellum ($k=1,...,N_{f}$) can be computed as
$\mathbf{\Omega}_{m}\times\mathbf{x}_{k}$ (this velocity is set to zero at a
discretized point on the cell body). Using the MRS (or MIRS) and the six added
constraints from the force-free and torque-free conditions, we formed a
$(3N+6)\times(3N+6)$ linear system of equations to solve for the translational
velocity $\mathbf{U}$ and angular velocity $\mathbf{\Omega}_{c}$ of the cell
body and the internal force $\mathbf{f}_{k}$ acting at the discretized point
$\mathbf{x}_{k}$ of the model:
$\mathbf{\tilde{u}}(\mathbf{x}_{j})=\frac{1}{8\pi\mu}\sum_{k=1}^{N}G_{\epsilon}(\mathbf{x}_{j},\mathbf{x}_{k})\mathbf{f}_{k}-\mathbf{U}-\mathbf{\Omega}_{c}\times(\mathbf{x}_{j}-\mathbf{x}_{r}),\quad
j=1,\dots,N,\qquad
\sum_{k=1}^{N}\mathbf{f}_{k}=\mathbf{0},\quad\sum_{k=1}^{N}(\mathbf{x}_{k}-\mathbf{x}_{r})\times\mathbf{f}_{k}=\mathbf{0}$
(5)
where $G_{\epsilon}$ is $S_{\epsilon}$ from Eq. 3 for swimming in a free space
or $S^{*}_{\epsilon}$ from Eq. 4 for swimming near a plane wall. Each
$\mathbf{f}_{k}$ represents a point force acting at point $\mathbf{x}_{k}$,
which is in principle an internal contact force due to interactions with the
points on the bacterium that neighbor $\mathbf{x}_{k}$. Each $\mathbf{f}_{k}$
is balanced by the hydrodynamic drag that arises from a combination of viscous
forces and pressure forces exerted on the point $\mathbf{x}_{k}$ by the fluid
(Eq. 2). By computing each $\mathbf{f}_{k}$, we were able to deduce the fluid
interaction with each point of the bacterial model. Eq. 5 shows that the
calculated quantities $\mathbf{U}$, $\mathbf{\Omega}_{c}$, $\mathbf{F}_{c}$,
and $\bm{\tau}_{c}$ depend linearly on the angular velocity
$\mathbf{\Omega}_{m}$ since
$\mathbf{\tilde{u}}(\mathbf{x}_{j})=\mathbf{\Omega}_{m}\times\mathbf{x}_{j}$.
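As an illustration of how the $(3N+6)\times(3N+6)$ system in Eq. 5 can be assembled, the sketch below (schematic, with our own names; `g_block(xj, xk)` stands for $\frac{1}{8\pi\mu}G_{\epsilon}(\mathbf{x}_{j},\mathbf{x}_{k})$, e.g. the free-space `stokeslet_block` above) appends the six unknowns $\mathbf{U}$ and $\mathbf{\Omega}_{c}$ and the six force-free and torque-free constraint rows to the Stokeslet blocks.

```python
import numpy as np

def cross_matrix(v):
    # Matrix [v]_x such that cross_matrix(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_free_swimmer(points, x_r, u_tilde, g_block):
    """Solve Eq. 5 for the forces f_k, the body velocity U, and Omega_c.

    points  : (N, 3) discretization points of the body and flagellum
    x_r     : motor reference point
    u_tilde : (N, 3) prescribed velocities (Omega_m x x_k on the flagellum, 0 on the body)
    g_block : callable (x_j, x_k) -> 3x3 block of (1/(8 pi mu)) G_eps
    """
    N = len(points)
    A = np.zeros((3 * N + 6, 3 * N + 6))
    b = np.zeros(3 * N + 6)
    for j, xj in enumerate(points):
        rows = slice(3 * j, 3 * j + 3)
        for k, xk in enumerate(points):
            A[rows, 3 * k:3 * k + 3] = g_block(xj, xk)
        A[rows, 3 * N:3 * N + 3] = -np.eye(3)                  # the -U term
        A[rows, 3 * N + 3:3 * N + 6] = cross_matrix(xj - x_r)  # -Omega_c x (x_j - x_r)
        b[rows] = u_tilde[j]
    for k, xk in enumerate(points):                            # force- and torque-free rows
        A[3 * N:3 * N + 3, 3 * k:3 * k + 3] = np.eye(3)
        A[3 * N + 3:3 * N + 6, 3 * k:3 * k + 3] = cross_matrix(xk - x_r)
    sol = np.linalg.solve(A, b)
    return sol[:3 * N].reshape(N, 3), sol[3 * N:3 * N + 3], sol[3 * N + 3:3 * N + 6]
```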
### II.2 Torque-speed motor response curve
The singly-flagellated bacteria we simulated move through their environment by
rotating their motor, which causes their body and flagellum to counter-rotate
accordingly. Drag force from the fluid exerts equal magnitude torques on the
body and the flagellum, and the value of the torque equals the torque load
applied to the motor. The relationship between the motor rotation rate and the
torque load is characterized by a torque-speed curve, which has been measured
experimentally in several organisms [17, 18, 19, 13, 20]. In the context of
motor response characteristics, speed refers to frequency of rotation. We
estimated the torque-speed curve for E. coli with typical values taken from
the literature [17, 20] to match the body and flagellum parameters also taken
from measurements on E. coli [20]. The fluid torque exerted on a rotating
object is proportional to its rotation rate under constant environmental
conditions in Stokes flow, and thus plotting the fluid torque versus rotation
rate in fixed conditions yields a straight line. Fig. 3 shows examples of
these ‘load lines’ computed for our bacterial model at different distances
from the boundary; the shallower blue line is calculated for a bacterium far
from the boundary, and the steeper red line is calculated near the boundary.
The load lines shown in Fig. 3 were computed with typical body and flagellum
parameters for E. coli [20].
Figure 3: Illustration of the estimated torque-speed curve for E. coli [17,
20]. There are two operating regimes: a relatively flat low speed regime
$0\leq\Omega_{m}/2\pi\leq 175$ Hz where the torque drops from its maximum
value of 1300 pN$\cdot$nm at 0 Hz to 1196 pN$\cdot$nm at 175 Hz and a
relatively steep high speed regime $175\leq\Omega_{m}/2\pi\leq 350$ Hz where
the torque drops from 1196 pN$\cdot$nm at 175 Hz to 0 pN$\cdot$nm at 350 Hz.
The insets depict a bacterium model with the average body length $\ell=2.5$
$\mu$m, the smallest body radius $r=0.395$ $\mu$m, and the average flagellar
wavelength $\lambda=2.22$ $\mu$m at different distances from the boundary:
$d=8.2$ $\mu$m (blue), $d=0.71$ $\mu$m (green), $d=0.54$ $\mu$m (red). At
closer distances the torque versus rotation rate load lines are steeper so
that they intersect the torque-speed curve at a slower rotation speed.
The torque-speed curve of the E. coli motor has been determined experimentally
by measuring the rotation rate of a bead attached to a flagellar stub and then
computing the torque on the bead due to fluid drag. By performing the
measurement in fluids of different viscosities, many points on the torque-
speed curve were assembled. It was found that the torque-speed curve of the E.
coli bacterial motor decreases monotonically from a maximum stall torque (i.e.
the zero-speed torque) of about 1300 pN$\cdot$nm to zero torque, which occurs
at a maximum speed of 350 Hz [17, 19, 20]. There are two linear operating
regimes, a low speed regime from 0-175 Hz and a high speed regime 175-350 Hz.
In the low speed regime below 175 Hz, the torque is a relatively flat function
of the motor rotation rate, falling to 0.92 of the stall torque at 175 Hz. In
the high speed regime above 175 Hz, the torque falls steeply to zero at 350
Hz. The torque-speed curve is thus expressed as a piecewise linear function of
the motor rotation rate, $\Omega_{m}$:
$\tau=\begin{cases}\left(-0.59\left(\frac{\Omega_{m}}{2\pi}\right)+1300\right)\text{ pN$\cdot$nm}&\text{for }0\leq\frac{\Omega_{m}}{2\pi}\leq 175\text{ Hz}\\ \left(-6.83\left(\frac{\Omega_{m}}{2\pi}\right)+2392\right)\text{ pN$\cdot$nm}&\text{for }175\leq\frac{\Omega_{m}}{2\pi}\leq 350\text{ Hz}\end{cases}$ (6)
Fig. 3 shows the torque-speed curve as a solid black line. In each of our
simulations, we ensured that the prescribed motor speed and the computed
torque load formed a pair that corresponded to a point on that line.
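Because the load line and each branch of Eq. 6 are linear, the operating point can be computed in closed form. The short sketch below (ours, for illustration only) returns the motor frequency and torque at which a simulated load line intersects the torque-speed curve; the simulation outputs are then rescaled by $q=f^{*}/(154\,\text{Hz})$ as described in Sec. II.4.

```python
def motor_torque(freq_hz):
    """Piecewise-linear torque-speed curve of Eq. 6; torque in pN*nm, frequency in Hz."""
    if freq_hz <= 175.0:
        return -0.59 * freq_hz + 1300.0
    return -6.83 * freq_hz + 2392.0

def operating_point(tau_sim, freq_sim=154.0):
    """Intersect the load line through (freq_sim, tau_sim) with the motor curve.

    In Stokes flow the load torque is proportional to rotation rate, so the
    load line is tau_load(f) = (tau_sim / freq_sim) * f.
    """
    slope = tau_sim / freq_sim
    f_star = 1300.0 / (slope + 0.59)      # intersection with the flat, low-speed branch
    if f_star > 175.0:                    # lies beyond 175 Hz: use the steep branch instead
        f_star = 2392.0 / (slope + 6.83)
    return f_star, motor_torque(f_star)
```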
### II.3 Dynamically similar experiments
Experiments were performed in a 45-liter tank (300 mm $\times$ 500 mm
$\times$ 500 mm high) filled with incompressible silicone oil (Clearco®) with
density 970 kg/m3 and dynamic viscosity $\mu$ = $1.13\times 10^{2}$
kg/(m$\cdot$s) at $22^{\circ}$C, about $10^{5}$ times that of water. The
length and speed scales in the experiment ensured that the incompressible
Stokes equations Eqs. 2 were valid. The viscosity of the oil drifted from the
manufacturer’s stated value ($\mu$ = $1.00\times 10^{2}$ kg/(m$\cdot$s)) very
slowly over a two-year period, so we determined the modified viscosity by
measuring the torque on rotating cylinders at the center of the tank and
recorded data within two months of that measurement. The theoretical value for
torque per unit length on an infinite rotating cylinder in Stokes flow is
$\sigma=4\pi\mu\Omega r^{2}$, where $\mu$ is the dynamic viscosity of the
fluid, $\Omega$ is the angular rotation rate, and $r$ is the cylindrical
radius. We measured the torque $\tau$ on a rotating cylinder with radius
$r=6.35\pm 0.2$ mm and length $\ell=149\pm 1$ mm and, by assuming
$\tau=\ell\sigma$, used the data to solve for the viscosity of the fluid. We
also assumed that the finite size of the tank did not affect the torque value
in the middle, which was more than $20r$ from the nearest boundary. Before
each data collection run, we measured the temperature of the oil with a NIST-
traceable calibrated thermistor (Cole-Parmer Digi-Sense-AO-37804-04 Calibrated
Digital Thermometer) and adjusted the previously determined viscosity using
the manufacturer’s temperature coefficient of viscosity $1.00\times 10^{-6}$
kg/(m$\cdot$s)/$^{\circ}$C. See Sec. II.3.2 for a detailed description of the torque
measurements.
#### II.3.1 Fabricating helices
We fabricated helices of varying wavelengths ($2.26<\lambda/R<11.88$) by
wrapping straight stainless steel welding wire around cylindrical aluminum
mandrels with different helical V-grooves precisely machined using a CNC
lathe. The V-grooves transition to a flat face with a straight groove, to
which the remaining straight section can be clamped; see Fig. 4.
Figure 4: Aluminum rods with helical V-shaped grooves were used to form model
flagella with different wavelengths. After forming, the flagella were annealed
on a precision rod to increase uniformity in the radius ($\Delta R<0.1mm$).
The helical parameters are listed in Table 2.
Mandrels were held on a lathe, and the wire was hand-spun into the V-groove.
The straight sections were secured to the flat faces, which left straight
stems aligned with the axes of the helices to be attached to the motor via a
rigid shaft adapter. Residual tension in the wires caused the wavelengths and
radii to vary after they were removed from the mandrels. The helices were
forced onto a precision stainless steel rod with radius $R=6.350\pm 0.013$ mm
for annealing. The helices on the rod were then placed into a tube furnace
(MTI GSL-1500X) and annealed at 900 degrees Celsius for two hours, which
removed most of the variation in the radii of the helices and fixed the
helical wavelength. The helix parameters used in the experiments are listed in
Table 2:
Table 2: Wavelengths and lengths of helices.
$\lambda/R$ | $L/R$
---|---
2.26 $\pm$ 0.13 | 22.3 $\pm$ 0.5
3.88 $\pm$ 0.01 | 24.3 $\pm$ 0.5
5.86 $\pm$ 0.08 | 30.0 $\pm$ 0.5
8.65 $\pm$ 0.01 | 23.3 $\pm$ 0.5
10.91 $\pm$ 0.01 | 24.2 $\pm$ 0.5
11.88 $\pm$ 0.01 | 23.1 $\pm$ 0.5
The helical wavelength $\lambda$ and axial length $L$ are expressed in terms
of the helical radius $R$; $R=6.35\pm 0.10$ mm in all cases. The filament
radius was $a/R=0.111$ for all helices.
#### II.3.2 Axial torque measurements
To measure the dependence of torque on boundary distance, we secured the tank
onto a horizontal stage that allowed for motion in the $x$-direction, as shown
in Fig. 5. The motion of the stage was controlled by a linear guide with a
worm gear screw that advanced the stage 0.3 mm per revolution. The screw was
turned using a computer-controlled NEMA 23 stepper motor with a resolution of
400 steps/rev. This gave better than 100 $\mu$m precision in controlling the
boundary distance, which was necessary: the step size near the boundary was as
small as 0.5 mm. Torque measurements were made for both cylinders and helices
using similar methods. The objects were held in a rigid shaft adapter and then
lowered until centered in the tank using a vertical translation stage built
from 80-20® extruded aluminum. At the beginning of each data set, we first
adjusted the vertical tilt of the object until it was parallel to the
boundary. Next, we manually adjusted the horizontal stage so that the cylinder
or helix touched the front vertical boundary of the tank. We used total
internal reflection to form an image of the object that could be used as a
reference to find where the edge of the object just made contact with the
boundary, which occurs when the image appears to touch the object. The torque
was measured using a FUTEK TFF400, 10 in-oz, Reaction Torque Sensor. The
cylinder and helices were driven by a variable speed DC motor with a magnetic
encoder (Pololu 298:1 Micro Metal Gear Motor with Magnetic Encoder) and housed
inside of a 3D-printed enclosure that included sleeve bearings to minimize
frictional torque. The power and signal wires were fed through a 6.32 mm
opening at the center of the torque sensor. The wires were then fixed to the
outside structure so that they did not create a torque when measurements were
taken. The encoder output was read by the counter input on a National
Instruments USB6211 M series multifunction DAQ. The torque signal was
amplified using an amplifier/driver (Omega DP25B-E-A 1/8 DIN Process Meter and
Controller) and its output fed into the same National Instruments data
acquisition board’s analog-to-digital input at a sampling rate of 250 thousand
samples per second, which is much faster than any time scales in the
experiment. Data were taken with the DC motor rotating at varying speeds and
with the objects located at a distance from the boundary set by the horizontal
stage. The torque and motor frequency were recorded and plotted simultaneously
in MATLAB. We used MATLAB and a motor controller
(ARDUINO MEGA 2560 with an ADAFRUIT Motor Shield v.2) to control the motor.
However, the motor rotation varied depending on the axial load, so we divided
the signal from the torque sensor by the frequency data from the counter input
to get the torque per unit frequency at each boundary distance, see Fig. 6. A
MATLAB data acquisition GUI included the temperature and distance values,
ensuring that the acquisition parameters were stored with the raw data. Data
were taken for approximately 60 rotation periods for both CW and CCW rotation
at each boundary location. The frequency signal occasionally showed large
spikes that affected the average torque-per-frequency value because the torque
signal did not show a corresponding jump. We considered this to be the result
of the encoder miscounting the rotation rate or the counter input in the DAQ
misreading the signal from the encoder. We used MATLAB’s outliers function to
remove such frequency spikes that were more than nine median absolute
deviations from the median calculated in a moving window ten data points wide,
and replaced them with the average of the adjoining data points. The number of
outliers was less than 1% of the data points, so this frequency smoothing
should not have biased the averaging significantly. The difference between
mean CW and CCW rotation values, which should have been the same, was used to
establish the uncertainty in the experimental measurements. An analysis script
read the geometric parameters and data files for a given set of measurements
(cylinder or helix) and plotted the data versus boundary distance. We scaled
the torque using a unit of $[\mu\Omega r^{2}\ell]$ (cylinder) or $[\mu\Omega
R^{2}L]$ (helix), where $\mu$ is the fluid viscosity, $\Omega$ is the angular
speed, $r$ is cylindrical radius, $\ell$ is the cylindrical length, $R$ is the
helical radius, and $L$ is the helical axial length. Plots of the
dimensionless torque for cylinders and helices are shown in Fig. 7 and Fig. 9.
Using these units for the torque allowed for easy comparison between
experiments, numerical simulations, and theory.
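For reference, the frequency-spike removal and the torque-per-frequency average described above can be reproduced in a few lines; the sketch below is written in Python rather than MATLAB and uses our own parameter names, so it mirrors rather than reproduces the original analysis scripts.

```python
import numpy as np

def smooth_frequency(freq, window=10, n_mad=9):
    """Replace frequency spikes that are more than n_mad median absolute deviations
    from the local median (computed in a moving window) with the mean of the
    adjoining data points, as described in the text."""
    freq = np.asarray(freq, dtype=float).copy()
    half = window // 2
    for i in range(len(freq)):
        lo, hi = max(0, i - half), min(len(freq), i + half)
        win = freq[lo:hi]
        med = np.median(win)
        mad = np.median(np.abs(win - med))
        if mad > 0 and abs(freq[i] - med) > n_mad * mad:
            left = freq[i - 1] if i > 0 else freq[i + 1]
            right = freq[i + 1] if i < len(freq) - 1 else freq[i - 1]
            freq[i] = 0.5 * (left + right)
    return freq

def torque_per_frequency(torque, freq):
    # Mean torque per unit rotation frequency at one boundary distance
    return np.mean(np.asarray(torque) / smooth_frequency(freq))
```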
Figure 5: Experimental setup showing the tank, translation stage, torque
sensor and a cylinder positioned for measurement. The motor and magnetic
encoder were housed inside a 3D-printed structure that was mounted to the
active side of the torque sensor. Signal wires were run through the center of
the torque sensor for motor control and data acquisition. Figure 6: Example of
data signals from the experimental torque measurements. The frequency and
torque data were read by the DAQ and the torque per unit frequency was
calculated in real time and was smoothed to remove outliers as described in
the text.
### II.4 Summary of algorithms and data analysis
Two separate sets of simulations are presented in this paper. For those with a
helix model or a bacterium model, the results were averaged over 16 evenly
spaced phases of the flagellar centerline, as described in Eq. 1. (i) The goal
of the first set of simulations was to calibrate the MRS and MIRS methods by
finding the optimal factors ($\gamma_{c}$ for a cylindrical cell body and
$\gamma_{f}$ for a helical flagellum) and the optimal regularization
parameters ($\epsilon_{c}$ and $\epsilon_{f}$), as reported in Table 1. Eq. 3
was used to solve for the force $\mathbf{f}_{k}$ at each discretized point
$\mathbf{x}_{k}$ in a free space, whereas Eq. 4 was used for simulations near
a plane wall. The resulting net torque of each rotating structure was then
compared with the results from theory for a cylinder or from experiments for a
helix, as described in Sec. III.1. (ii) The goal of the second set of
simulations was to assess the motility performance of the force-free and
torque-free bacterium models with boundary effects incorporated. Step 1: Eq. 5
was used with $S_{\epsilon}$ (for simulations in a free space) or with
$S^{*}_{\epsilon}$ (for simulations with a plane wall). Different combinations
of the cell body size, flagellar wavelength, and distance to the wall were
simulated. We used five values for the length $\ell$ and five values for the
radius $r$ shown in Table 1. These values are within the range of normal E.
coli [20]. We used $18$ wavelengths $\lambda$ that cover a range of biological
values ($2.22\pm 0.2$ $\mu m$) and values that are shorter and longer than the
biological values (Table 1 and Fig. 2). The set of geometric parameters,
together with $22$ distance values $d$ measured from the flagellar axis of
symmetry to the wall, resulted in $9,900$ simulations. From each simulation,
we obtained the axial component of the translational velocity $U$, the
magnitude of the axial-component of the hydrodynamic drag on the cell body
$F$, and the magnitude of the axial-component of the hydrodynamics torque on
the cell body $\tau$. For each body geometry (450 total), we performed a
simulation in free-space to ensure the convergence of MIRS calculations to MRS
calculations as the distance $d\rightarrow\infty$. Step 2: The torque value
$\tau$ was output from each simulation in Step 1 with the motor frequency set
to $154\,\text{Hz}$. That torque-frequency pair was then used to determine the
load line and its intersection with the torque-speed curve, as discussed in Sec.
II.2 and shown in Fig. 3. Each motor frequency $\Omega_{m}/2\pi$ on the
torque-speed curve was given as some multiple $q$ of $154\,\text{Hz}$. The
simulation outputs were scaled by $q$, since they were all linear with motor
frequency; i.e., $\left(U,F,\tau\right)\rightarrow q\left(U,F,\tau\right)$.
These scaled quantities were then used to calculate the performance measures.
Results are presented in Sec. III.2 and Sec. III.3.
## III Results
### III.1 Verifying the numerical model and determining the optimal
regularization parameters
When using MRS or MIRS, the choice of the regularization parameter for a given
discretization (cylinder) or filament radius (helix) of the immersed structure
has generally been made without precise connection to real-world experiments,
because there are large uncertainties in biological and other small-scale
measurements. We therefore used theory, as described below, and dynamically
similar experiments, as described in Sec. II.3, to determine the optimal
regularization parameters for the two geometries used in our bacterial model:
a cylinder and a helix.
#### III.1.1 Finding the optimal regularization parameter for a rotating
cylinder
Jeffrey and Onishi (1981) derived a theory for the torque per length on an
infinite cylinder rotating near an infinite plane wall [28] that was used
previously to calibrate numerical simulations of helical flagella [23]. The
torque per unit length $\sigma$ on an infinite cylinder is given as
$\sigma=4\pi\mu\Omega r^{2}\frac{d}{(d^{2}-r^{2})^{1/2}}$ (7)
where $\mu$ is the dynamic viscosity of the fluid, $\Omega$ is the angular
rotation speed, $r$ is the cylindrical radius, and $d$ is the distance from
the axis of symmetry to the plane wall. We used this theoretical value as a
common reference point between the experiments and simulations to establish
optimal computational parameters, but note that this theory has not been
experimentally tested outside of the present work. We assumed Eq. 7 is valid
for our experiments and simulations, though this assumption as applied to
experiments ignored the finite size of the tank. To control for end effects in
the experiments, we measured the torque with only the first 3 cm inserted into
the fluid and with the full cylinder inserted at the same boundary locations.
We subtracted the torque found for the short section from the torque found for
the full insertion of the cylinder. In simulations, we controlled for finite-
length effects by measuring the torque on a middle subsection of the simulated
cylinder, as discussed below. Our experimental data are shown in Fig. 7, with
the torque made dimensionless using the quantity $\mu\Omega r^{2}\ell$, where
$\mu$ is the fluid viscosity, $\Omega$ is the rotation rate, $r$ is the
cylindrical radius, and $\ell$ is the cylindrical length. The mean squared
error (MSE) between experiments and theory is MSE $\leq 6\%$ when calculated
for the boundary distances where $d/r>1.1$ (i.e. the distance from the
boundary to the edge of the cylinder is $\geq 1$ mm). The theory
asymptotically approaches infinity as the boundary distance approaches
$d/r=1$, which skewed the MSE unrealistically. For the data where $d/r\geq 2$,
the mean squared error is less than 1%. In numerical simulations of the
cylinder, the computed torque value depended on both the discretization and
regularization parameter. Having found good correspondence with the
experiments, we used Eq. (7) to find an optimal regularization parameter for a
given discretization of the cylinder (see Table 1: cylinder part). The
discretization size of the cylindrical model $ds_{c}$ was varied among $0.192$
$\mu$m, $0.144$ $\mu$m, and $0.096$ $\mu$m. For each $ds_{c}$, an optimal
discretization factor $\gamma_{c}$ was found by minimizing the MSE between the
numerical simulations and the theoretical values using the computed torque in
the middle two-thirds of the cylinder to avoid end effects. The optimal factor
was found to be $\gamma_{c}=$ 6.4 for all the discretization sizes. We used
the finest discretization size for our model bacterium as reported in Table 1
since it returned the smallest MSE value of 0.36%.
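The calibration loop itself is simple; here is a schematic version (ours, with a hypothetical `run_simulation(gamma)` standing in for the MIRS solver and the torque taken over the middle two-thirds of the cylinder):
```python
import numpy as np

def sigma_theory(mu, omega, r, d):
    """Torque per unit length on an infinite rotating cylinder near a wall, Eq. 7."""
    return 4.0 * np.pi * mu * omega * r**2 * d / np.sqrt(d**2 - r**2)

def relative_mse(sim, theory):
    """Mean squared error normalized by the mean squared theoretical value."""
    sim, theory = np.asarray(sim), np.asarray(theory)
    return np.mean((sim - theory) ** 2) / np.mean(theory**2)

def calibrate_gamma_c(gammas, run_simulation, d_values, mu, omega, r, ell):
    """Return the discretization factor whose simulated torques best match Eq. 7.

    `run_simulation(gamma)` must return the torque on the middle two-thirds of
    the cylinder at each boundary distance in `d_values`.
    """
    theory = sigma_theory(mu, omega, r, np.asarray(d_values)) * (2.0 / 3.0) * ell
    errors = [relative_mse(run_simulation(g), theory) for g in gammas]
    best = int(np.argmin(errors))
    return gammas[best], errors[best]
```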
Figure 7: Dimensionless torque, $\tau/(\mu\Omega r^{2}\ell)$, for a cylinder
versus scaled boundary distance ($d/r$), where $\mu$ is the dynamic viscosity,
and $\ell$ is the length of the cylinder. The boundary distance is scaled by
the cylindrical radius $r$ as measured to the centerline of the cylinder:
theory by Jeffrey and Onishi [28] (solid black line), optimized MIRS
simulations (solid red curve) and dynamically similar experiments (solid blue
circles). The numerical simulations were optimized by adjusting the
discretization factor $\gamma_{c}$ to minimize the MSE between theory and
simulation (the minimum MSE is 0.36%). The MSE between experiments and theory
was large near the boundary because the theory goes to infinity at $d/r=1$.
Outside of the near-boundary region ($d/r\geq 2$) the MSE is less than 1%.
#### III.1.2 Finding the optimal regularization parameter for a rotating
helix far from a boundary
Simulated helical torque values also depend on the discretization and
regularization parameter, but there is no theory for a helix to provide a
reference. Other researchers have determined the regularization parameter
using complementary numerical simulations, but the reference simulations also
have free parameters that may have affected their results [24]. Thus, we used
dynamically similar experiments, as described in Sec. II.3, to determine the
optimal filament factor, $\gamma_{f}=2.139$, for a helix filament radius
$a/R=0.111$. Torques were measured for the six helical wavelengths given in
Table 2 when the helix was far from the boundary. The optimal filament factor
$\gamma_{f}=2.139$ was found by the following steps: (i) varying $\epsilon_{f}$
for each helix until the percent difference between the experiment and
simulation was under 5%; and (ii) averaging the $\epsilon_{f}$ values found in
Step (i). In these simulations, the regularization parameter and
discretization size are both equal to $\gamma_{f}a$. The results are shown in
Fig. 8, with the torque values non-dimensionalized by the value $\mu\Omega
R^{2}L,$ where $\mu$ is the fluid viscosity, $\Omega$ is the rotation rate,
$R$ is the helical radius, and $L$ is the axial length. The optimized
simulations returned an average percent difference of $2.4\pm 1.7$% compared
to the experimental values.
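A compact way to express this two-step tuning is sketched below, with hypothetical stand-ins `simulated_torque` and `experimental_torque` in place of the MRS solver and the Table 2 measurements:
```python
import numpy as np

def tune_epsilon(helix, experimental_torque, simulated_torque, eps_grid):
    """Return the eps_f that brings one helix within 5% of its measured torque."""
    target = experimental_torque[helix]
    diffs = [abs(simulated_torque(eps, helix) - target) / target for eps in eps_grid]
    best = int(np.argmin(diffs))
    return eps_grid[best] if diffs[best] < 0.05 else None

def filament_factor(helices, a, experimental_torque, simulated_torque, eps_grid):
    """Average the tuned eps_f over all helices; gamma_f = <eps_f> / a."""
    eps_values = [tune_epsilon(h, experimental_torque, simulated_torque, eps_grid)
                  for h in helices]
    eps_values = [e for e in eps_values if e is not None]
    return float(np.mean(eps_values)) / a

# Synthetic example, purely to show the call pattern (not data from this work):
exp_torque = {0: 1.00, 1: 1.25}
fake_solver = lambda eps, h: exp_torque[h] * (1.0 + 0.3 * (eps - 0.05))
print(filament_factor([0, 1], a=0.02, experimental_torque=exp_torque,
                      simulated_torque=fake_solver,
                      eps_grid=np.linspace(0.01, 0.10, 91)))
```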
Figure 8: Dimensionless torque $\tau/(\mu\Omega R^{2}L)$ for different
flagellar wavelengths ($\lambda/R$), where $\mu$ is the dynamic viscosity,
$\Omega$ is the angular speed, $R$ is the helical radius and $\lambda$ is the
helical wavelength. Experimental values are solid black circles and solid blue
circles; our MRS simulations with a centerline distribution of regularized
Stokeslets are the solid red and solid green triangles, and MRS simulations
computed with a surface discretization of the helices using the code provided
by Rodenborn et al. (2013) [25] are the blue and black curves.
We checked whether helices with different filament radii could be accurately
simulated using our optimized $\gamma_{f}$ to scale the regularization
parameter, i.e. ($\epsilon_{f}=\gamma_{f}a$) to account for relative size of
the filament, as is commonly done [29, 30, 31, 32, 33]. We computed torque
values that matched the experimental values given in Rodenborn et al. (2013),
which used a filament radius $a/R=0.063$. The results are also presented in
Fig. 8. The percent difference between our MRS simulations and their data is
2.5$\pm$1.3%. Martindale et al. (2016) [24] used an MRS with a surface
discretization of the flagellum to calibrate their simulation parameters,
whereas our MRS used a string of regularized Stokeslets along the helical
centerline to reduce the computational cost in the MIRS calculations. As a
final test, we used the freely available and calibrated code for the MRS with
surface discretization from Rodenborn et al. to compute torque values for our
$a/R=0.111$ data and for their $a/R=0.063$ data. Fig. 8 shows the torque
comparison of their surface discretized MRS (solid curves), our centerline
distribution MRS (triangles), and the experiments (circles). The percent
difference between their MRS and the experiments is $3.6\pm 3.4$%. The percent
difference between our MRS and theirs is 1.8$\pm$3.7%. Thus, within the range
we tested our MRS with a centerline distribution, the optimal filament factor
$\gamma_{f}$ worked very well for another filament radius and other helical
wavelengths when compared to both the experiments and the surface-discretized
MRS in Rodenborn et al. (2013) [25] for torques far from the boundary.
#### III.1.3 Torque on rotating helices near a boundary
To determine how boundaries affect bacterial motility, we used our optimized
value for $\gamma_{f}$ in our MIRS simulations to compute the torque as a
function of boundary distance, as shown in Fig. 9. The computed torque values
and measured torque values also show excellent agreement at most boundary
distances, except for the shortest wavelength $\lambda/R=2.26$. We note that
this helix had the largest variation in wavelength, as reported in Table 2.
Furthermore, the torque for short wavelengths is more sensitive to variation
in wavelength as compared to variation at longer wavelengths, which likely
explains the difference between simulation and experiment for this geometry,
whereas for the other wavelengths the simulated values are generally within
the uncertainty in the experiments for all boundary distances.
Figure 9: Dimensionless torque $\tau$ for different helical wavelengths
($\lambda/R$) versus boundary distance ($d/R$) scaled by the helical radius
$R$. The optimized MIRS simulations are the solid curves and the experimental
values are solid circles with vertical error bars. The data also show good
agreement for the far from boundary value at $d/R\approx 20$ (see Fig. 8). The
data show that once the far from boundary distance was properly calibrated,
the MIRS worked very well to represent the effects of the boundary.
### III.2 Speed Measurements to Assess Performance
The motion of bacteria through their environment enables them to find
nutrients. Indeed, it has been suggested that the purpose of bacterial
motility is primarily to perform chemotaxis [3]. Living in a microscopic
environment where thermal effects are significant, bacteria must be able to
sample chemical concentrations faster than diffusion causes those
concentrations to change [3, 11], so moving faster may confer a survival
advantage. The low speed operating regime of the bacterial motor (below 175
Hz) is thermodynamically more efficient than the high speed regime. A simple
model gives the fraction of energy lost to friction in the motor as
$(\tau_{0}-\tau)/\tau_{0}$, where $\tau_{0}$ is the stall torque and $\tau$ is
the operating torque at a given frequency [13]. In the low speed regime
$\tau\geq 0.92\tau_{0}$, so that the power output of the motor is greater than
92% of the power input. However, the low speed regime may be less
operationally reliable for motility; the flatness of the torque-speed curve
implies that small increases in load correspond to large decreases in motor
rotation rate, so the bacterium risks stalling and may be unable to restart
its motor. Using our simulations, we determined the swimming speed and motor
rotation rate for different bacterial geometries at different distances to a
solid boundary and assessed the performance of bacterial geometries typically
associated with swimming.
#### III.2.1 Optimal flagellar wavelength
We first consider the effect of different flagellar wavelengths on swimming
speed and motor rotation rate, as shown in Figs. 10a and b. Swimming speed and
motor rotation rate are shown as heat maps for different flagellar wavelengths
at different distances to the boundary. The heat map shows the median values
computed among all 25 bacterial body geometries we investigated (Table 1). The
maximum of all the median swimming speed values is about
$26\,\mu\text{m}s^{-1}$, and it occurs far from the boundary for a wavelength
near $8R$. For long and short flagellar wavelengths, the swimming speed at all
distances is much lower than the maximum. Long wavelengths yield about
$10\,\mu\text{m}s^{-1}$, whereas very short wavelengths give values closer to
$1\,\mu\text{m}s^{-1}$. For the flagellar wavelength of $\lambda/R=11.1$ that
is typical for E. coli, the swimming speed is about $25\,\mu\text{m}s^{-1}$
far from the boundary, whereas it drops to about $20\,\mu\text{m}s^{-1}$ very
near the boundary. Interestingly, the flagellar wavelengths that correspond to
swimming speeds near the maximum in Fig. 10a also correspond to motor rotation
rates in the low end of the high speed regime in the torque-speed curve, so
that the motion is both thermodynamically efficient and operationally
reliable. A wavelength of $\lambda/R=8$ gives 190 Hz and 183 Hz far from and
near to the wall, respectively, which correspond to mechanical energy outputs
of about 84% and 88%. Short and long wavelengths result in a weaker
performance, but for different reasons: short wavelengths operate in the low
speed regime and thus are efficient but unreliable, whereas longer wavelengths
operate farther into the high speed region and thus are reliable but
inefficient.
Figure 10: Swimming speed and motor frequency for different flagellar
wavelengths at different boundary distances. Panels a and b show heat maps of
free swimming speed $U$ and motor frequency $\Omega_{m}/2\pi$ with axes
flagellar wavelength ($\lambda/R$) versus boundary distance ($d/R$), where $R$
is the helical radius. Typical E. coli wavelengths are indicated with the
dashed white lines, which show that this range is near the peak in swimming speed. Panels c-f show line plots of speed and motor frequency across
different body sizes far from ($d/R=51.0$) and near ($d/R=2.74$) the boundary.
The solid circles are the simulation data points and the solid curves are
spline fits to the data. The three curves show the maximum (black), the median
(red) and the minimum (blue) among all cell bodies simulated. Panels c and e
show that the peak swimming speed occurs at $\lambda/R\approx 8$, which is close to the range of E. coli wavelengths, and that the peak has a long “tail” as wavelength
increases. Panels d and f show increasing motor frequency with increasing
wavelength. The trend reflects the plot of the torque-speed curve in Fig. 3.
#### III.2.2 Boundary effects
To illustrate how proximity to the boundary affects swimming speed and motor
rotation rate, we show line plots in Figs. 10c-f of the speed and rotation
rate as functions of the flagellar wavelength both far from and near to the
boundary. The maximum, median, and minimum values among all bacterial body
geometries are shown for each boundary distance. Comparing Figs. 10c and e
shows that proximity to the boundary does not appreciably alter the optimal
wavelength: it remains near $8R$ for all body geometries both near and far
from the boundary. However, proximity to the boundary does increase the
difference in the swimming speed among different bodies at a given wavelength.
Far from the boundary, the difference between the maximum and minimum swimming
speeds for the optimal flagellar wavelength is 14% of the maximum value of
$28\,\mu\text{m}s^{-1}$; near the boundary, the difference is 34% of the
maximum value of $26\,\mu\text{m}s^{-1}$. Figs. 10d and f show the motor
rotation rate is less sensitive to the body geometry and proximity to the
surface than the swimming speed. Far from the boundary, the difference between
the maximum and minimum rotation rates for the optimal flagellar wavelength
$8R$ is 6% of the maximum value $198\,\text{Hz}$; near the boundary, the
difference is 6% of the maximum value of $190\,\text{Hz}$. To further probe
the effect of the cell body geometry on swimming speed and motor rotation
rate, we show heat maps of the speed and rotation rate fixed at the typical E.
coli wavelength $\lambda/R=11.1$ as functions of the length and radius of the
cylindrical cell body. Figs. 11a-d show the results. The translational speed
is optimized for short thin cell bodies (lower left-hand corner of Figs. 11a
and c) both near and far from the surface. Conversely, the slowest motor
rotation rates (though all higher than 175 Hz), and therefore the most
thermodynamically efficient, occur for long thick cell bodies (upper right-
hand corners of Figs. 11b and d). Taken together, these two results suggest
that balancing the need of a bacterium to move quickly with its need to be
thermodynamically efficient would yield a cell body geometry somewhere between
long, thick cell bodies and short, thin cell bodies. Interestingly, the center
point of the heat maps shown in Fig. 11 corresponds to the mean size of the E.
coli cell body.
Figure 11: Free-swimming speed $U$ and motor frequency $\Omega_{m}/2\pi$ shown
as heat maps with axes cylindrical radius ($r/R$) versus body length
($\ell/R$). The data are for a fixed flagellum wavelength, $\lambda/R=11.1$,
where $R$ is the helical radius. The top row (a and b) shows data far from the boundary ($d/R=51.0$), where boundary effects are minimal, and the bottom row (c and d) shows data close to the boundary ($d/R=2.74$). The swimming speed data in a and c
show that short thin bodies result in higher swimming speed both near and far
from the boundary, though near the boundary the swimming speed is lower for a
given body geometry. Therefore the swimming speed measure predicts short thin
bodies far from the surface result in a better motility performance. The motor
frequency data in b and d show long thick bodies result in a slower motor
frequency near and far from the boundary, though far from the boundary the
motor frequency is higher. Therefore, the motor frequency measure predicts
long thick bodies near the surface result in better motility performance.
### III.3 Energy Cost Measures to Assess Performance
The energy cost required to move is another way to assess the performance of
the bacterial motility system. Here we present simulation results of three
different energy cost measures. The first measure we consider is what we term
the Purcell inefficiency $\mathcal{E}_{Purcell}^{-1}$ given by,
$\mathcal{E}_{Purcell}^{-1}=\frac{\tau\Omega_{m}}{FU},$ (8)
where $\tau$ is the motor torque (or the torque on the cell body or the
flagellum), $\Omega_{m}$ is the motor rotation rate, $F$ is the drag force on
the cell body (or on the flagellum), and $U$ is the swimming speed of the
bacterium. Thus, the Purcell inefficiency measures the mechanical energy
($\tau\Omega_{m}$) required to swim at speed $U$ relative to the least amount of
energy ($FU$) needed to translate the cell body at speed $U$. The Purcell
inefficiency is useful because, under certain simplifying assumptions [34], it
can be expressed as a function of the geometry of the cell body and the
flagellum alone. The difficulty with this measure is that it does not depend
on the rotation rate of the motor because all four quantities appearing in Eq.
8 scale with the motor frequency (see Eq. 5). Therefore, the Purcell
inefficiency cannot assess how swimming performance depends on the torque-
speed characteristics of the motor, and thus omits an important element of the
bacterial motility system that is subject to selective forces. The second
measure is the energy cost $E$ to travel a unit distance $d$ given by
$\frac{E}{d}=\frac{\tau\Omega_{m}}{U}.$ (9)
Several authors [13, 16] have considered the distance traveled per energy
output by the motor, which is the inverse of the measure we consider here. The
merit of the energy cost per distance measure is that it expresses the amount
of energy used by the bacterium to perform a biologically relevant task;
namely, to swim one unit distance. Another advantage is that it depends on the
motor rotation rate and thus can probe the effect of the torque-speed
characteristics of the motor. However, it does not account for the size of the
bacterium, and thus does not measure the energy cost relative to the overall
metabolic budget of the organism. To account for the metabolic energy cost
required to swim a unit distance, we introduce a third measure,
$\frac{(E/m)}{d}=\frac{\tau\Omega_{m}}{mU}.$ (10)
The mass $m$ associated with each bacterial model is $m=1.1\times
10^{-15}\left(\pi r^{2}\ell\right)\,\text{kg}$, where $r$ is the body radius and
$\ell$ is the body length, both measured in $\mu\text{m}$. Though this energy
cost measure has not been considered in the literature, it was suggested
earlier by Purcell [3].
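For reference, the three measures in Eqs. 8-10 are computed directly from the outputs of a single force-free, torque-free swimming simulation; the short sketch below uses placeholder values for the operating point rather than results from this work:
```python
import numpy as np

def purcell_inefficiency(tau, omega_m, F, U):
    """Eq. 8: motor power divided by the least power needed to drag the body at U."""
    return tau * omega_m / (F * U)

def energy_per_distance(tau, omega_m, U):
    """Eq. 9: motor energy expended per unit distance traveled."""
    return tau * omega_m / U

def metabolic_cost(tau, omega_m, U, r_um, ell_um):
    """Eq. 10: energy per distance per unit mass, with m = 1.1e-15*pi*r^2*ell kg."""
    m = 1.1e-15 * np.pi * r_um**2 * ell_um   # r_um, ell_um in micrometers
    return tau * omega_m / (m * U)

# Placeholder operating point: torque (N*m), motor rate (rad/s), drag (N), speed (m/s).
tau, omega_m, F, U = 1.0e-18, 2 * np.pi * 190.0, 0.4e-12, 25e-6
print(purcell_inefficiency(tau, omega_m, F, U))               # dimensionless
print(energy_per_distance(tau, omega_m, U))                   # J/m
print(metabolic_cost(tau, omega_m, U, r_um=0.4, ell_um=2.5))  # J/(m*kg)
```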
#### III.3.1 Optimal wavelength
We first consider the optimal flagellar wavelength predicted by the three
energy cost measures, as shown in Fig. 12. The top row a-c shows heat maps of
the three energy cost measures as functions of flagellar wavelength and
boundary distance, which correspond to the median values computed for all body
geometries listed in Table 1. All three measures give an optimal wavelength
near $\lambda/R=8$ (where each energy cost measure is minimal). However, the
three measures differ in other ways. The Purcell inefficiency predicts that
swimming near the boundary is less inefficient than swimming far from the
boundary, whereas the opposite is true for the energy per distance and
metabolic cost measures. At a wavelength of $8R$, the minimum Purcell
inefficiency value is about 84 (or $1/84=1.2\%$ if calculated as Purcell
efficiency), the minimum energy per distance measure is $5.0\times
10^{-11}\,\text{Jm}^{-1}$, and the minimum metabolic energy cost is $3.1\times
10^{4}\,\text{J}\text{m}^{-1}\text{kg}^{-1}$.
#### III.3.2 Boundary effects
To evaluate how proximity to the surface affects the predictions of the energy
cost measures, we show line plots in Fig. 12 of the measures as functions of flagellar wavelength far from ($d/R=51$) in panels d-f and near the boundary ($d/R=2.74$) in panels g-i. The maximum, median, and minimum values among all body
geometries are shown for each wavelength. The Purcell inefficiency is the
least sensitive of the measures to changes in the body size. For a wavelength
of $\lambda/R=8$, the difference between the maximum and the minimum is 8% of
the maximum (110 vs 101). Near the boundary, the difference increases to 13%
of the maximum value (94 vs 82). The energy per distance measure is more
sensitive to the body size, and the sensitivity increases near the boundary.
For a wavelength of $\lambda/R=8$, the difference between the maximum and
minimum values is 16% of the maximum value of $5.5\times
10^{-11}\,\text{J}\text{m}^{-1}$ far from the boundary. Near the boundary the
difference increases to 35% of the maximum value of $7.5\times
10^{-11}\,\text{J}\text{m}^{-1}$. The metabolic energy cost is the measure
most sensitive to the body size, though interestingly the sensitivity
decreases with proximity to the boundary. At a wavelength of $\lambda/R=8$,
the difference between the maximum and minimum value far from the boundary is
51% of the maximum value of $4.5\times
10^{4}\,\text{J}\text{m}^{-1}\text{kg}^{-1}$. Near the boundary the difference
decreases to 38% of the maximum value of $5.0\times
10^{4}\,\text{J}\text{m}^{-1}\text{kg}^{-1}$.
Figure 12: Energy cost as a function of wavelength and boundary distance. The
top row shows three energy cost measures as a function of helical wavelength
$\lambda/R$ and boundary distance $d/R$, where $R$ is the helical radius.
Typical E. coli wavelengths are indicated with the dashed white lines whose
range is close to the optimal wavelength predicted by these energy cost
measures. The second and third rows show line plots at distances far from
($d/R=51.0$) and near ($d/R=2.74$) the boundary to assess the wavelength
dependence of each measure at those distances. The solid circles are numerical
simulations and the solid curves are spline fits to the numerical data. The
three curves show the maximum (black), the median (red) and the minimum (blue)
among all cell bodies simulated. All these plots have the optimal flagellar
wavelength $\lambda/R\approx 8$.
Finally, we consider how the energy cost measures depend on body radius and
body length at different distances to the boundary. In Fig. 13 we show heat
maps of the three energy cost measures fixed at the typical E. coli wavelength
$\lambda/R=11.1$, as functions of the radius and length. The Purcell
inefficiency shown in Fig. 13 gives different optimal body geometries near
and far from the boundary: far from the boundary short, thick cylinders (top
left corner of Fig. 13a) are the least inefficient; near the boundary short
thin cylinders (bottom left corner of Fig. 13d) are the least inefficient. The
energy per distance measure gives the same optimal body far from and near to
the boundary: the lowest energy per distance cost measure is given by short,
thin cylinders (bottom left corners of Figs. 13b and e). The metabolic cost
measure gives the same optimal body near and far from the surface, though it
is opposite of the optimal body predicted by the energy per distance measure:
the lowest metabolic cost measure occurs for cylinders that are long and thick
(top right corners of Figs. 13c and f).
Figure 13: Comparison of Purcell inefficiency, energy per distance, and
metabolic energy cost with respect to body geometry at the typical wavelength
of E. coli ($\lambda/R=11.1$). The top row shows results far from the boundary
($d/R=51.0$) and the bottom row shows results near the boundary ($d/R=2.74$).
In panels a and d, the Purcell inefficiency shows that short thick bodies are
most efficient (i.e., least inefficient) far from the boundary but short thin
bodies are most efficient near the boundary. In panels b and e, short thin
bodies require the least energy cost per distance both far from and near the
boundary. In panels c and f, long thick bodies require the least metabolic
energy cost per distance traveled both far from and near the boundary.
## IV Discussion
In this work we used the method of images for regularized Stokeslets (MIRS) to
simulate a motile flagellate bacterium moving near a solid boundary. We
determined the regularization parameter in the method by conducting
dynamically similar macroscopic experiments with rotating cylinders and
rotating helices near a solid boundary and comparing the results to equivalent
simulations. By varying the regularization parameters, we were able to find
optimal values that matched the experimental results within 5%. Having
calibrated MIRS, we simulated various bacterial morphologies to assess their
swimming performance. We assessed swimming performance using multiple
measures: swimming speed, motor rotation rate, the Purcell inefficiency,
energy cost per distance, and metabolic energy cost per distance. As an
important and novel addition to our simulations, we incorporated the
experimentally measured torque-speed response curve [17] by ensuring that the
torque and motor rotation rate matched a point on the curve in all our
calculated measures. Using our MIRS calibration method, we found that the
optimal discretization factor for a cylinder is $\gamma_{c}=6.4$ for the
surface discretizations we used, which may be used as a reference value for
other researchers who simulate rotating cylinders using MRS or MIRS. We also
found an optimal filament factor $\gamma_{f}=2.139$ when using MRS and MIRS
with each helix modeled as a string of regularized Stokeslets along the helix
centerline. Selecting an appropriate regularization parameter for a centerline discretization of helices in MRS and MIRS has been considered by other researchers.
Martindale et al. (2016) [24] benchmarked their center-line discretization of
a helix with a surface discretization model. They reported that the optimal
filament factor should be in the range $1\leq\gamma_{f}\leq 3$ to keep the
percent difference less than about 10% in their simulations, which is
consistent with our results. In our work, we calibrated simulations that used
a centerline discretization of helices by fitting the regularization parameter
directly with experimentally measured values of torque. These MIRS
computations showed excellent agreement with the experimental torque values at
most boundary distances (Fig. 9). In MRS/MIRS, using a centerline distribution
for a model helix (or flagellum) with a calibrated regularization parameter is
more useful than a surface discretization for several reasons: (i) the
computational cost is significantly reduced because the matrix system for the
centerline distribution is much smaller than for a surface discretization;
(ii) simulations of very short helical wavelengths using a centerline
distribution do not encounter discretization issues such as overlapping cross-
sections; (iii) in a centerline distribution, the point connecting the
cylindrical cell body and the tapered helical flagellum can be considered as
the motor location, whereas the motor location in a surface discretization is
hard to define because of the small gap between the cell body and the
flagellum needed to allow counter-rotation between the cell body and the
flagellum. Interestingly, all five performance measures we computed with our
calibrated model – swimming speed, motor speed, Purcell inefficiency, energy
per distance, and metabolic energy cost – predict an optimal flagellar
wavelength of $\lambda/R\approx 8$, where $R$ is the helical radius of the
flagellum. This result agrees with the work of Zhang et al. (2014) [9] who
studied the Purcell efficiency of a rotating helix, whereas our model includes
a cell body with rotation and translation. Furthermore, this prediction occurs
both near and far from the surface and for all body geometries, which suggests
that the bacterial wavelengths may be selected independently of body shape or
surface proximity. Therefore, none of the five measures can be distinguished
by their predictions of flagellar wavelength, but they are distinguished by
their predictions for the optimal body size. Further analysis showed that the
swimming speed is optimal (i.e. fastest) for bodies that are short and thin,
both near and far from the surface. The structure of the torque-speed curve
imposes two competing conditions that need to be balanced to achieve
optimality: at low speeds the torque-speed curve is flat and therefore
thermodynamically efficient, but in that regime small increases in applied
load result in large decreases in motor rotation rate that could cause the
motor to stall. We therefore suggest that the optimal speed is higher than the
175 Hz knee speed (see Fig. 3) so that the motor operates in the reliable
regime, but not much higher so that it remains thermodynamically efficient.
The lowest motor speed that is still above the knee speed for typical
bacterial wavelengths occurs for long and thick bacterial bodies, both near
and far from the surface. It is tempting to suggest that balancing the short,
thin bodies needed for optimal speed and long, thick bodies needed for optimal
motor operation yields the average body size. However, we do not infer too
much from this result because we do not have a principled way of performing
the balancing needed to draw a definitive conclusion. The three energy cost
measures also make different predictions about body shape. The Purcell
inefficiency is relatively insensitive to differences in body shape,
especially far from the wall. However, based on the small differences
($\approx 8\%$), the optimal body far from the boundary is short and thick,
whereas the optimal body near the wall is short and thin. The Purcell
inefficiency is the only quantity that makes different predictions about the
optimal body near and far from the boundary. The Purcell inefficiency also
predicts that bacterial motility systems become generally more efficient near
the boundary, which would suggest a natural benefit for all bacteria to move
near boundaries that are independent of any other biologically relevant
activities. Unlike the Purcell inefficiency, the energy cost per distance
traveled and the energy cost per body mass per distance traveled (metabolic
energy cost) both predict larger energy costs for moving near a surface.
However, they make opposite predictions about the optimal body size. The
energy cost per distance suggests short and thin cell bodies are most
efficient, and the metabolic energy cost suggests long and thick cell bodies
are most efficient. Though increasing body size results in a greater energy
cost for moving a given distance, the increase in body size results in a
smaller relative energy expenditure. The energy per distance predicts the same
optimal body as predicted by the fastest swimming speed, and metabolic energy
cost predicts the same optimal body as predicted by motor rotation rate. Only
the Purcell efficiency predicts a short, thick body is optimal, and this
occurs only far from the boundary. Although the Purcell efficiency has been a
popular quantity of analysis, we believe it has several important shortcomings
that warrant discussion, at least one of which was anticipated by Purcell.
First, the Purcell efficiency is dependent only on the geometry of the body
and flagellum and not on the motor’s torque-speed response characteristics.
From a physical standpoint, it is interesting to find such an invariant
quantity, but from a biological standpoint, it does not assess the bacterial
motility system’s thermodynamic efficiency because it ignores motor mechanics.
Second, the Purcell efficiency is defined to be the ratio between the minimum
power required to translate the cell body and the power actually dissipated
during the bacterial motion. In our simulations, we find the maximal
efficiency is in the range of 1-2%, similar to what others have found [3, 12,
9]. These two quantities (the minimum power vs the actual power) are clearly
of very different orders, which suggests that the least power needed may not be an appropriate reference quantity. To give a biophysical interpretation to the
least power needed to translate the cell body, some authors have suggested
that it represents the “useful” portion of the power dissipated during motion [12, 8], but we believe this is a misconception. The bacterium is non-inertial;
therefore, the force acting on the cell body by the fluid is exactly balanced
by the force acting on the flagellum by the fluid (assuming no net body
forces). Both the bacterial body and the flagellum have the same axial
velocity (in a rigid model); therefore the power dissipated due to the axial
fluid drag on the body is exactly compensated by the power input by the axial
fluid force exerted on the flagellum. Finally, as Purcell noted in 1977, the
efficiency of the bacterial motility system is probably best characterized by
the energy consumption relative to the overall metabolic budget of the
organism [3]. This suggestion led us to consider the metabolic energy cost
introduced in this paper. The actual amount of that metabolic budget used for
motility is a small fraction, which led Purcell [3] to suggest that bacterial
motility is not really subject to strong selective forces toward optimal
efficiency. Our data do not say whether evolutionary processes tend to
minimize the energy cost of bacterial motility, but a plausible
counterargument is that the bacterium needs to consume most of its energy for
other biological functions and has only a small fraction available for
motility. Thus, small absolute changes in energy consumption correspond to
large relative changes in the energy available for motility, resulting in a
significant selective pressure to make the motility system as efficient as
possible. Many research questions about how physical interactions between
bacteria and their environment result in selective pressures in evolutionary
processes remain open, despite significant progress in the field. Modern
computational simulations and methods such as MRS and MIRS will remain
important tools for quantifying microscopic bacterial motion with precision.
In this work, we presented for the first time in the literature a procedure
for calibrating MRS and MIRS using macroscopic dynamically similar
experiments. Calibrating models in this way helps to ensure simulations give
accurate quantitative results. In future work, we will extend the macroscopic
experimental system to consider a wider variety of possible geometries
relevant to bacterial motility and make comparisons with biological
measurements.
Funding: This research was partially funded by NSF DMS-1720323
to H.N. and N.C., and NSF MRI-1531594 to H.N. We thank Trinity University for
the Summer Research Grant to O.S. and the provision of computational
resources. We would also like to thank the Faculty Development Fund at Centre
College for research support to B.R.
###### Acknowledgements.
We wish to thank undergraduate students Asha Ari, Alexandra Boardman, Tanner
May and Mackenzie Conkling and Prof. Philip Lockett for their assistance in
collecting experimental data at Centre College. We also acknowledge the
contributions of Mica Jarocki and David Clark at Trinity University to the
initial implementation of the model bacterium. We would also like to thank
Deon Lee for her support by editing the manuscript.
## References
* Sowa and Berry [2008] Sowa, Y.; Berry, R.M. Bacterial flagellar motor. Quarterly reviews of biophysics 2008, 41, 103–132.
* Lauga [2016] Lauga, E. Bacterial hydrodynamics. Annual Review of Fluid Mechanics 2016, 48, 105–130.
* Purcell [1977] Purcell, E.M. Life at low Reynolds number. American journal of physics 1977, 45, 3–11.
* Higdon [1979] Higdon, J. The hydrodynamics of flagellar propulsion: helical waves. Journal of Fluid Mechanics 1979, 94, 331–351.
* Shapere and Wilczek [1989] Shapere, A.; Wilczek, F. Efficiencies of self-propulsion at low Reynolds number. Journal of fluid mechanics 1989, 198, 587–599.
* Shum et al. [2010] Shum, H.; Gaffney, E.; Smith, D. Modelling bacterial behaviour close to a no-slip plane boundary: the influence of bacterial geometry. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 2010, 466, 1725–1748.
* Spagnolie and Lauga [2011] Spagnolie, S.E.; Lauga, E. Comparative hydrodynamics of bacterial polymorphism. Physical review letters 2011, 106, 058103.
* Acemoglu and Yesilyurt [2014] Acemoglu, A.; Yesilyurt, S. Effects of geometric parameters on swimming of micro organisms with single helical flagellum in circular channels. Biophysical journal 2014, 106, 1537–1547.
* He-Peng et al. [2014] He-Peng, Z.; Bin, L.; Rodenborn, B.; Swinney, H.L. Propulsive matrix of a helical flagellum. Chinese Physics B 2014, 23, 114703.
* Bet et al. [2017] Bet, B.; Boosten, G.; Dijkstra, M.; van Roij, R. Efficient shapes for microswimming: From three-body swimmers to helical flagella. The Journal of chemical physics 2017, 146, 084904.
* Schuech et al. [2019] Schuech, R.; Hoehfurtner, T.; Smith, D.J.; Humphries, S. Motile curved bacteria are Pareto-optimal. Proceedings of the National Academy of Sciences 2019, 116, 14440–14447.
* Chattopadhyay et al. [2006] Chattopadhyay, S.; Moldovan, R.; Yeung, C.; Wu, X. Swimming efficiency of bacterium Escherichiacoli. Proceedings of the National Academy of Sciences 2006, 103, 13712–13717.
* Li and Tang [2006] Li, G.; Tang, J.X. Low flagellar motor torque and high swimming efficiency of Caulobacter crescentus swarmer cells. Biophysical journal 2006, 91, 2726–2734.
* Jeon et al. [2012] Jeon, H.; Kim, Y.C.; Yim, D.; Yoo, J.Y.; Jin, S. Flow visualization and performance measurements of a flagellar propeller. Journal of Bionic Engineering 2012, 9, 322–329.
* Lighthill [1952] Lighthill, M. On the squirming motion of nearly spherical deformable bodies through liquids at very small Reynolds numbers. Communications on pure and applied mathematics 1952, 5, 109–118.
* Li et al. [2017] Li, C.; Qin, B.; Gopinath, A.; Arratia, P.E.; Thomases, B.; Guy, R.D. Flagellar swimming in viscoelastic fluids: role of fluid elastic stress revealed by simulations based on experimental data. Journal of The Royal Society Interface 2017, 14, 20170289.
* Chen and Berg [2000] Chen, X.; Berg, H.C. Torque-speed relationship of the flagellar rotary motor of Escherichia coli. Biophysical journal 2000, 78, 1036–1041.
* Sowa et al. [2003] Sowa, Y.; Hotta, H.; Homma, M.; Ishijima, A. Torque–speed relationship of the Na+-driven flagellar motor of Vibrio alginolyticus. Journal of molecular biology 2003, 327, 1043–1051.
* Xing et al. [2006] Xing, J.; Bai, F.; Berry, R.; Oster, G. Torque–speed relationship of the bacterial flagellar motor. Proceedings of the National Academy of Sciences 2006, 103, 1260–1265.
* Darnton and Berg [2007] Darnton, N.C.; Berg, H.C. Force-Extension Measurements on Bacterial Flagella: Triggering Polymorphic Transformations. Biophysical Journal 2007, 92, 2230–2236.
* Cortez [2001] Cortez, R. The method of regularized Stokeslets. SIAM Journal on Scientific Computing 2001, 23, 1204–1225.
* Ainley et al. [2008] Ainley, J.; Durkin, S.; Embid, R.; Boindala, P.; Cortez, R. The method of images for regularized Stokeslets. Journal of Computational Physics 2008, 227, 4600–4616.
* Das and Lauga [2018] Das, D.; Lauga, E. Computing the motor torque of Escherichia coli. Soft matter 2018, 14, 5955–5967.
* Martindale et al. [2016] Martindale, J.D.; Jabbarzadeh, M.; Fu, H.C. Choice of computational method for swimming and pumping with nonslender helical filaments at low Reynolds number. Physics of Fluids 2016, 28, 021901.
* Rodenborn et al. [2013] Rodenborn, B.; Chen, C.H.; Swinney, H.L.; Liu, B.; Zhang, H. Propulsion of microorganisms by a helical flagellum. Proceedings of the National Academy of Sciences 2013, 110, E338–E347.
* Jeffrey and Onishi [1981] Jeffrey, D.; Onishi, Y. The slow motion of a cylinder next to a plane wall. The Quarterly Journal of Mechanics and Applied Mathematics 1981, 34, 129–137.
* Young [2006] Young, K.D. The selective value of bacterial shape. Microbiology and molecular biology reviews 2006, 70, 660–703.
* Jeffrey and Onishi [1981] Jeffrey, D.J.; Onishi, Y. The slow motion of a cylinder next to a plane wall. The Quarterly Journal of Mechanics and Applied Mathematics 1981, 34, 129–137.
* Jabbarzadeh and Fu [2020] Jabbarzadeh, M.; Fu, H.C. A numerical method for inextensible elastic filaments in viscous fluids. Journal of Computational Physics 2020, 418, 109643.
* Olson et al. [2011] Olson, S.D.; Suarez, S.S.; Fauci, L.J. Coupling biochemistry and hydrodynamics captures hyperactivated sperm motility in a simple flagellar model. Journal of Theoretical Biology 2011, 283, 203–216.
* Nguyen et al. [2019] Nguyen, H.; Koehl, M.A.R.; Oakes, C.; Bustamante, G.; Fauci, L. Effects of cell morphology and attachment to a surface on the hydrodynamic performance of unicellular choanoflagellates. J. R. Soc. Interface 2019, 283, 20180736.
* Buchmann et al. [2018] Buchmann, A.; Fauci, L.J.; Leiderman, K.; Strawbridge, E.; Zhao, L. Mixing and pumping by pairs of helices in a viscous fluid. Phys. Rev. E 2018, 97, 023101.
* Bouzarth and Minion [2011] Bouzarth, E.L.; Minion, M. Modeling slender bodies with the method of regularized Stokeslets. J. Comput. Phys. 2011, 230, 3929–3947.
* Purcell [1997] Purcell, E.M. The efficiency of propulsion by a rotating flagellum. Proceedings of the National Academy of Sciences 1997, 94, 11307–11311.
|
# Mock-local energy density of gravitational waves
Antoine Rignon-Bret and Simone Speziale
Aix Marseille Univ., Univ. de Toulon, CNRS, CPT, UMR 7332, 13288 Marseille,
France
###### Abstract
We propose a new set of BMS charges at null infinity, characterized by a
super-translation flux that contains only the ‘hard’ term. This is achieved
with a specific corner improvement of the symplectic 2-form, and we spell the
conditions under which it is unique. The charges are associated to a Wald-
Zoupas symplectic potential, and satisfy all standard criteria: they are
covariant, provide a center-less realization of the symmetry algebra, have
vanishing flux in non-radiative spacetimes, and vanish in Minkowski. We use
them to define a certain notion of localized energy density of gravitational
waves. They have potential applications to the generalized second law and to
soft theorems.
## 1 Introduction
Bondi’s energy loss formula is a cornerstone of our physical understanding of
general relativity. It was the historical proof that gravitational waves
dissipate energy, ending any lingering doubt on their physical existence. The
proof of dissipation refers to the _total_ gravitational energy; there is no
equivalent statement for a local notion of energy, in line with standard
intuition that general covariance prevents the existence of a local energy-
momentum tensor for the gravitational field. In this paper we report that
general covariance allows an additional step: it is possible to provide a
formula for a _local_ energy density of gravitational waves at future null
infinity which is necessarily dissipated by physical processes. The non-
locality is confined to the relation between its flux, or rate of change, and
the spacetime curvature. We further argue that even the flux can actually be
determined by a local measurement in space, provided a non-local initial
condition in time. The result is based on taking into account so-called
‘corner degrees of freedom’, and leads to an estimate of the amount of energy
that can be absorbed by a local experiment for a given flux.
The group of gravitational symmetries at null infinity has been known since
the seminal work of Bondi, van der Burg, Metzner and Sachs (BMS) [1, 2] and
extensively studied since. It is a generalization of the Poincaré group to
angle-dependent translations, known as super-translations. To these asymptotic
symmetries are associated asymptotic charges which are conserved in the
absence of radiation, and satisfy flux-balance laws in the presence of
radiation. While there is a certain amount of ambiguity in the definition of
Noether charges and their fluxes, a unique set satisfying certain physical
requirements has been identified as early as [3, 4, 5, 6, 7], and later
related to Noether charges and canonical generators in [8], see also
discussion in [9]. Bondi’s energy loss formula fits elegantly this framework
as the flux-balance law for super-translations. It has two contributions, a
‘hard’ term which is monotonic, and a ‘soft’ term which is not. The soft term
however vanishes for global translations, and this is crucial to establish
Bondi’s proof of dissipation for the total energy.
The uniqueness of the BMS charges relies on a technical assumption concerning
the choice of symplectic structure used to define the canonical generators.
There is a growing body of evidence that this assumption can be relaxed,
allowing for so called ‘corner improvements’, which are important to capture
the physics of gauge and gravitational theories in the presence of boundaries,
see e.g. [10, 11, 12, 13, 14, 15, 16, 17, 18]. This is the first technical
ingredient to our result. The second is the use of the ‘covariant shear’
introduced in [19], which is related to previous work done in [4] and to the
notion of super-translation Goldstone field of [20], which is a corner degree
of freedom. With these two inputs, we are able to identify a unique corner-
improved symplectic structure whose canonical generators satisfy all the
properties of the standard BMS charges, plus the new property of having a
purely hard flux. The new super-momentum charge associated with this flux is
the local energy density. We conclude the paper with a brief discussion of
potential applications.
## 2 BMS flux-balance laws
We assume that the reader has a certain familiarity with null infinity, and
refer to the reviews in [21, 22, 23, 9] for the necessary background. We use
the conventions of [24], to which we refer for further details. The Ashtekar-
Streubel (AS) flux for a BMS symmetry $\xi$ between two cross-sections $S_{1}$
and $S_{2}$ of ${\mathscr{I}}$ is given by [5]
$F_{\xi}^{\scriptscriptstyle\rm
AS}=-\frac{1}{16\pi}\int_{S_{1}}^{S_{2}}N_{ab}\delta_{\xi}\sigma^{ab}\epsilon_{\mathscr{I}}.$
(2.1)
Here $\sigma_{ab}$ is the asymptotic shear, which can be assumed without loss
of generality to refer to an affine foliation of ${\mathscr{I}}$, and then the
news tensor is $N_{ab}=2\pounds_{n}\sigma_{ab}-\rho_{\langle ab\rangle}$, wth
$n$ the null tangent to ${\mathscr{I}}$, and $\rho_{\langle ab\rangle}$ the
trace-less part of Geroch’s tensor, which vanishes if the conformal frame is a
round sphere, also known as a Bondi frame. The explicit form of the BMS
transformation $\delta_{\xi}\sigma_{ab}$ can be found in the references, and
will not be needed. The foliation induces a family of Lorentz subgroups of the
BMS group, one per cross-section. Picking coordinates $(u,x^{A})$ so that
$n\,\smash{\stackrel{{\scriptstyle\scriptscriptstyle\rm{{\mathscr{I}}}}}{{=}}}\,\partial_{u}$,
we can parametrize
$\xi\,\smash{\stackrel{{\scriptstyle\scriptscriptstyle\rm{{\mathscr{I}}}}}{{=}}}\,f\partial_{u}+Y^{A}\partial_{A}$,
with $f=T+\frac{u}{2}{\mathscr{D}}Y$. Here $T=T(x^{A})$ is the super-
translation parameter, and $Y^{A}(x^{A})$ is the Lorentz parameter,
represented by a globally defined conformal Killing vector on the cross-
sections. These are topological spheres, with time-independent metric
$q_{AB}$, and we denote by ${\mathscr{D}}$ its covariant derivative and by ${\cal R}$ its Ricci scalar. We can then write the charges on cross sections of
constant $u$ as
$\displaystyle Q^{\scriptscriptstyle\rm BMS}_{\xi}=\frac{1}{8\pi
G}\oint_{S}(2fM_{\rho}+Y^{A}J_{A})\epsilon_{S},\qquad
M_{\rho}:=M+\frac{1}{4}\rho_{ab}\sigma^{ab}.$ (2.2)
They correspond to Geroch’s super-momentum [3] and to the Dray-Streubel
Lorentz charge [6], as proved in [5, 7]. The aspects $M$ and $J_{A}$ can be
written in terms of tensors e.g. [3, 9] or in terms of Newman-Penrose
formalism e.g. [6, 24]. It is also possible to express them in terms of the
metric components of an asymptotic expansion in Bondi coordinates [25, 22]
(but with the caveats explained in [24]). See the Appendix for more details.
The original papers singled out the unique choices (2.1) and (2.2) by a series
of physical requirements: they are covariant; they are conserved in non-
radiative spacetimes; they vanish in Minkowski spacetime, for any $\xi$ and at
any cut. There were also technical requirements, such as a number of
derivatives compatible with second order field equations, and a choice of
canonical symplectic form associated with the Einstein-Hilbert Lagrangian. The
later Wald-Zoupas insight [8] is that they are associated to a unique
symplectic potential, given by
$\theta^{\scriptscriptstyle\rm
BMS}=-\frac{1}{16\pi}N_{ab}\delta\sigma^{ab}\epsilon_{\mathscr{I}},$ (2.3)
so that the integrand of the flux is the Noether current
$j_{\xi}=I_{\xi}\theta^{\scriptscriptstyle\rm BMS}$. The covariance of the
charges means that they are independent of the background structures given by
the choice of conformal factor and choice of foliation for the shear. The only
background structure they depend on is the symmetry vector field $\xi$, and
for the Lorentz part of the charge also the choice of cross section
determining the Lorentz subgroup of $\xi$. Covariance can be expressed in
terms of the Barnich-Troessaert bracket [25] as
$\displaystyle\\{Q_{\xi},Q_{\chi}\\}_{*}:=\delta_{\chi}Q_{\xi}-\oint
i_{\chi}j_{\xi}\,\hat{=}\,Q_{[\xi,\chi]}.$ (2.4)
This property can be explicitly checked [24], and shown to be a consequence of
the covariance of (2.3) and of the BMS boundary conditions [18]. It
generalizes the standard Hamiltonian action to dissipative situations with a
non-zero current at the cross section.
Let us now focus attention on the super-translation charges. Their flux is
$Q^{\scriptscriptstyle\rm BMS}_{T}[S_{2}]-Q^{\scriptscriptstyle\rm
BMS}_{T}[S_{1}]\,\hat{=}\,-\frac{1}{32\pi}\int\left(TN^{ab}N_{ab}+2N^{ab}({\mathscr{D}}_{a}{\mathscr{D}}_{b}+\frac{1}{2}\rho_{ab})T\right)\epsilon_{\mathscr{I}}.$
(2.5)
The first is the ‘hard term’, quadratic in the news. The second is the ‘soft term’, linear in the news. It follows from the time-independence of $T$ and of the volume form that the soft term corresponds to the displacement memory
${\mathbbm{M}}=-\frac{1}{8\pi}\oint
T{\mathbbm{m}}\epsilon_{S},\qquad{\mathbbm{m}}:={\mathbbm{D}}_{\rho}^{ab}({\sigma}_{ab}-\frac{u}{2}\rho_{ab})\big{|}^{S_{2}}_{S_{1}}.$
(2.6)
where we introduced the short-hand notation
${\mathbbm{D}}_{\rho}{}_{ab}:={\mathscr{D}}_{\langle
a}{\mathscr{D}}_{b\rangle}+\frac{1}{2}\rho_{\langle ab\rangle}$. When acting
on $T$, the four zero modes of this operator define global translations, which
form an ideal of the BMS algebra, and can be identified with the first four
harmonics $l=(0,1)$ in Bondi frames. It follows that the soft term vanishes
for global translations, resulting in a ‘purely hard’ flux. Furthermore, it is
strictly negative for a future-pointing global time translation: this is
Bondi’s celebrated result proving that gravitational waves dissipate energy.
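For orientation, consider the simplest case in a Bondi frame (where $\rho_{\langle ab\rangle}=0$): for a global time translation $T=1$ the soft term drops out, since ${\mathscr{D}}_{\langle a}{\mathscr{D}}_{b\rangle}T=0$, and (2.5) reduces to
$Q^{\scriptscriptstyle\rm BMS}_{T}[S_{2}]-Q^{\scriptscriptstyle\rm BMS}_{T}[S_{1}]\,\hat{=}\,-\frac{1}{32\pi}\int N^{ab}N_{ab}\,\epsilon_{\mathscr{I}}\leq 0,$
so the Bondi mass can only decrease in the presence of news.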
Modifying the flux-balance law so that it has a purely hard flux for any $T$
is at first sight straightforward. Since the soft term is a total derivative
in $u$, it is possible to reabsorb it in the left-hand side, and consider the
charge
$Q^{\scriptscriptstyle\rm
M}_{T}=\frac{1}{4\pi}\oint_{S}T\Big{(}M_{\rho}+\frac{1}{2}{{\mathbbm{D}}_{\rho}}^{ab}(\sigma_{ab}-\frac{u}{2}\rho_{\langle
ab\rangle})\Big{)}\epsilon_{S}.$ (2.7)
If we restrict this expression to a Bondi frame, we recognize the Moreschi
mass
$M^{\scriptscriptstyle\rm
M}:=M+\frac{1}{2}{{\mathscr{D}}}_{a}{{\mathscr{D}}}_{b}\sigma^{ab},$ (2.8)
that was proposed as a super-momentum charge in [26, 27]. The problem with
this proposal is that it is manifestly non-covariant. Adding Geroch’s tensor
fixes the limitation of the original expression to Bondi frames and achieves
full conformal invariance. However it is still foliation dependent, or
equivalently $l$ dependent, via the shear and $u$. Foliation-dependence
carries an intuitive meaning of non-covariance, which can be made sharper
observing that two super-translations vector fields commute, hence a
foliation-dependent super-momentum charge would fails to capture this basic
property of the algebra. The lack of covariance can be made explicit computing
its transformation law. This gives
$\delta_{\chi}Q^{\scriptscriptstyle\rm
M}_{T}\,\hat{=}\,Q^{\scriptscriptstyle\rm
M}_{[\xi_{T},\chi]}+K_{(\xi_{T},\chi)}+{\rm radiation\ terms},$ (2.9)
where the term
$K_{(\xi_{T},\chi)}=\frac{1}{16\pi}\oint_{S}T\left(({\mathscr{D}}^{2}+{\cal
R}){\mathscr{D}}^{2}f+2{\mathscr{D}}_{a}{\cal
R}{\mathscr{D}}^{a}f-4\sigma^{ab}{\mathscr{D}}_{a}{\mathscr{D}}_{b}f\right)\epsilon_{S}$
(2.10)
prevents the recovery of the correct symmetry action on non-radiative
spacetimes, and shows up as a field-dependent cocycle on the right-hand side
of the Barnich-Troessaert bracket. In the rest of this communication we show
how this problem can be resolved, to obtain a covariant charge with a purely
hard flux. The first step is to use the ‘covariant shear’ of [19].
## 3 Super-translation Goldstone and covariant shear
The AS radiative phase space is supplemented with the late time boundary
conditions
$\lim_{u\to\infty}N_{ab}\sim\frac{1}{u^{1+\epsilon}},\qquad\lim_{u\to\infty}\mathrm{Im}(\psi_{2})\sim\frac{1}{u^{\epsilon}},\qquad\epsilon>0,$
(3.1)
namely no radiation and a vacuum shear ${\smash{\overset{\circ}{\sigma}}}$
that is purely electric,
$\lim_{u\to\infty}\sigma_{ab}-\frac{1}{2}u\rho_{\langle
ab\rangle}={\smash{\overset{\circ}{\sigma}}}_{ab}-\frac{1}{2}u\rho_{\langle
ab\rangle}=-({{\mathscr{D}}}_{\langle
a}{{\mathscr{D}}}_{b\rangle}+\frac{1}{2}\rho_{\langle ab\rangle})u_{0}.$ (3.2)
The system thus settles down to equilibrium, and the radiative degrees of
freedom reduce to a vacuum parametrized by $u_{0}(x^{A})$, whose four-
parameter family of zero modes are the famous ‘good cuts’. The boundary
condition $u_{0}$ can be interpreted as a choice of ‘bad cut’ [24], as a
super-translation field parametrizing the vacua [19], or as Goldstone mode for
the breaking of the super-translation symmetry [20] caused by the choice of
final vacuum state ${\smash{\overset{\circ}{\sigma}}}$. Its transformation
rule induced by that of a vacuum shear is [24]
$\delta_{\xi}u_{0}=-T-u_{0}\dot{f}.$ (3.3)
We can use the boundary condition to define the relative shear
${\cal
S}_{ab}:=\sigma_{ab}-{\smash{\overset{\circ}{\sigma}}}_{ab}=\sigma_{ab}-\frac{1}{2}u\rho_{\langle
ab\rangle}+({{\mathscr{D}}}_{\langle
a}{{\mathscr{D}}}_{b\rangle}+\frac{1}{2}\rho_{\langle
ab\rangle})u_{0}\equiv-\frac{1}{2}{\cal C}_{ab},$ (3.4)
which is nothing but the ‘covariant shear’ ${\cal C}_{ab}$ introduced in [19].
The term covariant refers to the fact that it is invariant under super-
translations, hence independent of the background structure given by the $u$
foliation. This property follows from the definition, and can be verified
using (3.3). Notice that with this definition, the relative shear vanishes in
any non-radiative spacetime, since the shear has to match its own ‘bad cut’ in
the absence of radiation. The relative shear measures the same memory effect,
${\mathbbm{m}}={\mathbbm{D}}_{\rho}^{ab}{{\cal
S}}_{ab}\big{|}^{S_{2}}_{S_{1}}.$ (3.5)
It has conformal weight 1, and satisfies
$N_{ab}=2\pounds_{n}{\cal S}_{ab}.$ (3.6)
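This relation follows directly from the definition (3.4): since $u_{0}$ and $\rho_{ab}$ are $u$-independent, $2\pounds_{n}{\cal S}_{ab}=2\pounds_{n}\sigma_{ab}-\pounds_{n}(u\rho_{\langle ab\rangle})=2\pounds_{n}\sigma_{ab}-\rho_{\langle ab\rangle}=N_{ab}$.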
It is thus a covariant choice of ‘time potential’ for the news. Since it
differs from the one used to obtain (2.7) by a time-independent term, it is
natural to ask whether one can use the ambiguity to add time-independent terms
to the charges to achieve covariance. This cannot be done in the context of
[5, 8], since the relation to the canonical choice for the symplectic 2-form
would be lost. The BMS charges are unique, after all.
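As an aside, the super-translation invariance of ${\cal S}_{ab}$ claimed after (3.4) can be checked schematically from (3.2) and (3.3); the signs below assume the standard inhomogeneous super-translation shift of the shear, so the precise conventions are those of [19, 24]. For a pure super-translation, $f=T$ and $\dot{f}=0$, hence $\delta_{T}u_{0}=-T$ and
$\delta_{T}{\smash{\overset{\circ}{\sigma}}}_{ab}=-({{\mathscr{D}}}_{\langle a}{{\mathscr{D}}}_{b\rangle}+\frac{1}{2}\rho_{\langle ab\rangle})\,\delta_{T}u_{0}=({{\mathscr{D}}}_{\langle a}{{\mathscr{D}}}_{b\rangle}+\frac{1}{2}\rho_{\langle ab\rangle})T,$
which is the same inhomogeneous, field-independent term that appears in $\delta_{T}\sigma_{ab}$. It therefore cancels in $\delta_{T}{\cal S}_{ab}=\delta_{T}\sigma_{ab}-\delta_{T}{\smash{\overset{\circ}{\sigma}}}_{ab}$, leaving only the homogeneous, field-dependent part: this is the covariance property exploited below.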
## 4 Corner improvement and new charges
The way out of the uniqueness result for the standard BMS charges is that we
will allow for corner modifications of the symplectic 2-forms. These
modifications are compatible with the field equations, and can be motivated by
situations in which stationarity and covariance, or finiteness, cannot be
achieved otherwise, see e.g. [10, 11, 12, 13, 14, 15, 16, 17, 18]. For BMS all
these properties are already satisfied with the standard symplectic 2-form,
and our reason to consider a different one comes from the additional
requirement of having a purely hard flux.
The corner term modification that we introduce is
$\bar{\Omega}:=\Omega-\delta\oint_{S}\vartheta,\qquad\vartheta=\frac{1}{8\pi}{\cal
S}_{ab}\delta({\mathbbm{D}}_{\rho}^{ab}u_{0})\epsilon_{S}=\frac{1}{8\pi}{\cal
S}_{ab}\,\delta{\smash{\overset{\circ}{\sigma}}}^{ab}\epsilon_{S}.$ (4.1)
The new symplectic structure is not defined in the Ashtekar-Streubel radiative
phase space at ${\mathscr{I}}$, but in the enlarged phase space in which we
include as late time boundary condition a choice of vacuum. It makes the
Goldstone mode $u_{0}$ canonically conjugate to the displacement memory effect, as can easily be seen by integrating over (a region of) ${\mathscr{I}}$,
$\bar{\Omega}_{\mathscr{I}}=-\frac{1}{16\pi}\int\delta N_{ab}\delta{\cal
S}^{ab}\epsilon_{\mathscr{I}}=\Omega^{\scriptscriptstyle\rm
AS}-\frac{1}{8\pi}\oint_{S}\delta{\mathbbm{M}}\delta
u_{0}\epsilon_{S},\qquad{\mathbbm{M}}:={\mathbbm{D}}_{\rho}^{ab}{\sigma}_{ab}\big{|}^{S_{2}}_{S_{1}}.$
(4.2)
To write the fluxes and charges we need on top of the symplectic 2-form also a
choice of preferred symplectic potential.111For BMS, this step can be replaced
by a choice of topology in the phase space [9]. It would be interesting to
know if this alternative exists also for the new symplectic structure. We take
$\bar{\theta}=\theta^{\scriptscriptstyle\rm
BMS}-d\vartheta=-\frac{1}{16\pi}N_{ab}\delta{\cal
S}^{ab}\epsilon_{\mathscr{I}},$ (4.3)
where in the second equality we used (3.6). This choice is manifestly
covariant and vanishes in non-radiative spacetimes, hence it satisfies the
conditions for the generalized Wald-Zoupas prescription. The associated fluxes
are
$\bar{F}_{\xi}=-\frac{1}{16\pi}\int N_{ab}\delta_{\xi}{\cal
S}^{ab}\epsilon_{\mathscr{I}}=F^{\scriptscriptstyle\rm
BMS}_{\xi}-\frac{1}{8\pi}\oint^{S_{2}}_{S_{1}}\delta_{\xi}u_{0}\,{\mathbbm{D}}_{\rho}^{ab}{\cal
S}_{ab}\,\epsilon_{S},$ (4.4)
with charges
$\bar{Q}_{\xi}=\frac{1}{8\pi}\oint_{S}[(2fM_{\rho}+f|_{u_{0}}{{\mathbbm{D}}_{\rho}}^{AB}{\cal
S}_{AB})+Y^{A}(J_{A}-{\mathbbm{D}}_{\rho}^{BC}{\cal
S}_{BC}{\mathscr{D}}_{A}u_{0})]\epsilon_{S}.$ (4.5)
Since the symplectic potential is Wald-Zoupas, the fluxes are guaranteed to be
covariant, and the charge algebra admits at most a time-independent cocycle
[18]. Since the BMS algebra is center-less [18, 24], the only cocycle can come
from an anomalous $\vartheta$ [28, 29, 30, 15]. But the choice (4.1) is
covariant, hence the new charges (4.5) satisfy (2.4). Furthermore they differ
from the standard BMS charges by a term that vanishes in any non-radiative
spacetime, hence they also share with the BMS charges the property of
vanishing in Minkowski, for any symmetry parameter, at any cut.
Let us discuss the uniqueness of our proposal. The potential ambiguity in the
charges is the addition of time-independent terms. These are related to the
freedom of adding time-independent terms to $\vartheta$, in order to keep the
Wald-Zoupas requirement that the charges match the canonical generators on
partial Cauchy slices intersecting ${\mathscr{I}}$. Requiring covariance of
the charges restricts the allowed terms to be covariant themselves. But the
only time-independent fields are $u_{0}$, $\rho_{ab}$ and $q_{ab}$, and a
moment of reflection shows that it is not possible to write something
covariant. Hence any ambiguity is ruled out by the requirement to have
covariant charges. Similar considerations show that (4.3) is the only
covariant symplectic potential with a purely hard flux. We conclude that the
symplectic potential (4.3) and charges (4.5) are unique. What distinguishes
them from their standard BMS analogue is only the modification of the
symplectic structure.
The new super-momentum charge is
$\bar{Q}_{T}=\frac{1}{4\pi}\oint_{S}T\left(M_{\rho}-\frac{1}{4}{\mathbbm{D}}_{\rho}^{ab}{\cal
C}_{ab}\right)\epsilon_{S}.$ (4.6)
It is covariant, a canonical generator of super-translations with respect to
the corner-improved symplectic structure (4.2), and has a purely hard flux
$\bar{Q}_{T}[S_{2}]-\bar{Q}_{T}[S_{1}]\,\hat{=}\,-\frac{1}{32\pi}\int
TN^{ab}N_{ab}\epsilon_{\mathscr{I}}.$ (4.7)
It can be seen as a version of the Moreschi mass made covariant under
conformal transformations and super-translations thanks to $\rho_{ab}$ and
$u_{0}$. The precise relation between the two is
$\bar{M}=M^{\scriptscriptstyle\rm
M}+\frac{1}{8\pi}\oint_{S}T\left(\rho^{ab}{\sigma}_{ab}-\frac{u}{2}{\mathbbm{D}}_{\rho}^{ab}\rho_{ab}+{\mathbbm{D}}_{\rho}^{2}u_{0}\right)\epsilon_{S}.$
(4.8)
On round spheres, the required extra term for foliation-independence reduces
to $\oint T({\mathscr{D}}^{2}+2){\mathscr{D}}^{2}u_{0}$, and vanishes for
global translations.
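As a quick check of the last statement (assuming the usual identification of global translations with the $\ell\leq 1$ spherical harmonics of $T$), one can integrate by parts and use the round-sphere eigenvalues ${\mathscr{D}}^{2}Y_{\ell m}=-\ell(\ell+1)Y_{\ell m}$:
$\oint T({\mathscr{D}}^{2}+2){\mathscr{D}}^{2}u_{0}=\oint u_{0}\,({\mathscr{D}}^{2}+2){\mathscr{D}}^{2}T=0\qquad{\rm for}\ \ell=0,1,$
since ${\mathscr{D}}^{2}T=0$ for $\ell=0$ and $({\mathscr{D}}^{2}+2)T=0$ for $\ell=1$.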
## 5 Applications
With the standard BMS charges, only the total energy has a monotonic flux.
This corresponds to the lowest mode of the super-translation parameter, or
$T=1$ on a Bondi frame. Consider instead a localized super-momentum charge,
corresponding for instance to a parameter $T$ with a Gaussian-like profile
peaked on some point on the celestial sphere. This profile necessarily
involves higher modes whose flux is not monotonic, hence the charge cannot be
interpreted as a localized energy, because it may be increasing even though
the system is dissipating. Indeed, the standard interpretation of the higher
super-momentum modes is not related to the energy, but rather to the notion of
mass multipoles. Their flux is not monotonic, and corresponds to the memory
effect.
The new charge (4.6) has a monotonic flux for any positive super-translation
parameter. It follows that we can take any peaked Gaussian-like function $T$,
and extend Bondi’s result about dissipation from the total energy to a local
energy! We stress that this construction does not violate the principle of
general covariance, because some aspects of non-locality do remain. We have
been able to define a local energy density at ${\mathscr{I}}$, using the
background structure there at disposal, and its flux is local in the news
tensors. But the news tensor itself is a _non-local_ function of the Weyl
curvature, in both space and time. E.g. in Newman-Penrose formalism,
$\psi_{3}=\eth N,\qquad\psi_{4}=\dot{N}.$ (5.1)
Therefore the flux of the local energy still requires non-local knowledge of
the curvature. For this reason, our construction does not achieve a full
localization, hence the title.
A monotonic flux for all future-pointing super-translations has further
potential applications. Let us mention two. The first comes from the work of
Wall [31]. His proof of the generalized second law requires the axioms of
ultra-locality and stability. Ultra-locality means that the null geodesics
spanning a null hypersurface can be treated as independent subsystems, and
stability implies that the Hamiltonian on each subsystem has positive
eigenvalues. Identifying the Hamiltonian density with the super-translation
Noether current $j_{T}$, we can state the axioms as
$\int_{\mathcal{N}}j_{T}\epsilon_{\mathcal{N}}\geq 0,$ (5.2)
for any $T>0$, which includes any approximation of a delta function localized
at a point. In Wall’s work the null hypersurface ${\cal N}$ is a horizon, for
which a monotonic flux has been studied in e.g. [32, 33, 34, 35, 36]. Our work
shows that the axioms can apply to ${\mathscr{I}}$ as well, if the right
symplectic structure is chosen. This is in line with the interpretation of
${\mathscr{I}}$ as a horizon (in the conformal spacetime [37, 9]), and can be
relevant for studying the generalized second law at ${\mathscr{I}}$ (e.g. [38,
39]).
A second application comes from soft theorems. It was pointed out that the
soft terms in the flux are related to infra-red divergences [40], see also
[41, 17], hence a covariant removal of the soft flux may lead to infra-red
finiteness. We also remark that our improved symplectic structure appears to
be consistent with the corner bracket posited in [42]. However, we stress that
our proposal is valid for BMS, and not for larger symmetry groups such as eBMS
[43, 44, 45], gBMS [46, 47] or BMSW [30]. The reason for it is that the proof
of covariance relies on the universality of Geroch’s tensor, used explicitly
in the second equality of (4.1), and which is lost for the larger symmetry
groups. Covariant BMS symplectic potentials for eBMS and (a group isomorphic
to) gBMS have been recently found [13, 17, 18, 24], but do not have the
property of being purely hard.
## 6 Conclusions
The new super-momentum charge has the beautiful property that it provides a
notion of localized energy density, namely a quantity that can be defined on
arbitrarily small regions of the celestial sphere, and whose behaviour in
radiative spacetimes is monotonically decreasing. This property gives it a
compelling operational definition. Consider an observer with a container
filled with water and a very precise thermometer. When gravitational waves reach
the observer they will heat the water. By what amount? Bondi’s formula only
gives us the total amount of energy released, proportional to the integral of
the squared news on the whole celestial sphere. But this is hardly relevant to
the container, which is a sharply localized object on the celestial sphere.
Intuitively, the relevant flux should depend only on the news at the specific
direction $(\theta,\varphi)$ of the observer. As any function on the sphere
$T(x^{A})$ is an asymptotic symmetry of ${\mathscr{I}}$, we can choose one
that is sharply peaked at the given direction. Let us idealize it with a Dirac
delta, $T=\delta^{(2)}(\theta,\phi)$. Then the increase of water temperature
$T_{w}$ is
$\displaystyle\Delta T_{w}=\frac{\alpha_{w}}{32\pi
G\rho_{w}c_{w}}\int_{u_{1}}^{u_{2}}N^{2}_{ab}(u,\theta,\phi)du,$ (6.1)
where $\alpha_{w}$ is a conversion factor measuring the fraction of energy
that can be extracted from the radiation and converted into heat in the water
per unit volume, $\rho_{w}$ is the water density and $c_{w}$ is the specific heat capacity of water. The question is then whether a local experiment can verify this
formula, since as remarked earlier, the news depends non-locally on the
curvature, via (5.1). We claim that this is possible, as long as $u_{1}$ was
in the non-radiative regime. With this assumption,
$N(u,\theta,\phi)=\int^{u}_{u_{1}}\psi_{4}(u^{\prime},\theta,\phi)du^{\prime}.$
(6.2)
Hence a _spatially local_ measurement of curvature is enough to determine the
energy dissipation (6.1). The only left-over non-locality is that the
characterization of a non-radiative cross-section still requires $\psi_{3}$ on the whole sphere. The non-locality can thus be confined to the
initial conditions for the experiment.
The localized charge (4.6) associated to this flux meets various criteria in
order to be interpreted as an energy. First, it is the Noether charge and
canonical generator for the associated super-translation symmetry. Second, it
is always positive, thanks to the negativity of the flux, if we assume that
the final state has a positive Bondi mass aspect, as for a stationary black
hole. Third, it is extensive. This follows from the fact that it is
functionally linear in $T$. Extensivity may come as a surprise, since the
theory is non-linear and it is not possible to screen gravitational
interaction. But here we are talking about extensivity at ${\mathscr{I}}$,
where different regions on the celestial sphere are infinitely far away and
_causally disconnected_ , hence extensivity is a natural property.222A similar
construction could be done also on local null hypersurfaces if one uses a
monotonic flux, however only within domains free of caustics or crossings, so as to guarantee causal disconnection. Otherwise extensivity would no longer be possible. Super-translations are in fact not even admissible symmetries of
local null surfaces [48].
Since the asymptotic observers cannot synchronize their clocks, their notions
of future directed time translations are independent and equivalent. The
canonical generator associated to these symmetries should then capture this
property, and this is what our notion of local energy density does. The local
version of the Bondi energy loss formula reflects the basic fact that there is
no universal notion of energy in general relativity. Therefore, the new
charges that we constructed have all the properties of localized energy. If
this energy transfer is observable, it is a good argument in favor of the new symplectic structure and of the introduction of the edge mode $u_{0}$ in the phase space, as it seems impossible to obtain a covariant and stationary symplectic flux and charge without it.
### Acknowledgements
We thank Abhay Ashtekar and Jurek Lewandowski for comments.
## Charge aspects
The charges can be written without reference to any coordinate system on
${\mathscr{I}}$ if we introduce an auxiliary null vector $l$ such that $n\cdot
l=-1$. This vector can be used to identify the shear and Lorentz subgroups,
and to that end it is convenient to take it hypersurface orthogonal and Lie
dragged by $n$, so that it is adapted to the leaves of an affine foliation. The
pair $(n,l)$ can be completed to a Newman-Penrose basis at ${\mathscr{I}}$
with a complex dyad $(m,\bar{m})$. The charge aspects can then be written
using the Newman-Penrose formalism as
$M_{\rho}=-\mathrm{Re}(\psi_{2}-\sigma N),\qquad
m^{A}J_{A}=-\left(\psi_{1}+\sigma\eth\bar{\sigma}+\frac{1}{2}\eth(\sigma\bar{\sigma})\right),$
where $\sigma=-m^{a}m^{b}\sigma_{ab}$ and
$N=\frac{1}{2}\bar{m}^{a}\bar{m}^{b}N_{ab}$. This is the form in which they
appear for instance in [6]. Fixing $l$ adapted to the cross-section is not
necessary for the super-momentum, but it is for the Lorentz charge, and it is
ultimately related to the fact that there is no preferred Lorentz subgroup of
the BMS group in radiative spacetimes: one can speak of a Lorentz group only
in relation to a chosen cross section.
The aspects can also be related to the metric coefficients of an asymptotic
expansion of the bulk metric. To that end, we introduce Bondi coordinates
$(u,r,x^{A})$, and the expansion
$\displaystyle g_{uu}$ $\displaystyle=-\frac{{\cal
R}}{2}+\frac{2M}{r}+O(r^{-2}),\qquad
g_{ur}=-1-\frac{2\beta}{r^{2}}+O(r^{-3}),\qquad\beta:=-\frac{1}{32}C^{AB}C_{AB},$
$\displaystyle g_{uA}$
$\displaystyle=-U_{A}+\frac{2}{3r}(J_{A}+\partial_{A}\beta-\frac{1}{2}C_{AB}U^{B})+O(r^{-2}),\qquad
U_{A}:=-\frac{1}{2}{\mathscr{D}}^{B}C_{AB},$ $\displaystyle g_{AB}$
$\displaystyle=r^{2}q_{AB}+rC_{AB}+O(1).$
The bulk Bondi coordinates induce affine coordinates $(u,x^{A})$ on
${\mathscr{I}}$ such that
$n\,\,\smash{\stackrel{{\scriptstyle\scriptscriptstyle\rm{{\mathscr{I}}}}}{{=}}}\,\,\partial_{u}$,
and an affine foliation given by the level sets of $u$, whose shear is
$\sigma_{ab}=-1/2\delta_{a}^{A}\delta_{b}^{B}C_{AB}$, and the aspects $M$ and
$J_{A}$ appear at first order in the conformal factor $\Omega:=1/r$.
## References
* [1] H. Bondi, M. G. J. van der Burg and A. W. K. Metzner, Gravitational waves in general relativity. 7. Waves from axisymmetric isolated systems, Proc. Roy. Soc. Lond. A 269 (1962) 21–52.
* [2] R. Sachs, Asymptotic symmetries in gravitational theory, Phys. Rev. 128 (1962) 2851–2864.
* [3] R. Geroch, Asymptotic Structure of Space-Time, ch. 1, pp. 1–106. Springer US, Boston, MA, 1977.
* [4] A. Ashtekar, Radiative Degrees of Freedom of the Gravitational Field in Exact General Relativity, J. Math. Phys. 22 (1981) 2885–2895.
* [5] A. Ashtekar and M. Streubel, Symplectic Geometry of Radiative Modes and Conserved Quantities at Null Infinity, Proc. Roy. Soc. Lond. A 376 (1981) 585–607.
* [6] T. Dray and M. Streubel, Angular momentum at null infinity, Class. Quant. Grav. 1 (1984), no. 1 15–26.
* [7] T. Dray, Momentum Flux At Null Infinity, Class. Quant. Grav. 2 (1985) L7–L10.
* [8] R. M. Wald and A. Zoupas, A General definition of ’conserved quantities’ in general relativity and other theories of gravity, Phys. Rev. D 61 (2000) 084027 [gr-qc/9911095].
* [9] A. Ashtekar and S. Speziale, Null Infinity as a Weakly Isolated Horizon, 2402.17977.
* [10] G. Compere and D. Marolf, Setting the boundary free in AdS/CFT, Class. Quant. Grav. 25 (2008) 195014 [0805.1902].
* [11] D. Harlow and J.-Q. Wu, Covariant phase space with boundaries, JHEP 10 (2020) 146 [1906.08616].
* [12] L. Freidel, M. Geiller and D. Pranzetti, Edge modes of gravity. Part I. Corner potentials and charges, JHEP 11 (2020) 026 [2006.12527].
* [13] M. Campiglia and J. Peraza, Generalized BMS charge algebra, Phys. Rev. D 101 (2020), no. 10 104039 [2002.06691].
* [14] G. Odak and S. Speziale, Brown-York charges with mixed boundary conditions, JHEP 11 (2021) 224 [2109.02883].
* [15] V. Chandrasekaran, E. E. Flanagan, I. Shehzad and A. J. Speranza, A general framework for gravitational charges and holographic renormalization, Int. J. Mod. Phys. A 37 (2022), no. 17 2250105 [2111.11974].
* [16] G. Odak, A. Rignon-Bret and S. Speziale, Wald-Zoupas prescription with soft anomalies, Phys. Rev. D 107 (2023), no. 8 084028 [2212.07947].
* [17] L. Donnay, K. Nguyen and R. Ruzziconi, Loop-corrected subleading soft theorem and the celestial stress tensor, JHEP 09 (2022) 063 [2205.11477].
* [18] A. Rignon-Bret and S. Speziale, Covariance and symmetry algebras, 2403.00730.
* [19] G. Compère, A. Fiorucci and R. Ruzziconi, Superboost transitions, refraction memory and super-Lorentz charge algebra, JHEP 11 (2018) 200 [1810.00377]. [Erratum: JHEP 04, 172 (2020)].
* [20] A. Strominger, On BMS Invariance of Gravitational Scattering, JHEP 07 (2014) 152 [1312.2229].
* [21] A. Ashtekar, Geometry and Physics of Null Infinity, 1409.1800.
* [22] E. E. Flanagan and D. A. Nichols, Conserved charges of the extended Bondi-Metzner-Sachs algebra, Phys. Rev. D 95 (2017), no. 4 044002 [1510.03386].
* [23] A. M. Grant, K. Prabhu and I. Shehzad, The Wald-Zoupas prescription for asymptotic charges at null infinity in general relativity, Class. Quant. Grav. 39 (2022), no. 8 085002 [2105.05919].
* [24] A. Rignon-Bret and S. Speziale, Center-less BMS charge algebra, 2405.01526.
* [25] G. Barnich and C. Troessaert, BMS charge algebra, JHEP 12 (2011) 105 [1106.0213].
* [26] O. M. Moreschi and S. Dain, Rest frame system for asymptotically flat space-times, J. Math. Phys. 39 (1998) 6631–6650 [gr-qc/0203075].
* [27] S. Dain and O. M. Moreschi, General existence proof for rest frame systems in asymptotically flat space-time, Class. Quant. Grav. 17 (2000) 3663–3672 [gr-qc/0203048].
* [28] V. Chandrasekaran and A. J. Speranza, Anomalies in gravitational charge algebras of null boundaries and black hole entropy, JHEP 01 (2021) 137 [2009.10739].
* [29] L. Freidel, R. Oliveri, D. Pranzetti and S. Speziale, Extended corner symmetry, charge bracket and Einstein’s equations, JHEP 09 (2021) 083 [2104.12881].
* [30] L. Freidel, R. Oliveri, D. Pranzetti and S. Speziale, The Weyl BMS group and Einstein’s equations, JHEP 07 (2021) 170 [2104.05793].
* [31] A. C. Wall, A proof of the generalized second law for rapidly changing fields and arbitrary horizon slices, Phys. Rev. D 85 (2012) 104049 [1105.3445]. [Erratum: Phys.Rev.D 87, 069904 (2013)].
* [32] A. Ashtekar, N. Khera, M. Kolanowski and J. Lewandowski, Charges and fluxes on (perturbed) non-expanding horizons, JHEP 02 (2022) 066 [2112.05608].
* [33] A. Rignon-Bret, Second law from the Noether current on null hypersurfaces, Phys. Rev. D 108 (2023), no. 4 044069 [2303.07262].
* [34] G. Odak, A. Rignon-Bret and S. Speziale, General gravitational charges on null hypersurfaces, JHEP 12 (2023) 038 [2309.03854].
* [35] L. Ciambelli, L. Freidel and R. G. Leigh, Null Raychaudhuri: canonical structure and the dressing time, JHEP 01 (2024) 166 [2309.03932].
* [36] S. Hollands, R. M. Wald and V. G. Zhang, The Entropy of Dynamical Black Holes, 2402.00818.
* [37] A. Ashtekar and S. Speziale, Horizons and Null Infinity: A Fugue in 4 voices, 2401.15618.
* [38] R. Bousso, Asymptotic Entropy Bounds, Phys. Rev. D 94 (2016), no. 2 024018 [1606.02297].
* [39] T. Faulkner and A. J. Speranza, Gravitational algebras and the generalized second law, 2405.00847.
* [40] N. Arkani-Hamed, M. Pate, A.-M. Raclariu and A. Strominger, Celestial amplitudes from UV to IR, JHEP 08 (2021) 062 [2012.04208].
* [41] M. Campiglia and A. Laddha, BMS Algebra, Double Soft Theorems, and All That, 2106.14717.
* [42] T. He, V. Lysov, P. Mitra and A. Strominger, BMS supertranslations and Weinberg’s soft graviton theorem, JHEP 05 (2015) 151 [1401.7026].
* [43] G. Barnich and C. Troessaert, Symmetries of asymptotically flat 4 dimensional spacetimes at null infinity revisited, Phys. Rev. Lett. 105 (2010) 111103 [0909.2617].
* [44] G. Barnich and C. Troessaert, Aspects of the BMS/CFT correspondence, JHEP 05 (2010) 062 [1001.1541].
* [45] S. Pasterski, A. Strominger and A. Zhiboedov, New Gravitational Memories, JHEP 12 (2016) 053 [1502.06120].
* [46] M. Campiglia and A. Laddha, Asymptotic symmetries and subleading soft graviton theorem, Phys. Rev. D 90 (2014), no. 12 124028 [1408.2228].
* [47] M. Campiglia and A. Laddha, New symmetries for the Gravitational S-matrix, JHEP 04 (2015) 076 [1502.02318].
* [48] V. Chandrasekaran, E. E. Flanagan and K. Prabhu, Symmetries and charges of general relativity at null boundaries, JHEP 11 (2018) 125 [1807.11499].
1 NYU Multimedia and Visual Computing Lab
2 NYU Tandon School of Engineering, New York University, USA
3 New York University Abu Dhabi, UAE
4 NYU Langone Health, USA
Email: {yh3252, jf4151, yw523<EMAIL_ADDRESS>
Email<EMAIL_ADDRESS>
# Detect and Approach: Close-Range Navigation Support for People with
Blindness and Low Vision
Yu Hao1,2, Junchi Feng2, John-Ross Rizzo2,4, Yao Wang2, Yi Fang1,2,3 (✉)
###### Abstract
People with blindness and low vision (pBLV) experience significant challenges
when locating final destinations or targeting specific objects in unfamiliar
environments. Furthermore, besides initially locating and orienting oneself to
a target object, approaching the final target from one’s present position is
often frustrating and challenging, especially when one drifts away from the
initial planned path to avoid obstacles. In this paper, we develop a novel
wearable navigation solution to provide real-time guidance for a user to
approach a target object of interest efficiently and effectively in unfamiliar
environments. Our system contains two key visual computing functions: initial
target object localization in 3D and continuous estimation of the user’s
trajectory, both based on the 2D video captured by a low-cost monocular camera mounted in front of the user’s chest. These functions enable the system to suggest an initial navigation path, continuously update the path as the user moves, and offer timely recommendations for correcting the user’s path. Our experiments demonstrate that our system is able to operate with an error of less than 0.5 meter both outdoors and indoors. The system is
entirely vision-based and does not need other sensors for navigation, and the
computation can be run with the Jetson processor in the wearable system to
facilitate real-time navigation assistance.
###### Keywords:
Assistive technology, Object localization from video, Navigation
## 1 Introduction
According to 2020 WHO estimates, 295 million people suffer from moderate to
severe visual impairment, while 43.3 million people are presently blind [22].
Globally, between 1990 to 2020, the number of moderate to severely visually
impaired increased by 91.7%, and the number of people who were blind increased
by 50.6% [8]. This trend is predicted to continue with estimates approaching
474 million people with moderate to severe visual impairment and 61 million
people with blindness by 2050 [21]. Blindness and low vision poses significant
challenges for nearly every activities of daily living [17]. One critical task
element of most activities in daily living is visual search or a goal-oriented
activity that involves the active scanning of the environment to locate a
particular target among irrelevant distractors [28]. Performing visual search
can be demanding in complex environments, even for those with normal vision.
It is even more challenging for the pBLV [15]. Most people with moderate to severe peripheral vision loss, central vision loss, or hemi-field vision loss have difficulty, due to reductions in the field of view, isolating a particular location when searching for an object of interest, and may need help in locating the object. People experiencing blurred vision or nearsightedness may have difficulty identifying objects at relatively far distances. For people with color-deficient vision and low-contrast vision, it may be difficult to distinguish objects from the background when the object and background share similar colors. Aside from isolating the
particular location of an object, closing the distance between one’s current
position and the object itself is also a challenge. pBLV often want not only information about the initial location of the object relative to their
current position, but also continuous help in navigating to the object along
the way [4].
Figure 1: Challenges for pBLV exist in various scenarios: Searching for
objects of interests and walking to the objects (Left). Our wearable system
contains a backpack with a monocular camera, an Nvidia Jetson Xavier NX
Developer kit, and battery. The camera is placed on the chest of the user
(Right). Figure 2: Our system is able to detect and locate an object of
interest and guide a person with BLV to the target object. Initially, an
object detection module will detect all possible interesting objects. Once the
person selects a target object of interest, the object localization module
will provide the 3D location of the object and plan the path for the person to
reach the object. The trajectory estimation module will then continuously
estimate the person’s movement between two time points, update the object
location (relative to the user), and send path correction feedback to the user
when necessary.
Solutions to aid this enormous and ever-growing problem of blindness and low
vision are desperately needed. In the context of navigation and overcoming the
close-range challenge, assistive technologies may help close the gap, and aid
pBLV attain functional independence with better quality of life [18]. However,
for the pBLV, only a limited number of tools have modest market traction and
very few, if any, are able to support precise interaction with objects of
interest in the surrounding environment.
Many of the present mobile apps for way-finding have decent success at leading
end users to the general vicinity of a target location, but few are able to provide precise instructions as one approaches the target. As most apps are
focused on outdoor use and are predicated on GPS technology, they lack the
accuracy required to support close-range navigation, which is necessary for
pBLV to approach their final destination. Adding insult to injury, as most
pBLV live in metropolitan environments, the accuracy of GPS-enabled
smartphones degrades from a 4.9-meter radius under the open sky to a 20-meter radius in an urban setting, which is insufficient for reaching an exact location
and/or specific objects of interest [6].
In this work, we develop a new wearable navigation solution to augment
perceptive ability for pBLV. The system will help a user to locate a target
object of interest and provide guidance to reach the target efficiently and
effectively in unfamiliar environments. Our wearable system, as shown in
Figure 1, contains a backpack with a monocular camera, an Nvidia Jetson Xavier
NX Developer kit, and a battery. The camera is placed in front of the chest of
the user in a custom scaffold that can be mounted on the shoulder strap of a
backpack housing the Jetson board and the battery. With the sequence of images
captured by the camera, our system is able to detect the target object and provide real-time path planning and updated guidance to the end user as the
user approaches the target object, with an accuracy of less than 0.5 meter.
The visual processing part of our system contains three main modules, as
illustrated in Figure 2: object detection, object localization, and trajectory
estimation. The object detection module is implemented by a pretrained YOLOv5
detection model [26], which is responsible to detect all possible objects of
interests. After the user selects an object as the target, the object
localization module will provide the initial 3D coordinate of the target
object and suggest an initial path for navigation from the user’s current
location to the target (e.g. the first purple path in the figure). The
trajectory estimation module will then continuously estimate the movement of
the user (or more precisely the camera) and consequently update the desired
path to the target (the second purple path in the figure). If the angle
between the updated path and the estimated user’s path (the yellow path) is
higher than a pre-defined threshold, our system will send an alert message to
the user. In this example, the system may say “Please head towards your left
slightly by about 30 degrees”.
To reduce the system cost and computational load, we only use a deep-learning
model for object detection in the first frame. We estimate the initial 3D
coordinates of the target object using the corresponding 2D locations of the
object in two initial frames as shown in Figure 3, to alleviate the need for a
stereo camera for depth sensing. Given the initial position of the target
object, we estimate the camera motion between successive frames to determine
the trajectory of the user, and update the object position relative to the
user for continuous path updating as shown in Figure 3.
Our experiments demonstrate that our system is able to detect objects of
interest and provide real-time updates of the object location relative to the
user as the user moves towards the object, with an error of less than 0.5
meter. The system is entirely vision-based and does not need other sensors for
navigation (e.g. IMUs and range sensors), and the computation can be run with
the Jetson processor in the wearable system to facilitate real-time navigation
assistance.
## 2 Related Works
Considering the growing prevalence of smartphones in general and in the pBLV
population [12] [14] [7], mobile applications can be a potential solution to
address the needs of localization and navigation for the pBLV [25]. The All
Aboard [10], developed by Massachusetts Eye and Ear, utilizes computer vision
to detect bus stop signs in the vicinity of the users and guide them to the precise location of the bus stop by providing distance estimates of the bus stop sign using computer vision algorithms. The drawback of this app is that it only detects bus stop signs.
Figure 3: The processing flow for the target object localization module (top)
and trajectory estimation module (bottom).
Another example of the computer vision-based app for the pBLV is Virtual Touch
[12]. This app utilizes the smartphone’s camera to capture the surrounding
environment of the user, and detects objects of interest in the scene. This
app enables the users to interact with the environment by pointing their fingers at an object of interest, and the app will tell the users what object they are pointing at. However, this app lacks the abilities of distance
estimation and navigation.
Another category of assistive technology for navigation is light-based indoor
positioning [16]. This technology requires illuminating equipment such as LED lights to illuminate the environment and transmit infrared signals at the same
time. The user holds a receiver such as the smartphone to receive and decode
the light signals. By calculating the angles of the received signals, it is
possible to accurately localize the user and provide navigation for indoor
locations where GPS doesn’t work well. However, the cost of this system can be
a concern, as it requires significant effort to establish the required illumination infrastructure.
On the other hand, our system combines object detection and trajectory
estimation features to provide all the functions necessary for detecting and
navigating to a target object using a monocular camera only. It can detect
different types of target, whereas the All Aboard app can only detect a
specific type of object. Our system augments the pBLV’s perceptive ability
more than the Virtual Touch app because our system also provides real-time
path correction function to guide users to the target object. Moreover, our
system is wearable and all the processing can be done locally, and hence can
provide navigation without the need for new infrastructure or sensors other than a monocular camera.
Our methodology for object localization and trajectory estimation from 2D
video is similar to that used for visual SLAM (simultaneous localization and
mapping) [24] [19], typically used to track the camera pose of a field robot
and map the surrounding environment relative to the robot. Here we use visual
SLAM to estimate the movement of a user wearing a camera and the location of a
particular stationary object relative to the user. Therefore, the proposed
system is an innovative integration of object detection and visual SLAM for
assisting pBLV in detecting and approaching a target object.
## 3 Methods
As shown in Figure 2, our system consists of three main visual processing
modules: object detection, object localization and trajectory estimation. We
use a pretrained YOLOv5 model to detect all possible interesting objects on
the first frame. After a user selects an object as the target object, the
object localization module will determine the 3D coordinate of the object
relative to the user (more precisely the camera center) based on first two
frames of the video. This module is applied only in the first two frames at
the start of the navigation. Then the trajectory estimation module will
continuously estimate the movement of the user between frames and consequently
update the location of the target relative to the user’s current position.
Based on the updated object location, the system may provide path correction
suggestions to the user. There are two options for selecting the target object among all objects found by the object detection module: 1) the system reads out all detected objects via audio, and the user selects the target object through the microphone using an existing speech-to-text API; or 2) the user applies Virtual Touch [12] to interact with the environment by pointing a finger at the target object.
The details of the object localization and trajectory estimation modules are
described in the following subsections. Because the object localization module
makes use of the feature correspondence and camera motion estimation
approaches used for trajectory estimation, we will first describe the
trajectory estimation module in Sec. 3.1, and then present the object
localization module in Sec. 3.2.
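To make the detection step concrete, the following is a minimal Python sketch (not the authors’ exact code) of running a pretrained YOLOv5 model on the first frame through the public torch.hub interface; the “yolov5s” variant, the confidence threshold, and the file name are illustrative assumptions.

```python
import cv2
import torch

# Load a pretrained YOLOv5 model from the public ultralytics hub
# (the "yolov5s" variant and the 0.4 confidence threshold are assumed settings).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.4  # detection confidence threshold

frame = cv2.imread('first_frame.jpg')                    # first captured frame (BGR)
results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # YOLOv5 expects RGB input

# Each detection row: [x1, y1, x2, y2, confidence, class_id]
for x1, y1, x2, y2, conf, cls in results.xyxy[0].cpu().numpy():
    print(f"{results.names[int(cls)]}: box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}), conf={conf:.2f}")
# The user then selects one of the reported detections as the target object.
```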
### 3.1 Trajectory Estimation
In this section, we introduce the trajectory estimation module, which aims to
determine the movement of the user between two video frames with a chosen
frame interval.111The video is typically captured between 15 to 30 frames per
second. But this processing may be done at a slower speed, e.g. every 0.5 to 1
second. We make use of the fact that the camera is mounted in front of the
user’s chest and therefore the camera’s movement is a good proxy for the
user’s movement. We adopt a classical approach for determining the rotation
and translation of the camera between two camera views based on the
correspondence of selected features points. The user’s movement between the
two frames is assumed to be equal to the estimated camera translation.
Our trajectory estimation module includes three components: feature point
detection and feature descriptor extraction; feature point matching; and
camera motion estimation based on the epipolar constraint, as shown in Figure
3 (bottom). The following subsections describe these components.
#### 3.1.1 Feature Descriptor Extraction and Matching
In recent years, many local feature detectors and descriptors, such as SIFT
[13], SURF [2] and ORB[27], have been developed and used for object
recognition, image registration, classification, or 3D reconstruction. To
enable real-time navigation assistance, we chose the ORB feature descriptor
[27], which are oriented multi-scale FAST [1] corners with an associated 256-bit descriptor. There are two main advantages of ORB: 1) ORB uses an orientation compensation mechanism, making it rotation invariant; 2) ORB learns the optimal sampling pairs, whereas other descriptors like BRIEF [3] use randomly chosen sampling pairs. These strategies boost the accuracy and
efficiency of feature detection and matching.
Based on the feature descriptors, we establish the correspondences between the
features in the current frame and the reference frame of the same scene. We
first use a brute force matching algorithm to calculate the similarity between
all descriptors in the current frame and all descriptors in the reference
frame and determine an initial set of pairs of 2D coordinates of corresponding
features. The RANdom SAmple Consensus (RANSAC) [5] algorithm is then utilized to exclude the matching outliers and, furthermore, to estimate the essential matrix
that best describes the geometric relation between corresponding 2D
coordinates, to be introduced in the next subsection.
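A minimal OpenCV sketch of this matching pipeline is shown below; it is illustrative rather than the authors’ implementation, and the number of ORB features and the RANSAC parameters are assumed values. Here K denotes the camera intrinsic matrix.

```python
import cv2
import numpy as np

def match_and_estimate_essential(img1, img2, K):
    """ORB features + brute-force Hamming matching + RANSAC essential matrix (Eq. (1))."""
    orb = cv2.ORB_create(nfeatures=2000)   # number of features is an assumed setting
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

    # Brute-force matching with Hamming distance (ORB descriptors are binary strings)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches while estimating the essential matrix
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    return E, pts1, pts2, inlier_mask
```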
#### 3.1.2 Determining the camera rotation and translation
When a monocular camera views a 3D scene from two distinct positions and
orientations, there are a number of geometric constraints between the
projections of the same 3D points onto the 2D images [9]. Let $p$ and $q$
denote the homogeneous coordinates of the 2D projections of the same 3D point
$P$ in the reference and the current frame. They are related by the
Longuet–Higgins equation [9]:
$q^{t}Ep=0$ (1)
where the matrix $E$ is known as the essential matrix, which depends on the
camera rotation and translation between the two frames and the camera’s
intrinsic parameters. As described previously, we can use RANSAC to determine
the best $E$ matrix given the set of corresponding features points in the two
frames.
It is well-known [20] that we can use singular value decomposition (SVD) of
the essential matrix $E$ to determine the camera rotation matrix $R$ and
translation vector $t$. Specifically, we use SVD to obtain matrix $U$ and $V$
so that:
$E=UDV^{T}$ (2)
The rotation $R$ and translation $t$ can be computed from $U$ and $V$ as:
$R=UWV^{T},\;\;t=U_{3},\;\;{\rm with}\;\;W=\left[\begin{array}{ccc}0&-1&0\\ 1&0&0\\ 0&0&1\end{array}\right]$ (3)
The results are algebraically correct also with $-t$ and $W^{T}$, so we try
all possible solutions on the matching descriptors to choose the $R$ and $t$
that lead to the least fitting error for Eq. (1). For implementation, we use
the openCV library function [23] to calculate the essential matrix and camera
rotation and translation.
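For reference, this step can be sketched with OpenCV as follows (a hedged illustration, not the authors’ code): cv2.recoverPose decomposes the essential matrix as in Eqs. (2)-(3) and disambiguates the four candidate solutions by a cheirality check, which plays the same role as selecting the solution with the smallest fitting error.

```python
import cv2

def camera_motion(E, pts1, pts2, K):
    """Recover camera rotation R and translation t from the essential matrix E,
    using the matched points pts1/pts2 and the intrinsic matrix K."""
    n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)
    # Note: for a monocular camera, t is only defined up to an overall scale.
    return R, t.ravel()
```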
Once the camera translation $t$ is determined, we update the target object
location by the estimated camera translation, i.e., $o^{\prime}=o-t$, where
$o$ is the object location in the reference frame and $o^{\prime}$ is its
location in the current frame, relative to the camera center and hence the
user. The straight line connecting the object location $o^{\prime}$ in the
ground plane (i.e. the $X$ and $Z$ coordinate) and the user is the updated
path.222Here we assume that there is an open space between the target and the
user for simplicity. In practice, more sophisticated algorithms that detect
obstacles between the target and the user and plan the path accordingly are
needed. In this work, we focus on the visual processing components. On the
other hand, the camera translation $t$ indicates the direction of the user’s
latest movement between the current frame and the last frame. We evaluate the
angle between $t$ and $o^{\prime}$. If the angle is larger than a pre-defined
threshold, our system will send out a friendly alert message to the user.
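The location update and alert logic described above can be summarized by the following sketch (a hypothetical helper; the 30-degree threshold is the value used in the experiments):

```python
import numpy as np

ANGLE_THRESHOLD_DEG = 30.0  # pre-defined alert threshold used in Sec. 4

def update_object_and_check_path(o, t):
    """o: object location in the reference frame; t: estimated camera translation.
    Returns the updated object location o' = o - t and whether an alert is needed."""
    o_new = o - t
    # Compare directions in the ground (X-Z) plane only
    path = np.array([o_new[0], o_new[2]])   # updated desired path towards the object
    move = np.array([t[0], t[2]])           # direction of the user's latest movement
    cos_angle = np.dot(path, move) / (np.linalg.norm(path) * np.linalg.norm(move) + 1e-9)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return o_new, angle_deg > ANGLE_THRESHOLD_DEG
```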
### 3.2 Object Localization
In this section, we introduce our object localization module, which aims to
determine the 3D coordinate of the target object at the start of the
navigation. Given that we only have a monocular camera, one potential option
is to use a deep-learning model for determining the depth from 2D images. This
is however computationally demanding. Instead, we take advantage of the fact
that we have a video sequence captured while the user is moving, and use the
two adjacent video frames to determine the object location. Specifically, we
first determine the 2D coordinates of the object center in the two initial
frames and the camera motion between the two frames. We then determine the 3D
coordinate of the object center through a triangulation algorithm, as
illustrated in Figure 3 (top).
We use the same algorithm described in Sec. 3.1 to determine the corresponding
features in the first two frames and the camera motion (rotation and
translation) between the two frames, except that, for the first frame, we only
perform feature extraction within the bounding box of the detected object. We
use the centroid of the 2D coordinates of all the feature points in the object
region in the first frame, as the coordinate of the object center in the first
frame, denoted by $p$. Similarly, we determine the object center coordinate in
the second frame, denoted by $q$, using feature points that correspond to the
features belonging to the object in the first frame.
Given the camera rotation $R$ and translation $t$ and the 2D positions of the
object center, $p$ and $q$, in their homogeneous representations, we utilize
triangulation [9] to obtain the 3D coordinate of the object center $P$ (in the
homogeneous representation) with respect to the camera center in the first
frame. Specifically, given the camera pose $R$ and $t$, we compute the
projection matrix $J_{1}$ for the first frame and $J_{2}$ for the second
frame:
$J_{1}=K\cdot[I,0],\;\;J_{2}=K\cdot[R,t]$ (4)
where $K$ is the intrinsic matrix of camera and $I$ is the identity matrix.
Since the cross-product between two parallel vectors equals zero, we have:
$\displaystyle p\times(J_{1}P)=0,\;\;q\times(J_{2}P)=0$ (5)
where $p=(u_{1},v_{1},1)$ and $q=(u_{2},v_{2},1)$. This equation can also be
written as follows:
$\displaystyle\left(\begin{array}{c}u_{1}J_{1}^{3}-J_{1}^{1}\\ v_{1}J_{1}^{3}-J_{1}^{2}\\ u_{2}J_{2}^{3}-J_{2}^{1}\\ v_{2}J_{2}^{3}-J_{2}^{2}\end{array}\right)\cdot P=A\cdot P=0$ (10)
Then we apply SVD on $A$ to obtain $C,S,$ and $D$ so that
$A=CSD^{T}$ (11)
The solution $P$ is the column of $D$ associated with the smallest singular value (the last column, written $D_{3}$ in zero-based indexing):
$P=(X,Y,Z,W)=D_{3}$ (12)
Finally, we can transform the homogeneous coordinate to the Cartesian
coordinate using
${\tilde{P}}=(X/W,Y/W,Z/W)$ (13)
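A compact NumPy sketch of this triangulation (the direct linear transform of Eqs. (4)-(13)) could look like the following; cv2.triangulatePoints provides an equivalent built-in alternative. The helper name is hypothetical.

```python
import numpy as np

def triangulate_object_center(p, q, K, R, t):
    """p, q: pixel coordinates (u, v) of the object center in the first two frames.
    Returns the Cartesian 3D position of the object center in the first camera frame."""
    J1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # Eq. (4): J1 = K [I | 0]
    J2 = K @ np.hstack([R, t.reshape(3, 1)])            # Eq. (4): J2 = K [R | t]

    (u1, v1), (u2, v2) = p, q
    A = np.vstack([u1 * J1[2] - J1[0],                   # Eq. (10): rows built from the
                   v1 * J1[2] - J1[1],                   # cross-product constraints (5)
                   u2 * J2[2] - J2[0],
                   v2 * J2[2] - J2[1]])

    # Eq. (11): SVD of A; the solution is the right singular vector associated
    # with the smallest singular value (last row of D^T, i.e. last column of D).
    _, _, Dt = np.linalg.svd(A)
    P = Dt[-1]                                           # Eq. (12): homogeneous (X, Y, Z, W)
    return P[:3] / P[3]                                  # Eq. (13): dehomogenize
```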
## 4 Experiments
We carried out a set of experiments to evaluate the performance of our
proposed system. We first use the KITTI odometry data to evaluate our system,
where the video sequences are captured by a moving vehicle. We also run an
experiment simulating a user walking towards a target object in an indoor
environment and evaluate the performance of our algorithms. We describe these
two experiments and their results separately.
Figure 4: Example results of object detection and object localization for the
KITTI dataset. Yellow rectangles denote the bounding box of the detected
objects (car and motorcycle). We also show the estimated location $(X,Z)$ of
the detected objects. Since we are only interested in the ground position of
the objects, we only show the $X$ and $Z$ coordinate for visualization.
### 4.1 Experiment with the KITTI dataset
#### 4.1.1 KITTI Dataset:
The odometry benchmark from the KITTI dataset contains 11 sequences from a car
driven around a residential area with accurate ground truth from GPS and a
Velodyne laser scanner. We choose the car, the motorbike, the pedestrian and
the traffic light as possible target objects. We extract four video sequences
from the KITTI dataset each containing a target object. Specifically, for
video 1, we use frames 3-18 in Sequence 06 of KITTI visual odometry dataset
and select the car as the target object. For video 2, we use frames 2360-2370
from sequence 08 and select the motorcycle as the target object. For video 3,
we use frames 3416-3431 in Sequence 08 and select the person as the target.
For video 4, we use frames 3970-3980 in Sequence 08 and select the traffic
light as the target.
To generate the ground truth for object location, we use the corresponding 3D
scan of velodyne laser data in each frame for reference. Specifically, we
annotate a 3D bounding box for each object of interest and calculate the
centroid of the 3D coordinates of all points in the bounding box as the ground
truth object location.
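As an illustration, the centroid-based ground truth can be computed from a Velodyne scan roughly as in the sketch below; an axis-aligned box is assumed here for brevity, whereas the actual annotation may use oriented boxes.

```python
import numpy as np

def ground_truth_object_location(velodyne_points, box_min, box_max):
    """velodyne_points: (N, 3) array of LiDAR points for one frame.
    box_min, box_max: opposite corners of the annotated 3D bounding box (axis-aligned)."""
    inside = np.all((velodyne_points >= box_min) & (velodyne_points <= box_max), axis=1)
    return velodyne_points[inside].mean(axis=0)  # centroid of the points inside the box
```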
Data | Mean Absolute Error
---|---
Car | 0.23
Motorcycle | 0.27
Person | 0.14
Traffic Light | 0.67
Mean | 0.39
Table 1: Accuracy of object localization for 4 videos on the KITTI odometry
dataset. MAE in meter.
Data | MAE | RMSE
---|---|---
Video 1 | 0.056 | 0.091
Video 2 | 0.052 | 0.085
Video 3 | 0.063 | 0.094
Video 4 | 0.055 | 0.092
Mean | 0.056 | 0.090
Table 2: Accuracy of trajectory estimation for 4 videos in the KITTI odometry
dataset. MAE and RMSE in meter
For evaluation of trajectory estimation, we use the root mean squared error
(RMSE) and mean absolute error (MAE) between the predicted translational
movement and ground truth camera movement between two frames, considering only
the X- and Z-coordinates. For evaluation of object localization, we use the
mean absolute error (MAE) between the estimated object location and ground
truth location. Lower values indicate better performance.
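For concreteness, one common reading of these metrics restricted to the ground-plane coordinates is sketched below (an illustrative helper, not the authors’ evaluation script):

```python
import numpy as np

def mae_rmse_xz(pred, gt):
    """pred, gt: (N, 3) arrays of (X, Y, Z) positions; only X and Z are compared."""
    err = np.linalg.norm(pred[:, [0, 2]] - gt[:, [0, 2]], axis=1)
    return err.mean(), np.sqrt((err ** 2).mean())  # (MAE, RMSE) in meters
```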
#### 4.1.2 Results:
We report the MAE between the predicted object location and ground truth
location to validate the effectiveness of our object localization module in
Table 1. Even when estimating small objects that are far away, such as the
traffic light, our system still achieves a small error of 0.67 meter.
Table 2 reports the RMSE and MAE between the predicted trajectory between two
successive frames and ground truth trajectory. Our system is able to estimate
the trajectory accurately and achieve promising results with 0.056 meter for
MAE and 0.090 meter for RMSE on average over the 4 videos.
In addition to the quantitative results discussed above, we also show example
visual results in Figure 4. Our system can successfully detect the bounding
boxes and categories of the target objects by the object detection module and
estimate the object location by the object localization module.
Figure 5: Examples of target object detection and object localization for our
dataset. Yellow rectangles denote the bounding boxes of the detected objects
(laptop and chair). We also show the estimated location $(X,Z)$ of the
detected objects.
### 4.2 Simulated Navigation Experiments
#### 4.2.1 Experimental Setting:
We also conducted experiments to evaluate the proposed system in a simulated
navigation experiment. We record 4 videos by the ZED camera [11] in an office
room. Specifically, we select 2 objects of interest (laptop and chair) and
record 2 videos while we walk towards each object. We use the trajectory
captured by the positional tracking system of the ZED camera as the ground
truth. To validate the effectiveness of our object localization module, we use
the depth sensing system of ZED camera to obtain the ground truth of object
position. For each object, we design 2 different test scenarios. In the first
scenario, the object is located in front of the user. In the second scenario, the
object is located to the left front or right front of the user. In each case,
the video is captured while a user wearing the camera is walking straight to
the front.
Data | Mean Absolute Error
---|---
Chair 1 | 0.30
Chair 2 | 0.33
Laptop 1 | 0.18
Laptop 2 | 0.20
Mean | 0.25
Table 3: Accuracy of object localization using sequences in our dataset. MAE
in meter between the ground truth location and the estimated location in $X$
and $Z$.
Data | MAE | RMSE
---|---|---
Video 1 | 0.094 | 0.139
Video 2 | 0.088 | 0.138
Video 3 | 0.077 | 0.122
Video 4 | 0.083 | 0.127
Mean | 0.086 | 0.132
Table 4: Accuracy of trajectory estimation using sequences on our dataset.
RMSE and MAE in meter between the ground truth translation and the estimated
translation.
Figure 6: Examples of trajectory estimation, path update, and path correction alerts on our dataset. We show frames 2, 4, 6, and 8 in Video 4. The purple arrow denotes the updated desired path. The yellow arrow denotes the actual path of the user. As an example, in frame 8 the angle between the planned path and the user’s path is larger than 30 degrees, so our system sends out an alert message suggesting that the user veer left slightly.
Table 5: Running time of different modules in seconds. The object detection module is tested on the Jetson Xavier NX with the NVIDIA Volta GPU. Trajectory estimation and object localization are tested with the ARM Cortex®-A57 MPCore CPU.
 | Image Resolution | Object Detection | Object Localization | Trajectory Estimation
---|---|---|---|---
KITTI dataset | 1226x370 | 0.18 | 0.62 | 0.58
Our dataset | 1920x1080 | 0.18 | 0.98 | 0.92
#### 4.2.2 Results:
We first examine the accuracy of the object localization module in Table 3. Moreover, we show some sample frames with object localization results in Figure 5. We observe that our system successfully detects the bounding box and predicts the initial coordinate of the target object in the first frame.
Table 4 reports the accuracy for trajectory estimation. We illustrate a few
example frames in Figure 6. As we can see from the figure, our system can provide accurate path planning and path correction. If the angle between the planned path (purple arrow) and the user’s path (yellow arrow) is larger than 30 degrees, our system will send out an alert message to the user.
### 4.3 Run time Analysis
The run time for the three computation modules for the KITTI video and our
video are summarized in Table 5. We expect the run time using the Jetson
processor to be slightly higher than using our CPU. Therefore, we expect the
navigation initialization (including object detection and localization) to take less than 2 sec. and trajectory estimation to take less than 1 sec. with the
Jetson processor. This should be sufficient for real-time navigation
assistance when one walks towards an object. These times can be further
shortened with the optimization of the software implementation.
## 5 Conclusions
In this paper, we present a novel wearable navigation assistive system for
pBLV, which augments their perceptive power so that they can perceive objects
in their surrounding environment and reach the object of interest easily. Our
light-weight wearable system consists of a monocular camera mounted in front
of the chest of the user, a Jetson board for computation, and a battery. To
reduce the system cost and computation load, the system performs object
detection and localization only at the start of the navigation using the first
two captured frames, and then continuously updates the object location relative
to the user by estimating the camera motion between frames. This is akin to
visual SLAM for tracking the pose of a moving camera, but here we use the
visual SLAM approach to update the user’s location and correspondingly the
object location relative to the user. Such continuously updated user and
object locations then enable real-time navigation path update and feedback to
the user.
Our experimental results on the KITTI odometry video dataset and simulated
indoor navigation videos dataset demonstrate that the proposed system can
accurately detect and localize the target object at the start of the
navigation, and estimate the user movement continuously, with an error well
within 0.5 meter, both outdoors and indoors.333 Note that in the KITTI videos and our videos, the camera motion between successive frames is relatively small, leading to very small motion estimation error as well. For the intended navigation application, such analysis only needs to be run between frames with a larger interval, and hence larger errors are likely, but we expect them to be on the same order as the localization error, which is within 0.5 meter. The
system is entirely vision-based and does not need other sensors for navigation
(e.g. IMUs and range sensors), and the computation can be run with the Jetson
processor in the wearable system to facilitate real-time navigation
assistance. Such a system holds great promise for assisting pBLV in their
daily living. Future research may develop a system where the video is uploaded
to an edge server for conducting all computation tasks, to further reduce the
wearable system weight [29].
##### Acknowledgments:
Research reported in this publication was supported in part by the NSF grant
1952180 under the Smart and Connected Community program, the National Eye
Institute of the National Institutes of Health under Award Number R21EY033689,
and DoD grant VR200130 under the “Delivering Sensory and Semantic Visual Information via Auditory Feedback on Mobile Technology” program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, NSF, and DoD. Yu Hao
was partially supported by NYUAD Institute (Research Enhancement Fund -
RE132).
##### Conflict of Interest:
New York University (NYU) and John-Ross Rizzo (JRR) have financial interests
in related intellectual property. NYU owns a patent licensed to Tactile
Navigation Tools. NYU, JRR are equity holders and advisors of said company.
## References
* [1] Alcantarilla, P.F., Solutions, T.: Fast explicit diffusion for accelerated features in nonlinear scale spaces. IEEE Trans. Patt. Anal. Mach. Intell 34(7), 1281–1298 (2011)
* [2] Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: Speeded-up robust features (surf). Computer vision and image understanding 110(3), 346–359 (2008)
* [3] Calonder, M., Lepetit, V., Strecha, C., Fua, P.: Brief: Binary robust independent elementary features. In: European conference on computer vision. pp. 778–792. Springer (2010)
* [4] Fernandes, H., Costa, P., Filipe, V., Paredes, H., Barroso, J.: A review of assistive spatial orientation and navigation technologies for the visually impaired. Universal Access in the Information Society 18(1), 155–168 (2019)
* [5] Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381–395 (1981)
* [6] GPS.gov: Gps accuracy. Official U.S. government information about the Global Positioning System (GPS) and related topics
* [7] Griffin-Shirley, N., Banda, D.R., Ajuwon, P.M., Cheon, J., Lee, J., Park, H.R., Lyngdoh, S.N.: A survey on the use of mobile applications for people who are visually impaired. Journal of Visual Impairment & Blindness 111(4), 307–323 (2017)
* [8] Hakobyan, L., Lumsden, J., O’Sullivan, D., Bartlett, H.: Mobile assistive technologies for the visually impaired. Survey of ophthalmology 58(6), 513–528 (2013)
* [9] Hartley, R., Zisserman, A.: Multiple view geometry in computer vision. Cambridge university press (2003)
* [10] Jiang, E., Ma, Z., Singh, A., Bobba, A., Park, S., Pundlik, S., Luo, G.: Field testing of all aboard, an ai app for helping blind individuals to find bus stops. Investigative Ophthalmology & Visual Science 62(8), 3529–3529 (2021)
* [11] Labs, S.: ZED 2 Camera product page. https://www.stereolabs.com/zed-2
* [12] Liu, X.J., Fang, Y.: Virtual touch: Computer vision augmented touch-free scene exploration for the blind or visually impaired. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1708–1717 (2021)
* [13] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91–110 (2004)
* [14] Lu, D., Fang, Y.: Audi-exchange: Ai-guided hand-based actions to assist human-human interactions for the blind and the visually impaired. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1718–1726 (2021)
* [15] MacKeben, M., Fletcher, D.C.: Target search and identification performance in low vision patients. Investigative Ophthalmology & Visual Science 52(10), 7603–7609 (2011)
* [16] Maheepala, M., Kouzani, A.Z., Joordens, M.A.: Light-based indoor positioning systems: A review. IEEE Sensors Journal 20(8), 3971–3995 (2020). https://doi.org/10.1109/JSEN.2020.2964380
* [17] Massiceti, D., Hicks, S.L., van Rheede, J.J.: Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PloS one 13(7), e0199389 (2018)
* [18] Montello, D.R.: Cognitive research in giscience: Recent achievements and future prospects. Geography Compass 3(5), 1824–1840 (2009)
* [19] Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: Orb-slam: a versatile and accurate monocular slam system. IEEE transactions on robotics 31(5), 1147–1163 (2015)
* [20] Nistér, D.: An efficient solution to the five-point relative pose problem. IEEE transactions on pattern analysis and machine intelligence 26(6), 756–770 (2004)
* [21] Organization, W.H., et al.: Visual impairment and blindness fact sheet n 282. World Health Organization (2014)
* [22] Pascolini, D., Mariotti, S.P.: Global estimates of visual impairment: 2010. British Journal of Ophthalmology 96(5), 614–618 (2012)
* [23] Pulli, K., Baksheev, A., Kornyakov, K., Eruhimov, V.: Real-time computer vision with opencv. Communications of the ACM 55(6), 61–69 (2012)
* [24] Qin, T., Li, P., Shen, S.: Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics 34(4), 1004–1020 (2018)
* [25] Real, S., Araujo, A.: Navigation systems for the blind and visually impaired: Past work, challenges, and open problems. Sensors 19(15), 3404 (2019)
* [26] Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 779–788 (2016)
* [27] Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: Orb: An efficient alternative to sift or surf. In: 2011 International conference on computer vision. pp. 2564–2571. Ieee (2011)
* [28] Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cognitive psychology 12(1), 97–136 (1980)
* [29] Yuan, Z., Azzino, T., Hao, Y., Lyu, Y., Pei, H., Boldini, A., Mezzavilla, M., Beheshti, M., Porfiri, M., Hudson, T.E., et al.: Network-aware 5g edge computing for object detection: Augmenting wearables to “see” more, farther and faster. IEEE Access 10, 29612–29632 (2022)
# One-Bit Quadratic Compressed Sensing:
From Sample Abundance to Linear Feasibility
Arian Eamaz⋆, Farhang Yeganegi⋆, Deanna Needell†, and Mojtaba Soltanalian⋆
The first two authors have contributed equally to this work. ⋆ECE Department,
University of Illinois Chicago, Chicago, USA
†Department of Mathematics, University of California Los Angeles, Los Angeles,
USA
###### Abstract
One-bit quantization with time-varying sampling thresholds has recently found
significant utilization potential in statistical signal processing
applications due to its relatively low power consumption and low
implementation cost. In addition to such advantages, an attractive feature of
one-bit analog-to-digital converters (ADCs) is their superior sampling rates
as compared to their conventional multi-bit counterparts. This characteristic
endows one-bit signal processing frameworks with what we refer to as _sample
abundance_. On the other hand, many signal recovery and optimization problems
are formulated as (possibly non-convex) quadratic programs with linear
feasibility constraints in the one-bit sampling regime. We demonstrate, with a
particular focus on quadratic compressed sensing, that the sample abundance
paradigm allows for the transformation of such quadratic problems to merely a
linear feasibility problem by forming a large-scale overdetermined linear
system; thus removing the need for costly optimization constraints and
objectives. To efficiently tackle the emerging overdetermined linear
feasibility problem, we further propose an enhanced randomized Kaczmarz
algorithm, called _Block SKM_. Several numerical results are presented to
illustrate the effectiveness of the proposed methodologies.
## I Introduction
In the past two decades, sparsity-based processing methods have been
attracting a growing interest in statistical signal processing
applications[1]. Quadratic compressed sensing (QCS) is a widely used
formulation in sparse signal recovery; examples include when imaging a sparse
object using partially and spatially incoherent illumination[2], or phase
retrieval for sparse signals [3].
To approach the global optimum, the QCS problem was relaxed as a semidefinite
programming (SDP) problem, which involves minimizing the rank of a lifted
matrix while satisfying both the recovery constraints and the row sparsity
constraints on the signal[1, 4]. To retrieve the sparse solution, an iterative
thresholding algorithm was proposed that leverages a sequence of SDPs. This
approach is similar to the recent developments in the field of phase
retrieval, where similar semidefinite programming-based ideas have been
utilized[5, 4, 6, 7]. Unfortunately, these methods have a high complexity,
making them difficult to use for the QCS problem.
To overcome the computational challenges posed by convex optimization
techniques, non-convex methods have been introduced as an alternative
approach. These methods tackle the phase retrieval problem as a least-square
problem and aim to find a local optimum using various optimization
techniques[3, 8, 9]. In [3], the authors proposed the greedy sparse phase retrieval (GESPAR) algorithm, a fast local search method that recovers the signal from magnitude measurements in the QCS problem more accurately than existing local methods. However, the highly non-convex and non-unique nature
of the problem presents a challenge in finding an optimal local solution. To
enhance the performance of these local methods, various initialization
algorithms have been proposed to improve their outcomes[10, 11].
Sampling the signals of interest at high data rates with high-resolution ADCs
would dramatically increase the overall implementation cost and power
consumption of the sampling task. In multi-bit sampling scenarios, a very
large number of quantization levels is necessary in order to represent the
original continuous signal with high accuracy, which in practice, leads to a
considerable reduction in sampling rate [12, 13]. This attribute of multi-bit
sampling has served as a key motivator for the proliferation of
underdetermined signal processing tools [6, 14, 15]. An alternative solution
to such challenges is to deploy _one-bit quantization_ , which is an extreme
sampling scenario, where the signals are merely compared with given threshold
levels at the ADC, thus producing sign data ($\pm 1$). This enables signal
processing equipment to sample at a very high rate, with a considerably lower
cost and energy consumption, compared to their conventional counterparts that
employ multi-bit ADCs [16, 12, 17, 18].
The use of a fixed threshold in one-bit quantization can result in
difficulties in accurately estimating the signal amplitude. To address this
issue, recent studies have proposed the use of time-varying thresholds, which
have been shown to enhance signal recovery performance[19, 20, 21, 22].
In this paper, we consider the deployment of one-bit sampling with time-
varying thresholds on QCS, leading to an increased sample size and a _highly
overdetermined system_ as a result. Our proposed method can recover the
desired sparse signal from the _one-bit QCS_ by (i) generating abundant one-bit measurements, in order to define a large-scale overdetermined system in which a finite-volume feasible set is created for QCS, and (ii) solving this
obtained linear feasibility problem by leveraging one of the efficient solver
families of overdetermined systems, namely the _Kaczmarz algorithms_.
The Kaczmarz method [23] is an iterative projection algorithm for solving
linear systems of equations and inequalities. It is usually applied to highly
overdetermined systems because of its simplicity. Many variants of this
iterative method and their convergence rates have been proposed and studied in
recent decades for both consistent and inconsistent systems including the
randomized Kaczmarz algorithm, the randomized block Kaczmarz algorithm and
most recently, the sampling Kaczmarz-Motzkin (SKM) method [24, 25, 26, 27].
To reconstruct the signal of interest from the one-bit sampled QCS, we employ a novel variant of the Kaczmarz algorithm, the _Block Sampling Kaczmarz-Motzkin_ (Block SKM) method, whose theoretical guarantees will be discussed.
_Outline_ : Section II is dedicated to a review of QCS. In Section III, we
will briefly introduce the one-bit sampling via time-varying thresholds and
propose the _one-bit polyhedron_ for the QCS, which is a large-scale
overdetermined system. An accelerated Kaczmarz approach is proposed to find
the optimal point in the one-bit QCS polyhedron in Section IV. Also, the
convergence rate of the proposed algorithm is investigated. Section V is devoted to numerical results of the proposed Kaczmarz algorithm, showing its recovery performance in the one-bit regime. We also compare the performance of the proposed algorithm with that of the well-known high-resolution method GESPAR in the _phase retrieval_ scenario, i.e., when the rank of the middle matrix is one. Finally, Section VI concludes the paper.
_Notation:_ We use bold lowercase letters for vectors and bold uppercase
letters for matrices. $\mathbb{C}$ and $\mathbb{R}$ represent the set of
complex and real numbers, respectively. $(\cdot)^{\top}$ and
$(\cdot)^{\mathrm{H}}$ denote the vector/matrix transpose, and the Hermitian
transpose, respectively. $\mathbf{I}_{N}\in\mathbb{R}^{N\times N}$ and
$\mathbf{0}_{N_{1}\times N_{2}}$ are the identity matrix of size $N$ and all-
zero matrix of size $N_{1}\times N_{2}$. $\operatorname{Tr}(.)$ denotes the
trace of the matrix argument. The Frobenius norm of a matrix $\mathbf{B}$ is
defined as
$\|\mathbf{B}\|_{\mathrm{F}}=\sqrt{\sum^{N_{1}}_{r=1}\sum^{N_{2}}_{s=1}\left|b_{rs}\right|^{2}}$
where $\\{b_{rs}\\}$ are elements of $\mathbf{B}$. The $\ell^{0}$-norm of a
vector counts the number of its non-zero elements. The Hadamard (element-wise)
product of two matrices $\mathbf{B}_{1}$ and $\mathbf{B}_{2}$ is denoted as
$\mathbf{B}_{1}\odot\mathbf{B}_{2}$. The vectorized form of a matrix
$\mathbf{B}$ is written as $\operatorname{vec}(\mathbf{B})$. $\mathbf{1}_{s}$
is the $s$-dimensional all-one vector. Given a scalar $x$, we define $(x)^{+}$
as $\max\left\\{x,0\right\\}$. The function $\operatorname{sgn}(\cdot)$ yields
the sign of its argument. The floor operation is denoted by $\lfloor\cdot\rfloor$.
## II Quadratic Compressed Sensing
In QCS, a sparse high-dimensional signal is to be recovered from a quadratic
cost function [1, 3]:
$\displaystyle\min_{\mathbf{x}}$ $\displaystyle\left\|\mathbf{x}\right\|_{0}$
(1) s.t. $\displaystyle
y_{j}=\mathbf{x}^{\mathrm{H}}\mathbf{A}_{j}\mathbf{x},\quad
j\in\mathcal{J}=\left\\{1,\cdots,m\right\\},$
where $\mathbf{x}\in\mathbb{C}^{n}$ is the signal to be recovered,
$\left\\{y_{j}\right\\}$ are the measurements,
$\left\\{\mathbf{A}_{j}\right\\}\in\mathbb{R}^{n\times n}$ are the associated
sensing matrices, and $m$ is the number of measurements. The convex relaxation
of (1) is obtained by the matrix lifting procedure, given by
$\displaystyle
y_{j}=\mathbf{x}^{\mathrm{H}}\mathbf{A}_{j}\mathbf{x}=\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{xx}^{\mathrm{H}}\right)=\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right),$
(2)
where $\mathbf{X}=\mathbf{xx}^{\mathrm{H}}$, and
$\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=\operatorname{vec}\left(\mathbf{A}^{\top}_{j}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right)$.
The sparsity constraint on $\mathbf{x}$ can be dealt with by enforcing the
row-sparsity of $\mathbf{X}$. If $\mathbf{x}$ has $k$ non-zero elements, then
$\mathbf{X}$ has $k$ rows containing non-zero elements, and each of these rows
is also $k$-sparse. The row-sparsity of $\mathbf{X}$ may be promoted by adding
a quadratic constraint on $\mathbf{X}$, i.e.,
$\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{\frac{1}{2}}<\eta$,
where $\eta$ is a positive number[2]. Based on (2), the QCS problem can be
reformulated as:
find $\displaystyle\bm{X}$ (3) s.t.
$\displaystyle\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j},$
$\displaystyle\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{\frac{1}{2}}<\eta,$
$\displaystyle\operatorname{rank}\left(\mathbf{X}\right)=1,\leavevmode\nobreak\
\mathbf{X}\succeq 0.$
To have a convex program similar to [2], the problem (3) may be relaxed as,
$\displaystyle\min_{\mathbf{X}}$ $\displaystyle\operatorname{Tr}(\mathbf{X})$
(4) s.t.
$\displaystyle\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j},$
$\displaystyle\sum_{r}\left(\sum_{s}\left|\mathbf{X}_{rs}\right|^{2}\right)^{\frac{1}{2}}<\eta,\leavevmode\nobreak\
\mathbf{X}\succeq 0.$
The above problem is a semi-definite program (SDP). Similar SDP-based ideas
were recently utilized in the context of phase retrieval [6, 14]. However, the
SDP has a high computational complexity; in particular, the semi-definiteness
and row-sparsity constraint in the above problem render it computationally
demanding [28, 29, 3].
An interesting alternative to enforcing the _feasible set_ in problem (4),
denoted as $\mathcal{F}_{\mathbf{X}}$, emerges when one increases the number
of samples $m$, and solves the overdetermined linear system of equations with
$m\gg n$. In this sample abundance regime, the linear constraint
$\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=y_{j}$ may actually
yield the optimum inside $\mathcal{F}_{\mathbf{X}}$. As a result of increasing
the number of samples, it is possible that the intersection of these
hyperplanes will achieve the optimal point without the need to consider other
costly constraints. However, this idea may face practical limitations in the
case of multi-bit quantization systems since ADCs capable of ultra-high rate
sampling are difficult and expensive to produce. Moreover, one cannot
necessarily expect these constraints to intersect with
$\mathcal{F}_{\mathbf{X}}$ in such a way to form a finite-volume space before
the optimum is obtained [15, 6].
In the next section, by deploying the idea of one-bit sampling with time-
varying thresholds, linear equality constraints are superseded by a massive
array of linear inequalities—thus forming a polyhedron that asymptotically
coincides with $\mathcal{F}_{\mathbf{X}}$.
## III One-Bit QCS
In this section, we briefly introduce one-bit sampling with time-varying thresholds and the associated signal reconstruction problem. We demonstrate that the utilization of time-varying thresholds in one-bit sampling results in a highly overdetermined system, represented as a polyhedron. Subsequently, by exploiting the _abundance of samples_ in the one-bit sampling approach, the one-bit sampled QCS problem is formulated as a _linear feasibility_ problem.
### III-A One-Bit Sampling with Time-Varying Thresholds
Consider a bandlimited signal $y\in L^{2}$, which is to be represented by its
samples via the standard sampling formula[30],
$0<\mathrm{T}\leqslant\frac{\pi}{\Omega},\quad
y(t)=\sum_{k=-\infty}^{k=+\infty}y(k\mathrm{T})\operatorname{sinc}\left(\frac{t}{\mathrm{T}}-k\right),$
(5)
where $1/\mathrm{T}$ is the sampling rate, $\Omega$ is the maximum angular frequency of $y$, and $\operatorname{sinc}(t)=\frac{\sin(\pi t)}{\pi t}$ is the impulse response of an _ideal_ low-pass filter. Suppose $y_{k}=y(k\mathrm{T})$ denotes the uniform samples of $y(t)$ with the sampling rate $1/\mathrm{T}$. Let $r_{k}$ denote the quantized version of $y_{k}$ with the formulation $r_{k}=Q(y_{k})$, where $Q$ denotes the
quantization effect. In one-bit quantization, compared to zero or constant
thresholds, time-varying sampling thresholds yield a better reconstruction
performance [31, 32]. These thresholds may be chosen from any distribution. In
this work, to be consistent with state-of-the-art [31, 33, 20], we consider a
Gaussian non-zero time-varying threshold vector
$\bm{\uptau}=\left[\tau_{k}\right]$ that follows the distribution
$\bm{\uptau}\sim\mathcal{N}\left(\mathbf{d}=\mathbf{1}d,\bm{\Sigma}\right)$.
In the case of one-bit quantization with such time-varying sampling
thresholds, the quantizer is simply written as
$r_{k}=\operatorname{sgn}\left(y_{k}-\tau_{k}\right)$. Let
$\mathbf{y}=[y_{k}]$ and $\mathbf{r}=[r_{k}]$. Then, the signal feasibility
based on the one-bit measurements takes the form
$\mathbf{r}\odot\left(\mathbf{y}-\uptau\right)\geq\mathbf{0},$ (6)
or equivalently
$\displaystyle\bm{\Omega}\mathbf{y}$
$\displaystyle\succeq\mathbf{r}\odot\uptau,$ (7)
where $\bm{\Omega}\triangleq\operatorname{diag}\left\\{\mathbf{r}\right\\}$.
Suppose $\mathbf{y},\uptau\in\mathbb{R}^{m}$, and that $\uptau^{(\ell)}$
denotes the time-varying sampling threshold in $\ell$-th experiment where
$\ell\in\mathcal{L}=\\{1,\cdots,m_{1}\\}$. According to (7), for the $\ell$-th
experiment we have
$\displaystyle\bm{\Omega}^{(\ell)}\mathbf{y}$
$\displaystyle\succeq\bm{r}^{(\ell)}\odot\uptau^{(\ell)},\quad\ell\in\mathcal{L},$
(8)
where
$\bm{\Omega}^{(\ell)}=\operatorname{diag}\left\\{\mathbf{r}^{(\ell)}\right\\}$.
In (8), we have $m_{1}$ linear system of inequalities which can be put
together and expressed as
$\tilde{\bm{\Omega}}\mathbf{y}\succeq\operatorname{vec}\left(\mathbf{R}\right)\odot\operatorname{vec}\left(\bm{\Gamma}\right),$
(9)
where $\mathbf{R}$ and $\bm{\Gamma}$ are matrices, with
$\left\\{\mathbf{r}^{(\ell)}\right\\}$ and $\left\\{\uptau^{(\ell)}\right\\}$
representing their columns, respectively, and $\tilde{\bm{\Omega}}$ is given
by
$\tilde{\bm{\Omega}}=\left[\begin{array}[]{c|c|c}\bm{\Omega}^{(1)}&\cdots&\bm{\Omega}^{(m_{1})}\end{array}\right]^{\top},\quad\tilde{\bm{\Omega}}\in\mathbb{R}^{m_{1}m\times m}.$ (10)
Utilizing the one-bit quantization technique with multiple time-varying
sampling threshold sequences allows for an increase in the number of samples
with little extra cost and serves as a gateway to the realm of few-bit
sampling. This can be especially beneficial in applications where measurement
limitations exist.
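The following Python sketch (our own illustrative code; the helper name `one_bit_measurements` is not from the paper) generates one-bit data for a sample vector $\mathbf{y}$ using $m_{1}$ Gaussian time-varying threshold sequences, in the spirit of (5)–(10):

```python
import numpy as np

def one_bit_measurements(y, m1, sigma, rng=None):
    """One-bit sampling of y with m1 Gaussian time-varying threshold sequences.

    Returns R (signs) and Gamma (thresholds), each of shape (m, m1), so that the
    stacked system (9) reads diag(R[:, l]) @ y >= R[:, l] * Gamma[:, l] for every l.
    """
    rng = np.random.default_rng(rng)
    m = y.shape[0]
    Gamma = sigma * rng.standard_normal((m, m1))      # tau^(l) as columns
    R = np.sign(y[:, None] - Gamma)                   # r^(l) = sgn(y - tau^(l))
    R[R == 0] = 1.0                                   # break exact ties (measure-zero event)
    return R, Gamma

# toy example
rng = np.random.default_rng(1)
y = rng.standard_normal(50)
R, Gamma = one_bit_measurements(y, m1=40, sigma=np.abs(y).max() / 3)

# every one-bit inequality r_k (y_k - tau_k) >= 0 holds by construction
assert np.all(R * (y[:, None] - Gamma) >= 0)
```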
### III-B One-Bit QCS as Linear Feasibility Problem
Hereafter, we will focus on (9) as an overdetermined linear system of
inequalities that is associated with the one-bit sampling scheme. Applying one-bit sampling to the QCS measurements in (1), referred to as one-bit QCS, yields
$r^{(\ell)}_{j}=\begin{cases}+1&\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)>\uptau^{(\ell)}_{j},\\\
-1&\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)<\uptau^{(\ell)}_{j}.\end{cases}$
(11)
As a result, by using the linear property of trace function
$\operatorname{Tr}\left(\mathbf{A}_{j}\mathbf{X}\right)=\operatorname{vec}\left(\mathbf{A}^{\top}_{j}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right)$,
the _one-bit QCS_ polyhedron can be written as
$\displaystyle\mathcal{P}=\left\\{\mathbf{X}\mid
r^{(\ell)}_{j}\operatorname{vec}\left(\mathbf{A}^{\top}_{j}\right)^{\top}\operatorname{vec}\left(\mathbf{X}\right)\geq
r^{(\ell)}_{j}\uptau^{(\ell)}_{j},\leavevmode\nobreak\
\ell\in\mathcal{L},\leavevmode\nobreak\ j\in\mathcal{J}\right\\},$ (12)
which is vectorized based on
$\mathbf{y}=\mathbf{V}\operatorname{vec}\left(\mathbf{X}\right)$, where
$\mathbf{V}$ is a matrix with
$\left\\{\operatorname{vec}\left(\mathbf{A}^{\top}_{j}\right)\right\\}$ as its
rows. The inequality (12) may be recast in the standard polyhedron form as
$\displaystyle\mathcal{P}=\left\\{\mathbf{X}\mid\mathbf{P}\operatorname{vec}\left(\mathbf{X}\right)\succeq\operatorname{vec}\left(\mathbf{R}\right)\odot\operatorname{vec}\left(\bm{\Gamma}\right)\right\\},$
(13)
where $\mathbf{P}=\tilde{\bm{\Omega}}\bm{V}$. By leveraging the sample abundance in one-bit sampling, the space constrained by (13) _shrinks_ until it is contained inside the _feasible region_. Moreover, this shrinking space always contains the globally optimal solution, and its volume decreases as the number of one-bit samples grows. We discuss our approach to finding the desired matrix $\mathbf{X}^{\star}$ below.
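To make the construction of the one-bit QCS polyhedron (12)–(13) concrete, the sketch below (our own illustrative code, reusing the `one_bit_measurements` helper from the previous sketch) builds $\mathbf{V}$, synthesizes one-bit data, and assembles $\mathbf{P}$ together with the right-hand side; the true lifted matrix then satisfies every inequality:

```python
import numpy as np

def build_one_bit_qcs_polyhedron(A_list, X, m1, sigma, rng=None):
    """Assemble P and b such that P vec(X) >= b is the polyhedron (13).

    A_list : list of m real sensing matrices A_j of size n x n
    X      : lifted matrix (known here only to synthesize the one-bit data)
    """
    rng = np.random.default_rng(rng)
    # rows of V are vec(A_j^T)^T, so that V @ vec(X) = [Tr(A_j X)]_j
    V = np.stack([A.T.reshape(-1) for A in A_list])            # shape (m, n^2)
    y = V @ X.reshape(-1)                                       # noiseless measurements
    R, Gamma = one_bit_measurements(y, m1, sigma, rng)          # signs and thresholds
    # experiment-major stacking: P = [diag(r^(1)) V; ...; diag(r^(m1)) V]
    P = np.vstack([R[:, l, None] * V for l in range(m1)])
    b = (R * Gamma).T.reshape(-1)                               # matching stacking of r^(l) * tau^(l)
    return P, b

# toy check with a real rank-one X = x x^T
rng = np.random.default_rng(2)
n, m = 6, 200
x = rng.standard_normal(n)
X = np.outer(x, x)
A_list = [rng.standard_normal((n, n)) for _ in range(m)]
P, b = build_one_bit_qcs_polyhedron(A_list, X, m1=5, sigma=1.0, rng=rng)
print(np.all(P @ X.reshape(-1) >= b))   # the true X lies in the polyhedron
```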
## IV Proposed Algorithm
To recover the desired signal within the one-bit QCS polyhedron, we use an accelerated variant of the randomized Kaczmarz algorithm (RKA). Many variants of
this iterative method and their convergence rates have been proposed and
studied in recent decades for both consistent and inconsistent systems,
including the original randomized Kaczmarz algorithm, the randomized block
Kaczmarz algorithm and most recently, the sampling Kaczmarz-Motzkin (SKM)
method [24, 25, 27]. The block-structured nature of the one-bit QCS matrix motivates the development of our Block SKM method, designed specifically to handle block-structured linear feasibility problems efficiently. Further, the proposed algorithm is backed by theoretical guarantees.
### IV-A SKM Method
The SKM is a _subconjugate gradient method_ to solve overdetermined linear
systems of inequalities, i.e., $\mathbf{B}\mathbf{x}\preceq\mathbf{b}$, where $\mathbf{B}$ is an $m_{1}m\times n$ matrix. These methods immediately turn such an inequality into an equality of the following form:
$\left(\mathbf{B}\mathbf{x}-\mathbf{b}\right)^{+}=0,$ (14)
and then approach the solution by the same process as used for systems of
equations. Given a sample index set $\mathcal{J}$, without loss of generality,
rewrite (14) as the polyhedron
$\displaystyle\begin{cases}\mathbf{c}_{j}\mathbf{x}\leq
b_{j}&\left(j\in\mathcal{I}_{\leq}\right),\\\
\mathbf{c}_{j}\mathbf{x}=b_{j}&\left(j\in\mathcal{I}_{=}\right),\end{cases}$
(15)
where the disjoint index sets $\mathcal{I}_{\leq}$ and $\mathcal{I}_{=}$
partition $\mathcal{J}$ and $\\{\mathbf{c}_{j}\\}$ are the rows of
$\mathbf{B}$. The projection coefficient $\beta_{i}$ of the SKM at the $i$-th iteration is [25, 34, 35]
$\beta_{i}=\begin{cases}\left(\mathbf{c}_{j}\mathbf{x}_{i}-b_{j}\right)^{+}&\left(j\in\mathcal{I}_{\leq}\right),\\\
\mathbf{c}_{j}\mathbf{x}_{i}-b_{j}&\left(j\in\mathcal{I}_{=}\right).\end{cases}$
(16)
The central contribution of SKM lies in its innovative way of projection plane
selection. The hyperplane selection is done as follows. At iteration $i$ the
SKM algorithm selects a collection of $\gamma$ rows (denoted by the set $\mathcal{T}_{i}$), drawn uniformly at random out of the $m_{1}m$ rows of the constraint matrix $\mathbf{B}$. Then, out of these $\gamma$ rows, the row with the maximum positive residual is selected. Finally, the solution is updated as [27, 36]:
$\mathbf{x}_{i+1}=\mathbf{x}_{i}-\lambda_{i}\frac{\beta_{i}}{\|\mathbf{c}_{{}_{j^{\star}_{i}}}\|^{2}_{2}}\mathbf{c}^{\mathrm{H}}_{{}_{j^{\star}_{i}}}$,
where the index $j^{\star}_{i}$ is chosen as the _Motzkin sampling_ , i.e.,
$j^{\star}_{i}=\operatorname{argmax}\leavevmode\nobreak\
\left\\{\left(\mathbf{c}_{j}\mathbf{x}_{i}-b_{j}\right)^{+}\right\\},\leavevmode\nobreak\
j\in\mathcal{T}_{i}$ at iteration $i$, and $\lambda_{i}$ is a relaxation
parameter which for consistent systems must satisfy,
$0\leq\liminf_{i\to\infty}\lambda_{i}\leq\limsup_{i\to\infty}\lambda_{i}<2$,
to ensure convergence [24]. The convergence bound for SKM is given by
$\mathbb{E}\left\\{\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|^{2}_{2}\right\\}\leq\left(1-\frac{2\lambda_{i}-\lambda^{2}_{i}}{\kappa^{2}\left(\mathbf{B}\right)}\right)^{i}\leavevmode\nobreak\
\left\|\mathbf{x}_{0}-\mathbf{x}_{\star}\right\|^{2}_{2},$ (17)
with
$\kappa\left(\mathbf{B}\right)=\|\mathbf{B}\|_{\mathrm{F}}\|\mathbf{B}^{\dagger}\|_{2}$
denoting the scaled condition number, and $\mathbf{x}_{\star}$ is the optimal
solution.
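A minimal implementation sketch of the SKM iteration described above for the inequality system $\mathbf{B}\mathbf{x}\preceq\mathbf{b}$ (our own illustrative Python, not the reference code of [25, 27]; the equality index set $\mathcal{I}_{=}$ is omitted for brevity) is:

```python
import numpy as np

def skm(B, b, gamma=20, lam=1.0, n_iter=2000, x0=None, rng=None):
    """Sampling Kaczmarz-Motzkin sketch for the linear feasibility problem B x <= b."""
    rng = np.random.default_rng(rng)
    m, n = B.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        T = rng.choice(m, size=gamma, replace=False)        # Motzkin sampling set
        residuals = np.maximum(B[T] @ x - b[T], 0.0)         # positive residuals over the sample
        j = T[np.argmax(residuals)]                          # most violated sampled row
        beta = max(B[j] @ x - b[j], 0.0)                     # projection coefficient (16)
        if beta > 0:
            x = x - lam * beta / np.dot(B[j], B[j]) * B[j]   # hyperplane projection update
    return x
```

For a feasibility problem the iterate stops changing once all sampled constraints are satisfied; in practice one would also monitor the maximum residual over all rows as a stopping criterion.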
### IV-B Block SKM Algorithm
The matrix $\mathbf{P}$ in (13) has a block structure with the following
formulation:
$\mathbf{P}=\left[\begin{array}[]{c|c|c}\mathbf{V}^{\top}\bm{\Omega}^{(1)}&\cdots&\mathbf{V}^{\top}\bm{\Omega}^{(m_{1})}\end{array}\right]^{\top},\quad\mathbf{P}\in\mathbb{R}^{m_{1}m\times n}.$ (18)
Therefore, it is useful to investigate the accelerated block-based RKA methods
to find the desired signal in (13) for further computational efficiency
enhancement. Our proposed algorithm, the _Block SKM_ , is described as
follows. Suppose we have a linear feasibility problem
$\mathbf{B}\mathbf{x}\preceq\mathbf{b}$ where
$\mathbf{B}=\left[\begin{array}[]{c|c|c}\mathbf{B}^{\top}_{1}&\cdots&\mathbf{B}^{\top}_{m_{1}}\end{array}\right]^{\top},$
$\mathbf{B}\in\mathbb{R}^{m_{1}m\times n}$, and
$\mathbf{b}=\left[\begin{array}[]{c|c|c}\mathbf{b}^{\top}_{1}&\cdots&\mathbf{b}^{\top}_{m_{1}}\end{array}\right]^{\top}$.
The proposed algorithm for sparse signal recovery, i.e., the Block SKM, may be summarized as follows (a minimal implementation sketch is given after the description):
1. 1.
Choose a block $\mathbf{B}_{j}$ uniformly at random with the probability
$\operatorname{Pr}\\{j=k\\}=\frac{\left\|\mathbf{B}_{k}\right\|^{2}_{\mathrm{F}}}{\|\mathbf{B}\|_{\mathrm{F}}^{2}}$.
2. 2.
Compute $\mathbf{e}=\mathbf{B}_{j}\mathbf{x}-\mathbf{b}_{j}$.
3. 3.
Let $\mathbf{e}^{\prime}$ denote the sorted version of $\mathbf{e}$ from
$e_{\text{max}}$ (the maximum element of $\mathbf{e}$) to $e_{\text{min}}$
(the minimum element of $\mathbf{e}$). This step is inspired by the idea of
the Motzkin sampling, presented in [27], to have an accelerated convergence.
4. 4.
Select the first $k^{\prime}<n$ elements of $\mathbf{e}^{\prime}$ and construct
the sub-problem
$\mathbf{B}_{j}^{\prime}\mathbf{x}\preceq\mathbf{b}_{j}^{\prime}$, where
$\mathbf{B}_{j}^{\prime}\in\mathbb{R}^{k^{\prime}\times n}$ and
$\mathbf{b}_{j}^{\prime}\in\mathbb{R}^{k^{\prime}\times 1}$. The reason for choosing $k^{\prime}<n$ is the computation of $\left(\mathbf{B}_{j}^{\prime}\mathbf{B}_{j}^{\prime\top}\right)^{-1}$ in the next step (Step $5$). For $k^{\prime}>n$, the matrix
$\mathbf{B}_{j}^{\prime}\mathbf{B}_{j}^{\prime\top}$ is rank-deficient and its
inverse is not available.
5. 5.
Compute the Moore-Penrose pseudoinverse of $\mathbf{B}_{j}^{\prime}$, i.e.,
$\mathbf{B}_{j}^{\prime\dagger}=\mathbf{B}_{j}^{\prime\top}\left(\mathbf{B}_{j}^{\prime}\mathbf{B}_{j}^{\prime\top}\right)^{-1}$.
6. 6.
Update the solution
$\mathbf{x}_{i+1}=\mathbf{x}_{i}-\lambda_{i}\mathbf{B}_{j}^{\prime\dagger}\left(\mathbf{B}_{j}^{\prime}\mathbf{x}-\mathbf{b}_{j}^{\prime}\right)^{+}$.
This update process is inspired by the randomized block Kaczmarz method [37,
26] which takes advantage of the efficient matrix-vector multiplication, thus
giving the method a significant reduction in computational cost [34].
Particularly, in the case of the one-bit QCS polyhedron,
$\mathbf{B}=-\mathbf{P}$,
$\mathbf{x}=\operatorname{vec}\left(\mathbf{X}\right)$, and
$\mathbf{b}=-\operatorname{vec}\left(\mathbf{R}\right)\odot\operatorname{vec}\left(\bm{\Gamma}\right)$.
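The steps above can be sketched in a few lines of Python; this is an illustrative implementation (names such as `block_skm` are ours), which uses a linear solve in place of the explicit inverse of $\mathbf{B}_{j}^{\prime}\mathbf{B}_{j}^{\prime\top}$ for numerical robustness:

```python
import numpy as np

def block_skm(blocks, rhs, k_prime, lam=1.0, n_iter=500, x0=None, rng=None):
    """Block SKM sketch for B x <= b, with B stacked from blocks B_1, ..., B_{m1}.

    blocks : list of (m x n) arrays B_l ;  rhs : list of length-m arrays b_l
    k_prime should satisfy k_prime < n so that B'_j B'_j^T is (generically) invertible.
    """
    rng = np.random.default_rng(rng)
    n = blocks[0].shape[1]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    # step 1: block sampling probabilities proportional to squared Frobenius norms
    norms = np.array([np.linalg.norm(Bl, 'fro') ** 2 for Bl in blocks])
    probs = norms / norms.sum()
    for _ in range(n_iter):
        j = rng.choice(len(blocks), p=probs)                 # step 1
        e = blocks[j] @ x - rhs[j]                           # step 2
        order = np.argsort(e)[::-1][:k_prime]                # steps 3-4: largest residuals first
        Bp, bp = blocks[j][order], rhs[j][order]
        residual = np.maximum(Bp @ x - bp, 0.0)              # step 6: positive part of B' x - b'
        if residual.max() > 0:
            # steps 5-6: x <- x - lam * B'^T (B' B'^T)^{-1} (B' x - b')^+
            x = x - lam * Bp.T @ np.linalg.solve(Bp @ Bp.T, residual)
    return x
```

In the one-bit QCS setting described above, the blocks are $-\bm{\Omega}^{(\ell)}\mathbf{V}$ and the corresponding right-hand sides are $-\mathbf{r}^{(\ell)}\odot\bm{\uptau}^{(\ell)}$, so that $\mathbf{B}\mathbf{x}\preceq\mathbf{b}$ matches (13).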
### IV-C Convergence Analysis
It is worth pointing out that the Block SKM algorithm can be considered to be
a special case of the more general _sketch-and-project_ method, defined
as[38]:
$\mathbf{x}_{i+1}=\underset{\mathbf{x}}{\textrm{argmin}}\leavevmode\nobreak\
\left\|\mathbf{x}-\mathbf{x}_{i}\right\|^{2}_{2}\quad\textrm{subject
to}\quad\mathbf{S}^{\top}\mathbf{B}\mathbf{x}\preceq\mathbf{S}^{\top}\mathbf{b},$
(19)
where $\mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}}$ is the sketch matrix
choosing a block uniformly at random from the main matrix as mentioned in step
$1$. The second step of the proposed algorithm follows the Motzkin sampling
where the index $j^{\star}_{i}$ is chosen in the $i$-th iteration as follows:
$j^{\star}_{i}=\underset{j}{\textrm{argmax}}\left\\{\left(\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j}\mathbf{x}_{i}-\left(\mathbf{S}^{\top}\mathbf{b}\right)_{j}\right)^{+}\right\\},$
(20)
with $(\cdot)_{j}$ denoting the $j$-th row of the matrix argument.
In the Block SKM algorithm, the sketch matrix is given by
$\mathbf{S}=\left[\begin{array}[]{c|c|c}\mathbf{0}_{k^{\prime}\times
p}&\mathbf{I}_{k^{\prime}}&\mathbf{0}_{k^{\prime}\times(m_{1}m-k^{\prime}-p)}\end{array}\right]^{\top},\leavevmode\nobreak\
\mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}},$ (21)
where $k^{\prime}$ is the block size and
$p=k^{\prime}\alpha,\leavevmode\nobreak\ \alpha\in\left\\{1,\cdots,\left\lfloor\frac{m_{1}m}{k^{\prime}}\right\rfloor\right\\}$.
Note that the literature does not offer any theoretical guarantees for the
convergence of the Block SKM with the above sketch matrix [39]. To derive our
theoretical guarantees for the algorithm used to solve the one-bit QCS, we
change the sketch matrix to the _Gaussian_ sketch matrix as follows:
$\mathbf{S}=\left[\begin{array}[]{c|c|c}\mathbf{0}_{k^{\prime}\times
p}&\mathbf{G}&\mathbf{0}_{k^{\prime}\times(m_{1}m-k^{\prime}-p)}\end{array}\right]^{\top},\leavevmode\nobreak\
\mathbf{S}\in\mathbb{R}^{m_{1}m\times k^{\prime}},$ (22)
where $\mathbf{G}$ is a $k^{\prime}\times k^{\prime}$ Gaussian matrix, whose
entries are i.i.d. following the distribution $\mathcal{N}\left(0,1\right)$.
In this framework, we are able to provide some theoretical guarantees by
taking advantage of the favorable properties of Gaussian random variables.
Assume that $\mathcal{S}$ denotes a non-empty solution set of the polyhedron
(13). Owing to the fact that
$\mathbb{E}\left\\{\left\|\mathbf{x}_{i+1}-\mathcal{S}\right\|_{2}^{2}\right\\}\leq\mathbb{E}\left\\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\\}$
[25], then we proceed to prove the convergence rate by employing
$\mathbb{E}\left\\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\\}$.
Using the fact that $\mathbf{x}_{i+1}-\mathbf{x}_{\star}$ is orthogonal to
$(\mathbf{S}^{T}\mathbf{B})_{j^{\star}_{i}}$ [39], where $j^{\star}_{i}$ is
the index chosen based on the Motzkin sampling for the $i$-th iteration, we
have the following Pythagorean relation [39, 38]:
$\displaystyle\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}=\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{\left\|\left(\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\mathbf{x}_{i}-\left(\mathbf{S}^{\top}\mathbf{b}\right)_{j^{\star}_{i}}\right)^{+}\right\|_{2}^{2}}{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}}.$
(23)
In the linear inequality system, the Kaczmarz algorithm only updates the solution when $\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}$ at the $i$-th iteration. Therefore, one can readily rewrite (23) at iteration $i$
where the condition
$\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}$
is met:
$\displaystyle\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}=\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}}.$
(24)
By taking the expectation over the error, we have
$\mathbb{E}_{\mathbf{S}}\left\\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\\}=\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\mathbb{E}_{\mathbf{S}}\left\\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}}\right\\}.$
(25)
In addition, we have that
$\displaystyle\mathbb{E}_{\mathbf{S}}\left\\{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}\right\\}=\sum_{i_{2}=1}^{n}\mathbb{E}_{\mathbf{S}}\left\\{\left(\sum_{i_{1}=1}^{m_{1}m}\mathbf{S}^{\top}_{ji_{1}}\mathbf{B}_{i_{1}i_{2}}\right)^{2}\right\\},$
(26)
or equivalently, in terms of $\mathbf{G}$ in (22),
$\displaystyle\sum_{i_{2}=1}^{n}\mathbb{E}_{\mathbf{G}}\left\\{\left(\sum_{i_{1}=1}^{k^{\prime}}\mathbf{G}^{\top}_{ji_{1}}\mathbf{B}_{i_{1}i_{2}}\right)^{2}\right\\}=\sum_{i_{2}=1}^{n}\sum_{i_{1}=1}^{k^{\prime}}\mathbb{E}_{\mathbf{G}}\left\\{\left(\mathbf{G}^{\top}_{ji_{1}}\right)^{2}\right\\}\mathbf{B}^{2}_{i_{1}i_{2}},$ (27)
with
$\mathbb{E}_{\mathbf{G}}\left\\{\left(\mathbf{G}^{\top}_{ji_{1}}\right)^{2}\right\\}=1$,
which helps to simplify (27) as
$\displaystyle\sum_{i_{2}=1}^{n}\sum_{i_{1}=1}^{k^{\prime}}\mathbf{B}^{2}_{i_{1}i_{2}}=\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2},$
(28)
where $\hat{\mathbf{B}}$ is the $k^{\prime}\times n$ submatrix of
$\mathbf{B}$. Due to the fact that the second term in the right-hand side of
(25) is an expectation over the convex function $f(x,y)=x^{2}/y$, we can apply
Jensen’s inequality as follows:
$\displaystyle\mathbb{E}_{\mathbf{S}}\left\\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}}\right\\}\geq\frac{\left(\mathbb{E}_{\mathbf{S}}\left\\{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}\right\\}\right)^{2}}{\mathbb{E}_{\mathbf{S}}\left\\{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}\right\\}}.$
(29)
Since
$\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{\star}\preceq\mathbf{S}^{\top}\mathbf{b}$
and
$\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}\succeq\mathbf{S}^{\top}\mathbf{b}$,
one can conclude
$\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}\geq\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{\star}\right\|_{\infty}.$
(30)
It follows from the above that
$\displaystyle\mathbb{E}_{\mathbf{S}}\left\\{\frac{\left\|\mathbf{S}^{\top}\mathbf{B}\mathbf{x}_{i}-\mathbf{S}^{\top}\mathbf{b}\right\|_{\infty}^{2}}{\left\|\left(\mathbf{S}^{\top}\mathbf{B}\right)_{j^{\star}_{i}}\right\|_{2}^{2}}\right\\}\geq\frac{\left(\mathbb{E}_{\mathbf{S}}\left\\{\left\|\mathbf{S}^{\top}\mathbf{B}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_{\infty}\right\\}\right)^{2}}{\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2}}.$
(31)
We can additionally take advantage of the estimate for the maximum of
independent normal random variables[39],
$\displaystyle\mathbb{E}_{\mathbf{S}}\left\\{\left\|\mathbf{S}^{\top}\mathbf{B}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_{\infty}\right\\}$
$\displaystyle=\mathbb{E}_{\mathbf{S}}\left\\{\max_{t\in[k^{\prime}]}\left\langle\mathbf{s}_{t},\mathbf{B}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\rangle\right\\}$
(32)
$\displaystyle=\mathbb{E}_{\mathbf{G}}\left\\{\max_{t\in[k^{\prime}]}\left\langle\mathbf{s}_{t},\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\rangle\right\\}$
$\displaystyle\geq
c\|\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\|_{2}\sqrt{\log
k^{\prime}},$
where $\mathbf{s}_{t}$ is the $t$-th column of $\mathbf{S}$,
$[k^{\prime}]=\left\\{1,2,\cdots,k^{\prime}\right\\}$, and $c$ is a positive
value. By plugging the inequality (32) into (25), and using the inequality
$\left\|\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\right\|_{2}^{2}\geq\sigma^{2}_{\textrm{min}}\left(\hat{\mathbf{B}}\right)\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|^{2}_{2},$
(33)
where $\sigma_{\textrm{min}}\left(\hat{\mathbf{B}}\right)$ denotes the minimum singular value of $\hat{\mathbf{B}}$, we obtain
$\displaystyle\mathbb{E}\left\\{\left\|\mathbf{x}_{i+1}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\\}$
$\displaystyle\leq\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{c\|\hat{\mathbf{B}}\left(\mathbf{x}_{i}-\mathbf{x}_{\star}\right)\|_{2}^{2}\log k^{\prime}}{\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2}}$ (34)
$\displaystyle\leq\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}-\frac{c\sigma_{\min}^{2}(\hat{\mathbf{B}})\log
k^{\prime}}{\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2}}\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2}$
$\displaystyle\leq\left(1-\frac{c\sigma_{\min}^{2}(\hat{\mathbf{B}})\log
k^{\prime}}{\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2}}\right)\left\|\mathbf{x}_{i}-\mathbf{x}_{\star}\right\|_{2}^{2},$
which can be recast as the following _convergence rate_ , after $K$ updates:
$\displaystyle\mathbb{E}\left\\{\left\|\mathbf{x}_{K}-\mathbf{x}_{\star}\right\|_{2}^{2}\right\\}\leq\left(1-\frac{c\sigma_{\min}^{2}(\hat{\mathbf{B}})\log k^{\prime}}{\|\hat{\mathbf{B}}\|_{\mathrm{F}}^{2}}\right)^{K}\left\|\mathbf{x}_{0}-\mathbf{x}_{\star}\right\|_{2}^{2}.$
(35)
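As a sanity check of the Gaussian maximum estimate used in (32) (a purely numerical illustration, not part of the derivation), one can verify the $\sqrt{\log k^{\prime}}$ scaling of $\mathbb{E}\left\\{\max_{t\in[k^{\prime}]}\left\langle\mathbf{s}_{t},\mathbf{v}\right\rangle\right\\}$ for i.i.d. Gaussian directions:

```python
import numpy as np

rng = np.random.default_rng(0)
v_norm = 3.7   # stands in for ||B_hat (x_i - x_star)||_2

for k_prime in (4, 16, 64, 256):
    # each inner product <s_t, v> with a standard Gaussian s_t is N(0, ||v||^2)
    samples = v_norm * rng.standard_normal((20000, k_prime))
    emp_max = samples.max(axis=1).mean()
    print(k_prime, emp_max / (v_norm * np.sqrt(np.log(k_prime))))
```

The printed ratios remain of order one for all $k^{\prime}$, consistent with the existence of the constant $c$ in (32).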
Figure 1: Average NMSE for the error between the desired $\mathbf{X}^{\star}$ and its recovered version $\bar{\mathbf{X}}$ for various numbers of time-varying sampling threshold sequences $m_{1}\in\left\\{10,50,100,150\right\\}$ when the Block SKM is utilized, with (a) $\|\mathbf{x}\|_{0}=5$, (b) $\|\mathbf{x}\|_{0}=10$.
Figure 2: Average NMSE for the error between the desired $\mathbf{X}^{\star}$ and its recovered version $\bar{\mathbf{X}}$ for various numbers of time-varying sampling threshold sequences $m_{1}\in\left\\{10,50,100,150\right\\}$ when the Block SKM is utilized in the full-rank sensing matrix scenario, with (a) $\|\mathbf{x}\|_{0}=5$, (b) $\|\mathbf{x}\|_{0}=10$.
## V Numerical Results
In this section, at first, we numerically scrutinize the capability of the
block SKM in the one-bit QCS problem by evaluating the squared Frobenius norm
of the error between the desired matrix $\mathbf{X}^{\star}$ and its estimate
$\bar{\mathbf{X}}$, normalized by the squared Frobenius norm of the desired
matrix:
$\mathrm{NMSE}\triangleq\frac{\left\|\mathbf{X}^{\star}-\bar{\mathbf{X}}\right\|^{2}_{\mathrm{F}}}{\left\|\mathbf{X}^{\star}\right\|^{2}_{\mathrm{F}}}.$
(36)
The input signal $\mathbf{x}\in\mathbb{R}^{64}$ is considered to be a sparse signal with (i) $\|\mathbf{x}\|_{0}=5$, and (ii) $\|\mathbf{x}\|_{0}=10$. To
choose the time-varying sampling thresholds, we consider the framework
presented in [22], which relies on knowledge of the dynamic range of the
measurements $\mathbf{y}$. Assume
$\beta_{\mathbf{y}}=\left\|\mathbf{y}\right\|_{\infty}$ denotes the dynamic
range of the measurements. Then, herein we generate the time-varying sampling
thresholds as
$\left\\{\mathbf{\uptau}^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{5000}\right)\right\\}_{\ell=1}^{m_{1}}$.
Each sensing matrix is generated based on
$\mathbf{A}_{j}=\mathbf{a}_{j}\mathbf{a}^{\mathrm{H}}_{j}$, where
$\mathbf{a}_{j}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{64}\right)$. We
solve the overdetermined one-bit QCS polyhedron in (13) via the Block SKM for
the number of time-varying sampling threshold sequences
$m_{1}\in\left\\{10,50,100,150\right\\}$. Fig. 1 appears to confirm the
possibility of recovering the desired matrix $\mathbf{X}^{\star}$ in the one-
bit QCS polyhedron (13) by applying Block SKM. As expected, the performance of
the recovery will be significantly enhanced as the number of time-varying
sampling threshold sequences grows large. The reason behind this observation
is the sample abundance condition which has been initially analyzed and proved
in [15, Theorem 1] and extended to another sampling scheme in [20]. Note that
the results in Fig. 1 are averaged over $15$ experiments. To examine the
performance of the proposed algorithm for the full-rank $\mathbf{A}_{j}$ scenario, we generate a full-rank $\mathbf{A}_{j}\in\mathbb{R}^{64\times 64}$ whose entries are i.i.d. normal random variables. Similarly, we generate
time-varying sampling thresholds as
$\left\\{\mathbf{\uptau}^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{5000}\right)\right\\}_{\ell=1}^{m_{1}}$.
Fig. 2 illustrates the recovery performance of the Block SKM in this case, again showing that the recovery error decreases as the number of time-varying sampling threshold sequences grows large. Each data point in Fig. 2 is
averaged over $15$ experiments. Note that the proposed algorithm employs only low-resolution (one-bit) samples, but capitalizes on their abundance, converging to the global solution with higher precision as the number of one-bit samples increases.
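The experimental setup described above can be sketched as follows (an illustrative, down-scaled script reusing the `one_bit_measurements` and `block_skm` helpers from the earlier sketches; the dimensions are reduced with respect to the paper's $n=64$, $m=5000$ setting so that it runs quickly):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, m1, k = 16, 800, 10, 3

# k-sparse signal and rank-one sensing matrices A_j = a_j a_j^T
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
X_true = np.outer(x, x)
a = rng.standard_normal((m, n))
V = np.stack([np.outer(a[j], a[j]).reshape(-1) for j in range(m)])   # A_j symmetric, so vec(A_j^T)=vec(A_j)
y = V @ X_true.reshape(-1)

# one-bit sampling, thresholds scaled by the dynamic range beta_y as in [22]
beta_y = np.abs(y).max()
R, Gamma = one_bit_measurements(y, m1, sigma=beta_y / 3, rng=rng)

# polyhedron written as B z <= b, with one block per threshold sequence (B = -P)
blocks = [-(R[:, l, None] * V) for l in range(m1)]
rhs = [-(R[:, l] * Gamma[:, l]) for l in range(m1)]
z = block_skm(blocks, rhs, k_prime=32, n_iter=500, rng=rng)

X_hat = z.reshape(n, n)
nmse = np.linalg.norm(X_true - X_hat, 'fro') ** 2 / np.linalg.norm(X_true, 'fro') ** 2
print(nmse)   # NMSE of (36) for this down-scaled run
```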
Figure 3: Comparing the recovery performance of the proposed Kaczmarz-based algorithm, namely the Block SKM, with that of SKM and RKA in terms of NMSE for a linear system of inequalities.
Table I: Comparing CPU times and $\operatorname{NMSE}$ of Block SKM and GESPAR.
Algorithm | $m^{\star}$ | CPU time (s) | $\operatorname{NMSE}$
---|---|---|---
Block SKM (one-bit) | $5000$ | $0.0026$ | $3.1072\times 10^{-7}$
GESPAR | $128$ | $0.0041$ | $2.4382\times 10^{-5}$
Moreover, we numerically compare the RKA[25], SKM[27], and our proposed Block
SKM in linear systems of inequalities. We apply one-bit sampling to a system
of linear equalities $\mathbf{B}\mathbf{x}=\mathbf{y}$, resulting in the
creation of its corresponding system of linear inequalities as described in
(9). Herein, we consider $\mathbf{B}\in\mathbb{R}^{100\times 10}$,
$\mathbf{x}\in\mathbb{R}^{10}$, and $\mathbf{y}\in\mathbb{R}^{100}$. Each row
of $\mathbf{B}$ is generated as
$\mathbf{b}_{j}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{10}\right)$. Also,
the desired signal $\mathbf{x}$ is generated as
$\mathbf{x}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}_{10}\right)$.
Accordingly, we generate time-varying sampling thresholds as
$\left\\{\mathbf{\uptau}^{(\ell)}\sim\mathcal{N}\left(\mathbf{0},\frac{\beta_{\mathbf{y}}^{2}}{9}\mathbf{I}_{100}\right)\right\\}_{\ell=1}^{m_{1}}$
for $m_{1}=40$. The performance of the RKA, SKM, and Block SKM is illustrated
in Fig. 3. The results show that the Block SKM outperforms the other two
approaches, delivering a faster recovery and higher accuracy in the recovery
of the desired signal $\mathbf{x}$. The normalized mean square error for the
signal is defined as
$\operatorname{NMSE}\triangleq\frac{\left\|\mathbf{x}_{\star}-\bar{\mathbf{x}}\right\|_{2}^{2}}{\left\|\mathbf{x}_{\star}\right\|_{2}^{2}}$,
where $\mathbf{x}_{\star}$ and $\bar{\mathbf{x}}$ denote the true discretized
signal and its recovered version, respectively. The NMSE results in Fig. 3 are
averaged over $15$ experiments.
To further investigate the efficacy of the proposed algorithm in QCS, we
compare our proposed approach with the well-known GESPAR approach with the
initialization algorithm proposed in [3] in terms of NMSE and CPU time. As
presented in Table I, Block SKM outperforms GESPAR in terms of both NMSE and
CPU time. The results are obtained for $\mathbf{x}\in\mathbb{R}^{64}$ when the optimal number of samples is utilized, where $m^{\star}=2n=128$ (high-resolution samples) and $m^{\star}=5000$ (one-bit samples) are considered for the high-resolution method and the one-bit QCS, respectively. Herein, the
optimality of sample sizes means that the number of samples utilized by
algorithms leads to their best performance (up to global phase), i.e.
satisfying the criterion
$\left\|\mathbf{x}_{\star}-\bar{\mathbf{x}}\right\|_{2}^{2}\leq 5\times
$10^{-5}\left\|\mathbf{x}_{\star}\right\|_{2}^{2}$. By this comparison, we remove the burden of a large number of samples from GESPAR, so as to fairly compare its best achievable performance under incomplete measurements with that of the Block SKM. Note that the signal of interest is obtained from $\bar{\mathbf{X}}=\bar{\mathbf{x}}\bar{\mathbf{x}}^{\mathrm{H}}$, where $\bar{\mathbf{x}}$ is taken as the leading eigenvector of the recovered matrix.
## VI Conclusion
We propose taking advantage of the abundant number of samples available in
one-bit sampling with time-varying thresholds to efficiently and globally
solve the quadratic compressed sensing problem. In particular, a state-of-the-art randomized Kaczmarz algorithm is proposed to find the desired signal inside the emerging confined feasible region, named the one-bit polyhedron,
with an enhanced convergence rate. The numerical results showcased the
effectiveness of the proposed approaches for the quadratic compressed sensing
problem.
## References
* [1] A. Beck and Y. Eldar, “Sparsity constrained nonlinear optimization: Optimality conditions and algorithms,” _SIAM Journal on Optimization_ , vol. 23, no. 3, pp. 1480–1509, 2013.
* [2] Y. Shechtman, Y. Eldar, A. Szameit, and M. Segev, “Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing,” _Optics express_ , vol. 19, no. 16, pp. 14 807–14 822, 2011.
* [3] Y. Shechtman, A. Beck, and Y. Eldar, “GESPAR: Efficient phase retrieval of sparse signals,” _IEEE Transactions on Signal Processing_ , vol. 62, no. 4, pp. 928–938, 2014.
* [4] K. Jaganathan, Y. Eldar, and B. Hassibi, “Phase retrieval: An overview of recent developments,” _Optical Compressive Imaging_ , pp. 279–312, 2016.
* [5] K. Jaganathan, S. Oymak, and B. Hassibi, “Sparse phase retrieval: Convex algorithms and limitations,” in _2013 IEEE International Symposium on Information Theory_. IEEE, 2013, pp. 1022–1026.
* [6] E. J. Candes, T. Strohmer, and V. Voroninski, “PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming,” _Communications on Pure and Applied Mathematics_ , vol. 66, no. 8, pp. 1241–1274, 2013.
* [7] E. Candès and X. Li, “Solving quadratic equations via PhaseLift when there are about as many equations as unknowns,” _Foundations of Computational Mathematics_ , vol. 14, no. 5, pp. 1017–1026, 2014.
* [8] G. Wang, G. Giannakis, and Y. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” _IEEE Transactions on Information Theory_ , vol. 64, no. 2, pp. 773–794, 2017.
* [9] T. Bendory, Y. Eldar, and N. Boumal, “Non-convex phase retrieval from STFT measurements,” _IEEE Transactions on Information Theory_ , vol. 64, no. 1, pp. 467–484, 2017.
* [10] E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: Theory and algorithms,” _IEEE Transactions on Information Theory_ , vol. 61, no. 4, pp. 1985–2007, Apr 2015.
* [11] I. Waldspurger, “Phase retrieval for wavelet transforms,” _IEEE Transactions on Information Theory_ , vol. 63, no. 5, pp. 2993–3009, 2017.
* [12] A. Eamaz, F. Yeganegi, and M. Soltanalian, “Modified arcsine law for one-bit sampled stationary signals with time-varying thresholds,” in _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2021, pp. 5459–5463.
* [13] P. Boufounos and R. Baraniuk, “1-bit compressive sensing,” in _2008 42nd Annual Conference on Information Sciences and Systems_. IEEE, 2008, pp. 16–21.
* [14] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” _SIAM review_ , vol. 57, no. 2, pp. 225–251, 2015.
* [15] A. Eamaz, F. Yeganegi, and M. Soltanalian, “One-bit phase retrieval: More samples means less complexity?” _IEEE Transactions on Signal Processing_ , vol. 70, pp. 4618–4632, 2022.
* [16] A. Mezghani and A. L. Swindlehurst, “Blind estimation of sparse broadband massive MIMO channels with ideal and one-bit ADCs,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 11, pp. 2972–2983, 2018.
* [17] L. Jacques, J. Laska, P. Boufounos, and R. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” _IEEE Transactions on Information Theory_ , vol. 59, no. 4, pp. 2082–2102, 2013.
* [18] P. Boufounos, L. Jacques, F. Krahmer, and R. Saab, “Quantization and compressive sensing,” in _Compressed Sensing and its Applications: MATHEON Workshop 2013_. Springer, 2015, pp. 193–237.
* [19] A. Eamaz, F. Yeganegi, and M. Soltanalian, “Covariance recovery for one-bit sampled stationary signals with time-varying sampling thresholds,” _Signal Processing_ , vol. 206, p. 108899, 2023.
* [20] A. Eamaz, K. V. Mishra, F. Yeganegi, and M. Soltanalian, “UNO: Unlimited sampling meets one-bit quantization,” _arXiv preprint arXiv:2301.10155_ , 2022.
* [21] C. Xu and L. Jacques, “Quantized compressive sensing with RIP matrices: The benefit of dithering,” _Information and Inference: A Journal of the IMA_ , vol. 9, no. 3, pp. 543–586, 2020.
* [22] J. N. Laska, Z. Wen, W. Yin, and R. G. Baraniuk, “Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements,” _IEEE Transactions on Signal Processing_ , vol. 59, no. 11, pp. 5289–5301, 2011.
* [23] S. Kaczmarz, “Angenäherte auflösung von systemen linearer gleichungen (english translation by Jason Stockmann): Bulletin international de l’académie polonaise des sciences et des lettres,” 1937.
* [24] T. Strohmer and R. Vershynin, “A randomized Kaczmarz algorithm with exponential convergence,” _Journal of Fourier Analysis and Applications_ , vol. 15, no. 2, pp. 262–278, 2009.
* [25] D. Leventhal and A. S. Lewis, “Randomized methods for linear constraints: convergence rates and conditioning,” _Mathematics of Operations Research_ , vol. 35, no. 3, pp. 641–654, 2010.
* [26] D. Needell and J. A. Tropp, “Paved with good intentions: Analysis of a randomized block Kaczmarz method,” _Linear Algebra and its Applications_ , vol. 441, pp. 199–221, 2014.
* [27] J. De Loera, J. Haddock, and D. Needell, “A sampling Kaczmarz–Motzkin algorithm for linear feasibility,” _SIAM Journal on Scientific Computing_ , vol. 39, no. 5, pp. S66–S87, 2017.
* [28] Z. Q. Luo, W. K. Ma, A. M. C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” _IEEE Signal Processing Magazine_ , vol. 27, no. 3, pp. 20–34, 2010.
* [29] M. M. Naghsh, M. Soltanalian, P. Stoica, M. Modarres-Hashemi, A. De Maio, and A. Aubry, “A doppler robust design of transmit sequence and receive filter in the presence of signal-dependent interference,” _IEEE Transactions on Signal Processing_ , vol. 62, no. 4, pp. 772–785, 2013.
* [30] A. Bhandari, F. Krahmer, and R. Raskar, “On unlimited sampling and reconstruction,” _IEEE Transactions on Signal Processing_ , vol. 69, pp. 3827–3839, 2020.
* [31] A. Ameri, J. Li, and M. Soltanalian, “One-bit radar processing and estimation with time-varying sampling thresholds,” in _2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM)_. IEEE, 2018, pp. 208–212.
* [32] A. Eamaz, F. Yeganegi, and M. Soltanalian, “Covariance recovery for one-bit sampled non-stationary signals with time-varying sampling thresholds,” _IEEE Transactions on Signal Processing_ , vol. 70, pp. 5222–5236, 2022.
* [33] S. Khobahi and M. Soltanalian, “Signal recovery from 1-bit quantized noisy samples via adaptive thresholding,” in _2018 52nd Asilomar Conference on Signals, Systems, and Computers_. IEEE, 2018, pp. 1757–1761.
* [34] J. Briskman and D. Needell, “Block Kaczmarz method with inequalities,” _Journal of Mathematical Imaging and Vision_ , vol. 52, no. 3, pp. 385–396, 2015.
* [35] L. Dai, M. Soltanalian, and K. Pelckmans, “On the randomized Kaczmarz algorithm,” _IEEE Signal Processing Letters_ , vol. 21, no. 3, pp. 330–333, 2013.
* [36] M. Sarowar Morshed and M. Saiful Islam, “Sampling Kaczmarz Motzkin method for linear feasibility problems: Generalization & acceleration,” _arXiv e-prints_ , pp. arXiv–2002, 2020.
* [37] T. Elfving, “Block-iterative methods for consistent and inconsistent linear equations,” _Numerische Mathematik_ , vol. 35, no. 1, pp. 1–12, 1980.
* [38] M. Dereziński and E. Rebrova, “Sharp analysis of sketch-and-project methods via a connection to randomized singular value decomposition,” _arXiv preprint arXiv:2208.09585_ , 2022.
* [39] E. Rebrova and D. Needell, “On block gaussian sketching for the kaczmarz method,” _Numerical Algorithms_ , vol. 86, pp. 443–473, 2021.
# An illustrated tutorial on global optimization in nanophotonics
Pauline Bennet Denis Langevin Chaymae Essoual Université Clermont Auvergne,
Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
Abdourahman Khaireh-Walieh LAAS, Université de Toulouse, CNRS, Toulouse,
France Olivier Teytaud Meta AI Research Paris, France Peter Wiecha LAAS,
Université de Toulouse, CNRS, Toulouse, France Antoine Moreau Université
Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000
Clermont-Ferrand, France<EMAIL_ADDRESS>
###### Abstract
Numerical optimization for the inverse design of photonic structures is a tool
which is providing increasingly convincing results – even though the wave
nature of problems in photonics makes them particularly complex. In the
meantime, the field of global optimization is rapidly evolving but is prone to
reproducibility problems, making it harder to identify the right algorithms to
use. This paper is thought as a tutorial on global optimization for photonic
problems. We provide a general background on global optimization algorithms
and a rigorous methodology for a physicist interested in using these tools –
especially in the context of inverse design. We suggest algorithms and provide
explanations for their efficiency. We provide codes and examples as an illustration that can be run online, integrating quick simulation code and Nevergrad, a state-of-the-art benchmarking library. Finally, we show how
physical intuition can be used to discuss the results of the optimization and
to determine whether the solutions are satisfying or not.
## I Introduction
While efficient automated design methods for multilayered structures emerged in the 1970s, numerical optimization has typically been used only more recently, thanks to the increase in available computational power and the progress in simulation techniques. These developments have led to methods providing original and efficient designs for three-dimensional structures, for which no design rules exist[1]. In photonics, the most promising approaches so
far are inspired by successful methods from mechanics and are based on local
optimization algorithms[2]. However, in photonics, the wave nature of the
problem typically generates a very large number of local minima, making global
optimization algorithms valuable tools, while they are in many cases
unreasonably expensive in mechanics[3].
Numerical optimization is a domain in which significant progress has been made
in the last two decades, with enormous practical implications. Recent results
suggest that modern global optimization algorithms are able to provide us with
original and efficient photonic designs[4] that are particularly convincing as
they can be understood from a physical point of view[5]. However,
reproducibility problems have made the important results in the optimization
field harder to identify and to trust[6] – especially for researchers in other
communities.
The aim of this paper is to serve as a tutorial for photonics specialists who
are interested in using modern numerical optimization tools. We provide
insights, practical tips, and guidance to help researchers navigate the
challenges and pitfalls associated with optimization. We demonstrate how
simulation tools and state-of-the-art optimization libraries can be easily
integrated to effectively tackle inverse design problems.
Specifically, we provide examples of multi-layer photonics problems, simulated
with the PyMoosh toolkit [7] and optimized using the Nevergrad python library
[8]. We present a comprehensive methodology that includes defining relevant
observables, choosing optimization strategies, and computing specific criteria
to assess the reliability of the obtained solutions. We offer practical
examples inspired by real-world problems involving multilayered structures.
These examples effectively illustrate our methodology and make it easy to
transpose to other situations. For easy reproducibility, the codes are
provided as an online Jupyter Notebook, hosted on Google's Colab platform[9].
In the first part, we provide essential background information on automated
design of photonic structures and optimization. We also introduce the test
cases that we use to demonstrate the concepts discussed in this paper.
The second part focuses on the seamless integration of the PyMoosh simulation
library [7] and the Nevergrad optimization library [8] using Python. We
explain how these libraries work together to define the cost function and to
enable efficient benchmarking of optimization algorithms. In the final part,
we present general guidelines for optimizing photonic structures. These
guidelines serve as a methodology to avoid common pitfalls and maximize the
potential of the optimization process.
## II General background and description of the test cases
In the fields of photonics and optimization, numerous concepts and
terminologies have been introduced over the years. As we now require tools and
concepts from both domains to optimize complex photonic structures, it is
important to present an overview of the vocabulary that has been developed and
provide clear definitions for key terms.
### II.1 Fundamentals of optimization in photonics
Figure 1: Parametrization. The process of selecting the parameters to describe
the structure plays a crucial role in determining the types of solutions that
can be obtained. (a) Less than 10 parameters are optimized. The geometry of
the structure is fully constrained. We refer to problems of such low degrees
of freedom as optimization, but not inverse design. Image from [10]. (b)
Around a hundred parameters are optimized. The geometry is constrained. We
consider problems of this or larger degrees of freedom to be inverse design
because very different patterns can emerge, not necessarily a checkerboard.
Image from [4]. (c) Around a thousand parameters are optimized in a pixel-based approach. Each pixel is filled with one material or another to create the final design. Image from [11]. (d) Tens of thousands of parameters are optimized. We call this type of image parametrization topology optimization. Image from [12].
Defining the optimization problem. To apply optimization techniques for
improving a photonic structure, the structure must be represented by a finite
set of parameters. This process of selecting the parameters to describe the
structure is known as parametrization, and it plays a crucial role in
determining the types of solutions that can be obtained, potentially
introducing biases. Figure 1 presents typical examples of problems with
increasing complexity in their parametrization.
In photonics, many problems are continuous, meaning that valid structures can
be achieved by making the parameters change continuously. If this is not the
case, the problem is said to be discrete, requiring specialized algorithms
that will not be discussed here [13, 14]. In the present paper we focus on
continuous optimization.
It is then necessary to define a cost function that quantifies the discrepancy
between the actual optical response of the structure and the desired response.
This cost function serves as a measure of how close the performance of the
structure is to the target, even if this is not a measure in a strict
mathematical sense. Finally, an optimization domain must be defined, typically
by setting bounds on the parameter values. Together, the cost function and the
optimization domain form the optimization problem. The global optimum is the
point of the optimization domain where the value of the cost function is the
lowest. While it is simple to determine whether a solution corresponds to a
minimum of the cost function, called a local minimum, it is generally
impossible to know whether such a minimum is the global optimum, i.e., the
solution with the lowest possible value of the cost function, and hence the
closest to the desired response. The surface representing the cost function as
a function of the parameters is called the cost function landscape.
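As an illustration, the sketch below shows what such a cost function and optimization domain can look like in Python. It is a minimal, self-contained example: the `reflectance` helper is a hypothetical placeholder standing in for an actual electromagnetic solver (in this paper that role is played by PyMoosh), and the target and bounds are arbitrary.

```python
import numpy as np

def reflectance(thicknesses):
    """Hypothetical placeholder for an electromagnetic solver returning the
    reflectance of a multilayer described by its layer thicknesses (in nm)."""
    return np.exp(-np.sum(thicknesses) / 1000.0)

target = 1.0  # desired optical response (here: total reflection)

def cost(thicknesses):
    """Discrepancy between the actual response and the desired one."""
    return abs(target - reflectance(thicknesses))

# Optimization domain: bounds on each of the 10 layer thicknesses (in nm).
lower_bounds = np.zeros(10)
upper_bounds = np.full(10, 200.0)

# Evaluate the cost at a random point of the optimization domain.
x = np.random.uniform(lower_bounds, upper_bounds)
print(cost(x))
```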
From optimization to inverse design. Optimization techniques have found
applications in the improvement of photonic structures. They can also be
employed in retrieving experimental parameters, as for instance commonly done
in ellipsometry.
More recently, optimization has been used in cases when the geometric model is
sufficiently unconstrained such that totally new solutions can emerge. Such an
approach is commonly called inverse design. It is difficult to determine
exactly when an optimization problem becomes an inverse design problem. For
instance, optimizing a multilayered structure consisting of 20 layers with alternating refractive index can already be considered an inverse design problem, even though the geometry is relatively constrained: the optical responses that a 20-layer multilayer can present are so diverse that very different and unexpected solutions can be proposed for different problems [15].
On the contrary, a complex 3D structure characterized by fewer than 10
parameters is typically not an inverse design problem[16] because the
structure geometry is already fully constrained and no new pattern can emerge
from the optimization process. A rule of thumb would be to consider that an
optimization problem presents the characteristics of an inverse design problem
when more than 20 parameters are used to describe the structure. As explained
below, in order to constrain the structures as little as possible, several
thousand parameters are often considered. Inverse design problems are thus
particularly complex, requiring advanced methods and increased computational
power to solve them effectively.
Local optimization. A local optimization algorithm starts with one structure
(either generated randomly within the optimization domain or given by the
user), and tries to improve it at each step by changing its parameters by a
small amount – often guided by the gradient of the cost function landscape.
Typically, the new structure is kept if the associated cost function value is better than that of the previous structure. This makes for a simple and fast optimization
technique, but has the problem of easily getting stuck in local minima of the
cost function. Several strategies are possible to avoid the problem of local
minima[17, 18, 19], but they often make the algorithms more computationally
expensive. In the end, performing multiple runs of the optimization
process with different initial structures can provide insight into the
presence of local minima, which indicates the level of difficulty of the
problem. Put simply, the greater the number of local minima, the less likely
it is for a solution found by local optimization to be the global optimum.
An attraction basin of a given solution is the region of the optimization domain from which a local algorithm will converge to that solution, wherever it starts within the region. An optimization domain can typically be divided into attraction basins.
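The following sketch illustrates a single local optimization run on a toy cost function, using SciPy's bounded L-BFGS-B (with gradients estimated by finite differences) as a stand-in for the local algorithms discussed here; the cost function and bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """Toy multimodal cost function standing in for a photonic one."""
    return float(np.sum(x**2) + 0.5 * np.sum(np.sin(5 * x) ** 2))

dim = 10
bounds = [(0.0, 200.0)] * dim

# Starting structure generated randomly within the optimization domain.
x0 = np.random.uniform(0.0, 200.0, dim)

# Local descent: the parameters are changed by small amounts, guided by the
# (finite-difference) gradient, until a local minimum is reached.
result = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
print(result.fun)
```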
Global optimization and Black Box Optimization. The search for a global
optimum is called global optimization (GO). Algorithms that optimize without
using the gradient or other side information are called Black Box
Optimization (BBO). BBO and GO are not synonymous: the former refers to the
absence of additional information (gradient or other), whereas the latter
refers to caring about global minima rather than local minima. However, there
is a link, because local methods in continuous domains frequently use
gradients. BBO algorithms are generalist in nature and can be applied to a
large variety of problems. A wide range of algorithms exist, and new ones are
continually proposed[20]. This includes genetic algorithms, mathematical
programming, Bayesian optimization, machine learning, reinforcement learning
or other BBO methods. Most of these algorithms are heuristic in nature, making
them non-deterministic. This means that two different runs, even with the same
starting points, can yield different solutions. Moreover, the performance of
an algorithm often vastly depends on the specific optimization problem at
hand[21], making it challenging to compare different algorithms. Consequently,
BBO suffers from poor reproducibility, hindering the identification of the
most efficient algorithms[22]. Combining rigorous benchmarking[23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33] with BBO libraries is the only approach able to address the reproducibility crisis in optimization and to ensure transparent and reliable results. Considerable effort has thus been devoted to designing benchmarks for BBO[34, 35, 6, 8].
Topology optimization. With the increase in available computational power, it has recently become possible to divide a given region of space into
pixels/voxels that can typically be either filled with a material or left
empty and to look for a structure presenting a desired optical response. Then,
the number of parameters is intentionally very large, to offer the algorithms
a very large number of degrees of freedom. Such an approach is called Topology
Optimization (TO). TO draws inspiration from its successful application in the
field of mechanics, where it has been widely used. Given the large number of
parameters used for representing a structure, TO usually employs gradient-
based methods. First, the problem is made continuous instead of discrete
(continuous properties are allowed for a given pixel instead of a binary
choice between filled and void) and since the gradient can be computed for a
low computational cost (through the use of the adjoint method [36]), steepest
descent algorithms seem a natural choice. This approach has been extremely
successful in mechanics, to the point that global optimization has been
considered obsolete [3].
When applied to photonics problems, this approach has at first shown
remarkable success, demonstrating that photonic structures can be miniaturized
to an unprecedented degree while maintaining high optical performance[12].
However, the physics fundamentally differ in some aspects between mechanics
and photonics. Mechanical problems can be regarded as comparatively simpler
since a continuous deformation of an object results in a continuous and nearly
monotonic deformation of its properties. The optimization process in photonics
poses greater challenges compared to mechanical cases, primarily due to the
wave nature of the problem. Photonic structures can exhibit resonant behavior,
with each resonance corresponding to a local maximum or minimum of the cost
function. This characteristic poses challenges for gradient descent methods,
which are better suited for problems with smoother landscapes of the cost
function. In photonics, two different starting points in the optimization
process most often lead to two different outcomes[14]. This is in contrast to
mechanics, where different starting points do not typically result in
significant variations in the outcomes[37, 3]. Moreover, the structures
produced by these algorithms often exhibit excessive intricacies, which can
pose challenges for fabrication and hinder their commercialization
potential[1, 38].
Global optimization for photonics. It is now evident that early attempts at
using genetic algorithms for optimizing simple photonic structures were
unsuccessful[39]. For years, specifically designed heuristics have generally failed to produce structures convincing enough to persuade the community to embrace global numerical optimization as a design tool. Moreover, for
continuous problems, even modern global optimization algorithms often yield
unsatisfactory solutions when the parameter space has a high dimensionality,
as in TO. The first successes of gradient-based TO may thus suggest that
global optimization, as in mechanics, cannot compete with TO on inverse design
problems.
However, in our experience with continuous problems and modern optimization
algorithms, we have found that keeping the number of parameters relatively
low, typically below 100, usually suffices to give BBO algorithms enough
degrees of freedom to yield innovative and convincing results[4]. This
approach requires limiting the number of degrees of freedom, a strategy also
referred to as Parametric Optimization (PO). It becomes however challenging to
make a fair comparison between BBO and TO, as they operate in very different
regimes.
In addition, global optimization algorithms may be valuable in problems that
are typically considered in TO, due to the discretization requirement. Many
topology optimization problems are inherently discrete, and the use of
continuous parameters is primarily for leveraging gradient-based methods. The
variety of the results obtained using continuous parameters[14] suggests that making the problem continuous also makes it more difficult, for example because local optimization algorithms get stuck in minima with intermediate values of the refractive index. In such cases, discrete global optimization algorithms may actually offer advantages, and recent results seem to indicate that such methods are able to yield efficient solutions in a relatively consistent way, even spontaneously generating welcome features like symmetry [13] and the disappearance of intermediate refractive index values. Once
again, the challenge is to identify the most suitable algorithm for a given
problem, which can only be done with comprehensive algorithm libraries. In the
present paper, we focus on PO, due to its advantages: suggested designs are
frequently smooth and possible to manufacture, and the dimensionality
typically remains moderate (dozens to hundreds).
In our opinion, global optimization algorithms are too often overlooked due to
the challenges posed by high-dimensional parameter spaces. We underline that
most often, finding the right algorithm makes a tremendous difference and
that, given the inherent complexity of optimization, a rigorous methodology
must be applied to achieve satisfactory solutions. While these methods may be
computationally demanding, the advent of parallelism has made many global
algorithms (which are usually intrinsically parallel) significantly more
efficient, making them increasingly relevant.
### II.2 Typical photonics test cases
We have chosen three test cases that are typical of problems that can be
encountered in photonics, on which we applied the methods we recommend. A
Jupyter Notebook[9] hosted on the Colab platform is provided in which we show
how cost functions can be easily defined in the context of multilayered
structures using the PyMoosh module[40] and how all the algorithms implemented
in the comprehensive Nevergrad library[8] can be subsequently tested. These
three test cases are: a dielectric mirror optimized for high reflectivity, an
ellipsometric problem, and a photovoltaic cell optimized for absorption inside
the active material.
High reflectivity problem. The cost function is defined as $1-R$ where $R$
represents the reflectance at the working wavelength of 600 nm of a
multilayered structure with alternating refractive indexes ($n_{1}=1.4$ and
$n_{2}=1.8$, starting with $n_{2}$). Therefore, the optimization algorithms
will aim to find highly reflective structures. As represented in Fig. 2(b) we
considered three sub-cases (i) a structure with only 10 layers (minibragg),
which is representative of low dimensionality problems in photonics (ii) a
structure with 20 layers (bragg), a number we feel marks the beginning of the
domain of inverse design problems and (iii) a structure with 40 layers
(bigbragg), where some algorithms may face challenges. The starting points of
each optimization are chosen randomly.
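For readers who want to see what such a cost function looks like in code, here is a self-contained sketch of the $1-R$ cost using a standard normal-incidence characteristic-matrix (transfer-matrix) computation instead of calling PyMoosh; taking air as both superstrate and substrate is our assumption and may differ from the exact setup of the notebook[9].

```python
import numpy as np

def bragg_cost(thicknesses, wl=600.0, n_in=1.0, n_out=1.0):
    """1 - R at the working wavelength for a stack of alternating indices
    (1.8 / 1.4, starting with 1.8), computed with the characteristic-matrix
    method at normal incidence. Air superstrate/substrate is assumed."""
    indices = [1.8 if i % 2 == 0 else 1.4 for i in range(len(thicknesses))]
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / wl
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    return 1.0 - abs(r) ** 2

# A 10-layer quarter-wave stack at 600 nm is already quite reflective...
quarter_wave = [600.0 / (4 * n) for n in (1.8, 1.4) * 5]
print(bragg_cost(quarter_wave))
# ...while a random 10-layer structure (a typical starting point) usually is not.
print(bragg_cost(np.random.uniform(0, 600.0 / (4 * 1.4), 10)))
```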
Figure 2: Illustration of the Bragg mirror benchmark problem. The objective is
to maximize the reflectance of a multilayered structure composed of two
alternating materials at a wavelength of 600 nm, as illustrated in a). The
figures presented to illustrate the different problem sizes in b) are typical of an initial geometry for the optimization.
The thickness of each layer is taken between 0 and the thickness of a half-
wave layer for the lower index. When considering only a single working
wavelength, adding or removing a half-wave layer to a given layer has no
impact on the optical response (this is called an absent layer). Therefore,
considering larger thicknesses would only introduce degenerate minima. We
underline that letting the refractive index vary would lead to the same
solutions, as the optimization would converge to the highest index contrast
between two layers[4]. This can be considered as physically expected, as this
is known to be the most efficient choice to modulate the optical response of a
multilayer[41]. The Bragg mirror, a periodic solution, has been identified as
the best solution to this problem so far[4]. It is a local optimum, and it outperforms any disordered solution, suggesting that this regular, principled solution might be the global optimum – even though this is not, as of now, proven. This is why we have selected it as a test case, for which we know the (likely) global optimum, and called it “Bragg”.
Ellipsometry problem. The objective here is to find the material and the
thickness of a reference layer, knowing its reflectance properties, obtained
using spectroscopic ellipsometry. This step is required to extract the desired
information from ellipsometric measurements. For a tested layer with a given
material and thickness, we compute the ratio $e=\frac{r_{p}}{r_{s}}$ where $r_{p}$ is the reflection coefficient of the whole structure in TM (p) polarization and $r_{s}$ is the reflection coefficient in TE (s) polarization, for a wavelength range of 400 - 800 nm and for a given incidence angle of 40°. The quantity we minimize here
is the difference between the $e$ computed from the tested layer and the $e$
computed from the reference layer.
The thickness of the tested layer is taken between 30 and 250 nm, and we assume the refractive index lies between 1.1 and 3. This simple problem is illustrated in Fig. 3. Problems in ellipsometry are generally more
complicated but highly dependent on the response model assumed for the
material. Our simple dispersion-less and absorption-less test case can however
be adapted easily thanks to the architecture of PyMoosh[7].
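As a sketch, the cost function of this test case can be written as follows for a single layer on a substrate, using the classical Airy formula for the reflection coefficients; the substrate index (1.5) and the reference layer (150 nm, $n=1.8$) are arbitrary assumptions made for the example, not the values of the notebook[9].

```python
import numpy as np

N_AIR, N_SUB = 1.0, 1.5                       # substrate index: assumption
THETA0 = np.radians(40.0)                     # incidence angle
WAVELENGTHS = np.linspace(400.0, 800.0, 100)  # nm

def fresnel(n_i, n_j, cos_i, cos_j):
    """Fresnel reflection coefficients (s and p) at the i/j interface."""
    r_s = (n_i * cos_i - n_j * cos_j) / (n_i * cos_i + n_j * cos_j)
    r_p = (n_j * cos_i - n_i * cos_j) / (n_j * cos_i + n_i * cos_j)
    return r_s, r_p

def rho(thickness, n):
    """Ellipsometric ratio e = r_p / r_s for a single layer on the substrate."""
    cos0 = np.cos(THETA0)
    cos1 = np.sqrt(1 - (N_AIR * np.sin(THETA0) / n) ** 2 + 0j)
    cos2 = np.sqrt(1 - (N_AIR * np.sin(THETA0) / N_SUB) ** 2 + 0j)
    r01s, r01p = fresnel(N_AIR, n, cos0, cos1)
    r12s, r12p = fresnel(n, N_SUB, cos1, cos2)
    phase = np.exp(2j * 2 * np.pi * n * thickness * cos1 / WAVELENGTHS)
    r_s = (r01s + r12s * phase) / (1 + r01s * r12s * phase)
    r_p = (r01p + r12p * phase) / (1 + r01p * r12p * phase)
    return r_p / r_s

reference = rho(150.0, 1.8)   # data generated from the "unknown" layer

def ellipso_cost(params):
    """Difference between e of the tested layer and e of the reference layer."""
    thickness, n = params
    return float(np.sum(np.abs(rho(thickness, n) - reference)))

print(ellipso_cost([150.0, 1.8]), ellipso_cost([100.0, 2.0]))
```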
We underline that in many ways, this case mirrors the practical challenges
faced by photonics researchers. It illustrates the common situation where
researchers design structures, characterized by a small number of parameters,
and then engage in optimization to determine the upper limits of their
performance[42, 43, 44].
Figure 3: Illustration of the Ellipsometry benchmark problem. The objective is to retrieve the thickness and the material of an unknown layer, knowing its reflection coefficients in TE and TM polarization for wavelengths between 400 nm and 800 nm.
Photovoltaics problem. The objective here is to find the best possible
antireflective coating in normal incidence with a multilayer structure made of
two alternating materials (permittivities of 2 and 3 respectively) on an
amorphous hydrogenated silicon substrate (30000 nm). The quantity to be
maximized here is the short circuit current in the 375 - 750 nm range,
assuming a quantum yield equal to 1, as described in [4, 5].
The cost function $f$ is one minus the efficiency defined as the ratio between
the short circuit current $j_{sc}$ and the maximum achievable current
$j_{max}$ if all photons are converted into electron-hole pairs :
$f=1-\frac{j_{sc}}{j_{max}}$ (1)
with
$j_{sc}=\int A(\lambda)\,\frac{dI}{d\lambda}\,\frac{e\,\lambda}{h\,c}\,\mathrm{d}\lambda,$ (2)
where $A(\lambda)$ is the absorbance of the active layer, e, h and c are
respectively the elementary charge, the Planck constant and the speed of light
in vacuum and the spectral density of the illumination $\frac{dI}{d\lambda}$
is given by the solar spectrum[45].
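A minimal sketch of how Eqs. (1) and (2) can be evaluated numerically is given below; the solar spectrum of Ref. [45] is replaced here by a crude black-body stand-in, which is an assumption made only to keep the example self-contained.

```python
import numpy as np
from scipy.constants import e, h, c, k

wl = np.linspace(375e-9, 750e-9, 200)   # wavelength grid (m)

# Stand-in for the solar spectral density dI/dlambda of Ref. [45]:
# a black body at 5778 K, rescaled (crude assumption for illustration only).
dI_dl = wl**-5 / (np.exp(h * c / (wl * k * 5778.0)) - 1.0)
dI_dl *= 1000.0 / np.trapz(dI_dl, wl)

def efficiency_cost(absorbance):
    """Cost f = 1 - j_sc / j_max (Eqs. 1-2) for an absorbance given on wl."""
    integrand = dI_dl * e * wl / (h * c)
    j_sc = np.trapz(absorbance * integrand, wl)
    j_max = np.trapz(integrand, wl)       # every photon converted (A = 1)
    return 1.0 - j_sc / j_max

print(efficiency_cost(np.full_like(wl, 0.8)))   # flat 80% absorbance -> 0.2
```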
The problem is illustrated in Fig. 4. As for the high reflectivity problem we
considered three sub-cases (i) a structure with only 10 layers
(photovoltaics), (ii) a structure with 20 layers (bigphotovoltaics), and (iii)
a structure with 32 layers (hugephotovoltaics).
Figure 4: Illustration of the Photovoltaic benchmark problem. The objective is to maximize the absorption in the silicon layer, using an antireflective coating composed of a multilayered structure made of two alternating materials.
## III Basic tools for optimization: algorithms and observables
As discussed before, many algorithms and methods are available to perform
optimization and eventually inverse design. It is therefore necessary to
define some observables and basic tools to compare the performances of
algorithms in a reliable way. This section first presents some well-known categories of algorithms. It is shown how these algorithms can all be run through the Nevergrad platform[8]. Then we define relevant observables, i.e. quantities that make it possible to discuss the performances of algorithms and their results. All the discussions in this section are illustrated by results of the
Jupyter Notebook provided in the supplementary material[9].
### III.1 Algorithms categories
Many BBO platforms exist, including CMA (Covariance Matrix Adaptation)[46], AX (Adaptive eXperiments)[47], HyperOpt [48], SMAC (Sequential Model-based Algorithm Configuration)[49], NLOPT[50] and Nevergrad[8]: Nevergrad is a free Python library which wraps all of these and more. We present in this
section the algorithms from Nevergrad used in our Jupyter Notebook
experiments[9], and organize them based on their respective categories.
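As an illustration of how the rest of this section can be reproduced, here is a minimal sketch of a Nevergrad benchmarking loop on a toy cost function; the algorithm names are the ones used in this paper and are looked up in Nevergrad's optimizer registry (the exact list available may depend on the installed version).

```python
import numpy as np
import nevergrad as ng

def cost(x):
    """Toy cost function standing in for a PyMoosh-based one."""
    return float(np.sum((x - 42.0) ** 2))

dim, budget = 20, 2000

# Algorithms discussed in this section, looked up by name in the registry.
for name in ["DE", "QODE", "CMA", "PSO"]:
    param = ng.p.Array(shape=(dim,)).set_bounds(0.0, 200.0)  # optimization domain
    optimizer = ng.optimizers.registry[name](parametrization=param, budget=budget)
    recommendation = optimizer.minimize(cost)
    print(name, cost(recommendation.value))
```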
Mathematical programming. Methods available for gradient-based optimization
have been adapted to the black-box setting. For example, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) [51] method can be applied, with derivatives computed by finite differences. Note that BFGS and LBFGS based on finite differences are parallel: the $d+1$ evaluations needed for evaluating the gradient by finite differences can be done in parallel. We also include GradBFGS, which uses the exact gradient without having to “pay” the cost of $d+1$ evaluations per iteration: GradBFGS is therefore not directly comparable, in the sense that it uses more information than other methods and is not as parallel as the others.
Genetic and Evolutionary Algorithms. One of the most well-known families of algorithms is that of evolutionary and/or genetic algorithms. In an evolutionary
algorithm, each step of the process creates and evaluates the cost function of
a “population” of structures, preserving the “individuals” (i.e., the
structures) that are better than those belonging to the population of the
previous step (also called the previous “generation”). The different genetic
algorithms have different ways of creating the new generation of structures.
Most include steps such as mutations and crossovers between structure
information.
We underline that the historical algorithms, in which, for instance, the binary coding of the parameters is treated as a genetic code, are considered obsolete in the optimization community because of their well-documented lack of efficiency[52, 53], and their use should be avoided. Many more efficient algorithms, inspired by these early ideas, have emerged over the years. One of the most well known and efficient overall seems
to be Differential Evolution (DE [54]), an algorithm that has many variants.
One of the most classical variants is presented in Table 1 and the different
formulas that can be used for the mutation to define new variants are
presented in Table 2.
In our experiments, we often use the quasi-opposite variant QODE [55], which
randomly draws half the initial population, and then, for each point $x$, adds
$-r\times x$ to the population with $r$ randomly uniformly drawn in $[0,1]$
(assuming that the center is $0$: otherwise $c-r\times(x-c)$ with $c$ the
center of the optimization domain). We also include QNDE (Quasi-Newton DE),
which runs QODE for half the budget and then finishes with BFGS with finite
differences: this is a memetic algorithm, which runs first a global algorithm
and then a local one. Both seem particularly efficient on our test cases.
Randomly draw the population $x_{1}$,…,$x_{30}$ in the search space.
while there is time do {$//$ This is one iteration}
  for each $x$ in the population do
    Randomly draw distinct individuals $a$, $b$, $c$ and $d$ (i.e. for us, designs) in the population.
    Compute $y$ using one of the equations in Table 2. {$//$ Mutation operator}
    {$//$ Crossover operator: loop below.}
    Randomly draw an index $R$ in $\\{1,\dots,D\\}$. {$//$ $D$ is the dimension}
    for each $i$ in $\\{1,\dots,D\\}$ do
      if $i=R$ or with probability $CR$ then
        $z_{i}\leftarrow y_{i}$
      else
        $z_{i}\leftarrow x_{i}$
      end if
    end for
    {$//$ Selection operator.}
    if $z$ has better fitness than $x$ then
      $x$ is replaced by $z$ in the population.
    end if
  end for
end while
Table 1: Pseudo-code of DE. $a,b,c,d$ are distinct, randomly drawn individuals in the population and $best$ refers to the best design so far. The $y$ individual is obtained by a mutation formula (see Tab. 2) and $CR$ is the crossover rate, indicating the percentage of the mutant $y$ used in the creation of a new individual $z$.

DE/rand/1: $y(x)=a+F_{1}(b-c)$
DE/best/1: $y(x)=best+F_{1}(a-b)$
DE/randToBest/1: $y(x)=c+F_{1}(a-b)+F_{2}(best-c)$
DE/currToBest/1: $y(x)=x+F_{1}(a-b)+F_{2}(best-x)$
DE/rand/2: $y(x)=a+F_{1}(a-b+c-d)$
DE/best/2: $y(x)=best+F_{1}(a-b+c-d)$

Table 2: Various DE formulas. $a,b,c,d$ are distinct, randomly drawn individuals in the population. We see that the mutated variant of $x$, namely $y(x)$, is in some cases, by design, independent of $x$. $best$ refers to the best design so far. Typically, $F_{1}=F_{2}=0.8$ and $CR=\frac{1}{2}$.
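To make the pseudo-code of Table 1 concrete, here is a compact, self-contained DE/rand/1 implementation with binomial crossover (population size 30, $F_{1}=0.8$, $CR=\frac{1}{2}$); it is only a sketch for illustration, not the implementation shipped with PyMoosh or Nevergrad.

```python
import numpy as np

def differential_evolution(cost, lower, upper, budget=3000,
                           pop_size=30, F=0.8, CR=0.5, seed=0):
    """Minimal DE/rand/1/bin following the pseudo-code of Table 1."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    evaluations = pop_size
    while evaluations < budget:
        for i in range(pop_size):
            # Mutation (DE/rand/1): y = a + F * (b - c) with distinct a, b, c.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            y = np.clip(a + F * (b - c), lower, upper)
            # Binomial crossover: keep at least one coordinate of the mutant.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            z = np.where(mask, y, pop[i])
            # Selection: z replaces x only if it has a better fitness.
            fz = cost(z)
            evaluations += 1
            if fz < fitness[i]:
                pop[i], fitness[i] = z, fz
            if evaluations >= budget:
                break
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

best_x, best_f = differential_evolution(lambda x: float(np.sum((x - 3.0) ** 2)),
                                        np.zeros(5), np.full(5, 10.0))
print(best_f)
```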
All the variants of DE typically perform well for a large parameter space dimension (though smaller than in topology optimization), including for quite irregular cost functions such as the ones which appear in photonics. Most winning algorithms of Large-Scale Global Optimization (LSGO) competitions use a variant of DE[27].
There are many reasons why DE can be a sensible default choice, among them its simplicity, its low computational overhead, and its good overall performance. We also point out that the rise of parallel computing resources makes DE and other global methods like PSO (see below) considerably faster: whereas parallelizing BFGS or other gradient-based methods is tricky, DE (as implemented and parametrized in the present paper and in most implementations) is simply 30 times faster with 30 cores than on a sequential machine, and can be made even more parallel with a simple adaptation of parameters (typically increasing the population size). Besides this natural parallelism advantage, many black-box optimization methods can handle arbitrary parameters (including discrete parameters, e.g. choosing a material from a list), non-differentiable stochastic manufacturing errors, multiple objective functions, and worst-case analysis over the incidence angle or over a range of wavelengths.
Moreover, DE is built on a differential formula to create new structures based
on existing ones. This means that if multiple structures in the current
population share the same characteristics (e.g., in the present framework, the
same block of layers with identical thicknesses and materials), those
characteristics will probably be preserved in the creation of the new
candidate solution. In photonics, this leads to the emergence of modular
structures with distinct blocks of layers, each having well-defined optical functions. We believe that this specific property of the optimization algorithm contributes to its remarkable efficiency in addressing photonic problems [4, 56].
We have thus natively included DE in PyMoosh[7]. We invite the interested
reader to consult our Notebook to run simple optimizations with it. The Bragg
and Photovoltaics cases show how to use PyMoosh to perform an optimization
(one run), while the Ellipsometry case illustrates how to perform several
runs. We underline also that parameters can be grouped and evolve together
during the optimization, which is especially relevant if they are physically
linked (like the thickness and refractive index of the same layer). This can
be considered as a variant of DE and is called structured optimization[56]. It
is not shown here because in the cases we study, grouping parameters does not
make sense.
Other evolutionary methods include Particle Swarm Optimization (PSO [57]): a
population of particles with random initial positions and velocities is
attracted towards good search points found so far. PSO is known as a robust
method, dedicated to rugged cost functions. SQOPSO (Special Quasi-Opposite
PSO) is a variant in Nevergrad, based on quasi-opposite sampling. Many other
search methods are based on updating a Gaussian probability distribution for
matching the best points so far[58].
Typically, Evolution Strategies (ES) iteratively update a probability distribution model that is used to maximize the likelihood of generating good candidates. A well-known example is Covariance Matrix Adaptation (CMA
[59]) which updates the entire covariance matrix of a multivariate normal
distribution (MND). The CMA algorithm samples candidates using a MND,
evaluates them, and then modifies the MND using the evaluation results.
While CMA typically performs best on the Black Box Optimization Benchmark (BBOB[26]), simpler methods such as the $(1+1)$ evolution strategy with the one-fifth rule[60] are still good in many cases due to their ability to quickly adapt the search scale.
Bayesian Optimization. In a Bayesian optimization process[61, 62], the
algorithm uses a probability distribution (a Gaussian process) for modeling
the cost function and the uncertainty, and updates it with each new structure
tested. The model provides, for each point in the search space, an estimated
probability distribution for the cost function value of that search point.
Therefore, it is possible, at each iteration, to define the Expected
Improvement (EI) for a search point $x$:
$EI(x)=\mathbb{E}_{w}\max(0,m-CostModel(x,w))$, where $m$ is the best cost so
far and $CostModel(x,w)$ is the current statistical model of the cost
function. $CostModel(x,w)$ depends on a random $w$, because this model is
statistical. The value of $w\mapsto CostModel(x,w)$ is updated each time a new
cost value is available: it is frequently a Gaussian process.
This gives an estimation of what structure to try next: we search for $x$ in
the search space such that $EI(x)$ is approximately maximal – this requires a
side optimization for each iteration, hopefully orders of magnitude faster
than the original problem.
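For a Gaussian cost model, the expected improvement defined above has a simple closed form, sketched below (minimization convention; mu and sigma are the predictive mean and standard deviation at the candidate point, and $m$ the best cost observed so far):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_cost):
    """EI(x) = E[max(0, m - Y)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (best_cost - mu) / sigma
    return (best_cost - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# A very uncertain point predicted slightly worse than the best so far can
# have a larger EI than a nearly certain point predicted marginally better.
print(expected_improvement(mu=1.1, sigma=0.5, best_cost=1.0))
print(expected_improvement(mu=0.99, sigma=0.01, best_cost=1.0))
```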
Many variants of this approach have been proposed in the literature, with many
parameters for the stochastic model. By design, Bayesian optimization (BO)
works best with problems with a small parameter space dimension and for
relatively smooth functions. Also, the probabilistic interpolation process,
which is necessary to know which structure to try next, can be expensive,
possibly becoming computationally more expensive than the cost function
itself. The authors in [63] tested Bayesian Optimization in a limited budget scenario, but other, computationally cheaper algorithms still performed well. In our code, we include two Bayesian Optimization methods for the low-
dimensional and low budget problems, namely BO[64] and PCABO[65].
Other black-box optimization methods. Other methods include pattern search methods, such as the Nelder-Mead method[66], which iteratively updates a simplex of search points. Also, optimization wizards (also known as hyper-heuristics) are defined by automatically selecting and combining existing methods[67]: NGOpt and NGOptRW are examples of wizards included in Nevergrad. Methods based on reinforcement learning (RL) have also been defined and applied to machine learning problems[68], though simple classical evolutionary methods were actually competitive [69]. Note that [70] mentions difficulties in reproducing the results of some RL papers.
### III.2 Observables
For the vast majority of continuous optimization problems, proving that a
solution provided by an algorithm is the best possible solution in the
optimization domain is essentially impossible. In addition, many optimization
algorithms are non-deterministic or sensitive to the initialization, which
means each run of the algorithm will lead to a different outcome.
As a consequence, it is possible to perform many optimization runs and still
not be able to firmly determine whether the best of these solutions is good
enough to stop looking for other, potentially better, outcomes. Yet, observing
how exactly each different run progresses towards a solution and considering
the solutions that are produced statistically, yields crucial information that
may help the user gain confidence in the best solutions produced or, on the
contrary, indicate that these solutions cannot be trusted.
Convergence curves. The first observable, which is widely used, is the
convergence curve that can be produced for each run of a given algorithm by
plotting the value of the cost function of the solution that the algorithm
recommends (typically the best solution found so far) as a function of the
number of iterations. When multiple runs are launched, convergence curves can
be either drawn for each run or an averaged convergence can be obtained. Both
essentially provide the same kind of information: whether most of the runs
have settled on a given solution (if they have almost all reached a plateau
for many iterations), or if further iterations have a chance to improve the
outcome. Fig. 5 (resp. Fig. 6) presents such an example of individual curves
for each run (resp. aggregated curves with average convergence).
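The sketch below shows how such convergence curves can be produced; a pure random search is used here as a stand-in optimizer so that the example remains self-contained.

```python
import numpy as np
import matplotlib.pyplot as plt

def cost(x):
    return float(np.sum(x**2))   # toy cost function

rng = np.random.default_rng(1)
budget, runs, dim = 500, 5, 10

plt.figure()
for run in range(runs):
    best_so_far, history = np.inf, []
    for _ in range(budget):      # random search as a stand-in optimizer
        best_so_far = min(best_so_far, cost(rng.uniform(-5, 5, dim)))
        history.append(best_so_far)
    plt.plot(history, label=f"run {run}")
plt.xlabel("budget consumed (cost function evaluations)")
plt.ylabel("best cost so far")
plt.legend()
plt.show()
```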
Budget. Not all iterations of all algorithms have the same computational cost
or evaluate the cost function the same number of times. An iteration for a
genetic or evolutionary algorithm typically corresponds to a new generation
and thus the evaluation of a few dozen of solutions. For some other
algorithms, an iteration requires the evaluation of a single new solution. A
way to compare two algorithms fairly is to discuss their performances for a
given number of evaluations of the cost function. The maximal number of
evaluations allowed is called the budget and it has to be chosen before
running the algorithms. Each run of an algorithm is then stopped when the
budget has been reached. Of course, this does not take into account the computational cost of the optimization algorithm itself, which should be discussed when it is not negligible compared to the cost of evaluating the cost function.
Consistency curve. Convergence curves for different runs make it possible to determine whether the chosen budget is large enough to reach a satisfactory convergence
for each run. However, since the different runs can produce completely
different results, the variability of the different results has to be
discussed. This can be done by plotting what we call the consistency curve
(Fig. 7). This curve is obtained by plotting the different values of the cost
function reached at the end of each run, sorted from the lowest to the
highest. This is generally done for the highest budget allowed. When such a curve displays a plateau, the same solution, or at least solutions with the same cost, have been found several times by the algorithm. A large plateau at the lowest value of the cost function thus means that the best solution has been found a large number of times. This reinforces the confidence that can be placed in that solution.
Box plots. It is not always relevant to draw the consistency curve, especially when comparing a large number of algorithms. This curve can be summarized by a box plot, as e.g. in Fig. 8. While a box plot does not reveal plateaus and thus carries less information than a consistency curve, it gives immediate and easily readable access to the statistical properties of the values of the cost function.
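Both observables are easy to produce once the final cost of each run has been stored, as in the sketch below (the final values are fictitious and only illustrate the plotting):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Fictitious final cost values reached by 20 runs of two algorithms.
finals = {"algo A": rng.choice([0.01, 0.05, 0.3], 20, p=[0.5, 0.3, 0.2]),
          "algo B": rng.uniform(0.0, 0.5, 20)}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
for name, values in finals.items():
    ax1.plot(np.sort(values), marker="o", label=name)    # consistency curve
ax1.set_xlabel("runs, sorted by final cost")
ax1.set_ylabel("final cost")
ax1.legend()
ax2.boxplot(list(finals.values()), labels=list(finals.keys()))  # box plot summary
ax2.set_ylabel("final cost")
plt.tight_layout()
plt.show()
```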
Illustration of local optima with local optimization. In addition, local
optimization algorithms can be used to estimate whether there is a limited or
a large number of local minima and whether the best solution found by any
other algorithm has a large attraction basin or not. A simple way to do so is
to launch multiple runs of the algorithm with starting points chosen randomly
in the optimization domain, then make sure all the runs have converged (e.g.
BFGS has a stop criterion essentially guaranteeing a local minimum has been
reached) and finally plot the resulting consistency curve. If we were to run a very large number of such optimizations, the consistency curve should present plateaus corresponding to different local minima – possibly, several minima can present identical values of the cost function because of symmetries of the problem and thus not be distinguishable. The width of a plateau would make it possible to estimate the volume of the corresponding attraction basin. Running BFGS with randomly chosen starting points may appear here, beyond being a sound optimization approach, as a way of estimating the difficulty of a problem. A lot of
different results suggest a lot of different minima and if the best result is
rarely found, this also means it has a relatively small attraction basin
compared to the size of the optimization domain. These characteristics are
indicative of a difficult problem.
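The sketch below illustrates this procedure on a toy multimodal cost function: many local (BFGS) runs are launched from random starting points, and the sorted final values reveal plateaus whose widths estimate the relative volumes of the attraction basins.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """Toy multimodal cost function (Rastrigin-like)."""
    return float(np.sum(x**2) + 3.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x)))

rng = np.random.default_rng(3)
dim, n_starts = 5, 50
finals = []
for _ in range(n_starts):
    x0 = rng.uniform(-5.0, 5.0, dim)            # random start in the domain
    res = minimize(cost, x0, method="BFGS")     # converges to a local minimum
    finals.append(res.fun)

# Sorted final values = consistency curve; grouping them (here by rounding)
# counts the distinct local minima reached and estimates basin volumes.
values, counts = np.unique(np.round(finals, 3), return_counts=True)
print(values)
print(counts / n_starts)
```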
## IV Running and discussing experiments
With the idea that anyone should be able to build easily on our work, we provide a Jupyter Notebook hosted on the Colab platform[9] which can be downloaded and run, or run online, to reproduce all the results below.
The objective of this Notebook is to show how to optimize the typical
photonics test cases, either using a DE algorithm natively implemented in
PyMoosh or, for comparing a wide range of methods and/or creating new ones,
relying on Nevergrad. Launching a sufficient number of runs to be able to
compare algorithms afterwards is called running an experiment in the
optimization community. The first part of the online code can be modified to
choose the experimental setup. We have chosen a limited set of representative
algorithms to be included in our experiment for readability, but modifying the
set is straightforward. For instance, Bayesian Optimization results are shown only in the lowest dimensional setting, the only one in which it performed well.
### IV.1 Algorithm comparisons with Nevergrad
Using the results produced by our code, we first illustrate here how the
observables defined above can be used to discuss the results of the
experiments.
It is common to represent the convergence curves for all the different runs of
a single algorithm, as shown in Fig. 5. While this does indeed provide useful
information about the robustness of an algorithm’s convergence, as discussed
above, it is convenient to represent the average of the convergence curves
(rather than all curves) when benchmarking a lot of different algorithms – and
adding the standard deviation can also be done as shown in Fig. 6 to reduce
the information loss.
For Photovoltaics we see that convergence seems to be reached only for variants of DE. For the other algorithms, convergence is not completely clear and increasing the budget might thus help. BFGS and LBFGS use finite differences, and therefore the first $d+1$ points are essentially horizontal because they correspond to the finite-difference computations. GradBFGS uses the exact gradient, so it does not suffer from this, and any flat convergence curve can be attributed to the fact that, at a random starting point and at early iterations, following the gradient improves the efficiency only slightly.
Figure 5: Convergence curves. Convergence curves for different runs for
different algorithms showing the value of the cost function (the lower, the
better) as a function of the budget consumed by the optimization. The information brought by such individual curves is useful but difficult to read when comparing even a small number of algorithms, which is why the average convergence is often plotted, as shown in Fig. 6.
For small budgets, local algorithms, which tend to converge faster, can be a good choice. This holds especially in cases with lower dimensionality like the
Ellipsometry example, because local optima are often less dangerous in a low
dimensional setting.
However, Fig. 7 shows that for more difficult problems and larger budgets, DE
variants are efficient despite the large number of local minima with high cost
function values shown on the consistency curves of BFGS and the like.
Figure 6: Averaged convergence curves. Averaged convergence curves for different algorithms (26 runs each), with the standard deviation visible in ellipses, as a function of the number of iterations. The algorithms are ordered by average performance for the maximum budget (the lower the better). Using averaged convergence curves makes it possible to compare more algorithms[16].
Figure 7: Consistency curves. Each algorithm has been run several times (from 15 to 25 times) with the highest budget (from $10^{3}$ to 32000 depending on the case, as shown on the convergence curves of Fig. 5). The fitness values for each run are sorted from left to right in ascending order. The lower and the flatter the curve, the more efficient and reliable the algorithm can be considered (good reproducibility of the convergence value). A plateau means that a certain fitness value, and probably the same solution, has been found multiple times. The results for BFGS and its variants correspond to different local minima if the algorithm has converged (this is often the case when the budget is large).
Showing the consistency curves for a large variety of algorithms on the same graph is not convenient, as the corresponding figures can be difficult to interpret (see Fig. 7). Each consistency curve can however be summarized using a box
plot, allowing for a fair benchmarking between various algorithms, as shown in
Fig. 8 (for clarity, the results are not shown for all the algorithms that we
have tested). As can be seen, for a case like Ellipsometry with few
parameters, local methods perform excellently. When the landscape is simple
enough, mathematical programming tools (ranging from BFGS and the like to Nelder-Mead) seem the most adapted. Differential Evolution variants are frequently good, though slower than other methods on these problems.
For higher dimensions, and cases considered as inverse problems, DE variants
perform well overall, in full agreement with previous results[4]. We underline
that DE variants are tested with population size 30, which means that they
would run, without change, 30 times faster on a machine with 30 cores. In
addition, they can easily be adapted to parallel machines above this 30
threshold just by increasing the population size parameter, whereas local
algorithms are much harder to parallelize. Combining DE and local methods
sometimes brings improvement (e.g. QNDE vs QODE in the low dimensional case)
and is thus recommended[67]. QNDE is almost as good as QODE in most cases, and
competitive in the low-dimensional case.
Judging from the consistency curves and the boxplots, the CMA algorithm seems
as efficient as DE on the Bragg problems, finding the Bragg mirror most of the
time and outperforming local algorithms even if it presents a slower
convergence. However, increasing the dimension of the problem for
Photovoltaics makes the problem significantly harder for this algorithm, to
the point where it is not better than local optimization algorithms for the largest problem. We see in that case how comparing a global optimization algorithm to local algorithms with random starts makes it possible to assess whether an algorithm is of interest for a given problem.
Bayesian Optimization (BO) is computationally expensive, and this
computational overhead does not provide significant improvements, especially
in the more complex test cases. Furthermore, different BO implementations lead
to different results because there are many free parameters in Bayesian
optimization. We underline that the cases studied here are limited to
relatively fast cost functions and representations by a moderate number of
parameters. This confirms that BO is well suited for problems with a relatively low number of degrees of freedom and for which the cost function is expensive to compute,
which imposes a hard limit on the budget when the landscape is still
complex[16, 71, 62].
Figure 8: Performance box-plots for comparing algorithms. Box-plots
representing the distribution of the minimum values of the cost function
reached at the end of each run for a given algorithm for different test cases.
The results are shown for the highest budget, ranging from 1000 for Ellipsometry to 32000 for HugePhotovoltaics, as shown in Fig. 5. Each box extends from the first quartile to the third quartile, with a line at the median, and dashed lines are added at 1.5 times the inter-quartile range.
### IV.2 Physical analysis
Analyzing physically both the solutions produced and how they have been produced (by which algorithm, under which conditions and how fast) sheds new light both on the solution and on the optimization process itself. When a
problem is grounded in physics, its distinct characteristics can and have to
be leveraged to gain deeper insights.
Low dimension. In the Ellipsometry case, local algorithms perform perfectly
and find the solution reliably and quickly. This is typical for a simple
landscape with few local minima. Physically, this means the geometry is simple
enough and the considered layers sufficiently thin, to have a very low number
of resonances.
Multiple resonances and local minima. The Bragg test case is more interesting.
Even for ten layers, local algorithms (versions of BFGS) most often fail to
find the Bragg mirror. Since the algorithms have converged and since local
algorithms tend to converge to local minima, this means local minima are
numerous, even for such a relatively simple problem with an intermediate
number of parameters. The fact that BFGS most often misses the Bragg mirror
means that the optimization domain is large compared to the attraction basin
of the Bragg mirror. This can be expected because each resonance in the
structure is likely to produce a local minimum or maximum in any considered
physical quantity. With a total thickness of the structure beyond 2 µm, at least as many resonances as for a homogeneous layer of the lower-index material with a thickness ranging from 0 to 2 µm can be expected. This means that at the very least a dozen resonances, and thus a dozen local minima, exist.
algorithms CMA and DE perform noticeably better than local algorithms, which
means they are adapted to the problem.
Regularity and modularity. As explained above, for DE, if the same values for some parameters can be found in all the individuals, they will be conserved throughout the optimization. If a subset of parameters then describes a
subpart of the structure, DE will keep this part unchanged. This makes DE
uniquely equipped to produce modular structures, in which specific parts play
a well defined role.
In the Bragg case for instance, we suspect that once a few layers begin to
present the characteristics of a Bragg mirror at any place in the structure,
the algorithm keeps them intact and explores the sub-space of the optimization
domain that this defines. As this sub-space has a smaller volume, it is then
easier for the algorithm to find the rest of the Bragg mirror. This helps explain why DE also seems to easily generate periodic structures. In a way, periodic structures can be seen as modular too.
This is illustrated in Fig. 9, where the best results (whatever the algorithm) are shown for different budgets. For budgets ranging from 100 to 1000, the best solutions are produced by GradBFGS, as shown in Fig. 5 c). For the lower budget, the structure appears disordered (no algorithm has converged at that point) and the spectrum indicates that a relatively narrow anti-resonance producing a large reflectance has been found. When the budget is around 1000, GradBFGS has converged and typically proposes the solutions shown in Fig. 9 b), which show some regularity and decent performances. The spectrum (presenting a broad reflectance peak) and the structure seem to indicate that an anti-resonance is not enough to explain such performances. However, for larger budgets, DE has converged towards the Bragg mirror shown in c), which is perfectly periodic and has the highest efficiency as well as the spectrally broadest reflectance.
Figure 9: Best structures (right) and their associated reflectance spectrum (left) obtained for the Bragg case with 20 layers for a budget of a) 100, b) 1000, c) 20000. The light, resp. dark color represents the high (1.8), resp. low (1.4) refractive index material. The grey color represents the substrate and superstrate.
The Photovoltaics case is a perfect example of a modular structure. Fig. 10 shows the best results for all the Photovoltaics cases, all produced by QODE. It is important to notice that, whatever the number of layers and even though the structures have been obtained from independent runs, the three upper and up to 5 lower layers are common to all the structures. For the largest number of layers, periodic patterns (alternating 120 nm, resp. 150 nm, thick layers for permittivity 3.0, resp. 2.0) appear. A previous study has shown that the upper and lower layers allow light to be coupled in and out of the central photonic crystal more easily[5]. The fact that they appear consistently indicates their physical importance. They diminish the oscillations characteristic of Bragg mirrors outside the bandgap, which can be seen on the reflectance in Fig. 9 c) and are detrimental to the absorption.
Figure 10: Best structures (right) and their absorptance spectrum (left) obtained for the Photovoltaics case with a) 10 layers, b) 20 layers and c) 32 layers. The light, resp. dark color represents the high (3), resp. low (2) permittivity material. The grey color represents the substrate and superstrate.
Sometimes DE is able to generate structures like chirped dielectric mirrors[4], which locally resemble a Bragg mirror but whose parameters change gradually and smoothly. Such a structure cannot be considered periodic. It is however modular, as the different parts of the structure are obviously able to reflect different parts of the spectrum. We call this kind of structure regular. A periodic structure is obviously regular, but in that sense a regular structure is not necessarily periodic.
From the cases presented here and our experience, it can generally be expected that periodic or regular structures will also be the most efficient at acting on light, because light is a wave and is thus particularly sensitive to periodicity. The Bragg mirror, for instance, has a lower value of the cost function used here than any disordered structure that we have generated. The thickness values for the AR coating point towards a structure containing a photonic crystal, even if they are not as precise as in the Bragg case because of the irregularities of the solar spectrum. We underline that we have run these optimizations a very large number of times on these two cases and never found any better solution than the ones presented here. This strongly suggests that regularity should be expected and even sought after[11], and we underline that in many results from the literature, periodic patterns often seem to emerge spontaneously (see Fig. 11 below).
Robustness. In photonics, robustness to fabrication defects is always desirable. A robust structure presents an efficiency which will not change much when the parameters are slightly modified. From an optimization point of
view, this simply means that the attraction basin of the desired solution is
large. Evaluating the robustness of a solution is thus computationally costly
because it involves modifying a large number of parameters and computing the
change in the cost function. It can be tempting to include the robustness in the cost function; however, in our experience this is not especially necessary to produce regular or periodic structures.
In the Bragg case, a robust solution is a solution for which the peak of
reflectance at the working wavelength will not be displaced too much when the
structure is slightly modified. As a consequence, the larger such a peak, the more robust the solution can be considered. The largest reflectance over the broadest spectral range is usually produced by regular structures like photonic crystals. This means that the most robust solutions will also be, at least in that case, the most periodic ones. And in general, periodic solutions are indeed associated with robustness. We thus underline that finding regular solutions should be the priority, and that this will probably also ensure the robustness of the design.
## V Good practices for optimizing photonic structures
The optimization of photonic structures thus presents quite a few specific
characteristics, which influence the strategy to be followed in this
particular context. While the computational cost may put a limit to the
problems that can be tackled, we give below a list of strategies that, applied
together, constitute a methodology for the optimization of photonic
structures.
One of the most important questions is when to stop the optimization. It is impossible to prove that a solution is optimal. However, there are ways to assess the quality of a solution. We give below criteria that can help to determine whether a solution is satisfying – meaning the optimization can be stopped – or not.
### V.1 Optimization methodology
Our methodology consists in maximizing the information extracted from the observables we have defined above, and in making use of specific characteristics of photonic problems to gradually establish confidence in the generated solutions. We leave other strategies that could also make interesting
solutions emerge for further work (e.g. multi-objective optimization, or the
use of manufacturing noise in the cost function).
Convergence curves for the determination of the budget. The presence of plateaus in a convergence curve indicates that the algorithm has converged, suggesting that the budget is adequate. On the contrary, the absence of plateaus on the convergence curves suggests that the budget should be increased.
Systematic use of consistency curves for checking local minima. Since global algorithms are usually non-deterministic, it is useful to launch multiple runs, despite the computational cost, in order to conduct a statistical analysis with a consistency curve. Ideally, the consistency curve exhibits a plateau at its minimum value (see Fig. 7a). However, this is not always the case (see Fig. 7f), which then indicates that the problem might not be fully solved.
Adding degrees of freedom. In photonics problems, it is generally
straightforward to gradually increase the number of elements that can be
optimized without changing the nature of the problem. In the cases presented
above, this can be done by increasing the number of layers of the structure,
but it could for example also be through a decrease in the discretization
stepsize, e.g. in topology optimization. Structures with different numbers of
layers can then be compared in terms of performances. It can be generally
expected that increasing the number of degrees of freedom leads to improved
optimized performances. Plotting the minimum value of the cost function as a function of complexity, represented by the number of layers or elements in the structure, can provide valuable insights: e.g., if increasing the number of layers does not improve performance, this indicates that the difficulty of the problem has also increased and that continuing along that path is likely pointless[5].
Parametrization bias awareness: meaningful representations make the
optimization problem easier. In parametric optimization, when a handful of
parameters are needed to describe a relatively complex device, the choice of
these parameters and of their limits (which defines the optimization domain)
is crucial. More precisely, these initial choices may introduce biases, favor
certain types of algorithms or make the convergence more difficult. When, for instance, the parametrization is chosen such that subsets of parameters correspond to components of the structure, algorithms like DE are particularly efficient. In DE, when a component of a structure is widely spread in the whole population, it might be exactly preserved through the iterations, whereas many other algorithms keep perturbing all variables. DE thus has the distinctive property of spontaneously producing modular structures. When the different parameters do not
describe a part of the structure but a more global property, other kinds of
algorithms might be more relevant, as has been underlined in previous
works[16].
Sensitivity to the optimization domain: choosing bounds. In many cases, the
imposed constraints strongly control the emerging solutions. For example,
using a medium with an extremely high refractive index (typically infinite) is
a simple but not realistic way to reflect light completely. The constraints on
the refractive index values are therefore the fundamental reason for the
production of Bragg mirrors as a solution. However, there are instances where
the constraints become too demanding, making it difficult to find satisfactory
solutions. It is important in that case to verify whether some parameters are
stuck at the optimization domain boundary (i.e. if the boundary constraints
are active). On the other hand, when a satisfactory solution is produced, it
can be informative to add or remove constraints or expand the optimization
domain. Bragg mirrors tend to emerge, whether the refractive indices are
allowed to vary within certain limits or are imposed, with the latter case
being straightforward. A clear understanding of the conditions under which a
solution is generated also contributes to building confidence.
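A simple way to perform the boundary check is to flag every parameter lying within a small fraction of the domain span from either bound; the tolerance and the four-parameter solution below are purely illustrative.

```python
# Sketch: flag the parameters of a solution that sit (numerically) on the boundary
# of the optimization domain, i.e. parameters whose bound constraints are active.
import numpy as np

def active_bounds(x, lower, upper, rtol=1e-3):
    x, lower, upper = map(np.asarray, (x, lower, upper))
    span = upper - lower
    return ((x - lower) < rtol * span) | ((upper - x) < rtol * span)

# Hypothetical four-parameter solution and bounds, for illustration only:
x_best = np.array([20.0, 75.3, 199.9, 140.2])
lower = np.full(4, 20.0)
upper = np.full(4, 200.0)
print(active_bounds(x_best, lower, upper))   # -> [ True False  True False]
```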
Leverage your physical intuition. We underline that in optimization, there are
no rules a priori. If, for instance, it makes sense physically to modify a
solution by hand, this should not be considered forbidden or “cheating”,
especially when it seems that the algorithm is stuck in a local optimum that
can be criticized based on physical reasoning[72]. The limits of the
optimization range can also be set to encourage the algorithm to explore areas
where promising solutions are likely based on physics, or to stay within
specific functional ranges[73]. This approach usually makes the task easier
for the algorithm and provides solutions that are easier to understand and
more satisfying.
Physical intuition is often what determines the conditions in which the
optimization takes place and what allows parametrization biases to be
detected, or even reveals that a problem is not well posed enough for any
algorithm to find a satisfying solution. It should never be overlooked.
### V.2 Assessing the quality of a solution
Usually, no solution can be proven optimal, due to the impossibility of
exploring the entire space of possible solutions or of locating all the local
minima. Therefore, it becomes necessary to establish criteria that enhance the
confidence in a solution. Besides the criteria above, which are based on
optimization observables, a physical analysis is possible, as developed in
the present section. When enough confidence in a solution has been built, it
can be deemed satisfying.
Consistency. A solution that has been obtained more than once inspires
greater confidence. If it is obtained repeatedly, it might correspond
to a plateau of similarly good solutions on the consistency curve of the most
efficient algorithm. In that case the solution can be deemed truly consistent
and particularly trustworthy. This is most often not the case. When the best
solution is obtained only in a single run, this should be considered
indicative of a local minimum. We regret that, except in a few cases[14],
elements allowing the consistency of a solution to be assessed, even short of
a full consistency curve, are generally not given.
Spontaneous emergence of regularity. In photonics, periodic or regular
structures are ubiquitous. This can be directly attributed to the wave nature
of the underlying phenomena. As underlined in a pioneering optimization
study[11], “the emergence of periodicity suggests that periodicity is a
principal condition for strong light manipulation.” Many studies have shown
the emergence of partially periodic structures, even when the solution lacks
complete periodicity or regularity (like for chirped dielectric mirrors
typically[4]). However, when an algorithm proposes completely periodic
structures as a solution, they naturally inspire more satisfaction. Based on
our experience, we have yet to encounter a simple problem where fully
disordered structures outperform regular ones in terms of efficiency.
A symmetric structure can also be considered more regular. We believe that the
spontaneous emergence of symmetry also reinforces the confidence that can be
placed in a structure. We underline that this is rare, as symmetry is almost
always imposed (which tends to simplify problems noticeably).
Overall, we disagree with the notion that prioritizing performance over
aesthetics is necessary, as we believe that both aspects are inherently
intertwined in photonics. We underline again that in the Bragg mirror
benchmark, no disordered structure has ever presented a better performance
than a Bragg mirror with the same number of layers. In the Photovoltaics case,
the irregularities can be linked to the noise in the solar spectrum and, as a
consequence, in the cost function. Irregularities in that case improve the
performances, but the periodic pattern is still distinguishable and is central
for the overall efficiency. In the case of multilayered structures, regularity
or periodicity may be more likely to emerge, due to the relative simplicity of
the geometry. However, we underline that in the literature, relatively regular
patterns (in the sense defined above) emerge all the time, as shown in Fig. 11.
Sometimes the patterns look unfinished, perhaps indicating the solution can be
further improved – which is likely if a local algorithm has been used. In our
experience, very regular or periodic patterns, including spontaneously
symmetric ones, can also be generated in more complex setups[13].
Physical interpretability. The solutions that are most satisfying are those
that can be readily understood from a physical point of view. We underline
that, generally, only periodic, regular or modular structures can be truly
understood. This is more difficult for completely disordered structures,
except if the disorder itself is tailored, which cannot be ruled out. The
absence of physical interpretability is likely what hinders the widespread
adoption of optimization as a daily research tool within the community.
Sometimes, the solutions can be studied and fully understood a posteriori[5].
Although this does not guarantee optimality, this is at least a good reason to
stop looking for alternative solutions: any solution that is comprehensible
and understandable can serve as a valuable source of inspiration for manually
generated structures and can offer valuable design rules. In rare situations,
algorithms can produce solutions that resemble patterns found in nature, on
insect wings for instance, which have evolved for optical purposes. Although
these occurrences are uncommon, they can be highly satisfying, as they align
with the concept of evolution as a form of optimization. However, due to their
infrequency, they cannot be included in the above criteria. In the case of the
reflection problem which we have considered in this work, this criterion is
obviously fulfilled too, as Bragg mirrors are commonly found in nature.
Figure 11: Spontaneous emergence of regularity or periodicity. When
periodicity or regularity emerge, solutions appear much more satisfying and
are more likely to be analyzed physically. (a) When the positions of silicon
nanopillars are optimized to enhance the magnetic Purcell factor using the
Differential Evolution algorithm, two ring cavities spontaneously
emerge[74]. (b) Two-dimensional dielectric metalens designs obtained by
topology optimization[75] with a relatively low number of parameters. (c)
Silicon carbide optical cavity obtained using gradient-based optimization [76]
showing a very regular pattern (holes of increasing dimensions in the
waveguide). (d) When gratings of rectangular blocks of chitin are optimized to
maximize the scattered reflection of a 450 nm wavelength in the first
diffraction orders, while minimizing the specular reflection of a range of
visible wavelengths, quasi-perfect checkerboards with interdigitated blocks are
produced[4].
We underline that these criteria to determine whether a solution is satisfying
or not can be applied to any inverse design problem, whether it is solved by
topology optimization, shape optimization, parametric optimization or any
other technique. We advise all authors, as far as possible, to publish all the
necessary information so that other researchers can reproduce and verify the
quality of optimization results. When this is done, particularly interesting
and thorough discussions become possible[14].
## VI Conclusion
In this paper, we present different types of popular optimization algorithms
and compare them using the Nevergrad platform on three typical photonic
problems. For problems with low dimensionality, like ellipsometry or, more
generally, parameter retrieval, many algorithms seem well suited, including
local optimization algorithms or Bayesian optimization. For problems closer to
inverse design, not all approaches are effective, because of the large number
of local minima, frequent in photonics problems. We have shown how algorithms
can be rigorously and thoroughly compared. The illustrated examples show that
variants of the Differential Evolution algorithm are highly efficient on a
large range of photonic problems. Additionally, we developed a methodology and
offer some advice for conducting high-performance design optimizations and
evaluating the quality of a solution. Finally, as a supplement to the
manuscript, we provide Python Jupyter notebooks that illustrate our workflow
and can be used to reproduce the presented benchmarks.
We meant this work as a tutorial, but also as a warning. Optimization is
difficult because of its unique curse: it is never possible to guarantee that
a solution is optimal, or even close to it, making science much more
difficult. This has not been too much of a problem in mechanics because the
problem landscape is generally simple enough for understandable physical
reasons. However, photonics problems are comparatively more complicated as our
results here show: even the simplest problems present numerous local minima.
This must lead to extra caution. The field of optimization is awash in dubious
claims of novelty and efficiency. In order to avoid a similar reproducibility
crisis in photonics, adopting an open science approach is imperative: data
regarding the different runs of optimization should be published, codes should
be shared and both should be discussed[14, 77]. We are convinced that the
optimization of photonic structures is a work-intensive domain, and that
neither a single method nor a single team will be enough to uncover what
optimization can bring in terms of innovation.
There is also a danger in deeming a solution satisfactory when it should not
be: that of missing innovative and more efficient structures. We underline that
the question of how reflectance can be maximized using numerical optimization,
central in the present work, was considered 25 years ago[39], exactly
when DE was published, but because of the lack of efficient algorithms at the
time, the resulting structures were disordered and not convincing enough to
foster further work in that direction. Perhaps because the problem had already
been studied, more than 20 years passed before it was examined
again[4], producing the convincing results this work is based on. Given the
potential of inverse design but the difficulty of finding structures that would
be commercially viable[1], there is a danger in giving up too soon and missing
particularly efficient structures. Fortunately, physical analysis of
structures seems to be a powerful tool to discuss both the solutions and the
optimization process itself.
We have shown in the present work how modern numerical tools have made the use
of optimization much simpler and more efficient in photonics. Even for
well-defined functioning regimes suggested by physics itself, with a relatively
low number of parameters, numerically optimizing a photonic structure often
yields unexpectedly high performance. We underline that numerical optimization
is now able to produce photonic structures that can be understood. This
constitutes a complete change compared to the times when inefficient
algorithms (such as the first genetic algorithms) produced disordered and
impossible-to-understand results. Open science approaches now allow any
researcher in the field to use such tools easily. We hope that our work will
encourage fellow researchers within the community to seamlessly integrate
optimization tools into their routine practice and to join the effort in
discovering novel and more efficient structures to address the challenges of
the future.
## Acknowledgements
A.M. is an Academy CAP 20-25 chair holder. He acknowledges the support
received from the Agence Nationale de la Recherche of the French government
through the program Investissements d’Avenir (16-IDEX-0001 CAP 20-25). This
work was supported by the International Research Center “Innovation
Transportation and Production Systems” of the Clermont-Ferrand I-SITE CAP
20-25. P.R.W. acknowledges the support of the French Agence Nationale de la
Recherche (ANR) under grant ANR-22-CE24-0002 (project NAINOS), and from the
Toulouse high performance computing facility CALMIP (grant p20010).
## References
* Molesky _et al._ [2018] S. Molesky, Z. Lin, A. Y. Piggott, W. Jin, J. Vucković, and A. W. Rodriguez, Nature Photonics 12, 659 (2018).
* Bendsoe and Sigmund [2003] M. P. Bendsoe and O. Sigmund, _Topology optimization: theory, methods, and applications_ (Springer Science & Business Media, 2003).
* Sigmund [2011] O. Sigmund, Structural and Multidisciplinary Optimization 43, 589 (2011).
* Barry _et al._ [2020] M. A. Barry, V. Berthier, B. D. Wilts, M.-C. Cambourieux, P. Bennet, R. Pollès, O. Teytaud, E. Centeno, N. Biais, and A. Moreau, Scientific reports 10, 1 (2020).
* Bennet _et al._ [2021] P. Bennet, P. Juillet, S. Ibrahim, V. Berthier, M. A. Barry, F. Réveret, A. Bousquet, O. Teytaud, E. Centeno, and A. Moreau, Physical Review B 103, 125135 (2021).
* Sörensen [2015] K. Sörensen, International Transactions in Operational Research 22, 3 (2015), https://onlinelibrary.wiley.com/doi/pdf/10.1111/itor.12001 .
* pym [2023] Pymoosh, https://pypi.org/project/PyMoosh/ (2023), verified: 2022-06-10.
* Rapin and Teytaud [2018] J. Rapin and O. Teytaud, Nevergrad - A gradient-free optimization platform, https://GitHub.com/FacebookResearch/Nevergrad (2018).
* Wiecha [2023] P. Wiecha, Tuto Multilayer Optimization, https://doi.org/10.5281/zenodo.8347023 (2023).
* Kim _et al._ [1996] J. Kim, H.-B. Lee, H. K. Jung, S.-Y. Hahn, C. Cheon, and H. Kim, IEEE Transactions on Magnetics 32, 1250 (1996).
* Gondarenko _et al._ [2006] A. Gondarenko, S. Preble, J. Robinson, L. Chen, H. Lipson, and M. Lipson, Physical review letters 96, 143904 (2006).
* Piggott _et al._ [2015] A. Y. Piggott, J. Lu, K. G. Lagoudakis, J. Petykiewicz, T. M. Babinec, and J. Vučković, Nature Photonics 9, 374 (2015).
* Teytaud _et al._ [2022] O. Teytaud, P. Bennet, and A. Moreau, Photonics and Nanostructures - Fundamentals and Applications 52, 101072 (2022).
* Su _et al._ [2020] L. Su, D. Vercruysse, J. Skarda, N. V. Sapra, J. A. Petykiewicz, and J. Vučković, Applied Physics Reviews 7 (2020).
* Tikhonravov _et al._ [1996] A. V. Tikhonravov, M. K. Trubetskov, and G. W. DeBell, Applied optics 35, 5493 (1996).
* Schneider _et al._ [2019] P.-I. Schneider, X. Garcia Santiago, V. Soltwisch, M. Hammerschmidt, S. Burger, and C. Rockstuhl, ACS Photonics 6, 2726 (2019).
* Mahfoud [1995] S. W. Mahfoud, _Niching Methods for Genetic Algorithms_ , Ph.D. thesis, University of Illinois at Urbana-Champaign (1995).
* Shir [2012] O. M. Shir, Niching in evolutionary algorithms, in _Handbook of Natural Computing_ (Springer Berlin Heidelberg, 2012) pp. 1035–1069.
* Petrowski [1996] A. Petrowski, in _Proceedings of IEEE International Conference on Evolutionary Computation_ (1996) pp. 798–803.
* Jakšić _et al._ [2023] Z. Jakšić, S. Devi, O. Jakšić, and K. Guha, Biomimetics 8, 10.3390/biomimetics8030278 (2023).
* Wolpert and Macready [1997] D. Wolpert and W. Macready, IEEE Transactions on Evolutionary Computation 1, 67 (1997).
* Markov [2023] I. L. Markov, The false dawn: Reevaluating google’s reinforcement learning for chip macro placement (2023), arXiv:2306.09633 [cs.LG] .
* Gould _et al._ [2003] N. I. Gould, D. Orban, and P. L. Toint, ACM Transactions on Mathematical Software (TOMS) 29, 373 (2003).
* Gould _et al._ [2015] N. I. Gould, D. Orban, and P. L. Toint, Computational optimization and applications 60, 545 (2015).
* DIMACS [2021] DIMACS, Dimacs implementation challenge, http://dimacs.rutgers.edu/programs/challenge (2021).
* Hansen _et al._ [2009] N. Hansen, A. Auger, S. Finck, and R. Ros, _Real-Parameter Black-Box Optimization Benchmarking 2009: Experimental setup_ , Tech. Rep. RR-6828 (INRIA, France, 2009).
* Li _et al._ [2013] X. Li, K. Tang, M. N. Omidvar, Z. Yang, and K. Qin, in _CEC 2013 proceedings_ (2013).
* Gallagher and Saleem [2018] M. Gallagher and S. Saleem, in _PPSN’18 workshop_ (2018).
* Häse _et al._ [2021] F. Häse, M. Aldeghi, R. J. Hickman, L. M. Roch, M. Christensen, E. Liles, J. E. Hein, and A. Aspuru-Guzik, Machine Learning: Science and Technology 2, 035021 (2021).
* Lee _et al._ [2018] J. Lee, M. X. Grey, S. Ha, T. Kunz, S. Jain, Y. Ye, S. S. Srinivasa, M. Stilman, and C. K. Liu, Journal of Open Source Software 3, 500 (2018).
* Coumans and Bai [2017] E. Coumans and Y. Bai, Pybullet, a python module for physics simulation in robotics, games and machine learning (2017).
* Todorov _et al._ [2012] E. Todorov, T. Erez, and Y. Tassa, in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems._ (2012) pp. 5026–5033.
* Brockman _et al._ [2016] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, arXiv preprint arXiv:1606.01540 (2016).
* Musgrave _et al._ [2020] K. Musgrave, S. J. Belongie, and S. Lim, CoRR abs/2003.08505 (2020), arXiv:2003.08505 .
* Kapoor and Narayanan [2022] S. Kapoor and A. Narayanan, Leakage and the reproducibility crisis in ml-based science (2022).
* Cea [1986] J. Cea, ESAIM: Mathematical Modelling and Numerical Analysis 20, 371 (1986).
* Wang _et al._ [2003] M. Y. Wang, X. Wang, and D. Guo, Computer methods in applied mechanics and engineering 192, 227 (2003).
* Wang _et al._ [2018] F. Wang, R. E. Christiansen, Y. Yu, J. Mørk, and O. Sigmund, arXiv preprint arXiv:1810.02417 (2018).
* Martin _et al._ [1995] S. Martin, J. Rivory, and M. Schoenauer, applied Optics 34, 2247 (1995).
* Moreau [2023] A. Moreau, PyMoosh (2023).
* Tikhonravov [1993] A. V. Tikhonravov, Applied Optics 32, 5417 (1993).
* Smaali _et al._ [2021] R. Smaali, T. Taliercio, A. Moreau, and E. Centeno, Applied Physics Letters 119 (2021).
* Centeno _et al._ [2021] E. Centeno, E. Alvear-Cabezón, R. Smaali, A. Moreau, and T. Taliercio, Semiconductor Science and Technology 36, 085014 (2021).
* Moreau _et al._ [2007] A. Moreau, C. Lafarge, N. Laurent, K. Edee, and G. Granet, Journal of Optics A: Pure and Applied Optics 9, 165 (2007).
* Santbergen _et al._ [2010] R. Santbergen, J. Goud, M. Zeman, J. van Roosmalen, and R. C. van Zolingen, Solar energy materials and solar cells 94, 715 (2010).
* Hansen _et al._ [2019] N. Hansen, Y. Akimoto, and P. Baudis, CMA-ES/pycma on Github, Zenodo, DOI:10.5281/zenodo.2559634 (2019).
* FacebookResearch [2020] FacebookResearch, Ax - adaptive experimentation, ax.dev (2020).
* Bergstra _et al._ [2015] J. Bergstra, B. Komer, C. Eliasmith, D. Yamins, and D. D. Cox, Computational Science and Discovery 8, 014008 (2015).
* Hutter _et al._ [2011] F. Hutter, H. H. Hoos, and K. Leyton-Brown, in _LION_, Lecture Notes in Computer Science, Vol. 6683, edited by C. A. C. Coello (Springer, 2011) pp. 507–523.
* Johnson [1994] S. G. Johnson, The nlopt nonlinear-optimization package (1994).
* Liu and Nocedal [1989] D. C. Liu and J. Nocedal, Math. Program. 45, 503 (1989).
* Janikow _et al._ [1991] C. Z. Janikow, Z. Michalewicz, _et al._ , in _ICGA_ , Vol. 1991 (1991) pp. 31–36.
* Herrera _et al._ [1998] F. Herrera, M. Lozano, and J. L. Verdegay, Artificial intelligence review 12, 265 (1998).
* Storn and Price [1997] R. Storn and K. Price, J. of Global Optimization 11, 341 (1997).
* Rahnamayan _et al._ [2007] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, in _2007 IEEE Congress on Evolutionary Computation_ (2007) pp. 2229–2236.
* Rapin _et al._ [2020] J. Rapin, P. Bennet, E. Centeno, D. Haziza, A. Moreau, and O. Teytaud, in _Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion_ (2020) pp. 1599–1607.
* Kennedy and Eberhart [1995] J. Kennedy and R. C. Eberhart, in _Proceedings of the IEEE International Conference on Neural Networks_ (1995) pp. 1942–1948.
* Beyer [2001] H.-G. Beyer, _The Theory of Evolution Strategies_ , Natural Computing Series (Springer, Heideberg, 2001).
* Hansen and Ostermeier [2003] N. Hansen and A. Ostermeier, Evolutionary Computation 11 (2003).
* Rechenberg [1973] I. Rechenberg, _Evolutionsstrategie Optimierung technischer Systeme nach Prinzipien der biologischen Evolution_ (Friedrich Frommann Verlag, Stuttgart-Bad Cannstatt, 1973).
* Jones _et al._ [1998] D. R. Jones, M. Schonlau, and W. J. Welch, Journal of Global Optimization 13, 455 (1998).
* Elsawy _et al._ [2021] M. M. Elsawy, A. Gourdin, M. Binois, R. Duvigneau, D. Felbacq, S. Khadir, P. Genevet, and S. Lanteri, ACS photonics 8, 2498 (2021).
* Hutter _et al._ [2013] F. Hutter, H. Hoos, and K. Leyton-Brown, in _Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation_, GECCO ’13 Companion (Association for Computing Machinery, New York, NY, USA, 2013) p. 1209–1216.
* Nogueira [2014] F. Nogueira, Bayesian Optimization: Open source constrained global optimization tool for Python (2014–).
* Raponi _et al._ [2020] E. Raponi, H. Wang, M. Bujny, S. Boria, and C. Doerr, CoRR abs/2007.00925 (2020), 2007.00925 .
* Nelder and Mead [1965] J. A. Nelder and R. Mead, Computer Journal 7, 308 (1965).
* Liu _et al._ [2020] J. Liu, A. Moreau, M. Preuss, J. Rapin, B. Roziere, F. Teytaud, and O. Teytaud, in _Proceedings of the 2020 Genetic and Evolutionary Computation Conference_ , GECCO ’20 (2020) p. 620–628.
* Zoph and Le [2017] B. Zoph and Q. V. Le, in _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_ (OpenReview.net, 2017).
* Real _et al._ [2017] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Q. V. Le, and A. Kurakin, CoRR abs/1703.01041 (2017), arXiv:1703.01041 .
* Cheng _et al._ [2023] C.-K. Cheng, A. B. Kahng, S. Kundu, Y. Wang, and Z. Wang, in _Proceedings of the 2023 International Symposium on Physical Design_, ISPD ’23 (Association for Computing Machinery, New York, NY, USA, 2023) p. 158–166.
* Elsawy _et al._ [2020] M. M. Elsawy, S. Lanteri, R. Duvigneau, J. A. Fan, and P. Genevet, Laser & Photonics Reviews 14, 1900445 (2020).
* Frellsen _et al._ [2016] L. F. Frellsen, Y. Ding, O. Sigmund, and L. H. Frandsen, Optics Express 24, 16866 (2016), publisher: Optical Society of America.
* Moreau _et al._ [2012] A. Moreau, R. Smaali, E. Centeno, and C. Seassal, Journal of Applied Physics 111 (2012).
* Brûlé _et al._ [2022] Y. Brûlé, P. Wiecha, A. Cuche, V. Paillard, and G. C. Des Francs, Optics Express 30, 20360 (2022).
* Christiansen and Sigmund [2021] R. E. Christiansen and O. Sigmund, JOSA B 38, 510 (2021).
* Yang _et al._ [2023] J. Yang, M. A. Guidry, D. M. Lukin, K. Yang, and J. Vučković, arXiv preprint arXiv:2303.17079 (2023).
* Jiang _et al._ [2020] J. Jiang, R. Lupoiu, E. W. Wang, D. Sell, J. P. Hugonin, P. Lalanne, and J. A. Fan, Optics express 28, 13670 (2020).
# Numerical simulations of inflationary dynamics: slow-roll and beyond
Siddharth S. Bhatt, Swagat S. Mishra, Soumen Basak, and Surya N. Sahoo
###### Abstract
Cosmic inflation is a period of rapid accelerated expansion of space in the
very early universe. During inflation, vacuum quantum fluctuations are
amplified and stretched to cosmological scales which seed the fluctuations in
the cosmic microwave background as well as the large-scale structure of our
universe. Large quantum fluctuations may lead to the formation of primordial
black holes (PBHs) in the post-inflationary universe. Numerical simulations of
the inflationary dynamics are presented here for a single canonical scalar
field minimally coupled to gravity. We spell out the basic equations governing
the inflationary dynamics in terms of cosmic time $t$ and define a set of
dimensionless variables convenient for numerical analysis. We then provide a
link to our simple numerical Python code on GitHub that can be used to
simulate the background dynamics as well as the evolution of linear
perturbations during inflation. The code computes both scalar and tensor power
spectra for a given inflaton potential $V(\phi)$. We discuss a concrete
algorithm to use the code for various purposes, especially for computing the
enhanced scalar power spectrum in the context of PBH formation. We intend to
extend the framework to simulate a number of additional quantities, including
the scalar-induced second-order tensor power spectrum, in a revised version of
this manuscript in the near future.
## 1 Introduction
Cosmic inflation has emerged as the leading scenario for describing the very
early universe prior to the commencement of the radiative hot Big Bang Phase
[1, 2, 3, 4, 5, 6]. According to the inflationary paradigm, a transient epoch
of at least 60-70 e-folds of rapid accelerated expansion suffices in setting
natural initial conditions for the background space-time in the form of
spatial flatness as well as statistical homogeneity and isotropy on large
angular scales [2, 3, 4, 7]. Additionally, (and more significantly,) quantum
fluctuations during inflation naturally generate a spectrum of almost scale-
invariant initial scalar fluctuations which seed the temperature and
polarisation fluctuations in the Cosmic Microwave Background (CMB) Radiation,
and later, the formation of structure in the universe [8, 9, 10, 11, 7]. In
addition to scalar perturbations, quantum fluctuations during inflation also
create a spectrum of almost scale-invariant tensor perturbations which later
become gravitational waves [12, 13].
The simplest models of inflation, comprising a single scalar field, called
the ‘inflaton’, which is minimally coupled to gravity, make several distinct
predictions [14] (i.e. an almost scale-invariant, nearly Gaussian, and
adiabatic spectrum of scalar fluctuations), most of which have received
spectacular observational confirmation, particularly from the latest CMB
missions [15].
However, as mentioned earlier, inflation also generates tensor perturbations
that later constitute the relic gravitational wave background (GW) which
imprints a distinct signature on the CMB power spectrum in the form of the
B-mode polarization [15]. The amplitude of these relic GWs provides us
information about the inflationary energy scale while their spectrum enables
us to access general properties of the epoch of reheating, being exceedingly
sensitive to the post-inflationary equation of state [13, 16]. The amplitude
of inflationary tensor fluctuations, relative to that of scalar fluctuations,
is usually characterised by the tensor-to-scalar ratio $r$. Different models
of inflation predict different values for $r$ which is sensitive to the
gradient of the inflaton potential $V_{,\phi}(\phi)=\frac{dV(\phi)}{d\phi}$
relative to its height $V(\phi)$. Convex potentials predict large values for
$r$, while concave potentials predict relatively small values of $r$. While
the spectrum of inflationary tensor fluctuations has not yet been observed,
current CMB observations are able to place an upper bound on the tensor-to-
scalar ratio on large angular scales. In particular, the latest CMB
observations of BICEP/Keck [17], combined with those of the PLANCK mission
[15], place the strong upper bound $r\leq 0.036$ (at $95\%$ confidence).
This most recent upper bound on $r$ has important consequences for single
field canonical inflation. In particular, given $r\leq 0.036$, all
monotonically increasing convex potentials, including the whole family of
monomial potentials $V(\phi)\propto\phi^{p}$, are completely ruled out in the
canonical framework. Among these strongly disfavoured models are the simplest
classic inflaton potentials $\frac{1}{2}m^{2}\phi^{2}$ and $\lambda\phi^{4}$.
Instead, the observational upper bound on $r$ appears to favour
asymptotically-flat potentials possessing one or two plateau-like wings; see
[19, 18]. Current observational data lead to a scenario in which the inflaton
$\phi$ slowly rolls down a shallow potential $V(\phi)$ thereby giving rise to
a quasi-de Sitter early stage of near-exponential expansion. A thorough
analysis of the inflationary phase-space dynamics $\\{\phi,\dot{\phi}\\}$ for
plateau potentials shows [20] that a large range of initial conditions leads
to adequate inflation in these models.
However, it is important to stress that the CMB window constitutes only a tiny
part of the observationally available field space between the Hubble-exit of
the largest scales in the sky and the smallest scale at the end of inflation.
Consequently, a substantial period of the inflationary dynamics corresponding
to potentially interesting small-scale primordial physics (which accounts for
roughly the last 40–50 e-folds of accelerated expansion during inflation)
remains observationally unexplored, being inaccessible to the CMB and LSS
observations. Any departure from the slow-roll regime that might be triggered
by a change in the dynamics of the inflaton field would lead to interesting
observational consequences on small scales. In particular, the presence of a
feature at intermediate field values that might lead to large enough
amplification of the small-scale scalar fluctuations, would facilitate the
formation of Primordial Black Holes upon the Hubble re-entry of these modes
during the post-inflationary epochs.
Primordial Black Holes (PBHs) are extremely interesting compact objects which
might have been formed from the collapse of large density fluctuations in the
early universe [21, 22, 23, 24] and they constitute a potential candidate for
dark matter [27, 26, 25, 28, 29, 30]. Seeds for such large fluctuations can be
generated during inflation, as mentioned above. For instance, a feature in the
inflaton potential in the form of a flat inflection point can further slow
down the already slowly rolling inflaton field substantially, leading to an
enhancement of the primordial scalar power $P_{\zeta}$. A number of different
features for enhancing small-scale power during inflation have been proposed
in recent years [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
45, 46, 47, 48, 49]. Hence PBHs (and the associated induced relic GWs) are
excellent probes of the small-scale primordial physics.
In this preliminary version of our paper, we discuss a simple code developed
for numerical simulations of the inflationary dynamics. We introduce the
relevant dimensionless variables used in our numerical analysis and provide a
link to the code in our GitHub account. We also discuss how to use the code in
various scenarios, which include phase-space analysis of inflationary initial
conditions, inflationary background dynamics and determining scalar and tensor
power spectra both under slow-roll approximation and beyond. The latter case
has important implications for PBH formation and we discuss how to use the
code to simulate the Mukhanov-Sasaki equation mode by mode. We also discuss a
number of important future directions that are to be included in the
forthcoming version of our paper. The present version of our numerical
framework is quite simple, though not particularly compact. It is intended to
provide a pedagogical guideline for researchers who are relatively new to numerical
simulations of inflation. In the forthcoming version of our work, we will
introduce a much more compact numerical framework that we are currently
working on which will incorporate additional new features.
This paper is organised as follows: we begin with a brief introduction of the
inflationary scalar field dynamics in section 2 and quantum fluctuations in
section 3. We then proceed to discuss numerical simulations of the background
dynamics in section 4. This also includes studying the scalar and tensor
fluctuations under the slow-roll approximations. Section 5 is dedicated to
studying the inflationary quantum fluctuations by numerically solving the
Mukhanov-Sasaki equation, and its application to inflaton potentials
possessing a slow-roll violating feature. We also mention a number of future
extensions of our numerical set-up in section 6, before concluding with a
discussion section.
We work in the units $c,\hbar=1$. The reduced Planck mass is defined to be
$m_{p}\equiv 1/\sqrt{8\pi G}=2.43\times 10^{18}~{}{\rm GeV}$. We assume the
background universe to be described by a spatially flat Friedmann-Lemaitre-
Robertson-Walker (FLRW) metric with signature $(-,+,+,+)$.
## 2 Inflationary Dynamics
The Action for a scalar field which couples minimally to gravity has the
following general form
$S[\phi]=\int d^{4}x\,\sqrt{-g}\;{\cal L}(F,\phi),$ (2.1)
where the Lagrangian density ${\cal L}(\phi,F)$ is a function of the field
$\phi$ and the kinetic term
$F=\frac{1}{2}\partial_{\mu}\phi\;\partial^{\mu}\phi.$ (2.2)
Varying (2.1) with respect to $\phi$ results in the equation of motion
$\frac{\partial{\cal
L}}{\partial\phi}-\left(\frac{1}{\sqrt{-g}}\right)\partial_{\mu}\left(\sqrt{-g}\frac{\partial{\cal
L}}{\partial\left(\partial_{\mu}\phi\right)}\right)=0.$ (2.3)
The energy-momentum tensor associated with the scalar field is
$T^{\mu\nu}=\left(\frac{\partial{\cal L}}{\partial
F}\right)\,\left(\partial^{\mu}\phi\;\partial^{\nu}\phi\right)-g^{\mu\nu}\,{\cal
L}~{}.$ (2.4)
Specializing to a spatially flat FRW universe and a homogeneous scalar field,
one gets
$ds^{2}=-dt^{2}+a^{2}(t)\left[dx^{2}+dy^{2}+dz^{2}\right]\,,$ (2.5)
$T^{\mu}_{\;\>\;\nu}=\mathrm{diag}\left(-\rho_{{}_{\phi}},p_{{}_{\phi}},p_{{}_{\phi}},p_{{}_{\phi}}\right),$
(2.6)
where the energy density, $\rho_{{}_{\phi}}$, and pressure, $p_{{}_{\phi}}$,
are given by
$\displaystyle\rho_{{}_{\phi}}$ $\displaystyle=\left(\frac{\partial{\cal
L}}{\partial F}\right)\,(2\,F)-{\cal L}\,,$ (2.7) $\displaystyle
p_{{}_{\phi}}$ $\displaystyle={\cal L}\,,$ (2.8)
and $F=-({\dot{\phi}}^{2}/2)$. The evolution of the scale factor $a(t)$ is
governed by the Friedmann equations:
$\displaystyle\left(\frac{\dot{a}}{a}\right)^{2}\equiv H^{2}$ $\displaystyle=$
$\displaystyle\frac{1}{3m_{p}^{2}}\,\rho_{{}_{\phi}},$ (2.9)
$\displaystyle\frac{\ddot{a}}{a}\equiv\dot{H}+H^{2}$ $\displaystyle=$
$\displaystyle-\frac{1}{6m_{p}^{2}}\,\left(\rho_{{}_{\phi}}+3\,p_{{}_{\phi}}\right),$
(2.10)
where $H\equiv\dot{a}/a$ is the Hubble parameter and $\rho_{{}_{\phi}}$
satisfies the conservation equation
${\dot{\rho}_{{}_{\phi}}}=-3\,H\left(\rho_{{}_{\phi}}+p_{{}_{\phi}}\right)~{}.$
(2.11)
In the standard single field inflationary paradigm, inflation is sourced by a
minimally coupled canonical scalar field $\phi$ with a suitable potential
$V(\phi)$ (see figure 1). For such a canonical scalar field
${\cal L}(F,\phi)=-F-V(\phi),$ (2.12)
Figure 1: This figure schematically depicts a prototype inflaton potential
$V(\phi)$, plotted in solid green curve. The ‘CMB Window’ represents field
values corresponding to the Hubble-exit epochs of scales
$k\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$ that are observable by the
latest CMB missions.
Substituting (2.12) into (2.7) and (2.8), we find
$\displaystyle\rho_{{}_{\phi}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}{\dot{\phi}}^{2}+\;V(\phi),$ $\displaystyle
p_{{}_{\phi}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}{\dot{\phi}}^{2}-\;V(\phi),~{}~{}$ (2.13)
consequently the two Friedmann equations (2.9), (2.10) and the equation (2.11)
become
$\displaystyle H^{2}\equiv\frac{1}{3m_{p}^{2}}\,\rho_{\phi}$ $\displaystyle=$
$\displaystyle\frac{1}{3m_{p}^{2}}\left[\frac{1}{2}{\dot{\phi}}^{2}+V(\phi)\right]\,,$
(2.14) $\displaystyle\dot{H}\equiv\frac{\ddot{a}}{a}-H^{2}$ $\displaystyle=$
$\displaystyle-\frac{1}{2m_{p}^{2}}\,\dot{\phi}^{2}\,,$ (2.15)
$\displaystyle{\ddot{\phi}}+3\,H{\dot{\phi}}+V_{,\phi}(\phi)$ $\displaystyle=$
$\displaystyle 0\,.$ (2.16)
The epoch of inflation at any time $t<t_{\rm end}$ is conveniently marked by
the number of e-folds before the end of inflation
$N_{e}=\log_{e}{\frac{a_{\rm end}}{a(t)}}=\int_{t}^{t_{\rm
end}}H(t^{\prime})dt^{\prime}\,,$ (2.17)
where $H(t)$ is the Hubble parameter during inflation. $a(t)$ and $a_{\rm
end}$ denote the scale factor at time $t$ and at the end of inflation
respectively. Typically a period of quasi-de Sitter inflation lasting for at
least 60-70 e-folds is required in order to address the problems of the
standard hot Big Bang model. We denote $N_{*}$ as the number of e-folds
(before the end of inflation) when the CMB pivot scale
$k_{*}=(aH)_{*}=0.05~{}\rm Mpc^{-1}$ left the comoving Hubble radius during
inflation. For convenience, we have chosen $N_{*}=60$ for the most part of
this work, although the exact value of $N_{*}$ depends upon the particular
detail of reheating history.
The quasi-de Sitter like phase corresponds to the inflaton field rolling
slowly down the potential $V(\phi)$. This slow-roll regime111It is well known
that the slow-roll phase of the inflation is actually a local attractor for
many different models of inflation, see [50, 20] and the references therein.
of inflation, ensured by the presence of the Hubble friction term in the
equation (2.16), is usually characterised by the first two kinematical Hubble
slow-roll parameters $\epsilon_{H}$, $\eta_{H}$, defined by [7]
$\displaystyle\epsilon_{H}$ $\displaystyle=$
$\displaystyle-\frac{\dot{H}}{H^{2}}=\frac{1}{2m_{p}^{2}}\,\frac{\dot{\phi}^{2}}{H^{2}},$
(2.18) $\displaystyle\eta_{H}$ $\displaystyle=$
$\displaystyle-\frac{\ddot{\phi}}{H\dot{\phi}}=\epsilon_{H}+\frac{1}{2\epsilon_{H}}\,\frac{d\epsilon_{H}}{dN_{e}}~{},$
(2.19)
where the slow-roll regime of inflation corresponds to
$\epsilon_{H},~{}\eta_{H}\ll 1~{}.$ (2.20)
The slow-roll regime is also often characterised by the dynamical potential
slow-roll parameters [7], defined by
$\epsilon_{{}_{V}}=\frac{m_{p}^{2}}{2}\left(\frac{V_{,\phi}}{V}\right)^{2}~{},~{}~{}\eta_{{}_{V}}=m_{p}^{2}\,\left(\frac{V_{,\phi\phi}}{V}\right)~{}.$
(2.21)
For small values of these parameters $\epsilon_{H},\,\eta_{H}\ll 1$, one finds
$\epsilon_{H}\simeq\epsilon_{{}_{V}}$ and
$\eta_{H}\simeq\eta_{{}_{V}}-\epsilon_{{}_{V}}$. Using the definition of
Hubble parameter, $H=\dot{a}/a$, we have
$\ddot{a}/a=\dot{H}+H^{2}=H^{2}(1+\dot{H}/H^{2})$. From the expression for
$\epsilon_{H}$ in (2.18), it is easy to see that
$\frac{\ddot{a}}{a}=\big{(}1-\epsilon_{H}\big{)}\,H^{2}~{}.$ (2.22)
This implies that the universe accelerates, ${\ddot{a}}>0$, when
$\epsilon_{H}<1$. Using equation (2.14), the expression for $\epsilon_{H}$ in
(2.18) reduces to $\epsilon_{H}\simeq\frac{3}{2}\frac{\dot{\phi}^{2}}{V}$ when
${\dot{\phi}}^{2}\ll V$.
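As a minimal self-contained illustration (distinct from the full Python code discussed later in this paper), the background equations (2.14)–(2.16) can be integrated in cosmic time with SciPy. The sketch below assumes an illustrative quadratic potential, sets $m_{p}=1$, and stops the integration when $\epsilon_{H}=1$, i.e. when accelerated expansion ends.

```python
# Minimal sketch: integrate the background equations (2.14)-(2.16) in cosmic time
# for an illustrative V = 0.5*m^2*phi^2, in units m_p = 1, and monitor epsilon_H
# of equation (2.18). This is not the full code described elsewhere in this paper.
import numpy as np
from scipy.integrate import solve_ivp

m = 6e-6                                        # illustrative inflaton mass (Planck units)
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def hubble(phi, dphi):
    return np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)   # Friedmann equation (2.14)

def rhs(t, y):
    phi, dphi, N = y                            # N = ln(a), so dN/dt = H
    H = hubble(phi, dphi)
    return [dphi, -3.0 * H * dphi - dV(phi), H] # Klein-Gordon equation (2.16)

def end_of_inflation(t, y):                     # stop when epsilon_H = 1
    phi, dphi, N = y
    return 0.5 * (dphi / hubble(phi, dphi))**2 - 1.0
end_of_inflation.terminal = True

sol = solve_ivp(rhs, [0.0, 1e9], [17.0, 0.0, 0.0],
                events=end_of_inflation, rtol=1e-8, atol=1e-12)

phi, dphi, N = sol.y
eps_H = 0.5 * (dphi / hubble(phi, dphi))**2     # first slow-roll parameter (2.18)
print("total number of e-folds:", N[-1])
```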
Figure 2: This figure schematically illustrates the evolution of the comoving
Hubble radius $(aH)^{-1}$ with scale factor. During inflation $(aH)^{-1}$
decreases which causes physical scales to exit the Hubble radius. After
inflation ends $(aH)^{-1}$ increases, and physical scales begin to re-enter
the Hubble radius. The CMB pivot scale, as used by the Planck mission, is set
at $k_{*}=0.05~{}{\rm Mpc}^{-1}$ and has been depicted by the dashed blue
line. The ‘CMB Window’ corresponds to the scales
$k\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$ that are observable by the
latest CMB missions. Note that $(aH)^{-1}\propto a$ during the radiative
regime and $(aH)^{-1}\propto a^{-1}$ during inflation.
## 3 Quantum fluctuations during inflation
In the standard scenario of a minimally coupled single canonical scalar field
$\phi$ as the inflaton, two gauge-independent massless fields, one scalar, and
one transverse traceless tensor, get excited during inflation and receive
quantum fluctuations that are correlated over super-Hubble scales [51] at late
times.
### Scalar fluctuations
The evolution of the scalar degree of freedom, called the curvature
perturbation222Note that the comoving curvature perturbation ${\cal R}$ is
related to the curvature perturbation on uniform-density hypersurfaces,
$\zeta$, and both are equal during slow-roll inflation as well as on super-
Hubble scales, $k\ll aH$, in general (see [7]). $\zeta$ is described by the
following quadratic Action [7, 52]
$\boxed{S_{(2)}[\zeta]=\frac{1}{2}\int{\rm d}\tau{\rm
d}x^{3}\,z^{2}\,\left[\,(\zeta^{\prime})^{2}-(\partial_{i}\zeta)^{2}\,\right]}~{},$
(3.1)
which upon the change of variable
$v\equiv z\,{\zeta}~{},\quad\mbox{with}\quad
z=am_{p}\sqrt{2\epsilon_{H}}=a\,\frac{\dot{\phi}}{H}\,,$ (3.2)
takes the form
$S_{(2)}\left[v\right]=\frac{1}{2}\int{\rm d}\tau{\rm
d}x^{3}\left[\left(v^{\prime}\right)^{2}-\left(\partial_{i}v\right)^{2}+\frac{z^{\prime\prime}}{z}v^{2}\right]~{},$
(3.3)
where the $(^{\prime})$ denotes derivative with respect to conformal time
$\tau=\int\frac{dt}{a(t)}$ ($\simeq\frac{-1}{aH}$ for quasi-de Sitter
expansion). The variable $v$, which itself is a scalar quantum field like
$\zeta$, is called the Mukhanov-Sasaki variable in literature. Its Fourier
modes $v_{k}$ satisfy the famous Mukhanov-Sasaki equation given by [53, 54]
$\boxed{v^{\prime\prime}_{k}\,+\,\left(k^{2}-\frac{z^{\prime\prime}}{z}\right)v_{k}=0}~{},$
(3.4)
where the effective mass term is given by the following exact expression [55]
$\displaystyle\boxed{\frac{z^{\prime\prime}}{z}=(aH)^{2}\left(2-\epsilon_{1}+\frac{3}{2}\epsilon_{2}+\frac{1}{4}\epsilon_{2}^{2}-\frac{1}{2}\epsilon_{1}\epsilon_{2}+\frac{1}{2}\epsilon_{2}\epsilon_{3}\right)}~{},$
(3.5)
$\displaystyle\Rightarrow\boxed{\frac{z^{\prime\prime}}{z}=(aH)^{2}\left[2+2\epsilon_{H}-3\eta_{H}+2\epsilon_{H}^{2}+\eta_{H}^{2}-3\epsilon_{H}\eta_{H}-\frac{1}{aH}\,\eta^{\prime}_{H}\right]}~{},$
(3.6)
with $\epsilon_{1}=\epsilon_{H}$ and where
$\epsilon_{n+1}=-\frac{d\ln{\epsilon_{n}}}{dN_{e}}~{}$ (3.7)
are the ‘Hubble flow’ parameters. Given a mode $k$, at sufficiently early
times when it is sub-Hubble i.e $k\gg aH$, we can assume $v$ to be in the
Bunch-Davies vacuum [56] satisfying
$v_{k}\rightarrow\frac{1}{\sqrt{2k}}e^{-ik\tau}~{}.$ (3.8)
During inflation as the comoving Hubble radius falls (see figure 2), modes
start becoming super-Hubble i.e $k\ll aH$ and equation (3.4) dictates that
$|v_{k}|\propto z$ and hence $\zeta_{k}$ approaches a constant value. By
solving the Mukhanov-Sasaki equation we can estimate the dimensionless
primordial power spectrum of $\zeta$ using the following relation [51]
$\boxed{P_{\zeta}\equiv\frac{k^{3}}{2\pi^{2}}\,|{\zeta}_{k}|^{2}\Big{|}_{k\ll
aH}=\frac{k^{3}}{2\pi^{2}}\,\frac{|v_{k}|^{2}}{z^{2}}\Big{|}_{k\ll aH}}~{}.$
(3.9)
During slow-roll inflation, the factor
$\frac{z^{\prime\prime}}{z}=\frac{\nu^{2}-0.25}{\tau^{2}}$ with $\nu\approx
1.5+\epsilon_{H}+\frac{\dot{\epsilon}_{H}}{2H\epsilon_{H}}$. Solving the
Mukhanov-Sasaki equation with suitable Bunch-Davies vacuum conditions leads to
the famous slow-roll approximation formula [7]
$\boxed{P_{\zeta}=\frac{1}{8\pi^{2}}\left(\frac{H}{m_{p}}\right)^{2}\frac{1}{\epsilon_{H}}}~{}.$
(3.10)
Note that one could also directly try to solve for the Fourier modes of the
comoving curvature perturbation $\zeta$ (instead of the Mukhanov-Sasaki
variable $v$) which satisfies the equation
$\boxed{{\zeta}^{\prime\prime}_{k}+2\left(\frac{z^{\prime}}{z}\right){\zeta}^{\prime}_{k}+k^{2}{\zeta}_{k}=0}$
(3.11)
and implement the corresponding Bunch-Davies initial conditions for
${\zeta}_{k}$. The friction term in equation (3.11) is given by
$\boxed{\frac{z^{\prime}}{z}=aH\,(1+\epsilon_{H}-\eta_{H})}~{}.$ (3.12)
Before moving forward, we stress that the slow-roll regime of inflation
necessarily requires both the slow-roll parameters to be small i.e
$\epsilon_{H}\ll 1$ and $\eta_{H}\ll 1$. Violation of either of these
conditions invalidates the above analytical treatment. When either of the
slow-roll conditions is violated, which is the situation in the context of
primordial black hole formation, a more accurate determination of $P_{\zeta}$
is provided by solving the Mukhanov-Sasaki equation (3.4) numerically. The
computation of power spectrum when the slow-roll approximation is violated
will be our primary focus.
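To make the procedure concrete, the following minimal sketch integrates equation (3.4) for a single mode $k$ with Bunch-Davies initial conditions (3.8), using the pure de Sitter limit $z^{\prime\prime}/z=2/\tau^{2}$ in place of the exact expression (3.5). In a realistic computation, $z^{\prime\prime}/z$ must be constructed from the simulated background; this sketch is only meant to illustrate the mode-by-mode strategy.

```python
# Minimal sketch of a mode-by-mode integration of the Mukhanov-Sasaki equation (3.4)
# with Bunch-Davies initial conditions (3.8). For simplicity we use the de Sitter
# limit z''/z = 2/tau^2 instead of the exact expression (3.5), which in a realistic
# run must be computed from the simulated background.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                                   # comoving wavenumber (arbitrary units)
tau_i, tau_f = -1e3 / k, -1e-3 / k        # from deep sub-Hubble to far super-Hubble

def ms_rhs(tau, y):
    v_re, dv_re, v_im, dv_im = y
    acc = 2.0 / tau**2 - k**2             # (z''/z - k^2) in the de Sitter limit
    return [dv_re, acc * v_re, dv_im, acc * v_im]

# Bunch-Davies vacuum: v_k = exp(-i k tau)/sqrt(2k), v_k' = -i k v_k
v0 = np.exp(-1j * k * tau_i) / np.sqrt(2.0 * k)
dv0 = -1j * k * v0
y0 = [v0.real, dv0.real, v0.imag, dv0.imag]

sol = solve_ivp(ms_rhs, [tau_i, tau_f], y0, rtol=1e-10, atol=1e-12)
v_end = sol.y[0, -1] + 1j * sol.y[2, -1]

# P_zeta = (k^3 / 2 pi^2) |v_k|^2 / z^2 on super-Hubble scales, equation (3.9);
# only |v_k|^2 is printed here, since z depends on the background (H, epsilon_H).
print("|v_k|^2 at tau_f:", abs(v_end)**2)
```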
### Tensor fluctuations
The corresponding quadratic Action for tensor fluctuations is given by [7, 52]
$\boxed{S_{(2)}[\gamma_{ij}]=\frac{1}{2}\int{\rm d}\tau{\rm
d}^{3}x\,a^{2}\left(\frac{m_{p}}{2}\right)^{2}\left[(\gamma_{ij}^{\prime})^{2}-(\partial\gamma_{ij})^{2}\right]}\,.$
(3.13)
The Mukhanov-Sasaki variables for tensor fluctuations are defined as
$\frac{m_{p}}{2}\,a\,\gamma_{ij}\equiv\left(\begin{smallmatrix}h_{+}&h_{\times}&0\\\
h_{\times}&-h_{+}&0\\\ 0&0&0\end{smallmatrix}\right)$ (3.14)
or,
$\frac{m_{p}}{2}\,a\,\gamma_{ij}\equiv\sum_{s=+,\times}\Pi_{ij}^{s}\ h_{s}\,,$
(3.15)
where $\Pi^{+}$ and $\Pi^{\times}$ are the 2 polarization modes, written as
$\Pi^{+}_{ij}=\begin{pmatrix}1&0&0\\\ 0&-1&0\\\
0&0&0\end{pmatrix}\quad\quad{\rm
and}\quad\quad\Pi^{\times}_{ij}=\begin{pmatrix}0&1&0\\\ 1&0&0\\\
0&0&0\end{pmatrix}\,.$ (3.16)
Thus,
$\gamma_{ij}=\frac{2}{m_{p}}\sum_{s=+,\times}\Pi_{ij}^{s}\,\frac{h_{s}}{a}\,.$
(3.17)
The evolution equation for the mode functions (by dropping the ‘s’ subscript
and remembering that it is summed over for 2 polarization states) is given by
$\boxed{h_{k}^{\prime\prime}+\left(k^{2}-\frac{a^{\prime\prime}}{a}\right)h_{k}=0}\,.$
(3.18)
The subsequent computation of tensor power spectrum
$P_{T}(k)\equiv
2\times\frac{k^{3}}{2\pi^{2}}\,\frac{|h_{k}|^{2}}{a^{2}}\Big{|}_{k\ll aH}\,,$
(3.19)
under quasi-de Sitter approximation leads to [7]
$\boxed{P_{T}(k)=\frac{2}{\pi^{2}}\left(\frac{H}{m_{p}}\right)^{2}}\,.$ (3.20)
Note that, unlike the Mukhanov-Sasaki equation (3.4) for scalar fluctuations,
the tensor mode equation (3.18) does not depend upon $z$, rather it depends
only upon the scale factor $a$. Hence, as long as the quasi-de Sitter
approximation is valid, i.e $\epsilon_{H}\ll 1$, power spectrum of tensor
fluctuations does not get affected by an appreciable amount even if slow-roll
is violated. Although, this statement is true only at linear order in
perturbation theory. Tensor fluctuations at second order in perturbation
theory can be induced by large first-order scalar fluctuations [57, 58, 59,
60] (also see [61] and references therein).
### 3.1 Large scale primordial fluctuations
On large cosmological scales which are accessible to CMB observations, the
scalar power spectrum typically takes the form of a power law represented by
$P_{\zeta}(k)=A_{{}_{S}}\left(\frac{k}{k_{*}}\right)^{n_{{}_{S}}-1},$ (3.21)
where $A_{{}_{S}}=P_{\zeta}(k_{*})$ is the amplitude of the scalar power
spectrum at the pivot scale $k=k_{*}$, given by333Note that in general, $k$
may correspond to any observable CMB scale in the range
$k\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$. However, in order to derive
constraints on the inflationary observables $\\{n_{{}_{S}},r\\}$, we mainly
focus on the CMB pivot scale, namely $k\equiv k_{*}=0.05~{}{\rm Mpc}^{-1}$.
$\boxed{A_{{}_{S}}=\frac{1}{8\pi^{2}}\left(\frac{H}{m_{p}}\right)^{2}\frac{1}{\epsilon_{H}}\,\bigg{|}_{\phi=\phi_{*}}}~{},$
(3.22)
where $\phi_{*}$ is the value of the inflaton field at the epoch of Hubble
exit of the CMB pivot scale $k_{*}$. The scalar spectral tilt $n_{{}_{S}}$, in
the slow-roll regime is given by [7]
$\displaystyle\boxed{n_{{}_{S}}-1\equiv\frac{d\,\mathrm{ln}P_{\zeta}}{d\,\mathrm{ln}k}=2\eta_{H}-4\epsilon_{H}}~{}.$
(3.23)
Similarly the tensor power spectrum, in the slow-roll limit, is represented by
$P_{T}(k)=A_{{}_{T}}\left(\frac{k}{k_{*}}\right)^{n_{{}_{T}}},$ (3.24)
with the amplitude of tensor power spectrum at the CMB pivot scale is given by
[7, 51]
$\boxed{A_{{}_{T}}\equiv
P_{T}(k_{*})=\frac{2}{\pi^{2}}\left(\frac{H}{m_{p}}\right)^{2}\bigg{|}_{\phi=\phi_{*}}}~{},$
(3.25)
and the tensor spectral index (with negligible running) is given by
$\boxed{n_{{}_{T}}=-2\,\epsilon_{H}}~{}.$ (3.26)
The tensor-to-scalar ratio $r$ is defined by
$\boxed{r\equiv\frac{A_{{}_{T}}}{A_{{}_{S}}}=16\,\epsilon_{H}}~{},$ (3.27)
yielding the single field consistency relation
$\boxed{r=-8\,n_{{}_{T}}}~{}.$ (3.28)
Hence the slow-roll parameters $\epsilon_{H}$ and $\eta_{H}$ play an important
role in characterising the power spectra of scalar and tensor fluctuations
during inflation. Before going forward, we briefly discuss the implications of
the latest CMB observations for the slow-roll parameters as well as for other
relevant inflationary observables. In order to relate the CMB observables to
the inflaton potential $V(\phi)$, we work with the potential slow-roll
parameters defined in equation (2.21).
Consider a canonical scalar field minimally coupled to gravity and having the
potential
$V(\phi)=V_{0}\,f\left(\frac{\phi}{m_{p}}\right)~{}.$ (3.29)
The potential slow-roll parameters (2.21) are given by
$\displaystyle\epsilon_{{}_{V}}=\frac{m_{p}^{2}}{2}\left(\frac{f_{,\phi}}{f}\right)^{2}~{},$
(3.30)
$\displaystyle\eta_{{}_{V}}=m_{p}^{2}\left(\frac{f_{,\phi\phi}}{f}\right)~{}.$
(3.31)
In the slow-roll limit $\epsilon_{{}_{V}},\,\eta_{{}_{V}}\ll 1$, the scalar
power spectrum is given by the expression (3.21) with the amplitude of scalar
power at the CMB pivot scale $k\equiv k_{*}=0.05~{}{\rm Mpc}^{-1}$ expressed
as [7]
$A_{{}_{S}}\equiv
P_{\cal\zeta}(k_{*})\simeq\frac{1}{24\pi^{2}}\frac{V_{0}}{m_{p}^{4}}\frac{f\left(\phi_{k}\right)}{\epsilon_{{}_{V}}(\phi_{k})}\bigg{|}_{k=k_{*}}~{},$
(3.32)
and the scalar spectral index (with negligible running) is given by
$n_{{}_{S}}\simeq
1+2\,\eta_{{}_{V}}(\phi_{*})-6\,\epsilon_{{}_{V}}(\phi_{*})~{}.$ (3.33)
Similarly, the amplitude of the tensor power spectrum at the CMB pivot scale is
given by
$A_{{}_{T}}\equiv
P_{T}(k_{*})=\frac{2}{\pi^{2}}\left(\frac{H}{m_{p}}\right)^{2}\bigg{|}_{\phi=\phi_{*}}\simeq\frac{2}{3\pi^{2}}\frac{V_{0}}{m_{p}^{4}}f\left(\phi_{*}\right)~{},$
(3.34)
and the tensor spectral index (3.26) becomes
$n_{{}_{T}}\simeq-2\,\epsilon_{{}_{V}}(\phi_{*})~{},$ (3.35)
and the tensor-to-scalar ratio (3.27) can be written as
$r\simeq 16\,\epsilon_{{}_{V}}(\phi_{*})~{},$ (3.36)
satisfying the single field consistency relation (3.28). From the CMB
observations of Planck 2018 [15], we have
$A_{{}_{S}}=2.1\times 10^{-9}~{},$ (3.37)
while the $2\sigma$ constraint on the scalar spectral index is given by
$n_{{}_{S}}\in[0.957,0.976]~{}.$ (3.38)
Similarly the constraint on the tensor-to-scalar ratio $r$, from the latest
combined observations of Planck 2018 [15] and BICEP/Keck [17], is given by
$r\leq 0.036~{},$ (3.39)
which translates into $A_{{}_{T}}\leq 3.6\times 10^{-2}\,A_{{}_{S}}$. Equation
(3.34) helps place the following upper bound on the inflationary Hubble scale
$H^{\rm inf}$ and the energy scale during inflation $E_{\inf}$
$\displaystyle H^{\rm inf}\leq 4.7\times 10^{13}~{}{\rm GeV}~{},$ (3.40)
$\displaystyle E_{\inf}\equiv\left[\sqrt{3}\,m_{p}\,H^{\rm
inf}\right]^{1/2}\leq 1.4\times 10^{16}~{}{\rm GeV}~{}~{}.$ (3.41)
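These numbers can be verified in a few lines, using $A_{{}_{T}}=\frac{2}{\pi^{2}}\left(H/m_{p}\right)^{2}\leq r\,A_{{}_{S}}$ with the values of (3.37) and (3.39):

```python
# Quick numerical check of the bounds (3.40)-(3.41), using A_T = (2/pi^2)(H/m_p)^2 <= r*A_S.
import numpy as np

m_p = 2.43e18            # reduced Planck mass in GeV
A_S = 2.1e-9             # scalar amplitude (3.37)
r_max = 0.036            # upper bound on the tensor-to-scalar ratio (3.39)

H_inf_max = m_p * np.sqrt(r_max * A_S * np.pi**2 / 2.0)
E_inf_max = np.sqrt(np.sqrt(3.0) * m_p * H_inf_max)

print(f"H_inf <= {H_inf_max:.2e} GeV")   # ~ 4.7e13 GeV
print(f"E_inf <= {E_inf_max:.2e} GeV")   # ~ 1.4e16 GeV
```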
Similarly the CMB bound on $r$ when combined with (3.36) translates into an
upper bound on the first slow-roll parameter
$\epsilon_{H}\simeq\epsilon_{{}_{V}}\leq 0.00225~{},$ (3.42)
rendering the tensor tilt from equation (3.35) to be negligibly small
$|n_{{}_{T}}|\leq 0.0045~{}.$ (3.43)
Given the upper limit on $\epsilon_{{}_{V}}$, using the CMB bound on
$n_{{}_{S}}$ from (3.38) in (3.33), we infer that the second slow-roll
parameter is negative and obtain interesting upper and lower limits on its
magnitude, given by
$|\eta_{H}|\in[0.0075,0.0215]~{}.$ (3.44)
The EOS $w_{\phi}$ of the inflaton field is given by
$w_{\phi}=\frac{\frac{1}{2}\dot{\phi}^{2}-V(\phi)}{\frac{1}{2}\dot{\phi}^{2}+V(\phi)}\simeq-1+\frac{2}{3}\epsilon_{{}_{V}}(\phi)\,,$
(3.45)
Therefore one finds from (3.42) the following constraint on the inflationary
EOS at the pivot scale
$w_{\phi}\leq-0.9985\,,$ (3.46)
implying that the expansion of the universe during inflation was near
exponential (quasi-de Sitter like).
Figure 3: This figure shows the Starobinsky potential (3.48), with the CMB pivot
scale $k_{*}=0.05~{}{\rm Mpc}^{-1}$ marked by a blue star and the CMB
window $k_{\rm CMB}\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$ shown as a grey
shaded region in field space. From this figure, it is clear that the CMB window
constitutes only a tiny portion of the available field space between
$\phi_{\rm CMB}$ and the end of inflation $\phi_{e}$.
The CMB observations, in the context of single field slow-roll inflationary
paradigm, favours asymptotically-flat potentials (featuring either one or two
plateau wings) with $n_{{}_{S}}\simeq 0.965$ and $r\leq 0.036$. A typical
plateau-potential is demonstrated in figure 1 and this is the standard/vanilla
scenario. Given that the power spectrum is almost scale-invariant with a slightly
red tilt, i.e. $n_{{}_{S}}-1\lesssim 0$, large-scale fluctuations are more
important while nothing drastic is expected to happen on smaller cosmological
scales that are super-Hubble at the end of inflation.
Before proceeding further to discuss small-scale inflationary fluctuations,
let us make our nomenclature concrete (which is consistent with the standard
nomenclature in the inflationary literature).
• Quasi-de Sitter inflation corresponds to the condition $\epsilon_{H}\ll 1$.
• Slow-roll inflation corresponds to $\epsilon_{H},\,\eta_{H}\ll 1$.
This distinction will be important for the rest of the discussions in this
paper. Under either of the aforementioned assumptions, the expression for
conformal time is given by
$\displaystyle\boxed{-\tau\simeq\frac{1}{aH}}\,.$ (3.47)
### 3.2 Small-scale primordial fluctuations
As mentioned above, the recent CMB observations support the scenario of the
inflaton field rolling slowly down an asymptotically-flat potential at the
time when the observable CMB scales made their Hubble exit during inflation.
However, the current CMB and LSS observations probe only about 7-8 e-folds of
inflation around the Hubble exit time of the CMB pivot scale. We explicitly
mention that CMB observations probe primordial fluctuations with comoving
scales $k_{\rm CMB}\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$ (which
includes pivot scale $k_{*}=0.05~{}{\rm Mpc}^{-1}$) corresponding to multipole
$l\in\left[2,2500\right]$ in the angular sky. Additionally, Lyman-$\alpha$
forest observations enforce constraints on the primordial power spectrum up to
$k\simeq{\cal O}(1)~{}{\rm Mpc}^{-1}$ (see [29]).
Hence a large portion of the evolution during the inflationary phase, accounting
for roughly 50 e-folds of expansion, corresponding to scales smaller than
those probed by CMB, remains observationally inaccessible at present.
Consequently the associated dynamics of the inflaton field also remains
unprobed. For example, figure 3 demonstrates that the CMB window constitutes
only a tiny part of the observationally available field space between the
largest scales in the sky and the smallest scale at the end of inflation for
Starobinsky potential [1, 62]
$\boxed{V(\phi)=V_{0}\,\left(1-e^{-\frac{2}{\sqrt{6}}\,\frac{\phi}{m_{p}}}\right)^{2}}\,.$
(3.48)
Any deviation from the quasi-de Sitter expansion and/or departure from the
slow-roll regime $\epsilon_{{}_{H}},\eta_{{}_{H}}\ll 1$ that might be
triggered by a change in the dynamics of the inflaton field, would lead to
interesting observational consequences on small scales. In particular if the
inflaton potential possesses a near inflection point-like broad feature at
some intermediate field values, then the scalar quantum fluctuations
corresponding to scales becoming super-Hubble around the time when the
inflaton rolls past such features, might receive enough amplification to
facilitate the formation of Primordial Black Holes (PBHs) upon their Hubble
re-entry during the radiative epoch.
Figure 4: This figure schematically depicts a prototype plateau potential,
plotted in solid green curve. The ‘CMB Window’ represents field values
corresponding to the Hubble-exit epochs of scales
$k\in\left[0.0005,0.5\right]~{}{\rm Mpc}^{-1}$ that are observable by the
latest CMB missions. The potential exhibits a small-scale feature (shown in
the salmon colour shading) in the form of a flat inflection point-like segment
which results in ultra slow-roll (USR) inflation. After exiting the first
slow-roll phase (SR-I) near the CMB window, the inflaton enters into an USR
phase, during which the second slow-roll condition is violated, namely
$\eta_{H}\simeq+3$. This leads to an enhancement of the power spectrum at small
scales. Later, the inflaton emerges from the USR into another slow-roll phase
(SR-II) before the end of inflation.
We note that there is a plethora of possible features that would lead to
deviations from the standard scale-invariant power spectrum [63]. However, in this
paper we will only focus on potentials with a tiny local bump/dip-like feature
[44] in order to illustrate the efficiency of our numerical analysis. Our code
can be used to simulate a number of different types of features in the
inflaton potential.
PBH formation requires the enhancement of the inflationary power spectrum by
roughly a factor of $10^{7}$ within less than 40 e-folds of expansion (on
scales smaller than the pivot scale $k_{*}$) as illustrated in figure 5.
Therefore the quantity $\Delta\ln{\epsilon_{H}}/\Delta N$, and hence also
$|\eta_{H}|$, can grow to become of order unity, thereby violating the second
slow-roll condition [64]. In fact the second Hubble slow-roll parameter
$|\eta_{H}|$ becomes larger than unity even though $\epsilon_{H}$ itself
remains much smaller than unity. As a result, equation (3.22) can no longer be
trusted to compute the power spectrum and one must determine $P_{\cal R}$ by
numerically integrating the Mukhanov-Sasaki equation (3.4).
We proceed as follows. We first discuss the simulations of inflationary
background dynamics in section 4, where we also discuss how to generate phase-
space portrait $\\{\phi,\dot{\phi}\\}$ during inflation. In section 5.1, we
introduce our numerical scheme for studying quantum fluctuations during slow-
roll inflation. We work with convex as well as asymptotically-flat potentials.
In section 5.2, we apply our numerical scheme to potentials featuring a local
bump/dip like feature that facilitates the amplification of scalar power on
small primordial scales. We demonstrate that the slow-roll formula (3.22) underestimates both the position and the height of the peak in the scalar power spectrum ${\cal P}_{\zeta}(k)$ for both types of the aforementioned features, and hence one must solve the Mukhanov-Sasaki equation (3.4) numerically to
estimate the power spectrum accurately. We also demonstrate that the growth of
the power spectrum obeys the steepest growth bounds discussed in [65, 66, 67,
68].
Figure 5: This figure schematically illustrates the typical amplification of
inflationary primordial power spectrum at smaller length scales required for
PBH formation.
## 4 Numerical analysis of inflationary background dynamics
A complete analysis of the inflationary background dynamics can be obtained
from the evolution of $\phi$, $\dot{\phi}$ and $H$. All of these quantities
can be simulated by numerically solving equations (2.14), (2.15) and (2.16).
The evolution of the scale factor follows directly from $H=\dot{a}/a$. Our
system is defined by the following set of equations (as a function of cosmic
time $t$)
$\displaystyle H^{2}$
$\displaystyle=\frac{1}{3m_{p}^{2}}\left[\frac{1}{2}{\dot{\phi}}^{2}+V(\phi)\right]\,,$
(4.1) $\displaystyle\dot{H}$ $\displaystyle=-\frac{\dot{\phi}^{2}}{2m_{p}^{2}}\,,$
(4.2) $\displaystyle\ddot{\phi}$
$\displaystyle=-3H\dot{\phi}-V_{,\phi}(\phi)\,,$ (4.3)
where the functional form of the potential $V(\phi)$ is given by the specific
inflationary model. However, the rest of the algorithm is largely model-
independent. We can re-write the potential as
$V(\phi)=V_{0}\,f(\phi)\,.$ (4.4)
In order to carry out numerical simulations, it is convenient to write down
the dynamical equations in terms of dimensionless variables (which also
ensures that we do not need to worry about keeping track of units).
Furthermore, it is important to re-scale the time variable by a factor $S$, which can be suitably chosen according to the energy scale of the dynamics (depending upon the potential, we usually choose $S\in[10^{-5},\,10^{-3}]$). Our primary dimensionless
variables are defined as
$\displaystyle T$ $\displaystyle=\left(t\,m_{p}\right)\,S\,,$ (4.5)
$\displaystyle x$ $\displaystyle=\frac{\phi}{m_{p}}\,,$ (4.6) $\displaystyle
y$ $\displaystyle=\left(\frac{\dot{\phi}}{m_{p}^{2}}\right)\,\frac{1}{S}\,,$
(4.7) $\displaystyle z$
$\displaystyle=\left(\frac{H}{m_{p}}\right)\,\frac{1}{S}\,,$ (4.8)
$\displaystyle A$ $\displaystyle=\left(a\,m_{p}\right)\,S\,.$ (4.9)
In terms of these variables, the dynamical equations (to be simulated) take
the form
$\displaystyle\frac{{\rm d}x}{{\rm d}T}$ $\displaystyle=y\,,$ (4.10)
$\displaystyle\frac{{\rm d}y}{{\rm d}T}$
$\displaystyle=-3\,z\,y-\frac{v_{0}}{S^{2}}\,f_{,x}(x)\,,$ (4.11)
$\displaystyle\frac{{\rm d}z}{{\rm d}T}$
$\displaystyle=-\frac{1}{2}\,y^{2}\,,$ (4.12) $\displaystyle\frac{{\rm
d}A}{{\rm d}T}$ $\displaystyle=A\,z\,.$ (4.13)
We can also define the dimensionless potential to be
$\frac{V(\phi)}{m_{p}^{4}}\equiv v_{0}\,f(x)=\frac{V_{0}}{m_{p}^{4}}\,f(x)\,.$
(4.14)
We can solve the aforementioned set of equations with appropriate initial
conditions. In our analysis, we use the odeint function provided in the scipy.integrate package. By incorporating initial conditions
$\\{x_{i},\,y_{i},\,z_{i},\,A_{i}\\}$ for the primary dynamical variables
$\\{x,\,y,\,z,\,A\\}$, we simulate their time evolution during inflation.
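As a concrete illustration, the following is a minimal sketch (not our supplementary code itself) of such an odeint-based background solver, assuming the Starobinsky shape function from (3.48); the values of $S$, $v_{0}$, $x_{i}$ and $A_{i}$ are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import odeint

S  = 1.0e-4    # time re-scaling factor, typically chosen in [1e-5, 1e-3]
v0 = 1.0e-10   # V0/mp^4, rough starting value (calibrated later against A_S)

def f(x):      # Starobinsky shape function, V(phi) = V0 * f(phi/mp), eq. (3.48)
    return (1.0 - np.exp(-np.sqrt(2.0/3.0)*x))**2

def f_x(x):    # df/dx
    e = np.exp(-np.sqrt(2.0/3.0)*x)
    return 2.0*np.sqrt(2.0/3.0)*e*(1.0 - e)

def background_rhs(u, T):
    x, y, z, A = u
    return [y,                                  # eq. (4.10)
            -3.0*z*y - (v0/S**2)*f_x(x),        # eq. (4.11)
            -0.5*y**2,                          # eq. (4.12)
            A*z]                                # eq. (4.13)

x_i, y_i, A_i = 6.0, 0.0, 1.0e-3                     # illustrative initial data
z_i = np.sqrt(y_i**2/6.0 + v0*f(x_i)/(3.0*S**2))     # eq. (4.20)
T   = np.linspace(0.0, 5.0e3, 100000)                # T_f found by trial (see step 2 below)
sol = odeint(background_rhs, [x_i, y_i, z_i, A_i], T)
x, y, z, A = sol.T                                   # solution arrays
```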
Accordingly, we determine the crucial derived/secondary (dimensionless) dynamical variables from the primary ones as follows. (Note that the observables $A_{{}_{S}},\,A_{{}_{T}},\,n_{{}_{S}},\,n_{{}_{T}},\,{\rm and}\,r$ are related to inflationary scalar and tensor fluctuations, and the expressions given here hold under the slow-roll approximation, i.e. $\epsilon_{H},|\eta_{H}|\ll 1$, during which they can be determined purely from the dynamics of background quantities such as $H,\,\epsilon_{H},\,\eta_{H}$. The computation of the inflationary power spectra when slow-roll is violated is described in section 5.2.)
$\displaystyle N$ $\displaystyle=\log{\frac{A}{A_{i}}}$ (4.15)
$\displaystyle\epsilon_{H}$
$\displaystyle=\frac{1}{2}\,\frac{y^{2}}{z^{2}}~{},$ $\displaystyle\eta_{H}$
$\displaystyle=-\frac{1}{yz}\frac{{\rm d}y}{{\rm d}T}\,,$ (4.16)
$\displaystyle A_{{}_{S}}$
$\displaystyle=\frac{1}{8\pi^{2}}\,\frac{\left(Sz\right)^{2}}{\epsilon_{H}}~{},$
$\displaystyle A_{{}_{T}}$
$\displaystyle=\frac{2}{\pi^{2}}\,\left(Sz\right)^{2}\,,$ (4.17)
$\displaystyle n_{{}_{S}}$ $\displaystyle=1+2\,\eta_{H}-4\,\epsilon_{H}~{},$
$\displaystyle n_{{}_{T}}$ $\displaystyle=-2\,\epsilon_{H}\,,$ (4.18)
$\displaystyle r$ $\displaystyle=16\,\epsilon_{H}\,.$ (4.19)
We define $N_{T}$ to be the number of e-folds of accelerated expansion
realised in between an arbitrary initial time and the end of inflation, which
is marked by $\epsilon_{H}=1$. We then define the more important quantity
$N_{e}=N_{T}-N$ as the number of e-folds before the end of inflation. Note
that $N_{e}=0$ at the end of inflation while $N_{e}>0$ at early times. This
will be our primary time variable against which we will be plotting the
dynamics of different inflationary observables. In order to realise an adequate amount of inflation, i.e. $N_{T}>60$, the initial value of the scalar field must be large enough (and is model dependent). For most large-field potentials, this value is of order $\phi_{i}\lesssim{\cal O}(10)\,m_{p}$.
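Continuing the sketch above (a schematic illustration only), the derived quantities (4.15)-(4.19) and the e-fold counters $N_{T}$ and $N_{e}$ can be read off from the solution arrays as follows.

```python
import numpy as np

N     = np.log(A/A[0])                         # eq. (4.15)
eps_H = 0.5*y**2/z**2                          # eq. (4.16)
dy_dT = -3.0*z*y - (v0/S**2)*f_x(x)            # reuse eq. (4.11)
eta_H = -dy_dT/(y*z)                           # eq. (4.16); ill-defined where y = 0
A_S   = (S*z)**2/(8.0*np.pi**2*eps_H)          # eq. (4.17)
n_S   = 1.0 + 2.0*eta_H - 4.0*eps_H            # eq. (4.18)
r     = 16.0*eps_H                             # eq. (4.19)

i_end = np.argmax(eps_H >= 1.0)    # first index with eps_H >= 1 (end of inflation)
N_T   = N[i_end]                   # total number of e-folds of inflation
N_e   = N_T - N                    # e-folds before the end of inflation
```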
Composing the code involves writing down the dimensionless equations in the appropriate syntax and solving them with an ODE solver. See our supplementary Python code (https://github.com/bhattsiddharth/NumDynInflation) for details. For a particular model of interest, we need to input the inflaton potential in the form $\frac{V(\phi)}{m_{p}^{4}}=v_{0}\,f(x)$. Since the slow-roll parameters and the duration $N_{T}$ do not strongly depend on $v_{0}$, we can initially set its value to roughly $v_{0}=10^{-10}$. We can later adjust the value of $v_{0}$ to yield the correct CMB-normalised value of the scalar power spectrum (3.37) at the pivot scale $N_{e}=N_{*}$. One can proceed in the following
step-by-step algorithm.
Figure 6: Time evolution of the number of e-folds (the scale factor on a logarithmic scale) of expansion of the universe is shown for the Starobinsky potential (3.48). For most of inflation, the expansion is almost exponential (quasi-de Sitter), i.e. $a\sim e^{Ht}$, leading to a rapid growth in the number of e-folds within a small amount of time. After the end of inflation, the expansion decelerates, leading to a much slower growth of the scale factor.
1.
After setting the parameters of the potential and defining the function
$f(x)$, we need to incorporate initial conditions for the four primary
variables $\\{x,\,y,\,z,\,A\\}$. We enter appropriate initial conditions
$x_{i},\,y_{i}$ and $A_{i}$ in the following way. $A_{i}$ can be set
arbitrarily in a spatially flat universe, however depending upon the energy
scale of inflation, one can provide an appropriate value. We suggest a typical
$A_{i}=1\times 10^{-3}$, although its precise value does not affect the
dynamics. In regard to the initial value of $x$, we need to ensure that
$x_{i}$ is large enough (or small enough if we are working with symmetry
breaking hilltop type potentials) to yield adequate amount of inflation, i.e
$N_{T}\geq 70$. As mentioned before, the typical value for large field models
is $x_{i}\lesssim{\cal O}(10)$.
Since we will be mostly working with potentials that exhibit slow-roll
behaviour at initial times, and given that slow-roll trajectory is an
attractor in relatively large field models [20], we can safely set $y_{i}=0$,
as long as $x_{i}$ is large enough. One can also incorporate slow-roll initial conditions from the beginning, namely $y_{i}=-\frac{v_{0}}{3\,S^{2}}\,\frac{f_{,x}}{z}$ (which follows from setting ${\rm d}y/{\rm d}T\simeq 0$ in (4.11)), as is usually done in practice. (Note that for the phase-space analysis we need to incorporate arbitrary values of $x_{i},\,y_{i}$, consistent with a fixed initial $z_{i}$, which may be away from the slow-roll trajectory, as discussed below.) Finally, the initial value of $z$ can be incorporated in terms of $x_{i},\,y_{i}$ using the dimensionless Friedmann equation
$\displaystyle
z_{i}=\sqrt{\frac{1}{6}\,y_{i}^{2}+\frac{1}{3}\,v_{0}\,f(x_{i})\,\frac{1}{S^{2}}}\,.$
(4.20)
2.
We then proceed to solve the system of equations by taking adequately small
time steps $T$ in the appropriate range $T\in[T_{i}=0,\,T_{f}]$. We then plot
$N$ vs $T$ as given in figure 6 for Starobinsky potential. Typically, $N$
grows linearly with $T$ during near exponential inflation and a substantial
decrease in the rate of growth of $N$ indicates the end of inflation.
3.
In order to concretely determine the value of $N_{T}$, we plot $\epsilon_{H}$
vs $N$, and note the value of $N$ after which $\epsilon_{H}\geq 1$. By
definition, initially $N=0$. If $N_{T}<70$, then we repeat this step by increasing the value of $x_{i}$ until we get $N_{T}\geq 70$. (Alternatively, if inflation has not ended by the end of our simulation, i.e. $\epsilon_{H}<1$, then one can either increase the value of $T_{f}$ or decrease $x_{i}$; we suggest the latter. Note that if one simulates the cosmological equations in terms of the number of e-folds $N$, rather than cosmic time $t$, this step can usually be avoided by simulating the system from $N=0$ to $N=70$; however, one still has to adjust the value of $x_{i}$ in order to get enough inflation.) We can then define the number of e-folds before the end of inflation to be $N_{e}=N_{T}-N$.
Figure 7: This figure shows the evolution of the inflaton field $\phi$ and its speed $\dot{\phi}$ (left panel), and of the Hubble parameter $H$ (right panel), as functions of the number of e-folds before the end of inflation $N_{e}$ for the Starobinsky potential (3.48). Note that during slow-roll inflation, $\dot{\phi}$ and $H$ are nearly constant, while $\phi$ changes quite slowly. However, $\phi$ and $H$ begin to change rapidly towards the end of inflation. After inflation ends, $\phi$ and $\dot{\phi}$ start oscillating around the minimum of the potential (which is not shown in this figure).
Figure 8: Evolution of the slow-roll parameters $\epsilon_{H}$ and $\eta_{H}$
is shown as a function of the number of e-folds before the end of inflation
$N_{e}$ for Starobinsky potential (3.48). From this plot, it is easy to notice
that at early times when $N_{e}\gg 1$, the slow-roll conditions are satisfied
i.e $\epsilon_{H},\,|\eta_{H}|\ll 1$. However, the slow-roll conditions are
violated towards the end of inflation (marked by $N_{e}=0$ and
$\epsilon_{H}=1$).
4.
The pivot scale can then be fixed to an appropriate value, for example
$N_{e}=60$, as used in this work. Figure 7 describes the evolution of
$\phi,\,\dot{\phi},\,{\rm and}\,H$, while figure 8 illustrates the dynamics of
slow-roll parameters $\epsilon_{H},\,|\eta_{H}|$ for Starobinsky potential, as
determined from our code. As mentioned before, we usually plot the dynamics of
inflation as a function of $N_{e}$.
5.
In order to accurately fix the value of $v_{0}$, we need to impose $A_{S}=2.1\times 10^{-9}$ at the pivot scale $N_{e}=60$. If the computed $A_{S}$ is lower than this value for the chosen $v_{0}$, then we increase the value of $v_{0}$ (and vice versa) until we arrive at the correct value of $A_{S}$, and we fix the corresponding value of $v_{0}$; a minimal sketch of this calibration loop is given below.
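A minimal sketch of this calibration loop, under the same illustrative assumptions as the earlier snippets, is the following; the rescaling step uses the fact that, in slow roll, $A_{S}$ scales approximately linearly with $v_{0}$.

```python
import numpy as np

A_S_target, N_pivot = 2.1e-9, 60.0

for _ in range(10):
    zi = np.sqrt(v0*f(x_i)/(3.0*S**2))             # eq. (4.20) with y_i = 0
    xb, yb, zb, Ab = odeint(background_rhs, [x_i, 0.0, zi, A_i], T).T
    Nb, epsb = np.log(Ab/Ab[0]), 0.5*yb**2/zb**2
    Ne_b  = Nb[np.argmax(epsb >= 1.0)] - Nb        # assumes inflation ends within T
    ip    = np.argmin(np.abs(Ne_b - N_pivot))      # index of the pivot N_e = 60
    A_S_p = (S*zb[ip])**2/(8.0*np.pi**2*epsb[ip])  # slow-roll A_S at the pivot
    if abs(A_S_p/A_S_target - 1.0) < 1.0e-3:
        break
    v0 *= A_S_target/A_S_p                         # approximate linear scaling of A_S with v0
```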
Following the aforementioned algorithm, we can easily simulate the
inflationary background dynamics and investigate the evolution of relevant
quantities of our interest. Before going forward, we would like to stress that
many of the aforementioned steps (in the present version of our code) are
rather meant to be carried out manually by the user. While we are already
developing an automated version of this code (which will be presented in the
revised version of our paper), we believe that the present version will help
the user to understand the inflationary dynamics much better.
### 4.1 Phase-space analysis
Phase-space analysis of inflationary dynamics is usually carried out to determine the set of initial conditions that results in an adequate amount of inflation, and hence to assess the generality of initial conditions for inflation [69, 50, 20]. For a spatially flat background, the
phase-space portrait consists of trajectories of $\\{\phi,\,\dot{\phi}\\}$ for
different initial conditions, with fixed $H_{i}$. The standard algorithm to
generate such a plot is the following.
1.
The initial energy scale of inflation is kept constant by fixing the value of
initial Hubble parameter in the phase-space portrait simulations. A typical
value often used is $H_{i}\leq m_{p}$ (see [20] for detail). Hence, the user
is expected to incorporate an appropriate value of $z_{i}$.
2.
One can then input a suitable value of $x_{i}$ and determine the value of
$y_{i}$ for a given potential function $f(x)$ from the dimensionless Friedmann
equation (4.20) as
$y_{i}=\pm\sqrt{6}\,\sqrt{z_{i}^{2}-\frac{1}{3}\,v_{0}\,f(x_{i})\,\frac{1}{S^{2}}}~{}.$
(4.21)
3.
With these initial conditions, one can then simulate the system of dimensionless differential equations for $\\{x,\,y,\,z,\,A\\}$ from $T_{i}=0$ till an appropriate $T_{f}$. One can then repeat the same step for a number of different values of $x_{i}$ in order to generate the phase-space portrait for the given potential (a minimal sketch is given after this list). Our phase-space portrait framework is available at https://github.com/bhattsiddharth/NumDynInflation/blob/main/inf_dyn_phase.py.
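A minimal sketch of this procedure, reusing the background functions of the earlier snippets and an illustrative choice of the initial Hubble parameter, might look as follows (matplotlib is used only for plotting).

```python
import numpy as np
import matplotlib.pyplot as plt

H_i_over_mp = 1.0e-5           # illustrative fixed initial Hubble parameter H_i/mp
z_i  = H_i_over_mp/S           # eq. (4.8)
T_ps = np.linspace(0.0, 5.0e3, 50000)

for x_start in np.linspace(2.0, 10.0, 9):
    radicand = z_i**2 - v0*f(x_start)/(3.0*S**2)
    if radicand < 0.0:         # x_i incompatible with the chosen z_i
        continue
    for sign in (+1.0, -1.0):
        y_start = sign*np.sqrt(6.0*radicand)       # eq. (4.21)
        xs, ys, _, _ = odeint(background_rhs, [x_start, y_start, z_i, 1.0e-3], T_ps).T
        plt.plot(xs, ys, 'k', lw=0.5)

plt.xlabel(r'$\phi/m_p$'); plt.ylabel(r'$\dot{\phi}/(m_p^2\,S)$')
plt.show()
```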
The phase-space portraits for Starobinsky potential (3.48), and quadratic
potential $V(\phi)\propto\phi^{2}$ are illustrated in the left and right
panels of figure 9 respectively.
Figure 9: The phase-space portrait $\\{\phi,\,\dot{\phi}\\}$ of the inflaton
field has been illustrated for Starobinsky potential (3.48) in the left panel,
and for quadratic potential $V(\phi)=\frac{1}{2}\,m^{2}\,\phi^{2}$ in the
right panel corresponding to different initial conditions
$\\{\phi_{i},\,\dot{\phi_{i}}\\}$ (plotted in solid black colour) with a fixed
initial scale $H_{i}$. The figure demonstrates that trajectories commencing
from a large class of initial field values (including those with large initial
velocities $\dot{\phi_{i}}$) quickly converge towards the slow-roll attractor
separatrix $\dot{\phi}=-V_{,\phi}/3H\simeq\mathrm{const.}$ (plotted in green
colour) as can be seen from the rapid decline in the inflaton speed until they
meet the green colour curve. After the end of inflation, the inflaton begins
to oscillate around the minimum of the potential.
In order to determine the degree of generality of inflation, we need to define
a measure for the distribution of $\\{\phi_{i},\,\dot{\phi_{i}}\\}$. The
correct choice for the measure might depend on the quantum theory of gravity.
However, a uniform measure is usually considered in the literature. Interested
readers are referred to [69, 20] for further detail.
### 4.2 Quantum fluctuations under the slow-roll approximation
In section 3, we described the expressions for a number of inflationary
observables such as $A_{{}_{S}},\,A_{{}_{T}},\,n_{{}_{S}},\,n_{{}_{T}},\,{\rm
and}\,r$ associated with the scalar and tensor power spectra which can be
determined purely from the dynamics of background quantities such as
$H,\,\epsilon_{H},\,\eta_{H}$ under the slow-roll approximation. Hence they
can be conveniently determined from our background dynamics code as discussed
earlier in section 4. For example, the scalar and tensor power spectra for
Starobinsky inflation have been plotted in figure 10 as a function of $N_{e}$. Similarly, one can plot the spectral indices $n_{{}_{S}}-1$ and $n_{{}_{T}}$ and determine their values at the pivot scale $N_{*}$. (In the standard literature, one usually plots $r$ vs $n_{{}_{S}}$ for a given inflaton potential for a range of possible values of $N_{*}\in[50,60]$, which can also be done easily using our code.) The spectral indices for the Starobinsky potential have been plotted in figure 11.
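As an illustration, reusing the arrays computed in the background sketch of section 4, the slow-roll observables at the pivot ($N_{e}=N_{*}=60$) can be read off as follows.

```python
import numpy as np

A_T = 2.0*(S*z)**2/np.pi**2        # eq. (4.17), tensor amplitude
n_T = -2.0*eps_H                   # eq. (4.18), tensor tilt

i_piv = np.argmin(np.abs(N_e - 60.0))   # pivot scale N_e = N_* = 60
print(f"A_S = {A_S[i_piv]:.3e},  n_S = {n_S[i_piv]:.4f}")
print(f"r   = {r[i_piv]:.4f},    n_T = {n_T[i_piv]:.2e}")
```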
Figure 10: The power spectra of scalar and tensor quantum fluctuations
(computed using the slow-roll formulae (3.10) and (3.25) respectively) are
shown for comoving modes exiting the Hubble radius at different number of
e-folds $N_{e}$ before the end of inflation for Starobinsky potential (3.48).
The CMB window (shown in grey shaded region) corresponds to comoving modes in
the range $k_{\rm CMB}\in[0.0005,0.5]~{}{\rm Mpc}^{-1}$ that are being probed
by the current CMB missions. Fluctuations over larger scales (shown in red
shaded region) are outside the observable universe at present and those over
smaller scales remain to be (potentially) probed by a plethora of upcoming
missions, from GW observatories to PBHs.
Figure 11: The scalar and tensor spectral indices $n_{{}_{S}}$ and
$n_{{}_{T}}$ are shown as a function of $N_{e}$ for Starobinsky potential
(3.48) as determined by their slow-roll approximated formulae (3.23) and
(3.26) respectively. Around the pivot scale, they take the approximate values
$n_{{}_{S}}\simeq 0.967\text{ and }n_{{}_{T}}\simeq-0.0004$. Note that we have
plotted $n_{{}_{S}}-1$ (instead of $n_{{}_{S}}$) since it is the correct
scalar spectral index. The tensor-to-scalar ratio is given by
$r\simeq-8\,n_{{}_{T}}$.
## 5 Numerical analysis for quantum fluctuations during inflation
In the previous section we used slow-roll approximated formulae to study the
spectra of inflationary fluctuations in terms of background quantities such as
$H,\,\epsilon_{H},\,\eta_{H}$. Hence, we only had to simulate the background
dynamics for a given potential in order to plot the relevant inflationary
observables. However, if we want to analyze the behaviour of quantum
fluctuations more accurately, especially in situations where one or both of the slow-roll conditions (2.20) are violated, we need to numerically solve the
Mukhanov-Sasaki equation (3.4) corresponding to each comoving scale $k$.
For this purpose, we first rewrite the Mukhanov-Sasaki equation (3.4) in
cosmic time as
$\frac{{\rm d}^{2}v_{k}}{{\rm d}t^{2}}+H\frac{{\rm d}v_{k}}{{\rm
d}t}+\left[\frac{k^{2}}{a^{2}}-\frac{1}{a^{2}}\frac{z^{\prime\prime}}{z}\right]v_{k}=0\,.$
(5.1)
Note that here $z$ is not the dimensionless Hubble parameter used in our
numerical code, rather it is the variable $z=am_{p}\sqrt{2\epsilon_{H}}$ in
the Mukhanov-Sasaki equation (3.4). The effective mass term
$z^{\prime\prime}/z$ in (3.6) can be re-written as
$\frac{z^{\prime\prime}}{z}=a^{2}\left[\frac{5}{2}\frac{\dot{\phi}^{2}}{m_{p}^{2}}+2\frac{\dot{\phi}\ddot{\phi}}{Hm_{p}^{2}}+2H^{2}+\frac{1}{2}\frac{\dot{\phi}^{4}}{H^{2}m_{p}^{4}}-V_{,\phi\phi}(\phi)\right]\,.$
(5.2)
Since $v_{k}$ is a complex valued function, it is convenient to split it into
its real and imaginary parts to study their evolution separately for the
numerical analysis. While both will follow the same evolution equation, they
will be supplied with different initial conditions in the form of the real and
imaginary parts of the Bunch-Davies vacuum (3.8). Writing the Mukhanov-Sasaki
equation for scalar fluctuations in terms of dimensionless variables, we
obtain
$\boxed{\frac{{\rm d}^{2}v_{k}}{{\rm d}T^{2}}+z\,\frac{{\rm d}v_{k}}{{\rm
d}T}+\left[\frac{k^{2}}{A^{2}}-\frac{5}{2}\,y^{2}+2\,\frac{y}{z}\left(3\,z\,y+\frac{v_{0}}{S^{2}}\,f_{,x}\right)-2\,z^{2}-\frac{1}{2}\,\frac{y^{4}}{z^{2}}+\frac{v_{0}}{S^{2}}\,f_{,xx}\right]v_{k}=0}\,.$
(5.3)
Figure 12: Evolution of scalar power $\frac{k^{3}}{2\pi^{2}}|\zeta_{k}|^{2}$
is plotted by numerically solving the Mukhanov-Sasaki equation (5.3) for a
mode exiting the Hubble radius at about 60 e-folds before the end of inflation
(for Starobinsky potential). At early times when the mode is sub-Hubble, i.e.
$k\gg aH$, the power decreases as ${\cal P}_{\zeta}\sim(aH)^{-2}$ as expected.
After the Hubble-exit, the power freezes to a constant in the super-Hubble
regime when $k\ll aH$. We note down its value after the mode-freezing as the
super-Hubble scale power corresponding to that mode. Repeating the procedure
for a range of scales $k$ yields the power spectrum of scalar fluctuations.
The same numerical analysis can be carried out for tensor fluctuations.
Our primary goal in this section is to numerically solve equation (5.3) for
the Fourier modes $v_{k}$ corresponding to each comoving scale $k$ and plot
the frozen value of the scalar power spectrum of $\zeta_{k}$ given by (3.9)
after the mode becomes super-Hubble. We can conveniently relate a comoving
scale $k$ to its Hubble-exit epoch by $k=aH$. Since we are only interested in
the super-Hubble power spectra, we only need to simulate the system to evolve
$v_{k}$ for a small duration of time around the Hubble-exit of scale $k$
(starting sufficiently early to impose Bunch-Davies initial conditions and ending late enough for the mode to be frozen outside the Hubble radius).
In the following, we discuss the algorithm to solve the Mukhanov-Sasaki
equation (5.3) and determine the scalar power spectrum (3.9) numerically. We
also discuss how to solve the corresponding equation (3.18) for the tensor power spectrum at linear order in perturbation theory. (Second-order tensor fluctuations, which are induced by first-order scalar fluctuations, will be discussed in the revised version of our manuscript.) The dimensionless Mukhanov-Sasaki equation for tensor fluctuations is given by
$\boxed{\frac{{\rm d}^{2}h_{k}}{{\rm d}T^{2}}+z\,\frac{{\rm d}h_{k}}{{\rm
d}T}+\left[\frac{k^{2}}{A^{2}}+\frac{1}{2}\,y^{2}-2\,z^{2}\right]h_{k}=0}\,.$
(5.4)
We explicitly write down the Mukhanov-Sasaki equations for scalar and tensor
fluctuations in terms of dimensionless variables (as used in our code) in the
following way
$\displaystyle v_{k,T}$ $\displaystyle=\frac{{\rm d}v_{k}}{{\rm d}T}\,,$ (5.5)
$\displaystyle\frac{{\rm d}v_{k,T}}{{\rm d}T}$
$\displaystyle=-z\,v_{k,T}-\left[\frac{k^{2}}{A^{2}}-\frac{5}{2}\,y^{2}+2\,\frac{y}{z}\left(3\,z\,y+\frac{v_{0}}{S^{2}}\,f_{,x}\right)-2\,z^{2}-\frac{1}{2}\,\frac{y^{4}}{z^{2}}+\frac{v_{0}}{S^{2}}\,f_{,xx}\right]v_{k}\,;$
(5.6) $\displaystyle h_{k,T}$ $\displaystyle=\frac{{\rm d}h_{k}}{{\rm d}T}\,,$
(5.7) $\displaystyle\frac{{\rm d}h_{k,T}}{{\rm d}T}$
$\displaystyle=-z\,h_{k,T}-\left(\frac{k^{2}}{A^{2}}+\frac{1}{2}\,y^{2}-2\,z^{2}\right)\,h_{k}\,.$
(5.8)
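For concreteness, a sketch of the corresponding right-hand-side function is given below; it evolves the background (4.10)-(4.13) together with the real and imaginary parts of $v_{k}$ via (5.5)-(5.6), reuses the helper functions of the background sketch, and assumes the Starobinsky shape function for $f_{,xx}$ (any other model would simply supply its own $f$, $f_{,x}$, $f_{,xx}$).

```python
import numpy as np

def f_xx(x):   # second derivative of the Starobinsky shape function (illustrative)
    e = np.exp(-np.sqrt(2.0/3.0)*x)
    return (4.0/3.0)*e*(2.0*e - 1.0)

def mode_rhs(u, T, k):
    x, y, z, A, vR, vI, dvR, dvI = u
    # background evolution, eqs. (4.10)-(4.13)
    dx, dy, dz, dA = y, -3.0*z*y - (v0/S**2)*f_x(x), -0.5*y**2, A*z
    # bracket of the dimensionless Mukhanov-Sasaki equation (5.3)
    W = (k**2/A**2 - 2.5*y**2 + 2.0*(y/z)*(3.0*z*y + (v0/S**2)*f_x(x))
         - 2.0*z**2 - 0.5*y**4/z**2 + (v0/S**2)*f_xx(x))
    # eqs. (5.5)-(5.6), applied to the real and imaginary parts of v_k
    return [dx, dy, dz, dA, dvR, dvI, -z*dvR - W*vR, -z*dvI - W*vI]
```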
In our numerical set up, we split $v_{k}$ and $h_{k}$ into their real and
imaginary parts and simulate them separately with appropriate Bunch-Davies
initial conditions. We begin with a discussion of numerical simulations of the
Mukhanov-Sasaki equations (5.3) and (5.4) for a purely slow-roll potential
(which we choose to be the Starobinsky potential (3.48) as usual), before
moving forward to discuss the same for a potential with a slow-roll violating
feature. This latter case is of the primary focus of our paper. In particular,
we will illustrate our numerical scheme for the case of a base slow-roll
potential possessing a tiny local bump feature, which was proposed in [44] in
the context of PBH formation.
Figure 13: The super-Hubble power spectra of scalar fluctuations (in green
colour) and tensor fluctuations (in red colour) are plotted for modes exiting
the Hubble radius at different number of e-folds $N_{e}$ before the end of
inflation for Starobinsky potential (3.48). The solid curves represent power
spectra computed under the slow-roll approximation (3.10), while the dotted
curves represent the power computed by numerically solving the Mukhanov-Sasaki
equation (5.3). We conclude that for Starobinsky model, since slow-roll
conditions $\epsilon_{H},\,|\eta_{H}|\ll 1$ are easily satisfied for most part
of inflation, the power spectra computed under the slow-roll approximation
match quite well with their numerically determined counterparts.
### 5.1 Numerical analysis for slow-roll potentials
1.
As the first step, we numerically solve the background dynamics for a given
potential, determine the values of all relevant parameters of the potential
and the evolution of relevant primary dynamical variables
$\\{x,\,y,\,z,\,A\\}$ as well as the derived quantities such as
$\\{N_{e},\,\epsilon_{H},\,\eta_{H}\\}$ (as discussed in section 4).
2.
We then proceed to identify different comoving scales $k$. This can be done by
determining their Hubble-exit epochs in the following way. For example, we
plot $aH$ (in log scale) against $N_{e}$ and identify the value of $aH$ at
$N_{e}=N_{*}$ to be the CMB pivot scale $k_{p}$. As mentioned before, we take
$N_{*}=60$ in all our analysis. Similarly, we associate a corresponding value
of $N_{e}$ to each comoving scale $k$ by the value of $aH$ at its Hubble-exit
epoch. This step ensures that we have a one-to-one correspondence between $k$
and $N_{e}$ in our analysis and we can use them interchangeably.
3.
We intend to impose Bunch-Davies initial conditions for a given mode $v_{k}$
at an epoch when it is sub-Hubble. As it turns out, for most potentials, the
Bunch-Davies initial conditions can be safely imposed as long as $k\geq
100\,aH$. Hence, rather than simulating the Mukhanov-Sasaki equation for each
mode (making Hubble-exit at the corresponding value of $N_{e}$) all through
the inflationary history (starting from $\phi_{i}>\phi_{*}$), we actually
impose the initial conditions from the background solutions for
$\\{x,\,y,\,z,\,A\\}$ at around $5$ e-folds before the Hubble-exit of that
mode. This step greatly reduces the running-time of the code. We then
incorporate the initial value of scale factor $A_{i}$ at the same epoch,
namely $A_{i}\exp{(N_{T}-N_{e}-5)}$ and do the same for the initial values of
the field $x_{i}$, and its derivative $y_{i}$. The initial conditions for the
mode functions $v_{k}$ and their derivatives $\dot{v_{k}}$ can then be safely
taken to be of Bunch-Davies type.
4.
We solve the set of cosmological equations with these initial conditions for a
period of time $T=T_{i}\to T=T_{f}$ such that the mode becomes super-Hubble
and its power ($k^{3}|\zeta_{k}|^{2}/2\pi^{2}$) is frozen to a constant value
(see figure 12), which is typically within $5$ e-folds after Hubble-exit in
the kind of models we are interested in. We note down this frozen value as the
value of the power spectrum of that mode. While we have mostly been discussing scalar fluctuations, the same can be done for tensor fluctuations, which we have incorporated in our code.
5.
We then select another mode that leaves the Hubble radius at some epoch $N_{e}$ and repeat the procedure until we have collected the frozen super-Hubble power spectra for the range of scales that we are interested in (see figure 13); a schematic implementation of this per-mode procedure is sketched below.
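The following is a schematic sketch of the above per-mode procedure; it reuses the background arrays and the function mode_rhs of the earlier snippets, assumes the standard Bunch-Davies normalisation $|v_{k}|\to 1/\sqrt{2k}$ for the initial data (3.8), and evaluates the frozen power as ${\cal P}_{\zeta}=\frac{k^{3}}{2\pi^{2}}\,|v_{k}/z|^{2}$ with $z=a\,m_{p}\sqrt{2\epsilon_{H}}$, cf. (3.9).

```python
import numpy as np

def scalar_power(Ne_exit, bg_T, bg_sol, Ne_arr):
    """Frozen super-Hubble power of the mode exiting the Hubble radius at Ne_exit."""
    i_exit  = np.argmin(np.abs(Ne_arr - Ne_exit))
    i_start = np.argmin(np.abs(Ne_arr - (Ne_exit + 5.0)))   # ~5 e-folds before exit
    i_stop  = np.argmin(np.abs(Ne_arr - (Ne_exit - 5.0)))   # ~5 e-folds after exit
    x0, y0, z0, A0 = bg_sol[i_start]
    k = bg_sol[i_exit, 3]*bg_sol[i_exit, 2]                 # k = A*z, i.e. k = aH at exit
    # Bunch-Davies initial data for the real/imaginary parts (k >> aH at this epoch)
    vR, vI   = 1.0/np.sqrt(2.0*k), 0.0
    dvR, dvI = 0.0, -np.sqrt(k/2.0)/A0
    u = odeint(mode_rhs, [x0, y0, z0, A0, vR, vI, dvR, dvI],
               bg_T[i_start:i_stop], args=(k,))
    xE, yE, zE, AE, vRE, vIE, _, _ = u[-1]
    zMS2 = (AE/S)**2 * (yE**2/zE**2)            # z^2 = (a mp)^2 * 2 eps_H
    return k, k**3/(2.0*np.pi**2)*(vRE**2 + vIE**2)/zMS2

# frozen power spectrum over a range of Hubble-exit epochs
spectrum = [scalar_power(Ne, T, sol, N_e) for Ne in np.arange(20.0, 61.0, 1.0)]
```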
Figure 14: This is a schematic plot of an asymptotically-flat inflationary
potential with a tiny bump/dip feature (5.9) near intermediate field values
$\phi\simeq\phi_{\rm PBH}$ that leads to an enhancement of the scalar power
spectrum. The full potential asymptotes to the base (slow-roll) potential near
the CMB window $\phi\simeq\phi_{*}$, thus satisfying observational constraints
on large cosmological scales. Slow-roll is violated around the feature, whose
position $\phi_{\rm PBH}$ dictates the range of comoving scales $k$ that receive amplification of power (which accordingly determines the mass and abundance of the formed PBHs). Note that the feature has been greatly exaggerated for illustration purposes. In most realistic models, both the height and the width of the feature are too small to be seen without zooming in considerably.
From the above numerical analysis of featureless vanilla potentials which
exhibit slow-roll dynamics until close to the end of inflation, we observe
that the power spectra of scalar and tensor fluctuations are nearly scale-
invariant (with small red-tilt) and their behaviour (as obtained from
numerically solving the Mukhanov-Sasaki equation) matches quite well with the
analytical predictions under the slow-roll approximations (see figure 13).
However, for potentials exhibiting a small-scale feature at intermediate field
values $\phi<\phi_{*}$, there might exist a short period of slow-roll
violating phase before the end of inflation during which slow-roll
approximations break down. In particular, as we will see, while the first
slow-roll parameter remains small $\epsilon_{H}\ll 1$, the second slow-roll
parameter might become $\eta_{H}\sim{\cal O}(1)$. Hence, a numerical analysis
of the Mukhanov-Sasaki equation is desired in order to determine the scalar
power spectrum more accurately. This will be the main focus of discussion in
the next subsection.
Figure 15: Evolution of the field value $\phi$ for the KKLT potential with a
tiny bump (5.10) is shown in solid green curve as a function of number of
e-folds $N_{e}$ before the end of inflation. We note that at CMB scales,
$\phi$ is much smaller than the corresponding value in the base model (KKLT)
(shown in dashed black curve). At intermediate scales when the inflaton
evolves across the bump feature in the potential, we gain a lot of extra
e-folds of expansion $\Delta N_{e}\simeq 15$ with little change in the field
value. After crossing the feature, evolution of $\phi$ mimics its
corresponding value in the base model. Figure 16: Evolution of the slow-roll
parameters $\epsilon_{H}$ and $\eta_{H}$ is shown in solid green and solid red
curves respectively for the KKLT potential with a tiny bump (5.10). Both
$\epsilon_{H}$ and $\eta_{H}$ are close to their corresponding values for the
base KKLT potential (shown in dashed curves) at early times near the CMB
window. At $N_{e}\simeq 30$, the value of $\epsilon_{H}$ starts decreasing
rapidly leading to an increase in $\eta_{H}$ from $|\eta_{H}|\ll 1$ to a
higher and positive value $\eta_{H}\simeq+3.3$ (almost USR phase). Thereafter,
the inflaton enters a phase of constant-roll inflation (where
$\eta_{H}\simeq-0.37$) before returning to the final slow-roll phase.
### 5.2 Numerical analysis for potentials with a local bump/dip feature
In order to facilitate PBH formation, we need a large amplification of scalar
power spectrum at smaller scales during inflation. This can be achieved by
introducing a small-scale feature in the potential which leads to a transient
period of slow-roll violating phase (including a short almost-USR phase).
Adequate amplification in the super-Hubble scalar power spectrum results in a
large density contrast in the post-inflationary universe (upon the Hubble-
entry of the corresponding modes) which in turn can collapse to form PBHs.
Usually, such a PBH feature in the potential leads to an increase in the value
of the second slow-roll parameter $\eta_{H}$ from near-zero to a positive
value of $\eta_{H}\sim{\cal O}(1)$.
A number of models with different types of features have been proposed in the
recent literature (as mentioned before) that facilitate the amplification of
scalar power spectrum at small scales. The most common amongst them is an
inflection point-like feature. However, we choose the model proposed in [44]
in which the base inflaton potential $V_{b}(\phi)$ possesses a tiny local bump
or dip $\pm\varepsilon(\phi,\phi_{0})$ at an intermediate field value
$\phi_{0}$ of the form
$V(\phi)=V_{b}(\phi)\,\left[1\pm\varepsilon(\phi,\phi_{0})\right]\,,$ (5.9)
where we assume $V_{b}(\phi)$ to be a symmetric or an anti-symmetric
asymptotically-flat potential in order to satisfy CMB constraints at large
cosmological scales. Such a potential has been schematically illustrated in
figure 14. To be specific, in this paper we choose the base potential to be
the D-brane KKLT potential [70, 71, 72, 73] with a tiny Gaussian bump of the
form [44]
$\boxed{V(\phi)=V_{0}\,\frac{\phi^{2}}{m^{2}+\phi^{2}}\,\left[1+A\,\exp{\left({-\frac{1}{2}\,\frac{(\phi-\phi_{0})^{2}}{\sigma^{2}}}\right)}\right]}~{},$
(5.10)
where $m$ is a mass scale in the KKLT model, while $A$ and $\sigma$ represent
the height and the width of the tiny bump respectively. We use this particular
model to demonstrate our numerical framework because of its simplicity and
efficiency. However, one can choose any model of their interest. Values of all
the parameters appearing in (5.10), which we use in our numerical analysis,
have been explicitly shown in figure 17.
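For reference, a minimal sketch of the corresponding shape function $f(x)$ is given below; the parameter values are illustrative placeholders only (the values actually used in our analysis are the ones quoted in figure 17).

```python
import numpy as np

m_kklt  = 0.5        # KKLT mass scale m/mp (placeholder)
A_bump  = 1.0e-3     # bump height A (placeholder)
sigma   = 2.0e-2     # bump width sigma/mp (placeholder)
x0_bump = 2.0        # bump position phi_0/mp (placeholder)

def f_kklt_bump(x):  # shape function of eq. (5.10): V = V0 * f_kklt_bump(phi/mp)
    base = x**2/(m_kklt**2 + x**2)
    bump = 1.0 + A_bump*np.exp(-0.5*(x - x0_bump)**2/sigma**2)
    return base*bump
```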
The height and the width of the (bump/dip) feature required to facilitate
adequate amount of power amplification are quite small, and hence the feature
is tiny and local (in contrast to inflection point-like features). This
ensures that the feature does not significantly affect the CMB observables.
However, since we gain a lot of extra e-folds of expansion $\Delta N_{e}\simeq
15$ with little change in the field value (as shown in figure 15) when the
inflaton crosses the feature, the CMB pivot scale gets shifted towards smaller
values as compared to the same for the base potential.
Figure 17: The super-Hubble power spectra of scalar fluctuations are plotted
for modes exiting the Hubble radius at different number of e-folds $N_{e}$
before the end of inflation for KKLT potential with a tiny bump (5.10). The
solid green curve represents the power spectrum computed under the slow-roll
approximation (3.10), while the dotted red curve represents the power computed
by numerically solving the Mukhanov-Sasaki equation (5.3). Since the second
slow-roll condition is violated due to the presence of the bump (leading to
$\eta_{H}>1$), the slow-roll approximation underestimates the value of power
spectrum near the peak (as well as the position of the peak) for modes exiting
the Hubble radius near the epoch when the inflaton crosses the local maximum
around the bump feature. This leads to an incorrect estimation of the mass
fraction as well as the central mass of the PBHs formed in this scenario.
Since slow-roll is violated in these models (as shown in figure 16), we need
to solve the Mukhanov-Sasaki equation numerically in order to accurately
compute the scalar power spectrum. This has been explicitly demonstrated in
figure 17. Note that the inflationary dynamics in such models contains a
number of phases that include an early slow-roll phase SR-I near the CMB
window, a transition T-I from the early SR-I to the subsequent almost ultra
slow-roll phase USR and a transition T-II back to the next slow-roll phase SR-
II after passing through an intermediate constant-roll phase CR (shown in
figure 16).
Figure 18: Evolution of the effective mass term in the Mukhanov-Sasaki
equation is shown here in solid green curve for the KKLT potential with a tiny
bump (5.10). Important transient phases of the scalar field dynamics are also
highlighted following the behaviour of the second slow-roll parameter
$\eta_{H}$ (plotted in dashed blue curve). There is a sharp dip in the
effective mass term when the field transitions from the first slow-roll phase
(SR-I) to an almost ultra slow-roll phase (USR). The inflaton later makes a
transition to a phase of constant-roll inflation with $\eta_{H}\simeq-0.37$,
before reaching a final slow-roll phase until the end of inflation.
The effective mass term $z^{\prime\prime}/z$ in the Mukhanov-Sasaki equation
(3.4), which primarily governs the dynamics of scalar fluctuations, has been
shown in figure 18, and the resultant Hubble-exit behaviour of different modes
is described in figure 19. As the inflaton approaches the PBH feature,
$\eta_{H}$ starts to increase rapidly. This is accompanied by an initial sharp
dip in the effective mass term $z^{\prime\prime}/z$, which then increases to a
higher plateau as $\eta_{H}$ approaches its maximum in the USR type phase. The
modes that leave the Hubble radius slightly before the transition already
start receiving power amplification on super-Hubble scales as shown by the
orange color curve in figure 19.
It is worth noting that the power spectrum exhibits a dip which corresponds to
a very narrow range of scales $k\simeq k_{\rm dip}$ that leave the Hubble radius
a few e-folds before the USR phase (shown by the blue color curve in figure
19). The maximum rate of growth observed in this model is consistent with the
steepest growth bound discussed in [65]. Near the USR phase when
$\eta_{H}\gtrsim 3$, $z^{\prime\prime}/z$ saturates to a constant value. Modes
leaving the Hubble radius around this USR epoch receive maximal amplification
in their super-Hubble power spectrum.
Figure 19: This figure demonstrates the horizon exit behaviour of different
modes, i.e. the evolution of $\sqrt{{\cal P}_{\zeta}}$ for different modes as they cross the Hubble radius in the KKLT model with a Gaussian bump (5.10). The
dot on each curve corresponds to its Hubble-exit epoch. The sharp dip in the
power spectrum (figure 17) corresponds to the mode $k_{\mathrm{dip}}$ (plotted
in blue color) that exits the Hubble radius a few e-folds before the
commencement of the USR phase. The mode $k_{\mathrm{PBH}}$ which exits the
Hubble radius during the USR phase receives a maximal amplification of power
(plotted in green color).
As the field crosses the maximum of the bump feature, $\eta_{H}$ decreases to
a constant negative value. It stays in this constant-roll phase until the
inflaton meets the base potential eventually and approaches the final slow-
roll phase in its dynamics before the end of inflation. The dynamics of scalar
fluctuations in the aforementioned phases are quite rich and interesting.
However, since the main aim of this paper is to illustrate how to use our
numerical code with an example, we do not discuss these phases and their
impact on the power spectrum (some of which have been explicitly shown in
figures 18, 19, 20), and refer the interested readers to [63] for more detail.
Figure 20: The super-Hubble power spectrum of scalar fluctuations obtained by
numerically solving the Mukhanov-Sasaki equation (5.3) is plotted here for the
KKLT potential with a tiny bump (5.10). At large scales (around the CMB
window), the power spectrum matches that of the base KKLT potential. The power
spectrum (after exhibiting a sharp dip) receives a large amplification at
intermediate scales $k\sim k_{\rm PBH}$ around the USR phase. The maximum rate
of growth observed in this model is consistent with the steepest growth bound
discussed in [65]. After reaching the peak, the power then decreases at a
steady rate during the constant-roll phase before finally asymptoting towards
its base slow-roll value towards the end of inflation.
## 6 Future extension of our numerical framework
In preceding sections, we described the relevant cosmological equations
governing the inflationary dynamics in terms of dimensionless variables. We
also introduced our numerical code (written in terms of cosmic time) that can
easily simulate the inflationary dynamics both at the background level in
section 4 and at linear order in perturbation theory in section 5. However,
with minimal to moderate extension, our numerical code can be used to simulate
a number of different scenarios associated with scalar field dynamics both
during inflation as well as in the post-inflationary universe. We have already
started working on some of these aspects which will appear in the revised
version of our manuscript. In the following we discuss some of the important
future extensions of our code that we intend to include in our revised
version.
1.
As stressed in section 4, the present version of our numerical code, although
quite fast and neat, contains segments that require the user to carry out a
number of tasks manually. While we believe that the present version will
definitely help a user (who is relatively new to the field) to understand the
inflationary dynamics much better, we are already developing an automated
version of this code that is much more compact and requires substantially less
manual involvement of the user. We also plan to make the code even faster. We
will present the updated version of our code in the revised version of our
paper and refer to it in the same GitHub repository (https://github.com/bhattsiddharth/NumDynInflation).
2.
In section 5, we briefly discussed how to solve the evolution equation for the
tensor fluctuations at linear order in perturbation theory. However, first-
order scalar fluctuations induce tensor fluctuations at second order in
perturbation theory which might be significant when slow-roll is violated,
especially in the scenario where scalar power spectrum is largely amplified in
order to source PBH formation. We will incorporate the computation of such
scalar-induced Gravitational Waves in the updated version of our code.
3.
In our analysis, we used the Mukhanov-Sasaki variable $v_{k}$ with its
corresponding evolution equation (5.1) in order to simulate scalar
fluctuations at linear order. Our code runs quite quickly and generates the
power spectrum without requiring a large computational time. However, one can
study the scalar power spectrum by using other variables proposed in the
literature. We are particularly interested in two such variables. The first
one is the curvature perturbation $\zeta_{k}$ itself using the corresponding
evolution equation (3.11). For example, it was claimed in [74] that numerical
simulations are more stable in terms of $\zeta_{k}$ since it is explicitly
frozen well outside the Hubble radius. Similarly, authors of [75] have
suggested a change in the variable $v_{k}$ in the form
$g_{k}\equiv\frac{v_{k}}{z}\,e^{ik\tau}$ that is supposed to make the
simulations much more stable since it removes the early-time oscillations. (We are thankful to Christian Byrnes for bringing this to our attention.) In the revised version of our work, we plan to carry out a comprehensive numerical analysis to compare both the stability and the speed of the numerical simulations using all three of the aforementioned variables $v_{k}$, $\zeta_{k}$, and $g_{k}$.
4.
It is easy to extend our numerical analysis to incorporate the dynamics of
more than one scalar field during inflation, at least at the background level. In the future, we are going to provide an updated numerical code to
study both the background dynamics as well as quantum fluctuations in two-
field inflationary dynamics (where the second field might also source
inflation, or act as a spectator field).
5.
Additionally, our code can be extended to simulate the post-inflationary
dynamics of the inflaton field as well as to study parametric resonance by
simulating the evolution of different Fourier modes of the inflaton
fluctuations. During the post-inflationary oscillations, it is usually
advisable to make a change in the Mukhanov-Sasaki variable of the form
${\tilde{v}_{k}}=a^{1/2}\,v_{k}$ as suggested in [76, 77]. The code can also
be extended to study the dynamics of scalar field dark matter and quintessence
by suitably redefining the dimensionless variables as per the energy scale of
the dynamics.
## 7 Discussion
In the present version of our manuscript, we introduced our numerical approach
to simulate the cosmological equations in order to study the inflationary
dynamics at the level of both background as well as linear-order in
perturbation theory. We provided the link to our open-source GitHub repository
where we have supplied a Python-based simple numerical code to simulate
inflationary dynamics in terms of cosmic time $t$. We explicitly demonstrated
how to use the code to study the inflationary background dynamics in section 4
that includes plotting the phase-space portrait of inflation as well as to
characterise quantum fluctuations during inflation using the simulations of
the background dynamics.
Section 5 was dedicated to study quantum fluctuations during inflation
(without using slow-roll approximated expressions) by numerically solving the
mode function equations of scalar and tensor fluctuations. For a featureless
slow-roll inflaton potential, the difference between the results obtained
numerically and those obtained under slow-roll approximations were negligible
until close to the end of inflation, as expected. We used the Starobinsky
potential (3.48) as an example to illustrate our analysis. Our primary focus
was the numerical evaluation of the scalar power spectrum ${\cal
P}_{\zeta}(k)$ for potentials that exhibit a slow-roll violating feature. In
particular, we used the example of an asymptotically-flat base inflationary
potential that possesses a tiny local bump feature (5.9) to illustrate our
numerical scheme in section 5.2. By suitably choosing the parameters of the
potential (5.10), one can achieve a large enough amplification of the scalar
power spectrum in order to facilitate the formation of PBHs in the post-
inflationary epoch.
In our numerical analysis for the case of potentials with a slow-roll
violating feature, we explicitly chose the parameters in order to amplify the
small-scale scalar power by a factor of $\sim 10^{7}$ with respect to the
corresponding power at large cosmological scales, as is usually assumed in the
literature to facilitate the formation of PBHs in the subsequent radiation dominated epoch. However, it is important to stress that a significant growth in the scalar power spectrum at small scales might push the dynamics into the non-perturbative regime. For example, a careful computation of loop
corrections to the two-point scalar fluctuations demonstrates [78, 79] that
contribution from $1$-loop effects becomes of the same order as the tree-level
computation (which we carried out in this work) if ${\cal P}_{\zeta}(k)\sim
10^{-2}$ indicating a breakdown of the perturbative analysis.
Moreover, the mechanism of PBH formation in the context of single field models
of inflation involves additional intricacies that demand a non-perturbative analysis of primordial fluctuations. Firstly, a sharp drop in the classical drift speed of the inflaton due to the presence of the PBH-producing feature often drives the system into a phase where stochastic quantum diffusion effects become non-negligible, at times even significant. More importantly, since PBHs form from rare extreme peaks, and hence are determined by the tail of the probability distribution function (PDF) $P[\zeta]$ of the primordial fluctuations, perturbative computations based only on the power spectrum lead to an inaccurate estimation of the PBH mass fraction.
Consequently, determination of the full primordial PDF becomes crucial, which
can be computed non-perturbatively using the stochastic inflation framework
[80, 81, 82, 83, 84, 85, 86, 88, 87, 89, 90, 91, 92, 94, 95, 93, 96, 97, 98,
99, 100, 101, 102, 103, 104, 105], which usually predicts a non-Gaussian
exponential tail [86, 90]. The tail of the primordial PDF can also be computed by
using semi-classical techniques discussed in [106]. Determining the tail of
the primordial PDF is an important and active topic of research [90, 106, 107,
108, 109, 110, 111] at present, which is beyond the scope of our perturbative
analysis presented here.
Before concluding, let us mention that we also highlighted various important
future extensions of our numerical scheme in section 6 that will result in
making our code more efficient, and enable us to simulate the scalar field
dynamics in a number of interesting scenarios. This includes updating our code
to make it more compact and automated, as well as extending it to study the
spectrum of scalar-induced gravitational waves, inflaton dynamics in the post-
inflationary epoch, multi-field inflationary dynamics, and even scalar field
models of dark matter and dark energy. We will incorporate most of these
additional features in the revised version of our manuscript. In the meantime,
we welcome constructive comments and suggestions as well as queries from
interested readers which will help us in improving the quality of our
numerical work and presentation in the updated version of the paper.
## 8 Acknowledgements
S.S.M thanks Satadru Bag, Shabbir Shaikh, and Varun Sahni for crucial inputs
during the early stages of development of this code. The authors are grateful
to Parth Bhargava and Sanket Dave for stimulating discussions on various
topics related to numerical dynamics discussed in this paper. S.S.M. is
supported as a postdoctoral Research Associate at the School of Physics and
Astronomy, University of Nottingham by the STFC funded consolidated grant, UK.
S.S.B was supported by the INSPIRE scholarship of the Department of Science
and Technology (DST), Govt. of India during his Master’s thesis work during
which a significant portion of this work was carried out.
## References
* [1] A. A. Starobinsky, “A New Type of Isotropic Cosmological Models Without Singularity,” Phys. Lett. B 91, 99-102 (1980).
* [2] A. H. Guth, “The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Phys. Rev. D 23, 347 (1981).
* [3] A. D. Linde, “A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems”, Phys. Lett. B 108, 389 (1982).
* [4] A. Albrecht and P. J. Steinhardt, “Cosmology for Grand Unified Theories with Radiatively Induced Symmetry Breaking,” Phys. Rev. Lett. 48, 1220 (1982).
* [5] A. D. Linde, “Chaotic Inflation,” Phys. Lett. B 129, 177-181 (1983).
* [6] A.D. Linde, Particle Physics and Inflationary Cosmology, Harwood, Chur, Switzerland (1990).
* [7] D. Baumann, “TASI Lectures on Inflation”, [arXiv:0907.5424].
* [8] V. F. Mukhanov and G. V. Chibisov, JETP Lett. 33, 532 (1981).
* [9] S. W. Hawking, Phys. Lett. B 115, 295 (1982).
* [10] A. A. Starobinsky, Phys. Lett. B 117, 175 (1982).
* [11] A. H. Guth and S. -Y. Pi, Phys. Rev. Lett. 49, 1110 (1982).
* [12] A.A. Starobinsky, JETP Lett., 30 , 682 (1979).
* [13] V. Sahni, Phys. Rev. D42, 453 (1990).
* [14] M. Tegmark, “What does inflation really predict?,” JCAP 04 (2005), 001 [arXiv:astro-ph/0410281 [astro-ph]].
* [15] Y. Akrami et al. [Planck], “Planck 2018 results. X. Constraints on inflation” Astron. Astrophys. 641, A10 (2020) [arXiv:1807.06211 [astro-ph.CO]].
* [16] S. S. Mishra, V. Sahni and A. A. Starobinsky, “Curing inflationary degeneracies using reheating predictions and relic gravitational waves,” JCAP 05, 075 (2021) [arXiv:2101.00271 [gr-qc]].
* [17] P. A. R. Ade et al. [BICEP and Keck], “Improved Constraints on Primordial Gravitational Waves using Planck, WMAP, and BICEP/Keck Observations through the 2018 Observing Season,” Phys. Rev. Lett. 127, no.15, 151301 (2021) [arXiv:2110.00483 [astro-ph.CO]].
* [18] S. S. Mishra and V. Sahni, “Canonical and Non-canonical Inflation in the light of the recent BICEP/Keck results,” [arXiv:2202.03467 [astro-ph.CO]].
* [19] R. Kallosh and A. Linde, “BICEP/Keck and cosmological attractors,” JCAP 12, no.12, 008 (2021) [arXiv:2110.10902 [astro-ph.CO]].
* [20] S. S. Mishra, V. Sahni and A. V. Toporensky, “Initial conditions for Inflation in an FRW Universe,” Phys. Rev. D 98, no.8, 083538 (2018) [arXiv:1801.04948 [gr-qc]].
* [21] S. Hawking, “Gravitationally collapsed objects of very low mass,” Mon. Not. Roy. Astron. Soc. 152 (1971), 75
* [22] B. J. Carr and S. W. Hawking, “Black holes in the early Universe,” Mon. Not. Roy. Astron. Soc. 168 (1974), 399-415
* [23] B. J. Carr, “The Primordial black hole mass spectrum,” Astrophys. J. 201 (1975), 1-19
* [24] M. Sasaki, T. Suyama, T. Tanaka and S. Yokoyama, “Primordial black holes—perspectives in gravitational wave astronomy,” Class. Quant. Grav. 35 (2018) no.6, 063001 [arXiv:1801.05235 [astro-ph.CO]].
* [25] G. F. Chapline, “Cosmological effects of primordial black holes,” Nature 253 (1975) no.5489, 251-252
* [26] P. Meszaros, “Primeval black holes and galaxy formation,” Astron. Astrophys. 38 (1975), 5-13
* [27] P. Ivanov, P. Naselsky and I. Novikov, “Inflation and primordial black holes as dark matter,” Phys. Rev. D 50 (1994), 7173-7178
* [28] B. Carr, F. Kuhnel and M. Sandstad, “Primordial Black Holes as Dark Matter,” Phys. Rev. D 94 (2016) no.8, 083504 [arXiv:1607.06077 [astro-ph.CO]].
* [29] A. M. Green and B. J. Kavanagh, “Primordial Black Holes as a dark matter candidate,” J. Phys. G 48, no.4, 043001 (2021) [arXiv:2007.10722 [astro-ph.CO]].
* [30] B. Carr and F. Kuhnel, “Primordial Black Holes as Dark Matter: Recent Developments,” Ann. Rev. Nucl. Part. Sci. 70 (2020), 355-394 [arXiv:2006.02838 [astro-ph.CO]].
* [31] E. Bugaev and P. Klimai, “Large curvature perturbations near horizon crossing in single-field inflation models,” Phys. Rev. D 78 (2008), 063515 [arXiv:0806.4541 [astro-ph]].
* [32] C. Germani and T. Prokopec, “On primordial black holes from an inflection point,” Phys. Dark Univ. 18 (2017), 6-10 [arXiv:1706.04226 [astro-ph.CO]].
# Exploiting hidden structures in non-convex games for convergence to Nash equilibrium
Iosif Sakos (Singapore University of Technology and Design), Emmanouil V. Vlatakis-Gkaragkounis (UC Berkeley), Panayotis Mertikopoulos (Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France), and Georgios Piliouras (Singapore University of Technology and Design)
###### Abstract.
A wide array of modern machine learning applications – from adversarial models
to multi-agent reinforcement learning – can be formulated as non-cooperative
games whose Nash equilibria represent the system’s desired operational states.
Despite having a highly non-convex loss landscape, many cases of interest
possess a latent convex structure that could potentially be leveraged to yield
convergence to an equilibrium. Driven by this observation, our paper proposes
a flexible first-order method that successfully exploits such “hidden
structures” and achieves convergence under minimal assumptions for the
transformation connecting the players’ control variables to the game’s latent,
convex-structured layer. The proposed method – which we call preconditioned
hidden gradient descent (PHGD) – hinges on a judiciously chosen gradient
preconditioning scheme related to natural gradient methods. Importantly, we
make no separability assumptions for the game’s hidden structure, and we
provide explicit convergence rate guarantees for both deterministic and
stochastic environments.
###### Key words and phrases:
Non-convex games; hidden games; stochastic algorithms; Nash equilibrium.
###### 2020 Mathematics Subject Classification:
Primary 91A10, 91A26; secondary 68Q32.
## 1\. Introduction
Many powerful AI architectures are based on the idea of combining conceptually
straightforward settings coming from game theory with the expressive power of
neural nets. Some prominent examples of this type include generative
adversarial networks [14], robust reinforcement learning [37], adversarial
training [26], multi-agent reinforcement learning in games [40, 36, 42], and
even multi-player games that include free-form natural-language communication
[4]. Intuitively, in all these cases, the game-theoretic abstraction serves to
provide a palpable, easy-to-understand target, i.e., an equilibrium solution
with strong axiomatic justification. However, from a complexity-theoretic
standpoint, such targets are excessively ambitious, requiring huge amounts of
data to express and compute, even approximately. Because of this, the agents’
policies must be encoded via a universal function approximator (such as a
neural net) and training this architecture boils down to iteratively updating
these parameters until the process – hopefully! – converges to the target
equilibrium.
Unfortunately, despite the ubiquitousness of these settings, the design of
algorithms with provable convergence guarantees is still relatively lacking.
This deficit is not surprising if one considers that even the – comparatively
much simpler – problem of equilibrium learning in finite games is hindered by
numerous computational hardness [9, 10] as well as dynamic impossibility
results [15, 16, 19, 24, 31, 30]. In this regard, our best hope for designing
provably convergent algorithms is to focus on specific classes of games with
some _useful structure_ to exploit.
One of the most well-established frameworks of this type is the class of
_monotone games_ whose study goes back at least to Rosen [38]. As special
cases, this setting includes single-agent convex minimization problems, two-
player convex-concave min-max games, diagonally convex $N$-player games, etc.
Owing to this connection, there has been a proliferation of strong positive
results at the interface of game theory and optimization, see e.g., [39, 28]
and references therein. In our case however, the agents do not play this
monotone game directly, but can only access it _indirectly_ via an encoding
layer of _control variables_ – like the weight parameters of a neural net that
outputs a feasible strategy profile for the game in question. In this sense,
from a machine learning perspective, the strategies of the game are _latent
variables_ , so we can think of each player as being equipped with a smooth
mapping from a high-dimensional space of control variables to the strategy
space of the game. Importantly, in contrast to the control variables, the
latent variables are not directly accessible to the players themselves and
should only be viewed as auxiliary variables – to all extents and purposes,
the goal remains to find an operationally desirable control layer
configuration. In this sense, the convex structure of the game becomes
“hidden” behind the control layer, which entangles multiple input/control
variables into nonlinear manifolds of latent variables.
As a result, the convex structure of the underlying game is effectively
destroyed, resulting in highly non-convex end-to-end interactions. This raises
the following central challenge:
Can we design provably convergent algorithms for non-convex games with a
hidden structure in the presence of general couplings between control and
latent variables?
Prior work in the area has shown that this is a promising and, at the same
time, highly challenging question. In [43], the setting of hidden bilinear
games was introduced and a number of negative results were presented, to the
effect that gradient-descent-ascent can exhibit a variety of non-convergent
behaviors, even when the game admits a hidden _bilinear_ structure.
Subsequently, [13] provided an approximate minimax theorem for a class of two-
agent games where the players pick neural networks, but did not provide any
convergent training algorithm for this class of games. Instead, the first
positive result on the dynamics of hidden games was obtained by [44] who
established a series of non-local convergence guarantees to the von Neumann
max-min solution of the game in the case of two-agent hidden strictly convex-
concave games for all initial conditions satisfying a certain genericity
assumption. This approach, however, only applied to _continuous-time dynamics_
– _not algorithms_ – and it further imposed strong separability assumptions
on the representation of the game in the control layer. More recently, [32]
established the first global convergence guarantees in hidden games but, once
again, these apply only to _continuous-time dynamics_ and a special case of
two-agent convex-concave games (akin to playing a convex combination of hidden
games with one-dimensional latent spaces). This paper is the closest
antecedent to our work as it introduces a dynamical system, called Generalized
Natural Gradient Flow, which makes the $L^{2}$ distance between the equilibrium in
the hidden game and the current set of latent variables a Lyapunov function
for the system.
#### Our results & techniques.
Our paper seeks to provide an affirmative answer to the key challenge above
under minimal assumptions on the coupling between latent and control
variables. To that end, we only assume that each agent is able to effect a
measurable change along any latent variable by updating their control
variables appropriately; without this assumption spurious equilibria can
emerge due to the deficiency of the control layer architecture. Importantly
however, even though the map from control to latent variables is known to the
players, _we do not assume_ that it can be efficiently inverted (e.g., to
solve for a profile of control variables that realizes a profile of latent
variables). Otherwise, if it could, the entire game could be solved directly
in the latent layer and then ported back to the control layer, thus rendering
the whole problem moot – and, indeed, when working with realistic neural net
architectures, this inversion problem is, to all intents and purposes,
impossible.
For intuition, we begin by designing a new continuous-time flow, that we call
preconditioned hidden gradient dynamics, and which enjoys strong convergence
properties in games with a hidden strictly monotone structure (Proposition 1).
Similarly to [32], this is achieved by using the $L^{2}$ norm in the latent
layer as a Lyapunov function in the control layer; however, the similarities
with the existing literature end there. Our paper does not make any
separability or low-dimensionality assumptions, and is otherwise _purely
algorithmic:_ specifically, building on the continuous-time intuition, we
provide a concrete, implementable algorithm, that we call _preconditioned
hidden gradient descent_ (PHGD); this algorithm is run with _stochastic
gradients_ , and enjoys a series of strong, global convergence guarantees in
hidden games.
First, as a baseline, Theorem 1 shows that a certain averaged process achieves
an $\operatorname{\mathcal{O}}(1/\sqrt{t})$ convergence rate in all games with
a hidden monotone structure and Lipschitz continuous loss functions. If the
hidden structure is strongly monotone, Theorem 2 further shows that this rate
can be improved to $\operatorname{\mathcal{O}}(1/t)$ for the _actual_
trajectory of the players’ control variables; and if the algorithm is run with
full, deterministic gradients, the rate becomes _geometric_ (Theorem 3). To
the best of our knowledge, these are the first bona fide algorithmic
convergence guarantees for games with a hidden structure.
Figure 1. A hidden game of Rock-Paper-Scissors with strategies encoded by two
multi-layer perceptrons (MLPs), whose $4$-dimensional input is dictated by the
two players (cf. [32]). The nonlinearity of the MLP representation maps leads
to a highly non-convex non-concave zero-sum game. However, by employing PHGD,
both players’ MLP control variables accurately identify the $(1/3,1/3,1/3)$
equilibrium in the game’s latent space.
## 2\. Problem setup and preliminaries
Throughout the sequel, we will focus on continuous $N$-player games where each
player, indexed by $i\in\mathcal{N}\coloneqq\\{1,\ldots,N\\}$, has a convex
set of _control variables_
$\theta_{i}\in\Theta_{i}\coloneqq\mathbb{R}^{m_{i}}$, and a continuously
differentiable _loss function_ $\ell_{i}\colon\Theta\to\mathbb{R}$, where
$\Theta\coloneqq\prod_{i}\Theta_{i}$ denotes the game’s _control space_. For
concreteness, we will refer to the tuple
$\Gamma\equiv\Gamma(\mathcal{N},\Theta,\ell)$ as the _base game_.
The most relevant solution concept in this setting is that of a _Nash
equilibrium_ , i.e., an action profile $\theta^{\ast}\in\Theta$ that
discourages unilateral deviations. Formally, we say that
$\theta^{\ast}\in\Theta$ is a _Nash equilibrium_ of the base game $\Gamma$ if
$\ell_{i}(\theta^{\ast})\leq\ell_{i}(\theta_{i};\theta^{\ast}_{-i})\quad\text{for
all $\theta_{i}\in\Theta_{i}$, $i\in\mathcal{N}$}$ (NE)
where we employ the standard game-theoretic shorthand
$(\theta_{i};\theta_{-i})$ to distinguish between the action of the $i$-th
player and that of all other players in the game. Unfortunately, designing a
learning algorithm that provably outputs a Nash equilibrium is a very elusive
task: the impossibility results of Hart & Mas-Colell [15, 16] already preclude
the existence of uncoupled dynamics that converge to a Nash equilibrium in all
games; more recently, Milionis et al. [31] established a similar impossibility
result even for possibly _coupled_ dynamics (in both discrete and continuous
time), while Daskalakis et al. [9, 10] has shown that even the _computation_
of an approximate equilibrium can be beyond reach.
In view of this, our work focuses on games with a hidden, _latent_ structure
that can be exploited to compute its Nash equilibria. More precisely, inspired
by [43, 44] we have the following definition.
###### Definition 1.
We say that the game $\Gamma\equiv\Gamma(\mathcal{N},\Theta,\ell)$ admits a
_latent_ – or _hidden_ – _structure_ if:
1. (1)
Each player’s control variables can be mapped faithfully to a closed convex
set of _latent variables_
$x_{i}\in\mathcal{X}_{i}\subseteq\mathbb{R}^{d_{i}}$; formally, we posit that
there exists a Lipschitz smooth map
$\chi_{i}\colon\Theta_{i}\to\mathcal{X}_{i}$ with no critical points and such
that $\operatorname{cl}(\chi_{i}(\Theta_{i}))=\mathcal{X}_{i}$.
2. (2)
Each player’s loss function factors through the game’s _latent space_
$\mathcal{X}\coloneqq\prod_{i}\mathcal{X}_{i}$ as
$\ell_{i}(\theta)=f_{i}(\chi_{1}(\theta_{1}),\dotsc,\chi_{N}(\theta_{N}))$ (1)
for some Lipschitz smooth function $f_{i}\colon\mathcal{X}\to\mathbb{R}$
called the player’s _latent loss function_.
For concreteness, we will refer to the product map
$\chi(\theta)=(\chi_{i}(\theta_{i}))_{i\in\mathcal{N}}$ as the game’s
_representation map_ , and the tuple
$\mathcal{G}\equiv\mathcal{G}(\mathcal{N},\mathcal{X},f)$ will be called the
_hidden / latent game_. To simplify notation later on, we also assume that
$\mathcal{X}_{i}$ has nonempty topological interior, so, in particular,
$\dim(\mathcal{X}_{i})=d_{i}\leq m_{i}$.
We illustrate the above notions in two simple – but not simplistic – examples
below; for a schematic representation, cf. Fig. 2.
Figure 2. Schematic representation of a game with a hidden / latent structure.
###### Example 2.1.
Consider a single-agent game ($N=1$) where the objective is to minimize the
non-convex function
$\ell(\theta)=\sum_{\alpha=1}^{\lvert\mathcal{D}\rvert}(\operatorname{sigmoid}(\theta_{\alpha})-s_{\alpha})^{2}$
over a dataset $\mathcal{D}=\\{s_{\alpha}\\}$. This problem can be recast as a
hidden convex problem with
$f(x)=\sum_{\alpha=1}^{\lvert\mathcal{D}\rvert}(x_{\alpha}-s_{\alpha})^{2}$
and $\chi(\theta)=\operatorname{sigmoid}(\theta)$.
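To make the factorization $\ell=f\circ\chi$ concrete, here is a minimal NumPy sketch of the example above; the synthetic dataset and the random seed are illustrative assumptions. It evaluates the loss through both layers and recovers the control-layer gradient from the latent one via the chain rule.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0.1, 0.9, size=8)        # hypothetical dataset D = {s_alpha}

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def chi(theta):                          # representation map: chi(theta) = sigmoid(theta)
    return sigmoid(theta)

def f(x):                                # latent loss: convex (quadratic) in x
    return np.sum((x - s) ** 2)

def loss(theta):                         # control-layer loss: non-convex in theta
    return f(chi(theta))

theta = rng.normal(size=s.shape)
x = chi(theta)
assert np.isclose(loss(theta), f(x))     # ell = f o chi

# chain rule: grad_theta ell = Jac(chi)^T grad_x f; here the Jacobian is diagonal
grad_x_f = 2.0 * (x - s)
jac_chi_diag = x * (1.0 - x)             # d sigmoid(t)/dt = sigmoid(t) * (1 - sigmoid(t))
grad_theta_ell = jac_chi_diag * grad_x_f
```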
###### Example 2.2.
A more complex scenario involves non-convex / non-concave min-max optimization
problems of the form
$\min_{\theta_{1}}\max_{\theta_{2}}\chi_{1}(\theta_{1})^{\intercal}C\chi_{2}(\theta_{2})$
where $\chi_{1}$ and $\chi_{2}$ are preconfigured multi-layer perceptrons
constructed with smooth activation functions (such as CeLUs). This problem can
be reformulated as a hidden bilinear problem by setting
$f_{1}(x_{1},x_{2})=x_{1}^{\intercal}Cx_{2}=-f_{2}(x_{1},x_{2})$.
Before moving forward, there are some points worth noting regarding the above
definitions.
###### Remark 1.
First, the requirement $\operatorname{cl}(\chi(\Theta))=\mathcal{X}$ means
that all latent variable profiles $x\in\mathcal{X}$ can be approximated to
arbitrary accuracy in the game’s control space. The reason that we do not make
the stronger assumption $\chi(\Theta)=\mathcal{X}$ is to capture cases like
the sigmoid map: if $\chi(\theta)=[1+\exp(-\theta)]^{-1}$ for
$\theta\in\mathbb{R}$, the image of $\chi$ is the interval $(0,1)$, which is
convex but not closed.
###### Remark 2.
Second, the requirement that $\chi$ has no critical points simply means that,
for any control variable configuration $\theta\in\Theta$, the Jacobian
$\operatorname{Jac}(\chi(\theta))$ of $\chi$ at $\theta$ has full rank. By the
implicit function theorem [23], this simply means that $\chi$ locally looks
like a projection (in suitable coordinates around $\theta$), so it is possible
to affect a measurable change along any feasible latent direction by updating
each player’s control variables appropriately. This is no longer true if
$\chi$ has critical points, so this is a minimal requirement to ensure that no
spurious equilibria appear in the game’s latent space.
To proceed, we will assume that the latent game is _diagonally convex_ in the
sense of Rosen [38], a condition more commonly known in the optimization
literature as _monotonicity_ [12]. Formally, let
$g_{i}(x)=\nabla_{i}f_{i}(x)\coloneqq\nabla_{x_{i}}f_{i}(x)$ (2)
denote the individual gradient of the latent loss function of player
$i\in\mathcal{N}$, and write $g(x)=(g_{1}(x),\dotsc,g_{N}(x))$ for the profile
thereof. We then say that the latent game $\mathcal{G}$ is:
* •
_Monotone_ if
$\langle g(x^{\prime})-g(x),\,x^{\prime}-x\rangle\geq 0\quad\text{for all
$x,x^{\prime}\in\mathcal{X}$}.$ (3a)
* •
_Strictly monotone_ if
$\langle g(x^{\prime})-g(x),\,x^{\prime}-x\rangle>0\quad\text{for all
$x,x^{\prime}\in\mathcal{X}$, $x\neq x^{\prime}$}.$ (3b)
* •
_Strongly monotone_ if, for some $\mu>0$, we have
$\langle g(x^{\prime})-g(x),\,x^{\prime}-x\rangle\geq\mu\lVert
x^{\prime}-x\rVert^{2}\quad\text{for all $x,x^{\prime}\in\mathcal{X}$}.$ (3c)
Clearly, strong monotonicity implies strict monotonicity, which in turn
implies monotonicity; on the other hand, when we want to distinguish between
problems that are monotone but not strictly monotone, we will say that $g$ is
_merely monotone_. Finally, extending the above to the base game, we will say
that $\Gamma$ admits a _hidden monotone structure_ when the latent game
$\mathcal{G}$ is monotone as above (and likewise for strictly / strongly
monotone structures).
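As a quick sanity check of these notions, the following NumPy sketch verifies (3c) numerically for a hypothetical two-player latent game with quadratic losses; the game, the modulus $\mu$, and the sampling box are illustrative choices, not part of the setup above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-player latent game with quadratic losses:
#   f1(x1, x2) = x1**2 + x1*x2,   f2(x1, x2) = x2**2 - x1*x2,
# so g(x) = (2*x1 + x2, 2*x2 - x1), which is strongly monotone with mu = 2.
def g(x):
    x1, x2 = x
    return np.array([2 * x1 + x2, 2 * x2 - x1])

mu = 2.0
for _ in range(10_000):
    x, xp = rng.uniform(-5, 5, size=2), rng.uniform(-5, 5, size=2)
    lhs = np.dot(g(xp) - g(x), xp - x)
    assert lhs >= mu * np.dot(xp - x, xp - x) - 1e-9   # condition (3c) on random samples
```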
Examples of monotone games include neural Kelly auctions and Cournot
oligopolies [20, 5, 25, 6], covariance matrix optimization problems and power
control [39, 27, 29], certain classes of congestion games [35], etc. In
particular, in the case of convex minimization problems, monotonicity (resp.
strict / strong monotonicity) corresponds to convexity (resp. strict / strong
convexity) of the problem’s objective function. In all cases, it is
straightforward to check that the image $x^{\ast}=\chi(\theta^{\ast})$ of a
Nash equilibrium $\theta^{\ast}\in\Theta$ of the base game satisfies a
Stampacchia variational inequality of the form
$\langle g(x^{\ast}),\,x-x^{\ast}\rangle\geq 0\quad\text{for all
$x\in\mathcal{X}$}.$ (SVI)
In turn, by monotonicity, this characterization is equivalent to the Minty
variational inequality
$\langle g(x),\,x-x^{\ast}\rangle\geq 0\quad\text{for all $x\in\mathcal{X}$}.$
(MVI)
To avoid trivialities, we will assume throughout that the solution set
$\mathcal{X}^{\ast}$ of (SVI)/(MVI) is nonempty. This is a standard assumption
without which the problem is not well-posed – and hence, meaningless from an
algorithmic perspective.
#### Notation.
To streamline notation (and unless explicitly mentioned otherwise), we will
denote control variables by $\theta_{i}$ and $\theta$, and we will write
$x_{i}=\chi_{i}(\theta_{i})$ and $x=\chi(\theta)$ for the induced latent
variables. We will also write $m\coloneqq\sum_{i}m_{i}$ for the dimensionality
of the game’s control space $\Theta$ and $d\coloneqq\sum_{i}d_{i}$ for the
dimensionality of the latent space $\mathcal{X}$ (so, in general, $m\geq d$).
Finally, when the representation map $\chi$ is clear from the context, we will
write
$\mathbf{J}_{i}(\theta_{i})\coloneqq\operatorname{Jac}(\chi_{i}(\theta_{i}))\in\mathbb{R}^{d_{i}\times
m_{i}}$ for the Jacobian matrix of $\chi_{i}$ at $\theta_{i}\in\Theta_{i}$,
and $\mathbf{J}(\theta)=\bigoplus_{i}\mathbf{J}_{i}(\theta_{i})$ for the
associated block diagonal sum.
## 3\. Hidden gradients and preconditioning
We are now in a position to present our main algorithmic scheme for
equilibrium learning in games with a hidden monotone structure. In this
regard, our aim will be to overcome the following limitations in the existing
literature on learning in hidden games: (it n) the literature so far has
focused exclusively on continuous-time dynamics, with no discrete-time
algorithms proven to efficiently converge to a solution; (it n) the number of
players is typically limited to two; and (it n) the representation maps are
_separable_ in the sense that the control variables are partitioned into
subsets and each subset affects exactly one latent variable.111The
separability assumption also rules out the logit representation
$\exp(\theta_{i\alpha})\big{/}\sum_{\beta}\exp(\theta_{i\beta})$ that is
standard when the latent structure expresses mixed strategies in a finite
game.
We start by addressing the last two challenges first. Specifically, we begin
by introducing a gradient preconditioning scheme that allows us to design a
convergent _continuous-time_ dynamical system for hidden monotone games with
an _arbitrary_ number of players and _no separability_ restrictions for its
representation maps. Subsequently, we propose a bona fide algorithmic scheme –
which we call _preconditioned hidden gradient descent_ (PHGD) – by
discretizing the said dynamics, and we analyze the algorithm’s long-run
behavior in Section 4.
### 3.1. Continuous-time dynamics for hidden games
To connect our approach with previous works in the literature, our starting
point will be a simple setting already captured within the model of [32], min-
max games with a hidden convex-concave structure and unconstrained one-
dimensional control and latent spaces per player, i.e.,
$\Theta_{i}=\mathbb{R}=\mathcal{X}_{i}$ for $i=1,2$. We should of course note
that this specific setting is fairly restrictive and not commonly found in
deep neural network practice: in real-world deep learning applications, neural
nets generally comprise multiple interconnected layers with distinct
activation functions, so the input-output relations are markedly more complex
and intertwined. Nevertheless, the setting’s simplicity makes the connection
with natural gradient methods particularly clear, so as in [32], it will serve
as an excellent starting point.
Concretely, Mladenovic et al. [32] analyze the _natural hidden gradient
dynamics_
$\dot{\theta}_{i}=-\frac{1}{\lvert\chi_{i}^{\prime}(\theta_{i})\rvert^{2}}\frac{\partial\ell_{i}}{\partial\theta_{i}}\quad\text{for
all $i\in\mathcal{N}$}.$ (NHGD)
The reason for this terminology – and the driving force behind the above
definition – is the observation that, in one-dimensional settings, a direct
application of the chain rule to the defining relation
$\ell_{i}=f_{i}\circ\chi$ of the game’s latent loss functions yields
$\partial\ell_{i}/\partial\theta_{i}=\chi_{i}^{\prime}(\theta_{i})\cdot\partial
f_{i}/\partial x_{i}$ so, in turn, (NHGD) becomes
$\dot{\theta}_{i}=-\frac{1}{\chi_{i}^{\prime}(\theta_{i})}\frac{\partial
f_{i}}{\partial x_{i}}$ (4)
where, for simplicity, we are tacitly assuming that
$\chi_{i}^{\prime}(\theta_{i})>0$.
This expression brings two important points to light. First, even though
(NHGD) is defined in terms of the actual, control-layer gradients
$\partial\ell_{i}/\partial\theta_{i}$ of $\Gamma$, Eq. 4 shows that the
dynamics are actually driven by the latent-layer, “hidden gradients”
$g_{i}(x)=\partial f_{i}/\partial x_{i}$ of $\mathcal{G}$. Second, the
preconditioner $\lvert\chi_{i}^{\prime}(\theta_{i})\rvert^{-2}$ in (NHGD) can
be seen as the inverse of a Riemannian metric on $\Theta$: instead of defining gradients
relative to the standard Euclidean metric of $\Theta\equiv\mathbb{R}^{N}$,
(NHGD) can be seen as a gradient flow relative to the Riemannian metric
$g_{ij}(\theta)=\delta_{ij}\,\lvert\chi_{i}^{\prime}(\theta_{i})\rvert^{2}$, which captures the
“natural” geometry induced by the representation map $\chi$.222We do not
attempt to provide here a primer on Riemannian geometry; for a masterful
introduction, see [22].
From an operational standpoint, the key property of (NHGD) that enabled the
analysis of [32] is the observation that the $L^{2}$ energy function
$E(\theta)=\tfrac{1}{2}\lVert\chi(\theta)-x^{\ast}\rVert^{2}$ (5)
between the latent representation $x=\chi(\theta)$ of $\theta$ and a solution
$x^{\ast}$ of (SVI) / (MVI) is a Lyapunov function for (NHGD). Indeed, a
straightforward differentiation yields
$\dot{E}(\theta)=\frac{1}{2}\frac{d}{dt}\lVert
x-x^{\ast}\rVert^{2}=\sum\nolimits_{i}\dot{x}_{i}\cdot(x_{i}-x^{\ast}_{i})=-[g(\chi(\theta))]^{\intercal}(\chi(\theta)-x^{\ast})\leq
0$ (6)
with the penultimate step following from (4) and the last one from (MVI). It
is then immediate to see that the latent orbits $x(t)=\chi(\theta(t))$ of
(NHGD) converge to equilibrium in strictly monotone games.
The approach in the example above depends crucially on the separability
assumption which, among others, trivializes the problem. Indeed, if there is
no coupling between control variables in the game’s latent space, the
equations $x_{i}=\chi_{i}(\theta_{i})$ can be backsolved easily for $\theta$
(e.g., via binary search), so it is possible to move back-and-forth between
the latent and control layers, ultimately solving the game in the latent layer
and subsequently extracting a solution configuration in the control layer.
Unfortunately however, extending the construction of (NHGD) to a non-separable
setting is not clear, so it is likewise unclear how to exploit the hidden
structure of the game beyond the separable case.
To that end, our point of departure is the observation that the Lyapunov
property (6) of the energy function $E(\theta)$ is precisely the key feature
that enables convergence of (NHGD). Thus, assuming that control variables are
mapped to latent variables via a general – though possibly highly coupled –
representation map $\chi\colon\Theta\to\mathcal{X}$, we will consider an
abstract preconditioning scheme of the form
$\dot{\theta}_{i}=-\mathbf{P}_{i}(\theta_{i})v_{i}(\theta)$ (7)
where
$v_{i}(\theta)\coloneqq\nabla_{\theta_{i}}\ell_{i}(\theta)$ (8)
denotes the individual loss gradient of player $i$, while the preconditioning
matrix $\mathbf{P}_{i}(\theta_{i})\in\mathbb{R}^{m_{i}\times m_{i}}$ is to be
designed so that (6) still holds under (7). In this regard, a straightforward
calculation (which we prove in Appendix A) yields the following:
###### Lemma 1.
Under the dynamics (7), we have
$\dot{E}(\theta)=-\sum\nolimits_{i\in\mathcal{N}}[g_{i}(\chi(\theta))]^{\intercal}\mathbf{J}_{i}(\theta_{i})\mathbf{P}_{i}(\theta_{i})[\mathbf{J}_{i}(\theta_{i})]^{\intercal}(\chi_{i}(\theta_{i})-x^{\ast}_{i})$
(9)
where, as per Section 2, $\mathbf{J}_{i}(\theta_{i})\in\mathbb{R}^{d_{i}\times
m_{i}}$ denotes the Jacobian matrix of the map
$\chi_{i}\colon\Theta_{i}\to\mathcal{X}_{i}$. More compactly, letting
$\mathbf{P}\coloneqq\bigoplus_{i}\mathbf{P}_{i}$ denote the block-diagonal
ensemble of the players’ individual preconditioning matrices $\mathbf{P}_{i}$
(and suppressing control variable arguments for concision), we have
$\dot{E}=-g(x)^{\intercal}\cdot\mathbf{J}\mathbf{P}\mathbf{J}^{\intercal}\cdot(x-x^{\ast})$
(10)
In view of Lemma 1, a direct way to achieve the target Lyapunov property (6)
would be to find a preconditioning matrix $\mathbf{P}$ such that
$\mathbf{J}\mathbf{P}\mathbf{J}^{\intercal}=\mathbf{I}$. However, since
$\mathbf{J}$ is surjective (by the faithfulness assumption for $\chi$), the
Moore-Penrose inverse $\mathbf{J}^{+}$ of $\mathbf{J}$ will be a right inverse
to $\mathbf{J}$, i.e., $\mathbf{J}\mathbf{J}^{+}=\mathbf{I}$. Hence, letting
$\mathbf{P}=(\mathbf{J}^{\intercal}\mathbf{J})^{+}=\mathbf{J}^{+}[\mathbf{J}^{+}]^{\intercal}$,
we obtain:
$\mathbf{J}\mathbf{P}\mathbf{J}^{\intercal}=\mathbf{J}(\mathbf{J}^{\intercal}\mathbf{J})^{+}\mathbf{J}^{\intercal}=\mathbf{J}\mathbf{J}^{+}(\mathbf{J}^{\intercal})^{+}\mathbf{J}^{\intercal}=\mathbf{I}\cdot\mathbf{I}=\mathbf{I}$
(11)
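The identity (11) is easy to check numerically; the following one-off NumPy sketch uses a random full-row-rank matrix as a stand-in for $\mathbf{J}(\theta)$ and verifies both $\mathbf{P}=(\mathbf{J}^{\intercal}\mathbf{J})^{+}=\mathbf{J}^{+}[\mathbf{J}^{+}]^{\intercal}$ and $\mathbf{J}\mathbf{P}\mathbf{J}^{\intercal}=\mathbf{I}$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 3, 7                                   # latent dim <= control dim
J = rng.normal(size=(d, m))                   # full row rank almost surely

P = np.linalg.pinv(J.T @ J)                   # P = (J^T J)^+
assert np.allclose(P, np.linalg.pinv(J) @ np.linalg.pinv(J).T)   # = J^+ (J^+)^T
assert np.allclose(J @ P @ J.T, np.eye(d))    # identity (11): J P J^T = I
```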
In this way, unraveling the above, we obtain the _preconditioned hidden
gradient flow_
$\dot{\theta}_{i}=-\mathbf{P}_{i}(\theta_{i})\,v_{i}(\theta)\quad\text{with}\quad\mathbf{P}_{i}(\theta_{i})=[\mathbf{J}_{i}(\theta_{i})^{\intercal}\mathbf{J}_{i}(\theta_{i})]^{+}$
(PHGF)
By virtue of design, the following property of (PHGF) is then immediate:
###### Proposition 1.
Suppose that $\Gamma$ admits a latent strictly monotone structure in the sense
of (3b). Then the energy function (5) is a strict Lyapunov function for
(PHGF), and every limit point $\theta^{\ast}$ of $\theta(t)$ is a Nash
equilibrium of $\Gamma$.
Proposition 1 validates our design choices for the preconditioning matrix
$\mathbf{P}$ as it illustrates that the dynamics (PHGF) converge to Nash
equilibrium, without any separability requirements or dimensionality
restrictions; in this regard, Proposition 1 already provides a marked
improvement over the corresponding convergence result of [32] for (NHGD). To
streamline our presentation, we defer the proofs of Lemma 1 and Proposition 1 to Appendix
A, and instead proceed directly to provide a bona fide, algorithmic
implementation of (PHGF).
Figure 3. Exploiting a hidden convex structure in the control layer. On the
left, we present the dynamics (PHGF) in the example of minimizing the simple
function
$f(\theta)=\operatorname{KL}(\mathrm{logit}(\theta_{1},\theta_{2})\|(1/2,1/3,1/6))$
over $\Theta\equiv\mathbb{R}^{2}$. In the subfigure to the right, we
illustrate the hidden convex structure of the energy landscape, from the non-
convex sublevel sets of $\Theta$ to the latent space
$\mathcal{X}\equiv\\{(x_{1},x_{2})\in\mathbb{R}^{2}_{\geq 0}:x_{1}+x_{2}\leq
1\\}$.
### 3.2. PHGD
In realistic machine learning problems, any algorithmic scheme based on (PHGF)
will have to be run in an iterative, discrete-time environment; moreover, due
to the challenges posed by applications with large datasets and optimization
objectives driven by the minimization of empirical risk, we will need to
assume that players only have access to a stochastic version of their full,
deterministic gradients. As such, we will make the following blanket
assumptions that are standard in the stochastic optimization literature [17,
21]:
###### Assumption 1.
Each agent’s loss function is an expectation of a random function
$L_{i}\colon\Theta\times\Omega\to\mathbb{R}$ over a complete probability space
$(\Omega,\mathcal{F},\operatorname{\mathbb{P}})$, i.e.,
$\ell_{i}(\theta)=\operatorname{\mathbb{E}}[L_{i}(\theta;\omega)]$. We further
assume that:
1. (1)
$L_{i}(\theta;\omega)$ is measurable in $\omega$ and $\beta$-Lipschitz smooth
in $\theta$ (for all $\theta\in\Theta$ and all $\omega\in\Omega$
respectively).
2. (2)
The gradients of $L_{i}$ have bounded second moments, i.e.,
$\sup_{\theta\in\Theta}\operatorname{\mathbb{E}}[\lVert\nabla
L_{i}(\theta;\omega)\rVert^{2}]\leq M^{2}.$ (12)
Taken together, the two components of Assumption 1 imply that each $\ell_{i}$
is differentiable and Lipschitz continuous (indeed, by Jensen’s inequality, we
have $\lVert\operatorname{\mathbb{E}}[\nabla
L_{i}(\theta;\omega)]\rVert^{2}\leq\operatorname{\mathbb{E}}[\lVert\nabla
L_{i}(\theta;\omega)\rVert^{2}]\leq M^{2}$). Moreover, by standard dominated
convergence arguments, $\nabla L_{i}(\theta;\omega)$ can be seen as an
unbiased estimator of $\nabla\ell_{i}(\theta)$, that is,
$\nabla\ell_{i}(\theta)=\operatorname{\mathbb{E}}[\nabla L_{i}(\theta;\omega)]$ for
all $\theta\in\Theta$. In view of this, we will refer to the individual loss
gradients $\nabla_{i}L_{i}(\theta;\omega)$ of player $i\in\mathcal{N}$ as the
player’s individual _stochastic gradients_.
With all this in mind, the _preconditioned hidden gradient descent_ algorithm
is defined as the stochastic first-order recursion
$\theta_{i,t+1}=\theta_{i,t}-\gamma_{t}\mathbf{P}_{i,t}V_{i,t}$ (PHGD)
where:
1. (1)
$\theta_{i,t}$ denotes the control variable configuration of player
$i\in\mathcal{N}$ at each stage $t=1,2,\dotsc$
2. (2)
$\gamma_{t}>0$ is a variable step-size sequence, typically of the form
$\gamma_{t}\propto 1/t^{p}$ for some $p\in[0,1]$.
3. (3)
$\mathbf{P}_{i,t}\coloneqq\mathbf{P}_{i}(\theta_{i,t})=[\mathbf{J}_{i}(\theta_{i,t})^{\intercal}\mathbf{J}_{i}(\theta_{i,t})]^{+}$
is the preconditioner of player $i\in\mathcal{N}$ at the $t$-th epoch.
4. (4)
$V_{i,t}\coloneqq\nabla_{i}L_{i}(\theta_{t};\omega_{t})$ is an individual
stochastic loss gradient generated at the control variable configuration
$\theta_{t}$ by an i.i.d. sample sequence $\omega_{t}\in\Omega$,
$t=1,2,\dotsc$
The basic recursion (PHGD) can be seen as a noisy Euler discretization of the
continuous flow (PHGF), in the same way that ordinary stochastic gradient
descent can be seen as a noisy discretization of gradient flows. In this way,
(PHGD) is subject to the same difficulties underlying the analysis of
stochastic gradient descent methods; we address these challenges in the next
section.
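To fix ideas, the sketch below runs (PHGD) with full (deterministic) gradient feedback on a hypothetical two-player hidden strongly monotone game; the latent losses are $f_{1}(x_{1},x_{2})=x_{1}^{2}+x_{1}x_{2}$ and $f_{2}(x_{1},x_{2})=x_{2}^{2}-x_{1}x_{2}$, and the representation maps $\chi_{i}(\theta_{i})=\phi(w_{i}^{\intercal}\theta_{i})$ with $\phi(z)=z+0.1\sin z$ are illustrative choices whose Jacobians have full rank and derivatives bounded away from zero (Assumption 2); none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical hidden strongly monotone game: unique latent equilibrium x* = (0, 0).
w = [np.array([1.0, -0.5]), np.array([0.3, 1.2])]      # fixed weights of the representation maps

def phi(z):  return z + 0.1 * np.sin(z)                 # derivative lies in [0.9, 1.1]
def dphi(z): return 1.0 + 0.1 * np.cos(z)

def chi(i, theta): return np.array([phi(w[i] @ theta)])
def jac(i, theta): return (dphi(w[i] @ theta) * w[i])[None, :]   # 1 x 2, full row rank

def g(x):                                               # profile of latent gradients g_i = grad_{x_i} f_i
    x1, x2 = x
    return np.array([2 * x1 + x2, 2 * x2 - x1])

theta, gamma = [rng.normal(size=2), rng.normal(size=2)], 0.3
for _ in range(300):
    x = np.array([chi(i, theta[i])[0] for i in range(2)])
    gx = g(x)
    for i in range(2):
        J = jac(i, theta[i])
        V = J.T @ np.array([gx[i]])                     # control-layer gradient (chain rule)
        P = np.linalg.pinv(J.T @ J)                     # preconditioner (J^T J)^+
        theta[i] = theta[i] - gamma * (P @ V)           # PHGD step for player i

print([chi(i, theta[i])[0] for i in range(2)])          # both latent variables ~ 0 = x*
```

In this toy run, the latent iterates $x_{t}=\chi(\theta_{t})$ approach the equilibrium geometrically, consistent with the rates established for full gradient feedback in Section 4.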
## 4\. Convergence analysis and results
In this section, we present our main results regarding the convergence of
(PHGD) in hidden monotone games. Because we are interested not only on the
asymptotic convergence of the method but also on its _rate_ of convergence, we
begin this section by defining a suitable merit function for each class of
hidden monotone games under consideration; subsequently, we present our main
results in Section 4.2, where we also discuss the main technical tools
enabling our analysis. To streamline our presentation, all proofs are deferred
to Appendix B.
### 4.1. Merit functions and convergence metrics
In the variational context of (SVI) / (MVI), the quality of a candidate
solution $\hat{x}$ is typically evaluated by means of the _restricted merit
function_
$\operatorname{Gap}\nolimits_{\mathcal{C}}(\hat{x})\coloneqq\sup\nolimits_{x\in\mathcal{C}}\langle
g(x),\,\hat{x}-x\rangle$ (13)
where the “test domain” $\mathcal{C}$ is a relatively open subset of
$\mathcal{X}$ [3]. The _raison d’être_ of this definition is that, if $g$ is
monotone and $x^{\ast}\in\mathcal{C}$ is a solution of (SVI) / (MVI), we have
$\langle g(x),\,x^{\ast}-x\rangle\leq\langle
g(x^{\ast}),\,x^{\ast}-x\rangle\leq 0\quad\text{for all $x\in\mathcal{C}$},$
(14)
so the supremum in (13) cannot be too positive if $\hat{x}$ is an approximate
solution of (SVI) / (MVI). This is encoded in the following lemma, which,
among others, justifies the terminology “merit function”:
###### Lemma 2.
Suppose that $g$ is monotone. If $\hat{x}$ is a solution of (SVI) / (MVI), we
have $\operatorname{Gap}_{\mathcal{C}}(\hat{x})=0$ whenever
$\hat{x}\in\mathcal{C}$. Conversely, if
$\operatorname{Gap}_{\mathcal{C}}(\hat{x})=0$ and $\mathcal{C}$ is a
neighborhood of $\hat{x}$ in $\mathcal{X}$, then $\hat{x}$ is a solution of
(SVI) / (MVI).
Lemma 2 extends similar statements by Auslender & Teboulle [3] and Nesterov
[33]; the precise variant that we state above can be found in [1], but, for
completeness, we provide a proof in Appendix B.
Now, since a latent variable profile $x^{\ast}=\chi(\theta^{\ast})$ solves
(SVI) if and only if the control variable configuration $\theta^{\ast}$ is
a Nash equilibrium of the base game, the quality of a candidate solution
$\hat{\theta}\in\Theta$ with $\hat{x}=\chi(\hat{\theta})$ can be assessed by
the induced gap function
$\operatorname{Gap}(\hat{\theta})\coloneqq\operatorname{Gap}_{\mathcal{X}}(\chi(\hat{\theta}))=\sup\nolimits_{x\in\mathcal{X}}\langle
g(x),\,\hat{x}-x\rangle.$ (15)
Indeed, since $\hat{x}=\chi(\hat{\theta})\in\mathcal{X}$, Lemma 2 shows that
$\operatorname{Gap}(\hat{\theta})\geq 0$ with equality if and only if
$\hat{\theta}$ is a Nash equilibrium of the base game $\Gamma$. In this regard, Eq.
15 provides a valid equilibrium convergence metric for $\Gamma$, so we will
use it freely in the sequel as such; for a discussion of alternative
convergence metrics, we refer the reader to Appendix B.
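For intuition on how (13) and (15) behave, the short sketch below brute-forces the supremum over a grid on a compact test domain for a simple monotone latent operator; the operator and the test domain $\mathcal{C}=[-1,1]^{2}$ are illustrative assumptions, and a grid search is of course only viable in very low dimension.

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 2.0]])          # monotone latent operator g(x) = A x
def g(x): return A @ x

grid = np.linspace(-1.0, 1.0, 101)               # test domain C = [-1, 1]^2, discretized
C = np.array([[a, b] for a in grid for b in grid])

def gap(x_hat):                                   # crude estimate of the sup in (13)
    return max(float(g(x) @ (x_hat - x)) for x in C)

print(gap(np.array([0.0, 0.0])))                  # ~0: zero gap at the equilibrium x* = (0, 0)
print(gap(np.array([0.5, 0.5])))                  # > 0: positive gap away from the equilibrium
```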
### 4.2. Convergence results
We are now in a position to state our main results regarding the equilibrium
convergence properties of (PHGD). To streamline our presentation, we present
our results from coarser to finer, starting with games that admit a hidden
_merely_ monotone structure and _stochastic_ gradient feedback, and refining
our analysis progressively to games with a hidden _strongly_ monotone
structure and _full_ gradient feedback.333As expected, convergence rates
improve along the way. In addition, to quantify the distortion between the
game’s latent and control layers, we will require a technical regularity
assumption for the game’s representation map $\chi\colon\Theta\to\mathcal{X}$.
###### Assumption 2.
The singular values of the Jacobian $\mathbf{J}(\theta)$ of the representation
map $\chi$ are bounded as
$\sigma_{\min}^{2}\leq\operatorname{eig}(\mathbf{J}(\theta)\mathbf{J}(\theta)^{\intercal})\leq\sigma_{\max}^{2}$
(16)
for some $\sigma_{\min},\sigma_{\max}\in(0,\infty)$ and for all
$\theta\in\Theta$.
With all this in hand, we begin by studying the behavior of (PHGD) in games
with a hidden monotone structure.
###### Theorem 1 (PHGD in hidden monotone games).
Suppose that players run (PHGD) in a hidden monotone game with learning rate
$\gamma_{t}\propto 1/t^{1/2}$. Then, under Assumptions 1 and 2, the averaged
process $\bar{\theta}_{t}\in\chi^{-1}\big{(}t^{-1}\sum_{s=1}^{t}x_{s}\big{)}$, where $x_{s}=\chi(\theta_{s})$,
enjoys the equilibrium convergence rate
$\operatorname{\mathbb{E}}[\operatorname{Gap}(\bar{\theta}_{t})]=\operatorname{\mathcal{O}}(\log t/\sqrt{t}).$ (17)
As far as we are aware, Theorem 1 is the first result of its kind in the
hidden games literature – that is, describing the long-run behavior of a
discrete-time algorithm with stochastic gradient input. At the same time, it
is subject to two important limitations: the first is that the averaged state
$\bar{\theta}_{t}$ cannot be efficiently computed for general representation
maps; second, even if it could, the $\operatorname{\mathcal{O}}(\log
t/\sqrt{t})$ convergence rate is relatively slow. The two results that follow
show that both limitations can be overcome in games with a hidden _strongly
monotone_ structure. In this case (SVI)/(MVI) admits a (necessarily) unique
solution $x^{\ast}$ in the game’s latent space, so we will measure
convergence in terms of the latent equilibrium distance
$\operatorname{Err}(\hat{\theta})\coloneqq\tfrac{1}{2}\lVert\chi(\hat{\theta})-x^{\ast}\rVert^{2}.$
(18)
With this in mind, we have the following convergence results:
###### Theorem 2 (PHGD in hidden strongly monotone games).
Suppose that players run (PHGD) in a hidden $\mu$-strongly monotone game with
$\gamma_{t}=\gamma/t$ for some $\gamma>\mu$. Then, under Assumptions 1 and 2,
the induced sequence of play $\theta_{t}\in\Theta$, $t=1,2,\dotsc$, enjoys the
equilibrium convergence rate
$\operatorname{\mathbb{E}}[\operatorname{Err}(\theta_{t})]=\operatorname{\mathcal{O}}(1/t).$
(19)
This rate is tight, even for standard strongly monotone games. To improve it
further, we will need to assume that (PHGD) is run with full, _deterministic_
gradients, i.e., $V_{i,t}=v_{i}(\theta_{t})$ for all $i\in\mathcal{N}$. In this case, we obtain the following
refinement of Theorem 2.
###### Theorem 3 (PHGD with full gradient feedback in hidden strongly
monotone games).
Suppose that players run (PHGD) in a hidden $\mu$-strongly monotone game with
full gradient feedback, and a sufficiently small learning rate $\gamma>0$.
Then, under Assumptions 1 and 2, the induced sequence of play
$\theta_{t}\in\Theta$, $t=1,2,\dotsc$, converges to equilibrium at a geometric
rate, i.e.,
$\operatorname{Err}(\theta_{t})=\operatorname{\mathcal{O}}(\rho^{t})$ (20)
for some constant $\rho\in(0,1)$ that depends only on the primitives of
$\Gamma$ and the representation map $\chi$.
Importantly, up to logarithmic factors, the convergence rates of Theorems 1–3
mirror the corresponding rates for learning in monotone games. This take-away
is particularly important as it shows that, _when it exists, a hidden convex
structure can be exploited to the greatest possible degree, without any loss
of speed in convergence relative to standard, non-hidden convex problems._
The proofs of Theorems 1–3 are quite involved, so we defer the details to
Appendix B. That said, to give an idea of the technical steps involved, we
provide below (without proof) two lemmas that play a pivotal role in our
analysis. The first one hinges on a transformation of the problem’s defining
vector field $v(\theta)=(\nabla_{i}\ell_{i}(\theta))_{i\in\mathcal{N}}$ which,
coupled with the specific choice of preconditioner
$\mathbf{P}_{i}(\theta_{i})=[\mathbf{J}_{i}(\theta_{i})^{\intercal}\mathbf{J}_{i}(\theta_{i})]^{+}$
in (PHGD) allows us to effectively couple the latent and control layers of the
problem in a “covariant” manner:
###### Lemma 3.
Fix some $\hat{x}\in\mathcal{X}$, and consider the energy function
$E(\theta;\hat{x})=(1/2)\lVert\chi(\theta)-\hat{x}\rVert^{2}$. Then, for all
$\theta\in\Theta$, we have
$\mathbf{J}(\theta)\mathbf{P}(\theta)\nabla_{\theta}E(\theta;\hat{x})=\chi(\theta)-\hat{x}.$
(21)
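Lemma 3 can also be verified numerically. In the sketch below, the map $\chi(\theta)=\tanh(A\theta)$ with a random matrix $A$ is a stand-in for a representation map; the gradient of $E(\,\cdot\,;\hat{x})$ is computed by central finite differences and $\mathbf{J}\mathbf{P}\nabla_{\theta}E$ is checked against $\chi(\theta)-\hat{x}$, as in (21).

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(2, 4))                          # stand-in representation: chi(theta) = tanh(A theta)

def chi(theta): return np.tanh(A @ theta)
def jac(theta): return np.diag(1.0 - np.tanh(A @ theta) ** 2) @ A   # 2 x 4 Jacobian

theta, x_hat = rng.normal(size=4), np.array([0.3, -0.2])

def E(th): return 0.5 * np.sum((chi(th) - x_hat) ** 2)

eps, gradE = 1e-6, np.zeros(4)                       # central-difference gradient of E at theta
for k in range(4):
    e = np.zeros(4); e[k] = eps
    gradE[k] = (E(theta + e) - E(theta - e)) / (2 * eps)

J = jac(theta)
P = np.linalg.pinv(J.T @ J)
print(J @ P @ gradE)                                 # matches chi(theta) - x_hat up to O(eps^2)
print(chi(theta) - x_hat)
```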
Our second intermediate result builds on Lemma 3 and provides a “template
inequality” for the energy function $E(\theta;\hat{x})$ in the spirit of [7,
18, 11].
###### Lemma 4 (Template inequality).
Suppose that Assumptions 1 and 2 hold. Then, with notation as in Lemma 3, the
sequence $E_{t}\coloneqq(1/2)\lVert\chi(\theta_{t})-\hat{x}\rVert^{2}$,
$t=1,2,\dotsc$, satisfies
$E_{t+1}\leq
E_{t}-\gamma_{t}g(x_{t})^{\intercal}(x_{t}-\hat{x})+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t},$
(22)
where $x_{t}\coloneqq\chi(\theta_{t})$,
$\phi_{t}\coloneqq(\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})V_{t}-g(x_{t}))^{\intercal}(x_{t}-\hat{x})$
and $\psi_{t}$ is a random error sequence with
$\sup_{t}\operatorname{\mathbb{E}}[\psi_{t}]<\infty$.
This inequality plays a pivotal role in our analysis because it allows us to
couple the restricted merit function (15) in the game’s control space with
the evolution of the algorithm’s quasi-Lyapunov function in the game’s latent
space. We provide the relevant details and calculations in Appendix B.
## 5\. Experiments
This section demonstrates our method’s applicability in a couple of different
and insightful setups. Technical details of those setups, as well as
additional experimental results, are deferred to the supplementary material. We
start with a regularized version of the Matching Pennies zero-sum game where the
players’ strategies are controlled by two individual preconfigured
differentiable MLPs. Each MLP acts as the player’s representation map
$\chi_{i}$, which for each input $\theta_{i}$ outputs a one-dimensional latent
variable $x_{i}=\chi_{i}(\theta_{i})$ guaranteed to lie in
$\mathcal{X}_{i}\equiv[0,1]$, the player’s latent space.
Figure 4. A solution trajectory of (PHGD) in a hidden Matching Pennies game
over the sub-level sets of the energy function in (5).
Figure 4 illustrates the trajectory of (PHGD), represented by the black curve.
The algorithm employs a constant step-size of $0.01$ and is initialized at the
arbitrary state $(1.25,2.25)$ in the control variables’ space. The color map
in the figure serves as a visual representation of the level sets associated
with the proposed energy function (5). Notably, the trajectory of the
algorithm intersects each of the energy function’s level sets at most once,
indicating non-cycling behavior (cycling is a common failure mode in
equilibrium computation). Due to the design of our hidden maps, the
stabilization at the point $(0,0)$ corresponds to
$(\chi_{1}(0),\chi_{2}(0))=(\frac{1}{2},\frac{1}{2})$, the unique equilibrium
of the game.
In the second, more complex, example, we consider a strongly-monotone
regularized modification of an (atomic) El Farol Bar congestion game among
$N=30$ players [45, 2]. In this setup, we let the control space of each player
$i$ be multi-dimensional, namely, $\Theta_{i}\equiv\mathbb{R}^{5}$,
$i\in\mathcal{N}$, and, as in the previous example, the representation map of
each player is instantiated by some preconfigured differentiable MLP whose
output $x_{i}\coloneqq\chi_{i}(\theta_{i})$ is guaranteed to lie in $[0,1]$.
The MLP’s output is going to be the probability with which player $i$ visits
the El Farol bar. For the interested reader, further details, including
technical specification of the MLPs, and loss functions of the games, can be
found in the supplementary material.
Figure 5. The evolution of the $L^{2}$ error function
$\operatorname{Err}_{\mathcal{X}}(\chi(\theta))$ of (PHGD) and GD with
constant step-size $0.01$ in the regularized games of Matching Pennies and El
Farol Bar as depicted in a semi-logarithmic scale. In the Matching Pennies
game (left) we depict a single trajectory initialized at $(1.25,2.25)$, while
in the El Farol Bar game (right) we depict the mean and confidence bounds
of $100$ random trajectories.
Fig. 5 provides a comparative analysis of the performance between (PHGD), and
the standard GD method in the aforementioned two game scenarios. In Fig. 5
(left) we explore the Matching Pennies game, where GD exhibits slightly
erratic behavior. Despite eventually converging to the game’s equilibrium
point, GD’s convergence rate, in this case, can be described as linear at
best. In Fig. 5 (right), we examine the El Farol Bar game. Interestingly, in
this highly complex setup, GD fails to reach the equilibrium point entirely.
In contrast, (PHGD) not only manages to converge in both of these setups, but
it also consistently maintains an exponential rate of convergence. This stark
difference underscores the efficacy and robustness of our algorithm.
## 6\. Conclusion
This paper proposed a new algorithmic framework with strong formal convergence
guarantees in a general class of non-convex games with a latent monotone
structure. Our algorithmic method – which we call preconditioned hidden
gradient descent – relies on an appropriately chosen gradient preconditioning
scheme akin to natural gradient ideas. Our class of games combines the useful
structure of monotone operators as well as the notion of latent/hidden
variables that arise in neural networks and can thus model numerous AI
applications. Our results indicate the possibility of deep novel algorithmic
ideas emerging at the intersection of game theory, non-convex optimization and
ML, and offer exciting directions for future work.
## Acknowledgments
This research was supported in part by the National Research Foundation,
Singapore and DSO National Laboratories under its AI Singapore Program (AISG
Award No: AISG2-RP-2020-016), grant PIESGP-AI-2020-01, AME Programmatic Fund
(Grant No.A20H6b0151) from A*STAR and Provost’s Chair Professorship grant
RGEPPV2101. PM is also a member of the Archimedes Unit, Athena RC, Department
of Mathematics, University of Athens, and is grateful for financial support by
the French National Research Agency (ANR) in the framework of the
“Investissements d’avenir” program (ANR-15-IDEX-02), the LabEx PERSYVAL
(ANR-11-LABX-0025-01), MIAI@Grenoble Alpes (ANR-19-P3IA-0003), and project MIS
5154714 of the National Recovery and Resilience Plan Greece 2.0 funded by the
European Union under the NextGenerationEU Program.
## Appendix A Omitted proofs from Section 3
In this first appendix, we provide the technical proofs of our results for the
continuous-time dynamics (PHGF), namely Lemma 1 and Proposition 1. For convenience, we
restate the relevant results as needed.
See Lemma 1.
###### Proof.
Observe that if we expand the dynamical system of equations in (PHGF) with
respect to each individual coordinate, we have that for each player
$i\in\mathcal{N}$, for all profiles $\theta\in\Theta$, and for all coordinates
$\alpha\in\\{1,\ldots,m_{i}\\}$:
$\dot{\theta}_{i\alpha}=-\sum_{\beta=1}^{m_{i}}\mathbf{P}_{i\alpha\beta}(\theta_{i})\frac{\partial\ell_{i}(\theta)}{\partial\theta_{i\beta}}.$
(A.1)
Now, since $\Gamma$ has a hidden monotone structure, and due to the
decomposition $\ell_{i}(\theta)=f_{i}(x)$, we can expand the above as
$\dot{\theta}_{i\alpha}=-\sum_{\beta=1}^{m_{i}}\mathbf{P}_{i\alpha\beta}(\theta_{i})\sum_{r=1}^{d_{i}}\frac{\partial f_{i}(x)}{\partial x_{ir}}\frac{\partial x_{ir}}{\partial\theta_{i\beta}}.$
(A.2)
Furthermore, if we expand the left-hand side (LHS) of (9) we also get
$\dot{E}(\theta)=\sum_{i=1}^{N}\sum_{\alpha=1}^{m_{i}}\frac{\partial
E(\theta)}{\partial\theta_{i\alpha}}\dot{\theta}_{i\alpha},$ (A.3)
where each of the summands of the above expression can also be expanded
further to
$\frac{\partial
E(\theta)}{\partial\theta_{i\alpha}}\dot{\theta}_{i\alpha}=-\sum_{l=1}^{d_{i}}(x_{il}-x_{il}^{\ast})\frac{\partial
x_{il}}{\partial\theta_{i\alpha}}\dot{\theta}_{i\alpha}.$ (A.4)
Putting everything together, (9) follows by direct substitution. ∎
See 1
###### Proof.
First, observe that for any player $i\in\mathcal{N}$, and
$\theta_{i}\in\Theta_{i}$, since the player’s representation map $\chi_{i}$ is
faithful, i.e., $\mathbf{J}_{i}(\theta_{i})$ is maximal rank, it holds:
$\begin{split}\mathbf{J}_{i}(\theta_{i})\mathbf{P}_{i}(\theta_{i})\mathbf{J}_{i}(\theta_{i})^{\intercal}&=\mathbf{J}_{i}(\theta_{i})\mathbf{J}_{i}(\theta_{i})^{+}\big{[}\mathbf{J}_{i}(\theta_{i})^{+}\big{]}^{\intercal}\mathbf{J}_{i}(\theta_{i})^{\intercal}\\\
&=\big{[}\mathbf{J}_{i}(\theta_{i})\mathbf{J}_{i}(\theta_{i})^{+}\big{]}^{\intercal}\\\
&=\mathbf{I}.\end{split}$ (A.5)
That is, $\sum_{\alpha=1}^{m_{i}}\sum_{\beta=1}^{m_{i}}\frac{\partial
x_{il}}{\partial\theta_{i\alpha}}\frac{\partial
x_{ir}}{\partial\theta_{i\beta}}\mathbf{P}_{i\alpha\beta}(\theta_{i})=\delta_{lr}$
for all $l,r\in\\{1,\ldots,d_{i}\\}$, where $\delta_{lr}$ is the Kronecker
delta.
Now, by Lemma 1 we have that:
$\begin{split}\dot{E}(\theta)&=-\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}\sum_{r=1}^{d_{i}}\delta_{lr}(x_{il}-x_{il}^{\ast})\frac{\partial
f_{i}(x)}{\partial x_{ir}}\\\
&=-\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}(x_{il}-x_{il}^{\ast})\frac{\partial
f_{i}(x)}{\partial x_{il}}\\\
&=-\big{\langle}g(x),\,x-x^{\ast}\big{\rangle},\end{split}$ (A.6)
which is non-positive, since $x^{\ast}$ is an optimizer of $f$, i.e., it satisfies
the (MVI) due to the monotonicity of $g$. Additionally, since $g$ is strictly
monotone, $x^{\ast}$ is the unique optimizer of $f$, and, therefore, the above
expression vanishes if and only if $x=x^{\ast}$. ∎
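To make the identity (A.5) and its coordinate form above concrete, the following minimal NumPy sketch checks them for a randomly drawn full-row-rank Jacobian; the matrix `J` is a stand-in for $\mathbf{J}_{i}(\theta_{i})$, and the dimensions are assumptions of the illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 5                       # latent dimension d_i <= control dimension m_i
J = rng.standard_normal((d, m))   # stand-in for J_i(theta_i); full row rank almost surely

J_pinv = np.linalg.pinv(J)        # Moore-Penrose pseudo-inverse J^+
P = J_pinv @ J_pinv.T             # preconditioner P_i(theta_i) = J^+ (J^+)^T

# Identity (A.5): J P J^T = I, i.e. the P-weighted sum of partial derivatives is delta_lr.
print(np.allclose(J @ P @ J.T, np.eye(d)))   # expected: True
```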
## Appendix B Auxiliary results from Section 4
### B.1. Convergence metrics and merit functions
In this appendix, we provide the technical scaffolding required for the
analysis of (PHGD), namely Lemmas 2, 3 and 4. As before, we
restate the relevant results as needed.
See 2
###### Proof of Lemma 2.
Let $\hat{x}\in\mathcal{X}$ be a solution of (SVI)/(MVI) so $\langle
g(\hat{x}),\,x-\hat{x}\rangle\geq 0$ for all $x\in\mathcal{X}$. Then, by the
monotonicity of $g$, we get:
$\displaystyle\langle g(x),\,\hat{x}-x\rangle$ $\displaystyle\leq\langle
g(x)-g(\hat{x}),\,\hat{x}-x\rangle+\langle g(\hat{x}),\,\hat{x}-x\rangle$
$\displaystyle=-\langle g(\hat{x})-g(x),\,\hat{x}-x\rangle-\langle
g(\hat{x}),\,x-\hat{x}\rangle\leq 0,$ (B.1)
so $\operatorname{Gap}_{\mathcal{C}}(\hat{x})\leq 0$. On the other hand, if
$\hat{x}\in\mathcal{C}$, we also get $\operatorname{Gap}_{\mathcal{C}}(\hat{x})\geq\langle
g(\hat{x}),\,\hat{x}-\hat{x}\rangle=0$, so we conclude that
$\operatorname{Gap}_{\mathcal{C}}(\hat{x})=0$.
For the converse statement, assume that
$\operatorname{Gap}_{\mathcal{C}}(\hat{x})=0$ for some $\hat{x}\in\mathcal{C}$
and suppose that $\mathcal{C}$ contains a neighborhood of $\hat{x}$ in
$\mathcal{X}$. We then claim that $\hat{x}$ is a solution of (MVI) over
$\mathcal{C}$, i.e.,
$\langle g(x),\,x-\hat{x}\rangle\geq 0\quad\text{for all $x\in\mathcal{C}$}.$
(B.2)
To see this, assume to the contrary that there exists some
$x_{1}\in\mathcal{C}$ such that
$\langle g(x_{1}),\,x_{1}-\hat{x}\rangle<0$ (B.3)
so, in turn, we get
$0=\operatorname{Gap}_{\mathcal{C}}(\hat{x})\geq\langle
g(x_{1}),\,\hat{x}-x_{1}\rangle>0,$ (B.4)
a contradiction.
With this intermediate “local” result in hand, we are now in a position to
prove that $\hat{x}$ solves (SVI). Indeed, if we suppose to the contrary that
there exists some $z_{1}\in\mathcal{X}$ such that $\langle
g(\hat{x}),\,z_{1}-\hat{x}\rangle<0$, then, by the continuity of $g$, there
exists a neighborhood $\mathcal{U}^{\prime}$ of $\hat{x}$ in $\mathcal{X}$
such that
$\langle g(x),\,z_{1}-x\rangle<0\quad\text{for all
$x\in\mathcal{U}^{\prime}$}.$ (B.5)
Hence, assuming without loss of generality that
$\mathcal{U}^{\prime}\subset\mathcal{U}\subset\mathcal{C}$ (the latter
assumption due to the assumption that $\mathcal{C}$ contains a neighborhood of
$\hat{x}$), and taking $\lambda>0$ sufficiently small so that
$x=\hat{x}+\lambda(z_{1}-\hat{x})\in\mathcal{U}^{\prime}$, we get that
$\langle g(x),\,x-\hat{x}\rangle=\lambda\langle
g(x),\,z_{1}-\hat{x}\rangle<0$, in contradiction to (B.2). We thus conclude
that $\hat{x}$ is a solution of (SVI) – and hence, by monotonicity, also of
(MVI). ∎
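For intuition on how $\operatorname{Gap}_{\mathcal{C}}$ certifies solutions, the sketch below evaluates it on a dense grid for a simple one-dimensional monotone operator; the operator $g(x)=x-1/2$, the test set $\mathcal{C}=[0,1]$, and the grid are assumptions of this illustration, not part of the paper's setup.

```python
import numpy as np

# Gap_C(xhat) = max_{x in C} <g(x), xhat - x> for g(x) = x - 0.5 on C = [0, 1];
# the unique (SVI)/(MVI) solution is x* = 0.5, and the gap should vanish only there.
g = lambda x: x - 0.5
C = np.linspace(0.0, 1.0, 2001)       # dense grid standing in for the test set C

def gap(xhat):
    return np.max(g(C) * (xhat - C))

for xhat in (0.5, 0.3, 0.9):
    print(xhat, round(gap(xhat), 4))  # approximately 0.0, 0.01, 0.04
```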
For intuition, we discuss below some other merit functions that could be
considered as valid convergence metrics. In this regard, the first thing to
note is that the definition of $\operatorname{Gap}(\hat{\theta})$ effectively
goes through the game’s _latent space_ , so it is natural to ask whether a
similar merit function can be defined directly on the game’s _control space_.
To do so, it will be more convenient to start with a linearized variant of
$\operatorname{Gap}_{\mathcal{X}}$ defined over the cone of tangent directions
at $\hat{x}$, namely the so-called “tangent residual gap”
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})\coloneqq-\min\nolimits_{z\in\operatorname{TC}_{\mathcal{X}}(\hat{x}),\lVert
z\rVert\leq 1}\langle g(\hat{x}),\,z\rangle$ (B.6)
i.e., the maximum “ascent” step $\langle-g(\hat{x}),\,z\rangle$ over all admissible
displacement directions $z$ from $\hat{x}$. (In the above,
$\operatorname{TC}_{\mathcal{X}}(\hat{x})$ denotes the tangent cone to
$\mathcal{X}$ at $\hat{x}$, that is, the closure of the set of all rays
emanating from $\hat{x}$ and intersecting $\mathcal{X}$ in at least one other
point.) Just like $\operatorname{Gap}_{\mathcal{C}}$, this linearized merit
function correctly identifies solutions of (SVI)/(MVI):
###### Lemma B.1.
For all $\hat{x}\in\mathcal{X}$, we have
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})\geq 0$ with equality if and only
if $\hat{x}$ solves (SVI)/(MVI).
###### Proof.
Let $\hat{x}\in\mathcal{X}$ be some arbitrary profile of latent variables. By
definition, we have that
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})=-\min\nolimits_{z\in\operatorname{TC}_{\mathcal{X}}(\hat{x}),\lVert
z\rVert\leq 1}\langle g(\hat{x}),\,z\rangle$, where
$\operatorname{TC}_{\mathcal{X}}(\hat{x})$ is the tangent cone to
$\mathcal{X}$ at $\hat{x}$. Observe that since
$\operatorname{TC}_{\mathcal{X}}(\hat{x})$ is a cone,
$0\in\operatorname{TC}_{\mathcal{X}}(\hat{x})$, so
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})\geq-\langle
g(\hat{x}),\,0\rangle=0$. Now assume that
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})=0$. Then, the following are
equivalent:
$\begin{split}\operatorname{TGap}_{\mathcal{X}}(\hat{x})=0&\iff-\min\nolimits_{z\in\operatorname{TC}_{\mathcal{X}}(\hat{x}),\lVert
z\rVert\leq 1}\langle g(\hat{x}),\,z\rangle=0\\\ &\iff\langle
g(\hat{x}),\,z\rangle\geq 0\quad\text{for all
$z\in\operatorname{TC}_{\mathcal{X}}(\hat{x})$, $\lVert z\rVert\leq 1$}\\\
&\iff\langle g(\hat{x}),\,z\rangle\geq 0\quad\text{for all
$z\in\operatorname{TC}_{\mathcal{X}}(\hat{x})$},\end{split}$ (B.7)
where the last equivalence follows because
$\operatorname{TC}_{\mathcal{X}}(\hat{x})$ is a cone. Rearranging the terms,
we may equivalently write $\langle-g(\hat{x}),\,z\rangle\leq 0$ for all
$z\in\operatorname{TC}_{\mathcal{X}}(\hat{x})$. Notice that, by the definition
of the tangent cone, the latter is equivalent to
$\langle-g(\hat{x}),\,z-\hat{x}\rangle\leq 0$ for all $z\in\mathcal{X}$.
Therefore, $\operatorname{TGap}_{\mathcal{X}}(\hat{x})=0$ if and only if
$\hat{x}$ satisfies (SVI). ∎
Now, if $\theta^{\ast}$ is a Nash equilibrium of the base game, the latent
variable configuration $x^{\ast}=\chi(\theta^{\ast})$ solves (SVI)/(MVI), so
$\operatorname{TGap}_{\mathcal{X}}$ is a valid equilibrium convergence metric
for $\Gamma$. However, since $\operatorname{TGap}_{\mathcal{X}}$ is still
defined on the game’s latent space, it does not give a straightforward way of
defining the quality of a candidate solution directly on the game’s control
space. To that end, one could consider the gap function
$\operatorname{TGap}_{\Theta}(\hat{\theta})\coloneqq-\min\nolimits_{\eta\in\mathbb{R}^{m},\lVert\eta\rVert\leq
1}\langle v(\hat{\theta}),\,\eta\rangle=\lVert v(\hat{\theta})\rVert_{\ast}$
(B.8)
where $v(\theta)\coloneqq(\nabla_{i}\ell_{i}(\theta))_{i\in\mathcal{N}}$
collects the players’ loss gradients relative to their control variables, and
$\lVert\cdot\rVert_{\ast}$ denotes its dual norm (so
$\operatorname{TGap}_{\Theta}(\hat{\theta})$ is evaluated _directly_ on the
game’s control space). Of course, as control variables are mapped to latent
variables, $\chi$ introduces a certain distortion (due to nonlinearities).
This distortion can be quantified by the following lemma (see also Fig. 6
above):
Figure 6. The distortion of a ball by $\mathbf{J}(\theta)$.
###### Lemma B.2.
Let $\hat{x}=\chi(\hat{\theta})$ for some $\hat{\theta}\in\Theta$. We then
have
$\sigma_{\min}\operatorname{TGap}_{\mathcal{X}}(\hat{x})\leq\operatorname{TGap}_{\Theta}(\hat{\theta})\leq\sigma_{\max}\operatorname{TGap}_{\mathcal{X}}(\hat{x})$
(B.9)
In particular, $\operatorname{TGap}_{\Theta}(\hat{\theta})\geq 0$ for all
$\hat{\theta}\in\Theta$, with equality if and only if $\hat{\theta}$ is a Nash
equilibrium of $\Gamma$.
###### Proof.
To begin with, let us define the balls
$\mathbb{B}_{\Theta}\equiv\\{\eta\in\mathbb{R}^{m}:\lVert\eta\rVert\leq 1\\}$,
and $\mathbb{B}_{\mathcal{X}}\equiv\\{z\in\mathbb{R}^{d}:\lVert z\rVert\leq
1\\}$. Now, let $\hat{\theta}\in\Theta$ be some arbitrary profile of control
variables. By Assumption 2, we have that the singular values of the Jacobian
$\mathbf{J}(\hat{\theta})$ of the representation map $\chi$ are bounded from
above and below by $\sigma_{\max}<\infty$ and $\sigma_{\min}>0$ respectively.
That is,
$\sigma_{\min}\lVert\eta\rVert\leq\lVert\mathbf{J}(\hat{\theta})\eta\rVert\leq\sigma_{\max}\lVert\eta\rVert$
for all $\eta\in\mathbb{R}^{m}$, which implies that
$\sigma_{\min}\mathbb{B}_{\mathcal{X}}\subseteq\mathbf{J}(\hat{\theta})\mathbb{B}_{\Theta}\subseteq\sigma_{\max}\mathbb{B}_{\mathcal{X}}$.
By definition, we also have that,
$v(\hat{\theta})=\mathbf{J}(\hat{\theta})^{\intercal}g\big{(}\chi(\hat{\theta})\big{)}=\mathbf{J}(\hat{\theta})^{\intercal}g(\hat{x})$,
and, hence
$\operatorname{TGap}_{\Theta}(\hat{\theta})=-\min_{\eta\in\mathbb{B}_{\Theta}}\langle
v(\hat{\theta}),\,\eta\rangle=-\min_{z\in\mathbf{J}(\hat{\theta})\mathbb{B}_{\Theta}}\langle
g(\hat{x}),\,z\rangle$ (B.10)
Now, recall that the set of Nash equilibria $\Theta^{\ast}\subset\Theta$ is
non-empty. In particular, since $\Theta\equiv\mathbb{R}^{m}$ is an open
set, the solution set lies in an open set, which, in conjunction with the fact
that the representation maps are faithful representations of $\Theta$ to
$\mathcal{X}$, also implies that $\chi(\Theta^{\ast})$ lies in the interior of
$\mathcal{X}$. Finally, since $\chi$ is smooth, and in conjunction with the
previous observations, it follows that
$\operatorname{TGap}_{\Theta}(\hat{\theta})=\|v(\hat{\theta})\|$ and
$\operatorname{TGap}_{\mathcal{X}}(\hat{x})=\|g(\hat{x})\|$; hence, in view of
(B.10) and the singular-value bounds above, we get
$\sigma_{\min}\operatorname{TGap}_{\mathcal{X}}(\chi(\hat{\theta}))\leq\operatorname{TGap}_{\Theta}(\hat{\theta})\leq\sigma_{\max}\operatorname{TGap}_{\mathcal{X}}(\chi(\hat{\theta}))$
(B.11)
and our proof is complete. ∎
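As a quick sanity check of the sandwich inequality (B.9) in the interior case (where $\operatorname{TGap}_{\Theta}(\hat{\theta})=\lVert\mathbf{J}(\hat{\theta})^{\intercal}g(\hat{x})\rVert$ and $\operatorname{TGap}_{\mathcal{X}}(\hat{x})=\lVert g(\hat{x})\rVert$), the sketch below draws a random Jacobian and latent gradient; both are assumptions of the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 5
J = rng.standard_normal((d, m))           # stand-in Jacobian of the representation map
g = rng.standard_normal(d)                # stand-in latent gradient g(xhat)

v = J.T @ g                               # control-space gradient v = J^T g
s = np.linalg.svd(J, compute_uv=False)    # singular values, in descending order

lower = s[-1] * np.linalg.norm(g)         # sigma_min * ||g(xhat)||
upper = s[0] * np.linalg.norm(g)          # sigma_max * ||g(xhat)||
print(lower <= np.linalg.norm(v) <= upper)   # expected: True
```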
In view of the above discussion, the reader may wonder why not use the tangent
gap $\operatorname{TGap}_{\Theta}$ directly as a performance metric. The
reason for this is that, if the solutions $x^{\ast}$ of (SVI)/(MVI) do not
belong to the image of $\chi$, the dual norm of $v(\hat{\theta})$ may be too
stringent as a performance metric as it does not vanish near an equilibrium
(e.g., think of the operator $g(x)=x$ for $x$ between $0$ and $1$). In this
case, equilibria can be approximated to arbitrary accuracy but never
attained, so the latent gap functions $\operatorname{Gap}$ and
$\operatorname{Err}$ are more appropriate as performance measures.
### B.2. Convergence analysis and template inequalities
See 3
###### Proof.
Let $i\in\mathcal{N}$ be some arbitrary player, let $\theta\in\Theta$ be some
arbitrary profile of control variables, and let $\eta\in\mathbb{R}^{d_{i}}$ be
some arbitrary vector. Then for each coordinate
$\alpha\in\\{1,\ldots,m_{i}\\}$, and latent profile $\hat{x}\in\mathcal{X}$,
we have that
$\frac{\partial
E(\theta;\hat{x})}{\partial\theta_{i\alpha}}=\sum_{j=1}^{d_{i}}(x_{ij}-\hat{x}_{ij})\frac{\partial
x_{ij}}{\partial\theta_{i\alpha}}.$ (B.12)
That is,
$\operatorname{\nabla}_{\theta_{i}}E(\theta;\hat{x})=\mathbf{J}_{i}(\theta_{i})^{\intercal}(x_{i}-\hat{x}_{i})$.
Now, we can multiply both sides of the above expression with
$\mathbf{P}_{i}(\theta_{i})$ to get
$\begin{split}\mathbf{P}_{i}(\theta_{i})\operatorname{\nabla}_{\theta_{i}}E(\theta;\hat{x})&=\mathbf{P}_{i}(\theta_{i})\mathbf{J}_{i}(\theta_{i})^{\intercal}(x_{i}-\hat{x}_{i})\\\
&=\mathbf{J}_{i}(\theta_{i})^{+}\big{(}\mathbf{J}_{i}(\theta_{i})^{+}\big{)}^{\intercal}\mathbf{J}_{i}(\theta_{i})^{\intercal}(x_{i}-\hat{x}_{i})\\\
&=\mathbf{J}_{i}(\theta_{i})^{+}(x_{i}-\hat{x}_{i}),\end{split}$ (B.13)
where in the last equality we used the fact that the $i$-th player’s
representation map $\chi_{i}$ is a faithful representation, i.e.,
$\mathbf{J}_{i}(\theta_{i})$ is maximal rank. Finally, by expanding the LHS of
(21), and applying the above simplification the result follows. ∎
See 4
###### Proof.
For each $t=1,2,\dotsc$ we expand $x_{t+1}$ using its second-order Taylor
approximation at $\theta_{t}$; i.e., for each player $i=1,\dots,N$ and each
coordinate $l=1,\dots,d_{i}$:
$\begin{split}x_{il,t+1}&=\chi_{il}(\theta_{t+1})\\\
&=\chi_{il}(\theta_{i,t})+\gamma_{t}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\theta_{i,t+1}-\theta_{i,t}\rangle\\\
&\qquad+\gamma_{t}^{2}(\theta_{i,t+1}-\theta_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\theta_{i,t+1}-\theta_{i,t}),\end{split}$
(B.14)
where $H_{il}(\tilde{\theta}_{i,t})$ is the Hessian of the latent map
$\chi_{il}$ at some convex combination $\tilde{\theta}_{i,t}$ of
$\theta_{i,t}$ and $\theta_{i,t+1}$. Then, further expanding $\theta_{t+1}$,
we also get
$\begin{split}x_{il,t+1}&=x_{il,t}-\gamma_{t}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle+\gamma_{t}^{2}(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t}).\end{split}$
(B.15)
Plugging the above expansion to $E(\theta_{t+1};\hat{x})$ for arbitrary state
$\hat{x}$ we get
$\begin{split}E_{t+1}&=E(\theta_{t+1};\hat{x})\\\
&=\frac{1}{2}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}(x_{il,t+1}-\hat{x}_{il})^{2}\\\
&=\frac{1}{2}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}[x_{il,t}-\hat{x}_{il}-\gamma_{t}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle\\\
&\qquad+\gamma_{t}^{2}(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t})]^{2}\\\
&=E_{t}-\gamma_{t}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle(x_{il,t}-\hat{x}_{il})+\gamma_{t}^{2}\psi_{t}\\\
&=E_{t}-\gamma_{t}(\mathbf{J}_{t}\mathbf{P}_{t}V_{t})^{\intercal}(x_{t}-\hat{x})+\gamma_{t}^{2}\psi_{t},\end{split}$
(B.16)
where the second-order term $\psi_{t}$ is given by the formula
$\begin{split}\psi_{t}&=\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle^{2}\\\
&\qquad+\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t})(x_{il,t}-\hat{x}_{il})\\\
&\qquad-\gamma_{t}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t})\\\
&\qquad+\gamma_{t}^{2}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}[(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t})]^{2}.\end{split}$
(B.17)
For the second-order term, observe that by 2, we have that
$[\mathbf{J}_{t}^{\intercal}\mathbf{J}_{t}]^{+}\preceq\frac{1}{\sigma_{\min}^{2}}\mathbf{I}$.
Using that fact, it follows that
$\begin{split}\sum_{i=1}^{N}\sum_{l=1}^{d_{i}}\langle\operatorname{\nabla}\chi_{il}(\theta_{i,t}),\mathbf{P}_{i,t}V_{i,t}\rangle^{2}&=(\mathbf{J}_{t}\mathbf{P}_{t}V_{t})^{\intercal}\mathbf{J}_{t}\mathbf{P}_{t}V_{t}\\\
&=V_{t}^{\intercal}[\mathbf{J}_{t}^{\intercal}\mathbf{J}_{t}]^{+}\mathbf{J}_{t}^{\intercal}\mathbf{J}_{t}[\mathbf{J}_{t}^{\intercal}\mathbf{J}_{t}]^{+}V_{t}\\\
&=V_{t}^{\intercal}[\mathbf{J}_{t}^{\intercal}\mathbf{J}_{t}]^{+}V_{t}\\\
&\leq\frac{1}{\sigma_{\min}^{2}}\lVert V_{t}\rVert^{2},\end{split}$ (B.18)
Furthermore, since the representation maps $\chi_{i}$ are $\beta$-Lipschitz
smooth for some Lipschitz modulus $\beta$, we have that
$H_{il}(\tilde{\theta}_{i,t})\preceq\beta\mathbf{I}$ for each player $i$ and
coordinate $l$. Consequently, we have that
$(\mathbf{P}_{i,t}V_{i,t})^{\intercal}H_{il}(\tilde{\theta}_{i,t})(\mathbf{P}_{i,t}V_{i,t})\leq\beta
V_{i,t}^{\intercal}\mathbf{P}_{i,t}^{2}V_{i,t}\\\
\leq\frac{\beta}{\sigma_{\min}^{4}}\lVert
V_{i,t}\rVert^{2}\leq\frac{\beta}{\sigma_{\min}^{4}}\lVert V_{t}\rVert^{2}.$
(B.19)
Let $D=\operatorname{diam}(\chi)$; then, by applying the Cauchy-Schwarz
inequality, we also get that
$\begin{split}\psi_{t}\leq\frac{1}{\sigma_{\min}^{2}}\lVert
V_{t}\rVert^{2}+\frac{dD\sqrt{\beta}}{\sigma_{\min}^{2}}\lVert
V_{t}\rVert+\gamma\frac{\sqrt{\beta}}{\sigma_{\min}^{3}}\lVert
V_{t}\rVert^{2}+\gamma^{2}\frac{d\beta^{2}}{\sigma_{\min}^{8}}\lVert
V_{t}\rVert^{2},\end{split}$ (B.20)
where $\gamma=\sup_{t=1,2,\dotsc}\gamma_{t}$, and $d=\sum_{i=1}^{N}d_{i}$.
Finally, let us consider the first-order term of (B.16). Let
$v_{t}=(\nabla_{i}\ell_{i}(\theta_{t}))_{i\in\mathcal{N}}$, and
$g_{t}=g(x_{t})$. Then, by definition, we also have that
$v_{t}=\mathbf{J}_{t}^{\intercal}g_{t}$. Moreover, recall that, by
construction,
$\mathbf{J}_{t}\mathbf{P}_{t}\mathbf{J}_{t}^{\intercal}=\mathbf{I}$.
Consequently, we can write
$\begin{split}(\mathbf{J}_{t}\mathbf{P}_{t}V_{t})^{\intercal}(x_{t}-\hat{x})&=(\mathbf{J}_{t}\mathbf{P}_{t}v_{t})^{\intercal}(x_{t}-\hat{x})+[\mathbf{J}_{t}\mathbf{P}_{t}(V_{t}-v_{t})]^{\intercal}(x_{t}-\hat{x})\\\
&=(\mathbf{J}_{t}\mathbf{P}_{t}\mathbf{J}_{t}^{\intercal}g_{t})^{\intercal}(x_{t}-\hat{x})+[\mathbf{J}_{t}\mathbf{P}_{t}(V_{t}-\mathbf{J}_{t}^{\intercal}g_{t})]^{\intercal}(x_{t}-\hat{x})\\\
&=g_{t}^{\intercal}(x_{t}-\hat{x})+(\mathbf{J}_{t}\mathbf{P}_{t}V_{t}-g_{t})^{\intercal}(x_{t}-\hat{x}),\end{split}$
(B.21)
which concludes our proof. ∎
## Appendix C Proofs of Theorems 1, 2 and 3
We are now in a position to prove Theorems 1, 2 and 3 regarding the
convergence properties of (PHGD). We proceed sequentially, restating the
relevant results as needed.
See 1
###### Proof.
Let $\theta_{1}\in\Theta$ be some arbitrary initialization of the algorithm,
and let $\tilde{x}\in\mathcal{X}$ be some arbitrary profile of latent
variables. Next, by Lemma 4 (applied with $\hat{x}=\tilde{x}$), we have that for all $t\geq 1$:
$\displaystyle E_{t+1}$ $\displaystyle\leq E_{t}-\gamma_{t}\langle
g(x_{t}),\,x_{t}-\hat{x}\rangle+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}$
(C.1) $\displaystyle\phi_{t}$
$\displaystyle=(\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})V_{t}-g(x_{t}))^{\intercal}(x_{t}-\hat{x})$
$\displaystyle\psi_{t}$ $\displaystyle=\kappa\lVert
V_{t}\rVert^{2},\quad\text{for some $\kappa>0$}.$
Rearranging the terms, the above is equivalent to
$\gamma_{t}\big{\langle}g(x_{t}),\,x_{t}-\tilde{x}\big{\rangle}\leq
E_{t}-E_{t+1}+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}.$ (C.2)
Furthermore, by the monotonicity of $g$, we also have that
$\big{\langle}g(x_{t})-g(\tilde{x}),\,x_{t}-\tilde{x}\big{\rangle}\geq 0$, and
by combining the two, we get that for all $t\geq 1$:
$\gamma_{t}\big{\langle}g(\tilde{x}),\,x_{t}-\tilde{x}\big{\rangle}\leq\gamma_{t}\big{\langle}g(x_{t}),\,x_{t}-\tilde{x}\big{\rangle}\leq
E_{t}-E_{t+1}+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}.$ (C.3)
Summing up the above terms in both sides of the inequality, we also get that
$\begin{split}\sum_{s=1}^{t}\gamma_{s}\big{\langle}g(\tilde{x}),\,x_{s}-\tilde{x}\big{\rangle}&\leq\sum_{s=1}^{t}\big{[}E_{s}-E_{s+1}+\gamma_{s}\phi_{s}+\gamma_{s}^{2}\psi_{s}\big{]}\\\
&=E_{1}-E_{t+1}+\sum_{s=1}^{t}\gamma_{s}\phi_{s}+\sum_{s=1}^{t}\gamma_{s}^{2}\psi_{s}\\\
&\leq
E_{1}+\sum_{s=1}^{t}\gamma_{s}\phi_{s}+\sum_{s=1}^{t}\gamma_{s}^{2}\psi_{s}.\end{split}$
(C.4)
Dividing all terms by $\tilde{\gamma}_{t}\coloneqq\sum_{s=1}^{t}\gamma_{s}$,
also yields
$\begin{split}\big{\langle}g(\tilde{x}),\,\bar{x}_{t}-\tilde{x}\big{\rangle}&=\Big{\langle}g(\tilde{x}),\,\sum_{s=1}^{t}\frac{\gamma_{s}}{\tilde{\gamma}_{t}}x_{s}-\tilde{x}\Big{\rangle}\\\
&=\sum_{s=1}^{t}\frac{\gamma_{s}}{\tilde{\gamma}_{t}}\big{\langle}g(\tilde{x}),\,x_{s}-\tilde{x}\big{\rangle}\\\
&\leq\frac{E_{1}}{\tilde{\gamma}_{t}}+\frac{\sum_{s=1}^{t}\gamma_{s}\phi_{s}}{\tilde{\gamma}_{t}}+\frac{\sum_{s=1}^{t}\gamma_{s}^{2}\psi_{s}}{\tilde{\gamma}_{t}},\end{split}$
(C.5)
where
$\bar{x}_{t}\coloneqq\sum_{s=1}^{t}\frac{\gamma_{s}}{\tilde{\gamma}_{t}}x_{s}$
is the time-average.
Next, let us define the following auxiliary process that will assist us in
further bounding the above expression:
$\displaystyle y_{1}$ $\displaystyle=x_{1}$ (C.6) $\displaystyle y_{t+1}$
$\displaystyle=\operatorname*{arg\,min}_{x\in\mathcal{X}}\lVert
y_{t}-\gamma_{t}\eta_{t}-x\rVert\quad\text{for all $t\geq 2$},$
where
$\eta_{t}\coloneqq\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})V_{t}-g(x_{t})$
for all $t\geq 1$. Observe that
$\gamma_{t}\phi_{t}=\gamma_{t}\langle\eta_{t},\,x_{t}-\tilde{x}\rangle=\gamma_{t}\langle\eta_{t},\,y_{t}-\tilde{x}\rangle+\gamma_{t}\langle\eta_{t},\,x_{t}-y_{t}\rangle\quad\text{for
all $t\geq 1$}.$ (C.7)
Let us, first, focus on the former of the two terms. Notice that for all
$t\geq 1$, we can write
$\gamma_{t}\langle\eta_{t},\,y_{t}-\tilde{x}\rangle=\gamma_{t}\langle\eta_{t},\,y_{t+1}-\tilde{x}\rangle+\gamma_{t}\langle\eta_{t},\,y_{t}-y_{t+1}\rangle.$
(C.8)
Furthermore, by the optimality of $y_{t+1}$, $t\geq 2$, we also have that
$\langle y_{t+1}-y_{t}+\gamma_{t}\eta_{t},\,y_{t+1}-\tilde{x}\rangle\leq 0.$
(C.9)
That is, $\gamma_{t}\langle\eta_{t},\,y_{t+1}-\tilde{x}\rangle\leq\langle
y_{t}-y_{t+1},\,y_{t+1}-\tilde{x}\rangle$.
Let us also recall a couple of useful quadratic identities, namely, we have
that $\lVert y_{t}-\tilde{x}\rVert^{2}=\lVert
y_{t}-y_{t+1}+y_{t+1}-\tilde{x}\rVert^{2}=\lVert
y_{t}-y_{t+1}\rVert^{2}+2\langle
y_{t}-y_{t+1},\,y_{t+1}-\tilde{x}\rangle+\lVert y_{t+1}-\tilde{x}\rVert^{2}$,
and
$\lVert\gamma_{t}\eta_{t}-(y_{t}-y_{t+1})\rVert^{2}=\gamma_{t}^{2}\lVert\eta_{t}\rVert^{2}-2\gamma_{t}\langle\eta_{t},\,y_{t}-y_{t+1}\rangle+\lVert
y_{t}-y_{t+1}\rVert^{2}$. Then, in conjunction with the above, we get that for
all $t\geq 1$:
$\begin{split}\gamma_{t}\langle\eta_{t},\,y_{t}-\tilde{x}\rangle&=\gamma_{t}\langle\eta_{t},\,y_{t+1}-\tilde{x}\rangle+\gamma_{t}\langle\eta_{t},\,y_{t}-y_{t+1}\rangle\\\
&\leq\langle
y_{t}-y_{t+1},\,y_{t+1}-\tilde{x}\rangle+\gamma_{t}\langle\eta_{t},\,y_{t}-y_{t+1}\rangle\\\
&=\frac{1}{2}\lVert y_{t}-\tilde{x}\rVert^{2}-\frac{1}{2}\lVert
y_{t+1}-\tilde{x}\rVert^{2}-\frac{1}{2}\lVert\gamma_{t}\eta_{t}-(y_{t}-y_{t+1})\rVert^{2}+\frac{\gamma_{t}^{2}}{2}\lVert\eta_{t}\rVert^{2}\\\
&\leq\frac{1}{2}\lVert y_{t}-\tilde{x}\rVert^{2}-\frac{1}{2}\lVert
y_{t+1}-\tilde{x}\rVert^{2}+\frac{\gamma_{t}^{2}}{2}\lVert\eta_{t}\rVert^{2}\end{split}$
(C.10)
Finally, summing both sides of the above inequalities, we also get that
$\begin{split}\sum_{s=1}^{t}\gamma_{s}\langle\eta_{s},\,y_{s}-\tilde{x}\rangle&\leq\frac{1}{2}\sum_{s=1}^{t}\lVert
y_{s}-\tilde{x}\rVert^{2}-\frac{1}{2}\sum_{s=1}^{t}\lVert
y_{s+1}-\tilde{x}\rVert^{2}+\frac{1}{2}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}\\\
&=\frac{1}{2}\lVert y_{1}-\tilde{x}\rVert^{2}-\frac{1}{2}\lVert
y_{t+1}-\tilde{x}\rVert^{2}+\frac{1}{2}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}\\\
&\leq\frac{1}{2}\lVert
y_{1}-\tilde{x}\rVert^{2}+\frac{1}{2}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}\\\
&=\frac{1}{2}\lVert
x_{1}-\tilde{x}\rVert^{2}+\frac{1}{2}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}\\\
&=E_{1}+\frac{1}{2}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}\end{split}$
(C.11)
With the above established, let us reconsider the quantity
$\big{\langle}g(\tilde{x}),\,\bar{x}_{t}-\tilde{x}\big{\rangle}$.
Specifically, due to the above derivations, we have that for all $t\geq 1$:
$\begin{split}\big{\langle}g(\tilde{x}),\,\bar{x}_{t}-\tilde{x}\big{\rangle}&\leq\frac{E_{1}}{\tilde{\gamma}_{t}}+\frac{\sum_{s=1}^{t}\gamma_{s}\phi_{s}}{\tilde{\gamma}_{t}}+\frac{\sum_{s=1}^{t}\gamma_{s}^{2}\psi_{s}}{\tilde{\gamma}_{t}}\\\
&=\frac{E_{1}}{\tilde{\gamma}_{t}}+\frac{1}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}\langle\eta_{s},\,y_{s}-\tilde{x}\rangle+\frac{1}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}\langle\eta_{s},\,x_{s}-y_{s}\rangle+\frac{\sum_{s=1}^{t}\gamma_{s}^{2}\psi_{s}}{\tilde{\gamma}_{t}}\\\
&\leq\frac{3E_{1}}{2\tilde{\gamma}_{t}}+\frac{1}{2\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert\eta_{s}\rVert^{2}+\frac{1}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}\langle\eta_{s},\,x_{s}-y_{s}\rangle+\frac{\kappa}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\lVert
V_{s}\rVert^{2}.\end{split}$
Taking the supremum over $\tilde{x}\in\mathcal{X}$ of the LHS and then expectations, and
noting that the terms $\gamma_{s}\langle\eta_{s},\,x_{s}-y_{s}\rangle$ vanish in expectation
(since $x_{s}$ and $y_{s}$ are measurable with respect to the history of play $\mathcal{H}_{s}$
while $\operatorname{\mathbb{E}}[\eta_{s}\,|\,\mathcal{H}_{s}]=0$), we end up with
$\begin{split}\operatorname{\mathbb{E}}\bigg{[}\sup_{\tilde{x}\in\mathcal{X}}\big{\langle}g(\tilde{x}),\,\bar{x}_{t}-\tilde{x}\big{\rangle}\bigg{]}&\leq\frac{3E_{1}}{2\tilde{\gamma}_{t}}+\frac{1}{2\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\operatorname{\mathbb{E}}[\lVert\eta_{s}\rVert^{2}]+\frac{\kappa}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\operatorname{\mathbb{E}}[\lVert
V_{s}\rVert^{2}]\\\
&\leq\frac{3E_{1}}{2\tilde{\gamma}_{t}}+\frac{M^{2}}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}+\frac{\kappa
M^{2}}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\\\
&=\frac{3E_{1}}{2\tilde{\gamma}_{t}}+(1+\kappa)\frac{M^{2}}{\tilde{\gamma}_{t}}\sum_{s=1}^{t}\gamma_{s}^{2}\\\
&=\operatorname{\mathcal{O}}\Big{(}\sum_{s=1}^{t}\gamma_{s}^{2}/\tilde{\gamma}_{t}\Big{)}\\\
&=\operatorname{\mathcal{O}}(\log t/\sqrt{t}),\end{split}$ (C.12)
where the last inequality is a direct consequence of the bounded second moment
of the stochastic gradient oracle in Assumption 1. ∎
See 2
###### Proof.
Let $\theta_{1}\in\Theta$ be some arbitrary initialization of the algorithm,
and let $\gamma>\frac{1}{2\mu}$ be arbitrary. Since the operator $g$ is
$\mu$-strongly monotone, we have, by the definition of strong monotonicity
(and the fact that $x^{\ast}$ solves (SVI)), that
$\big{\langle}g(x),\,x-x^{\ast}\big{\rangle}\geq\mu\lVert
x-x^{\ast}\rVert^{2}\quad\text{for all $x\in\mathcal{X}$}.$ (C.13)
Next, by Lemma 4, we have that for all $t\geq 1$:
$\displaystyle E_{t+1}$ $\displaystyle\leq E_{t}-\gamma_{t}\langle
g(x_{t}),\,x_{t}-\hat{x}\rangle+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}$
(C.14) $\displaystyle\phi_{t}$
$\displaystyle=(\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})V_{t}-g(x_{t}))^{\intercal}(x_{t}-\hat{x})$
$\displaystyle\psi_{t}$ $\displaystyle=\kappa\lVert
V_{t}\rVert^{2},\quad\text{for some $\kappa>0$}.$
That is
$\begin{split}E_{t+1}&\leq E_{t}-\mu\gamma_{t}\lVert
x_{t}-x^{\ast}\rVert_{2}^{2}+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}\\\
&=E_{t}-2\mu\gamma_{t}E_{t}+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}\\\
&=(1-2\mu\gamma_{t})E_{t}+\gamma_{t}\phi_{t}+\gamma_{t}^{2}\psi_{t}.\end{split}$
(C.15)
Let $\mathcal{H}_{t}=\\{\theta_{s}:s=1,\dotsc,t\\}$ be the history of play up
to iteration $t$. In expectation, it follows from the above that, for all
$t\geq 1$:
$\begin{split}\operatorname{\mathbb{E}}[E_{t+1}]&\leq(1-2\mu\gamma_{t})\operatorname{\mathbb{E}}[E_{t}]+\gamma_{t}\operatorname{\mathbb{E}}[\phi_{t}]+\gamma_{t}^{2}\operatorname{\mathbb{E}}[\psi_{t}]\\\
&=(1-2\mu\gamma_{t})\operatorname{\mathbb{E}}[E_{t}]+\gamma_{t}\operatorname{\mathbb{E}}\big{[}\operatorname{\mathbb{E}}[\phi_{t}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]\big{]}+\gamma_{t}^{2}\operatorname{\mathbb{E}}\big{[}\operatorname{\mathbb{E}}[\psi_{t}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]\big{]}.\end{split}$
(C.16)
Note that $x_{t}$ is $\mathcal{H}_{t}$-measurable; hence, we have that
$\begin{split}\operatorname{\mathbb{E}}[\phi_{t}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]&=\big{(}\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})\operatorname{\mathbb{E}}[V_{t}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]-g(x_{t})\big{)}^{\intercal}(x_{t}-\hat{x})\\\
&=\big{(}\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})v(\theta_{t})-g(x_{t})\big{)}^{\intercal}(x_{t}-\hat{x})\\\
&=\big{(}\mathbf{J}(\theta_{t})\mathbf{P}(\theta_{t})\mathbf{J}({\theta_{t}})^{\intercal}{g(x_{t})}-g(x_{t})\big{)}^{\intercal}(x_{t}-\hat{x})\\\
&=\big{(}g(x_{t})-g(x_{t})\big{)}^{\intercal}(x_{t}-\hat{x})\\\
&=0.\end{split}$ (C.17)
Furthermore, since, by Assumption 1, we have that
$\operatorname{\mathbb{E}}[\lVert
V_{t}\rVert^{2}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]\leq
M^{2}$, it also holds that
$\operatorname{\mathbb{E}}[\psi_{t}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]=\kappa\operatorname{\mathbb{E}}[\lVert
V_{t}\rVert^{2}\nonscript\,|\nonscript\,\mathopen{}\mathcal{H}_{t}]\leq\kappa
M^{2}$ (C.18)
Next, substituting the step-size $\gamma_{t}=\gamma/t$, we have
$\begin{split}\operatorname{\mathbb{E}}[E_{t+1}]&\leq(1-2\mu\gamma_{t})\operatorname{\mathbb{E}}[E_{t}]+\kappa
M^{2}\gamma_{t}^{2}\\\
&=\Big{(}1-\frac{2\mu\gamma}{t}\Big{)}\operatorname{\mathbb{E}}[E_{t}]+\frac{\kappa
M^{2}\gamma^{2}}{t^{2}}\end{split}$ (C.19)
Finally, since $\gamma>\frac{1}{2\mu}$, we may apply Chung’s lemma [8] to get
that, for all $t\geq 1$:
$\operatorname{\mathbb{E}}[E_{t}]\leq\frac{\kappa
M^{2}\gamma^{2}}{2\mu\gamma-1}\cdot\frac{1}{t}+\operatorname{\mathcal{O}}\Big{(}\frac{1}{t^{2}}+\frac{1}{t^{2\mu\gamma}}\Big{)}.$
(C.20)
∎
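To illustrate the $\operatorname{\mathcal{O}}(1/t)$ bound in (C.20) numerically, the short sketch below iterates the recursion (C.19) directly and compares $t\cdot\operatorname{\mathbb{E}}[E_{t}]$ with the predicted constant $\kappa M^{2}\gamma^{2}/(2\mu\gamma-1)$; all numerical constants are illustrative assumptions, not values from the paper.

```python
# Iterate E_{t+1} = (1 - 2*mu*gamma/t) * E_t + kappa * M**2 * gamma**2 / t**2.
mu, gamma, kappa, M = 0.5, 1.5, 0.5, 1.0          # note gamma > 1/(2*mu)
limit = kappa * M**2 * gamma**2 / (2 * mu * gamma - 1)

e = 1.0                                           # E_2; start at t = 2 so the factor is positive
for t in range(2, 100001):
    e = (1 - 2 * mu * gamma / t) * e + kappa * M**2 * gamma**2 / t**2
    if t + 1 in (10, 100, 1000, 10000, 100000):
        print(t + 1, round((t + 1) * e, 4), round(limit, 4))   # t * E_t approaches the constant
```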
See 3
###### Proof.
Due to the absence of randomness, and therefore the absence of noise, Lemma 4
may be simplified to
$E_{t+1}\leq
E_{t}-\gamma_{t}\big{\langle}g(x_{t}),\,x_{t}-x^{\ast}\big{\rangle}+\kappa\gamma_{t}^{2}\lVert
v_{t}\rVert^{2}$ (C.21)
for some constant $\kappa>0$, where
$v_{t}\coloneqq(\nabla_{i}\ell_{i}(\theta_{t}))_{i\in\mathcal{N}}$.
Let $\beta$ be the Lipschitz modulus of the gradient field
$v(\theta)=\mathbf{J}(\theta)^{\intercal}g(\chi(\theta))$, so that
$v_{t}=\mathbf{J}(\theta_{t})^{\intercal}g(x_{t})$, and note that
$v^{\ast}\coloneqq v(\theta^{\ast})=0$, since $\theta^{\ast}$ is an (interior)
equilibrium of $\Gamma$. Moreover, let
$\frac{\mu}{2}$ be the modulus of the strong monotonicity of $g$. Recall that,
by Assumption 2, we have that the singular values of the Jacobian
$\mathbf{J}(\theta)$ of the representation map $\chi$ are bounded from above
and below by $\sigma_{\max}<\infty$ and $\sigma_{\min}>0$ respectively.
Therefore, since $v_{t}$ is $\beta$-Lipschitz, it follows that
$\lVert
v_{t}-v^{\ast}\rVert^{2}\leq\beta^{2}\lVert\theta_{t}-\theta^{\ast}\rVert^{2}\leq\frac{\beta^{2}}{\sigma_{\min}^{2}}\lVert
x_{t}-x^{\ast}\rVert^{2}$ (C.22)
In conjunction to the above, we then also have
$\begin{split}E_{t+1}&\leq
E_{t}-\gamma_{t}\big{\langle}g(x_{t}),\,x_{t}-x^{\ast}\big{\rangle}+\kappa\gamma_{t}^{2}\lVert
v_{t}\rVert^{2}\\\
&=E_{t}-\gamma_{t}\big{\langle}g(x_{t}),\,x_{t}-x^{\ast}\big{\rangle}+\kappa\gamma_{t}^{2}\lVert
v_{t}-v^{\ast}\rVert^{2}\\\ &\leq E_{t}-\mu\gamma_{t}\frac{1}{2}\lVert
x_{t}-x^{\ast}\rVert^{2}+\frac{\kappa\gamma_{t}^{2}\beta^{2}}{\sigma_{\min}^{2}}\lVert
x_{t}-x^{\ast}\rVert^{2}\\\
&=E_{t}-\mu\gamma_{t}E_{t}+\frac{2\kappa\gamma_{t}^{2}\beta^{2}}{\sigma_{\min}^{2}}E_{t}\\\
&=E_{t}\Big{(}1-\mu\gamma_{t}+\frac{2\kappa\gamma_{t}^{2}\beta^{2}}{\sigma_{\min}^{2}}\Big{)}\end{split}$
(C.23)
Restricting $\gamma_{t}$ such that
$1-\mu\gamma_{t}+\frac{2\kappa\gamma_{t}^{2}\beta^{2}}{\sigma_{\min}^{2}}<1\iff\gamma_{t}<\frac{\sigma_{\min}^{2}\mu}{2\kappa\beta^{2}},$
(C.24)
we finally have that, for the choice of
$\gamma_{t}=\hat{\gamma}\coloneqq\dfrac{\sigma_{\min}^{2}\mu}{8\kappa\beta^{2}}$,
it holds
$E_{t}=\operatorname{\mathcal{O}}(\rho^{t})\quad\text{where
$\rho\coloneqq\Big{(}1-\dfrac{3\sigma_{\min}^{2}\mu^{2}}{32\kappa\beta^{2}}\Big{)}<1$}.$
(C.25)
Leveraging the equivalence of $\operatorname{Err}_{\Theta}(\theta_{t})$ and
$\operatorname{Err}_{\mathcal{X}}(x_{t})=E_{t}$ completes the proof. ∎
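As a small numeric check of the contraction factor derived above, the snippet below verifies that, for the step-size $\hat{\gamma}=\sigma_{\min}^{2}\mu/(8\kappa\beta^{2})$, the factor $1-\mu\hat{\gamma}+2\kappa\hat{\gamma}^{2}\beta^{2}/\sigma_{\min}^{2}$ from (C.23) coincides with the value of $\rho$ in (C.25); the numerical constants are illustrative assumptions.

```python
sigma_min, mu, kappa, beta = 0.7, 0.4, 1.3, 2.0   # illustrative constants
gamma_hat = sigma_min**2 * mu / (8 * kappa * beta**2)

factor = 1 - mu * gamma_hat + 2 * kappa * gamma_hat**2 * beta**2 / sigma_min**2
rho = 1 - 3 * sigma_min**2 * mu**2 / (32 * kappa * beta**2)
print(abs(factor - rho) < 1e-12)                  # expected: True
```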
## Appendix D Experiments
This section demonstrates the method’s applicability in a series of different
applications. Along with revisiting the established examples of Section 5 in
greater detail, we also present how the PHGD method performs in a couple of
additional settings of interest. We begin with a high-level description of the
common test suite, and afterward, we move to the specific details of each of
the presented applications.
In each example, we define a base game $\Gamma$ among two or more players
$\mathcal{N}\equiv\\{1,\ldots,N\\}$. Each player $i$ controls an $m$-dimensional
vector of control variables $\theta_{i}\in\mathbb{R}^{m}$, which they feed into
an individual, differentiable, preconfigured MLP with two hidden layers that acts
as the player’s representation map
$\chi_{i}:\mathbb{R}^{m}\to\mathcal{X}_{i}$. The dimensions of each layer of
the MLPs vary among the different examples. However, the MLP’s output
$x_{i}=\chi_{i}(\theta_{i})$ is guaranteed to be a representation of a
discrete probability distribution among the player’s actions in some normal-
form game, e.g., a Rock-Paper-Scissors game. The actual latent game
$\mathcal{G}$ of $\Gamma$ is given by the “hidden” loss functions
$f_{i}:\mathcal{X}\to\mathbb{R}$, $i\in\mathcal{N}$. Each loss function
$f_{i}$ is a regularized variation of the aforementioned normal-form game,
tuned by some $\frac{\mu}{2}$-strongly convex regularizer
$h_{\mu}(x)=\frac{\mu}{2}\cdot\big{(}\sum_{i\in\mathcal{N}}\lVert
x_{i}-x^{\ast}_{i}\rVert^{2}\big{)}$, where $x^{\ast}$ is the normal-form
game’s unique equilibrium point. The modulus $\mu$ is a hyper-parameter, which
we tune so that the vector field
$g(x)=\big{(}\operatorname{\nabla}_{x_{i}}f_{i}(x)\big{)}_{i\in\mathcal{N}}$
is strongly monotone and so as to avoid the finite-precision numerical errors that
can potentially arise during the computation of $x$.
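The following minimal Python sketch mirrors the common test suite described above for a single player: a fixed two-layer network with CeLU and sigmoid activations plays the role of the representation map $\chi_{i}$, the Jacobian $\mathbf{J}_{i}(\theta_{i})$ is estimated by finite differences, and one (PHGD) step applies the preconditioner $\mathbf{P}_{i}=\mathbf{J}_{i}^{+}(\mathbf{J}_{i}^{+})^{\intercal}$. The network weights, the finite-difference Jacobian and the illustrative latent gradient are assumptions of the sketch, not the exact experimental code.

```python
import numpy as np

def celu(z):        # CeLU(z) = max(0, z) + min(0, exp(z) - 1)
    return np.maximum(0.0, z) + np.minimum(0.0, np.exp(z) - 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def jacobian(chi, theta, eps=1e-6):
    """Forward-difference estimate of d chi / d theta."""
    x0 = chi(theta)
    J = np.zeros((x0.size, theta.size))
    for a in range(theta.size):
        e = np.zeros_like(theta); e[a] = eps
        J[:, a] = (chi(theta + e) - x0) / eps
    return J

def phgd_step(theta, chi, grad_latent, gamma=0.01):
    """One preconditioned hidden gradient step for a single player."""
    J = jacobian(chi, theta)                  # J_i(theta_i)
    Jp = np.linalg.pinv(J)
    v = J.T @ grad_latent(chi(theta))         # control-space gradient v_i = J^T g_i
    return theta - gamma * (Jp @ Jp.T) @ v    # theta_i <- theta_i - gamma * P_i v_i

# Illustrative one-dimensional player: a tiny fixed network and a latent gradient
# pulling the latent variable toward 1/2 (standing in for a regularized latent loss).
a1, a2 = 0.7, -1.3
chi = lambda th: np.array([sigmoid(a2 * celu(a1 * th[0]))])
grad_latent = lambda x: x - 0.5
theta = np.array([1.25])
for _ in range(200):
    theta = phgd_step(theta, chi, grad_latent)
print(theta, chi(theta))                      # the latent output moves toward 1/2
```

In the actual experiments each player would run such a step simultaneously, with the latent gradient computed from the regularized game losses $f_{i}$ and the regularizer $h_{\mu}$.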
(a) A trajectory of PHGD in a regularized game of Matching Pennies over the
sub-level sets of the energy function in (5). The trajectory is depicted with
reference to both the space of control variables (left) and the space of
latent variables (right). Notice that the trajectory crosses each sub-level
set of the energy function at most once, indicating that the function’s
value is monotone along the trajectory.
(b) A trajectory of GD in a regularized game of Matching Pennies over the
sub-level sets of the energy function in (5). The trajectory is depicted with
reference to both the space of control variables (left) and the space of
latent variables (right). Notice that the trajectory crosses each sub-level
set of the energy function multiple times, indicating that the function’s
value is not monotone along the trajectory.
(c) The distance of $x$ to the latent game’s equilibrium point
$x^{\ast}=(\frac{1}{2},\frac{1}{2})$ along similar trajectories of PHGD and GD
in a regularized game of Matching Pennies. The distance is depicted in a
logarithmic scale in order to reveal the exponential rate of convergence of
PHGD.
#### A Hidden Game of Matching Pennies.
As a first example, we revisit the two-player hidden game of Matching Pennies
we introduced in Section 5. Here, each player $i$’s control variable
$\theta_{i}$ is one-dimensional and is fed to the player’s individual MLP
given by the representation maps
$\chi_{i}(\theta_{i})=\operatorname{sigmoid}\big{(}\alpha^{(2)}_{i}\cdot
CeLU(\alpha^{(1)}_{i}\cdot\theta_{i})\big{)}\quad\text{for all $i=1,2$},$
(D.1)
where $\alpha^{(1)}_{i},\alpha^{(2)}_{i}\in[-1,1]$ are randomly chosen.
Although the activation functions
$\operatorname{sigmoid}:\mathbb{R}\to[0,1]$ and
$CeLU:\mathbb{R}\to(-1,\infty)$ are standard, we give their definitions below for
reference:
$\displaystyle\operatorname{sigmoid}(x)$
$\displaystyle=\big{(}1+\exp(-x)\big{)}^{-1}$ (D.2a) $\displaystyle CeLU(x)$
$\displaystyle=\max\\{0,x\\}+\min\\{0,\exp(x)-1\\}.$ (D.2b)
In this and the following examples, without any loss of generality, we
deliberately set the bias of each of the MLP’s hidden layers to zero, so that
the base game’s unique equilibrium lies at $\theta^{\ast}=\vec{0}$ and
the notation is simplified. Each MLP’s output $x_{i}=\chi_{i}(\theta_{i})$ is
the $i$-th player’s probability of playing Heads in a regularized game of
Matching Pennies, given by the loss functions
$f_{1}(x)=-f_{2}(x)=-(2x_{1}-1)\cdot(2x_{2}-1)+h_{0.75}(x).$ (D.3)
The latent game’s unique equilibrium, in this case, is at
$x^{\ast}=\chi(\theta^{\ast})=(\frac{1}{2},\frac{1}{2})$.
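For concreteness, the sketch below writes down the latent gradient field induced by (D.3), taking each player’s loss to be its bilinear part plus the regularizer $h_{0.75}$ as in the common test suite, and checks that $x^{\ast}=(\frac{1}{2},\frac{1}{2})$ is a zero of the field; the specific test points are assumptions of the illustration.

```python
import numpy as np

mu, x_star = 0.75, np.array([0.5, 0.5])

def g(x):
    """Latent gradient field of the regularized Matching Pennies game."""
    x1, x2 = x
    g1 = -2.0 * (2.0 * x2 - 1.0) + mu * (x1 - x_star[0])   # d f_1 / d x_1
    g2 = 2.0 * (2.0 * x1 - 1.0) + mu * (x2 - x_star[1])    # gradient of player 2's regularized loss
    return np.array([g1, g2])

print(g(x_star))                  # expected: [0. 0.]
print(g(np.array([0.9, 0.2])))    # nonzero away from the equilibrium
```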
In this particular example it is possible to visualize the trajectories of
PHGD and gradient descent with reference, both, the space of control variables
$\mathbb{R}^{2}$, and the space of latent variables $[0,1]^{2}$. In Fig. 8(a)
we depict the trajectory of PHGD, with step-size $0.01$, in the above game,
initialized at the arbitrary state $(1.25,2.25)$, with respect to both the
space of control variables (left) and the space of latent variables (right),
over the sub-level sets of the energy function in (5). A similar trajectory of
the GD algorithm is depicted in Fig. 8(b). Observe that the trajectory of PHGD crosses the
sub-level sets of the energy function _at most once_ until it asymptotically
reaches the game’s equilibrium point at $\theta^{\ast}$, as opposed to the
trajectory of GD, which exhibits an erratic behavior. In particular, as is
depicted in the semi-log plot in Fig. 8(c), PHGD converges to the game’s
equilibrium point at an exponential rate as opposed to the rate of GD, which,
at best, can be described as linear.
(a) A trajectory of PHGD in a regularized game of Rock-Paper-Scissors as
depicted in each player’s $2$-dimensional simplex of latent variables. Notice
that the trajectory converges to the latent game’s equilibrium point
$x^{\ast}$.
(b) A trajectory of GD in a regularized game of Rock-Paper-Scissors as
depicted in each player’s $2$-dimensional simplex of latent variables. Notice
that the trajectory exhibits erratic behavior.
(c) The mean distance and confidence bound of $x$ to the latent game’s
equilibrium point $x^{\ast}$ along similar trajectories of PHGD and GD in a
regularized game of Rock-Paper-Scissors. The distance is depicted in a
logarithmic scale in order to reveal the exponential rate of convergence of
PHGD.
#### A Hidden Game of Rock-Paper-Scissors.
In the next example we consider a regularized hidden game of Rock-Paper-
Scissors between two players. In a standard Rock-Paper-Scissors game each
player may choose among three strategies, namely, Rock, Paper, or Scissors. No
strategy dominates the others: Rock beats Scissors, Paper
beats Rock, and Scissors beats Paper. In this example, each player
$i$ controls a $5$-dimensional vector $\theta_{i}\in\mathbb{R}^{5}$ of control
variables, which, once again, they feed into their individual differentiable
preconfigured MLP, whose output $x_{i}=\chi_{i}(\theta_{i})$ lies in the
$2$-dimensional simplex that encodes the space of probability distributions
among the three strategies of the latent game. The two MLPs are given by the
following representation maps:
$\chi_{i}(\theta_{i})=\operatorname{softmax}\big{(}A^{(2)}_{i}\times
CeLU(A^{(1)}_{i}\times\theta_{i})\big{)}\quad\text{for all $i=1,2$},$ (D.4)
where the matrices $A^{(1)}_{i}\in[-1,1]^{4\times 5}$, and
$A^{(2)}_{i}\in[-1,1]^{3\times 4}$ are random, and the activation function
$CeLU:\mathbb{R}\to(-1,\infty)$, given as in (D.2b), is applied component-wise. The
definition of
$\operatorname{softmax}:\mathbb{R}^{d}\to\operatorname{\operatorname{\Delta}}_{d-1}$,
where $\operatorname{\operatorname{\Delta}}_{d-1}$ is the $(d-1)$-dimensional
simplex is given, for reference, by
$\operatorname{softmax}_{j}(x)=\frac{\exp(x_{j})}{\sum_{k=1}^{d}\exp(x_{k})}\quad\text{for
all $j=1,\dots,d$}.$ (D.5)
In this case, the latent game’s loss functions are given by the following
polynomials, and the latent game’s unique equilibrium lies at the uniform
distributions
$x^{\ast}_{i}=\chi_{i}(\theta^{\ast}_{i})=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$,
$i=1,2$:
$f_{1}(x)=-f_{2}(x)=-x_{1}^{\intercal}Ax_{2}+h_{0.2}(x)\quad\text{where
$A=\begin{bmatrix}[r]0&-1&1\\\ 1&0&-1\\\ -1&1&0\end{bmatrix}$.}$ (D.6)
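Analogously to the previous example, the short check below verifies that the uniform profile annihilates the latent gradient field induced by (D.6), with each player’s loss taken as its bilinear part plus the regularizer $h_{0.2}$; this is a sketch of the latent game only, not of the full MLP pipeline.

```python
import numpy as np

A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
mu, x_star = 0.2, np.full(3, 1.0 / 3.0)

def g(x1, x2):
    g1 = -A @ x2 + mu * (x1 - x_star)       # gradient of f_1 with respect to x_1
    g2 = A.T @ x1 + mu * (x2 - x_star)      # gradient of player 2's regularized loss w.r.t. x_2
    return g1, g2

print(g(x_star, x_star))                    # expected: two (near-)zero vectors
```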
Although, in this example, it is not possible to visualize the trajectories of
PHGD and GD in the space of control variables due to the large dimensionality
of the base game, we may still get a glimpse of the trajectories’ behavior in
the space of latent variables. Fig. 10(a) depicts an arbitrary trajectory of
PHGD with step-size $0.01$, as it evolves in the simplices of the two players.
Notice that the trajectory clearly converges to the equilibrium of the
latent game. A trajectory of GD, with the same step size and initialization
point, is depicted in Fig. 10(b). In this case, the erratic behavior of GD is
more apparent than in the previous example. A more in-depth comparison of the
algorithms in the current setup is depicted in the semi-log plot of Fig.
10(c), where we visualize the mean distance, and confidence bounds, to the
equilibrium point across $100$ random trajectories of the two algorithms.
Observe that PHGD exhibits an exponential rate of convergence, as opposed to
the linear rate of GD.
Figure 11. The mean distance and confidence bounds of $x$ to the latent game’s
equilibrium point $x^{\ast}$ along similar trajectories of PHGD and GD in a
regularized Shapley game. The distance is depicted in a logarithmic scale in
order to reveal the exponential rate of convergence of PHGD.
#### A Hidden Shapley’s Game.
The next game we are interested in is a hidden Shapley’s game. The standard
Shapley’s game is a two-player normal-form game with payoff matrices:
$A=\begin{bmatrix}[r]1&0&\beta\\\ \beta&1&0\\\
0&\beta&1\end{bmatrix}\qquad\text{and}\qquad B=\begin{bmatrix}[r]-\beta&1&0\\\
0&-\beta&1\\\ 1&0&-\beta\end{bmatrix},$ (D.7)
for some $\beta\in(0,1)$. The setup of this example is quite similar to the
one in the previous example. However, there are a few noteworthy differences.
To begin with, the hidden Shapley’s game is not a zero-sum game. Specifically,
the latent game’s loss functions are given by the following polynomials:
$\displaystyle f_{1}(x)$ $\displaystyle=-x_{1}^{\intercal}Ax_{2}+h_{0.2}(x)$
(D.8) $\displaystyle f_{2}(x)$
$\displaystyle=-x_{2}^{\intercal}B^{\intercal}x_{1}+h_{0.2}(x).$
As in the hidden Rock-Paper-Scissors game, this game’s unique equilibrium also
lies at
$x^{\ast}_{i}=\chi_{i}(\theta^{\ast}_{i})=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$,
$i=1,2$. A second difference is the small modulus, $\mu=0.2$, that we choose
for the strongly-convex regularizer of this game. In Fig. 11 we depict a semi-
log plot of the mean performance of PHGD and GD in the above game constructed
using the same parameters as in the case of the hidden Rock-Paper-Scissors
game. Notice that although the behavior of PHGD is similar in both games, the
confidence bounds of GD are drastically larger in the current example.
Figure 12. The mean distance and confidence bounds of $x$ to the latent game’s
equilibrium point $x^{\ast}$ along similar trajectories of PHGD and GD in a
regularized El Farol Bar game. The distance is depicted in a logarithmic scale
in order to reveal the exponential rate of convergence of PHGD.
#### A Hidden El Farol Bar Game.
As a final example, we are going to revisit and describe the details of the
hidden El Farol Bar game we introduced in Section 5. The El Farol Bar game is
a famous congestion game that is often described as a game between
populations. However, in this example, we are interested in its atomic
$N$-player variant. In the standard El Farol Bar game each player is given the
option of visiting a specific bar in El Farol, and the outcome of the game is
determined based on the number of players that decided to visit the bar. From
the perspective of the player, there are three possible situations they may
encounter, each carrying a respective payoff. If the player decides not
to visit the bar, then, independently of the number of bar patrons, the player
receives a payoff $S$. On the other hand, if the player decides to visit the
El Farol bar, then depending on how crowded the bar is, they may receive a
payoff that is strictly smaller, or strictly larger than $S$. If the bar is
crowded, i.e., more than $C\geq 0$ other players have visited the bar at the
same time, then each of them receives payoff $B<S$. However, if the bar is not
crowded, i.e., the total number of patrons is less than $C$, then they receive
payoff $G>S$. It is not difficult to verify that the El Farol Bar game has a
unique mixed Nash equilibrium, where each player chooses to visit the bar with
probability $\frac{C}{N}$.
We are going to consider a hidden variant of the above game among $N=30$
players. Each player $i$ controls a $5$-dimensional vector of control
variables $\theta_{i}\in\mathbb{R}^{5}$, which they feed into an individual,
differentiable, and preconfigured MLP. The MLP’s output
$x_{i}=\chi_{i}(\theta_{i})$ is guaranteed to lie in $[0,1]$ and is
interpreted as the probability of player $i$ visiting the El Farol bar.
Regarding the MLP configuration, we follow a similar structure as in the
previous examples. Specifically, the representation maps
$\chi_{i}:\mathbb{R}^{5}\to[0,1]$ are defined as
$\chi_{i}(\theta_{i})=\operatorname{sigmoid}\big{(}\alpha^{(2)}_{i}\cdot
CeLU(A^{(1)}_{i}\times\theta_{i})\big{)}\quad\text{for all $i\in\mathcal{N}$},$ (D.9)
where $A^{(1)}_{i}\in[-0.85,0.85]^{4\times 5}$, and
$\alpha^{(2)}_{i}\in[-1,1]^{4}$ are randomly chosen. The additional
restriction to the domain of $A^{(1)}$ reduces the chance of numerical
overflow in the $\operatorname{sigmoid}$ activation function. The
activation function $CeLU:\mathbb{R}\to(-1,\infty)$ is given as in (D.2b) and
is applied component-wise to the output of $A^{(1)}_{i}\times\theta_{i}$. The loss
functions of this game follow the standard variant’s definitions, namely, we
have that
$f_{i}(x)=S+x_{i}\bigg{(}G-S+\operatorname{\mathbb{P}}\Big{(}\sum_{j\neq
i}x_{j}\geq C\Big{)}(B-G)\bigg{)}+h_{0.5}(x)\quad\text{for all
$i\in\mathcal{N}$},$ (D.10)
and the latent game’s unique equilibrium is at $x^{\ast}_{i}=\frac{C}{N}$,
$i\in\mathcal{N}$.
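To make (D.10) concrete, the sketch below evaluates one player’s loss at the latent equilibrium, interpreting the probability term as the chance that at least $C$ of the other players’ independent Bernoulli($x_{j}$) visits occur; the Monte Carlo estimator and the numerical values of $N$, $C$, $S$, $B$, $G$ are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, S, B, G, mu = 30, 18, 0.5, 0.0, 1.0, 0.5   # illustrative constants with B < S < G
x_star = np.full(N, C / N)                       # latent equilibrium x*_i = C / N

def crowding_prob(x, i, samples=20000):
    """Monte Carlo estimate of P(sum_{j != i} visits >= C) with visits ~ Bernoulli(x_j)."""
    others = np.delete(x, i)
    visits = rng.random((samples, others.size)) < others
    return float(np.mean(visits.sum(axis=1) >= C))

def loss_i(x, i):
    p = crowding_prob(x, i)
    reg = 0.5 * mu * np.sum((x - x_star) ** 2)   # regularizer h_{0.5}(x)
    return S + x[i] * (G - S + p * (B - G)) + reg

print(round(loss_i(x_star, 0), 3))               # loss of player 0 at the latent equilibrium
```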
The large dimensionality of the above game prohibits a detailed
visualization of the kind used in the previous examples. However, sufficient
information about the behavior of PHGD and GD can be gathered from the performance
log-plot across $100$ random trajectories in Fig. 12. Observe that, regardless
of the increased number of players, the behavior of PHGD is unaffected, i.e.,
it converges to the game’s equilibrium at an exponential rate. On the other
hand, GD fails to converge to the game’s equilibrium. In fact, it
eventually maintains a constant distance from it, unable to proceed further;
an indication of cycling behavior.
## References
* Antonakopoulos et al. [2019] Antonakopoulos, K., Belmega, E. V., and Mertikopoulos, P. An adaptive mirror-prox algorithm for variational inequalities with singular operators. In _NeurIPS ’19: Proceedings of the 33rd International Conference on Neural Information Processing Systems_ , 2019.
* Arthur [1995] Arthur, W. B. Complexity in economic and financial markets. _Complexity_ , 1(1):20–25, 1995.
* Auslender & Teboulle [2005] Auslender, A. and Teboulle, M. Interior projection-like methods for monotone variational inequalities. _Mathematical Programming_ , 104:39–68, 2005.
* Bakhtin et al. [2022] Bakhtin, A., Wu, D. J., Lerer, A., Gray, J., Jacob, A. P., Farina, G., Miller, A. H., and Brown, N. Mastering the game of no-press diplomacy via human-regularized reinforcement learning and planning. _arXiv preprint arXiv:2210.05492_ , 2022.
* Beltratti et al. [1996] Beltratti, A., Margarita, S., Terna, P., et al. _Neural networks for economic and financial modelling_. International Thomson Computer Press London, UK, 1996.
* Bichler et al. [2021] Bichler, M., Fichtl, M., Heidekrüger, S., Kohring, N., and Sutterer, P. Learning equilibria in symmetric auction games using artificial neural networks. _Nat. Mach. Intell._ , 3(8):687–695, 2021. doi: 10.1038/s42256-021-00365-4. URL https://doi.org/10.1038/s42256-021-00365-4.
* Bravo et al. [2018] Bravo, M., Leslie, D. S., and Mertikopoulos, P. Bandit learning in concave ${N}$-person games. In _NeurIPS ’18: Proceedings of the 32nd International Conference of Neural Information Processing Systems_ , 2018.
* Chung [1954] Chung, K.-L. On a stochastic approximation method. _The Annals of Mathematical Statistics_ , 25(3):463–483, 1954.
* Daskalakis et al. [2009] Daskalakis, C., Goldberg, P. W., and Papadimitriou, C. H. The complexity of computing a Nash equilibrium. _Communications of the ACM_ , 52(2):89–97, 2009.
* Daskalakis et al. [2021] Daskalakis, C., Skoulakis, S., and Zampetakis, M. The complexity of constrained min-max optimization. In _STOC ’21: Proceedings of the 53rd annual ACM SIGACT symposium on the Theory of Computing_ , 2021.
* Duvocelle et al. [2023] Duvocelle, B., Mertikopoulos, P., Staudigl, M., and Vermeulen, D. Multi-agent online learning in time-varying games. _Mathematics of Operations Research_ , 48(2):914–941, May 2023.
* Facchinei & Pang [2003] Facchinei, F. and Pang, J.-S. _Finite-Dimensional Variational Inequalities and Complementarity Problems_. Springer Series in Operations Research. Springer, 2003.
* Gidel et al. [2021] Gidel, G., Balduzzi, D., Czarnecki, W. M., Garnelo, M., and Bachrach, Y. Minimax theorem for latent games or: How i learned to stop worrying about mixed-nash and love neural nets. _AISTATS_ , 2021.
* Goodfellow et al. [2014] Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In _NIPS ’14: Proceedings of the 28th International Conference on Neural Information Processing Systems_ , 2014.
* Hart & Mas-Colell [2003] Hart, S. and Mas-Colell, A. Uncoupled dynamics do not lead to Nash equilibrium. _American Economic Review_ , 93(5):1830–1836, 2003.
* Hart & Mas-Colell [2006] Hart, S. and Mas-Colell, A. Stochastic uncoupled dynamics and Nash equilibrium. _Games and Economic Behavior_ , 57:286–303, 2006.
* Hazan [2012] Hazan, E. A survey: The convex optimization approach to regret minimization. In Sra, S., Nowozin, S., and Wright, S. J. (eds.), _Optimization for Machine Learning_ , pp. 287–304. MIT Press, 2012.
* Hsieh et al. [2019] Hsieh, Y.-G., Iutzeler, F., Malick, J., and Mertikopoulos, P. On the convergence of single-call stochastic extra-gradient methods. In _NeurIPS ’19: Proceedings of the 33rd International Conference on Neural Information Processing Systems_ , pp. 6936–6946, 2019.
* Hsieh et al. [2021] Hsieh, Y.-P., Mertikopoulos, P., and Cevher, V. The limits of min-max optimization algorithms: Convergence to spurious non-critical sets. In _ICML ’21: Proceedings of the 38th International Conference on Machine Learning_ , 2021.
* Kelly et al. [1998] Kelly, F. P., Maulloo, A. K., and Tan, D. K. H. Rate control for communication networks: shadow prices, proportional fairness and stability. _Journal of the Operational Research Society_ , 49(3):237–252, March 1998.
* Lan [2020] Lan, G. _First-order and Stochastic Optimization Methods for Machine Learning_. Springer, 2020.
* Lee [1997] Lee, J. M. _Riemannian Manifolds: an Introduction to Curvature_. Number 176 in Graduate Texts in Mathematics. Springer, 1997.
* Lee [2003] Lee, J. M. _Introduction to Smooth Manifolds_. Number 218 in Graduate Texts in Mathematics. Springer-Verlag, New York, NY, 2003.
* Letcher [2021] Letcher, A. On the impossibility of global convergence in multi-loss optimization. In _ICLR ’21: Proceedings of the 2021 International Conference on Learning Representations_ , 2021.
* Liu et al. [2021] Liu, X., Yu, C., Zhang, Z., Zheng, Z., Rong, Y., Lv, H., Huo, D., Wang, Y., Chen, D., Xu, J., Wu, F., Chen, G., and Zhu, X. Neural auction: End-to-end learning of auction mechanisms for e-commerce advertising. In Zhu, F., Ooi, B. C., and Miao, C. (eds.), _KDD ’21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021_ , pp. 3354–3364. ACM, 2021. doi: 10.1145/3447548.3467103. URL https://doi.org/10.1145/3447548.3467103.
* Madry et al. [2018] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In _ICLR ’18: Proceedings of the 2018 International Conference on Learning Representations_ , 2018.
* Mertikopoulos & Moustakas [2016] Mertikopoulos, P. and Moustakas, A. L. Learning in an uncertain world: MIMO covariance matrix optimization with imperfect feedback. _IEEE Trans. Signal Process._ , 64(1):5–18, January 2016.
* Mertikopoulos & Zhou [2019] Mertikopoulos, P. and Zhou, Z. Learning in games with continuous action sets and unknown payoff functions. _Mathematical Programming_ , 173(1-2):465–507, January 2019.
* Mertikopoulos et al. [2017] Mertikopoulos, P., Belmega, E. V., Negrel, R., and Sanguinetti, L. Distributed stochastic optimization via matrix exponential learning. _IEEE Trans. Signal Process._ , 65(9):2277–2290, May 2017.
* Mertikopoulos et al. [2023] Mertikopoulos, P., Hsieh, Y.-P., and Cevher, V. A unified stochastic approximation framework for learning in games. _Mathematical Programming_ , forthcoming, 2023.
* Milionis et al. [2023] Milionis, J., Papadimitriou, C., Piliouras, G., and Spendlove, K. An impossibility theorem in game dynamics. _Proceedings of the National Academy of Sciences_ , 120(41):e2305349120, 2023.
* Mladenovic et al. [2021] Mladenovic, A., Sakos, I., Gidel, G., and Piliouras, G. Generalized natural gradient flows in hidden convex-concave games and gans. In _International Conference on Learning Representations_ , 2021.
* Nesterov [2007] Nesterov, Y. Dual extrapolation and its applications to solving variational inequalities and related problems. _Mathematical Programming_ , 109(2):319–344, 2007.
* Nikaido & Isoda [1955] Nikaido, H. and Isoda, K. Note on non-cooperative convex games. _Pacific Journal of Mathematics_ , 5:807–815, 1955.
* Orda et al. [1993] Orda, A., Rom, R., and Shimkin, N. Competitive routing in multi-user communication networks. _IEEE/ACM Trans. Netw._ , 1(5):614–627, October 1993.
* Perolat et al. [2022] Perolat, J., De Vylder, B., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., Connor, J. T., Burch, N., Anthony, T., et al. Mastering the game of stratego with model-free multiagent reinforcement learning. _Science_ , 378(6623):990–996, 2022.
* Pinto et al. [2017] Pinto, L., Davidson, J., Sukthankar, R., and Gupta, A. Robust adversarial reinforcement learning. In _ICML ’17: Proceedings of the 34th International Conference on Machine Learning_ , 2017.
* Rosen [1965] Rosen, J. B. Existence and uniqueness of equilibrium points for concave ${N}$-person games. _Econometrica_ , 33(3):520–534, 1965.
* Scutari et al. [2010] Scutari, G., Facchinei, F., Palomar, D. P., and Pang, J.-S. Convex optimization, game theory, and variational inequality theory in multiuser communication systems. _IEEE Signal Process. Mag._ , 27(3):35–49, May 2010.
* Silver et al. [2017] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of Go without human knowledge. _Nature_ , 550(7676):354–359, 2017.
* Singla & Feizi [2021] Singla, S. and Feizi, S. Fantastic four: Differentiable and efficient bounds on singular values of convolution layers. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. URL https://openreview.net/forum?id=JCRblSgs34Z.
* Vinyals et al. [2019] Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P., Jaderberg, M., Vezhnevets, A. S., Leblond, R., Pohlen, T., Dalibard, V., Budden, D., Sulsky, Y., Molloy, J., Paine, T. L., Gulcehre, C., Wang, Z., Pfaff, T., Wu, Y., Ring, R., Yogatama, D., Wünsch, D., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., and Silver, D. Grandmaster level in starcraft ii using multi-agent reinforcement learning. _Nature_ , 575:350–354, 2019.
* Vlatakis-Gkaragkounis et al. [2019] Vlatakis-Gkaragkounis, E.-V., Flokas, L., and Piliouras, G. Poincaré recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games. In _Advances in Neural Information Processing Systems_ , pp. 10450–10461, 2019.
* Vlatakis-Gkaragkounis et al. [2021] Vlatakis-Gkaragkounis, E.-V., Flokas, L., and Piliouras, G. Solving min-max optimization with hidden structure via gradient descent ascent. _Advances in Neural Information Processing Systems_ , 34:2373–2386, 2021.
* Whitehead et al. [2008] Whitehead, D. et al. The el farol bar problem revisited: Reinforcement learning in a potential game. _ESE discussion papers_ , 186, 2008.
* Zhang et al. [2019] Zhang, G., Martens, J., and Grosse, R. B. Fast convergence of natural gradient descent for over-parameterized neural networks. _Advances in Neural Information Processing Systems_ , 32, 2019.
*[PHGD]: preconditioned hidden gradient descent
*[MLPs]: multi-layer perceptron
*[MLP]: multi-layer perceptron
*[i.i.d.]: independent and identically distributed
*[GD]: gradient descent
*[LHS]: left-hand side
|
# On Free $\omega$-Continuous and Regular Ordered Algebras
Zoltán Ésik Institute of Informatics, University of Szeged, P.O. Box 652,
H-6701 Szeged, Hungary and Dexter Kozen Computer Science Department,
Cornell University, Ithaca, NY 14853-7501, USA<EMAIL_ADDRESS>
###### Abstract.
Let $E$ be a set of inequalities between finite $\Sigma$-terms. Let
${\mathcal{V}}_{\omega}$ and ${\mathcal{V}}_{r}$ denote the varieties of all
$\omega$-continuous ordered $\Sigma$-algebras and regular ordered
$\Sigma$-algebras satisfying $E$, respectively. We prove that the free
${\mathcal{V}}_{r}$-algebra $R(X)$ on generators $X$ is the subalgebra of the
corresponding free ${\mathcal{V}}_{\omega}$-algebra $F_{\omega}(X)$ determined
by those elements of $F_{\omega}(X)$ denoted by the regular $\Sigma$-coterms.
We actually establish this fact as a special case of a more general
construction for families of algebras specified by syntactically restricted
completeness and continuity properties. Thus our result is also applicable to
ordered regular algebras of higher order.
###### Key words and phrases:
Regular algebra; $\omega$-continuous algebra; iteration theories
## 1\. $\omega$-Continuous Algebras
Let $\Sigma$ be a ranked alphabet, which will be fixed throughout. A
$\Sigma$-algebra $A$ is called _ordered_ if $A$ is partially ordered by a
relation $\leq$ with least element $\bot^{A}$ and the algebraic operations are
monotone with respect to $\leq$; that is, if $f\in\Sigma_{n}$ and
$a_{i},b_{i}\in A$ with $a_{i}\leq b_{i}$ for $1\leq i\leq n$, then
$f^{A}(a_{1},\ldots,a_{n})\leq f^{A}(b_{1},\ldots,b_{n})$. A morphism $h:A\to
B$ of ordered algebras is a strict monotone map that commutes with the
algebraic operations:
$\displaystyle a\leq b\Rightarrow h(a)\leq h(b)\qquad\
h(\bot^{A})=\bot^{B}\qquad\
h(f^{A}(a_{1},\ldots,a_{n}))=f^{B}(h(a_{1}),\ldots,h(a_{n}))$
for all $a,b,a_{1},\ldots,a_{n}\in A$ and $f\in\Sigma_{n}$, $n\geq 0$.
For each set $X$, there is a free ordered algebra $\mathsf{T}X$ freely
generated by $X$. The elements of $\mathsf{T}X$ are represented by the finite
partial $\Sigma$-terms over $X$; here _partial_ means that some subterms may
be missing, which is the same as having the empty term $\bot^{\mathsf{T}X}$ in
that position. When $t\in\mathsf{T}X$ and $A$ is an ordered algebra, $t$
induces a function $t^{A}:A^{X}\to A$ in the usual way.
An ordered algebra $A$ is _$\omega$ -continuous_ [5, 7, 8] if it is
$\omega$-complete and the operations are $\omega$-continuous. That is, any
countable directed set (or countable chain) $C$ has a supremum $\bigvee C$,
and when $n\geq 0$, $f\in\Sigma_{n}$, and $C_{i}$ is a countable directed set
for $1\leq i\leq n$, then
$\displaystyle f^{A}(\bigvee C_{1},\ldots,\bigvee C_{n})$
$\displaystyle=\bigvee\\{f^{A}(x_{1},\ldots,x_{n})\mid x_{i}\in C_{i},\ 1\leq
i\leq n\\}.$
A morphism of $\omega$-continuous algebras is an $\omega$-continuous ordered
algebra morphism.
For each set $X$, there is a free $\omega$-continuous algebra $\mathsf{C}X$
freely generated by $X$. The elements of $\mathsf{C}X$ are the partial coterms
over $\Sigma$ (finite or infinite partial terms). When $t\in\mathsf{C}X$ and
$A$ is an $\omega$-continuous algebra, $t$ induces a function $t^{A}:A^{X}\to
A$.
Suppose that $X_{\omega}$ is a fixed countably infinite set. A set $E$ of
inequalities $t\sqsubseteq t^{\prime}$ between terms in
$\mathsf{T}{X_{\omega}}$ determines in the usual way a _variety of ordered
algebras_ , denoted ${\mathcal{V}}$, and a _variety of $\omega$-continuous
algebras_, denoted ${\mathcal{V}}_{\omega}$. An ordered algebra $A$ belongs to
${\mathcal{V}}$ iff $t^{A}\leq t^{\prime A}$ in the pointwise ordering of
functions $A^{X_{\omega}}\to A$ whenever $t\sqsubseteq t^{\prime}$ is in $E$.
The class ${\mathcal{V}}_{\omega}$ contains all $\omega$-continuous algebras
in ${\mathcal{V}}$. It is known that all free algebras exist both in
${\mathcal{V}}$ and in ${\mathcal{V}}_{\omega}$ [3, 8]. Moreover, the free
algebra in ${\mathcal{V}}$ freely generated by a set $X$ can be represented as
the $X$-generated ordered subalgebra $F(X)$ of the free $\omega$-continuous
algebra $F_{\omega}(X)$ in ${\mathcal{V}}_{\omega}$. Moreover, $F_{\omega}(X)$
is the completion of $F(X)$ by $\omega$-ideals (see §2).
More generally, for $C$ a subset of an ordered algebra $A$, define
$\displaystyle{C}\kern-2.0pt\downarrow$ $\displaystyle=\\{c\mid\exists a\in C\
c\leq a\\}.$
The set $C$ is an _order ideal_ if it is directed and downward closed; that
is, if $a,b\in C$ implies there exists $c\in C$ such that $a,b\leq c$, and if
$a\leq b$ and $b\in C$, then $a\in C$, i.e. $C={C}\kern-2.0pt\downarrow$. An
order ideal $C\mathrel{\subseteq}A$ is called an _$\omega$ -ideal_ if it is
countably generated; that is, there is a countable directed set
$C_{0}\mathrel{\subseteq}C$ such that $C={C_{0}}\kern-2.0pt\downarrow$. The
set $I_{\omega}(A)$ of all $\omega$-ideals of $A$ ordered by set inclusion
$\mathrel{\subseteq}$ is an $\omega$-complete poset, where the supremum of an
$\omega$-ideal $\mathcal{A}$ of $\omega$-ideals is the union
$\bigcup\mathcal{A}$. We can turn $I_{\omega}(A)$ into an $\omega$-continuous
algebra by defining
$\displaystyle f^{I_{\omega}(A)}(C_{1},\ldots,C_{n})$
$\displaystyle={\\{f^{A}(a_{1},\ldots,a_{n})\mid a_{i}\in C_{i},\ 1\leq i\leq
n\\}}\kern-2.0pt\downarrow,$
the order ideal generated by the set $\\{f^{A}(a_{1},\ldots,a_{n})\mid
a_{i}\in C_{i},\ 1\leq i\leq n\\}$. It is clear that $A$ can be embedded in
$I_{\omega}(A)$ by the ordered algebra morphism mapping $a\in A$ to the order
ideal ${\\{a\\}}\kern-2.0pt\downarrow$ generated by $a$. In fact, it is known
that $I_{\omega}(A)$ is the _free completion_ of $A$ to an $\omega$-continuous
algebra [3, 8]. In particular, $\mathsf{C}X$ is isomorphic to
$I_{\omega}(\mathsf{T}X)$ for all $X$.
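As a small worked example (added here for exposition, with the assumed one-symbol signature $\Sigma=\\{f\\}$, $f$ unary, and $X=\emptyset$), the finite partial terms form the chain $\bot\leq f(\bot)\leq f(f(\bot))\leq\cdots$ in $\mathsf{T}X$. This chain generates the $\omega$-ideal
$\displaystyle C={\\{\bot,f(\bot),f^{2}(\bot),\ldots\\}}\kern-2.0pt\downarrow\in I_{\omega}(\mathsf{T}X),$
and under the isomorphism $I_{\omega}(\mathsf{T}X)\cong\mathsf{C}X$ this ideal corresponds to the infinite coterm $f(f(f(\cdots)))$, the supremum of its finite approximants.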
## 2\. Free Completion
Every ordered algebra can be completed to an $\omega$-continuous algebra by
the method of completion by $\omega$-ideals. This result is well known and can
be proved by a standard universal-algebraic argument [3, 8]. In this section we
give a more general construction that will allow us to apply this idea more
widely.
Suppose we have a monad $(F,\mu^{F},\eta^{F})$ on some category $C$.
$\displaystyle\mu^{F}\circ F\mu^{F}=\mu^{F}\circ\mu^{F}F,\qquad\qquad\mu^{F}\circ F\eta^{F}=\mu^{F}\circ\eta^{F}F=1_{F}.$ (1)
An _$F$ -algebra_ (aka Eilenberg–Moore algebra for $F$) is a pair
$(X,\alpha)$, where $\alpha:FX\to X$ is the _structure map_ of the algebra. It
must satisfy
$\displaystyle\alpha\circ F\alpha=\alpha\circ\mu^{F}_{X},\qquad\qquad\alpha\circ\eta^{F}_{X}=1_{X}.$ (2)
An _$F$ -algebra morphism_ is a morphism of the underlying category $C$ that
commutes with the structure maps. The category of $F$-algebras and $F$-algebra
morphisms is denoted $F\mbox{-}\mathsf{Alg}$.
In our application, the underlying category $C$ is the category of posets with
$\bot$ and strict monotone maps. An $F$-algebra is a pair $(X,\alpha)$ where
$\alpha:FX\to X$ is a morphism of $C$. For functors representing algebraic
signatures (polynomial functors with an added $\bot$ or partial term
algebras), this implies that the algebraic operations are monotone, but not
necessarily strict.
The free ordered algebra $\mathsf{T}X$ and the free $\omega$-continuous
algebra $\mathsf{C}X$ are Eilenberg-Moore algebras for the partial term monad
and partial coterm monad for the signature $\Sigma$, respectively.
Suppose further that we have another monad $(D,\mu^{D},\eta^{D})$ on $C$,
along with a distributive law $\lambda:FD\to DF$.
$\displaystyle\lambda\circ F\mu^{D}=\mu^{D}F\circ D\lambda\circ\lambda D,\qquad\lambda\circ\mu^{F}D=D\mu^{F}\circ\lambda F\circ F\lambda,\qquad\lambda\circ F\eta^{D}=\eta^{D}F,\qquad\lambda\circ\eta^{F}D=D\eta^{F}.$ (3)
One can show that $D$ lifts to a monad $(\widehat{D},\mu^{D},\eta^{D})$ on
$F\mbox{-}\mathsf{Alg}$ with
$\displaystyle\widehat{D}(X,\alpha)$
$\displaystyle=(DX,D\alpha\circ\lambda_{X})$ $\displaystyle\widehat{D}h$
$\displaystyle=Dh.$
The monad operations $\mu^{D}$ and $\eta^{D}$ of $\widehat{D}$ are the same as
those of $D$. The monad laws (1) hold for $\widehat{D}$ because they are the
same as those for $D$. We must also show that the Eilenberg-Moore properties
(2) hold and that $\mu^{D}_{X}$, $\eta^{D}_{X}$ are $F\mbox{-}\mathsf{Alg}$
morphisms.
$\displaystyle(D\alpha\circ\lambda_{X})\circ F(D\alpha\circ\lambda_{X})=(D\alpha\circ\lambda_{X})\circ\mu^{F}_{DX},\qquad(D\alpha\circ\lambda_{X})\circ\eta^{F}_{DX}=1_{DX},$
$\displaystyle(D\alpha\circ\lambda_{X})\circ F\mu^{D}_{X}=\mu^{D}_{X}\circ D(D\alpha\circ\lambda_{X})\circ\lambda_{DX},\qquad(D\alpha\circ\lambda_{X})\circ F\eta^{D}_{X}=\eta^{D}_{X}\circ\alpha.$
These results follow from the monad properties (1) for $F$ and $D$, the
Eilenberg-Moore properties (2) for $(X,\alpha)$, and the distributive laws (3)
by straightforward arrow chasing.
Now suppose $(X,\beta)$ is a $D$-algebra. Under what conditions can we
sensibly augment an $F$-algebra $(X,\alpha)$ with the new operation $\beta$? A
sufficient condition is when $((X,\alpha),\beta)$ forms a
$\widehat{D}$-algebra; that is, when $\beta$ is an $F\mbox{-}\mathsf{Alg}$
morphism $\beta:\widehat{D}(X,\alpha)\to(X,\alpha)$.
$\displaystyle\beta\circ(D\alpha\circ\lambda_{X})=\alpha\circ F\beta.$ (4)
This gives an $F+D$-algebra $(X,\alpha,\beta)=((X,\alpha),\beta)$ in which
$D$ distributes over $F$. Moreover, $(X,\alpha,\beta)$ is also a $\widehat{D}$-algebra,
since the conditions (2) are the same.
Now, as is well known, because we have a monad
$(\widehat{D},\mu^{D},\eta^{D})$, the functor
$(X,\alpha)\mapsto(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})$ is left adjoint to
the forgetful functor $(X,\alpha,\beta)\mapsto(X,\alpha)$. The algebra
$(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})$ is the free completion of
$(X,\alpha)$. The unit of the adjunction
$\eta^{D}_{X}:(X,\alpha)\to(DX,D\alpha\circ\lambda_{X})$ embeds the original
$F$-algebra in its completion. The counit
$(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})\to(X,\alpha,\beta)$ is $\beta$.
Let $C$ be the category of posets with $\bot$ and strict monotone maps.
Consider the monad $(D,\mu^{D},\eta^{D})$ such that $DX$ is the set of all
countably generated order ideals ordered by set inclusion.
$\displaystyle DX$ $\displaystyle=\\{{A}\kern-2.0pt\downarrow\ \mid
A\mathrel{\subseteq}X,\ \text{$A$ is a countable directed set}\\}.$
For $\mathcal{A}\mathrel{\subseteq}DX$ a countable directed set of order
ideals of $X$, and for $a\in X$, let
$\displaystyle\mu^{D}_{X}({\mathcal{A}}\kern-2.0pt\downarrow)$
$\displaystyle={(\textstyle\bigcup\mathcal{A})}\kern-2.0pt\downarrow$
$\displaystyle\eta^{D}_{X}(a)$ $\displaystyle={\\{a\\}}\kern-2.0pt\downarrow.$
The distributive law $\lambda_{X}:FDX\to DFX$ takes a term or coterm over a
countable collection of order ideals to an order ideal of terms or coterms
over $X$ of the same shape. For example, for $f\in\Sigma_{n}$,
$\displaystyle\lambda_{X}:f(A_{1},\ldots,A_{n})\ \mapsto\
{\\{f(a_{1},\ldots,a_{n})\mid a_{i}\in A_{i},\ 1\leq i\leq
n\\}}\kern-2.0pt\downarrow.$
If $(X,\alpha)$ is an ordered $\Sigma$-algebra, applying $D\alpha$ to the
right-hand side applies $\alpha$ to every element of that set, which is the
same as interpreting the term in the $F$-algebra $(X,\alpha)$. For example,
$\displaystyle D\alpha\circ\lambda_{X}:f(A_{1},\ldots,A_{n})\ \mapsto\
{\\{f^{(X,\alpha)}(a_{1},\ldots,a_{n})\mid a_{i}\in A_{i},\ 1\leq i\leq
n\\}}\kern-2.0pt\downarrow.$
This is the algebra $\widehat{D}(X,\alpha)=(DX,D\alpha\circ\lambda_{X})$.
An Eilenberg-Moore algebra $(X,\beta)$ for the monad $D$ is simply an
$\omega$-complete poset, the operation $\beta:DX\to X$ giving the supremum of
a countably generated directed set. The property (4) says exactly that the
algebraic operations commute with the supremum operator; that is, the
algebraic operations are $\omega$-continuous.
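Unfolding property (4) on a single operation makes the correspondence explicit (a restatement added for exposition): for $f\in\Sigma_{n}$ and $\omega$-ideals $C_{1},\ldots,C_{n}$, the composite $\beta\circ(D\alpha\circ\lambda_{X})$ first forms the order ideal of elementwise values and then takes its supremum, while $\alpha\circ F\beta$ first takes the suprema $\bigvee C_{i}$ and then applies $f^{X}$; their equality is exactly
$\displaystyle f^{X}(\bigvee C_{1},\ldots,\bigvee C_{n})=\bigvee\\{f^{X}(x_{1},\ldots,x_{n})\mid x_{i}\in C_{i},\ 1\leq i\leq n\\},$
the $\omega$-continuity condition of §1.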
The $\widehat{D}$-algebra $(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})$, the free
extension of $(X,\alpha)$ to an $F+D$-algebra, is the completion by countably
generated order ideals. The unit of the adjunction
$\eta^{D}_{X}:(X,\alpha)\to(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})$ embeds
the original $F$-algebra in its ideal completion. The counit
$(DX,D\alpha\circ\lambda_{X},\mu^{D}_{X})\to(X,\alpha,\beta)$ is the supremum
operator $\beta$.
## 3\. Quasi-Regular Families
In this section we study limited completeness conditions in which not all
suprema need exist, but only those of a certain syntactically restricted form.
For each set $X$, let $\Delta X$ denote an ordered subalgebra of $\mathsf{C}X$
containing $X$. The set $\Delta X$ is a set of partial coterms closed under
the algebraic operations $\Sigma$. Note that each finite term in $\mathsf{T}X$
is in $\Delta X$, thus $\mathsf{T}X\mathrel{\subseteq}\Delta
X\mathrel{\subseteq}\mathsf{C}X$.
We further assume for all $X$ and $Y$ and for all morphisms
$h:\mathsf{C}X\to\mathsf{C}Y$ of $\omega$-continuous algebras with
$h(X)\mathrel{\subseteq}\Delta Y$ that $h(\Delta X)\mathrel{\subseteq}\Delta
Y$. A family of term algebras $(\Delta X)_{X}$ satisfying this property is
called a _quasi-regular family_.
Under these conditions, $\Delta$ determines a submonad
$(\Delta,\mu^{\Delta},\eta^{\Delta})$ of the coterm monad. The monad
operations are the same, except with suitably restricted domain. Thus
$\eta^{\Delta}_{X}:X\to\Delta X$ makes a singleton term $x$ out of an element
$x\in X$, and $\mu^{\Delta}_{X}:\Delta^{2}X\to\Delta X$ takes a term of terms
over $X$ and collapses it to a term over $X$.
Two examples of quasi-regular families are $\Delta X=\mathsf{C}X$, the set of
all partial coterms over $X$, and $\Delta X=\mathsf{T}X$, the set of all
partial terms over $X$. These are the maximum and minimum quasi-regular
families, respectively, for a given signature $\Sigma$. The regular or
algebraic trees [5, 8], or more generally, the set of regular (or rational)
trees of order $n$ [6] also form quasi-regular families. Also, the union of
the $n$-regular trees over $X$ for all $n\geq 0$ is a quasi-regular family.
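To make the regular case concrete (an illustration added here), a coterm is regular precisely when it has only finitely many distinct subterms, or equivalently, when it arises by unfolding a finite system of fixed-point equations. For instance, with a binary $f\in\Sigma_{2}$ and a constant $a\in\Sigma_{0}$, the unique coterm $t$ satisfying $t=f(a,t)$, namely $f(a,f(a,f(a,\ldots)))$, is regular; since substituting regular coterms into a regular coterm again yields a regular coterm, the regular coterms over each $X$ do form a quasi-regular family.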
###### Lemma 1.
For any set $X$, the set $\Delta X$ is uniquely determined by
$\Delta{X_{\omega}}$.
###### Proof 3.1.
Let $h:X\to Y$ be an injection. Then $h$ extends uniquely to a morphism
$\widehat{h}=\mu_{Y}\circ\Delta h:\Delta X\to\Delta Y$ of $\omega$-continuous
algebras. We claim that
$\displaystyle\Delta Y$ $\displaystyle=\\{\widehat{h}(t)\mid
X\mathrel{\subseteq}X_{\omega},\ \text{$h:X\to Y$ is an injection},\
t\in\Delta X\\}.$
The reverse inclusion holds by our assumption $\widehat{h}(\Delta
X)\mathrel{\subseteq}\Delta Y$. For the forward inclusion, suppose $s\in\Delta
Y$. Then $s\in\Delta Y^{\prime}$ for some finite or countable subset
$Y^{\prime}\mathrel{\subseteq}Y$, as there are at most countably many subterms
of $s$. Let $X\mathrel{\subseteq}X_{\omega}$ be of the same cardinality as
$Y^{\prime}$ and let $h:X\to Y^{\prime}$ be a bijection. Then
$\widehat{h}^{-1}(s)\in\Delta X_{\omega}$, $h:X\to Y$ is an injection, and
$s=\widehat{h}(\widehat{h}^{-1}(s))$.
For terms $t_{1},t_{2}\in\mathsf{C}X$, define $t_{1}\ll t_{2}$ if $t_{1}$ is
finite, but agrees as a labeled tree with $t_{2}$ wherever it is defined. That
is, $t_{1}\in\mathsf{T}X$ and $t_{1}$ can be obtained from $t_{2}$ by deleting
subterms. Clearly $t_{1}\leq t_{2}$ whenever $t_{1}\ll t_{2}$.
Let $A$ be an ordered $\Sigma$-algebra. The structure map
$\beta:\mathsf{T}A\to A$ interprets finite terms as elements of $A$. A set
$B\mathrel{\subseteq}A$ is called a _$\Delta$ -set_ if there is a term
$t\in\Delta A$ such that $B=\\{\beta(s)\mid s\ll t\\}$. The set $\\{s\mid s\ll
t\\}$ is a countable directed subset of $\mathsf{T}A$, and as $\beta$
preserves order, $B$ is a countable directed subset of $A$.
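For example (an illustration added here, taking $\Delta$ to be the regular coterms and assuming a binary $f\in\Sigma_{2}$), let $A$ be an ordered algebra, let $a\in A$, and let $t\in\Delta A$ be the regular coterm with $t=f(a,t)$. The terms $s\ll t$ are the finite truncations of $t$, and the corresponding $\Delta$-set $B=\\{\beta(s)\mid s\ll t\\}$ contains the chain
$\displaystyle\bot^{A}\ \leq\ f^{A}(a,\bot^{A})\ \leq\ f^{A}(a,f^{A}(a,\bot^{A}))\ \leq\ \cdots$
as a cofinal subchain; in an $\omega$-continuous algebra its supremum is the least solution of $x=f^{A}(a,x)$.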
We say that an ordered algebra $A$ is a _$\Delta$ -regular algebra_ if the
suprema of all $\Delta$-sets exist and the algebraic operations preserve the
suprema of $\Delta$-sets.
We will show that any $\Delta$-regular algebra can be extended to an
Eilenberg-Moore algebra for the monad $\Delta$. Note that $\Delta X$ itself is
a $\Delta$-regular algebra. This is due to the fact that $\Delta$ is a monad.
###### Theorem 2.
Let $A$ be a $\Delta$-regular algebra with structure map $\beta:\mathsf{T}A\to
A$. Extend $\beta$ to domain $\Delta A$ by defining
$\displaystyle\beta(t)=\bigvee\\{\beta(s)\mid s\ll t\\}.$
The suprema of $\Delta$-sets exist by assumption. Then $A$ with the extended
structure map $\beta$ is an Eilenberg-Moore algebra for the monad $\Delta$.
###### Proof 3.2.
We must show that the Eilenberg-Moore properties (2) hold: $\beta(a)=a$, and
for any indexed set $(s_{i}\mid i\in I)$ of elements of $\Delta A$ and
$t(s_{i}\mid i\in I)\in\Delta^{2}A$,
$\displaystyle\beta(t(s_{i}\mid i\in I))$
$\displaystyle=\beta(t(\beta(s_{i})\mid i\in I)).$ (5)
The first property holds since $(A,\beta)$ is an Eilenberg-Moore algebra for
the term monad $\mathsf{T}$. Property (5) holds for finite $t$ and finite
$s_{i}$ for the same reason.
More generally, by our assumption that the algebraic operators preserve
suprema of $\Delta$-sets, it follows by induction that for finite
$t\in\mathsf{T}\\{x_{1},\ldots,x_{n}\\}$ and $s_{i}\in\Delta A$, $1\leq i\leq
n$,
$\displaystyle\beta(t(\beta(s_{1}),\ldots,\beta(s_{n})))$
$\displaystyle=\bigvee_{\begin{subarray}{c}s_{i}^{\prime}\ll s_{i}\\\ 1\leq
i\leq
n\end{subarray}}\beta(t(\beta(s_{1}^{\prime}),\ldots,\beta(s_{n}^{\prime}))).$
(6)
Now to show (5) in the general case,
$\displaystyle\beta(t(s_{i}\mid i\in I))$
$\displaystyle=\bigvee_{t^{\prime}\ll t(s_{i}\mid i\in I)}\beta(t^{\prime})$
(7) $\displaystyle=\bigvee_{t^{\prime}\ll t}\
\bigvee_{\begin{subarray}{c}s_{i}^{\prime}\ll s_{i}\\\ i\in
I\end{subarray}}\beta(t^{\prime}(s_{i}^{\prime}\mid i\in I))$ (8)
$\displaystyle=\bigvee_{t^{\prime}\ll t}\
\bigvee_{\begin{subarray}{c}s_{i}^{\prime}\ll s_{i}\\\ i\in
I\end{subarray}}\beta(t^{\prime}(\beta(s_{i}^{\prime})\mid i\in I))$ (9)
$\displaystyle=\bigvee_{t^{\prime}\ll t}\ \beta(t^{\prime}(\beta(s_{i})\mid
i\in I))$ (10) $\displaystyle=\beta(t(\beta(s_{i})\mid i\in I)).$ (11)
Step (7) is by definition of $\beta$. Step (8) is by definition of $\ll$. Step
(9) is by property (5) for finite terms. Step (10) is by property (6).
Finally, step (11) is by definition of $\beta$.
We have shown that every $\Delta$-regular algebra is a $\Delta$-algebra, that
is, an Eilenberg-Moore algebra for the monad $\Delta$. Henceforth, we drop the
“regular” and just call them $\Delta$-algebras.
The value of $\beta$ is uniquely determined on all elements of $\Delta A$,
thus each ordered algebra with $\bot$ can be turned into a $\Delta$-algebra in
at most one way. Moreover, every $\Delta$-algebra is determined by the
interpretation of the terms in $\Delta(X_{\omega})$.
A _morphism_ of $\Delta$-algebras is a morphism on the carriers as ordered
sets with $\bot$ that commutes with the structure maps. It follows that any
morphism of $\Delta$-algebras preserves suprema of $\Delta$-sets, as we now
show.
###### Theorem 3.
Let $(A,\alpha)$ and $(B,\beta)$ be $\Delta$-algebras and $h:A\to B$ a
morphism. For any $\Delta$-set $D\mathrel{\subseteq}A$, its image $\\{h(a)\mid
a\in D\\}\mathrel{\subseteq}B$ is a $\Delta$-set in $B$, and
$\displaystyle h(\bigvee D)$ $\displaystyle=\bigvee\\{h(a)\mid a\in D\\}.$
###### Proof 3.3.
Let $t\in\Delta A$ such that $D=\\{\alpha(s)\mid s\ll t\\}$. We must show that
$\\{h(\alpha(s))\mid s\ll t\\}$ is a $\Delta$-set in $B$ and
$\displaystyle h(\alpha(t))$ $\displaystyle=\bigvee\\{h(\alpha(s))\mid s\ll
t\\}.$
Since $h$ commutes with the structure maps,
$\displaystyle\\{h(\alpha(s))\mid s\ll t\\}$ $\displaystyle=\\{\beta(\Delta
h(s))\mid s\ll t\\}=\\{\beta(u)\mid u\ll\Delta h(t)\\},$
and this is a $\Delta$-set in $B$. Moreover,
$\displaystyle h(\alpha(t))$ $\displaystyle=\beta(\Delta
h(t))=\bigvee\\{\beta(u)\mid u\ll\Delta h(t)\\}=\bigvee\\{h(\alpha(s))\mid
s\ll t\\}.$
If $\Delta=\mathsf{T}$, then a $\Delta$-algebra is just an ordered algebra. If
$\Delta X$ is the collection of all regular trees over $X$, then a
$\Delta$-algebra is an ordered regular algebra [10, 11]. If $\Delta X$ is the
set of all regular trees of order $n$, $n\geq 0$, then a $\Delta$-algebra is
an $n$-regular (or $n$-rational) algebra of [6].
We have defined $\Delta$-algebras $A$ in terms of suprema of $\Delta$-sets in
$A$. However, one could generalize the notion of $\Delta$-set to include any
set of terms with a supremum in $\Delta A$.
Suppose that $(A,\alpha)$ is a $\Delta$-algebra. A subset
$E\mathrel{\subseteq}\Delta A$ is said to be _consistent_ if any two elements
of $E$, considered as labeled trees, agree wherever both are defined. That is,
if $t_{1},t_{2}\in E$, and $t_{1}$ and $t_{2}$ both have a node at position
$x$ in the tree, then the element of $\Sigma$ labeling $t_{1}$ at $x$ is the
same as the element of $\Sigma$ labeling $t_{2}$ at $x$. Any consistent set of
terms has a unique supremum.
We say that a set $D\mathrel{\subseteq}A$ is an _extended $\Delta$-set_ if
there is a consistent set $E\mathrel{\subseteq}\Delta A$ such that $\bigvee
E\in\Delta A$ and $D=\\{\alpha(t)\mid t\in E\\}$.
We state the following theorem without proof, but it is not difficult to prove
using the same technique as in Theorems 2 and 3.
###### Theorem 4.
The algebraic operations of any $\Delta$-algebra and all $\Delta$-algebra
morphisms preserve suprema of extended $\Delta$-sets.
As every $\Delta$-set is an extended $\Delta$-set, we can replace the
definition of $\Delta$-regular algebra by the stronger property that suprema
of all extended $\Delta$-sets exist and are preserved by the algebraic
operations.
## 4\. Main Result
We can introduce varieties of $\Delta$-algebras defined by a set of
inequalities between finite partial terms in $\mathsf{T}X_{\omega}$ in the
expected way.
Suppose that ${\mathcal{V}}_{\omega}$ is a variety of $\omega$-continuous
algebras defined by a set of inequalities between finite partial terms. Let
${\mathcal{V}}_{r}$ denote the variety of all $\Delta$-algebras defined by the
same set of inequalities. Let $F_{r}(X)$ denote the free ordered algebra on
generators $X$ in ${\mathcal{V}}_{r}$. Let $F_{\omega}(X)$ denote the free
$\omega$-continuous algebra on generators $X$ in ${\mathcal{V}}_{\omega}$ (see
[8]). Let $R(X)$ denote the ordered $\Delta$-subalgebra of $F_{\omega}(X)$
generated by $X$. The elements of $R(X)$ are all elements of $F_{\omega}(X)$
denoted by the terms in $\Delta X$:
$\displaystyle R(X)$ $\displaystyle=\\{t^{F_{\omega}(X)}\mid t\in\Delta X\\}.$
###### Theorem 5.
$R(X)$ is the free $\Delta$-algebra in ${\mathcal{V}}_{r}$ on generators $X$.
It is isomorphic to the completion of $F_{r}(X)$ by $\Delta$-ideals. (A
_$\Delta$ -ideal_ is simply the down-closure ${B}\kern-2.0pt\downarrow$ of a
$\Delta$-set $B$.)
###### Proof 4.1.
Since $R(X)$ embeds in $F_{\omega}(X)$, we have that
$R(X)\in{\mathcal{V}}_{r}$ by Birkhoff’s theorem.
Suppose that $A$ is an ordered $\Delta$-regular algebra and $h:X\to A$.
Consider the completion $I_{\omega}(A)$ of $A$ by $\omega$-ideals, which is an
$\omega$-continuous algebra in ${\mathcal{V}}_{\omega}$ [3, 8]. The function
$\eta^{D}_{A}:a\mapsto{\\{a\\}}\kern-2.0pt\downarrow$ embeds $A$ in
$I_{\omega}(A)$, so we can view $h$ as a function $X\to I_{\omega}(A)$. Since
$I_{\omega}(A)\in{\mathcal{V}}_{\omega}$, there is a unique extension of $h$
to a morphism $\widehat{h}:F_{\omega}(X)\to I_{\omega}(A)$ of
$\omega$-continuous algebras. The restriction of $\widehat{h}$ to $R(X)$ is an
ordered $\Delta$-regular algebra morphism $R(X)\to I_{\omega}(A)$. Let $R(A)$
denote the image of $R(X)$ under $\widehat{h}$. Then $R(A)$ is a
$\Delta$-regular subalgebra of $I_{\omega}(A)$ generated by $A$, and $R(A)$ is
in ${\mathcal{V}}_{r}$ because it is a homomorphic image of $R(X)$, which is
in ${\mathcal{V}}_{r}$.
Finally, let $\bigvee$ be the supremum operator on $R(A)$. The elements of
$R(A)$ are $\omega$-ideals generated by $\Delta$-sets, therefore the suprema
exist. The map $\bigvee$, being a component of the counit of the completion
construction, is a morphism of $\Delta$-algebras; that is, a strict monotone
function that preserves suprema of $\Delta$-sets and commutes with the
algebraic operations.
Now the restriction of $\widehat{h}$ composed with $\bigvee$ is the required
unique extension of $h$ to a morphism $R(X)\to A$ of $\Delta$-algebras.
We can also construct $R(X)$ directly as the free completion of $F_{r}(X)$ by
$\Delta$-ideals. Here we use the general construction of §2. Let
$F=\mathsf{T}$, the finite term monad for the signature $\Sigma$ and $D$ the
monad of $\Delta$-ideals. The algebra $F_{r}(X)$ is an $F$-algebra. We must
check that the monad $D$ satisfies the condition (4); but this is almost
automatic, as $D$ is a submonad of the $\omega$-ideal monad constructed at the end of §2.
The free completion is
$(D(F_{r}(X)),D\alpha\circ\lambda_{F_{r}(X)},\mu^{D}_{F_{r}(X)})$, where
$\alpha$ is the structure map of $F_{r}(X)$. It is the free $\Delta$-algebra
on generators $X$. As $R(X)$ is also a $\Delta$-algebra, there is a unique
$\Delta$-algebra morphism $g$ from $D(F_{r}(X))$ to $R(X)$ mapping $x\in X$ to
$x$. Since both algebras are generated by $X$, $g$ is surjective. Similarly,
since $D(F_{r}(X))$ is a $\Delta$-algebra, as noted above, there is a
$\Delta$-algebra morphism $h$ from $R(X)$ to $D(F_{r}(X))$ mapping $x\in X$ to
$x$. The composition $h\circ g$ is the unique morphism on $D(F_{r}(X))$
extending the identity on $X$, thus must be the identity morphism. Thus $R(X)$
and $D(F_{r}(X))$ are isomorphic.
The free $\omega$-continuous semiring on generators $X$ is the semiring
${\mathbb{N}}_{\infty}\llangle X^{*}\rrangle$ of power series with countable
support and coefficients in ${\mathbb{N}}_{\infty}$, the semiring obtained from
the natural numbers by adding a point at infinity. It follows that the free
regular semiring over $X$ is the semiring ${\mathbb{N}}_{\infty}^{\rm
alg}\llangle X^{*}\rrangle$ of all algebraic elements of
${\mathbb{N}}_{\infty}\llangle X^{*}\rrangle$.
The free $\omega$-continuous idempotent semiring on $X$ is the semiring of all
at most countably infinite languages in $X^{*}$. The $\omega$-continuous
idempotent semirings are the same as the closed semirings used in the study of
shortest-path algorithms [2] (modulo minor corrections to the axiomatization).
The free “1-regular” idempotent semiring over $X$ is the semiring of OI-macro
languages over $X$.
The free star-continuous Kleene algebra on generators $X$ is the family of
regular subsets of $X^{*}$ [9]. This is the free $\Delta$-algebra for $\Delta$
the set of regular coterms defined by finite systems of linear affine
equations in the variety of idempotent semirings. The family of context-free
languages is the free $\Delta$-algebra for $\Delta$ the set of regular coterms
defined by finite systems of algebraic equations in the variety of idempotent
semirings. The free $\omega$-continuous idempotent semiring is the completion
of the free Kleene algebra by countably generated star-ideals [9].
A final example is given by the category of iteration theories [4], which is
isomorphic to the category of Eilenberg-Moore algebras for the rational-tree
monad on the category of signatures [1].
## References
* [1] J. Adámek, S. Milius, and J. Velebil. What are iteration theories? In L. Kučera and A. Kučera, editors, Mathematical Foundations of Computer Science (MFCS 2007), pages 240–252, Berlin, Heidelberg, August 2007. Springer.
* [2] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison Wesley, 1975.
* [3] S. L. Bloom. Varieties of ordered algebras. J. Comp. Syst. Sci., 13:200–212, 1976.
* [4] S. L. Bloom and Z. Ésik. Iteration Theories. Springer, 1993.
* [5] B. Courcelle. Fundamental properties of infinite trees. Theor. Comput. Sci., 25:95–169, 1983.
* [6] J. H. Gallier. The semantics of recursive programs with function parameters of finite types: $n$-rational algebras and logic of inequalities.
* [7] J. A. Goguen, J. W. Thatcher, E. G. Wagner, and J. B. Wright. Initial algebra semantics and continuous algebras. J. ACM, 24:68–95, 1977.
* [8] I. Guessarian. Algebraic semantics. LNCS, 99, 1981.
* [9] D. Kozen. On Kleene algebras and closed semirings. In Rovan, editor, Proc. Mathematical Foundations of Computer Science (MFCS 1990), volume 452 of Lecture Notes in Computer Science, pages 26–47, Banska-Bystrica, Slovakia, 1990. Springer-Verlag.
* [10] J. Tiuryn. Fixed points and algebras with infinitely long expressions I. Fundamenta Informaticae, 2:103–128, 1978.
* [11] J. Tiuryn. Fixed points and algebras with infinitely long expressions II. Fundamenta Informaticae, 2:317–335, 1979.
# SEEK: model extraction attack against hybrid secure inference protocols
Si Chen<EMAIL_ADDRESS>Junfeng Fan<EMAIL_ADDRESS>
###### Abstract
Security concerns about a machine learning model used in a prediction-as-a-
service include the privacy of the model, the query and the result. Secure
inference solutions based on homomorphic encryption (HE) and/or multiparty
computation (MPC) have been developed to protect all the sensitive
information. One of the most efficient types of solution utilizes HE for linear
layers and MPC for non-linear layers. However, for such hybrid protocols with
semi-honest security, an adversary can malleate the intermediate features in
the inference process, and extract model information more effectively than
methods against inference services in plaintext. In this paper, we propose
SEEK, a general extraction method for hybrid secure inference services that
output only class labels. This method can extract each layer of the target
model independently, and is not affected by the depth of the model. For
ResNet-18, SEEK can extract a parameter with less than 50 queries on average,
with average error less than $0.03\%$.
## 1 Introduction
For a machine learning model used in a prediction-as-a-service (PaaS) setting,
the model provider usually is concerned about the privacy of the deployed
model. Revealing the model information will enable a user to develop his own
model. In addition, the model information can be reverse-engineered to reveal
its training data [1, 2], or enable an attacker to fabricate adversarial
samples [3]. On the other hand, users of PaaS may have privacy concerns about
the input data, and do not want to upload the input in plaintext to a service
hosted by the model provider. Thus neither the server side nor the client side
is a satisfactory place to perform the model inference computation.
To solve this dilemma, secure inference protocols have been proposed, which enable
the client to query a model deployed on a remote server, while preventing the
client and server from learning any additional information. Secure inference
solutions are based on homomorphic encryption (HE), multiparty computation
(MPC), or both families of techniques. Solutions based solely on homomorphic
encryption suffer from limitations of practical FHE schemes. The levelled
and relatively efficient FHE schemes, including BFV, BGV, and CKKS, support only a
fixed number of multiplications without bootstrapping. By replacing the
activation functions with polynomial functions, the levelled FHE schemes can
compute both the linear and non-linear layers, but cannot support the
multiplication depth needed by a deep neural network with more than 3 or 4
layers [4, 5]. On the other hand, the most efficient secure inference protocols
based on MPC either use garbled circuits and generally incur higher
communication cost [6, 7, 8], or require three non-colluding parties [9, 10,
11], which is an additional requirement not readily satisfied in practice.
To perform secure inference for deep neural networks, while utilizing the
efficiency of levelled FHE schemes, hybrid solutions based on FHE and MPC
emerged [12, 13, 14, 15, 16]. The linear part of a neural network, which
accounts for the majority of the computation cost, is processed by an FHE scheme,
while the non-linear part is processed by an MPC scheme. Between linear and
non-linear layers, a pair of protocols is performed to transfer the internal
features between encrypted form and secret-shared form.
However, these hybrid secure inference solutions assume semi-honest
participants, an assumption that is not guaranteed in real scenarios. We observe
that a malicious client can secretly shift the internal features during
inference and observe the effect on the final output of the model. Exploiting
this additional opportunity to change the intermediate data, in this paper we
propose a general model extraction method called SEEK (Safe-Error Extraction
attacK), with which the client can extract the model parameters more
effectively than extraction attacks on models served without secure inference.
## 2 Related Works
A model extraction attack method attempts to retrieve information about a
remotely-deployed model, and consequently copy the model parameters, mimic the
model’s functionality, or infer information about its training data. For a
classification model, the target inference service may return class labels,
top-$k$ probabilities, logits (or equivalently, all class probabilities), or
even some intermediate features and/or gradients, among which class labels
contain minimal information, leading to the most secure setup.
Most existing model extraction attacks [17, 18, 19, 20] target traditional
model inference service in plaintext, while [21] and this work target
encrypted model inference service. Extraction methods also differ in their
objectives. We follow the taxonomy made in [19], which categorized the
extraction objectives into the following types:
* •
Exact Extraction: extract all parameters of the target model. This objective
is not possible for plaintext inference service, due the model’s inherent
symmetries. We will show it can be efficiently achieved for encrypted model
inference service.
* •
Functionally Equivalent Extraction: construct a model such that its output is
identical with that of the target model. The extracted model has the same
structure as the target model, and the same paramters up to a symmetry
transformation. This is the highest possible objective against a plaintext
inference service.
* •
Fidelity Extraction: For some input data distribution $\mathcal{D}$ and some
goal similarity function $S(\cdot,\cdot)$, Fidelity Extraction aims to
construct a model $\hat{O}$, such that
$\textrm{Pr}_{x\sim\mathcal{D}}[S(\hat{O}(x),O(x))]$ is maximized. Typically,
Fidelity Extraction only guarantees that the outputs from the constructed model and
the target model are similar enough on some test dataset.
* •
Task Accuracy Extraction: construct a model to match or exceed the accuracy of
the target model.
Learning-based methods access the target model to generate a training dataset,
with which a substitute model is trained. Typically, learning-based methods do
not attempt to extract individual parameters, resulting in Fidelity Extraction
as the objective, and are generally query-efficient. In [18], in order to find
the decision boundaries between classes efficiently, an iterative training
algorithm is proposed, which uses the substitute model to create samples close
to decision boundaries. In [19], the authors leveraged several recent
optimizations in training, including unlabelled training, distillation,
rotation loss, and MixMatch, to be able to train a substitute model with much
fewer queries than the size of the original training set.
Direct recovery methods, which aim at Functionally Equivalent Extraction, treat
the target model as a function explicitly expressed by the parameters, and
attempt to solve for the parameters given model query inputs and outputs. In
[19], for neural networks which use ReLU activation and return logits, an
extraction algorithm is devised by solving the parameters in the model, which
is a piecewise-linear function. In [20], the authors utilized methodologies
from cryptanalysis, and carefully improved the differential extraction method
in [19], by treating more efficiently the issues arising from larger depths
and numerical errors.
In [21], the target inference service is performed with the hybrid MPC-HE
scheme, and is assumed to return logits. The extraction method shifts the
features so that the inference becomes a linear system, whose parameters can be
solved with enough query inputs and outputs.
Features of the related works are summarized in table 1. In comparison, the
proposed method SEEK considers the most restrictive setup in which only class
labels are returned. Additionally, by utilizing the “safe-error attack” method
[22, 23], SEEK does not suffer from the numerical error induced by very deep
networks, and applies to models with an arbitrary number of layers.
| Method | Extraction method | Extraction target | Model output | Highest model depth | # model calls per parameter |
| --- | --- | --- | --- | --- | --- |
| Tramèr et al. [17] | Learning | Functional Equivalence | logits | 2 | $0.5\sim 5$ |
| Tramèr et al. [17] | Direct Recovery | Functional Equivalence | labels | 1 | $20\sim 50$ |
| Tramèr et al. [17] | Learning | Fidelity | labels | 3 | $\sim 100$ |
| Papernot et al. [18] | Learning | Fidelity | labels | unlimited | $<1$ |
| Jagielski et al. [19] | Learning | Fidelity | labels | unlimited | $\ll 1$ |
| Jagielski et al. [19] | Direct Recovery | Functional Equivalence | logits | 2 | $\sim 10$ |
| Carlini et al. [20] | Direct Recovery | Functional Equivalence | logits | 4 | $\sim 200$ |
| MUSE [21] | MPC Malleation | Exact Extraction | logits | 10 | $1/n_{c}$ |
| SEEK | MPC Malleation | Exact Extraction | labels | unlimited | $\sim 50$ |

Table 1: Features of the extraction methods against neural network and logistic regression models.
It is well-known that the MPC protocol malleation attacks as in [21] and this
work can be mitigated by using a protocol with malicious security. Recently, a
line of work in the client-malicious model has been proposed [21, 24, 25]. These
protocols are designed based on authenticated shares, and are closing the gap
in computational and communication efficiency with respect to the protocols
with semi-honest security.
## 3 Secure inference setup
We consider a general deep convolutional neural network (CNN), trained for a
classification task. Layers in a CNN can be categorized into linear layers and
non-linear layers. Linear layers include convolution layers and fully-connected
(FC) layers, as well as normalization and average-pooling layers. Addition
and concatenation layers can be viewed as linear layers as well. Consecutive
linear layers can be merged together to form a single linear layer. A linear
layer indexed with $\ell$ in general can be expressed as
$y_{\ell}=w_{\ell}\cdot x_{\ell}+b_{\ell},$ (1)
where $x_{\ell}$ is the input feature map, $y_{\ell}$ is the output feature
map, $w_{\ell}$ is the weight parameter, and $b_{\ell}$ is the bias parameter.
In this formalism, for a convolution layer, the weight parameters are sparse
due to localized kernel, and the values are shared across spatial locations.
Non-linear layers include activation layers, which perform some element-wise
nonlinear function, as well as max-pooling, softmax, and argmax layers. In
this paper, for activation layers, we use the most common ReLU activation. For
our purpose, two adjacent non-linear layers will be viewed as a single non-
linear layer. Typically for a CNN, a non-linear layer indexed with $\ell$ is
composed of an activation layer
$z_{\ell}=\textrm{ReLU}(y_{\ell}),$ (2)
or composed of a max-pooling layer followed by an activation layer,
$z_{\ell}=\textrm{ReLU}(\textrm{maxpool}(y_{\ell})),$ (3)
or, for the last layer, composed of an argmax layer,
$z_{\ell}=\textrm{argmax}(y_{\ell}),$ (4)
where $y_{\ell}$ is the input feature map, and $z_{\ell}$ is the output
feature map. We do not restrict the network structure to be linear, and
structures such as skip connection and Inception are allowed.
With hybrid secure inference solutions, the client encrypts its input $x_{0}$
into $[\\![{x_{0}}]\\!]$, and sends $[\\![{x_{0}}]\\!]$ to the server. For
each linear layer as in equation (1), the server computes
$[\\![{y_{\ell}}]\\!]=w_{\ell}\cdot[\\![{x_{\ell}}]\\!]+b_{\ell}.$
To perform a non-linear layer as in equation (2), (3), and (4), the server
generates a random mask $r^{y}_{\ell}$, computes
$[\\![{y_{\ell}}]\\!]-r^{y}_{\ell}=[\\![{y_{\ell}-r^{y}_{\ell}}]\\!]$, and
send this encrypted value to the client. The client decrypts to get
$y_{\ell}-r^{y}_{\ell}$. Now the two parties hold secret shares of the
intermediate value $y_{\ell}$. The server and the client invoke a two-party
MPC protocol corresponding to the non-linear layer. As the result, the client
holds $z_{\ell}-r^{z}_{\ell}$, and the server holds $r^{z}_{\ell}$. To
transform $z_{\ell}$ back to encrypted form, the client encrypts
$z_{\ell}-r^{z}_{\ell}$ and sends $[\\![{z_{\ell}-r^{z}_{\ell}}]\\!]$ to the
server, who can compute $[\\![{z_{\ell}}]\\!]$ and proceed to the next layer.
After all layers are processed, the client and the server run another MPC
protocol to compute equation (4), and the client reconstructs the shares to
obtain the predicted class label $c$.
Security of the hybrid protocol guarantees the privacy of input data,
intermediate features, final result, as well as the model parameters, if the
two parties follow the semi-honest model. However, we observe that the client
can add arbitrary shifts to the secret shares in this protocol, and semantic
security of the protocol ensures the server cannot detect the shift. Instead
of using $y_{\ell}-r^{y}_{\ell}$ as input of the MPC calculation, the client
can change it to $y_{\ell}-r^{y}_{\ell}+\delta y_{\ell}$, effectively changing
the underlying value from $y_{\ell}$ to $y_{\ell}+\delta y_{\ell}$. Similarly,
the client can change $z_{\ell}-r^{z}_{\ell}$ to $z_{\ell}-r^{z}_{\ell}+\delta
z_{\ell}$, effectively changing the underlying value from $z_{\ell}$ to
$z_{\ell}+\delta z_{\ell}$.
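To make the malleation concrete, the following minimal sketch (an illustration we add here; the toy masking over the reals and all variable names are our own simplification of the additive-share step, not the actual protocol implementation) shows how a shift of the client's share moves the reconstructed value by $\delta$ while remaining invisible to the server:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy additive secret sharing (real protocols share over a finite ring).
def share(y, rng):
    r = rng.normal(size=y.shape)   # server-side random mask r^y
    return y - r, r                # (client share, server share)

def reconstruct(client_share, server_share):
    return client_share + server_share

y = np.array([0.7, -1.2, 3.4])     # intermediate feature y_l (unknown to the client)
client_share, server_share = share(y, rng)

delta = np.array([0.0, 5.0, 0.0])  # malicious shift chosen by the client
tampered = client_share + delta    # client feeds y_l - r^y + delta into the MPC step

# The server only ever sees masked values, so the tampering is undetectable,
# but the value entering the non-linear layer is now y_l + delta.
assert np.allclose(reconstruct(tampered, server_share), y + delta)
```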
Thus the client is capable of shifting all inputs and outputs of the non-
linear layers by arbitrary values, although the client is ignorant of the
values of the features. We consider how the client can exploit these
additional inputs, to efficiently extract the model parameters. From the
viewpoint of a malicious client, the model service can be formulated as
$\displaystyle c$ $\displaystyle=C_{\\{w\\}}(x_{0},\\{\delta y_{\ell},\delta
z_{\ell}:\ell\in N\\})$
$\displaystyle=\textrm{argmax}(F_{\\{w\\}}(x_{0},\\{\delta y_{\ell},\delta
z_{\ell}:\ell\in N\\})),$
where $\\{w\\}$ denotes all model parameters, $C$ is the functionality of the
classification model, which outputs the predicted class index, $N$ is the set
of non-linear layers, and $F$ is the output of the last linear layer.
To ease the notation, we use “named arguments” to denote the set of inputs as
$(x_{0},\\{\delta y_{\ell},\delta z_{\ell}:\ell\in
N\\})=V(\widetilde{x_{0}}=x_{0},\ldots,\widetilde{\delta y_{\ell}}=\delta
y_{\ell},\ldots,\widetilde{\delta z_{\ell^{\prime}}}=\delta
z_{\ell^{\prime}},\ldots)$. If an input is not present in the list of
arguments of $V$, it means that input is set to zero. For example,
$V(\widetilde{\delta y_{\ell}}=\delta
y_{\ell})=V(\widetilde{x_{0}}=0,\widetilde{\delta y_{\ell}}=\delta
y_{\ell},\widetilde{\delta y_{\ell^{\prime}}}=0,\widetilde{\delta
z_{\ell^{\prime\prime}}}=0)$, for all $\ell^{\prime}\neq\ell$ and all
$\ell^{\prime\prime}$. Two sets of inputs can be added with the natural
element-wise addition.
Because the output of the model is a discrete value, in order to extract model
parameters, the adversary needs to find the boundary between classes, where
for
$y_{\ell_{\textrm{last}}}=F_{\\{w\\}}(v),$
it satisfies
$y_{\ell_{\textrm{last}},c_{1}}=y_{\ell_{\textrm{last}},c_{2}}$ (5)
for two different classes $c_{1},c_{2}$, and
$y_{\ell_{\textrm{last}},c_{1}}>y_{\ell_{\textrm{last}},c^{\prime}}$ (6)
for all other $c^{\prime}$. In the following, we call a set of inputs
satisfying the above relations a critical point, and denote the corresponding
input variables with a $*$ subscript.
Starting from a set of inputs and changing feature values on a layer, a
critical point can always be found. Algorithm 1 shows a routine for finding a
critical point using bisection, in which all input variables are fixed except
$\delta y_{\ell}$.
## 4 Extraction of intermediate features
In this section, we present the concrete method of SEEK. The adversary is able
to shift all the inputs and outputs of the activation layers, and observe the
effect on the model output. One way to extract the parameters is to find the
space of critical points formed by shifting the intermediate features.
However, because the landscape of model output as a function of the shifts can
be very complicated, this method becomes intractable when the target layer is
far away from the output layer. Instead, starting from a critical point, the
proposed method will add a particular set of shifts, such that if the
corresponding feature satisfies a certain condition, the added shifts cancel
out and do not affect any other features. We can test the
criticality of the shifted input, and determine the value of the target
feature. In this way, we keep the effect of the shifts to a minimal level,
making this extraction method numerically stable and the errors in the
extracted parameters independent of each other. This extraction strategy is
conceptually similar to the safe-error attack [22, 23], a type of fault-injection
attack on security systems.
Input : A fixed set of inputs $v^{0}$, variable input layer index $\ell$, norm
$d$, and error threshold $\epsilon$
Output : $\delta y^{*}_{\ell}$
1 do
2 Randomly sample $\delta y^{1}_{\ell}$ and $\delta y^{2}_{\ell}$ with norm
$d$;
3 $c^{1}\leftarrow C\left(v^{0}+V(\widetilde{\delta y_{\ell}}=\delta
y^{1}_{\ell})\right)$, $c^{2}\leftarrow C\left(v^{0}+V(\widetilde{\delta
y_{\ell}}=\delta y^{2}_{\ell})\right)$;
4
5while _$c^{1}=c^{2}$_ ;
6while _$|\delta y^{2}_{\ell}-\delta y^{1}_{\ell}| >\epsilon$_ do
7 $\delta y^{3}_{\ell}\leftarrow(\delta y^{1}_{\ell}+\delta y^{2}_{\ell})/2$,
and normalize $\delta y^{3}_{\ell}$ with norm $d$;
8 $c^{3}\leftarrow C\left(v^{0}+V(\widetilde{\delta y_{\ell}}=\delta
y^{3}_{\ell})\right)$;
9 if _$c^{3}=c^{1}$_ then
10 $\delta y^{1}_{\ell}\leftarrow\delta y^{3}_{\ell}$;
11
12 else
13 $\delta y^{2}_{\ell}\leftarrow\delta y^{3}_{\ell}$, $c^{2}\leftarrow
c^{3}$;
14
15 end if
16
17 end while
_18_ return _$\delta y^{*}_{\ell}\leftarrow\delta y^{2}_{\ell}$ ;_
Algorithm 1 search_critical – Find a critical point by shifting $y_{\ell}$
from a given set of inputs.
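A compact Python sketch of the bisection in Algorithm 1 is given below (illustrative only; `query_class` stands for one call to the tampered inference oracle $C(\cdot)$ with the chosen shift applied on layer $\ell$, and is an assumed interface rather than part of the protocol):

```python
import numpy as np

def search_critical(query_class, base_inputs, layer, dim, d=1.0, eps=1e-6, rng=None):
    """Find a shift delta_y on `layer` (of size `dim`) such that base_inputs plus
    the shift lies on a decision boundary (a critical point), as in Algorithm 1."""
    rng = rng or np.random.default_rng()
    # Step 1: sample two shifts of norm d that produce different class labels.
    while True:
        d1 = rng.normal(size=dim); d1 *= d / np.linalg.norm(d1)
        d2 = rng.normal(size=dim); d2 *= d / np.linalg.norm(d2)
        c1 = query_class(base_inputs, layer, d1)
        if query_class(base_inputs, layer, d2) != c1:
            break
    # Step 2: bisect along the (renormalized) segment between d1 and d2.
    while np.linalg.norm(d2 - d1) > eps:
        d3 = (d1 + d2) / 2
        d3 *= d / np.linalg.norm(d3)
        if query_class(base_inputs, layer, d3) == c1:
            d1 = d3
        else:
            d2 = d3
    return d2
```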
### 4.1 Extraction of standalone ReLU layer inputs
In this subsection, we present the method to extract an input feature value of
a standalone ReLU activation as in equation (2).
Consider a critical point $v^{*}$. A target feature $y_{\ell,i}$, which is the
input of a standalone ReLU activation, takes the value $y^{*}_{\ell,i}$ from
the set of inputs $v^{*}$. If $y^{*}_{\ell,i}<0$, shifting it by a small
positive or any negative $\delta y_{\ell,i}$ will not affect the model output,
because $\textrm{ReLU}(y^{*}_{\ell,i}+\delta
y_{\ell,i})=\textrm{ReLU}(y^{*}_{\ell,i})=0$. In this case we add a positive
shift to $y_{\ell,i}$. If $\delta y_{\ell,i}$ is large enough such that
$y^{*}_{\ell,i}+\delta y_{\ell,i}>0$, which implies
$\textrm{ReLU}(y^{*}_{\ell,i}+\delta
y_{\ell,i})\neq\textrm{ReLU}(y^{*}_{\ell,i})$, the input is no longer a
critical point. We can test the criticality of the input while varying $\delta
y_{\ell,i}$. At the boundary between critical points and non-critical points,
$y^{*}_{\ell,i}=-\delta y_{\ell,i}$.
On the other hand, if $y^{*}_{\ell,i}>0$, subtracting a small positive or any
negative $\delta y_{\ell,i}$ from $y_{\ell,i}$, and at the same time adding
the same shift $\delta y_{\ell,i}$ to $z_{\ell,i}$, will not affect the model
output, because $\textrm{ReLU}(y^{*}_{\ell,i}-\delta y_{\ell,i})+\delta
y_{\ell,i}=\textrm{ReLU}(y^{*}_{\ell,i})=y^{*}_{\ell,i}$. If $\delta
y_{\ell,i}$ is large enough such that $y^{*}_{\ell,i}-\delta y_{\ell,i}<0$,
the input is no longer a critical point. We can test the criticality of the
input while varying $\delta y_{\ell,i}$. At the boundary between critical
points and non-critical points, $y^{*}_{\ell,i}=\delta y_{\ell,i}$.
To test the criticality of a set of inputs $v$, we use properties
(5) and (6). Suppose we start from a critical point $v^{*}$ at the boundary
between class $c_{1}$ and $c_{2}$, and add some shifts $\delta v$ to
$y_{\ell}^{*}$ and $z_{\ell}^{*}$. If the added shifts do not change any
feature values other than $y_{\ell}$ and $z_{\ell}$, then the set of inputs
$v=v^{*}+\delta v$ is also a critical point between class $c_{1}$ and $c_{2}$.
In this case, $v$ satisfies
$C\left(v+V(\widetilde{\delta
y_{\textrm{last},c_{1}}}=\epsilon)\right)=c_{1},$
and
$C\left(v+V(\widetilde{\delta
y_{\textrm{last},c_{2}}}=\epsilon)\right)=c_{2},$
where $\epsilon$ is a small positive value. If the added shifts affect other
feature values, then the above equations are not satisfied with overwhelming
probability.
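This criticality test can be sketched as follows (an illustrative helper; `query_class(v, last_layer_shift)` is an assumed oracle that runs the tampered inference with an extra shift added to $y_{\textrm{last}}$ and returns the class label):

```python
import numpy as np

def is_critical(query_class, v, c1, c2, num_classes, eps=1e-4):
    """v lies on the c1/c2 decision boundary iff nudging the c1 logit up by eps
    yields label c1 and nudging the c2 logit up by eps yields label c2."""
    bump_c1 = np.zeros(num_classes); bump_c1[c1] = eps
    bump_c2 = np.zeros(num_classes); bump_c2[c2] = eps
    return query_class(v, bump_c1) == c1 and query_class(v, bump_c2) == c2
```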
The algorithm for extracting an input feature of a standalone ReLU is shown in
algorithm 2.
Input : Input critical point $v^{*}$, target activation layer index $\ell$,
and target feature index $i$
Output : $y^{*}_{\ell,i}$
1 if _$v=v^{*}+V(\widetilde{\delta y_{\ell,i}}=-1)$ is critical_ then
// $y^{*}_{\ell,i}\leq 0$
2 For points of the form $v=v^{*}+V(\widetilde{\delta y_{\ell,i}}=\eta)$,
where $\eta\in[0,\infty)$, search for the boundary $\eta=\bar{\eta}$ between
critical points and non-critical points;
3 return _$y^{*}_{\ell,i}\leftarrow-\bar{\eta}$_ ;
4
5else
// $y^{*}_{\ell,i}>0$
6 For points of the form $v=v^{*}+V(\widetilde{\delta
y_{\ell,i}}=-\eta,\widetilde{\delta z_{\ell,i}}=\eta)$, where
$\eta\in[0,\infty)$, search for the boundary $\eta=\bar{\eta}$ between
critical points and non-critical points;
7 return _$y^{*}_{\ell,i}\leftarrow\bar{\eta}$_ ;
8
9 end if
Algorithm 2 extract_feature – Extraction of an intermediate feature value at a
critical point
### 4.2 Extraction of maxpool-ReLU layer inputs
In this subsection, we present the method to extract an input feature value of
a maxpool layer followed by a ReLU layer, as in equation (3).
The method is similar to the one in the previous subsection. Because the maxpool
layer maps multiple features into one feature, when adjusting one input
feature and one output feature, we need to find a way to suppress the effect
of the other related input features.
Assume the feature value to be extracted is $y_{\ell,i}$, and the set of
output features affected by shifting $y_{\ell,i}$ is $Z_{\ell,i}$. We can add
a large negative shift to all features in $y_{\ell}$, except $y_{\ell,i}$. As
a result, $z_{\ell}$ will be zero everywhere except features in $Z_{\ell,i}$,
which takes the value of $y_{\ell,i}$. Now we can shift the value of
$y_{\ell,i}$ and values in $Z_{\ell,i}$, observe the effect on the
criticality, and consequently extract $y_{\ell,i}$. The extraction process is
similar to algorithm 2, except that now the feature $z_{\ell,i}$ is replaced by a
set of features which should be shifted together. See figure 1 for an
illustration of this method.
Figure 1: An example of extraction method for a maxpool-ReLU layer input
feature. Features in the orange boxes are the target feature $y_{\ell,i}$ and
its related features $Z_{\ell,i}$ in the post-target layer, respectively. The
dashed rectangles are ranges of maxpool kernels. $C$ is a large positive
constant, added in order to suppress the effect of the other features in
$y_{\ell}$ on $Z_{\ell,i}$.
### 4.3 Extraction of linear layer parameters
The methods presented in the previous two subsections can extract all the
intermediate features of a critical point. Then for each linear layer as in
equation (1), with the input features and output features known, the formula
is a set of linear equations for $w_{\ell}$ and $b_{\ell}$. We can repeat this
process and collect enough equations to solve all the model parameters.
To further simplify the extraction process, we note that we can add a large
negative shift to the input of a ReLU activation, and ensure that its output is
zero. We can also add arbitrary shifts to the zeroed outputs. Thus we have a
means to accurately control the output values of ReLU activations. In equation
(1), by setting $x_{\ell}$ to be identically zero and extracting $y_{\ell}$,
the value of $b_{\ell}$ can be read off,
$b_{\ell,j}=y_{\ell,j},$
where $j$ is an output feature index. By setting all but one feature value of
$x_{\ell}$ zero and extracting $y_{\ell}$, the weight parameters can be
derived as,
$w_{\ell,j,i_{0}}=\frac{y_{\ell,j}-b_{\ell,j}}{x_{\ell,i_{0}}},$
where $i_{0}$ is the index of the non-zero $x_{\ell}$ value.
We observe that algorithm 2 can work on multiple target feature indices, if
all the target features at these indices have the same value. In practice,
running algorithm 2 on more indices improves the accuracy, because the
influence of a change of their value is more significant to the model output.
For convolutional layers, we can use their structure to create multiple target
features with the same value. For the bias, by setting $x_{\ell}$ to be
identically zero, all values on $y_{\ell,c_{\textrm{out}}}$ are equal to
$b_{\ell,c_{\textrm{out}}}$, where $c_{\textrm{out}}$ is an output channel
index. For the weight, instead of setting one feature value on $x_{\ell}$
nonzero, for an input channel index $c_{\textrm{in}}$, we can set
$x_{\ell,c_{\textrm{in}}}$ to be periodically nonzero, so that the target
kernel value is repeated on $y_{\ell}$.
The above extraction process is illustrated in figure 2. Algorithm 3 shows the
complete algorithm to extract the parameters in a convolution layer. For
clarity, we assume that the stride of the convolution is 1. The extraction
algorithm for a fully-connected layer is similar and is omitted for brevity; an
illustrative sketch for the fully-connected case is given below.
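The following sketch outlines the fully-connected case (our own illustration; `extract_feature` plays the role of Algorithm 2 at the current critical point, and `zero_layer_input` stands for forcing $x_{\ell}=0$, optionally with a single nonzero entry, via a large negative shift on the preceding ReLU followed by re-establishing a critical point):

```python
import numpy as np

def extract_fc_layer(extract_feature, zero_layer_input, n_in, n_out, delta=1.0):
    """Recover w (n_out x n_in) and b (n_out) of a fully-connected layer y = w @ x + b."""
    b = np.zeros(n_out)
    w = np.zeros((n_out, n_in))

    # Bias: with x identically zero, y_j = b_j.
    zero_layer_input()
    for j in range(n_out):
        b[j] = extract_feature(j)

    # Weights: with a single nonzero input x[i] = delta, y_j = w[j, i] * delta + b_j.
    for i in range(n_in):
        zero_layer_input(nonzero_index=i, value=delta)
        for j in range(n_out):
            w[j, i] = (extract_feature(j) - b[j]) / delta
    return w, b
```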
### 4.4 Extraction of last linear layer parameters
The extraction method described in the previous subsection applies to all the
linear layers, except the last fully-connected layer before the argmax layer.
Without a ReLU layer after the last fully-connected layer, the features
$y_{\textrm{last}}$ cannot be extracted with the extract_feature routine.
Instead, the following method can be applied. Assume the numbers of input and
output features of the last fully-connected layer are $n_{0}$ and $n_{1}$,
respectively. To extract $b_{\textrm{last}}$, we can add shifts to the layer
before the last layer, so that $x_{\textrm{last}}=0$. Then we search for
critical points by varying $\delta y_{\textrm{last}}$, which gives the
relation about $b_{\textrm{last}}$,
$b_{\textrm{last},c_{1}}+\delta
y^{*}_{\textrm{last},c_{1}}=b_{\textrm{last},c_{2}}+\delta
y^{*}_{\textrm{last},c_{2}}.$
$n_{1}-1$ such equations give the values of $b_{\textrm{last}}$ up to an
additive constant. Similarly for $w_{\textrm{last}}$, we can manipulate the
layer before the last layer, so that $x_{\textrm{last}}$ is zero except at
feature $i_{0}$. Then we search for critical points by varying $\delta
y_{\textrm{last}}$, which gives the relation about $w_{\textrm{last}}$,
$\displaystyle
w_{\textrm{last},c_{1},i_{0}}x_{\textrm{last},i_{0}}+b_{\textrm{last},c_{1}}+\delta
y^{*}_{\textrm{last},c_{1}}$
$\displaystyle=w_{\textrm{last},c_{2},i_{0}}x_{\textrm{last},i_{0}}+b_{\textrm{last},c_{2}}+\delta
y^{*}_{\textrm{last},c_{2}},$ $\displaystyle
w_{\textrm{last},c_{1},i_{0}}-w_{\textrm{last},c_{2},i_{0}}$
$\displaystyle=\frac{(b_{\textrm{last},c_{2}}-b_{\textrm{last},c_{1}})+\delta
y^{*}_{\textrm{last},c_{2}}-\delta
y^{*}_{\textrm{last},c_{1}}}{x_{\textrm{last},i_{0}}}.$
$(n_{1}-1)n_{0}$ such equations give the values of $w_{\textrm{last}}$, up to $n_{0}$ additive constants. In fact, because only the class label is observed, these are all the degrees of freedom of $w_{\textrm{last}}$ that can be determined.
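The following small sketch (ours) checks the algebra of the bias recovery: the critical-point shifts $\delta y^{*}$ are fabricated from a known $b_{\textrm{last}}$ purely to verify that the relations above determine the bias up to an additive constant; in the attack they come from the bisection search.

```python
import numpy as np

# Check of the last-layer bias recovery up to an additive constant,
# using the relation b_{c1} + dy*_{c1} = b_{c2} + dy*_{c2}.
rng = np.random.default_rng(1)
n1 = 5
b_last = rng.normal(size=n1)          # true (secret) last-layer bias

# At a critical point between class 0 and class c the two logits tie:
# b_0 + dy*_0 = b_c + dy*_c.  Fixing dy*_0 = 0 gives dy*_c = b_0 - b_c.
dy_star = b_last[0] - b_last

# Reconstruct b_last relative to class 0 (the additive constant is lost).
b_hat = -dy_star                      # equals b_last - b_last[0]
assert np.allclose(b_hat - b_hat[0], b_last - b_last[0])
```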
Figure 2: An example of the convolution layer extraction method, as shown in
algorithm 3. For simplicity, only one channel for each layer is shown. By
setting the pre-target feature $x_{\ell}$ to be nonzero with a period of
kernel size, the target feature layer $y_{\ell}$ is also periodic, and the
values in the orange boxes can be extracted together for better accuracy,
which reveal the values of target convolution layer parameters.
Input : Target convolution layer index $\ell$, numbers of output and input
channels $n_{\textrm{out}}$ and $n_{\textrm{in}}$, convolution kernel size
$(k_{h},k_{w})$, input feature size $(f_{h},f_{w})$
Output : Target convolution layer parameters $b_{\ell}$ and $w_{\ell}$
1 Get the layer index $\ell_{0}$ whose output is the input of layer $\ell$,
i.e., $z_{\ell_{0}}=x_{\ell}$;
2 Get the index of last non-linear layer $\ell_{\textrm{last}}$;
3 Add a large negative shift $-d$ to all features in $y_{\ell_{0}}$;
4 $\delta
z^{*}_{\ell_{\textrm{last}}}\leftarrow\textsf{search\\_critical}(V(\widetilde{x_{0}}=x_{0},\widetilde{\delta
y_{\ell_{0}}}=-d),\ell_{\textrm{last}})$;
5 $v^{*}\leftarrow V(\widetilde{x_{0}}=x_{0},\widetilde{\delta
y_{\ell_{0}}}=-d,\widetilde{\delta z_{\ell_{\textrm{last}}}}=\delta
z^{*}_{\ell_{\textrm{last}}})$;
6 for _$c_{\textrm{out}}\leftarrow 0$ to $n_{\textrm{out}}-1$_ do
7 $\beta\leftarrow\\{(c_{\textrm{out}},i,j):0\leq i<f_{h},0\leq j<f_{w}\\}$;
8
$b_{\ell,c_{\textrm{out}}}\leftarrow\textsf{extract\\_feature}(v^{*},\ell,\beta)$;
9
10 end for
11 $k_{h}^{\prime}\leftarrow(k_{h}-1)/2,k_{w}^{\prime}\leftarrow(k_{w}-1)/2$;
12 $\Delta\leftarrow(n_{\textrm{in}}\cdot k_{h}\cdot k_{w}/4)^{1/2}$;
13 for _$c_{\textrm{in}}\leftarrow 0$ to $n_{\textrm{in}}-1$_ do
14 Create a feature map $\alpha$ of size $(f_{h},f_{w})$ whose values are
$\alpha_{i,j}=\Delta\cdot\hat{\delta}((i-k_{h}^{\prime})\%k_{h})\cdot\hat{\delta}((j-k_{w}^{\prime})\%k_{w})$,
where $\hat{\delta}(\cdot)$ is the discrete delta function;
15 $\delta
z^{*}_{\ell_{\textrm{last}}}\leftarrow\textsf{search\\_critical}(V(\widetilde{x_{0}}=x_{0},\widetilde{\delta
y_{\ell_{0}}}=-d,\widetilde{\delta
x_{\ell,c_{\textrm{in}}}}=\alpha,\ell_{\textrm{last}})$;
16 $v^{*}\leftarrow V(\widetilde{x_{0}}=x_{0},\widetilde{\delta
y_{\ell_{0}}}=-d,\widetilde{\delta
x_{\ell,c_{\textrm{in}}}}=\alpha,\widetilde{\delta
z_{\ell_{\textrm{last}}}}=\delta z^{*}_{\ell_{\textrm{last}}})$;
17 for _$c_{\textrm{out}}\leftarrow 0$ to $n_{\textrm{out}}-1$_ do
18 for _$i\leftarrow 0$ to $k_{h}-1$_ do
19 for _$j\leftarrow 0$ to $k_{w}-1$_ do
20 $\beta\leftarrow\\{(c_{\textrm{out}},i^{\prime},j^{\prime}):0\leq
i^{\prime}<f_{h},(i^{\prime}-k_{h}+1+i)\%k_{h}=0,0\leq
j^{\prime}<f_{w},(j^{\prime}-k_{w}+1+j)\%k_{w}=0\\}$;
21
$y_{\ell,c_{\textrm{out}},i,j}\leftarrow\textsf{extract\\_feature}(v^{*},\ell,\beta)$;
22
$w_{\ell,c_{\textrm{out}},c_{\textrm{in}},i,j}\leftarrow(y_{\ell,c_{\textrm{out}},i,j}-b_{\ell,c_{\textrm{out}}})/\Delta$;
23
24 end for
25
26 end for
27
28 end for
29
30 end for
31 return _$b_{\ell}$ and $w_{\ell}$_;
Algorithm 3 Extraction of parameters in a convolution layer
## 5 Experiment
We test the proposed SEEK method on ResNet-18 [26], implemented in the latest
PyTorch [27] release. The model contains 11.7M parameters, and is trained for
the ImageNet classification task.
In ResNet-18, some of the linear layers have a single preceding layer, while
the layers immediately after the addition layers have two preceding layers. In
addition, some skip connections are identity connections, and some are down-
sampling connections, which have their own convolution weights. In all of
these cases, we can use the methods in the previous section to extract the
linear layers’ parameters. Figure 3 shows several extraction paths for
different cases in ResNet.
Figure 3: Examples of extraction paths for ResNet. A large negative value is
added to the grey layers, so that the feature values in the pre-target layers
(yellow) can be adjusted to some convenient values. A bisection search is
performed on the last feature layers (blue) to find a critical point. The
feature values of the post-target layers (green) are extracted, based on
properties of the succeeding non-linear layers. Then the parameters
of the target layers (red) are extracted.
We implemented the extraction algorithm, and experimentally tested its
performance. Figure 4 shows the average number of model calls required for
extracting each parameter, as well as the average relative error, for
different layers in ResNet-18. For each parameter, the average number of model
calls is $45.8$. The average relative error of bias is $6.68\times 10^{-6}$,
and the average relative error of weight is $4.35\times 10^{-5}$.
Figure 4: Result of the proposed extraction method on ResNet-18.
$N_{\textrm{bias}}$ ($N_{\textrm{weight}}$) is the average number of model calls for extracting a bias (weight) parameter. $e_{\textrm{bias}}$ ($e_{\textrm{weight}}$) is the average relative error of the extracted bias (weight) parameter.
As figure 4 shows, the average error of the weights tends to be larger for layers closer to the model output, except for the last FC layer. The reason for this phenomenon is that if the shift of the target feature is larger than its value (see algorithm 2), the output of the ReLU function $z=z^{*}+\delta z$ differs from its original value $z^{*}$, which in turn changes the final logits. However, for a layer closer to the model output, the relationship between $\delta z$ and the final logits $y_{\textrm{last},c_{1}}$ and $y_{\textrm{last},c_{2}}$ becomes simpler. In some rare cases, $\delta z$ affects $y_{\textrm{last},c_{1}}$ and $y_{\textrm{last},c_{2}}$ in approximately the same way within a small neighborhood. Within this small neighborhood of $\delta z$ values, algorithm 2 cannot distinguish the shift by the criticality test, resulting in a larger error. This issue can be mitigated by repeating the extraction multiple times with different initial critical points.
## 6 Conclusion and discussion
In this work, we proposed SEEK, a model extraction attack method against HE-
MPC hybrid inference service with semi-honest security, with the most
stringent assumption that the model outputs class labels only. Our method
makes use of the piecewise-linear property of the ReLU activation, and the
principle of safe-error attack, thus achieving an extraction process that can
accurately extract each layer’s parameters. As the method tests whether a
shift to the internal feature affects the criticality of the whole input, it
is not affected by the depth of the model, which can incur numerical issues
for other extraction methods. Furthermore, because the extraction of
parameters in a layer is not dependent on the extraction result of any other
layer, a distributed extraction attack is straightforward.
SEEK can be generalized to other secure inference protocols with semi-honest
security. In particular, if the ReLU activation function is replaced by other
piecewise-linear functions, such as ReLU6 or leaky ReLU, our method can be
applied in essentially the same manner. If the activation function is linear
only in part of the input range, such as the swish activation, we can also
manipulate the input so that it falls in the region of linear activation. For
secure inference of decision tree models, the general method of safe-error
attack is applicable, because the discrete nature of decision tree inference
makes it possible to change individual intermediate features and observe the
effect on the final output. We leave the security analysis of the case of
decision tree models for future work.
As demonstrated by the proposed extraction method, the capability of changing
all the intermediate features with arbitrary shifts is quite powerful, and it
is non-trivial to prevent such an attack. Shuffling the features in a layer before the MPC protocol only increases the difficulty of this attack by a constant factor. Model inference protocols with client-malicious security [21, 24, 25], albeit with significant communication and computation cost, provide a systematic countermeasure against our attack. Secure inference based on fully homomorphic encryption with bootstrapping [28, 29], or on garbled circuits [6, 7, 8], offers another direction of mitigation, in which the inference is processed in a constant number of communication rounds, so an adversary does not have the opportunity to malleate intermediate model features.
## References
* [1] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in _Proceedings of the 22nd ACM SIGSAC conference on computer and communications security_ , 2015, pp. 1322–1333.
* [2] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in _2017 IEEE symposium on security and privacy (SP)_. IEEE, 2017, pp. 3–18.
* [3] D. Lowd and C. Meek, “Adversarial learning,” in _Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining_ , 2005, pp. 641–647.
* [4] R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing, “Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy,” in _International conference on machine learning_. PMLR, 2016, pp. 201–210.
* [5] A. Brutzkus, R. Gilad-Bachrach, and O. Elisha, “Low latency privacy preserving inference,” in _International Conference on Machine Learning_. PMLR, 2019, pp. 812–821.
* [6] B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “Deepsecure: Scalable provably-secure deep learning,” in _Proceedings of the 55th annual design automation conference_ , 2018, pp. 1–6.
* [7] M. Ball, B. Carmer, T. Malkin, M. Rosulek, and N. Schimanski, “Garbled neural networks are practical,” _Cryptology ePrint Archive_ , 2019.
 * [8] M. S. Riazi, M. Samragh, H. Chen, K. Laine, K. Lauter, and F. Koushanfar, “XONN: XNOR-based oblivious deep neural network inference,” in _28th USENIX Security Symposium (USENIX Security 19)_ , 2019, pp. 1501–1518.
* [9] P. Mohassel and P. Rindal, “Aby3: A mixed protocol framework for machine learning,” in _Proceedings of the 2018 ACM SIGSAC conference on computer and communications security_ , 2018, pp. 35–52.
* [10] S. Wagh, D. Gupta, and N. Chandran, “Securenn: 3-party secure computation for neural network training.” _Proc. Priv. Enhancing Technol._ , vol. 2019, no. 3, pp. 26–49, 2019.
* [11] N. Kumar, M. Rathee, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma, “Cryptflow: Secure tensorflow inference,” in _2020 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2020, pp. 336–353.
 * [12] C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, “GAZELLE: A low latency framework for secure neural network inference,” in _27th USENIX Security Symposium (USENIX Security 18)_ , 2018, pp. 1651–1669.
* [13] F. Boemer, A. Costache, R. Cammarota, and C. Wierzynski, “ngraph-he2: A high-throughput framework for neural network inference on encrypted data,” in _Proceedings of the 7th ACM Workshop on Encrypted Computing & Applied Homomorphic Cryptography_, 2019, pp. 45–56.
* [14] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R. A. Popa, “Delphi: A cryptographic inference service for neural networks,” in _29th USENIX Security Symposium (USENIX Security 20)_ , 2020, pp. 2505–2522.
* [15] D. Rathee, M. Rathee, N. Kumar, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma, “Cryptflow2: Practical 2-party secure inference,” in _Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security_ , 2020, pp. 325–342.
* [16] Z. Huang, W.-j. Lu, C. Hong, and J. Ding, “Cheetah: Lean and fast secure two-party deep neural network inference.” _IACR Cryptol. ePrint Arch._ , vol. 2022, p. 207, 2022.
 * [17] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in _25th USENIX security symposium (USENIX Security 16)_ , 2016, pp. 601–618.
* [18] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in _Proceedings of the 2017 ACM on Asia conference on computer and communications security_ , 2017, pp. 506–519.
* [19] M. Jagielski, N. Carlini, D. Berthelot, A. Kurakin, and N. Papernot, “High accuracy and high fidelity extraction of neural networks,” in _29th USENIX Security Symposium (USENIX Security 20)_ , 2020, pp. 1345–1362.
* [20] N. Carlini, M. Jagielski, and I. Mironov, “Cryptanalytic extraction of neural network models,” in _Annual International Cryptology Conference_. Springer, 2020, pp. 189–218.
* [21] R. Lehmkuhl, P. Mishra, A. Srinivasan, and R. A. Popa, “Muse: Secure inference resilient to malicious clients,” in _30th USENIX Security Symposium (USENIX Security 21)_ , 2021, pp. 2201–2218.
* [22] S.-M. Yen and M. Joye, “Checking before output may not be enough against fault-based cryptanalysis,” _IEEE Transactions on computers_ , vol. 49, no. 9, pp. 967–970, 2000.
* [23] M. Joye and S.-M. Yen, “The montgomery powering ladder,” in _International workshop on cryptographic hardware and embedded systems_. Springer, 2002, pp. 291–302.
* [24] N. Chandran, D. Gupta, S. L. B. Obbattu, and A. Shah, “Simc: Ml inference secure against malicious clients at semi-honest cost,” _Cryptology ePrint Archive_ , 2021.
* [25] G. Xu, X. Han, T. Zhang, H. Li, and R. H. Deng, “Simc 2.0: Improved secure ml inference against malicious clients,” _arXiv preprint arXiv:2207.04637_ , 2022.
* [26] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [27] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in _Advances in Neural Information Processing Systems 32_ , H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024–8035. [Online]. Available: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
* [28] I. Chillotti, M. Joye, and P. Paillier, “Programmable bootstrapping enables efficient homomorphic inference of deep neural networks,” in _International Symposium on Cyber Security Cryptography and Machine Learning_. Springer, 2021, pp. 1–19.
* [29] J.-W. Lee, H. Kang, Y. Lee, W. Choi, J. Eom, M. Deryabin, E. Lee, J. Lee, D. Yoo, Y.-S. Kim _et al._ , “Privacy-preserving machine learning with fully homomorphic encryption for deep neural network,” _IEEE Access_ , vol. 10, pp. 30 039–30 054, 2022.
# Multiplicative complements II.
Anett Kocsis, Dávid Matolcsi, Csaba Sándor and György Tőtős

Eötvös Loránd University, Budapest, Hungary. Email: <EMAIL_ADDRESS>. Supported by the ÚNKP-21-1 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.

Eötvös Loránd University, Budapest, Hungary. Email: <EMAIL_ADDRESS>. Supported by the ÚNKP-21-1 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.

Department of Stochastics, Institute of Mathematics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary; Department of Computer Science and Information Theory, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary; MTA-BME Lendület Arithmetic Combinatorics Research Group, ELKH, Műegyetem rkp. 3., H-1111 Budapest, Hungary. Email: <EMAIL_ADDRESS>. This author was supported by the NKFIH Grant No. K129335.

Faculty of Mathematics and Computer Science, Babeş-Bolyai University.
###### Abstract
In this paper we prove that if $A$ and $B$ are infinite subsets of positive
integers such that every positive integer $n$ can be written as $n=ab$, $a\in
A$, $b\in B$, then $\displaystyle\lim_{x\to\infty}\frac{A(x)B(x)}{x}=\infty$.
We also prove several further results about such sets.
2010 Mathematics Subject Classification: 11B34, 11N25
Keywords and phrases: Additive complements; counting function;
## 1 Introduction
The set of nonnegative integers is denoted by $\mathbb{N}$. The counting
function of a set $A\subseteq\mathbb{N}$ is defined as
$A(x)=|A\cap\\{0,1,\dots,x\\}|$ for every $x\in\mathbb{N}$. Let
$A,B\subseteq\mathbb{N}$. The sets $A$ and $B$ are said to be additive
complements if every nonnegative integers $n$ can be written as $n=a+b$, $a\in
A$, $b\in B$. Clearly, if $A,B\subseteq\mathbb{N}$ are additive complements,
then $A(x)B(x)\geq x+1$ for every $x\in\mathbb{N}$, therefore
$\displaystyle\liminf_{x\to\infty}\frac{A(x)B(x)}{x}\geq 1$. In 1964,
answering a question of Hanani, Danzer [1] proved that this bound is sharp.
###### Theorem 1 (Danzer, 1964).
There exist infinite additive complements $A,B\subseteq\mathbb{N}$ such that
$\lim_{x\to\infty}\frac{A(x)B(x)}{x}=1.$
In [2] we introduced the concept of multiplicative complements. Let us denote
by $\mathbb{Z}^{+}$ the set of positive integers and let
$A_{i}\subseteq\mathbb{Z}^{+}$ for every $1\leq i\leq h$. The $h$-tuple
$(A_{1},\dots,A_{h})$ forms multiplicative complements of order $h$ if every positive integer $n$ can be written as $n=a_{1}\dots a_{h}$, $a_{i}\in
A_{i}$. For brevity, we will use the notation $MC_{h}$ for the set of
multiplicative complements of order $h$. Similar to the additive complements,
if $(A,B)\in MC_{2}$, then we have $A(x)B(x)\geq x$, therefore
$\liminf_{x\to\infty}\frac{A(x)B(x)}{x}\geq 1.$
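As a concrete toy illustration of these definitions (this example is ours and is not used in the proofs below), the sketch checks the defining property and the bound $A(x)B(x)\geq x$ for the classical pair given by $A$ the set of squarefree numbers and $B$ the set of perfect squares, which lies in $MC_{2}$ because every positive integer factors as a squarefree number times a square.

```python
def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

X = 200
A = [n for n in range(1, X + 1) if is_squarefree(n)]    # squarefree numbers
B = [r * r for r in range(1, int(X ** 0.5) + 1)]        # perfect squares

# Every n <= X is representable as a*b with a in A and b in B.
for n in range(1, X + 1):
    assert any(n % b == 0 and is_squarefree(n // b) for b in B if b <= n)

# Counting-function bound A(x)B(x) >= x.
for x in range(1, X + 1):
    Ax = sum(1 for a in A if a <= x)
    Bx = sum(1 for b in B if b <= x)
    assert Ax * Bx >= x
```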
We show that, in contrast to the additive complements,
$\lim_{x\to\infty}\frac{A(x)B(x)}{x}=\infty$
for every infinite $(A,B)\in MC_{2}$.
We proved in [2] the following statement:
###### Theorem 2.
For every $\varepsilon>0$, there exists infinite $(A,B)\in MC_{2}$ such that
$\liminf_{x\to\infty}\frac{\max\\{A(x),B(x)\\}}{\frac{x}{\log x}}\leq 0.5+\varepsilon.$
Clearly, for every $(A,B)\in MC_{2}$ we have $\log\min\\{A(x),B(x)\\}\leq\log x$. It follows that
###### Corollary 3.
For every $\varepsilon>0$, there exists infinite $(A,B)\in MC_{2}$ such that
$\liminf_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\leq 0.5+\varepsilon.$
We show that this bound can not be improved.
###### Theorem 4.
For every infinite $(A,B)\in MC_{2}$,
$\liminf_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}>0.5.$
If $|A|=\infty$ and $|B|=\infty$, then
$\displaystyle\lim_{x\to\infty}\frac{\min\\{A(x),B(x)\\}}{\log\min\\{A(x),B(x)\\}}=\infty$.
It follows that
###### Corollary 5.
For every infinite $(A,B)\in MC_{2}$,
$\lim_{x\to\infty}\frac{A(x)B(x)}{x}=\infty.$
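To spell out this deduction (a routine step left implicit here): for infinite $(A,B)\in MC_{2}$ and large $x$ we may write
$\frac{A(x)B(x)}{x}=\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\cdot\frac{\min\\{A(x),B(x)\\}}{\log\min\\{A(x),B(x)\\}},$
where the first factor stays above $0.5$ for all sufficiently large $x$ by Theorem 4, and the second factor tends to infinity, so the product tends to infinity.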
In Theorem 4 we show that for infinite $(A,B)\in MC_{2}$, the fraction $\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}$ exceeds $0.5$ if $x$ is large enough. The next theorem shows that this fraction can be arbitrarily large.
###### Theorem 6.
For every infinite $(A,B)\in MC_{2}$,
$\limsup_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}=\infty.$
We proved in [2] that
###### Theorem 7.
1. 1.
For every $(A,B)\in MC_{2}$,
$\limsup_{x\to\infty}\frac{\max\\{A(x),B(x)\\}}{\frac{x}{\sqrt{\log x}}}\geq\frac{1}{\sqrt{\pi}}.$
2. 2.
There exists an $(A,B)\in MC_{2}$ such that
$A(x)=\left(\frac{1}{\sqrt{\pi}}+o(1)\right)\frac{x}{\sqrt{\log x}}\quad\hbox{
and }\quad B(x)=\left(\frac{1}{\sqrt{\pi}}+o(1)\right)\frac{x}{\sqrt{\log
x}}.$
In the next three theorems we consider the function $\min\\{A(x),B(x)\\}$. It
is easy to see that for every $\varepsilon>0$, there exists an $(A,B)\in
MC_{2}$ such that $|B|<\infty$ and
$\displaystyle\limsup_{x\to\infty}\frac{A(x)}{x}<\varepsilon$. On the other
hand, if $|B|<\infty$ and $(A,B)\in MC_{2}$, then
$\displaystyle\liminf_{x\to\infty}\frac{A(x)}{x}>0$. Therefore, the natural
requirement is that $A(x)=o(x)$ and $B(x)=o(x)$.
###### Theorem 8.
Let $f(x)$ be a function such that $f(x)\to\infty$ as $x\to\infty$. Then there
exists infinite $(A,B)\in MC_{2}$ such that $A(x)=o(x)$, $B(x)=o(x)$ and
$\liminf_{x\to\infty}\frac{\min\\{A(x),B(x)\\}}{f(x)}=0.$
As a corollary, we get that in Corollary 5 the function $x$ cannot be replaced by any function $g(x)$ with $\frac{g(x)}{x}\to\infty$ as $x\to\infty$.
###### Corollary 9.
Let $g(x)$ be a function such that
$\displaystyle\lim_{x\to\infty}\frac{g(x)}{x}=\infty$. Then there exists an infinite $(A,B)\in MC_{2}$ such that
$\liminf_{x\to\infty}\frac{A(x)B(x)}{g(x)}=0.$
The following theorem shows that for $(A,B)\in MC_{2}$ with $B(x)=o(x)$, the function $\min\\{A(x),B(x)\\}$ cannot be too small.
###### Theorem 10.
Let us suppose that for some function $f(x)>0$, $x\geq 1$, the series
$\displaystyle\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}}$ converges. Then there is
no $(A,B)\in MC_{2}$ such that $A(x)=O(f(x))$ and $B(x)=o(x)$.
As a corollary, we get
###### Corollary 11.
Let $\varepsilon>0$. Then there is no infinite $(A,B)\in MC_{2}$, $B(x)=o(x)$
such that $A(x)=O\left(\frac{x}{(\log x)(\log\log x)^{1+\varepsilon}}\right)$.
If the function $f(x)$ satisfies some smoothness conditions and
$\displaystyle\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}}=\infty$, then we can find
$(A,B)\in MC_{2}$ such that $A(x)=O(f(x))$ and $B(x)=o(x)$ as
$x\to\infty$.
###### Theorem 12.
Let us suppose that the function $f(x)>0$ satisfies the following conditions
1. 1.
$f(x)$ is monotonically increasing for $x\geq 1$,
2. 2.
$f(x)=o(\frac{x}{\log x})$ as $x\to\infty$,
3. 3.
there exist $c_{1},c_{2}>0$ such that
$c_{1}\leq\frac{\frac{f(x_{2})}{f(x_{1})}}{\frac{x_{2}}{x_{1}}}\leq c_{2}$ for
$1\leq x_{1}\leq x_{2}\leq x_{1}^{2}$,
4. 4.
$\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}}=\infty$.
Then there exists $(A,B)\in MC_{2}$ such that $A(x)=O(f(x))$ and $B(x)=o(x)$.
As a corollary, we get
###### Corollary 13.
There exists $(A,B)\in MC_{2}$ such that $A(x)=O(\frac{x}{(\log x)(\log\log
x)})$ and $B(x)=o(x)$.
In the last theorem, we consider $(A,B)\in MC_{2}$ such that $|B|<\infty$.
###### Theorem 14.
1. 1.
There exists a $c_{1}>0$ such that for every $(A,B)\in MC_{2}$,
$2\leq|B|<\infty$, we have
$\frac{A(x)\log|B|}{x}\geq c_{1}$
for every $x\in\mathbb{Z}^{+}$.
2. 2.
There exists a constant $c_{2}$ such that for every $N\geq 3$, there exists
$(A,B)\in MC_{2}$, $|B|=N$ such that
$\frac{A(x)\log\log|B|}{x}\leq c_{2}$
for every $x\geq 10$.
We finally pose some open problems for further research. As a corollary of
Theorem 7, we get the following statement:
###### Corollary 15.
There exists $(A,B)\in MC_{2}$ such that
$\lim_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\sqrt{\log\min\\{A(x),B(x)\\}}}{x}=\frac{1}{\sqrt{\pi}}.$
Inspired by this, Theorem 6 may be sharpened as follows:
###### Problem 1.
Is it true that for any infinite $(A,B)\in MC_{2}$
$\limsup_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\sqrt{\log\min\\{A(x),B(x)\\}}}{x}>0?$
It follows from Theorem 14 that
$\frac{c_{1}}{\log k}\leq\inf_{B:|B|=k}\\{\inf_{A:(A,B)\in
MC_{2}}\\{\limsup_{x\to\infty}\frac{A(x)}{x}\\}\\}\leq\frac{c_{2}}{\log\log
k}.$
It would be interesting to determine the right magnitude of the above infimum.
###### Problem 2.
Find the magnitude of the function
$f(k)=\inf_{B:|B|=k}\\{\inf_{A:(A,B)\in
MC_{2}}\\{\limsup_{x\to\infty}\frac{A(x)}{x}\\}\\}.$
In Theorem 12 we have some smoothness assumptions. Is it true that we can omit
these conditions?
###### Problem 3.
Is it true that if $f(x)>0$, $x\geq 1$ is a monotonically increasing function
such that $\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}}=\infty$, then there exists an
$(A,B)\in MC_{2}$ such that $A(x)=O(f(x))$ and $B(x)=o(x)$?
## 2 Proofs
Throughout the paper the set of primes is denoted by $P$ and $p$ will always
denote a prime number.
###### Proof of Theorem 4.
The set $S\subseteq\mathbb{Z}^{+}$ is called a multiplicative basis of order 2
if every positive integer $n$ can be written as $n=ss^{\prime}$, where
$s,s^{\prime}\in S$. The set of multiplicative bases of order 2 is denoted by
$MB_{2}$. Pach and Sándor [3] proved that if $S\in MB_{2}$, then
$\displaystyle\liminf_{x\to\infty}\frac{S(x)}{\frac{x}{\log x}}>1$. For
$(A,B)\in MC_{2}$, we have $A\cup B\in MB_{2}$ and $(A\cup B)(x)\leq
A(x)+B(x)$. It follows that
$\liminf\limits_{x\to\infty}\frac{A(x)+B(x)}{\frac{x}{\log x}}>1,$
that is, there exists a $\delta>0$ such that
$\displaystyle\liminf\limits_{x\to\infty}\frac{A(x)+B(x)}{\frac{x}{\log
x}}\geq 1+\delta.$ (2.1)
Let $\beta>0$ be a constant such that
$\frac{1}{2+\delta}<\frac{1}{2}-\beta<\frac{1}{2+\frac{\delta}{2}}.$
In order to prove the theorem, we define a partition of $\mathbb{Z}^{+}$. Let
$H_{1}=\\{x\in\mathbb{Z}^{+}:\frac{\delta}{2}\frac{x}{\log
x}\leq\min\\{A(x),B(x)\\}\\}$
$H_{2}=\\{x\in\mathbb{Z}^{+}:\frac{x^{\frac{1}{2}-\beta}}{\log^{2}x}\leq\min\\{A(x),B(x)\\}<\frac{\delta}{2}\frac{x}{\log
x}\\}$
$H_{3}=\\{x\in\mathbb{Z}^{+}:\min\\{A(x),B(x)\\}<\frac{x^{\frac{1}{2}-\beta}}{\log^{2}x}\\}.$
Because of (2.1),
$\liminf_{x\in H_{1},x\to\infty}\frac{\max\\{A(x),B(x)\\}}{\frac{x}{\log
x}}\geq\frac{1}{2}+\frac{\delta}{2}$
It is easy to see from the definition of $H_{1}$ that
$\liminf_{x\in H_{1},x\to\infty}\frac{\log\min\\{A(x),B(x)\\}}{\log x}=1.$
Hence
$\liminf_{x\in
H_{1},x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\geq\frac{1}{2}+\frac{\delta}{2}.$
In the second case, it follows from (2.1) that
$\liminf_{x\in H_{2},x\to\infty}\frac{\max\\{A(x),B(x)\\}}{\frac{x}{\log
x}}\geq 1+\frac{\delta}{2}$
and it is easy to see from the definition of $H_{2}$ that
$\liminf_{x\in H_{2},x\to\infty}\frac{\log\min\\{A(x),B(x)\\}}{\log
x}\geq\frac{1}{2}-\beta.$
Hence
$\liminf_{x\in
H_{2},x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\geq\left(1+\frac{\delta}{2}\right)\left(\frac{1}{2}-\beta\right)>\frac{1}{2}.$
It remains to consider the case $x\in H_{3}$ as $x\to\infty$. In this case
$\min\\{A(x),B(x)\\}\log^{2}\min\\{A(x),B(x)\\}<x^{\frac{1}{2}-\beta}.$
Let us suppose that $\min\\{A(x),B(x)\\}=B(x)$ for a given $x\in H_{3}$. Then
$\frac{\log x}{\log(B(x)\log^{2}B(x))}\geq 2+\frac{\delta}{2}.$ (2.2)
Clearly,
$\begin{split}x&\leq\phantom{=}|\\{n\leq x:\exists(a,b)\in A\times B\colon
n=ab,b\geq B(x)\log^{2}B(x)\\}|+\\\ &\phantom{=}+|\\{n\leq x:\exists(a,b)\in
A\times B\colon n=ab,1<b<B(x)\log^{2}B(x)\\}|+\\\ &\phantom{=}+|\\{n\leq
x:\exists(a,b)\in A\times B\colon n=ab,b=1\\}|.\end{split}$ (2.3)
The first term in (2.3) can be estimated by
$\displaystyle|\\{n\leq x:\exists(a,b)\in A\times B\colon n=ab,b\geq
B(x)\log^{2}B(x)\\}|$ $\displaystyle\leq\sum_{B(x)\log^{2}B(x)\leq b\leq
x,b\in B}\frac{x}{b}$
$\displaystyle\leq\frac{x}{B(x)\log^{2}B(x)}B(x)\leq\frac{x}{\log^{2}B(x)}.$
The second term in (2.3) can be estimated by
$\displaystyle|\\{n\leq x:$ $\displaystyle\exists(a,b)\in A\times B\colon
n=ab,1<b<B(x)\log^{2}B(x)\\}|\leq$ $\displaystyle\leq|\\{n\leq x:\exists p\in
P\colon p\mid n,1<p<B(x)\log^{2}B(x)\\}|.$
The third term in (2.3) is
$|\\{n\leq x\mid\exists(a,b)\in A\times B\colon n=ab,b=1\\}|=A(x),$
and so
$\displaystyle x\leq\frac{x}{\log^{2}B(x)}+|\\{n\leq x:\exists p\in P\colon
p\mid n,1<p<B(x)\log^{2}B(x)\\}|+A(x)$ (2.4)
On the other hand,
$\begin{split}x=&1+|\\{n\leq x:\exists p\in P\colon p\mid
n,1<p<B(x)\log^{2}B(x)\\}|\\\ &+|\\{n\leq x\mid\forall p\in P\colon p\mid
n\implies p\geq B(x)\log^{2}B(x)\\}|.\end{split}$ (2.5)
It follows from (2.4) and (2.5) that
$|\\{n\leq x:\forall p\in P\colon p\mid n\implies p\geq
B(x)\log^{2}B(x)\\}|+1-\frac{x}{\log^{2}B(x)}\leq A(x).$ (2.6)
To estimate the first term we need the Buchstab function. The Buchstab
function is the unique continuous function
$\omega:[1,\infty[\to\mathbb{R}^{+}$ defined by the delay differential
equation
$\omega(u)=\frac{1}{u}\quad\hbox{for $1\leq u\leq 2$}$
$\frac{d}{du}(u\omega(u))=\omega(u-1)\quad\hbox{for $u\geq 2$.}$
In the second equation, the derivative at $u=2$ should be taken as $u$
approaches $2$ from the right. Denote by $\varphi(x,y)$ the number of positive
integers less than or equal to $x$ with no prime factor less than $y$. In
1970, Warlimont [4] proved the following remarkable result.
###### Theorem 16 (Warlimont, 1970).
Let $u_{0}>1$. There exists a $C_{0}=C_{0}(u_{0})$ such that if $y\geq 2$,
$\frac{\log x}{\log y}\geq u_{0}$ then
$|\varphi(x,y)-e^{\gamma}\omega(u)x\prod_{p<y}(1-\frac{1}{p})|\leq\frac{C_{0}x\prod_{p<y}\left(1-\frac{1}{p}\right)}{\log
x}.$
It is well known that the minimum of $\omega(u)$ is taken at $u=2$,
$\omega(2)=0.5$ and for every $\mu>0$, there exists an $\varepsilon>0$ such
that for $u\geq 2+\mu$, we have $\omega(u)\geq 0.5+\varepsilon$. Let us
suppose that $\omega(u)\geq 0.5+\varepsilon_{0}$ if $u\geq
2+\frac{\delta}{2}$. According to Warlimont’s theorem and (2.2) we get
that
$|\\{n\leq x\mid\forall p\in P\colon p\mid n\implies p\geq
B(x)\log^{2}B(x)\\}|=\varphi(x,B(x)\log^{2}B(x))\geq$
$e^{\gamma}\omega\left(\frac{\log
x}{\log(B(x)\log^{2}B(x))}\right)x\prod_{p<B(x)\log^{2}B(x)}(1-\frac{1}{p})-\frac{C_{0}x\prod_{p<B(x)\log^{2}B(x)}\left(1-\frac{1}{p}\right)}{\log
x}\geq$
$(0.5+\varepsilon_{0}+o(1))e^{\gamma}x\prod_{p<B(x)\log^{2}B(x)}(1-\frac{1}{p}),$
where $\gamma$ is the Euler-Mascheroni constant.
It is well-known that
$\prod_{p<y,\>p\in P}(1-\frac{1}{p})=(1+o(1))\frac{e^{-\gamma}}{\log y}$
as $y\to\infty$, so
$|\\{n\leq x\mid\forall p\in P\colon p\mid n\implies p\geq
B(x)\log^{2}B(x)\\}|\geq(0.5+\varepsilon_{0}+o(1))\frac{x}{\log\left(B(x)\log^{2}B(x)\right)}=$
$(0.5+\varepsilon_{0}+o(1))\frac{x}{\log B(x)}$
as $x\to\infty$. By (2.6),
$A(x)\geq 1-\frac{x}{\log^{2}B(x)}+(0.5+\varepsilon_{0}+o(1))\frac{x}{\log
B(x)}=(0.5+\varepsilon_{0}+o(1))\frac{x}{\log B(x)}.$
Hence we have
$(0.5+\varepsilon_{0}+o(1))x\leq A(x)\log B(x).$
It follows that
$\liminf_{x\in
H_{3},x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\geq
0.5+\varepsilon_{0}.$
This means that
$\liminf\limits_{x\to\infty}\frac{\max\\{A(x),B(x)\\}\log\min\\{A(x),B(x)\\}}{x}\geq\min\\{\frac{1}{2}+\frac{\delta}{2},(1+\frac{\delta}{2})(\frac{1}{2}-\beta),\frac{1}{2}+\varepsilon_{0}\\}>\frac{1}{2},$
which completes the proof.
∎
###### Proof of Theorem 6.
If $A(x)\neq o(x)$ or $B(x)\neq o(x)$, then $\max\\{A(x),B(x)\\}\neq o(x)$ and $\min\\{A(x),B(x)\\}\to\infty$, so the conclusion follows directly. Hence we may suppose that $A(x)=o(x)$ and $B(x)=o(x)$. It follows from Theorem 7 that
$\limsup_{x\to\infty}\frac{\max\\{A(x),B(x)\\}}{\frac{x}{\sqrt{\log
x}}}\geq\frac{1}{\sqrt{\pi}}.$ (2.7)
If $\displaystyle\liminf_{x\to\infty}\frac{\log\min\\{A(x),B(x)\\}}{\sqrt{\log
x}}=\infty$, then we are done. So we can suppose that there exists a constant
$D$ such that for infinitely many $x\in\mathbb{Z}^{+}$,
$\frac{\min\\{A(x),B(x)\\}}{e^{D\sqrt{\log x}}}<1.$
Without loss of generality, we can assume that
$\frac{B(x)}{e^{D\sqrt{\log x}}}<1$
for infinitely many $x\in\mathbb{Z}^{+}$.
If $A(x)=o(x)$, then there exist infinitely many $x\in\mathbb{Z}^{+}$ such
that $B(x)>\sqrt{x}$, otherwise for $B=\\{b_{1},b_{2},\dots\\}$,
$b_{1}<b_{2}<\dots$ there exists an $n_{0}$ such that for $n\geq n_{0}$, we
have $n=B(b_{n})\leq\sqrt{b_{n}}$, that is $b_{n}\geq n^{2}$. It follows that
$\sum_{b\in
B}\frac{1}{b}=\sum_{n=1}^{n_{0}-1}\frac{1}{b_{n}}+\sum_{n=n_{0}}^{\infty}\frac{1}{b_{n}}\leq\sum_{n=1}^{n_{0}-1}\frac{1}{b_{n}}+\sum_{n=n_{0}}^{\infty}\frac{1}{n^{2}}<\infty,$
therefore $\sum_{b\in B}\frac{1}{b}$ is convergent. Then there exists an $N_{1}$
such that $\sum_{b\geq N_{1},b\in B}\frac{1}{b}<\frac{1}{2}$. Clearly
$x\leq|\\{n:n\leq x,\exists b<N_{1},b\in B,a\in A,n=ab\\}|+|\\{n:n\leq x,\exists b\geq N_{1},b\in B,a\in A,n=ab\\}|\leq$ $\sum_{b<N_{1},b\in
B}A(\frac{x}{b})+\sum_{b\geq N_{1},b\in B}A(\frac{x}{b})\leq o(x)+\sum_{b\geq
N_{1},b\in B}\frac{x}{b}\leq(0.5+o(1))x,$
a contradiction.
So we may assume that there are infinitely many positive integers
$y_{1}<y_{2}<y_{3}<\ldots$ such that $0.5e^{D\sqrt{\log
y_{n}}}<B(y_{n})<e^{D\sqrt{\log y_{n}}}$ and $x_{1}<x_{2}<x_{3}<\dots$ such
that $0.5\sqrt{x_{n}}<B(x_{n})<\sqrt{x_{n}}$ with $e^{\sqrt{\log
x_{n}}}>y_{n}$ for every $n$.
We are going to estimate $A(x_{n})$. We need the following sets:
$C_{n}=\\{p:p>y_{n}\textrm{ and }\exists m\in B\cap\\{1,\dots,x_{n}\\}\textrm{
such that }p\textrm{ is the largest prime factor of }m\\},$
$D_{n}=\\{p:p<y_{n}\textrm{ and }p\in B\\},$
$E_{n}=\\{p:p<y_{n}\\}.$
It is easy to check that if $k$ is a positive integer such that $k<x_{n}$ and
$k=pq$, where $p$ and $q$ are primes, $p\in E_{n}\setminus D_{n}$, $q\notin
C_{n}$ and $q>y_{n}$, then $k$ must belong to the set $A$. Now we want to
estimate the number of such $k$’s. Clearly,
$A(x_{n})\geq\sum\limits_{p\in E_{n}\setminus
D_{n}}\left(\pi\left(\frac{x_{n}}{p}\right)-|C_{n}|-\pi(y_{n})\right)$
If $x_{n}$ is large enough, then for any $p\in E_{n}\setminus D_{n}$,
$\pi\left(\frac{x_{n}}{p}\right)=(1+o(1))\frac{x_{n}/p}{\log(x_{n}/p)}=(1+o(1))\frac{1}{p}\frac{x_{n}}{\log
x_{n}-\log p}>(1+o(1))\frac{1}{p}\frac{x_{n}}{\log
x_{n}}\geq\frac{1}{p}\frac{x_{n}}{2\log x_{n}}.$
Clearly,
$|C_{n}|\leq B(x_{n})<\sqrt{x_{n}},$
and
$\pi(y_{n})\leq y_{n}<e^{\sqrt{\log x_{n}}}.$
It follows that
$A(x_{n})>\sum\limits_{p\in E_{n}\setminus
D_{n}}\left(\frac{1}{p}\frac{x_{n}}{2\log x_{n}}-\sqrt{x_{n}}-e^{\sqrt{\log
x_{n}}}\right)>\frac{x_{n}}{3\log x_{n}}\sum\limits_{p\in E_{n}\setminus
D_{n}}\frac{1}{p}$
if $n$ is large enough. It is well known that
$\sum\limits_{p<N}\frac{1}{p}=\log\log N+O(1)$ and
$\sum\limits_{i=1}^{N}\frac{1}{p_{i}}=\log\log N+O(1)$ as well. Since
$|D_{n}|\leq B(y_{n})<e^{D\sqrt{\log y_{n}}}$, there are at most
$e^{D\sqrt{\log y_{n}}}$ primes in the set $D_{n}$, so $\sum_{p\in D_{n}}\frac{1}{p}\leq\sum\limits_{i=1}^{e^{D\sqrt{\log y_{n}}}}\frac{1}{p_{i}}$. Therefore
$\sum\limits_{p\in E_{n}\setminus D_{n}}\frac{1}{p}\geq\sum\limits_{p\in E_{n}}\frac{1}{p}-\sum\limits_{i=1}^{e^{D\sqrt{\log y_{n}}}}\frac{1}{p_{i}}=\log\log y_{n}-\log\log e^{D\sqrt{\log y_{n}}}+O(1)=\frac{\log\log y_{n}}{2}+O(1).$
It follows that
$A(x_{n})\geq\frac{x_{n}}{6\log x_{n}}(\log\log y_{n}+O(1)).$
We have supposed that $B(x_{n})\geq 0.5\sqrt{x_{n}}$. Hence
$\frac{A(x_{n})\log B(x_{n})}{x_{n}}\geq\frac{\log\log y_{n}}{12}+O(1).$
This completes the proof, because $y_{n}\to\infty$ as $n\to\infty$. ∎
In order to prove Theorems 8 and 12 we need the following lemma.
###### Lemma 1.
Let $Q$ be a subset of prime numbers such that $\sum_{q\in
Q}\frac{1}{q}=\infty$. Let
$A=\\{n:\hbox{ $n$ is squarefree },n\in\mathbb{Z}^{+},p|n\Rightarrow p\in
Q\\}.$
Then there exists a set $B\subseteq\mathbb{Z}^{+}$ such that $(A,B)\in MC_{2}$
and $B(x)=o(x)$.
###### Proof.
Let $Q=\\{q_{1},q_{2},\dots\\}$, $q_{1}<q_{2}<\dots$. Define
$Q_{k}=\\{q_{1},\dots,q_{k}\\}$ for every $k\in\mathbb{Z}^{+}$. Let
$A_{k}=\\{n:\hbox{ $n$ is squarefree },n\in\mathbb{Z}^{+},p|n\Rightarrow p\in
Q_{k}\\}.$
Let
$B_{k}=\\{n:n=ml^{2},\hbox{$m$ is squarefree and }p|m\Rightarrow p\notin
Q_{k}\\}.$
Then $(A_{k},B_{k})\in MC_{2}$, $A_{1}\subseteq A_{2}\subseteq\dots$,
$\displaystyle A=\cup_{k=1}^{\infty}A_{k}$ and $B_{1}\supseteq
B_{2}\supseteq\dots$. Let $\displaystyle B=\cap_{k=1}^{\infty}B_{k}.$ Then
$(A,B)\in MC_{2}$. We are going to prove that for any $\varepsilon>0$, there
exists an $x_{0}=x_{0}(\varepsilon)$ such that
$B(x)\leq\varepsilon x$ (2.8)
for $x\geq x_{0}$.
Let $k$ be an integer such that
$4e^{-\sum_{i=1}^{k}\frac{1}{2q_{i}}}<\varepsilon$. Let
$1=b_{1}^{(k)}<b_{2}^{(k)}<\dots$ be the squarefree integers in the set
$B_{k}$. Then
$B(x)\leq B_{k}(x)\leq\sum_{b_{l}^{(k)}\leq x}\sqrt{\frac{x}{b_{l}^{(k)}}}.$
It follows from $\gcd(b_{l}^{(k)},q_{1}q_{2}\dots q_{k})=1$ that
$|\\{b_{1}^{(k)},b_{2}^{(k)},\dots\\}\cap\\{Nq_{1}\dots
q_{k}+1,\dots,(N+1)q_{1}\dots q_{k}\\}|\leq(q_{1}-1)\dots(q_{k}-1)$
for any nonnegative integer $N$. If $M(q_{1}-1)\dots(q_{k}-1)\leq
l<(M+1)(q_{1}-1)\dots(q_{k}-1)$, then
$b_{l}^{(k)}\geq(M+1)q_{1}\dots q_{k}-q_{1}\dots q_{k}\geq
l\prod_{i=1}^{k}\frac{q_{i}}{q_{i}-1}-q_{1}\dots q_{k}\geq
le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}-q_{1}\dots q_{k}.$
If $l>2q_{1}\dots q_{k}$, then
$\frac{1}{\sqrt{b_{l}^{(k)}}}\leq\frac{1}{\sqrt{le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}-q_{1}\dots
q_{k}}}=\frac{1}{\sqrt{le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}(1-\frac{q_{1}\dots
q_{k}}{le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}})}}\leq$
$\frac{1}{\sqrt{l}e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}\left(1-\frac{q_{1}\dots
q_{k}}{le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}}\right)}\leq\frac{1}{\sqrt{l}e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}}+\frac{1}{\sqrt{l}e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}}\frac{2q_{1}\dots
q_{k}}{le^{\sum_{i=1}^{k}\frac{1}{q_{i}}}}.$
Hence
$B(x)\leq\sum_{l\leq 2q_{1}\dots
q_{k}}\sqrt{\frac{x}{b_{l}^{(k)}}}+\sum_{2q_{1}\dots q_{k}<l\leq
x}\sqrt{\frac{x}{b_{l}^{(k)}}}\leq O(\sqrt{x})+\sum_{2q_{1}\dots q_{k}<l\leq
x}\left(\frac{\sqrt{x}}{\sqrt{l}e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}}+\frac{\sqrt{x}}{l^{3/2}e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}}\right)\leq$
$O(\sqrt{x})+\frac{\sqrt{x}}{e^{\sum_{i=1}^{k}\frac{1}{2q_{i}}}}\int_{0}^{x}\frac{1}{\sqrt{t}}dt+O(1)=O(\sqrt{x})+2xe^{-\sum_{i=1}^{k}\frac{1}{2q_{i}}}\leq\frac{\varepsilon}{2}x+O(\sqrt{x}),$
therefore there exists an $x_{0}=x_{0}(\varepsilon)$ such that (2.8) holds. ∎
###### Proof of Theorem 8.
Let us choose sequences $M_{1}<M_{2}<\dots$ and $N_{1}<N_{2}<\dots$ such that
$M_{k}<N_{k}$, $\displaystyle\sum_{M_{k}<p\leq N_{k}}\frac{1}{p}\geq 1$,
$\displaystyle\prod_{p\leq N_{k-1}}p<M_{k}$ and $k2^{N_{k-1}}<f(M_{k})$ for
every $k\in\mathbb{Z}^{+}$. Let
$Q=\\{p:\hbox{ there exists a $k\in\mathbb{Z}^{+}$ such that }M_{k}<p\leq
N_{k}\\}.$
Let
$A=\\{n:n\in\mathbb{Z}^{+},\hbox{ $n$ is squarefree },p|n\Rightarrow p\in
Q\\}.$
By construction, we have $\displaystyle\sum_{q\in
Q}\frac{1}{q}=\sum_{k=1}^{\infty}\sum_{M_{k}<p\leq
N_{k}}\frac{1}{p}\geq\sum_{k=1}^{\infty}1=\infty$. By Lemma 1, we know that
there exists a set $B\subseteq\mathbb{Z}^{+}$ such that $(A,B)\in MC_{2}$ and
$B(x)=o(x)$ as $x\to\infty$. Moreover,
$\frac{A(M_{k})}{f(M_{k})}\leq\frac{2^{\pi(N_{k-1})}}{f(M_{k})}<\frac{2^{N_{k-1}}}{k2^{N_{k-1}}}=\frac{1}{k}$
for every $k\in\mathbb{Z}^{+}$, therefore
$\liminf_{x\to\infty}\frac{A(x)}{f(x)}=0.$
Since $\min\\{A(x),B(x)\\}\leq A(x)$, this completes the proof. ∎
###### Proof of Theorem 10.
Let us suppose that for the function $f(x)>0$, $x\geq 1$ the series
$\displaystyle\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}}$ is finite. Let us suppose
that for some $(A,B)\in MC_{2}$, we have $A(x)\leq Cf(x)$ for every
$x\in\mathbb{Z}^{+}$ and $B(x)=o(x)$. Then
$\sum_{a\leq N,a\in
A}\frac{1}{a}=\sum_{n=1}^{N}\frac{A(n)-A(n-1)}{n}=\sum_{n=1}^{N-1}\frac{A(n)}{n(n+1)}+\frac{A(N)}{N}\leq
1+C\sum_{n=1}^{\infty}\frac{f(n)}{n^{2}},$
therefore the series $\sum_{a\in A}\frac{1}{a}$ converges. It follows that
there exists an $n_{1}$ such that $\sum_{a\geq n_{1},a\in
A}\frac{1}{a}<\frac{1}{2}$. Clearly
$x\leq|\\{n:n\leq x,\exists a<n_{1},a\in A,b\in B,n=ab\\}|+|\\{n:n\leq x,\exists a\geq n_{1},a\in A,b\in B,n=ab\\}|\leq$ $\sum_{a<n_{1},a\in
A}B(\frac{x}{a})+\sum_{a\geq n_{1},a\in A}B(\frac{x}{a})\leq o(x)+\sum_{a\geq
n_{1},a\in A}\frac{x}{a}\leq(0.5+o(1))x,$
a contradiction. ∎
###### Proof of Theorem 12.
We are going to find a set $Q\subseteq P$ such that
$\sum_{q\in Q}\frac{1}{q}=\infty$ (2.9)
and the set
$A=\\{n:n\hbox{ is squarefree },p|n\Rightarrow p\in Q\\}$
satisfies
$A(x)=O(f(x)).$ (2.10)
Then by Lemma 1, there exists a $B\subseteq\mathbb{Z}^{+}$ such that $(A,B)\in
MC_{2}$ and $B(x)=o(x)$.
Let $\chi_{Q}(n)=\left\\{\begin{array}[]{ll}1,&\mbox{if $n\in Q$}\\\ 0,&\mbox{
if $n\notin Q$}\end{array}\right.$. Let
$A_{k}=\\{n:n\in A,n\hbox{ has $k$ different prime factors}\\}.$
To prove (2.10) it is enough to verify
$A_{k}(x)\leq\frac{e^{\frac{1}{c_{1}}}f(x)\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)^{k-1}}{(k-1)!exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}$
(2.11)
holds for $x\geq 1$ and $k\in\mathbb{Z}^{+}$, because
$A(x)\leq 1+\sum_{k=1}^{\infty}A_{k}(x)\leq
1+\sum_{k=1}^{\infty}\frac{e^{\frac{1}{c_{1}}}f(x)\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)^{k-1}}{(k-1)!exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}=1+e^{\frac{1}{c_{1}}}f(x).$
First, we define a set $R\subseteq P$, then by truncating it we get the new
set $Q$. The set $R$ is defined recursively. We run over the primes $p$ and
the prime number $p$ is included in the set $R$ if and only if
$R(p-1)+1\leq\frac{f(p)}{exp\left(\frac{2}{c_{1}}\sum_{n<p}\frac{\chi_{R}(n)}{n}\right)}.$
First, we show that
$\sum_{r\in R}\frac{1}{r}=\infty.$
By contradiction, let us suppose that $\sum_{r\in R}\frac{1}{r}=K<\infty$. Let
us suppose that $p\in R$ and the next prime in the set $R$ is $p^{\prime}$.
Let $p\leq x<p^{\prime}$. Then
$R(x)=R(p)=R(p-1)+1\leq\frac{f(p)}{exp\left(\frac{2}{c_{1}}\sum_{n<p}\frac{\chi_{R}(n)}{n}\right)}\leq\frac{f(x)}{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{R}(n)}{n}-\frac{2}{c_{1}}\frac{1}{p}\right)}=$
$\frac{e^{\frac{2}{c_{1}}\frac{1}{p}}f(x)}{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{R}(n)}{n}\right)}\leq\frac{e^{\frac{1}{c_{1}}}f(x)}{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{R}(n)}{n}\right)}\leq
e^{\frac{1}{c_{1}}}f(x).$
It follows that for any $x\geq 1$,
$R(x)\leq e^{\frac{1}{c_{1}}}f(x)=o(\frac{x}{\log x}).$
It is well known that $\pi(n)-\pi(\frac{n}{2})=(0.5+o(1))\frac{n}{\log n}$ as
$n\to\infty$, and so there exists an $n_{1}$ such that for any $n\geq n_{1}$
there exists a prime number $p$ in the interval $[0.5n,n]$ such that $p\notin
R$. Then by the definition of set $R$ and condition 3,
$R(p-1)+1>\frac{f(p)}{exp\left(\frac{2}{c_{1}}\sum_{n<p}\frac{\chi_{R}(n)}{n}\right)}>\frac{f(p)}{e^{\frac{2K}{c_{1}}}}=\frac{\frac{f(p)}{f(n)}}{\frac{p}{n}}\frac{p}{n}\frac{1}{e^{\frac{2K}{c_{1}}}}f(n)\geq\frac{c_{1}f(n)}{2e^{\frac{2K}{c_{1}}}}.$
If $f(n)>\frac{6e^{\frac{2K}{c_{1}}}}{c_{1}}$, then
$R(n)\geq
R(p-1)\geq\frac{c_{1}f(n)}{2e^{\frac{2K}{c_{1}}}}-1\geq\frac{c_{1}f(n)}{2e^{\frac{2K}{c_{1}}}}-\frac{c_{1}f(n)}{6e^{\frac{2K}{c_{1}}}}\geq\frac{f(n)}{3c_{2}e^{\frac{2K}{c_{1}}}}.$
(2.12)
So inequality (2.12) holds if $n\geq n_{2}$. By condition 4, we get that
$\sum_{n=1}^{\infty}\frac{R(n)}{n^{2}}=\infty$, but
$\sum_{r\leq N,r\in
R}\frac{1}{r}=\sum_{n=1}^{N}\frac{R(n)-R(n-1)}{n}=\sum_{n=1}^{N-1}\frac{R(n)}{n(n+1)}+\frac{R(N)}{N}\geq\frac{1}{2}\sum_{n=1}^{N-1}\frac{R(n)}{n^{2}}\to\infty$
as $N\to\infty$, a contradiction.
Let $R=\\{r_{1},r_{2},\dots\\}$, $r_{1}<r_{2}<\dots$. It is well known that
$\displaystyle\lim_{x\to\infty}\sum_{\sqrt{x}\leq p\leq x}\frac{1}{p}=\log 2$.
We can find an $N_{0}\in\mathbb{Z}^{+}$ such that
$\sum_{\sqrt{x}\leq r_{i}\leq x,N_{0}|i}\frac{1}{r_{i}}<\frac{c_{1}\log 2}{2}$
(2.13)
for every $x\geq 1$. Let
$Q=\\{r_{N_{0}},r_{2N_{0}},\dots\\}.$
Clearly
$\sum_{q<x,q\in Q}\frac{1}{q}\geq\frac{\sum_{r<x,r\in
R}\frac{1}{r}-\sum_{i=1}^{N_{0}-1}\frac{1}{r_{i}}}{N_{0}}\to\infty$
as $N\to\infty$, therefore (2.9) holds.
It remains to prove (2.11). We proceed by induction on $k$.
For $k=1$,
$A_{1}(x)=Q(x)\leq
R(x)\leq\frac{e^{\frac{1}{c_{1}}}f(x)}{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{R}(n)}{n}\right)}\leq\frac{e^{\frac{1}{c_{1}}}f(x)}{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}.$
Let us suppose that the statement holds for $k$ and we prove it for $k+1$. If
$n\leq x$, $n\in A_{k+1}$, that is $n=p_{1}\dots p_{k+1}$,
$p_{1}<\dots<p_{k+1}$, $p_{i}\in Q$, then $p_{i}<\sqrt{x}$ for $1\leq i\leq
k$, therefore $kA_{k+1}(x)\leq\sum_{q\in Q,q<\sqrt{x}}A_{k}(\frac{x}{q})$.
Hence
$kA_{k+1}(x)\leq\sum_{q\in Q,q<\sqrt{x}}A_{k}(\frac{x}{q})\leq\sum_{q\in
Q,q<\sqrt{x}}\frac{e^{\frac{1}{c_{1}}}f(\frac{x}{q})\left(\frac{2}{c_{1}}\sum_{n<\frac{x}{q}}\frac{\chi_{Q}(n)}{n}\right)^{k-1}}{(k-1)!exp\left(\frac{2}{c_{1}}\sum_{n<\frac{x}{q}}\frac{\chi_{Q}(n)}{n}\right)}.$
By condition 3, we have
$f(\frac{x}{q})\leq\frac{1}{c_{1}}\frac{1}{q}f(x)$
and by (2.13),
$exp\left(\frac{2}{c_{1}}\sum_{n<\frac{x}{q}}\frac{\chi_{Q}(n)}{n}\right)=\frac{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}{exp\left(\frac{2}{c_{1}}\sum_{\frac{x}{q}\leq
n<x}\frac{\chi_{Q}(n)}{n}\right)}\geq\frac{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}{exp\left(\frac{2}{c_{1}}\sum_{\sqrt{x}\leq
n<x}\frac{\chi_{Q}(n)}{n}\right)}\geq$
$\frac{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}{exp\left(\frac{2}{c_{1}}\frac{c_{1}\log
2}{2}\right)}=\frac{exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}{2}.$
Hence
$kA_{k+1}(x)\leq\sum_{q\in
Q,q<\sqrt{x}}\frac{e^{\frac{1}{c_{1}}}\frac{1}{c_{1}}\frac{1}{q}f(x)\left(\frac{2}{c_{1}}\sum_{n<\frac{x}{q}}\frac{\chi_{Q}(n)}{n}\right)^{k-1}}{(k-1)!\frac{1}{2}exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}\leq$
$\frac{e^{\frac{1}{c_{1}}}f(x)\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)^{k-1}}{(k-1)!exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)}\sum_{q\in
Q,q<\sqrt{x}}\frac{2}{c_{1}}\frac{1}{q}<\frac{e^{\frac{1}{c_{1}}}f(x)\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)^{k}}{(k-1)!exp\left(\frac{2}{c_{1}}\sum_{n<x}\frac{\chi_{Q}(n)}{n}\right)},$
which completes the proof. ∎
###### Proof of Theorem 14.
First we prove the first part of Theorem 14. Let
$V=\\{p\>\text{prime }:\exists n\in B,p\hbox{ is the largest prime factor of
}n\\}.$
If no prime in $V$ divides a number $n$, then the equality $n=ab,a\in A,b\in B$
implies $b=1,a=n$, thus $n\in A$. Hence
$\liminf_{x\to\infty}\frac{A(x)}{x}\geq\prod_{p\in
V}\left(1-\frac{1}{p}\right)\geq\prod_{i=1}^{|B|}\left(1-\frac{1}{p_{i}}\right),$
where $p_{i}$ is the $i$th smallest prime number.
It is well known that
$\lim_{k\to\infty}\log
k\prod_{i=1}^{k}\left(1-\frac{1}{p_{i}}\right)=e^{-\gamma},$ (2.14)
where $\gamma$ is the Euler-Mascheroni constant. It follows that there exists
a constant $c_{1}>0$ such that
$\liminf_{x\to\infty}\frac{A(x)\log|B|}{x}\geq c_{1}.$
Now we prove the second part of Theorem 14. Let $2^{k}$ be the largest power
of $2$ that is less than or equal to $N$. Let $B\subseteq\mathbb{Z}^{+}$ be a set of size $N$ that contains all square-free numbers composed of the $k$ smallest primes. Clearly $2^{k}\leq|B|=N<2^{k+1}$. Let
$A=\\{mr^{2}:p_{i}\nmid m\>\forall\>1\leq i\leq k,r\in\mathbb{Z}^{+}\\}.$
Denote by $A_{sf}$ the set of squarefree integers in the set $A$. If $n\in
A_{sf}$, then $\gcd(n,p_{1}\dots p_{k})=1$, therefore $A_{sf}(x)\leq
O(1)+x\prod_{i=1}^{k}\left(1-\frac{1}{p_{i}}\right)$. Hence
$A(x)\leq\sum_{r\leq\sqrt{x}}A_{sf}\left(\frac{x}{r^{2}}\right)\leq\sum_{r\leq\sqrt{x}}\left(\frac{x}{r^{2}}\prod_{i=1}^{k}\left(1-\frac{1}{p_{i}}\right)+O(1)\right)=O(\sqrt{x})+\frac{\pi^{2}x}{6}\prod_{i=1}^{k}\left(1-\frac{1}{p_{i}}\right).$
Then $A$ and $B$ are multiplicative complements and by (2.14),
$\limsup_{x\to\infty}\frac{A(x)}{x}\leq\frac{\pi^{2}}{6}\prod_{i=1}^{k}\left(1-\frac{1}{p_{i}}\right)\leq\frac{c^{\prime}}{\log
k}\leq\frac{c_{2}}{\log\log N},$
which completes the proof.
∎
## Bibliography
* [1] L. Danzer, Über eine Frage von G. Hanani aus der additiven Zahlentheorie, J. Reine Angew. Math 214/215 (1964), 392–394.
* [2] A. Kocsis, D. Matolcsi, C. Sándor and G. Tőtős, Multiplicative complements I., Acta Arith. 207 (2023), no. 2, 101–120.
 * [3] P. P. Pach and C. Sándor, Multiplicative bases and an Erdős problem, Combinatorica 38 (2018), no. 5, 1175–1203.
* [4] R. Warlimont, Eine Bemerkung zu einem Ergebnis von N. G. de Bruijn, Monatsh. Math. 74 (1970), 273–276.
# Improving On-Screen Sound Separation for Open-Domain Videos with Audio-
Visual Self-Attention
Efthymios Tzinis
UIUC
<EMAIL_ADDRESS>
Scott Wisdom
Google Research
<EMAIL_ADDRESS>
Tal Remez
Google Research
<EMAIL_ADDRESS>
John R. Hershey
Google Research
<EMAIL_ADDRESS>
Work done during an internship at Google.
###### Abstract
We introduce a state-of-the-art audio-visual on-screen sound separation system
which is capable of learning to separate sounds and associate them with on-
screen objects by looking at in-the-wild videos. We identify limitations of
previous work on audio-visual on-screen sound separation, including the
simplicity and coarse resolution of spatio-temporal attention, and poor
convergence of the audio separation model. Our proposed model addresses these
issues using cross-modal and self-attention modules that capture audio-visual
dependencies at a finer temporal resolution, and by unsupervised pre-training of the audio separation model. These improvements allow the model to generalize to
a much wider set of unseen videos. We also propose a robust way to further
improve the generalization capability of our models by calibrating the
probabilities of our audio-visual on-screen classifier, using only a small
amount of in-domain videos labeled for their on-screen presence. For
evaluation and semi-supervised training, we collected human annotations of on-
screen audio from a large database of in-the-wild videos (YFCC100m). Our
results show marked improvements in on-screen separation performance, in more
general conditions than previous methods.
## 1 Introduction
Humans are able to effortlessly perceive sounds in a noisy scene, and
associate them with any corresponding visible objects. In audio processing, a
corresponding challenge is to isolate sound sources from a mixture waveform
and identify the associated visual appearance of each sound source. Sound
sources in audio are analogous to objects in computer vision. An advantage in
vision is that objects tend to occupy distinct regions of pixels, so that
features from different regions of the image correspond to different objects.
In contrast, sound sources are superimposed in the audio, making it even more
difficult to disentangle the features of different sounds.
This creates a challenge for unsupervised learning of audio-visual separation,
because unlike their visual counterparts, the audio sources in a scene cannot
be easily selected for alignment with the video objects. The audio needs to be
separated into sources at some point in the process, either by conditioning
separation on selected video objects, or by separating the audio _a priori_ ,
before associating the sounds with video objects, or both. The _a priori_
separation of the sounds, which we pursue here, has a few advantages: thanks
to recent work, it can be learned in an unsupervised way, and it can handle an
unknown number of sounds, including those that do not appear on-screen.
Despite the recent progress in audio-visual machine perception using neural
networks, trained models still suffer from a variety of problems. Many of the
previous works rely heavily on labeled training data, or data with special
properties such as having one high quality sound that is on screen in each
example. They may also rely on pre-trained supervised object detectors that
require labeled data. This can work well in a restricted domain, where such
labeled data are available; however in an open-domain setting the reliance on
human labels effectively precludes scaling to large open-domain data. Thus a
typical problem is difficulty in generalizing from the domain of labeled
training data to the domain of realistic data that would be seen in a
practical application.
Although the co-perception and alignment of audio and visual modalities is not
trivial, a variety of works have shown promising results by proposing neural
network architectures apt for the task [8, 45, 6, 2, 13, 40, 7]. Audio-visual
sound separation [17, 10, 1] and more specifically separation of on-screen
versus off-screen sounds [29] have also seen remarkable performance
improvements since the initial works. Important innovations have included
using localization of objects [45, 48, 26], forcing consistency between audio
and visual modalities representation [24, 14, 3, 13], weakly [32] and self-
supervised approaches [33, 22, 2, 40]. Despite the remarkable progress in the
field of on-screen sound source separation, most of these works are
constrained to isolating a specific set of sounds which can appear on-screen
such as speech [10, 14, 1] or music [11, 12]. Recent work has started to
expand beyond music and speech to a wider variety of classes, using visual
scene graphs to model relationships between detected objects and use this
information to condition audio separation [7], but this approach is still
inherently limited by requiring labeled data to train supervised object
detection. Although the seminal works in separating on-screen sounds proposed
models which were somewhat invariant to the types of sources [29, 12], those
systems were unable to be trained with real world videos mainly because they
needed labeled videos assuming that sound sources always appeared on-screen
during training. A recently proposed model, known as AudioScope [40],
addressed this problem by enabling unsupervised training of audio-visual sound
separation models from in-the-wild videos, without requiring object detection
modules nor assuming that all sources have to be on-screen. The model is
trained to separate individual sound sources from a mixture of two videos
using mixture invariant training (MixIT) [44], which works by assigning
estimated sources to the best matching video mixture. The AudioScope model
then uses these source assignments as pseudo-labels to help train the audio-
visual correspondence classifier.
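To make the MixIT mechanism concrete, the following toy sketch (ours; a simple squared-error loss stands in for the SNR-style losses typically used) shows the core assignment step, where each estimated source is routed to whichever of the two reference mixtures best explains it.

```python
import numpy as np
from itertools import product

def mixit_loss(est_sources, mix1, mix2):
    """Best remix-to-mixture assignment loss for MixIT (toy version).

    est_sources: (M, T) array of sources separated from mix1 + mix2.
    """
    best = np.inf
    # Each estimated source is assigned to exactly one of the two mixtures.
    for assign in product([0, 1], repeat=est_sources.shape[0]):
        remix = np.zeros((2, est_sources.shape[1]))
        for m, a in enumerate(assign):
            remix[a] += est_sources[m]
        err = np.sum((remix[0] - mix1) ** 2) + np.sum((remix[1] - mix2) ** 2)
        best = min(best, err)
    return best

# Sanity check: if the estimates equal the true sources, the best
# assignment reconstructs both mixtures exactly.
rng = np.random.default_rng(0)
s = rng.normal(size=(4, 100))            # four "true" sources
mix1, mix2 = s[0] + s[1], s[2] + s[3]    # two reference mixtures
assert np.isclose(mixit_loss(s, mix1, mix2), 0.0)
```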
However, the problem of generalizing to a wide variety of videos still remains
largely unsolved. AudioScope [40] was able to obtain adequate performance in
terms of on-screen sound detection and reconstruction when training was
injected with a few thousand labeled examples, but performance suffered in
the purely unsupervised setting. In this work, we try to identify some of the
possible limitations of this approach and propose ways to overcome the
associated challenges. We hypothesize that AudioScope’s performance is limited
by the simplicity of its visually guided spatio-temporal attention layer [5],
and the low temporal resolution (one frame per second) of its visual model.
These factors may prevent AudioScope from capturing synchronization features
which might be crucial for detecting the interplay between audio and video [16, 22, 2]. Another limitation of the AudioScope architecture stems from the outputs of the audio-visual classifier, which are produced independently of the possible domain mismatch between the train and test distributions. Consequently,
the probabilities which are later used as soft weights in front of the
separated sources might be heavily biased because of slight variations in the
recording setup (e.g., audio-energy levels, camera position, etc.), thus leading to poor generalization to previously unseen audio and video classes.
We propose to address these problems by introducing richer transformer-style
audio-visual attention models, operating at higher frame rates, in order to
better detect audio-visual synchrony. Furthermore, we propose to refine the
estimates of the on-screen audio-visual classifier per source by utilizing the
powerful representations obtained from unsupervised pre-training of the audio
source separation module with MixIT. We also propose a new way to perform
weakly-supervised domain adaptation by calibrating the output probabilities
produced by the classifier using a very small number of videos annotated for their on-screen presence. Finally, we also propose even more
challenging evaluation datasets containing unfiltered videos from the YFCC100m
[36] data collection, and show that our new proposed models both generalize
and perform better on these evaluations.
## 2 Relation to Prior Work
Audio-visual separation of in-the-wild on-screen sounds relies on the
capability of separating the individual sources that are superimposed in an
input audio recording. Recent work has shown that it is possible to train an
open-domain model to isolate individual sound sources using a mask-based
convolutional architecture regardless of the category of sound [20, 38]. A
related promising direction is to extract sources of interest by conditioning
the separation networks using identity cues. This has yielded performance
improvements for speech [42] as well as universal source separation [39, 28].
However, these experiments relied on having sufficient supervised training
data and were evaluated only on test sets with similar environmental
conditions and sound distributions. In order to extend the reach of this
approach, methods have been proposed to train separation models with no access
to ground truth clean sources by utilizing weak class labels [31], the spatial
separability of the sources [37, 34, 9] and self-supervision in the form of
MixIT [44]. This makes it possible to learn separation of signals well outside
the domains for which isolated source databases exist.
AudioScope [40] used MixIT in order to avoid the dependency on clean in-domain
training data, which may be unavailable for many types of sounds. However, it
also relied on filtering of the dataset in order to increase the proportion of
videos containing on-screen sounds. This was done using scores produced by an
audio-visual coincidence model [19] pre-trained on AudioSet [15]. A
disadvantage of this approach is that the coincidence model may be susceptible
to a looser semantic association, where sounds are thematically related to a
visual scene, as opposed to true audio-visual correspondence, where the sounds
actually appear onscreen. This, and any other biases in that model, may have
limited the ability of the resulting AudioScope model to generalize to the
distribution of sounds in unfiltered data. Our approach extends the
utilization of MixIT by pre-training on audio from a broader in-the-wild video
data collection. This may enable less reliance on pre-filtering, perhaps by
producing more robust pseudo-labels for training the audio-visual classifier.
Building upon that, we propose a novel way to refine the on-screen probability
estimates and perform domain adaptation on an on-screen separation model using
isotonic regression [47] with a small subset of labeled videos for their on-
screen presence.
An open question in audio-visual correspondence models concerns the level of
processing at which audio and video objects can be aligned. Typically audio-
visual models have used high-level features at the output of deep neural
networks to estimate correspondence between audio and video signals [24, 40,
19, 22, 7]. Such high-level representations may tend to focus on semantic
information about the class of objects and sounds, especially when the
features are computed from single frames at a low frame rate. Such methods may
work well for single instances of a class of object or sound, but may struggle
with identification for multiple instances of a class, or for classes not seen
during training. In contrast, there may be significant information in the
correspondence between lower-level features. Mutual information between low-
level features was used for audio-visual localization [16], and several more
recent works have shown promising results for self-supervised audio-visual
learning using low-level, motion [48] and optical flow [2] features. Such
features may help with generalization and instance-level correspondence by
detecting synchronous dynamics of the audio and video, regardless of their
semantic class.
Attention is a framework that may be useful at multiple levels of processing.
Attention-based models have been extensively utilized across audio-visual
learning tasks. An attention-based framework was recently used to modulate
audio representations using motion-based visual features [48] for separation
and localization. Conversely, modulating video features based on audio
embeddings has also been used for speech separation [24] as well as in the
spatio-temporal attention module of the initial AudioScope [40]. Other works
combined self-attention layers [41] for modeling inter-modality temporal
patterns as well as cross-modal attention modules for intra-modality
associations [46, 8, 45] for sound source localization. Our proposed model in
this paper applies attention at the level of audio sources, which is similar
to the operation of _slot attention_ models which apply attention across video
objects [27].
Inspired by the flexibility of self-attention models to represent spatio-
temporal associations, we propose to extend the applicability of self-
attention and cross-modal attention modules to unsupervised audio-visual on-
screen sound separation tasks. However, the applicability of these attention
layers is not trivial when we also want to scale to higher resolution audio-
visual representations in order to capture synchrony. The attention tensor
grows quadratically in complexity with the dimensionality of the space. For
this reason, we propose a variety of self-attention architectures that
factorize the attention operation across different dimensions and modalities.
The separable attention layers allow us to achieve similar performance to full
self-attention with a much lower computational footprint. Our proposed layers
differ from recently proposed separable attention layers [6, 25] in the sense
that we also accommodate modeling intra-modality patterns from both audio and
video features.
## 3 Model architecture
In Figure 1, the overall architecture of the proposed model is displayed
alongside the replaced modules from the previous version. The most noticeable
changes are with respect to the way we align the audio-visual features
extracted from the embedding networks. In the proposed version, we introduce a
family of transformer-based architectures (explained in Section 3.3) which are
able to obtain semantically rich representations between the audio and the
video modalities. In addition, we omit the temporal pooling operation for the
inputs of the proposed attention blocks since we want to let the model build
implicit latent representations capable of capturing low-level intra-modal
synchrony-based dependencies.
Figure 1: System diagram comparing our proposed approach with AudioScope [40],
with two input mixtures and four output sources. The modules common between
AudioScope and our approach are represented with green dark background,
AudioScope modules that are replaced use lighter font and dashed border, and
new proposed modules have orange background with solid border.
### 3.1 Separation Module
The separation module $\mathcal{M}_{s}$ has the same dilated convolutional
architecture as the one used in AudioScope [40] with learnable encoder and
decoder. This module takes as input a mixture waveform
$x\in\mathbb{R}^{T^{\prime}}$, estimates $M$ masks in the encoded latent space
and as a result outputs $M$ estimated sources $\hat{s}\in\mathbb{R}^{M\times
T^{\prime}}$ which are forced to add up to the input mixture through a
consistency layer [43]. Based on the ablation studies of [40] and our
experiments, we completely remove the conditional separation [39], since it did
not provide significant gains over a separation module that ignores the visual
representations. More importantly, by removing the
dependence on the visual conditional embeddings, we are able to pre-train the
separation module on all YFCC100m [36] audio tracks in order to provide a
better initialization for training the audio-visual model.
### 3.2 Audio and video embedding networks
We extract the audio features for the $M$ estimated sources $\hat{s}$ and the
corresponding $T$ input video frames, with $128\times 128$ spatial resolution
each, using a MobileNetV1 architecture [18] following a similar setup used for
AudioScope [40]. Specifically, the audio embedding network takes as input log
mel-scale spectrogram representations of the time-domain separated sources
$\hat{s}$ while the visual embedding network is applied to each of the $T$
input video frames independently. We use local features extracted from
intermediate levels of the embedding networks as audio and video features. For
the audio part, we use the output of the intermediate activation at the $23$rd
layer, while for the video part, we use the one that has $8\times 8$ spatial
locations. These activations provide audio-visual representations which enjoy
both local and global feature descriptors. The final audio and video feature
tensors are also fed through dense layers in order to force them to have the
same depth $D$.
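For concreteness, the following shape-level sketch (ours, with illustrative dimensions; the intermediate feature depths are assumptions, not values reported by the model) shows how local audio and video activations could be projected by dense layers to a common depth $D$, yielding the tensors consumed by the attention modules of Section 3.3.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, G, D = 4, 80, 8 * 8, 16       # sources, frames, spatial cells, common depth

# Local features from intermediate layers of the embedding networks; the
# intermediate depths (256 and 512) are assumed for illustration only.
audio_local = rng.standard_normal((M, T, 256))
video_local = rng.standard_normal((G, T, 512))

W_a = rng.standard_normal((256, D))  # stand-ins for the trainable dense layers
W_v = rng.standard_normal((512, D))

Z_A = audio_local @ W_a              # (M, T, D) audio feature tensor
Z_V = video_local @ W_v              # (G, T, D) video feature tensor
print(Z_A.shape, Z_V.shape)
```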
(a) Self-Attention (SA).
(b) Cross-modal attention (CMA).
(c) Separable attention.
Figure 2: Attention architectures for audio-visual alignment and feature
extraction. All features are depicted without including the batch and depth
dimensions for simplicity.
### 3.3 Audio-visual spatio-temporal attention
We seek to propose mechanisms which are able to effectively capture
correlation across source, space, and time for the audio features
$Z_{\mathrm{A}}\in\mathbb{R}^{M\times T\times D}$ and the corresponding video
features $Z_{\mathrm{V}}\in\mathbb{R}^{G\times T\times D}$, where $G=8\cdot 8$
is the total number of spatial locations, and the time dimension $T$ is shared
across both tensors. We provide a slightly more general version of an
attention layer [5] which computes similarities between a packed tensor of
queries $Q\in\mathbb{R}^{X_{Q}\times T_{Q}\times D}$ with respect to some
packed keys $K\in\mathbb{R}^{X_{K}\times T_{K}\times D}$, where $D$ denotes
the depth dimensionality of the tensors. The similarities are computed using a
generalized version of the typical inner product for tensors $\langle
Z_{1},Z_{2}\rangle_{\mathcal{A}}$, which reduces across the specified
dimensions $\mathcal{A}$ of the second tensor $Z_{2}$. Note that we assume
that the dimensions of $Z_{2}$ are a subset of the dimensions of $Z_{1}$. By
using a scaled tensor inner-product [41] and a
$\operatorname{softmax}_{\mathcal{A}}$ activation that averages over the
dimensions specified by $\mathcal{A}$ at the output of the tensor product
$\langle K,Q\rangle_{\\{\mathrm{D}\\}}$ we produce the resulting similarity
tensor which modulates the values $V\in\mathbb{R}^{X_{V}\times T_{V}\times
D}$:
$\displaystyle\operatorname{attention}_{\mathcal{A}}(Q,K,V)=\langle\alpha,f_{\mathrm{V}}(V)\rangle_{\mathcal{A}},\enskip\alpha=\operatorname{softmax}_{\mathcal{A}}\left(\frac{1}{\sqrt{D}}\langle
f_{\mathrm{K}}(K),f_{\mathrm{Q}}(Q)\rangle_{\\{\mathrm{D}\\}}\right),$ (1)
where $Q$, $K$, $V$, and $\alpha$ are the query tensor, the key tensor, the
value tensor, and the attention weight distribution tensor across the set of
specified axes $\mathcal{A}$ of the value/key tensors over which attention is
applied. For example, for input query $Q$ of shape $X_{Q}\times T_{Q}\times D$
and value $V$ of shape $X_{V}\times T_{V}\times D$,
$\operatorname{attention}_{\\{\mathrm{X_{V}},\mathrm{T_{V}}\\}}(Q,V,V)$
performs attention over the first and second axes of $V$, yielding an output
tensor of shape $X_{Q}\times T_{Q}\times D$. The dense layers
$f_{\mathrm{Q}}$, $f_{\mathrm{V}}$, $f_{\mathrm{K}}$ are all trainable and
applied to the depth dimension $D$.
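To make the axis-generalized attention of Equation (1) concrete, the following minimal NumPy sketch (our own illustration, not the model code; the trainable dense layers $f_{\mathrm{Q}}$, $f_{\mathrm{K}}$, $f_{\mathrm{V}}$ are omitted and all shapes are assumed) computes attention of a query tensor over both non-depth axes of a key/value tensor.

```python
import numpy as np

def attention_over_axes(Q, K, V):
    """Sketch of Eq. (1): attend a query tensor over both non-depth axes
    of the key/value tensor; the dense layers f_Q, f_K, f_V are omitted."""
    D = Q.shape[-1]
    # Generalized inner product <K, Q>_{D}: reduce over the depth axis only,
    # giving a similarity tensor of shape (X_Q, T_Q, X_K, T_K).
    sim = np.einsum('qtd,ksd->qtks', Q, K) / np.sqrt(D)
    # softmax_A: normalize jointly over the key axes A = {X_K, T_K}.
    sim -= sim.max(axis=(2, 3), keepdims=True)
    alpha = np.exp(sim) / np.exp(sim).sum(axis=(2, 3), keepdims=True)
    # <alpha, V>_A: reduce over the key axes, modulating the values.
    return np.einsum('qtks,ksd->qtd', alpha, V)

# Illustrative shapes: M=4 sources querying G=64 video locations over T frames.
Z_A = np.random.randn(4, 80, 16)    # query tensor (X_Q, T_Q, D)
Z_V = np.random.randn(64, 80, 16)   # key/value tensor (X_K, T_K, D)
print(attention_over_axes(Z_A, Z_V, Z_V).shape)  # (4, 80, 16)
```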
We utilize as a main block of our network the multi-head attention (MHA) layer
proposed in [41]. Each one of the $H$ heads performs attention to some low-
dimensional embeddings derived from the tensors $Q$ and $V$, with the output
embedding depth reduced to $D/H$. These independent attention heads have the
capability to focus on different semantics of the input tensors. The final
output, after performing attention across the specified axes $\mathcal{A}$, is
given by aggregating all the intermediate heads’ outputs as shown next:
$\displaystyle
o^{(h)}=\operatorname{attention}_{\mathcal{A}}(f_{Q}^{(h)}(Q),f_{V}^{(h)}(V),f_{V}^{(h)}(V)),\enskip
h\in\\{1,\dots,H\\}$ (2)
$\displaystyle\operatorname{MHA}_{\mathcal{A}}(Q,V)=f(\operatorname{Concat}([o^{(1)},\dots,o^{(H)}]))\in\mathbb{R}^{X_{Q}\times
T_{Q}\times D}$
where $f$ denotes a dense layer $\mathbb{R}^{X_{Q}\times T_{Q}\times
D}\rightarrow\mathbb{R}^{X_{Q}\times T_{Q}\times D}$ and the dense layers
$f_{Q}^{(h)}$ and $f_{V}^{(h)}$ are defined as linear maps:
$\mathbb{R}^{X_{Q}\times T_{Q}\times D}\rightarrow\mathbb{R}^{X_{Q}\times
T_{Q}\times D/H}$ and $\mathbb{R}^{X_{V}\times T_{V}\times
D}\rightarrow\mathbb{R}^{X_{V}\times T_{V}\times D/H}$, respectively. In this
version we have simplified the usage of the attention layer in that we always
assume that the keys and values tensors are the same. Now, we describe in
detail the proposed versions of the transformer-based [41] audio-visual
attention architectures.
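Before turning to the specific architectures, Equation (2) can be sketched as follows (a minimal illustration of our own; random matrices stand in for the trainable layers $f_{Q}^{(h)}$, $f_{V}^{(h)}$, and $f$, and all shapes are assumed).

```python
import numpy as np

def mha_over_axes(Q, V, H=4, seed=0):
    """Sketch of Eq. (2): H heads, each attending over both non-depth axes
    of V at reduced depth D/H, concatenated and mixed by an output map.
    Random matrices stand in for the trainable layers f_Q^(h), f_V^(h), f."""
    rng = np.random.default_rng(seed)
    D = Q.shape[-1]
    d = D // H
    heads = []
    for _ in range(H):
        Wq = rng.standard_normal((D, d))
        Wv = rng.standard_normal((D, d))
        q, v = Q @ Wq, V @ Wv                       # project depth D -> D/H
        sim = np.einsum('qtd,ksd->qtks', q, v) / np.sqrt(d)
        sim -= sim.max(axis=(2, 3), keepdims=True)
        alpha = np.exp(sim) / np.exp(sim).sum(axis=(2, 3), keepdims=True)
        heads.append(np.einsum('qtks,ksd->qtd', alpha, v))  # keys = values = v
    W_out = rng.standard_normal((D, D))             # stand-in for the dense layer f
    return np.concatenate(heads, axis=-1) @ W_out   # (X_Q, T_Q, D)

Z_A = np.random.randn(4, 80, 16)    # audio features (M, T, D)
Z_V = np.random.randn(64, 80, 16)   # video features (G, T, D)
print(mha_over_axes(Z_A, Z_V).shape)  # (4, 80, 16)
```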
#### 3.3.1 Self-attention (SA)
First, we concatenate the audio and the video tensors across the sources and
spatial dimensions:
$\displaystyle
Z_{\mathrm{AV}}^{(0)}=\operatorname{Concat}([Z_{\mathrm{A}},Z_{\mathrm{V}}])\in\mathbb{R}^{(M+G)\times
T\times D}$ (3)
which is used as the input to the first self-attention layer.
Joint: In the joint version of the self-attention module we seek to model
correlations across space, time, and sources by attending across all those
dimensions. Formally, we express the operation performed at the $l$-th layer
of a joint self-attention module as follows, also illustrated in Figure 2(a):
$\displaystyle r^{(l)}=$
$\displaystyle\operatorname{LayerNorm}\left(\operatorname{MHA}_{\\{\mathrm{M+G},\mathrm{T}\\}}\left(Z_{\mathrm{AV}}^{(l-1)},Z_{\mathrm{AV}}^{(l-1)}\right)+Z_{\mathrm{AV}}^{(l-1)}\right)$
(4) $\displaystyle Z_{\mathrm{AV}}^{(l)}=$
$\displaystyle\operatorname{LayerNorm}\left(f^{(l)}\left(\operatorname{Dropout}(r^{(l)})\right)\right)+r^{(l)}$
where $f^{(l)}$ is an output dense layer, $\operatorname{LayerNorm}$ is layer
normalization [4] and $\operatorname{Dropout}$ denotes a dropout layer [35].
We define the sequence of operations in Equation (4) as
$Z_{\mathrm{AV}}^{(l)}=\operatorname{SA}_{\\{\mathrm{M+G},\mathrm{T}\\}}(Z_{\mathrm{AV}}^{(l-1)})$
where the self-attention is performed across the joint sources-and-spatial
dimension $\mathrm{M+G}$ and the time axis $\mathrm{T}$. The final
representation $z$, after the repetition of $L$ self-attention blocks, for all
$M$ sources, is obtained through the appropriate slicing and by performing
attentional pooling [40] across the time axis.
$\displaystyle z=$
$\displaystyle\operatorname{MHA}_{\\{\mathrm{T}\\}}\left(\sum_{t}^{T}\widehat{z}_{t},\widehat{z}\right)\in\mathbb{R}^{M\times
D},\enskip\widehat{z}=$ $\displaystyle
Z_{\mathrm{AV}}^{(L)}[:M]\in\mathbb{R}^{M\times T\times D}.$ (5)
Separable: An issue with the joint attention defined above is that as the
number of audio sources $M$, spatial locations $G$, and/or time resolution $T$
of the input tensor $Z_{\mathrm{AV}}^{(0)}$ is increased, the computational
and space complexity of the conventional multi-head attention layer grows
quadratically. For this reason, we propose a separable version of attention:
first attend across time only, $\\{\mathrm{T}\\}$, and then subsequently
attend across the sources and spatial axis, $\\{\mathrm{M+G}\\}$, similar to a
previous work [6]. In this way, we are able to separate the computation and
not construct the full attention tensor with space complexity
$\mathcal{O}(T^{2}\cdot(M+G)^{2}\cdot H)$, where $H$ is the number of
attention heads. Formally, we write the following sequential operations for
the $l$-th layer of the separable self-attention block using the self-
attention module defined in Equation (4):
$\displaystyle a^{(l)}=$
$\displaystyle\operatorname{SA}_{\\{\mathrm{T}\\}}(Z_{\mathrm{AV}}^{(l-1)}[1:M])$
(6) $\displaystyle v^{(l)}=$
$\displaystyle\operatorname{SA}_{\\{\mathrm{T}\\}}(Z_{\mathrm{AV}}^{(l-1)}[M:M+G])$
$\displaystyle Z_{\mathrm{AV}}^{(l)}=$
$\displaystyle\operatorname{Concat}([a^{(l)},v^{(l)}])$ $\displaystyle
Z_{\mathrm{AV}}^{(l)}=$
$\displaystyle\operatorname{SA}_{\\{\mathrm{M+G}\\}}\left(Z_{\mathrm{AV}}^{(l)},Z_{\mathrm{AV}}^{(l)}\right).$
The final audio-visual representation is obtained through attentional pooling
and slicing as before. This is illustrated in Figure 2(c).
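To make the memory argument concrete, the following back-of-the-envelope sketch (our own illustration; the configuration values are assumptions, not the exact experimental settings) compares the size of the full joint attention tensor with the two factorized tensors used by the separable variant.

```python
# Back-of-the-envelope attention-tensor sizes (number of entries) for the
# joint vs. separable self-attention; the configuration is an illustrative
# assumption, not the exact one used in the experiments.
M, G, T, H = 4, 64, 80, 4           # sources, spatial locations, frames, heads

joint = H * ((M + G) * T) ** 2                             # O(T^2 (M+G)^2 H)
separable = H * (M + G) * T ** 2 + H * T * (M + G) ** 2    # time pass + space/source pass

print(f"joint:     {joint:,} entries")          # ~118 million
print(f"separable: {separable:,} entries")      # ~3.2 million
print(f"reduction: {joint / separable:.0f}x")   # roughly 37x fewer entries
```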
### 3.4 Cross-modal attention (CMA)
In this variation of the audio-visual attention module we keep the audio and
the video modality tensors separate, and we perform queries from one modality
to another. Formally the input to the stacked CMA blocks are always a pair of
an audio feature tensor $A^{(0)}_{\mathrm{CMA}}\in\mathbb{R}^{M\times T\times
D}$ and a video feature tensor $V^{(0)}_{\mathrm{CMA}}\in\mathbb{R}^{G\times
T\times D}$.
Joint: We perform a directional attention from the audio (video) modality
tensor to the video (audio) tensor, attending across both sources and time
(space and time) axes. Formally, at the $l$-th layer we have the following
sequence of operations, also illustrated in Figure 2(b):
$\displaystyle a_{1}^{(l)}=$
$\displaystyle\operatorname{MHA}_{\\{\mathrm{G},\mathrm{T}\\}}\left(A_{\mathrm{CMA}}^{(l-1)},V_{\mathrm{CMA}}^{(l-1)}\right)$
$\displaystyle v_{1}^{(l)}=$
$\displaystyle\operatorname{MHA}_{\\{\mathrm{M},\mathrm{T}\\}}\left(V_{\mathrm{CMA}}^{(l-1)},A_{\mathrm{CMA}}^{(l-1)}\right)$
(7) $\displaystyle a_{2}^{(l)}=$
$\displaystyle\operatorname{LayerNorm}(a_{1}^{(l)}+A_{\mathrm{CMA}}^{(l-1)})$
$\displaystyle v_{2}^{(l)}=$
$\displaystyle\operatorname{LayerNorm}\left(v_{1}^{(l)}+V_{\mathrm{CMA}}^{(l-1)}\right)$
$\displaystyle a_{3}^{(l)}=$ $\displaystyle
f\left(\operatorname{Dropout}(a_{2}^{(l)})\right)$ $\displaystyle
v_{3}^{(l)}=$ $\displaystyle
f\left(\operatorname{Dropout}(v_{2}^{(l)})\right)$ $\displaystyle
A_{\mathrm{CMA}}^{(l)}=$
$\displaystyle\operatorname{LayerNorm}\left(a_{3}^{(l)}+A_{\mathrm{CMA}}^{(l-1)}\right)$
$\displaystyle V_{\mathrm{CMA}}^{(l)}=$
$\displaystyle\operatorname{LayerNorm}\left(v_{3}^{(l)}+V_{\mathrm{CMA}}^{(l-1)}\right).$
The left side of the set of the above equations describes video-guided
attention where we use the video features $v^{(l)}$ to modulate the audio
features $a^{(l)}$, and vice versa on the right side. We define the sequence
of operations in Equation (7) as
$A_{\mathrm{CMA}}^{(l)},V_{\mathrm{CMA}}^{(l)}=\operatorname{CMA}_{\\{\mathrm{M|G},\mathrm{T}\\}}(A_{\mathrm{CMA}}^{(l-1)},V_{\mathrm{CMA}}^{(l-1)})$
where each cross-modal attention is performed across the dimension of audio
sources $M$ or spatial locations $G$ (denoted in our notation as
"$\mathrm{M|G}$") and the time axis. The output audio-visual embedding $z$
contains information for all $M$ sources and is obtained after the repetition
of $L$ cross-modal attention blocks by performing attentional pooling across
the time axis on the output audio tensor, as follows:
$\displaystyle z=$
$\displaystyle\operatorname{MHA}_{\\{\mathrm{T}\\}}\left(\sum_{t}^{T}\widehat{z}_{t},\widehat{z}\right)\in\mathbb{R}^{M\times
D},\enskip\widehat{z}=$ $\displaystyle
A_{\mathrm{CMA}}^{(L)}\in\mathbb{R}^{M\times T\times D}.$ (8)
Separable: Similar as discussed in Section 3.3.1, we can reduce the space
complexity of the intermediate representations of the proposed cross-modal
attention layer by performing a separate self-attention across the time-axis
for each modality individually and then perform CMA across the remaining axis
(i.e. sources or spatial locations) as shown next, also illustrated in Figure
2(c):
$\displaystyle a^{(l)}=$
$\displaystyle\operatorname{SA}_{\\{\mathrm{T}\\}}(A_{\mathrm{CMA}}^{(l-1)})$
(9) $\displaystyle v^{(l)}=$
$\displaystyle\operatorname{SA}_{\\{\mathrm{T}\\}}(V_{\mathrm{CMA}}^{(l-1)})$
$\displaystyle A_{\mathrm{CMA}}^{(l)},V_{\mathrm{CMA}}^{(l)}=$
$\displaystyle\operatorname{CMA}_{\\{\mathrm{M|G}\\}}(a^{(l)},v^{(l)}).$
### 3.5 Audio-visual on-screen sound classifier
The audio-visual correspondence between each estimated source $\hat{s}_{m}$
and the video is computed using the extracted audio-visual representation from
the output of our transformer based models defined in Equations (5) and (8),
for the self-attention and cross-modal attention encoders, respectively.
Specifically, we feed this audio-visual embedding $z\in\mathbb{R}^{M\times D}$
through a dense layer $f_{z}$ and we apply a sigmoid activation element-wise
to compute the audio-visual coincidence probabilities $\hat{y}_{1:M}$ for all
sources. Therefore, the final on-screen waveform estimate
$\hat{x}^{\mathrm{on}}$ is produced using these probabilities as soft weights
and multiplied with the corresponding estimated sources:
$\hat{y}_{m}=\sigma\left(f_{z}(z)\right)_{m}\in[0,1],\enskip\forall\enskip
m\in\\{1,\dots,M\\},\enskip\hat{x}^{\mathrm{on}}=\sum_{m=1}^{M}\hat{y}_{m}\hat{s}_{m}.$
(10)
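The following minimal sketch (ours; the dense layer $f_{z}$ is replaced by an assumed random projection and the audio sample rate is a placeholder) illustrates Equation (10): each source's coincidence probability acts as a soft weight on its separated waveform.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D, T_samples = 4, 16, 16000 * 5           # sources, depth, 5 s of audio (assumed rate)

z = rng.standard_normal((M, D))              # audio-visual embedding per source, Eq. (5)/(8)
s_hat = rng.standard_normal((M, T_samples))  # separated source waveforms

w = rng.standard_normal((D, 1))              # stand-in for the dense layer f_z
y_hat = 1.0 / (1.0 + np.exp(-(z @ w)))       # sigmoid -> on-screen probabilities in [0, 1]
x_on = (y_hat * s_hat).sum(axis=0)           # soft-weighted sum of sources, Eq. (10)

print(y_hat.ravel(), x_on.shape)
```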
## 4 Experimental Framework
### 4.1 Data preparation
We use the same training data as AudioScope [40], except instead of only using
a subset of the data filtered by an audio-visual coincidence prediction model
[19], we use all data, respecting the published
splits111https://github.com/google-research/sound-
separation/tree/master/datasets/yfcc100m for the Yahoo Flickr Creative Commons
100 Million Dataset (YFCC100m) [36]. The unfiltered training data consists of
about 1600 hours. We gathered human annotations for 5 second clips from
unfiltered videos from the validation and test splits. Out of 6500 validation
clips labeled, 109 were unanimously rated as on-screen-only, and 1421
unanimous off-screen-only. For 3500 test clips, there were 44 unanimous on-
screen-only, and 788 unanimous off-screen-only. We also experiment with using
a faster video frame rate of 16 frames per second (FPS), instead of 1 FPS as
used by AudioScope [40].
### 4.2 Training
We use the same training procedure as AudioScope [40]. We focus on the
unsupervised scenario, which means all batches are composed of “noisy-labeled
on-screen (NOn)” examples (in the terminology of the original paper [40]).
Each NOn example consists of the video frames and audio for a 5 second video
clip, where additional audio from another random 5 second video clip is used
as synthetic off-screen audio, and is added to the audio of the first clip.
These examples provide noisy labels, because we make the assumption that all
sources that map to the first clip’s audio are on-screen. However, not all
sources that map to the clip’s audio will necessarily be on-screen (e.g. there
could be off-screen background noise in the first clip that is separated as a
source).
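A minimal sketch of how a noisy-labeled on-screen (NOn) example could be assembled is given below (our own illustration, not the data pipeline used here; the audio sample rate and the placeholder waveforms/frames are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
sr, seconds, fps = 16000, 5, 16              # sample rate is an assumed placeholder

# Clip A supplies video frames and "on-screen" audio; clip B supplies audio
# only, which is added as synthetic off-screen sound.
audio_a = rng.standard_normal(sr * seconds)
frames_a = rng.standard_normal((fps * seconds, 128, 128, 3))
audio_b = rng.standard_normal(sr * seconds)

example = {
    "video": frames_a,                       # frames from clip A only
    "audio": audio_a + audio_b,              # input mixture of mixtures (MoM)
    # Noisy-label assumption: separated sources that MixIT assigns to clip A's
    # audio are treated as on-screen, those assigned to clip B as off-screen.
    "reference_on": audio_a,
    "reference_off": audio_b,
}
print(example["audio"].shape, example["video"].shape)
```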
Despite the training examples having these noisy labels, we found that the
exact cross-entropy loss works well, which is computed between the pseudo-
label assignments $y$ provided by MixIT and the audio-visual on-screen
classifier predictions $\hat{y}$ provided by the model. Both audio and visual
embedding networks were pre-trained on AudioSet [15] for unsupervised
coincidence prediction [19]. But unlike AudioScope, we freeze these networks
during training, which we found to yield better results. Also, instead of
initializing the separation model from scratch, we initialize the separation
model with weights pre-trained on unfiltered audio-only MoMs drawn from
YFCC100m, trained for 3.6M steps, which also significantly boosted the
performance of our models. All models are trained on 32 Google Cloud TPU v3
cores with the Adam optimizer [21], a batch size of $256$, and learning rate
of $10^{-4}$.
#### 4.2.1 Calibration
In contrast to AudioScope, we perform an additional post-training calibration
step. Using validation data that is human-labeled as unanimously on-screen-
only or unanimously off-screen-only, we calibrate the on-screen classifier
using the scikit-learn [30] implementation of isotonic regression [47]. The
dataset for this calibration consists of both single-mixture and MoM examples.
For single-mixture examples, the labels are all 1 for on-screen-only videos,
and all 0 for off-screen only videos. MoM examples are constructed from an on-
screen-only or off-screen-only video and the audio from either off-screen-only
videos, or random videos chosen from the entire validation set. For on-screen
MoM examples, the labels are the MixIT assignments for the on-screen mixture,
so they are 1 for sources that map to the on-screen-only video, and 0
otherwise. For off-screen MoM examples, the labels are all 0.
### 4.3 Evaluation
For AudioScope [40], the videos in the validation and test sets were drawn
from a subset of YFCC100m filtered by an unsupervised coincidence model [19].
MoM examples in this dataset use unanimously-rated off-screen-only videos as
the background regardless of whether the foreground video is on-screen-only or
off-screen-only. We refer to this as “filtered off-screen background.” In
addition to this filtered dataset, we constructed new test sets drawn from the
unfiltered validation and test splits of YFCC100m. We created two versions of
these unfiltered datasets: one version, “unfiltered off-screen background,”
uses audio from unanimously-rated off-screen-only unfiltered videos as the
background in MoMs, and the other version, “unfiltered random background,”
uses audio sampled from all videos in the unfiltered validation and test sets.
Evaluation metrics are the same as used for AudioScope [40]: we measure power-
weighted area under the curve of the receiver operating characteristic (AUC-
ROC), scale-invariant signal-to-noise ratio (SI-SNR) [23] of the on-screen
estimate $\hat{x}^{\mathrm{on}}$ for on-screen examples (a measure of the
reconstruction fidelity of on-screen audio), and off-screen suppression ratio
(OSR) for off-screen examples, which measures the power reduction of the on-
screen estimate $\hat{x}^{\mathrm{on}}$ relative to the input audio mixture
power. These metrics are measured on both single mixtures and MoMs. For MoMs,
we additionally report an oracle metric, MixIT∗, which is the SI-SNR of the
estimated on-screen audio using the MixIT assignments from separated sources
to the reference on-screen audio.
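For reference, a minimal implementation of the SI-SNR metric [23] used throughout the evaluation is sketched below (our own code; the power weighting and the OSR computation are omitted).

```python
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant SNR in dB [23]: project the estimate onto the
    reference and compare the target component against the residual."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * np.log10((target @ target + eps) / (residual @ residual + eps))

rng = np.random.default_rng(0)
ref = rng.standard_normal(16000 * 5)
noisy = ref + 0.1 * rng.standard_normal(ref.size)
print(f"{si_snr(noisy, ref):.1f} dB")   # roughly 20 dB for 10% added noise
```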
## 5 Results
Results are shown in Tables 1, 2, and 3 for filtered off-screen background,
unfiltered off-screen background, and unfiltered random background,
respectively. The first row in each table lists the performance of the
equivalent original AudioScope [40] model. Note that this model performs
relatively well on the filtered off-screen background dataset (Table 1), but
its performance drops precipitously on the unfiltered datasets (Tables 2 and
3). In particular, AUC-ROC drops from 0.81 to 0.63 and 0.65, and on-
screen MoM SI-SNR of $\hat{x}^{\mathrm{on}}$ drops from 8.0 dB to 0.7 dB and
-0.7 dB. Notice that the OSRs actually increase from filtered to unfiltered
evaluation; this is due to the model tending to predict probability 0 for all
sources regardless of whether the video is on-screen or not. Thus, AudioScope
exhibits significant mismatch to the unfiltered evaluation datasets.
Using a pre-trained separation model and fine-tuning on unfiltered data boosts the
performance of AudioScope by about 2 dB in terms of both oracle MixIT* on-
screen SI-SNR and $\hat{x}^{\mathrm{on}}$ on-screen SI-SNR, and also
generalizes better to unfiltered datasets. There is a slight performance
improvement using 16 FPS; in particular, OSR increases a bit. Notice that
using calibration shifts the operating point of models: it tends to reduce on-
screen SI-SNR by up to 1 dB, while substantially boosting OSR. This is likely
because the uncalibrated on-screen probabilities tend towards predicting 1, and
this bias degrades rejection of off-screen sounds when the on-screen estimate
is created.
Our proposed attention-based models further improve performance over pre-
trained models that use the original spatio-temporal attention used by
AudioScope [40]. For calibrated models, on filtered off-screen background
MoMs, our attention-based models achieve a slight gain of 0.4 dB on-screen SI-
SNR and 0.6 dB OSR. On unfiltered MoMs with off-screen background, attention-
based models boost on-screen SI-SNR by 1.2 dB with comparable OSR, and on
unfiltered random background MoMs gain almost 3 dB on-screen SI-SNR, with only
slightly lower OSR. It is remarkable that the performance improvements for
these attention-based models seem to increase as the difficulty of the
evaluation data increases.
Table 1: Evaluation results for “filtered off-screen background” test set with
calibration. On-screen MoMs have a median input SI-SNR of 4.4 dB. “PT”
indicates separation model pre-training, and “Cal.” indicates calibration on
labeled validation data.
AV alignment | PT | Cal. | Train data | Single mixture AUC | Single mixture On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | Single mixture Off: $\hat{x}^{\mathrm{on}}$ OSR (dB) | MoM AUC | MoM MixIT* SI-SNR (dB) | MoM On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | MoM Off: $\hat{x}^{\mathrm{on}}$ OSR (dB)
---|---|---|---|---|---|---|---|---|---|---
Spatio-temporal [40] | | | Filt. 1 FPS | 0.62 | 36.6 | 0.5 | 0.81 | 10.6 | 8.0 | 5.3
Spatio-temporal | ✓ | | Unfil. 1 FPS | 0.58 | 29.2 | 1.7 | 0.79 | 12.5 | 9.7 | 4.0
Spatio-temporal | ✓ | | Unfil. 16 FPS | 0.65 | 29.0 | 1.6 | 0.82 | 12.5 | 10.0 | 3.7
Spatio-temporal | ✓ | ✓ | Unfil. 1 FPS | 0.58 | 33.0 | 9.5 | 0.79 | 12.5 | 9.4 | 10.2
Spatio-temporal | ✓ | ✓ | Unfil. 16 FPS | 0.65 | 30.6 | 9.2 | 0.82 | 12.5 | 8.4 | 10.2
Joint SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.67 | 29.4 | 1.5 | 0.84 | 12.5 | 11.0 | 4.6
Separable SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.71 | 28.8 | 2.2 | 0.85 | 12.5 | 10.9 | 4.9
Joint CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.71 | 30.8 | 1.8 | 0.86 | 12.5 | 11.0 | 4.7
Separable CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.69 | 33.2 | 1.3 | 0.85 | 12.5 | 10.9 | 4.3
Joint SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.67 | 31.3 | 8.6 | 0.84 | 12.5 | 9.5 | 10.6
Separable SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.71 | 30.4 | 8.9 | 0.85 | 12.5 | 9.1 | 10.7
Joint CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.71 | 30.5 | 8.9 | 0.86 | 12.5 | 9.8 | 10.8
Separable CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.69 | 32.5 | 8.1 | 0.85 | 12.5 | 9.6 | 10.6
Table 2: Evaluation results for “unfiltered off-screen background” test set
with calibration. On-screen MoMs have a median input SI-SNR of 4.0 dB.
AV alignment | PT | Cal. | Train data | Single mixture AUC | Single mixture On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | Single mixture Off: $\hat{x}^{\mathrm{on}}$ OSR (dB) | MoM AUC | MoM MixIT* SI-SNR (dB) | MoM On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | MoM Off: $\hat{x}^{\mathrm{on}}$ OSR (dB)
---|---|---|---|---|---|---|---|---|---|---
Spatio-temporal [40] | | | Filt. 1 FPS | 0.54 | 19.5 | 9.1 | 0.63 | 10.2 | 0.7 | 16.6
Spatio-temporal | ✓ | | Unfil. 1 FPS | 0.59 | 29.1 | 5.2 | 0.72 | 12.1 | 8.1 | 7.0
Spatio-temporal | ✓ | | Unfil. 16 FPS | 0.70 | 30.2 | 4.9 | 0.78 | 12.1 | 8.4 | 6.3
Spatio-temporal | ✓ | ✓ | Unfil. 1 FPS | 0.59 | 30.2 | 22.6 | 0.72 | 12.1 | 8.6 | 24.9
Spatio-temporal | ✓ | ✓ | Unfil. 16 FPS | 0.70 | 29.2 | 24.8 | 0.78 | 12.1 | 8.4 | 26.0
Joint SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.68 | 30.5 | 5.0 | 0.80 | 12.0 | 10.2 | 6.3
Separable SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.77 | 30.4 | 5.2 | 0.83 | 12.0 | 10.4 | 6.8
Joint CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.70 | 31.1 | 4.7 | 0.80 | 12.0 | 9.8 | 6.6
Separable CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.69 | 32.1 | 4.1 | 0.81 | 12.1 | 9.8 | 6.4
Joint SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.68 | 29.9 | 24.6 | 0.80 | 12.0 | 9.1 | 25.1
Separable SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.77 | 29.8 | 24.8 | 0.83 | 12.0 | 9.8 | 26.2
Joint CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.70 | 32.1 | 24.5 | 0.80 | 12.0 | 9.5 | 25.7
Separable CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.69 | 32.0 | 23.1 | 0.81 | 12.1 | 9.0 | 25.2
Table 3: Evaluation results for “unfiltered random background” test set with
calibration. On-screen MoMs have a median input SI-SNR of 2.5 dB.
AV alignment | PT | Cal. | Train data | Single mixture AUC | Single mixture On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | Single mixture Off: $\hat{x}^{\mathrm{on}}$ OSR (dB) | MoM AUC | MoM MixIT* SI-SNR (dB) | MoM On: $\hat{x}^{\mathrm{on}}$ SI-SNR (dB) | MoM Off: $\hat{x}^{\mathrm{on}}$ OSR (dB)
---|---|---|---|---|---|---|---|---|---|---
Spatio-temporal [40] | | | Filt. 1 FPS | 0.54 | 19.5 | 9.1 | 0.65 | 8.0 | -0.7 | 17.4
Spatio-temporal | ✓ | | Unfil. 1 FPS | 0.59 | 29.1 | 5.2 | 0.72 | 10.1 | 5.9 | 5.6
Spatio-temporal | ✓ | | Unfil. 16 FPS | 0.70 | 30.2 | 4.9 | 0.73 | 10.1 | 5.8 | 4.2
Spatio-temporal | ✓ | ✓ | Unfil. 1 FPS | 0.59 | 30.2 | 22.6 | 0.72 | 10.1 | 5.8 | 23.4
Spatio-temporal | ✓ | ✓ | Unfil. 16 FPS | 0.70 | 29.2 | 24.8 | 0.73 | 10.1 | 5.2 | 22.5
Joint SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.68 | 30.5 | 5.0 | 0.76 | 10.1 | 8.9 | 3.9
Separable SA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.77 | 30.4 | 5.2 | 0.78 | 10.1 | 8.9 | 4.1
Joint CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.70 | 31.1 | 4.7 | 0.75 | 10.2 | 8.0 | 4.4
Separable CMA $\times 4$ | ✓ | | Unfil. 16 FPS | 0.69 | 32.1 | 4.1 | 0.77 | 10.1 | 8.9 | 3.9
Joint SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.68 | 29.9 | 24.6 | 0.76 | 10.1 | 8.1 | 21.2
Separable SA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.77 | 29.8 | 24.8 | 0.78 | 10.1 | 8.4 | 20.0
Joint CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.70 | 32.1 | 24.5 | 0.75 | 10.2 | 7.1 | 20.6
Separable CMA $\times 4$ | ✓ | ✓ | Unfil. 16 FPS | 0.69 | 32.0 | 23.1 | 0.77 | 10.1 | 8.5 | 20.6
## 6 Conclusion
In this paper we have presented extensions and refinements of the AudioScope
model that further improve upon previous audio-visual self-supervised methods
for separating on-screen sounds. Our proposed model is able to operate on
higher frame-rates and build rich semantic associations between audio and
video modalities using the proposed self-attention as well as cross-modal
attention mechanisms. Our experimental results show that our model is able to
generalize to a much wider set of in-the-wild videos than existing approaches
while being trained solely with in-the-wild videos. The approach we followed
here is to first separate the audio before aligning each on-screen source with
its corresponding video object. Other works have followed the converse process
of first identifying visual objects and using them to condition sound
separation for each object. Future works may explore unifying these two
directions into a single model that works in both ways.
## References
* [1] T. Afouras, J. S. Chung, and A. Zisserman. The conversation: Deep audio-visual speech enhancement. Proc. Interspeech 2018, pages 3244–3248, 2018.
* [2] T. Afouras, A. Owens, J. S. Chung, and A. Zisserman. Self-supervised learning of audio-visual objects from video. arXiv preprint arXiv:2008.04237, 2020.
* [3] R. Arandjelovic and A. Zisserman. Objects that sound. In Proceedings of the European conference on computer vision (ECCV), pages 435–451, 2018.
* [4] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
* [5] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* [6] G. Bertasius, H. Wang, and L. Torresani. Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095, 2021.
* [7] M. Chatterjee, J. Le Roux, N. Ahuja, and A. Cherian. Visual scene graphs for audio source separation. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 1204–1213, 2021.
* [8] Y. Cheng, R. Wang, Z. Pan, R. Feng, and Y. Zhang. Look, listen, and attend: Co-attention network for self-supervised audio-visual representation learning. In Proceedings of the 28th ACM International Conference on Multimedia, pages 3884–3892, 2020.
* [9] L. Drude, D. Hasenklever, and R. Haeb-Umbach. Unsupervised training of a deep clustering model for multichannel blind source separation. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 695–699, 2019.
* [10] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein. Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation. ACM Transactions on Graphics (TOG), 37(4):1–11, 2018.
* [11] C. Gan, D. Huang, H. Zhao, J. B. Tenenbaum, and A. Torralba. Music gesture for visual sound separation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10478–10487, 2020.
* [12] R. Gao, R. Feris, and K. Grauman. Learning to separate object sounds by watching unlabeled video. In Proceedings of the European Conference on Computer Vision (ECCV), pages 35–53, 2018.
* [13] R. Gao and K. Grauman. Co-separating sounds of visual objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3879–3888, 2019.
* [14] R. Gao and K. Grauman. Visualvoice: Audio-visual speech separation with cross-modal consistency. arXiv preprint arXiv:2101.03149, 2021.
* [15] J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter. Audio set: An ontology and human-labeled dataset for audio events. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 776–780, 2017.
* [16] J. Hershey and J. Movellan. Audio vision: Using audio-visual synchrony to locate sounds. Advances in neural information processing systems, 12:813–819, 1999.
* [17] J. R. Hershey and M. Casey. Audio-visual sound separation via hidden markov models. In Advances in Neural Information Processing Systems, pages 1173–1180, 2002.
* [18] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
* [19] A. Jansen, D. P. Ellis, S. Hershey, R. C. Moore, M. Plakal, A. C. Popat, and R. A. Saurous. Coincidence, categorization, and consolidation: Learning to recognize sounds with minimal supervision. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 121–125, 2020.
* [20] I. Kavalerov, S. Wisdom, H. Erdogan, B. Patton, K. Wilson, J. Le Roux, and J. R. Hershey. Universal sound separation. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2019.
* [21] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. International Conference on Learning Representations (ICLR), 2015.
* [22] B. Korbar, D. Tran, and L. Torresani. Cooperative learning of audio and video models from self-supervised synchronization. In Advances in Neural Information Processing Systems, pages 7763–7774, 2018.
* [23] J. Le Roux, S. Wisdom, H. Erdogan, and J. R. Hershey. SDR–half-baked or well done? In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 626–630, 2019.
* [24] J. Lee, S.-W. Chung, S. Kim, H.-G. Kang, and K. Sohn. Looking into your speech: Learning cross-modal affinity for audio-visual speech separation. arXiv preprint arXiv:2104.02775, 2021.
* [25] X. Li, Y. Zhang, C. Liu, B. Shuai, Y. Zhu, B. Brattoli, H. Chen, I. Marsic, and J. Tighe. Vidtr: Video transformer without convolutions. arXiv preprint arXiv:2104.11746, 2021.
* [26] Y.-B. Lin, H.-Y. Tseng, H.-Y. Lee, Y.-Y. Lin, and M.-H. Yang. Unsupervised sound localization via iterative contrastive learning. arXiv preprint arXiv:2104.00315, 2021.
* [27] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. Object-centric learning with slot attention. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 11525–11538. Curran Associates, Inc., 2020.
* [28] T. Ochiai, M. Delcroix, Y. Koizumi, H. Ito, K. Kinoshita, and S. Araki. Listen to what you want: Neural network-based universal sound selector. arXiv preprint arXiv:2006.05712, 2020.
* [29] A. Owens and A. A. Efros. Audio-visual scene analysis with self-supervised multisensory features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 631–648, 2018.
* [30] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
* [31] F. Pishdadian, G. Wichern, and J. Le Roux. Finding strength in weakness: Learning to separate sounds with weak supervision. arXiv preprint arXiv:1911.02182, 2019.
* [32] T. Rahman and L. Sigal. Weakly-supervised audio-visual sound source detection and separation. arXiv preprint arXiv:2104.02606, 2021.
* [33] A. Rouditchenko, H. Zhao, C. Gan, J. McDermott, and A. Torralba. Self-supervised audio-visual co-segmentation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2357–2361. IEEE, 2019.
* [34] P. Seetharaman, G. Wichern, J. Le Roux, and B. Pardo. Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 356–360, 2019.
* [35] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014.
* [36] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64–73, 2016.
* [37] E. Tzinis, S. Venkataramani, and P. Smaragdis. Unsupervised deep clustering for source separation: Direct learning from mixtures using spatial information. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 81–85, 2019.
* [38] E. Tzinis, S. Venkataramani, Z. Wang, C. Subakan, and P. Smaragdis. Two-step sound source separation: Training on learned latent targets. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 31–35. IEEE, 2020.
* [39] E. Tzinis, S. Wisdom, J. R. Hershey, A. Jansen, and D. P. W. Ellis. Improving universal sound separation using sound classification. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 96–100, 2020.
* [40] E. Tzinis, S. Wisdom, A. Jansen, S. Hershey, T. Remez, D. P. Ellis, and J. R. Hershey. Into the wild with audioscope: Unsupervised audio-visual separation of on-screen sounds. In Proc. International Conference on Learning Representations (ICLR), 2021.
* [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30:5998–6008, 2017.
* [42] Q. Wang, H. Muckenhirn, K. Wilson, P. Sridhar, Z. Wu, J. Hershey, R. A. Saurous, R. J. Weiss, Y. Jia, and I. L. Moreno. Voicefilter: Targeted voice separation by speaker-conditioned spectrogram masking. In Proc. Interspeech, 2019.
* [43] S. Wisdom, J. R. Hershey, K. Wilson, J. Thorpe, M. Chinen, B. Patton, and R. A. Saurous. Differentiable consistency constraints for improved deep speech enhancement. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 900–904, 2019.
* [44] S. Wisdom, E. Tzinis, H. Erdogan, R. J. Weiss, K. Wilson, and J. R. Hershey. Unsupervised sound separation using mixtures of mixtures. In Advances in Neural Information Processing Systems, 2020.
* [45] Y. Wu, L. Zhu, Y. Yan, and Y. Yang. Dual attention matching for audio-visual event localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6292–6300, 2019.
* [46] J. Yu, Y. Cheng, and R. Feng. Mpn: Multimodal parallel network for audio-visual event localization. arXiv preprint arXiv:2104.02971, 2021.
* [47] B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699, 2002.
* [48] L. Zhu and E. Rahtu. Visually guided sound source separation and localization using self-supervised motion representations. arXiv preprint arXiv:2104.08506, 2021.
# The Drag Instability in a 2D Isothermal C-shock
Pin-Gao Gu
Institute of Astronomy & Astrophysics, Academia Sinica, Taipei 10617, Taiwan
###### Abstract
We extend the linear analysis of the drag instability in a 1D perpendicular
isothermal C-shock by Gu & Chen to 2D perpendicular and oblique C-shocks in
the typical environment of star-forming clouds. Simplified dispersion
relations are derived for the unstable modes. We find that the mode property
of the drag instability generally depends on the ratio of the transverse
(normal to the shock flow) to longitudinal (along the shock flow) wavenumber.
For the transversely large-scale mode, the growth rate and wave frequency of
the drag instability in a 2D shock resemble those in a 1D shock. For the
transversely small-scale mode, the drag instability is characterized by an
unstable mode coupled with an acoustic mode primarily along the transverse
direction. When the shock is perpendicular or less oblique, there exists a
slowly propagating mode, which can potentially grow into a nonlinear regime
and contribute to the maximum growth of the instability. In contrast, when the
shock is more oblique, this slowly propagating unstable mode disappears, and
the maximum growth of the drag instability is likely contributed from the
transversely large-scale mode (i.e., almost 1D mode). In all cases that we
consider, the magnitude of the density perturbations is significantly larger
than that of the velocity and magnetic field perturbations, implying that the
density enhancement governs the dynamics in the linear regime of the
instability. A few issues in the linear analysis, as well as the possible
astrophysical implications, are also briefly discussed.
## 1 Introduction
Stars form in the cold and dense cores of molecular clouds (e.g., Kennicutt &
Evans, 2012; Hennebelle & Inutsuka, 2019; Girichidis et al., 2020). Many
observations suggest that molecular clouds exhibit highly supersonic
turbulence (e.g., Elmegreen & Scalo, 2004; Ballesteros-Paredes et al., 2007;
Hennebelle & Falgarone, 2012). Additionally, cosmic rays can weakly ionize the
neutral gas to produce ions with a typical ionization fraction $\lesssim
10^{-6}$ (e.g., Draine et al, 1983; Tielens, 2005; Dalgarno, 2006; Indriolo &
McCall, 2012). Since the magnetic fields are normally assumed to be tightly
coupled to the ions, which in turn interact with the neutrals through the drag
force, the ions together with the magnetic fields can systematically drift
relative to the neutrals, a phenomenon known as ambipolar diffusion (Spitzer,
1956; Mestel & Spitzer, 1956; Shu, 1992; Zweibel, 2015).
Of particular interest in this study is the interplay between ambipolar
diffusion and shocks. While jump-type (J-type) shocks exhibit sharp supersonic
(or super-Alfvénic for magnetized shocks) discontinuities in physical
properties, in the case of nonideal magnetohydrodynamics (MHD), the different
signal speeds of the ions and neutrals along with the ambipolar diffusion
broaden the discontinuity, leading to continuous-type (C-type) shocks with a
smooth transition in physical quantities between the pre- and post-shock
regions (Draine et al, 1983; Draine & McKee, 1993). Observationally, various
efforts have been made to detect such features in turbulent molecular clouds
(e.g., Li & Houde, 2008; Hezareh et al., 2010, 2014; Xu & Li, 2016; Tang et
al., 2018), though most of them are indirect measurements and highly dependent
on the adopted dynamical and chemical models (e.g., Flower & Pineau Des
Forêts, 1998, 2010; Gusdorf et al., 2008; Lehmann & Wardle, 2016; Valdivia et
al., 2017, see also the Introduction of Gu & Chen 2020, hereafter GC (20)).
Theoretically, Chen & Ostriker (2012, 2014) analyzed the structures of
perpendicular and oblique C-shocks in exquisite detail in the typical
environment of star-forming clouds and extended their work to colliding flows
to explore the formation of cores and filaments in numerical simulations.
However, a C-shock is not necessarily a dynamically stable structure. Based on
the 1D background state of steady, plane-parallel C-shocks derived by Chen &
Ostriker (2012), GC (20) conducted a Wentzel-Kramers-Brillouin-Jeffreys (WKBJ)
analysis and confirmed the postulation of Gu et al. (2004) that the drag
instability can occur in a 1D isothermal perpendicular C-shock. In an
environment where the ambipolar diffusion is efficient and the
ionization–recombination equilibrium is nearly attained, the drag instability,
a plasma effect discovered by Gu et al. (2004), ensues from the ion-neutral
drag and is a local linear overstability phenomenon associated with an
exponentially growing mode of a propagating wave. On the other hand, it is
well known that C-shocks are also susceptible to the Wardle instability
(Wardle, 1990, 1991). However, the Wardle instability is suppressed by the
fast ionization–recombination process expected to pervade star-forming clouds
(e.g., Smith & Mac Low, 1997; Stone, 1997; Falle et al., 2009).
It was found by GC (20) that the growing wave mode of the drag instability can
only propagate downstream within a 1D shock and subsequently decay in the
post-shock region. The authors demonstrated that the growth of the drag
instability in a 1D perpendicular shock is limited by the short time span for
the unstable wave to stay within a C-shock. Consequently, the maximum growth
of an unstable wave is given by the maximum total growth (MTG), which is
defined by GC (20) as the maximum value of the total growth of an unstable
mode traveling over an entire shock width before it is damped in the post-
shock region. Specifically, MTG$=\exp\int_{\rm shock\
width}\Gamma_{grow}dx/v_{ph,x}$, where $\Gamma_{grow}$ is the growth rate and
$v_{ph,x}$ is the longitudinal phase velocity. The authors estimated the MTG
for such an unstable wave in typical environments of star-forming clouds. They
found that a stronger shock with a larger shock width favors a more
appreciable growth of the 1D drag instability. Most importantly, the linear
analysis suggests that the density enhancement induced by the drag instability
dominates over the magnetic field and velocity enhancements in the dynamical
growth of the unstable mode.
One of the most important questions for shocks in this context is whether a
shock instability can lead to fragmentation that subsequently undergoes
gravitational collapse and eventually induces star formation. The predominant
density enhancement of the drag instability provides a possible mechanism to
form supercritical clumps/cores within a C-shock in the typical environments
of star-forming clouds. As a preliminary activity, the analysis by GC (20)
focused on the basic behavior of the drag instability within the steady-state
profiles of C-shocks, which is 1D, linear, and non-self-gravitating. Hence,
how exactly this plasma instability plays a role in the observational evidence
of prestellar cores and filamentary networks in relation to magnetic field
morphology (e.g., André et al., 2014; Li et al., 2014) is not clear and has
yet to be addressed in terms of the current theoretical framework in 1D.
Obviously, the theory is still in its infancy and requires further development
toward more realistic physical applications.
Therefore, as a natural consequence of an ongoing effort, we extend the 1D
analysis of the drag instability by GC (20) and conduct a 2D linear analysis
for both perpendicular and oblique shocks in this study. The aim is to develop
a more comprehensive analytical work to provide useful information for probing
and characterizing the drag instability in future numerical simulations and
proceed to a more realistic case for future astrophysical applications.
The contents of the paper are structured as follows. In Section 2, we begin
with a linear analysis of the drag instability in a 2D perpendicular shock and
identify the mode with the maximal growth in the fiducial model of a C-shock.
Based on that experience, we proceed to the linear analysis of the drag
instability in an oblique shock in Section 3 and study the instability
properties in both the fiducial model and a model referred to as model V06 for
a stronger and wider shock for comparison. Finally, the summary and brief
discussions are presented in Section 4.
## 2 Linear analysis: isothermal perpendicular shocks
In cold molecular clouds and their substructures, the dynamical evolution of
ions and neutrals is governed by their individual continuity and momentum
equations, which include cosmic-ray ionization, ion–electron recombination in
the gas phase, mutual collisional drag force, the Lorentz force on ions, and
the pressure force with the isothermal equation of state. Additionally, the
evolution of magnetic fields is governed by the induction equation for ions.
The entire set of equations then reads as follows (e.g., Draine, 1980; Shu,
1992; Chen & Ostriker, 2012; GC, 20):
$\displaystyle{\partial\rho_{n}\over\partial t}+\nabla\cdot(\rho_{n}{\bf
v_{n}})=0,$ (1) $\displaystyle{\partial\rho_{i}\over\partial
t}+\nabla\cdot(\rho_{i}{\bf
v_{i}})=-\beta\rho_{i}^{2}+\xi_{\mathrm{CR}}\rho_{n},$ (2)
$\displaystyle\rho_{n}\left[{\partial{\bf v_{n}}\over\partial t}+({\bf
v_{n}}\cdot\nabla){\bf v_{n}}\right]+\nabla p_{n}={\bf f_{d}},$ (3)
$\displaystyle\rho_{i}\left[{\partial{\bf v_{i}}\over\partial t}+({\bf
v_{i}}\cdot\nabla){\bf v_{i}}\right]+\nabla p_{i}-{1\over
4\pi}(\nabla\times{\bf B})\times{\bf B}=-{\bf f_{d}},$ (4)
$\displaystyle{\partial{\bf B}\over\partial t}+\nabla\times({\bf B}\times{\bf
v_{i}})=0,$ (5)
where ${\bf v}$ is the velocity, $\rho$ is the density, ${\bf B}$ is the
magnetic field, $p=\rho c_{s}^{2}$ is the gas pressure, where the isothermal
sound speed $c_{s}$ is 0.2 km/s at a temperature of $\sim 10$ K (Fukui &
Kawamura, 2010), and the subscripts $i$ and $n$ denote the ion and neutral
species, respectively. Note that the neutrals and ions are coupled by the
collisional drag force ${\bf f_{d}}\equiv\gamma\rho_{i}\rho_{n}{\bf
v_{d}}=\gamma\rho_{i}\rho_{n}({\bf v_{i}}-{\bf v_{n}})$, where $\gamma\approx
3.5\times 10^{13}$ cm3 s-1 g-1 is the drag force coefficient (Draine et al,
1983). The evolution of ion number density is controlled by the cosmic-ray
ionization rate $\xi_{\mathrm{CR}}$ and the ion recombination in the gas phase
$\beta$ (see e.g., Chen & Ostriker, 2012). We define the ionization parameter
$\chi_{i0}\equiv 10^{6}\sqrt{\xi_{\mathrm{CR}}(m_{n}/m_{i})/(\beta m_{i})}$,
as done by Chen & Ostriker (2012). We thus adopt $\beta\approx 10^{-7}$ cm3
s${}^{-1}/m_{i}$ and $\xi_{\mathrm{CR}}\approx 10^{-17}$ s-1 ($m_{i}/m_{n}$)
in this study (see, e.g., Shu, 1992; Tielens, 2005), where $m_{n}=2.3$ and
$m_{i}=30$ times the hydrogen mass are adopted. Indeed, $\chi_{i0}=10$
falls in the typical range of $\chi_{i0}$ observed in star-forming regions
($\sim 1-20$; see, e.g., McKee et al. 2010).
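As a quick consistency check of the adopted parameters (a sketch of our own arithmetic, not part of the original model), the definition of $\chi_{i0}$ with the above $\beta$ and $\xi_{\mathrm{CR}}$ indeed gives $\chi_{i0}=10$.

```python
import numpy as np

m_H = 1.6726e-24                   # hydrogen mass in g
m_n, m_i = 2.3 * m_H, 30.0 * m_H   # mean neutral and ion masses

beta = 1e-7 / m_i                  # recombination coefficient, cm^3 s^-1 / m_i
xi_CR = 1e-17 * (m_i / m_n)        # cosmic-ray ionization rate, s^-1 (m_i/m_n)

chi_i0 = 1e6 * np.sqrt(xi_CR * (m_n / m_i) / (beta * m_i))
print(chi_i0)                      # 10.0, the fiducial ionization parameter
```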
### 2.1 Background States and Linearized Equations
Since there is no background structure parallel to the shock front in a 2D
plane-parallel, perpendicular shock, there is no background fluid motion normal
to the shock flow. As a result, the background states of a 2D perpendicular shock are the
same as those in a 1D perpendicular shock. Therefore, a detailed description
of the background states can be found in GC (20). For clarity, here we simply
summarize the shock model to smoothly connect to the linear analysis to be
presented later in the paper. We consider the gas flow toward the $+x$
direction across the shock with the magnetic field in the $y$-direction. The
equilibrium equations for a plane-parallel shock (i.e. $\partial/\partial
t=\partial/\partial y=0$) are derived from Equations(1)–(5), which are given
by Equations(6)–(10) in GC (20) to provide the background state of our linear
analysis in the shock frame. In these equilibrium equations, GC (20) also
employed a strong coupling approximation under which the ion-neutral drag is
balanced by the magnetic pressure gradient of the ions, i.e.,
$\gamma\rho_{n}L_{B}V_{d}=V_{A,i}^{2}$ where $L_{B}\equiv(-d\ln B/dx)^{-1}$
and $V_{A,i}$ is the Alfvén speed of the ions. Additionally, the equilibrium
between cosmic ionization and recombination is assumed; namely,
$\beta\rho_{i}^{2}=\xi_{\mathrm{CR}}\rho_{n}$ (see Chen & Ostriker (2012) for
justifications of such a choice).
Applying the zero-gradient boundary conditions ($d/dx=0$) far upstream and
downstream (i.e., no structures in the steady pre- and post-shock regions),
Chen & Ostriker (2012) derived the 1D structure equation of a C-shock.
Together with the pre-shock conditions described by $n_{0}$ (neutral number
density), $v_{0}=V_{i,0}=V_{n,0}$ (shock velocity), $B_{0}$, and $\chi_{i0}$
(here and throughout this paper, we use the subscript 0 to denote a physical
quantity in the pre-shock region), the field compression ratio $r_{B}\equiv
B/B_{0}=V_{i,0}/V_{i}$ as well as the neutral compression ratio
$r_{n}\equiv\rho_{n}/\rho_{n,0}=V_{n,0}/V_{n}$ can be solved, and $\rho_{i}$,
$\rho_{n}$, $B$, $V_{i}$, and $V_{n}$ can be subsequently obtained throughout
the shock width. Following GC (20), we place the shock front at $x=0$ pc for
convenience in plotting and referring to shock properties as a function of $x$. In
this setup of the problem, the background drift velocity $V_{d}=V_{i}-V_{n}<0$
inside the C-shock (i.e. within the smooth shock transition). As in GC (20),
we adopt the 1D steady C-shock model shown in Figure 3 of Chen & Ostriker
(2012) as the fiducial model for the background state in the shock frame, with
the pre-shock parameters $n_{0}=500$ cm-3, $v_{0}=5$ km/s, $B_{0}=5\mu$G, and
$\chi_{i0}=10$. The left panel of Figure 1 shows $r_{n}/r_{B}$ in the fiducial
model, which is the same as Figure 1 in GC (20).
We now consider the perturbation eigenvector given by
$U(\omega,k_{x},k_{y})\equiv(\delta\rho_{i},\delta v_{x,i},\delta
v_{y,i},\delta B_{x},\delta B_{y},\delta\rho_{n},\delta v_{x,n},\delta
v_{y,n})^{T}$ multiplied by $\exp[\mathrm{i}(k_{x}x+k_{y}y+\omega t)]$ under
the WKBJ approximation, where $k_{x}$ is the longitudinal wavenumber, $k_{y}$
is the transverse wavenumber, and $\omega$ is the eigenvalue evaluated in the
shock frame. Substituting these perturbations and the background states into Equations (1)–(5) yields the following linearized equations (see Gu et al. 2004 and GC 20 for the 1D case):
$PU=\mathrm{i}\omega U,$ (6)
where
$\displaystyle P=\left[\begin{array}{cccccccc}
-\mathrm{i}k_{x}V_{i}-2\beta\rho_{i}&-\mathrm{i}k_{x}\rho_{i}&-\mathrm{i}k_{y}\rho_{i}&0&0&\xi_{\mathrm{CR}}&0&0\\
-\mathrm{i}k_{x}\frac{c^{2}_{s}}{\rho_{i}}&-\mathrm{i}k_{x}V_{i}-\gamma\rho_{n}&0&\mathrm{i}k_{y}\frac{V^{2}_{A,i}}{B_{y}}&-\mathrm{i}k_{x}\frac{V^{2}_{A,i}}{B_{y}}&-\gamma V_{d}&\gamma\rho_{n}&0\\
-\mathrm{i}k_{y}\frac{c^{2}_{s}}{\rho_{i}}&0&-\mathrm{i}k_{x}V_{i}-\gamma\rho_{n}&0&0&0&0&\gamma\rho_{n}\\
0&\mathrm{i}k_{y}B_{y}&0&-\mathrm{i}k_{x}V_{i}&0&0&0&0\\
0&-\mathrm{i}k_{x}B_{y}&0&0&-\mathrm{i}k_{x}V_{i}&0&0&0\\
0&0&0&0&0&-\mathrm{i}k_{x}V_{n}&-\mathrm{i}k_{x}\rho_{n}&-\mathrm{i}k_{y}\rho_{n}\\
\gamma V_{d}&\gamma\rho_{i}&0&0&0&-\mathrm{i}k_{x}\frac{c_{s}^{2}}{\rho_{n}}&-\mathrm{i}k_{x}V_{n}-\gamma\rho_{i}&0\\
0&0&\gamma\rho_{i}&0&0&-\mathrm{i}k_{y}\frac{c_{s}^{2}}{\rho_{n}}&0&-\mathrm{i}k_{x}V_{n}-\gamma\rho_{i}
\end{array}\right].$ (15)
Without loss of generality, we adopt a constant wavenumber $k_{x}=1/(0.015\ {\rm pc})\equiv k_{fid}$, as used in GC (20). This choice of $k_{x}$ is made to satisfy the WKBJ approximation, as demonstrated in the right panel of Figure 1, where $k_{x}L_{p}$ and $k_{x}L_{B}$ are both shown to be much larger than unity (also refer to Figure 1 in GC, 20).
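For readers who want to reproduce the procedure numerically, a minimal sketch is given below: it assembles $P$ at a single location for one wavenumber pair and solves the eigenvalue problem with standard linear algebra. The background values are illustrative placeholders only (the actual profiles must be taken from the 1D C-shock solution of Chen & Ostriker 2012), so the printed numbers are not those of Figure 2.

```python
import numpy as np

# Sketch: assemble the 8x8 matrix P at one location inside the shock and solve
# P U = i*omega*U. All background values below are illustrative placeholders (cgs).
m_H = 1.6726e-24
pc  = 3.086e18

rho_n = 2.3 * m_H * 2500              # compressed neutral mass density (illustrative)
beta  = 1e-7 / (30 * m_H)             # recombination coefficient
xi_CR = 1e-17 * (30 / 2.3)            # cosmic-ray ionization rate
rho_i = np.sqrt(xi_CR * rho_n / beta) # ionization-recombination equilibrium
gamma = 3.5e13                        # drag coefficient
c_s   = 1.9e4                         # isothermal sound speed (~10 K)
V_i, V_n = 1.0e5, 3.0e5               # ion/neutral velocities in the shock frame (illustrative)
B     = 25e-6                         # compressed field strength
V_d   = V_i - V_n
V_Ai2 = B**2 / (4 * np.pi * rho_i)

k_x = 1 / (0.015 * pc)                # k_fid
k_y = 5 * k_x

I = 1j
P = np.array([
 [-I*k_x*V_i - 2*beta*rho_i, -I*k_x*rho_i, -I*k_y*rho_i, 0, 0, xi_CR, 0, 0],
 [-I*k_x*c_s**2/rho_i, -I*k_x*V_i - gamma*rho_n, 0, I*k_y*V_Ai2/B, -I*k_x*V_Ai2/B, -gamma*V_d, gamma*rho_n, 0],
 [-I*k_y*c_s**2/rho_i, 0, -I*k_x*V_i - gamma*rho_n, 0, 0, 0, 0, gamma*rho_n],
 [0, I*k_y*B, 0, -I*k_x*V_i, 0, 0, 0, 0],
 [0, -I*k_x*B, 0, 0, -I*k_x*V_i, 0, 0, 0],
 [0, 0, 0, 0, 0, -I*k_x*V_n, -I*k_x*rho_n, -I*k_y*rho_n],
 [gamma*V_d, gamma*rho_i, 0, 0, 0, -I*k_x*c_s**2/rho_n, -I*k_x*V_n - gamma*rho_i, 0],
 [0, 0, gamma*rho_i, 0, 0, -I*k_y*c_s**2/rho_n, 0, -I*k_x*V_n - gamma*rho_i],
], dtype=complex)

# P U = i*omega*U  =>  omega = -i*lambda for each eigenvalue lambda of P.
omega = -1j * np.linalg.eigvals(P)
most_unstable = omega[np.argmin(omega.imag)]   # a growing mode has Im(omega) < 0
print(f"Gamma_grow = {-most_unstable.imag:.2e} s^-1, "
      f"omega_wave = {most_unstable.real:.2e} s^-1")
```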
Figure 1: The $r_{n}/r_{B}$ ratio of the background state throughout the
C-shock in the fiducial model (left panel) and a test for the WKBJ
approximation when $k_{x}=k_{fid}\equiv 1/(0.015\ {\rm pc})$ (right panel). Here
$L_{p}\equiv|d\ln p/dx|^{-1}$ and $L_{B}$ are the scale heights for the gas
pressure and magnetic field of the background states, respectively.
Figure 2: Growth rate $\Gamma_{grow}$ (left panel) and its corresponding wave
frequency $\omega_{wave}$ (right panel) of the drag instability as a function
of $k_{y}/k_{x}$ in the four cases of $k_{x}$ at $x\approx 0.2$ pc. Note that
the wave frequency undergoes a steep change from a negative to a positive
value at $k_{y}/k_{x}\approx 16.84$ regardless of the value of $k_{x}$. The
wave frequency is not presented when the growth rate is zero, i.e., the green
curve in the range of $k_{y}/k_{x}\sim 1-10$ and the red curve in the range of
$k_{y}/k_{x}\lesssim 10$.
Figure 2 shows the growth rate $\Gamma_{grow}$ ($=-$Im[$\omega$]; a growing mode has Im[$\omega$]$<0$; left panel) and its corresponding wave frequency $\omega_{wave}$ ($=$Re[$\omega$], right panel) of the unstable wave mode as a function of $k_{y}/k_{x}$ in the shock frame, at the location of $x\approx 0.2$ pc in the middle of the C-shock. It is the only unstable mode among all eigenmodes computed from Equation (6). When the longitudinal wavelengths are not exceptionally short (i.e., $k_{x}=k_{fid}$ and $5k_{fid}$ in the figure), the growth rate decreases approximately monotonically with increasing $k_{y}/k_{x}$. However, when $k_{x}$ is as large as $20k_{fid}$ (green curve) or $50k_{fid}$ (red curve), the growth rate decreases and even drops to zero, except when $k_{y}/k_{x}$ is sufficiently large; this reappearance of the growing mode is accompanied by a jump of the wave frequency around $k_{y}/k_{x}\gtrsim 12.7$, as shown in the right panel of Figure 2. We explore the underlying physics of these mode properties with simplified dispersion relations in the following subsection.
### 2.2 Simplified Dispersion Relations
The eigenvalue problem for Equation (6) amounts to a complicated characteristic equation, an eighth-order polynomial in $\omega$. To make the physical analysis of the unstable mode tractable, we numerically solve Equation (6) while removing as many terms as possible from the matrix $P$, requiring that the same growth rate and wave frequency as those shown in Figure 2 are still obtained. After this exercise, we find that Equation (6) can be reduced to
$\displaystyle(\Gamma_{i}+2\beta\rho_{i}){\delta\rho_{i}\over\rho_{i}}+\mathrm{i}k_{y}\delta
v_{i,y}-\xi_{CR}{\rho_{n}\over\rho_{i}}{\delta\rho_{n}\over\rho_{n}}=0,$ (16)
$\displaystyle(\Gamma_{i}+\gamma\rho_{n})\delta v_{i,y}-\gamma\rho_{n}\delta
v_{n,y}=0,$ (17)
$\displaystyle\Gamma_{n}{\delta\rho_{n}\over\rho_{n}}+\mathrm{i}k_{x}\delta
v_{n,x}+\mathrm{i}k_{y}\delta v_{n,y}=0,$ (18)
$\displaystyle-\gamma\rho_{i}V_{d}{\delta\rho_{i}\over\rho_{i}}+\mathrm{i}k_{x}c_{s}^{2}{\delta\rho_{n}\over\rho_{n}}+(\Gamma_{n}+\gamma\rho_{i})\delta
v_{n,x}=0,$ (19) $\displaystyle-\gamma\rho_{i}\delta
v_{i,y}+\mathrm{i}k_{y}c_{s}^{2}{\delta\rho_{n}\over\rho_{n}}+(\Gamma_{n}+\gamma\rho_{i})\delta
v_{n,y}=0,$ (20)
where $\Gamma_{i}=\mathrm{i}(\omega+k_{x}V_{i})$ is the eigenvalue in the
comoving frame of the ions and $\Gamma_{n}=\mathrm{i}(\omega+k_{x}V_{n})$ is
the eigenvalue in the comoving frame of the neutrals. Since the collisional
rate for an ion with the ambient neutrals $\gamma\rho_{n}$ is tremendously
large in a weakly ionized cloud, Equation(17) implies that $\delta
v_{i,y}\sim\delta v_{n,y}$, which allows Equation(20) to be further simplified
to
$\mathrm{i}k_{y}c_{s}^{2}{\delta\rho_{n}\over\rho_{n}}+\Gamma_{n}\delta
v_{n,y}\approx 0,$ (21)
and consequently, Equation (16) can be approximated to
$(\Gamma_{i}+2\beta\rho_{i}){\delta\rho_{i}\over\rho_{i}}+{k_{y}^{2}c_{s}^{2}\over\Gamma_{n}}{\delta\rho_{n}\over\rho_{n}}-\xi_{CR}{\rho_{n}\over\rho_{i}}{\delta\rho_{n}\over\rho_{n}}\approx
0.$ (22)
The absence of the induction equations for $\delta{\bf B}$ and of the momentum equation for $\delta v_{i,x}$ in the above reduced set of linearized equations (16)–(20) indicates that the disturbances of the ion flow along the shock direction and of the magnetic field play a minor role in the dynamical evolution of the 2D drag instability.
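As a cross-check, the reduced set (16)–(20) can itself be recast as a small eigenvalue problem in the five remaining perturbation variables. The sketch below (with purely illustrative background values) shows this bookkeeping; it is not meant to reproduce the exact fiducial results.

```python
import numpy as np

# Reduced system, Eqs. (16)-(20), written as M u = i*omega*u for
# u = (drho_i/rho_i, dv_iy, drho_n/rho_n, dv_nx, dv_ny). Illustrative values (cgs).
m_H, pc = 1.6726e-24, 3.086e18
k_x, k_y = 1/(0.015*pc), 5/(0.015*pc)
c_s, gamma = 1.9e4, 3.5e13
rho_n = 2.3 * m_H * 2500
beta  = 1e-7 / (30 * m_H)
xi_CR = 1e-17 * (30 / 2.3)
rho_i = np.sqrt(xi_CR * rho_n / beta)   # ionization-recombination equilibrium
V_i, V_n = 1.0e5, 3.0e5
V_d = V_i - V_n

I = 1j
M = np.array([
  [-I*k_x*V_i - 2*beta*rho_i, -I*k_y, xi_CR*rho_n/rho_i, 0, 0],
  [0, -I*k_x*V_i - gamma*rho_n, 0, 0, gamma*rho_n],
  [0, 0, -I*k_x*V_n, -I*k_x, -I*k_y],
  [gamma*rho_i*V_d, 0, -I*k_x*c_s**2, -I*k_x*V_n - gamma*rho_i, 0],
  [0, gamma*rho_i, -I*k_y*c_s**2, 0, -I*k_x*V_n - gamma*rho_i],
], dtype=complex)

omega = -1j * np.linalg.eigvals(M)          # M u = i*omega*u => omega = -i*lambda
most_unstable = omega[np.argmin(omega.imag)]
print(f"Gamma_grow = {-most_unstable.imag:.2e} s^-1, "
      f"omega_wave = {most_unstable.real:.2e} s^-1")
```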
In a 2D perpendicular shock, the linearized equations should approach the 1D
result when $k_{y}\ll k_{x}$ (i.e., a transversely large-scale mode). The drag
instability can occur in a 1D steady C-shock (GC, 20). Specifically, for the
definite occurrence of the instability in 1D, the rate of the mode
$\Gamma_{n}$ observed in the comoving frame of the neutrals is considerably
smaller than both the recombination rate $2\beta\rho_{i}$ and the ion-neutral
drift rate across a distance of one wavelength (i.e., $k|V_{d}|$), whereas it
is considerably larger than the neutral collision rate with the ions (i.e.,
$\gamma\rho_{i}$) and the sound-crossing rate over one wavelength (i.e.,
$k_{x}c_{s}$). When the aforementioned conditions are satisfied, the 2D linearized equations (i.e., Equations (18), (19), and (22)) reduce to those for a 1D perpendicular shock, which yield the simplified dispersion relation (Gu et al., 2004; GC, 20)
$\Gamma_{n}\approx\pm{(1+\mathrm{i})\over
2}\sqrt{{k_{x}V_{d}}\gamma\rho_{i}}=\pm{(1+\mathrm{i})\over 2}\sqrt{k_{x}\over
L_{B}}V_{A,n},$ (23)
where the positive sign corresponds to a growing wave. As pointed out by GC
(20), the wave frequency in the shock frame $\omega_{wave}$ is dominated by
$k_{x}V_{n}$ instead of the imaginary part of $\Gamma_{n}$ due to the fast
background flow across the shock width downstream.
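For orientation, Equation (23) can be evaluated directly. The sketch below uses illustrative mid-shock values (hypothetical, not the actual fiducial profile), chosen to give a growth rate of order $10^{-13}$ s$^{-1}$.

```python
import numpy as np

# Illustrative evaluation of Eq. (23): Gamma_n ~ (1+i)/2 * sqrt(k_x |V_d| gamma rho_i).
pc    = 3.086e18
k_x   = 1 / (0.015 * pc)   # k_fid, cm^-1
gamma = 3.5e13             # cm^3 s^-1 g^-1
rho_i = 1e-27              # g cm^-3 (illustrative ion mass density)
V_d   = 5.0e4              # |V_i - V_n|, cm/s (illustrative drift)

Gamma_n = (1 + 1j) / 2 * np.sqrt(k_x * V_d * gamma * rho_i)
print(f"growth rate     Re[Gamma_n] ~ {Gamma_n.real:.1e} s^-1")   # ~1e-13 s^-1
print(f"oscillatory part Im[Gamma_n] ~ {Gamma_n.imag:.1e} s^-1")
```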
On the other hand, in the regime where $k_{y}\gg k_{x}$ (i.e., a transversely small-scale mode) such that $k_{y}c_{s}$ is much larger than $2\beta\rho_{i}$, $k_{x}V_{i}$, and $k_{x}V_{n}$, Equation (22) suggests that the terms associated with ionization, recombination, and the background shock flows are less significant, revealing a mode with the wave frequency Re[$\omega$]$\approx k_{y}c_{s}$ much larger than Im[$\omega$]. Therefore, Equation (22) suggests the relation $\delta\rho_{i}/\rho_{i}\approx\delta\rho_{n}/\rho_{n}$. In this regime, the in-phase relation between $\delta\rho_{i}$ and $\delta\rho_{n}$ is no longer maintained by the ionization–recombination equilibrium, as it is at small $k_{y}/k_{x}$, but is instead a consequence of the fast acoustic wave along the background magnetic fields, i.e., the so-called slow mode (since $c_{s}<V_{A,n}$ in our fiducial model; e.g., Shu, 1992). (In comparison, $\delta\rho_{i}/\rho_{i}=(1/2)\delta\rho_{n}/\rho_{n}$ when the ionization–recombination equilibrium is attained; the factor $1/2$ does not appear in this regime, where the slow mode is faster than the recombination process.) This result, along with Equations (18), (19), and (21), yields the following simplified dispersion relation:
$\Gamma_{n}\approx\pm\left(-\mathrm{i}k_{x}|V_{d}|\gamma\rho_{i}-k^{2}c_{s}^{2}\right)^{1/2}\approx\pm\left({1\over
2}{k_{x}|V_{d}|\gamma\rho_{i}\over
k_{y}^{2}c_{s}^{2}}+\mathrm{i}\right)k_{y}c_{s},$ (24)
where $k^{2}\equiv k_{x}^{2}+k_{y}^{2}$; moreover, we have used $\Gamma_{n}^{2}\gamma\rho_{i}\approx k_{y}^{2}c_{s}^{2}\gamma\rho_{i}$ from Equation (22) and $k_{y}c_{s}\gg\sqrt{k_{x}|V_{d}|\gamma\rho_{i}}$ in deriving the above equation. The inequality $k_{y}c_{s}\gg\sqrt{k_{x}|V_{d}|\gamma\rho_{i}}$ holds in the regime considered here, where $k_{y}c_{s}\gg 2\beta\rho_{i}$, together with $2\beta\rho_{i}\gg\sqrt{k_{x}|V_{d}|\gamma\rho_{i}}$ in the fiducial model (see the left panel of Figure 3 in GC, 20). (Recall that in a 1D C-shock, $\sqrt{k_{x}|V_{d}|\gamma\rho_{i}}$ is about the growth rate of the drag instability (see Equation 23), which is smaller than $2\beta\rho_{i}$ such that the ionization–recombination equilibrium can be attained for the perturbations, allowing the instability to occur.)
The positive sign of Equation(24) corresponds to a growing wave with the
growth rate related to the ion-neutral drag and the wave frequency
$k_{y}c_{s}$ associated with the slow mode. More specifically, in the comoving
frame of the neutrals, the phase velocity of the unstable wave is about
$-k_{y}c_{s}/k_{x}\hat{x}-c_{s}\hat{y}$ and the group velocity is
approximately $-(k_{x}/k_{y})c_{s}\hat{x}-c_{s}\hat{y}$. Hence, in this
regime, while both the phase and signal of the unstable wave propagate mainly
along the background field lines at the sound speed, the signal also
propagates upstream slowly at a speed much smaller than the sound speed. In
the shock frame, the phase velocity of the unstable wave is about
$[-k_{y}c_{s}/k_{x}+V_{n}]\hat{x}+[-c_{s}+(k_{x}/k_{y})V_{n}]\hat{y}\approx-
k_{y}c_{s}/k_{x}\hat{x}-c_{s}\hat{y}$.
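Similarly, the small-scale limit in Equation (24) can be evaluated directly to see the growth rate decline as $\propto 1/k_{y}$ while the oscillation rate approaches $k_{y}c_{s}$; the background values below are again illustrative placeholders rather than the fiducial profile.

```python
import numpy as np

# Illustrative evaluation of Eq. (24) in the k_y >> k_x regime:
#   Re[Gamma_n] ~ k_x |V_d| gamma rho_i / (2 k_y c_s),  Im[Gamma_n] ~ k_y c_s.
pc = 3.086e18
k_x = 1 / (0.015 * pc)
gamma, rho_i = 3.5e13, 1e-27     # drag coefficient and illustrative ion density
V_d = 5.0e4                      # |V_i - V_n|, cm/s (illustrative)
c_s = 1.9e4                      # cm/s

for ratio in (20, 50, 100):
    k_y = ratio * k_x
    growth = k_x * V_d * gamma * rho_i / (2 * k_y * c_s)
    wave   = k_y * c_s
    print(f"k_y/k_x = {ratio:3d}: Re[Gamma_n] ~ {growth:.1e} s^-1, "
          f"Im[Gamma_n] ~ {wave:.1e} s^-1")
```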
Based on the above simplified dispersion relations for unstable modes on
different transverse scales, we may be able to interpret the results
illustrated in Figure 2. When $k_{y}/k_{x}\rightarrow 0$, both the growth rate
and wave frequency converge to the 1D result for the drag instability (GC,
20), as illustrated by the flat curves for $k_{y}/k_{x}<1$ in Figure 2.
However, as $k_{y}/k_{x}$ increases, a new property of the growing mode associated with the second dimension emerges when the acoustic wave along the background field line, i.e., the slow mode, becomes important. This happens when $c_{s}k_{y}\sim k_{x}V_{n}$, as suggested by the transition of the dispersion relation from Equation (23) to Equation (24). Physically, it occurs when the slow-mode rate $c_{s}k_{y}$, introduced by the additional dimension along the background field in the $y$-direction, becomes comparable to the shock-crossing rate $k_{x}V_{n}$ along the $x$-axis through the C-shock. Owing to this transition at
the large $k_{y}$, the jump of the wave frequency from a negative to a
positive value occurs at $k_{y}/k_{x}\sim
V_{n}/c_{s}\equiv\hat{k}_{jump}\approx 16.84$ at $x\approx 0.2$ pc, which
agrees with the right panel of Figure 2. While we show the value of
$\hat{k}_{jump}$ at one particular $x$, the range of $\hat{k}_{jump}$ within a
C-shock can be estimated from the pre-shock conditions. $V_{n}$ decreases from $v_{0}$ at the beginning of a shock to $v_{0}/r_{f}$ at the end of a shock, where $r_{f}$ is the final compression ratio of the neutral density in the post-shock region, given approximately by $\sqrt{2}v_{0}/v_{A,n,0}$ (Chen & Ostriker, 2012). Hence, $\hat{k}_{jump}$ decreases from $v_{0}/c_{s}$ to $(v_{0}/r_{f})/c_{s}\approx B_{0}/(\sqrt{4\pi\rho_{n,0}}c_{s})$. In our fiducial model, $\hat{k}_{jump}$ changes from about 25 to 3 across the shock width. In the typical environments of star-forming clouds, $v_{0}\sim 1$–$6$ km/s, $n_{0}\sim 100$–$1000$ cm$^{-3}$, $B_{0}\sim 5$–$10$ $\mu$G, and the temperature is about 10 K (e.g., see Table 1 in Chen & Ostriker, 2012). Therefore, $\hat{k}_{jump}$ can decrease from $\sim 10$–$30$ at the beginning of a C-shock to $\sim 1.5$–$5$ at the end of a C-shock.
Figure 3: Blowup of the region where $k_{y}/k_{x}>20$ in Figure 2 to compare
the exact behavior of the growing mode with the approximate result from
Equation(24). The curves of $k_{y}c_{s}$ corresponding to various $k_{x}$ are
also plotted on the right panel for comparison.
As $k_{y}/k_{x}$ further increases to an even larger value, the behavior of
the growing mode follows the dispersion relation described by Equation(24).
This is demonstrated in Figure 3, which zooms in to the region where
$k_{y}/k_{x}>20$ in Figure 2. The dashed lines present the growth rate Re[$\Gamma_{n}$] (left panel) and the wave frequency Im[$\Gamma_{n}$]$-k_{x}V_{n}$ (right panel) calculated from Equation (24). As $k_{y}/k_{x}$ increases, the exact growth rates of the unstable mode for various $k_{x}$ computed from Equation (6) converge to Re[$\Gamma_{n}$], while the exact wave frequencies $\omega_{wave}$ overlap with Im[$\Gamma_{n}$]$-k_{x}V_{n}$ in the entire zoomed-in region. The slow-mode
frequency $k_{y}c_{s}$, as illustrated by the dashed-dotted lines, is also
plotted for each value of $k_{x}$. As $k_{y}/k_{x}$ increases, the wave
frequency $\omega_{wave}$ in all cases converges toward $k_{y}c_{s}$,
confirming that the slow mode emerges and predominates on small transverse
scales for the drag instability. Note that the positive wave frequency for
$k_{y}/k_{x}>\hat{k}_{jump}$ indicates a growing wave propagating upstream
within the 2D perpendicular C-shock, in stark contrast to the growing mode for
$k_{y}/k_{x}<\hat{k}_{jump}$, which is more dynamically dominated by the wave
convected with the downstream flow across the shock width, similar to the
result for a 1D C-shock.
In the case of a moderate value of $k_{x}$ (i.e., $k_{x}=k_{fid}$ and $5k_{fid}$), the growth rate decreases almost monotonically with increasing $k_{y}$ when $k_{y}/k_{x}\gtrsim\hat{k}_{jump}$, in approximate agreement with the trend described by Equation (24). On the other hand, in the regime where $k_{y}/k_{x}<1$, the growth rate is small or nearly zero when $k_{x}$ is large enough (i.e., $k_{x}=20k_{fid}$ and $50k_{fid}$ in Figure 2) for the small-scale gas pressure perturbation along the shock direction to suppress the growth (Gu et al., 2004; GC, 20). However, a growing mode exists when $k_{y}/k_{x}\gtrsim 12.7$. Its growth rate rises significantly during the wave frequency transition and then declines, according to Equation (24), as $k_{y}/k_{x}$ increases to a large value. We are unable to derive a simple dispersion relation that describes the rise of the growth rate during the wave frequency transition, because more terms in Equations (16)–(20) become comparably important and thus cannot be neglected in the approximation used to derive Equation (24). Nevertheless, the distinct behaviors of the growth rate at the wave frequency transition can be unraveled by studying the phase differences between perturbed quantities, which are presented in the next subsection.
### 2.3 An auxiliary analysis based on phase differences between
perturbations
We illustrate the phase differences in Figure 4 corresponding to the growth
rate and wave frequency shown in Figure 2. Figure 4 includes the phase
difference between the density perturbations, as well as those between the
neutral velocity and density perturbations. The flat part of the curves in the
range of $k_{y}/k_{x}\lesssim 1$ corresponds to the flat part of the curves in
Figure 2, resembling the 1D drag instability for the 2D unstable modes. In
contrast to the cases for the small $k_{x}$ (i.e., blue and orange curves),
the phase difference between $\delta\rho_{i}$ and $\delta\rho_{n}$ in the
cases for the large $k_{x}$ (green and red curves in panel (a) of Figure 4)
exhibit a dramatic change when $k_{y}/k_{x}\approx\hat{k}_{jump}=16.8$,
corresponding to the jump of the wave frequency. Since the wave frequency of
the mode $\omega_{wave}$ is small around this jump transition, the Doppler-
shift frequency $k_{x}V_{i}$ with a large $k_{x}$ becomes nonnegligible
compared to the ionization rate in the continuity equation of the ions.
Consequently, the phase difference between the ion and neutral density
perturbations does not stay small by the ionization–recombination equilibrium
but becomes large. The phase difference decreases steeply with increasing
$k_{y}/k_{x}$ because the rate of the mode is predominated quickly by the slow
mode $\approx k_{y}c_{s}$, resulting in the small phase difference between the
ion and neutral density perturbations for a large $k_{y}/k_{x}$. It is in
accordance with the result that
$\delta\rho_{i}/\rho_{i}\approx\delta\rho_{n}/\rho_{n}$ when deriving
Equation(24).
The drastic change in $\phi_{\delta\rho_{i}}-\phi_{\delta\rho_{n}}$ also
causes the steep change in $\phi_{\delta v_{n,x}}-\phi_{\delta\rho_{n}}$ from
$\pi$ to $\approx 0.8\pi$ around the frequency transition (see the green and
red curves in panel (b) of Figure 4) through the linearized continuity and
momentum equations, leading to a bump in the growth rate of the drag
instability near the frequency transition, as shown in the left panel of
Figure 2. (Recall that the drag instability in the 1D case is driven by the canonical phase difference of $3\pi/4$ between $\phi_{\delta v_{n,x}}$ and $\phi_{\delta\rho_{n}}$; Gu et al., 2004; GC, 20.) Since an acoustic mode can
be characterized by the phase difference $\pi$ between density and velocity
perturbations, the nearly out-of-phase difference between $\delta v_{n,y}$ and
$\delta\rho_{n}$ shown in panel (c) of Figure 4 for $k_{y}/k_{x}\gtrsim 16.8$
reconfirms the right panel of Figure 3, i.e., the emergence of the slow mode
when $k_{y}/k_{x}\gtrsim\hat{k}_{jump}$. It also agrees with Equation(21) on
the relation $\Gamma_{n}\sim\mathrm{i}k_{y}c_{s}$ for
$k_{y}/k_{x}>\hat{k}_{jump}$, which leads to
$\delta\rho_{n}/\rho_{n}\sim-\delta v_{n,y}/c_{s}$, i.e., out of phase between
$\delta\rho_{n}$ and $\delta v_{n,y}$ for a slow mode.
Figure 4: Phase differences of unstable modes between $\delta\rho_{i}$ and
$\delta\rho_{n}$ (panel (a)), $\delta v_{n,x}$ and $\delta\rho_{n}$ (panel
(b)), and $\delta v_{n,y}$ and $\delta\rho_{n}$ (panel (c)) as a function of
$k_{y}/k_{x}$ at the location of $x\approx 0.2$ pc in the fiducial model of
the perpendicular C-shock. Note that there is a steep change in the green and red curves during the wave frequency transition at $k_{y}/k_{x}\approx\hat{k}_{jump}=16.84$ in panels (a) and (b). The phase
differences are not presented when the growth rates are zero for some values
of $k_{y}/k_{x}$ (refer to the left panel of Figure 2).
### 2.4 Substantial growth of slowly propagating waves
It was shown by GC (20) that the growth of the drag instability in a 1D
perpendicular shock is limited by the short time span for an unstable wave to
stay within a C-shock as the wave is convected downstream by a fast flow
across the shock; specifically, the local growth rate $\Gamma_{grow}$ is much
smaller than the wave frequency in the shock frame $\omega_{wave}$.
Consequently, a stronger shock with a larger shock width favors a more
appreciable growth of the 1D drag instability in the typical environments of
star-forming clouds investigated by the authors. However, for a 2D
perpendicular shock, as has been studied in the preceding subsection, the
transition of the wave frequency from a negative to a positive value allows
for the presence of a slowly traveling mode with an arbitrarily small wave
frequency in the shock frame when $k_{y}/k_{x}\sim\hat{k}_{jump}$, thereby
allowing a tremendous amount of time for the growth of the drag instability
within a shock width in this transition regime.
We investigate this expectation by simply considering a mode with a small wave frequency $\omega_{wave}=-10^{-14}$ 1/s, whose magnitude is much smaller than the flow-crossing rate over one longitudinal wavelength, $k_{x}V_{n}$. According to the eigenvalue problem, a
family of unstable modes should exist with the proper combination of the
wavenumbers $k_{x}$ and $k_{y}$ as a function of $x$. For the purpose of
computational convenience without loss of generality, we first consider the
uniform $k_{x}$ given by the fiducial wavenumber $k_{x}=k_{fid}=$ 1/(0.015
pc), while allowing the $k_{y}$ of the mode to vary with $x$. The results are
shown in Figure 5. The resulting growth rate $\Gamma_{grow}$ is indeed larger
than $\omega_{wave}$ in most of the region within the C-shock, as illustrated
in the left panel of Figure 5. The corresponding profile of $k_{y}/k_{x}$
decreases from 25 to 3 with increasing $x$, in agreement with the transition
wavenumber ratio $k_{y}/k_{x}\approx\hat{k}_{jump}=V_{n}/c_{s}$, where $V_{n}$
decreases across the shock width due to density compression, as has been
explained in Section 2.2. Next, we consider the uniform $k_{y}$ given by
$15k_{fid}$ (i.e. a transversely small-scale mode) while allowing the $k_{x}$
of the mode to vary with $x$. The results of the growth rate and wave
frequency are considerably similar to those shown in Figure 5, affirming the
slow propagation of the unstable modes with $k_{y}/k_{x}\approx\hat{k}_{jump}$
in a 2D perpendicular C-shock. We also study the “counterpart” mode with the
small positive wave frequency $\omega_{wave}=10^{-14}$ 1/s for the same case
of either the uniform $k_{x}$ or uniform $k_{y}$. We find that the profiles of
the growth rate and $k_{y}/k_{x}$ are almost identical to those presented in
Figure 5, except that the unstable wave slowly propagates upstream rather than
downstream due to the sign change of $\omega_{wave}$.
As the unstable wave propagates slowly at the phase velocity $v_{ph,x}$, the
unstable mode is nearly comoving with the shock during the cloud lifetime,
which is typically about tens of millions of years (Engargiola et al., 2003;
Blitz et al., 2007; Kawamura et al., 2009; Murray, 2011; Miura et al., 2012;
Meidt et al., 2015; Jeffreson & Kruijssen, 2018). Consequently, the MTG is no
longer a proper measure for the growth of the slowly propagating modes in the
shock frame. For instance, it takes about 82 million years for the unstable
mode that we have considered with $\omega_{wave}=-10^{-14}$ 1/s and uniform
$k_{x}=k_{fid}$ to travel across the entire shock width (the wave-crossing time through the shock width is given by $\int_{\rm shock\ width}dx/|v_{ph,x}|=\int dx/(|\omega_{wave}|/k_{x})$), which is $\gtrsim$ the cloud lifetime. It is evident from Figure 2 that there are modes that are nearly or exactly stationary in the shock frame (i.e., $\omega_{wave}\approx 0$) and have $\Gamma_{grow}\sim 10^{-13}$ 1/s $\sim(0.3\ {\rm Myr})^{-1}$. Hence, the growth of these modes is approximately given by $\exp(\Gamma_{grow}t)$, which can be substantially larger than unity if $t$ is some fraction of the cloud lifetime. In contrast, the maximum growth of the unstable mode in the 1D case is limited by the shock width and is thus given by the MTG, which is merely about 9.9 in the fiducial model (GC, 20). The 2D mode therefore ought to grow to a nonlinear phase according to the expectation from the linear theory.
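To make the two timescales explicit, the short sketch below evaluates the wave-crossing time for a uniform $k_{x}=k_{fid}$ and $|\omega_{wave}|=10^{-14}$ s$^{-1}$, assuming a shock width of roughly 0.4 pc (cf. Table 1), together with the factor $\exp(\Gamma_{grow}t)$ for $\Gamma_{grow}\sim 10^{-13}$ s$^{-1}$.

```python
import numpy as np

# Order-of-magnitude check of the crossing time and growth factor for a slowly
# propagating mode; the shock width below is an assumed round number (~0.4 pc).
pc, Myr = 3.086e18, 3.156e13

L_shock    = 0.40 * pc              # assumed shock width
k_x        = 1 / (0.015 * pc)       # fiducial wavenumber
omega_wave = 1e-14                  # |wave frequency| in the shock frame, s^-1

t_cross = L_shock * k_x / omega_wave   # ~ integral dx / (|omega_wave|/k_x)
print(f"wave-crossing time ~ {t_cross / Myr:.0f} Myr")   # ~84 Myr, cf. ~82 Myr quoted

Gamma_grow = 1e-13                  # s^-1
for t_Myr in (1, 3, 10):
    print(f"t = {t_Myr:2d} Myr: exp(Gamma*t) ~ {np.exp(Gamma_grow * t_Myr * Myr):.1e}")
```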
Figure 5: Growth rate $\Gamma_{grow}$ as a function of $x$ for the unstable
mode with the wave frequency $\omega_{wave}=-10^{-14}$ 1/s and $1/k_{x}=0.015$
pc (left panel) and the profile of the corresponding wavenumber ratio
$k_{y}/k_{x}$ (right panel) for the fiducial model of the perpendicular
C-shock.
Figure 6 shows the amplitude of perturbations for the unstable mode presented
in Figure 5. The perturbed quantities are normalized by the $y$-component of
the magnetic field perturbation $|\delta B_{y}|/B$. It is evident from the
figure that within the shock width where the unstable mode exists, the density
perturbations $\delta\rho_{n}/\rho_{n}$ and $\delta\rho_{i}/\rho_{i}$ dominate
over the others, implying that the growth of density perturbations plays a
decisive role in the dynamical evolution of the instability. Although this
outcome of promoting the local density growth of a wave turns out to be the
same as that in the 1D case (GC, 20), the drag instability is coupled with the
slow mode in the regime $k_{y}/k_{x}>\hat{k}_{jump}$ for a 2D perpendicular
shock, as described in the previous section. This can also be realized from
Figure 6, which shows $|\delta v_{n,y}|\gg|\delta v_{n,x}|$ and $|\delta
v_{i,y}|\gg|\delta v_{i,x}|$ as a result of the presence of the slow mode
propagating along the background magnetic fields in the $y$-direction.
It is worth noting that $\delta B_{x}$ and $\delta B_{y}$ are generated from
$B$ due to the gradients of the longitudinal motion $\partial_{y}(\delta
v_{i,x})$ (i.e., magnetic wiggling) and $\partial_{x}(\delta v_{i,x})$ (i.e.,
magnetic compression), respectively, through the induction equation such that
$|\delta B_{x}|/|\delta B_{y}|=k_{y}/k_{x}$, in agreement with the ratios
$|\delta B_{x}|/|\delta B_{y}|$ in Figure 6 and $k_{y}/k_{x}$ in the right
panel of Figure 5. However, given their small amplitudes, the magnetic field perturbations simply grow passively through the induction equation and do not play any significant role in the dynamics of the drag instability, which is consistent with the fact that omitting the induction equation from the reduced set of linearized Equations (16)–(20) still yields the same growth rate and wave frequency of the drag instability. Finally, as a reminder, in the transversely
large-scale regime where $k_{y}/k_{x}\ll V_{n}/c_{s}=\hat{k}_{jump}$, the
property of the drag instability in 2D resembles that in 1D, according to the
analysis in the previous subsection. Therefore, the density growth also
governs the dynamics of the drag instability for the transversely large-scale
mode (GC, 20).
Figure 6: Amplitude of perturbations normalized by the $y$-component of the
magnetic perturbation $|\delta B_{y}|/B$ for the unstable mode presented in
Figure 5.
### 2.5 Recap
When $k_{y}/k_{x}\lesssim V_{n}/c_{s}\equiv\hat{k}_{jump}$ (i.e., transversely
large-scale modes), the drag instability behaves similarly between 1D and 2D
perpendicular C-shocks. When $k_{y}/k_{x}\gtrsim\hat{k}_{jump}$ (i.e.,
transversely small-scale modes), a new property of the growing mode emerges
for 2D perpendicular C-shocks even for a large $k_{x}$, resulting from the
ion-neutral drag coupled with the slow mode along the background magnetic
field. When $k_{y}/k_{x}\sim\hat{k}_{jump}$, the sign transition of wave
frequency occurs. This leads to an unstable mode of a small wave frequency and
thus enables a slowly propagating mode to grow substantially within a C-shock,
unlimited by the shock width. The linear result suggests that the density
growth dominates the evolution of the perturbation growth driven by the drag
instability for the modes on both transversely large and small scales.
## 3 Linear analysis: isothermal oblique shocks
Figure 7: Oblique angle of $\bf B$ (solid curves), $\bf V_{i}$ (dashed
curves), and $\bf V_{n}$ (dotted curves) relative to the direction of the pre-
shock inflow (i.e., the $+x$ direction) as a function of $x$ for the fiducial
model of oblique C-shocks. The cases for three different initial oblique
angles $\theta_{0}$ ($70^{\circ}$, $45^{\circ}$, $20^{\circ}$) are presented. These results provide the
background states of oblique C-shocks for the linear analysis of the drag
instability.
### 3.1 Background States and Linearized Equations
In this section, we study the local stability of a 2D steady oblique shock in
which magnetic fields are not normal to the shock flow. We adopt the
background states constructed by Chen & Ostriker (2012). That is, the pre-
shock flow is still along the $x$-direction, the shock front in the $y$-$z$
plane, and the pre-shock magnetic field in the $x$-$y$ plane (${\bf
B_{0}}=B_{x,0}{\hat{x}}+B_{y,0}{\hat{y}}$), at an initial oblique angle
$\theta_{0}$ to the inflow ($B_{y,0}/B_{x,0}=\tan\theta_{0}$). For a steady, plane-parallel oblique shock (i.e., $\partial_{t}=\partial_{y}=\partial_{z}=0$), Chen & Ostriker (2012) derived the equilibrium equations for $r_{n}\equiv\rho_{n}/\rho_{n,0}=v_{0}/V_{n,x}$, $r_{B}\equiv B_{y}/B_{y,0}$ ($B_{x}$ is constant because $\nabla\cdot{\bf B}=0$), and $r_{ix}\equiv v_{0}/V_{i,x}$ from Equations (1)–(5), based on the strong coupling approximation and the ionization–recombination equilibrium. Note
that $r_{B}$ is not equal to $r_{ix}$ for an oblique shock. By solving the
equilibrium equations with a set of pre-shock conditions, the background
states including $\rho_{i}$, $\rho_{n}$, $V_{i,x}$, $V_{n,x}$, $V_{i,y}$,
$V_{n,y}$, and $B_{y}$ can be subsequently obtained as a function of $x$
within the shock and post-shock regions. Since the field line is not
perpendicular to the incoming shock flow, both the field line and gas inflow
will continuously change their directions relative to their initial direction
as they move across the shock width until they reach the post-shock region
(see Figure 7 in the case of the pre-shock conditions given by the fiducial
model). As expected, $\bf B$ and $\bf V_{i}$ are first tilted toward the shock
front (i.e. a large oblique angle shown in Figure 7) due to the earlier
compression of $\rho_{i}$, followed by the subsequent tilt of $\bf V_{n}$
toward the shock front due to the later compression of $\rho_{n}$ within the
shock. Finally, ${\bf V_{i}}={\bf V_{n}}$ in the post-shock region as in the
pre-shock region, but with a final nonzero oblique angle. The shock width is given by the region between the pre- and post-shock regions. It is evident from Figure 7 that as $\theta_{0}$ decreases, the compression ratio increases and the shock width decreases. Moreover, $\bf B$ in the three cases of $\theta_{0}$ shown in Figure 7 quickly becomes more or less normal to the shock flow owing to shock compression. All of these results are anticipated from the analysis in Chen & Ostriker (2012).
With the background states, the linearized equations are given by
$OU=\mathrm{i}\omega U,$ (25)
where
$\displaystyle O=\left[\begin{array}{cccc}
-\mathrm{i}k_{x}V_{i,x}-\mathrm{i}k_{y}V_{i,y}-2\beta\rho_{i}&-\mathrm{i}k_{x}\rho_{i}&-\mathrm{i}k_{y}\rho_{i}&0\\
-\mathrm{i}k_{x}\frac{c^{2}_{s}}{\rho_{i}}&-\mathrm{i}k_{x}V_{i,x}-\mathrm{i}k_{y}V_{i,y}-\gamma\rho_{n}&0&\mathrm{i}k_{y}\frac{V^{2}_{A,i}}{B_{y}}\\
-\mathrm{i}k_{y}\frac{c^{2}_{s}}{\rho_{i}}&0&-\mathrm{i}k_{x}V_{i,x}-\mathrm{i}k_{y}V_{i,y}-\gamma\rho_{n}&-\mathrm{i}k_{y}{B_{x}\over B_{y}}{V_{A,i}^{2}\over B_{y}}\\
0&\mathrm{i}k_{y}B_{y}&-\mathrm{i}k_{y}B_{x}&-\mathrm{i}k_{x}V_{i,x}-\mathrm{i}k_{y}V_{i,y}\\
0&-\mathrm{i}k_{x}B_{y}&\mathrm{i}k_{x}B_{x}&0\\
0&0&0&0\\
\gamma V_{d,x}&\gamma\rho_{i}&0&0\\
\gamma V_{d,y}&0&\gamma\rho_{i}&0
\end{array}\right.$ (34)

$\displaystyle\left.\begin{array}{cccc}
0&\xi_{\mathrm{CR}}&0&0\\
-\mathrm{i}k_{x}\frac{V^{2}_{A,i}}{B_{y}}&-\gamma V_{d,x}&\gamma\rho_{n}&0\\
\mathrm{i}k_{x}{B_{x}\over B_{y}}{V_{A,i}^{2}\over B_{y}}&-\gamma V_{d,y}&0&\gamma\rho_{n}\\
0&0&0&0\\
-\mathrm{i}k_{x}V_{i,x}-\mathrm{i}k_{y}V_{i,y}&0&0&0\\
0&-\mathrm{i}k_{x}V_{n,x}-\mathrm{i}k_{y}V_{n,y}&-\mathrm{i}k_{x}\rho_{n}&-\mathrm{i}k_{y}\rho_{n}\\
0&-\mathrm{i}k_{x}\frac{c_{s}^{2}}{\rho_{n}}&-\mathrm{i}k_{x}V_{n,x}-\mathrm{i}k_{y}V_{n,y}-\gamma\rho_{i}&0\\
0&-\mathrm{i}k_{y}\frac{c_{s}^{2}}{\rho_{n}}&0&-\mathrm{i}k_{x}V_{n,x}-\mathrm{i}k_{y}V_{n,y}-\gamma\rho_{i}
\end{array}\right],$ (43)
where the two arrays display the left and right four columns of the $8\times 8$ matrix $O$, respectively, and $V_{A,i}$ is still defined by $V_{A,i}^{2}=B_{y}^{2}/4\pi\rho_{i}$ in terms of the field component parallel to the shock front, following the same definition as in the case of the 2D perpendicular shock. When the $x$-component of the pre-shock magnetic field is zero, $B_{x}=0$, $V_{i,y}=0$, and $V_{n,y}=0$ (and thus $V_{d,y}=0$) throughout the shock, and the matrix $O$ in the above equation for an oblique shock reduces to the matrix $P$ in Equation (6) for a perpendicular shock.
Figure 8: Growth rate $\Gamma_{grow}$ (left panel) and its corresponding wave
frequency in the shock frame $\omega_{wave}$ (right panel) of the drag
instability in the fiducial model for a 2D oblique shock with
$\theta_{0}=45^{\circ}$, plotted as a function of $k_{y}/k_{x}$ in the four
cases of $k_{x}$ at $x\approx 0.2$ pc. The wave frequency is not presented
when the growth rate is zero.
We solve the above eigenvalue problem for the fiducial model of the C-shock
with a pre-shock oblique angle $\theta_{0}<90^{\circ}$. As in the
perpendicular shock, we find that there is one unstable mode among the eight
eigenmodes within a C-shock. Figure 8 shows the growth rate $\Gamma_{grow}$
and wave frequency in the shock frame $\omega_{wave}$ of the unstable mode as
a function of $k_{y}/k_{x}$ in the case of $\theta_{0}=45^{\circ}$ with four
values of $k_{x}$ at $x\approx 0.2$ pc, approximately in the middle of the
oblique C-shock. The figure exhibits two clear general features. First of all,
the growth rates and wave frequencies do not vary significantly but have
values similar to the case of $k_{y}=0$ in the range of $k_{y}/k_{x}<1$, as
shown by the flat part of the curves. The second feature is that regardless of
the difference in $k_{x}$, the growth rates of the unstable mode all drop to
zero when $k_{y}/k_{x}\approx 9.3$, and the corresponding wave frequencies
exhibit a discontinuity at the same wavenumber ratio, which turns out to be
exactly equal to $|V_{d,x}|/V_{d,y}$ at $x\approx 0.2$ pc in the fiducial
model. We investigate the reason in the next subsections.
### 3.2 Simplified Dispersion Relation
Since $V_{n,y}$ and $V_{i,y}$ are present in an oblique shock in addition to $V_{n,x}$ and $V_{i,x}$, the rates in the comoving frames of the ions, $\Gamma_{i}$, and of the neutrals, $\Gamma_{n}$, are expressed by $\mathrm{i}(\omega+k_{x}V_{i,x}+k_{y}V_{i,y})$ and $\mathrm{i}(\omega+k_{x}V_{n,x}+k_{y}V_{n,y})$, respectively. Following the same approach as in Section 2.2 for perpendicular shocks, we focus on the particular mode with $\Gamma_{n}$ smaller than $2\beta\rho_{i}$ and $k|V_{d}|$ but larger than $\gamma\rho_{i}$. The linearized equations expressed in Equation (25) may be reduced to the same set of Equations (18), (19), (21), and (22), except that the additional drag term $\gamma V_{d,y}\delta\rho_{i}$ appears in the $y$-component of the momentum equation for the neutrals, namely (cf. Equation (21)):
$-\gamma\rho_{i}V_{d,y}{\delta\rho_{i}\over\rho_{i}}+\mathrm{i}k_{y}c_{s}^{2}{\delta\rho_{n}\over\rho_{n}}+\Gamma_{n}\delta
v_{n,y}\approx 0.$ (44)
Moreover, as the recombination rate $2\beta\rho_{i}$ exceeds $\Gamma_{n}$ for this mode, Equation (22) is reduced to $\delta\rho_{i}/\rho_{i}\approx(1/2)\delta\rho_{n}/\rho_{n}$ due to the ionization–recombination equilibrium. The resulting dispersion relation reads
$\Gamma_{n}^{2}=-\mathrm{i}{\gamma\rho_{i}\over
4}(k_{x}V_{d,x}+k_{y}V_{d,y})-k^{2}c_{s}^{2}.$ (45)
In the regime where $k_{y}/k_{x}\ll 1$ and $\Gamma_{n}>k_{x}c_{s}$, the above equation reduces to Equation (23), which gives the typical growth rate and wave frequency of the 1D drag instability (Gu et al., 2004; GC, 20), but with a
background state for an oblique shock. As in the case of a 2D perpendicular
shock discussed in the preceding section, this mode behavior of an oblique
shock is consistent with the flat part of the curves for both growth rate and
wave frequency in Figure 8. As $k_{y}/k_{x}$ increases to the moderate value
$|V_{d,x}|/V_{d,y}\equiv\hat{k}_{discon}=9.3$, Figure 8 shows that the growth
rates decline and become zero around $\hat{k}_{discon}$. Besides, the
corresponding wave frequencies decrease gradually with increasing
$k_{y}/k_{x}$ and then abruptly drop around $\hat{k}_{discon}$. This behavior of the growth rate and wave frequency can be understood in terms of Equation (45), which admits a special solution Re$[\Gamma_{n}]=0$ when $k_{x}V_{d,x}+k_{y}V_{d,y}=0$. The dynamical reason for the absence of the growing mode at $k_{y}/k_{x}=\hat{k}_{discon}$ is that the drag force on the neutrals in the $x$-direction (i.e., $\gamma V_{d,x}\delta\rho_{i}$) acts out of phase with that in the $y$-direction (i.e., $\gamma V_{d,y}\delta\rho_{i}$), thereby suppressing the density enhancement in the neutral continuity equation and thus quenching the drag instability.
To explain the property of the wave frequency shown in the right panel of Figure 8, we investigate the behavior of Equation (45) near $k_{y}/k_{x}=\hat{k}_{discon}$. Because $kc_{s}\gg|k_{x}V_{d,x}+k_{y}V_{d,y}|\approx 0$ around this wavenumber ratio, the dispersion relation of the unstable mode described by Equation (45) is simplified to
$\Gamma_{n}=\mathrm{i}\omega+\mathrm{i}k_{x}V_{n,x}+\mathrm{i}k_{y}V_{n,y}\approx{\gamma\rho_{i}\over
4}{|k_{x}V_{d,x}+k_{y}V_{d,y}|\over kc_{s}}\pm\mathrm{i}kc_{s},$ (46)
where the plus (minus) sign of the imaginary part of the above expression (i.e., the wave frequency in the comoving frame of the neutrals) is taken when $k_{x}V_{d,x}+k_{y}V_{d,y}<0$ ($>0$). Due to this sign change, the wave frequency in the shock frame $\omega_{wave}$ ($=$Re[$\omega$]) exhibits a discontinuity at $k_{y}/k_{x}=\hat{k}_{discon}$, with a difference of $\approx 2kc_{s}$. As $k_{y}/k_{x}$ increases beyond $\hat{k}_{discon}$ for a given $k_{x}$, Re[$\omega$] ($=-k_{x}V_{n,x}-k_{y}V_{n,y}-kc_{s}$) becomes increasingly negative; approximately speaking, Re[$\omega$]$\approx-k_{y}(V_{n,y}+c_{s})\propto-k_{y}$, in close agreement with the right panel of Figure 8.
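The special solution can be made explicit by evaluating Equation (45) over a range of $k_{y}/k_{x}$: the real part of the growing root vanishes exactly where $k_{x}V_{d,x}+k_{y}V_{d,y}=0$. The drift components below are hypothetical values chosen so that $|V_{d,x}|/V_{d,y}=9.3$.

```python
import numpy as np

# Behavior of the oblique-shock dispersion relation, Eq. (45):
#   Gamma_n^2 = -i (gamma rho_i / 4)(k_x V_dx + k_y V_dy) - k^2 c_s^2.
pc = 3.086e18
k_x = 1 / (0.015 * pc)
gamma, rho_i = 3.5e13, 1e-27      # drag coefficient and illustrative ion density
c_s = 1.9e4
V_dx, V_dy = -4.65e4, 0.5e4       # cm/s (hypothetical); |V_dx|/V_dy = 9.3

for ratio in (1, 5, 9.3, 20, 100):
    k_y = ratio * k_x
    k2 = k_x**2 + k_y**2
    Gn2 = -1j * gamma * rho_i / 4 * (k_x * V_dx + k_y * V_dy) - k2 * c_s**2
    Gn = np.sqrt(Gn2)             # the +/- pair of roots; take the principal branch
    print(f"k_y/k_x = {ratio:6.1f}: |Re[Gamma_n]| = {abs(Gn.real):.2e} s^-1")
    # the growth rate vanishes at k_y/k_x = |V_dx|/V_dy = 9.3
```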
### 3.3 An auxiliary analysis based on phase differences between
perturbations
Although we derive simplified dispersion relations to guide our understanding of the key features of the growth rate and wave frequency of an unstable wave, not all of the features can be explained by them. Figure 8 shows that the unstable mode is suppressed when $k_{y}/k_{x}$ is sufficiently large, which cannot be described by Equation (46) because some of the terms neglected in deriving the simplified dispersion relations become comparably important for a large $k_{y}/k_{x}$. To gain a more comprehensive insight into
the 2D drag instability, we study the phase differences between perturbation
quantities of the unstable mode as in the preceding section to complement the
limited analysis from the dispersion relations. The results as a function of
$k_{y}/k_{x}$ are illustrated in Figure 9.
As in the case of perpendicular shocks, the flat part of the curves in panels
(a) and (b) of Figure 9 correspond to that in Figure 8 for
$k_{y}/k_{x}\lesssim 1$, arising from the fact that 2D oblique shocks resemble
1D shocks (i.e., $k_{y}=0$) in terms of the dynamics in the $x$-direction. In
the $y$-direction, however, the phase difference between $\delta\rho_{n}$ and
$\delta v_{n,y}$ changes gradually from $\approx 1.75\pi$ (equivalent to
$-\pi/4$) to $\pi$ as $k_{y}/k_{x}$ increases from $\approx 0.01$ to
$\hat{k}_{discon}$ ($\approx 9.3$ at $x=0.2$ pc). It arises because when
$k_{y}/k_{x}\ll 1$, the ionization equilibrium (i.e.
$\delta\rho_{i}/\rho_{i}=(1/2)\delta\rho_{n}/\rho_{n}$) and the ion-neutral
drag dictate the dynamics of Equation(44), yielding the phase difference
$-\pi/4$ in the $y$-direction as well. As $k_{y}/k_{x}$ increases for a given
$k_{x}$, Equation(46) implies that the wave frequency is gradually dominated
by $kc_{s}$ rather than the Doppler-shift frequency
$k_{x}V_{n,x}+k_{y}V_{n,y}$. Additionally, the drag term becomes subdominant
to the pressure term in Equation(44). Thus, Equation(44) leads to the relation
$\delta\rho_{n}/\rho_{n}\approx-\delta v_{n,y}/c_{s}$; i.e., $\delta\rho_{n}$
and $\delta v_{n,y}$ gradually become out of phase as $k_{y}/k_{x}$ increases
to $\hat{k}_{discon}$, as illustrated in panel (c) of Figure 9. Once
$k_{y}/k_{x}$ increases to $\hat{k}_{discon}$, the phase difference between
$\delta\rho_{n}$ and $\delta v_{n,y}$ jumps from $\pi$ to zero because of the
sign change of the wave frequency. Furthermore, panel (c) of Figure 9 depicts
that the phase difference between $\delta\rho_{n}$ and $\delta v_{n,y}$ is
almost zero when $k_{y}/k_{x}\gtrsim\hat{k}_{discon}$. This can be understood from Equation (44) with $\Gamma_{n}\approx-\mathrm{i}k_{y}c_{s}$, as suggested by Equation (46) for $k_{y}/k_{x}>\hat{k}_{discon}$, which results in the relation $\delta\rho_{n}/\rho_{n}\approx\delta v_{n,y}/c_{s}$; i.e.,
$\delta\rho_{n}$ and $\delta v_{n,y}$ are almost in phase and are associated
with an acoustic wave propagating in the $y$-direction. The resulting
$v_{ph,y}$ of the acoustic wave points to the positive $y$-direction in the
frame comoving with the neutrals.
Panel (a) of Figure 9 also shows that when $k_{y}/k_{x}>\hat{k}_{discon}$, the phase difference between $\delta\rho_{i}$ and $\delta\rho_{n}$ increases with $k_{y}/k_{x}$, which can be understood from Equation (22). Since $\Gamma_{i}=\Gamma_{n}+\mathrm{i}(k_{x}V_{d,x}+k_{y}V_{d,y})$, the term $k_{x}V_{d,x}+k_{y}V_{d,y}$ is nearly zero at $k_{y}/k_{x}=\hat{k}_{discon}$ and thus is negligible compared to the recombination rate $2\beta\rho_{i}$, resulting in the ionization–recombination equilibrium. However, as $k_{y}/k_{x}$ increases from $\hat{k}_{discon}$, the term $k_{x}V_{d,x}+k_{y}V_{d,y}$, which is neglected in deriving Equation (45), increases and thus becomes comparable to and even larger than $2\beta\rho_{i}$ at a large value of $k_{y}$ in the term $\Gamma_{i}+2\beta\rho_{i}$ of Equation (22) for the ion continuity equation. Consequently, the ionization–recombination equilibrium is poorly attained, and the phase difference between $\delta\rho_{i}$ and $\delta\rho_{n}$ shifts significantly away from zero at a large value of $k_{y}/k_{x}$.
As a result of the aforementioned phase shift between $\delta\rho_{i}$ and $\delta\rho_{n}$, the $x$-component of the linearized momentum equation for the neutrals, i.e., Equation (19), implies that the phase difference between $\delta\rho_{n}$ and $\delta v_{n,x}$ shifts accordingly away from $\sim-\pi/4$ (i.e., a typical phase difference for the 1D drag instability; see GC, 20) to a large negative value as $k_{y}/k_{x}$ increases from $\hat{k}_{discon}$. Together with the continuity equation for the neutrals described by Equation (18), the phase shift leads to the change of the growth rate with $k_{y}/k_{x}$ and yields no growth at a certain large value of $k_{y}/k_{x}$ ($\sim 10^{2}$), where $\delta\rho_{n}$ and $\delta v_{n,x}$ are almost out of phase, as shown in panel (b) of Figure 9. At this point, the wave is no longer unstable but is transformed into an acoustic wave, with $v_{ph,x}$ pointing in the negative $x$-direction in the comoving frame of the neutrals. The $v_{ph,y}$ of the acoustic wave points in the positive $y$-direction in the frame comoving with the neutrals, as was explained earlier in this subsection.
Figure 9: Phase differences between perturbations as a function of
$k_{y}/k_{x}$ for the unstable mode in the fiducial model of an oblique shock
with $\theta_{0}=45^{\circ}$. The cases of the four values of $k_{x}$ are
shown. The phase differences are not presented when the growth rate is zero
(refer to the left panel of Figure 8).
### 3.4 Growth of an unstable mode and shock obliquity
The distinct behaviors of wave frequency for a large value of $k_{y}/k_{x}$ –
frequency jump versus discontinuity – between Figure 2 (perpendicular shock
with $\theta_{0}=90^{\circ}$) and Figure 8 (oblique shock with
$\theta_{0}=45^{\circ}$) suggests that a shock with a proper range of the
oblique angle between $45^{\circ}$ and $90^{\circ}$ can allow for the unstable
modes with both behaviors of wave frequency. Figure 10 shows the results for
the same shock model with $\theta_{0}=80^{\circ}$. Indeed, the right panel of
the figure shows that a frequency jump from negative to positive values
happens at $k_{y}/k_{x}\approx
V_{n,x}/(c_{s}-V_{n,y})\equiv\hat{k}_{jump}\approx 27$, and a frequency
discontinuity occurs at an even smaller scale where
$k_{y}/k_{x}=|V_{d,x}|/V_{d,y}\equiv\hat{k}_{discon}\approx 37$. Consequently,
the growth rate of the modes with large $k_{x}$ (i.e., $20k_{fid}$ and
$50k_{fid}$) rises around $\hat{k}_{jump}$ and the growth rates all go to zero
at $\hat{k}_{discon}$, as shown and expected in the left panel of Figure 10.
Note that $\hat{k}_{jump}$ involves $V_{n,y}$ for an oblique shock because the
Doppler-shift frequency is $k_{x}V_{n,x}+k_{y}V_{n,y}$. What happens is that
as the oblique angle $\theta_{0}$ decreases from $90^{\circ}$, $V_{n,y}$ and
thus $V_{d,y}$ start deviating from zero such that
$\hat{k}_{jump}<\hat{k}_{discon}<\infty$. Consequently, the modes with
behaviors of the frequency jump and discontinuity both appear in the case for
$\theta_{0}=80^{\circ}$. As $\theta_{0}$ continues to decrease (i.e. the shock
is more oblique) until $\hat{k}_{discon}$ becomes smaller than
$\hat{k}_{jump}$, the mode with the behavior of a frequency jump disappears at
$k_{y}/k_{x}=\hat{k}_{jump}$, leaving the sole phenomenon of the frequency
discontinuity at $k_{y}/k_{x}=\hat{k}_{discon}$ such as the case for
$\theta_{0}=45^{\circ}$. At the location of $x\approx 0.2$ pc in the fiducial model, $\hat{k}_{jump}=\hat{k}_{discon}$ occurs between $\theta_{0}=78^{\circ}$ and $79^{\circ}$; i.e., it occurs when the shock is mildly
oblique. Analogous to perpendicular shocks, the presence of $\hat{k}_{jump}$
enables substantial growth of the slowly propagating unstable mode (i.e.,
unlimited by the shock width) in a mildly oblique shock. Consequently, the
range of $\hat{k}_{jump}$ is similar to that for the perpendicular shocks in
the typical environments of star-forming clouds.
Figure 10: Same as Figures 2 & 8 but for $\theta_{0}=80^{\circ}$. The wave frequency is not presented when the growth rate is zero.
Because the slowly traveling mode with substantial growth appears only in a
background environment close to a perpendicular shock, the growth of the drag
instability in a moderately or exceedingly oblique shock is still limited by
the short time span for the unstable mode to remain within a shock. We explore
this issue by comparing the C-shock model described by the fiducial model and
by the model V06 from Table 2 in GC (20). The pre-shock conditions of model
V06 are given by $n_{0}=200$ cm-3, $v_{0}=6$ km/s, $B_{0}=10\mu$G, and
$\chi_{i0}=5$. Model V06 is discussed in GC (20) because it exhibits a larger
MTG than that of the fiducial model in a 1D shock due to its broader shock
width. The MTG is easily computed for a 1D shock, whereas the same calculation
involves a more elaborate work for a 2D shock. In general, the growth rate
$\Gamma_{grow}$ of a mode with a particular wave frequency $\omega_{wave}$ is
a function of both $k_{x}$ and $k_{y}$, which vary with $x$ in a 2D shock. In
this work, we do not intend to perform an accurate analysis to identify the
combination of $k_{x}(x)$ and $k_{y}(x)$ that provides the MTG of the unstable
mode. Rather, we keep $k_{y}$ uniform but allow $k_{x}$ to change with $x$
across a shock for a given wave frequency of an unstable mode. We then
estimate the MTG of an oblique shock by varying the wave frequency for a few
constant $k_{y}$, which we refer to as MTG${}_{k_{y}}$ in short. Comparing
MTG${}_{k_{y}}$ for different $k_{y}$ and $\theta_{0}$ would provide the
guidance on how the MTG varies with the initial oblique angle $\theta_{0}$.
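For concreteness, one way to picture the MTG${}_{k_{y}}$ bookkeeping is to exponentiate the local growth rate accumulated while the wave crosses the shock at its phase speed, consistent with the crossing-time expression in Section 2.4; the sketch below uses made-up profiles purely to illustrate this accounting, whereas the actual calculation uses the eigenvalues of Equation (25) following GC (20).

```python
import numpy as np

# Schematic MTG_{k_y} estimate: exp( integral of Gamma_grow dt ) with
# dt = dx/(|omega_wave|/k_x). The profiles below are hypothetical stand-ins.
pc = 3.086e18
x = np.linspace(0.0, 0.4, 400) * pc                     # assumed shock width ~0.4 pc
Gamma_grow = 6e-12 * np.exp(-((x/pc - 0.2)/0.1)**2)     # made-up growth-rate profile, s^-1
k_x        = np.full_like(x, 1/(0.015*pc))              # uniform k_x (illustrative)
omega_wave = 3e-11                                      # |wave frequency| in the shock frame, s^-1

dt = np.gradient(x) * k_x / omega_wave                  # time spent in each dx
MTG_ky = np.exp(np.sum(Gamma_grow * dt))
print(f"schematic MTG_ky ~ {MTG_ky:.1f}")               # of order 10 for these made-up profiles
```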
The results are enumerated in Table 1 for the fiducial model and Table 2 for
model V06. Given the $k_{y}$ of an unstable mode and $\theta_{0}$ for each
C-shock model, the tables display MTG${}_{k_{y}}$ with the corresponding mode
frequency $\omega_{wave}$ as well as the value of $k_{x}$ (in terms of
$k_{y}/k_{x}$) ranging from the beginning to the end of the C-shock width.
Table 1 shows that in the fiducial shock model, MTG${}_{k_{y}}$ does not vary
significantly with $\theta_{0}$. The smaller the $\theta_{0}$, the larger the
compression of the density and the magnetic field (see Figure 7), leading to a
slightly larger growth rate. However, a smaller $\theta_{0}$ results in a
slightly narrower shock width (see Figure 7 and the parameter $L_{shock}$ in
Table 1). As a result of the competition between these two moderate effects,
the oblique angle does not noticeably affect the overall growth of the drag
instability in the fiducial model. Moreover, when $k_{y}/k_{fid}\lesssim 1$, MTG${}_{k_{y}}$ is comparable to the MTG for the 1D shock ($=9.9$; refer to GC, 20), and it is smaller by more than a factor of 2 for $k_{y}/k_{fid}=10$. This can be expected from the range of the resulting $k_{y}/k_{x}$ shown in Table 1. When $k_{y}/k_{x}\ll 1$, the instability behaves as a 1D mode for which $k_{y}=0$. On the other hand, when $k_{y}/k_{x}\gtrsim 1$, the overall growth rate decreases within the shock, and thus MTG${}_{k_{y}}$ is small.
Figure 11: The amplitude of perturbations normalized by the $y$ component of
the magnetic perturbation $|\delta B_{y}|/B$ for the unstable mode with
$k_{y}/k_{fid}=0.1$ and $\omega_{wave}=-2\times 10^{-11}$ s$^{-1}$ in the fiducial
model (left panel) and model V06 (right panel) of the C-shock with
$\theta_{0}=45^{\circ}$.
Similar to the fiducial model, Table 2 shows that MTG${}_{k_{y}}$ in model V06 does not vary significantly with $\theta_{0}$ either, although in the case of $k_{y}/k_{fid}=0.1$, MTG${}_{k_{y}}$ increases moderately from 36 for $\theta_{0}=70^{\circ}$ to 47 for $\theta_{0}=20^{\circ}$. This trend actually also appears in the fiducial model with $k_{y}/k_{fid}=0.1$ but is less prominent (see Table 1). For a smaller $\theta_{0}$, the effect of the slightly larger growth rate outweighs the effect of the slightly narrower width, resulting in a larger MTG${}_{k_{y}}$. Evidently, the wider shock in model V06 (see the parameter $L_{shock}$ in Tables 1 and 2) allows an unstable mode to grow more and therefore makes this trend clearer. In contrast, the two competing effects are comparable for $k_{y}/k_{fid}=1$ and 10; hence, MTG${}_{k_{y}}$ changes less noticeably with $\theta_{0}$. Similar to the fiducial model, the overall decrease in MTG${}_{k_{y}}$ with increasing $k_{y}$ also occurs in model V06, for the same reason. Tables 1 and 2 also show that MTG${}_{k_{y}}$ in model V06 is in general larger than that in the fiducial model, which is expected because the shock in model V06 has a broader width, giving the instability more time to grow. In summary, for a C-shock with an oblique angle $\lesssim 70^{\circ}$, we expect that the MTG of the drag instability is about 10 in the fiducial model and about 30–47 in model V06. Therefore, we expect that the unstable mode responsible for the MTG has $k_{y}/k_{x}\lesssim 1$.
Figure 11 shows the magnitude of the perturbations across the shock width for
the unstable mode corresponding to $k_{y}/k_{fid}=0.1$ and
$\theta_{0}=45^{\circ}$ in Tables 1 (left panel) and 2 (right panel). It is
evident from the figure that the magnitude of the density perturbation is
always larger than that of the velocity and magnetic field perturbations of
the unstable mode. In fact, it is true for all cases listed in Tables 1 and 2.
The density enhancement is expected to play a critical role in the dynamics of
the drag instability in an oblique shock as well.
When $\theta_{0}\lesssim 20^{\circ}$ in our fiducial model, $B_{y}$ and
$\rho_{n}$ are compressed extremely quickly near the beginning and the end of
the C-shock width, respectively. Therefore, the WKBJ approximation with
$k_{x}=k_{fid}$ becomes invalid in these shock regions. Except for these
regions, the basic behavior of the drag instability for $\theta_{0}\lesssim
20^{\circ}$ is similar to that for $\theta_{0}=45^{\circ}$, shown in Figure 8
at $x\approx 0.2$ pc. When $\theta_{0}$ is smaller than the critical angle
$\theta_{crit}\approx 6^{\circ}$, the background state admits two additional
solutions with field reversal, referred to as intermediate shocks (e.g.,
Wardle, 1998; Chen & Ostriker, 2014). In this study, we restrict ourselves to
the oblique shocks without field reversal and leave the instability analysis
for intermediate shocks to a future work.
Table 1: MTG${}_{k_{y}}$ for various $k_{y}$ and $\theta_{0}$ in the fiducial model

$k_{y}/k_{fid}$ | $\theta_{0}$ | $L_{shock}$ (pc) | $\omega_{wave}$ (1/s) | $k_{y}/k_{x}$ | MTG${}_{k_{y}}$
---|---|---|---|---|---
0.1 | $70^{\circ}$ | 0.48 | $-3\times 10^{-11}$ | 0.035–0.005 | 10.0
0.1 | $45^{\circ}$ | 0.40 | $-2\times 10^{-11}$ | 0.052–0.007 | 10.3
0.1 | $20^{\circ}$ | 0.24 | $-2\times 10^{-11}$ | 0.052–0.005 | 10.9
1 | $70^{\circ}$ | 0.48 | $-3\times 10^{-11}$ | 0.35–0.05 | 9.57
1 | $45^{\circ}$ | 0.40 | $-3\times 10^{-11}$ | 0.35–0.05 | 9.49
1 | $20^{\circ}$ | 0.24 | $-3\times 10^{-11}$ | 0.35–0.05 | 9.55
10 | $70^{\circ}$ | 0.48 | $-5\times 10^{-11}$ | 1.97–0.32 | 4.4
10 | $45^{\circ}$ | 0.40 | $-5\times 10^{-11}$ | 2.4–0.32 | 3.8
10 | $20^{\circ}$ | 0.24 | $-5\times 10^{-11}$ | 2.4–0.08 | 3.4

Note. — Given a uniform value of $k_{y}$ (in units of $k_{fid}$), the mode frequency $\omega_{wave}$ and the corresponding range of $k_{x}(x)$ (presented in terms of $k_{y}/k_{x}$) associated with the MTG at a constant $k_{y}$ (i.e., MTG${}_{k_{y}}$) of the drag instability are listed for the fiducial model of a steady C-shock with various oblique angles $\theta_{0}$. The shock width $L_{shock}$ is estimated using Equation (A19) in Chen & Ostriker (2012).
Table 2: Same as Table 1 but for the C-shock model given by model V06.

$k_{y}/k_{fid}$ | $\theta_{0}$ | $L_{shock}$ (pc) | $\omega_{wave}$ (1/s) | $k_{y}/k_{x}$ | MTG${}_{k_{y}}$
---|---|---|---|---|---
0.1 | $70^{\circ}$ | 2.08 | $-2\times 10^{-11}$ | 0.062–0.011 | 36
0.1 | $45^{\circ}$ | 1.70 | $-2\times 10^{-11}$ | 0.062–0.009 | 40
0.1 | $20^{\circ}$ | 1.0 | $-2\times 10^{-11}$ | 0.063–0.007 | 47
1 | $70^{\circ}$ | 2.08 | $-3\times 10^{-11}$ | 0.42–0.07 | 31
1 | $45^{\circ}$ | 1.70 | $-3\times 10^{-11}$ | 0.42–0.07 | 30
1 | $20^{\circ}$ | 1.0 | $-3\times 10^{-11}$ | 0.45–0.05 | 31
10 | $70^{\circ}$ | 2.08 | $-7\times 10^{-11}$ | 1.72–0.33 | 7.3
10 | $45^{\circ}$ | 1.70 | $-8\times 10^{-11}$ | 1.75–0.28 | 5.9
10 | $20^{\circ}$ | 1.0 | $-8\times 10^{-11}$ | 1.73–0.24 | 5.7
## 4 Summary and Discussions
In this work, we extend the study of the drag instability in 1D perpendicular
C-shocks by GC (20) to 2D perpendicular and oblique C-shocks. We focus on the
fiducial model for an isothermal steady C-shock, with the pre-shock conditions given by the same fiducial model as in GC (20) and typical of the environments of star-forming clouds. The WKBJ linear analyses are subsequently performed based
on the background states in the fiducial model. To understand the underlying
physics for the linear results, we make an attempt to derive simplified
dispersion relations, aided by the auxiliary analysis of phase differences
between perturbation quantities. We observe that the drag instability remains
in a 2D shock, and its behavior in general depends on $k_{y}/k_{x}$. When
$k_{y}/k_{x}\lesssim 1$ (i.e., transversely large-scale modes), the growth
rate $\Gamma_{grow}$ and wave frequency $\omega_{wave}$ of the drag
instability in a 2D shock are similar to those in a 1D shock, which is
insensitive to the initial oblique angle $\theta_{0}$ of the shock. When
$k_{y}/k_{x}\gtrsim 1$ (i.e., transversely small-scale modes), the drag
instability is characterized by an unstable mode coupled with the acoustic
mode primarily along the $y$-direction (note that the acoustic mode is the
slow mode in the case of a perpendicular shock). Additionally, in contrast to
the perpendicular shock, $V_{d,y}$ exists in an oblique shock. Therefore,
there exists a particular mode for an oblique shock with
$k_{y}/k_{x}=\hat{k}_{discon}$ where the growth rate is zero and discontinuity
in the wave frequency appears (see Equation(46)).
When the shock is less oblique (i.e., $\theta_{0}\gtrsim 80^{\circ}$ in the
fiducial model), there exists a jump transition of wave frequency in the shock
frame – from the negative Doppler-shift frequency due to the shock flow to the
positive acoustic wave frequency in the $y$-direction – for a mode with
$k_{y}/k_{x}\sim\hat{k}_{jump}$. Owing to the small wave frequency in the
shock frame, this unstable mode propagates slowly within a shock and thus has
sufficient time to potentially grow to a nonlinear phase, thereby contributing
to the maximum growth. While the density enhancement of the ions approximately
lies in phase with that of the neutrals by means of the ionization equilibrium
for $k_{y}/k_{x}<\hat{k}_{jump}$, the same phase overlap for the density
perturbations is maintained by the fast acoustic wave for
$k_{y}/k_{x}>\hat{k}_{jump}$. For the mode with an exceedingly large $k_{x}$,
there is no growing mode for $k_{y}/k_{x}\lesssim 1$, as the drag instability
is suppressed by the pressure effect (Gu et al., 2004; GC, 20). However,
unstable modes appear for $k_{y}/k_{x}\sim\hat{k}_{jump}$, as $k_{x}$ is large
enough for the Doppler-shift frequency of the slowly traveling wave to
dominate over the ionization rate, which in turn produces a proper phase
difference between $\delta v_{n,x}$ and $\delta\rho_{n}$ for the drag
instability to occur.
On the other hand, when the shock is more oblique (i.e., $\theta_{0}\lesssim
80^{\circ}$ in the fiducial model), this slowly propagating unstable mode
disappears. The maximum growth of the drag instability is limited by the short
time span of an unstable mode to stay within a shock and hence is given by
MTG, as is the case for a 1D perpendicular shock. We compute $MTG_{k_{y}}$,
i.e., the MTG for a constant $k_{y}$, to infer the MTG of the drag instability
for a given $\theta_{0}$. We find that the MTG of the drag instability is
contributed by the mode with $k_{y}/k_{x}\ll 1$ (i.e., an almost 1D mode) and is
expected to be about 10, insensitive to the initial oblique angle $\theta_{0}$
in the fiducial model. We also conduct the linear analysis for the C-shock
model V06, which has a larger shock width than that of the fiducial model and
thus exhibits a larger MTG in the 1D shock (GC, 20). We find that the MTG of
the drag instability arises from the mode with $k_{y}/k_{x}\ll 1$ as well and
increases from about 36 to 47 as the initial oblique angle $\theta_{0}$
decreases from 70∘ to 20∘. The overall larger MTG in model V06 is expected for
a shock with a larger width. A larger MTG for a smaller $\theta_{0}$ is
primarily caused by the stronger shock compression. In all the cases that we
consider (see Tables 1 and 2, as well as the case of the perpendicular shock),
the magnitude of the density perturbations is much larger than that of the
velocity and magnetic field perturbations (e.g., Figures 6 & 11), implying
that the density enhancement predominantly governs the dynamics of the
instability in the linear regime.
Self-gravity is not considered in our linear analysis in order to study and
present the basic properties of the drag instability in a clean physical
picture. The background magnetic fields with different initial oblique angles
tend to be approximately parallel to the shock front during shock compression
(see Figure 7). In this work, the minimal value of $k_{y}/k_{fid}$ that we
show for the results is 0.01. This transversely large-scale mode has a
transverse wavelength larger than the Jeans scale ($\sim
c_{s}/\sqrt{G\rho_{n}}$) and therefore may be subject to the gravitational
instability along the compressed field lines primarily in the $y$-direction.
However, it should be kept in mind that the background state of the shock is
not static; thus, applying the Jeans criterion is debatable. Our analysis
based on $MTG_{k_{y}}$ suggests that the transversely large-scale mode would
give the MTG of the drag instability within a shock (see Tables 1 & 2), except
when the shock is mildly oblique. (The transversely large-scale mode with
$k_{y}/k_{fid}=0.1$ shown in the tables is marginally gravitationally stable
against thermal pressure along the field direction, which lies almost in the
$y$-direction due to shock compression. Having said that, we should bear in
mind that the shock is not a static structure for the Jeans criterion to
reasonably apply.)
Considered together, after the most unstable mode of the drag instability sets
in, it could grow to become gravitationally unstable more quickly in the
direction of the field line against thermal pressure as a result of its larger
transverse scale. In this regard, for a laminar background, it could be
interesting to investigate the interaction between the drag and gravitational
instabilities by including self-gravity in the linear analysis.
While we assume that C-shocks arise from supersonic, turbulent flows, the
effects of turbulence are not actually modeled in our linear analysis using
any phenomenological approaches, such as turbulent diffusion or turbulent
pressure. The turbulent diffusion, if it exists, may weaken/eliminate
transversely small-scale modes of the drag instability or may promote the drag
instability by enhancing ambipolar diffusion in a shock (e.g., Li & Nakamura,
2004). The supersonic turbulent pressure, if it exists, may support the
shocked gas against gravitational collapse along the compressed field lines on
the scales smaller than the turbulent Jeans scale. On the other hand,
supersonic turbulence in shocks is known to dissipate quickly over one
turbulent crossing time, provided that there is no energy supply from the
turbulent injection scale by feedback processes from protostars (e.g., Mac Low
et al., 1998; Stone et al., 1998; Mac Low, 1999; Nakamura & Li, 2005).
Nevertheless, even large-scale modes within C-shocks would not be affected by
the decay of the large-scale turbulence, as that would only reduce the
strength of C-shocks present in the environment, rather than modifying the
behavior of an individual C-shock during its passage. In a nearly
perpendicular shock, the slowly propagating modes responsible for the maximum
growth are transversely small-scale modes (i.e.,
$k_{y}/k_{x}\sim\hat{k}_{jump}\sim 10$ in our models). Analogous to dense
regions seeded by small-scale turbulence in a shock-compressed layer (Chen &
Ostriker, 2014), the small-scale turbulence may help initiate the perturbation
of these small-scale modes of the drag instability in compressed gas within
shocks. As the modes grow to the nonlinear regime, the nonlinear saturation of
the drag instability by itself could be the prime candidate to drive the
turbulence within a C-shock. Compression of background turbulence is likely to
be negligible in comparison, while the background turbulence driven by
stellar feedback operates on much larger time and length scales than those of
an individual C-shock.
A possible consequence of the turbulence driven by drag-induced instabilities
is clump formation. One of the notable examples is the clump formation of dust
in the turbulence driven by the streaming instability due to the dust–gas drag
in a protoplanetary disk under certain favorable conditions (Johansen &
Youdin, 2007). Moreover, we show that the density perturbation dominates over
the velocity and magnetic field perturbations in the linear regime of the drag
instability, which may hint at the dynamical importance of density enhancement.
The observations of nearby molecular clouds with the Herschel Space
Observatory suggested that about 70%-80% of dense cores lie within filaments
(Polychroni et al., 2013; Könyves et al., 2015). The numerical simulation
conducted by Chen & Ostriker (2012) revealed that a pair of steady C-shocks
propagate away from the shocked layer of two colliding flows where prestellar
cores and filaments can form (Chen & Ostriker, 2014). The drag instability in
a steady C-shock might thus be potentially capable of driving core formation
outside of filaments. Alternatively, the drag instability is likely to nonlinearly
saturate, perhaps at a level as high as the velocity difference across the
shock front. This would substantially limit the ability of the instability to
drive a subsequent gravitational instability beyond what has already been
produced by the jump in density across the shock. In any case, whether the
nonlinear outcome of the density enhancement by the drag instability in a
C-shock can facilitate the clump/core formation is beyond the reach of the
linear analysis. Given the small longitudinal wavelength of the drag
instability within a C-shock, nonideal MHD simulations with high resolutions
are required to explore this possibility (see the discussion section in GC,
20).
Apart from the steady C-shock, a transient C-shock appearing in the shocked
layer of two colliding flows has been modeled to be a promising site for the
major formation of cores and filamentary structures (Nakamura & Li, 2008; Chen
& Ostriker, 2012, 2014). We restrict ourselves to the drag instability in
steady C-shocks because the background state has settled to an equilibrium
state and thus provides an easy test bed for demonstrating the existence of
the drag instability in a linear analysis. Nevertheless, the ion-neutral drift
is expected to be extremely fast in a transient C-shock, perhaps favoring the
occurrence of the drag instability (Gu et al., 2004). Given the time-dependent
nature of a transient C-shock, setting up an appropriate background state for
a perturbation theory is expected to be challenging.
Undoubtedly, the aforementioned issues and possible implications that we have
discussed related to the drag instability are highly speculative and
intriguing, requiring prudent and elaborate studies for further investigation.
After all, the interplay between ambipolar diffusion, turbulence, shocks,
gravity, etc. for the dynamical processes of star formation has been a complex
and broad topic. From a theoretical perspective, the 2D linear analysis
conducted in this work based on the 1D analysis presented in Gu et al. (2004)
and GC (20) would advance our understanding of the instabilities of
astrophysical plasma in general. In practice, our framework provides the basic
properties of the drag instability to be studied in future nonideal MHD
simulations for the confirmation of their existence and for understanding the
nonlinear outcome of the linear instability in C-shocks.
We are grateful to Che-Yu Chen, Min-Kai Lin, Hau-Yu Baobab Liu, and Chien-
Chang Yen for useful discussions. We would also like to thank the referee for
helpful comments that greatly improved the manuscript, especially the contents
of the discussion section. This work has been supported by the Ministry of
Science and Technology in Taiwan through the grant MOST 109-2112-M001-052.
## References
* André et al. (2014) André, P., Di Francesco, J., Ward-Thompson, D., Inutsuka, S. -I., Pudritz, R. E., Pineda, J. E. 2014, Protostars and Planets VI, Henrik Beuther, Ralf S. Klessen, Cornelis P. Dullemond, and Thomas Henning (eds.), University of Arizona Press, Tucson, 914 pp., p.27-51
* Ballesteros-Paredes et al. (2007) Ballesteros-Paredes, J., Klessen, R. S., Mac Low, M.-M., et al. 2007, Protostars and Planets V, B. Reipurth, D. Jewitt, and K. Keil (eds.), University of Arizona Press, Tucson, 951 pp., p.63-80
* Blitz et al. (2007) Blitz, L., Fukui, Y., Kawamura, A., Leroy, A., Mizuno, N., & Rosolowsky, E., 2007, Protostars and Planets V. Univ. Arizona Press, Tucson, AZ, p. 81
* Chen & Ostriker (2012) Chen, C.-Y., & Ostriker, E. 2012, ApJ, 744, 124
* Chen & Ostriker (2014) Chen, C.-Y., & Ostriker, E. 2014, ApJ, 785, 69
* Dalgarno (2006) Dalgarno, A. 2006, Proceedings of the National Academy of Science, 103, 12269
* Draine (1980) Draine, B. T. 1980, ApJ, 241, 1021
* Draine & McKee (1993) Draine, B. T., & McKee, C. F. 1993, ARA&A, 31, 373
* Draine et al (1983) Draine, B. T., Roberge, W. G., & Dalgarno, A. 1983, ApJ, 264, 485
* Elmegreen & Scalo (2004) Elmegreen, B. G., & Scalo, J. 2004, Annu. Rev. Astron. Astrophys., 42, 211
* Engargiola et al. (2003) Engargiola, G., Plambeck, R. L., Rosolowsky, E., & Blitz, L., 2003, ApJS, 149, 343
* Falle et al. (2009) Falle, S.A.E.G., Hartquist, T.W., van Loo, S. In: Pogorlov, N.V., Audit, E., Colella, P., Zank, G.P. 2009, (eds.) Numerical Modeling of Space Plasma Flows, p. 80. Astronomical Society of the Pacific
* Flower & Pineau Des Forêts (1998) Flower, D. R., & Pineau Des Forêts, G. 1998, MNRAS, 297, 1182
* Flower & Pineau Des Forêts (2010) Flower, D. R., & Pineau Des Forêts, G. 2010, MNRAS, 406, 1745
* Fukui & Kawamura (2010) Fukui, Y., & Kawamura, A. 2010, ARA&A, 48, 547
* Girichidis et al. (2020) Girichidis, P., Offner, S.S.R., Kritsuk, A.G. et al. 2020, Space Sci Rev, 216, 68
* GC (20) Gu, P.-G., & Chen, C.-Y. 2020, ApJ, 898, 67 (GC20)
* Gu et al. (2004) Gu, P.-G., Lin, D. N. C., & Vishniac, E. T. 2004, Astrophysics & Space Science, 292, 261
* Gusdorf et al. (2008) Gusdorf, A., Cabrit, S., Flower, D. R., et al. 2008, A&A, 482, 809
* Hennebelle & Falgarone (2012) Hennebelle, P., & Falgarone, E. 2012, Astron. Astrophys. Rev., 20, 55
* Hennebelle & Inutsuka (2019) Hennebelle, P., & Inutsuka, S.-I. 2019, Frontiers in Astronomy and Space Sciences, 6, 5
* Hezareh et al. (2010) Hezareh, T., Houde, M., McCoey, C., et al. 2010, ApJ, 720, 603
* Hezareh et al. (2014) Hezareh, T., Csengeri, T., Houde, M., et al. 2014, MNRAS, 438, 663
* Indriolo & McCall (2012) Indriolo, N., & McCall, B. J. 2012, ApJ, 745, 91
* Jeffreson & Kruijssen (2018) Jeffreson, S. M. R. & Kruijssen, J. M. D. 2018, MNRAS, 476, 3688
* Johansen & Youdin (2007) Johansen, A., & Youdin, A. 2007 ApJ, 662, 627
* Kawamura et al. (2009) Kawamura, A. et al., 2009, ApJS, 184, 1
* Kennicutt & Evans (2012) Kennicutt, R. C., & Evans, N. J. 2012, Annu. Rev. Astron. Astrophys. 50, 531
* Könyves et al. (2015) Könyves, V., André, P., Menśhchikov, A., Palmeirim, P., Arzoumanian, D., Schneider, N., et al. 2015, A&A, 584, 91
* Lehmann & Wardle (2016) Lehmann, A., & Wardle, M. 2016, MNRAS, 455, 2066
* Li & Nakamura (2004) Li, Z.-Y., & Nakamura, F. 2004, ApJ, 609, 83
* Li et al. (2014) Li, H.-B., Goodman, A., Sridharan, T. K., et al. 2014, Protostars and Planets VI, 101
* Li & Houde (2008) Li, H.-B., & Houde, M. 2008, ApJ, 677, 1151
* Mac Low (1999) Mac Low, M.-M. 1999, ApJ, 524, 169
* Mac Low et al. (1998) Mac Low, M.-M., Klessen, R. S., Burkert A. et al., 1998, Phys. Rev. Lett. 80, 2754
* McKee et al. (2010) McKee, C. F., Li, P. S., & Klein, R. I. 2010, ApJ, 720, 1612
* Meidt et al. (2015) Meidt, S. E. et al., 2015, ApJ, 806, 72
* Mestel & Spitzer (1956) Mestel, L., & Spitzer, L. 1956, MNRAS, 116, 503
* Miura et al. (2012) Miura, R. E. et al., 2012, ApJ, 761, 37
* Murray (2011) Murray, N., 2011, ApJ, 729, 133
* Nakamura & Li (2005) Nakamura, F., & Li, Z.-Y. 2005, ApJ, 631, 411
* Nakamura & Li (2008) Nakamura, F., & Li, Z.-Y. 2008, ApJ, 687, 354
* Polychroni et al. (2013) Polychroni, D., Schisano, E., Elia, D., Roy, A., Molinari, S., Martin, P., et al. 2013, ApJ,777, 33
* Shu (1992) Shu, F. H. 1992, in Physics of Astrophysics, Vol. II, ed. F. H. Shu (Mill Valley, CA: Univ. Science Books)
* Smith & Mac Low (1997) Smith, M. D., & Mac Low, M.-M. 1997, A&A, 326, 801
* Spitzer (1956) Spitzer, L. 1956, Physics of Fully Ionized Gases, New York: Interscience Publishers
* Stone (1997) Stone, J. M. 1997, ApJ, 487, 271
* Stone et al. (1998) Stone, J. M., Ostriker, E. C., & Gammie, C. F. 1998, ApJ, 508, 99
* Tang et al. (2018) Tang, K. S., Li, H.-B., & Lee, W.-K. 2018, ApJ, 862, 42
* Tielens (2005) Tielens, A. G. G. M. 2005, The Physics and Chemistry of the Interstellar Medium, Cambridge, UK: Cambridge University Press
* Valdivia et al. (2017) Valdivia, V., Godard, B., Hennebelle, P., et al. 2017, A&A, 600, A114
* Wardle (1990) Wardle, M. 1990, ApJ, 246, 98
* Wardle (1991) Wardle, M. 1991, MNRAS, 251, 119
* Wardle (1998) Wardle, M. 1998, MNRAS, 298, 507
* Xu & Li (2016) Xu, D., & Li, D. 2016, ApJ, 833, 90
* Zweibel (2015) Zweibel, E. 2015, Ambipolar Diffusion, in Magnetic Fields in Diffuse Media, ed. Alexander Lazarian, Elisabete M. de Gouveia Dal Pino, & Claudio Melioli, 407, 285
# Density matrix of the superposition of excitation on coherent states with
thermal light and its statistical properties
Li-yun Hu Department of Physics, Shanghai Jiao Tong University, Shanghai
200030, China Hong-yi Fan Department of Physics, Shanghai Jiao Tong
University, Shanghai 200030, China
(5 July 2008)
###### Abstract
A beam’s density matrix that is described by the superposition of excitation
on coherent states with thermal noise (SECST) is presented, and its matrix
elements in Fock space are calculated. The maximum information transmitted by
the SECST beam is derived. It is larger than that of a coherent light beam and
increases as the excitation photon number increases. In addition, the
nonclassicality of the density matrix is demonstrated by calculating its Wigner
function.
PACS numbers: 42.50.Dv, 03.67.Hk, 03.65.Ud
excitation on coherent states, thermal light, Wigner function
## I Introduction
Recently, much attention has been paid to the excitation on coherent states
(ECS) 1 ; 2 ; 3 ; 4 ; 5 ; 6 . As pointed out in Refs.2 ; 3 , the single photon
ECS causes a classical-to-quantum (nonclassical) transition. The ECSs can be
considered as a generalization of coherent states 7 ; 8 and number
eigenstates. All these states can be used as signal beams in the field of
optical communications, in which the nonclassicality of signals plays an
important role.
However, in reality, signal beams are usually mixed with thermal noise.
Statistical properties of the superposition of (squeezed) coherent states with
thermal light (SCST) have been investigated by calculating the photon number
matrix elements $\left\langle N\right|\rho\left|M\right\rangle$ of SCST’s
density matrix 9 ; 10 . These properties are useful in quantum optics and
quantum electronics (e.g., how lasers work well above threshold, heterodyne
detection of light, etc.) 11 . Some general properties of the density matrices
which describe coherent, squeezed and number eigenstates in thermal noise are
studied in Ref.12 . It is found that the information transmitted by the
superposition of number eigenstates with thermal light (SNET) beam is less
than that by the SCST beam 13 .
In this paper, we investigate statistical properties of the superposition of
ECS with thermal light (SECST). We present the relevant density matrix in Fock
space and derive the Mandel $Q$ parameter. The SECST field can exhibit a
significant amount of super-Poissonian photon statistics (super-PPS) due to the
presence of thermal noise for excitation photon number $m=0$, while for $m\neq
0$ the SECST field can present sub-Poissonian statistics (sub-PPS) when the thermal mean photon number
is less than a threshold value. In addition, the threshold value increases as
$m$ increases. We also calculate the maximum information (channel capacity)
transmitted by the SECST beam, which increases as $m$ increases. In addition,
the nonclassicality of the density matrix is also demonstrated by calculating the
Wigner function of the SECST.
Our paper is arranged as follows. In Sec. II we present the density matrix
$\rho$ that describes the SECST and calculate its matrix elements in Fock
space by using the normal ordered form of $\rho$. The PPS distributions are
discussed in Sec. III. The maximum information is calculated in Sec. IV. Sec. V
is devoted to deriving the Wigner function of the SECST and discussing its
nonclassicality in detail. Conclusions are summarized in the last section.
## II Excitation on coherent states with thermal noise
Firstly, let us briefly review the excitation on coherent states (ECSs). The
ECSs, first introduced by Agarwal and Tara 1 , are the result of successive
elementary one-photon excitations of a coherent state, and are intermediate
states between the Fock state and the coherent state, since they exhibit
sub-Poissonian character. Theoretically, the ECSs can be obtained by
repeatedly operating the photon creation operator $a^{{\dagger}}$ on a
coherent state, so its density operator is
$\rho_{0}=C_{\alpha,m}a^{{\dagger}m}\left|\alpha\right\rangle\left\langle\alpha\right|a^{m},$
(1)
where $C_{\alpha,m}=[m!L_{m}(-\left|\alpha\right|^{2})]^{-1}$ is the
normalization factor,
$\left|\alpha\right\rangle=\exp(-\left|\alpha\right|^{2}/2+\alpha
a^{\dagger})\left|0\right\rangle$ is the coherent state 7 ; 8 , and
$L_{m}\left(x\right)$ is the $m$th-order Laguerre polynomial.
The SECST is described by the density matrix 12
$\displaystyle\rho$
$\displaystyle=\int\frac{d^{2}z}{\pi}P\left(z\right)D\left(z\right)\rho_{0}D^{{\dagger}}\left(z\right),$
(2) $\displaystyle P\left(z\right)$
$\displaystyle=\frac{1}{\bar{n}_{t}}\exp\left[-\frac{\left|z\right|^{2}}{\bar{n}_{t}}\right],$
(3)
where $D\left(z\right)=\exp(za^{{\dagger}}-z^{\ast}a)$ is the displacement
operator, and $\bar{n}_{t}$ is the mean number of thermal photons for
$\rho_{0}\rightarrow\left|0\right\rangle\left\langle 0\right|$. We can easily
prove that $\mathtt{Tr}\rho=1,$ as it should be. In fact,
$\displaystyle\mathtt{Tr}\rho$
$\displaystyle=\int\frac{d^{2}z}{\pi}P\left(z\right)\mathtt{Tr}\left[D\left(z\right)\rho_{0}D^{{\dagger}}\left(z\right)\right]$
$\displaystyle=\int\frac{d^{2}z}{\pi}P\left(z\right)\mathtt{Tr}\left(\rho_{0}\right)$
$\displaystyle=\int\frac{d^{2}z}{\pi}P\left(z\right)=1.$ (4)
### II.1 Normal ordering form of the SECST
For simplicity in our later calculations, we first perform the integration
in Eq.(2) by using the technique of integration within an ordered product
(IWOP) of operators 14 ; 15 . Using the normal ordering form of the vacuum
projector $\left|0\right\rangle\left\langle
0\right|=\colon\exp(-a^{{\dagger}}a)\colon,$ we can recast Eq.(2) in the
following form
$\displaystyle\rho$
$\displaystyle=C_{\alpha,m}e^{-\left|\alpha\right|^{2}}\int\frac{d^{2}z}{\pi}P\left(z\right)D\left(z\right)\colon
a^{{\dagger}m}$ $\displaystyle\times\exp\left(\alpha
a^{{\dagger}}+\alpha^{\ast}a-a^{{\dagger}}a\right)a^{m}\colon
D^{{\dagger}}\left(z\right)$
$\displaystyle=\frac{C_{\alpha,m}}{\bar{n}_{t}}\colon\exp\left(-\left|\alpha\right|^{2}-a^{{\dagger}}a+\allowbreak
a^{{\dagger}}\alpha+a\alpha^{\ast}\right)$
$\displaystyle\times\int\frac{d^{2}z}{\pi}\exp\left[-\frac{1+\bar{n}_{t}}{\bar{n}_{t}}\left|z\right|^{2}\right]$
$\displaystyle\times\exp\left[\left(a^{{\dagger}}-\alpha^{\ast}\right)\allowbreak
z+\left(a-\alpha\right)z^{\ast}\right]$
$\displaystyle\times\left(a^{{\dagger}}-z^{\ast}\right)^{m}\left(a-z\right)^{m}\colon.$
(5)
In the last step of (5), we noticed that for any operator $f(a^{{\dagger}},a)$
$D\left(z\right)f(a^{{\dagger}},a)D^{{\dagger}}\left(z\right)=f(a^{{\dagger}}-z^{\ast},a-z).$
(6)
Making two independent variable displacements,
$a^{{\dagger}}-z^{\ast}\rightarrow\beta^{\ast},a-z\rightarrow\beta,$
(note that the operators $a^{{\dagger}},a$ can be treated as c-numbers within
the normal ordering symbol $\colon\colon$), Eq.(5) can be rewritten as
$\displaystyle\rho$
$\displaystyle=\frac{C_{\alpha,m}}{\bar{n}_{t}}\colon\exp\left(-\left|\alpha\right|^{2}-\frac{1}{\bar{n}_{t}}a^{{\dagger}}a\right)$
$\displaystyle\times\int\frac{d^{2}\beta}{\pi}\beta^{\ast
m}\beta^{m}\exp\left[-\lambda_{t}^{-2}\left|\beta\right|^{2}\right.$
$\displaystyle\left.+\left(\allowbreak\alpha^{\ast}+\allowbreak\frac{a^{{\dagger}}}{\bar{n}_{t}}\right)\beta+\left(\frac{a}{\bar{n}_{t}}+\alpha\right)\beta^{\ast}\right]\colon$
$\displaystyle=\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2}\colon\exp\left(-\left|\alpha\right|^{2}-\frac{1}{\bar{n}_{t}}a^{{\dagger}}a\right)$
$\displaystyle\times\int\frac{d^{2}\beta}{\pi}\beta^{\ast
m}\beta^{m}\exp\left[-\left|\beta\right|^{2}+A^{{\dagger}}\beta+A\beta^{\ast}\right]\colon,$
(7)
where we have set $\lambda_{t}=\sqrt{\bar{n}_{t}/(1+\bar{n}_{t})}$ and
$A=\lambda_{t}(\frac{1}{\bar{n}_{t}}a+\alpha).$ Then using the integration
expression of two-variable Hermite polynomial $H_{m,n}$ 16 ,
$\displaystyle(-1)^{n}e^{-\xi\eta}H_{m,n}\left(\xi,\eta\right)$
$\displaystyle=\int\frac{d^{2}z}{\pi}z^{n}z^{\ast
m}\exp\left[-\left|z\right|^{2}+\xi z-\eta z^{\ast}\right],$ (8)
we can put Eq.(7) into
$\displaystyle\rho$
$\displaystyle=\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2}\colon\left(-1\right)^{m}H_{m,m}\left(A^{{\dagger}},-A\right)$
$\displaystyle\times\exp\left[-\frac{\left(a-\alpha\right)\left(a^{{\dagger}}-\alpha^{\ast}\right)}{\bar{n}_{t}+1}\right]\colon.$
(9)
In particular, when $m=0$, corresponding to the case of superposition of
coherent state with thermal noise, Eq.(9) reduces to
$\rho=\frac{1}{\bar{n}_{t}+1}D\left(\alpha\right)\colon
e^{-\frac{a^{{\dagger}}a}{\bar{n}_{t}+1}}\colon
D^{{\dagger}}\left(\alpha\right),$ (10)
which can be directly checked by using Eqs.(2) and (3) as well as noticing
$\rho_{0}=\left|\alpha\right\rangle\left\langle\alpha\right|.$
Further employing the relation between Hermite polynomial and Laguerre
polynomial 16 ,
$H_{m,n}\left(\xi,\kappa\right)=\left\\{\begin{array}[c]{cc}n!\left(-1\right)^{n}\xi^{m-n}L_{n}^{m-n}\left(\xi\kappa\right),&m>n\\\
m!\left(-1\right)^{m}\kappa^{n-m}L_{m}^{n-m}\left(\xi\kappa\right),&m<n\end{array}\right.,$
(11)
we can see that
$\displaystyle\rho$
$\displaystyle=\frac{1}{L_{m}(-\left|\alpha\right|^{2})}\frac{\bar{n}_{t}^{m}}{(1+\bar{n}_{t})^{m+1}}\colon
L_{m}\left(-A^{{\dagger}}A\right)$
$\displaystyle\times\exp\left[-\frac{\left(a-\alpha\right)\left(a^{{\dagger}}-\alpha^{\ast}\right)}{\bar{n}_{t}+1}\right]\colon.$
(12)
Eqs.(9) and (12) are the normal ordering forms of the SECST. From these it is
convenient to calculate the phase-space distributions, such as the Q-function,
the P-representation, and the Wigner function.
### II.2 The matrix elements $\left\langle N\right|\rho\left|M\right\rangle$
Now we calculate the matrix elements of $\rho$ in Eq.(2) between two number
states $\left\langle N\right|$ and $\left|M\right\rangle,$ i.e., $\left\langle
N\right|\rho\left|M\right\rangle.$ Employing the overcompleteness of coherent
states, one can express the matrix elements $\left\langle
N\right|\rho\left|M\right\rangle$ as
$\left\langle N\right|\rho\left|M\right\rangle=\int\frac{d^{2}\beta
d^{2}\gamma}{\pi^{2}}\left\langle
N\right.\left|\beta\right\rangle\left\langle\beta\right|\rho\left|\gamma\right\rangle\left\langle\gamma\right.\left|M\right\rangle,$
(13)
where the overlap between the coherent state and the number state is given by
$\left\langle\gamma\right.\left|M\right\rangle=\frac{1}{\sqrt{M!}}e^{-\left|\gamma\right|^{2}/2}\gamma^{\ast
M},$ (14)
and the matrix elements
$\left\langle\beta\right|\rho\left|\gamma\right\rangle$ can be obtained from
Eq.(9) due to $\rho^{\prime}$s normal ordering form,
$\displaystyle\left\langle\beta\right|\rho\left|\gamma\right\rangle$
$\displaystyle=\left(-1\right)^{m}\frac{C_{\alpha,m}}{\bar{n}_{t}}\lambda_{t}^{2m+2}e^{-\left|\alpha\right|^{2}/(\bar{n}_{t}+1)}$
$\displaystyle\times\frac{\partial^{2m}}{\partial\tau^{m}\partial\tau^{\prime
m}}\exp\left[-\tau\tau^{\prime}+\lambda_{t}\tau\alpha^{\ast}-\lambda_{t}\alpha\tau^{\prime}\right]$
$\displaystyle\times\exp\left\\{\left(\frac{\alpha+\bar{n}_{t}\allowbreak\gamma}{\bar{n}_{t}+1}+\frac{\lambda_{t}\tau}{\bar{n}_{t}}\right)\beta^{\ast}-\frac{1}{2}\left|\beta\right|^{2}\right.$
$\displaystyle-\left.\frac{1}{2}\left|\gamma\right|^{2}+\left(\frac{\alpha^{\ast}}{\bar{n}_{t}+1}-\frac{\lambda_{t}\tau^{\prime}}{\bar{n}_{t}}\right)\gamma\right\\}_{\tau=\tau^{\prime}=0},$
(15)
where we have used the generating function of two-variable Hermite polynomial
$H_{m,n},$
$H_{m,n}\left(x,y\right)=\left.\frac{\partial^{m+n}}{\partial t^{m}\partial
t^{\prime
n}}\exp\left[-tt^{\prime}+tx+t^{\prime}y\right]\right|_{t=t^{\prime}=0}.$ (16)
When $M=N,$ $\left\langle N\right|\rho\left|N\right\rangle$ is just the photon
number distribution of the SECST. Then, combining Eqs.(15), (13), and (14),
after a lengthy but straightforward calculation, one can get the matrix
elements $\left\langle N\right|\rho\left|M\right\rangle,$ (without loss of
generality, let $M\geqslant N$)
$\displaystyle\left\langle N\right|\rho\left|M\right\rangle$
$\displaystyle=\frac{\left(-1\right)^{N}}{\sqrt{M!N!}}\frac{\lambda_{t}^{2N}C_{\alpha,m}}{\bar{n}_{t}+1}e^{-\left|\alpha\right|^{2}}\frac{\partial^{2m}}{\partial\upsilon^{m}\partial\upsilon^{\prime
m}}$
$\displaystyle.\left\\{e^{\lambda_{t}^{2}\upsilon\upsilon^{\prime}}H_{M,N}\left(\frac{\upsilon^{\prime}}{\bar{n}_{t}+1},-\frac{\upsilon}{\bar{n}_{t}}\right)\right\\}_{\upsilon=\alpha,\upsilon^{\prime}=\alpha^{\ast}},$
(17)
where we have used the integral formula 17
$\int\frac{d^{2}\beta}{\pi}f\left(\beta^{\ast}\right)\exp\left\\{-\left|\beta\right|^{2}+\tau\beta\right\\}=f\left(\tau\right),$
(18)
and another expression of two-variable Hermite polynomial $H_{m,n},$
$H_{m,n}\left(\xi,\kappa\right)=\sum_{l=0}^{\min(m,n)}\frac{m!n!\left(-1\right)^{l}\xi^{m-l}\kappa^{n-l}}{l!\left(n-l\right)!\left(m-l\right)!}.$
(19)
In particular, when $m=0$, noticing $M\geqslant N$ and Eq.(11), Eq.(17)
reduces to
$\displaystyle\left\langle N\right|\rho\left|M\right\rangle$
$\displaystyle=\sqrt{\frac{N!}{M!}}\alpha^{\ast
M-N}\frac{\left(\bar{n}_{t}\right)^{N}}{\left(\bar{n}_{t}+1\right)^{M+1}}$
$\displaystyle\times
e^{-\left|\alpha\right|^{2}/(\bar{n}_{t}+1)}L_{N}^{M-N}\left[-\frac{\left|\alpha\right|^{2}}{\bar{n}_{t}\left(\bar{n}_{t}+1\right)}\right],$
(20)
which is just the Glauber-Lachs formula 9 when
$\bar{n}_{t}=(e^{\beta\omega}-1)^{-1}$. While for $\alpha=0,$ corresponding to
the case of superposition of number state with thermal light, using Eq.(19),
Eq.(17) becomes
$\left\langle N\right|\rho\left|M\right\rangle=\delta_{M,N}P_{N},$ (21)
where ($k_{0}=\max[0,m-N]$)
$P_{N}=\frac{m!N!}{\bar{n}_{t}+1}\sum_{k=k_{0}}^{m}\frac{1}{k!}\frac{\left(\frac{\bar{n}_{t}}{\bar{n}_{t}+1}\right)^{k+N}\left[\bar{n}_{t}\left(\bar{n}_{t}+1\right)\right]^{k-m}}{\left(k+N-m\right)!\left[(m-k)!\right]^{2}}.$
(22)
Eq.(21) is just the result of Ref. 13 .
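As a quick consistency check, the distribution in Eq.(22) must be normalized to unity because $\mathtt{Tr}\rho=1$ (Eq.(4)). The following is a minimal numerical sketch of our own (not part of the original work; the chosen parameters and truncation are illustrative):

```python
# Rough numerical check (our own sketch) that the SNET distribution P_N of
# Eq. (22) sums to one over N for a Fock state mixed with thermal light.
from math import factorial

def p_n(N, m, n_t):
    k0 = max(0, m - N)
    s = sum((n_t / (n_t + 1)) ** (k + N) * (n_t * (n_t + 1)) ** (k - m)
            / (factorial(k) * factorial(k + N - m) * factorial(m - k) ** 2)
            for k in range(k0, m + 1))
    return factorial(m) * factorial(N) / (n_t + 1) * s

print(sum(p_n(N, m=2, n_t=0.7) for N in range(100)))   # expected: ~1.0
```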
## III Sub-Poissonian photon statistics
To see clearly the photon statistics properties of the SECST, in this section,
we turn our attention to the variance of the photon number operator
$\left\langle\left(\Delta\hat{n}\right)^{2}\right\rangle=\left\langle\hat{n}^{2}\right\rangle-\left\langle\hat{n}\right\rangle^{2}.$
In particular, we will examine the evolution of the Mandel $Q$ parameter
defined as
$\displaystyle Q$
$\displaystyle=\frac{\left\langle\left(a^{{\dagger}}a\right)^{2}\right\rangle}{\left\langle
a^{{\dagger}}a\right\rangle}-\left\langle a^{{\dagger}}a\right\rangle$
$\displaystyle=\frac{\left\langle
a^{2}a^{{\dagger}2}\right\rangle-\left\langle
aa^{{\dagger}}\right\rangle^{2}-\left\langle
aa^{{\dagger}}\right\rangle}{\left\langle aa^{{\dagger}}\right\rangle-1},$
(23)
which measures the deviation of the variance of the photon number
distribution of the field state under consideration from the Poissonian
distribution of the coherent state. $Q=1,Q>1$ and $Q<1$ correspond to
Poissonian photon statistics (PPS), super-PPS and sub-PPS, respectively.
In order to calculate the average value in Eq.(23), we first calculate the
value of
$\left\langle\alpha\right|a^{n}a^{{\dagger}m}\left|\alpha\right\rangle.$ In
fact, using
$\left\langle\alpha\right|a^{m+n}a^{{\dagger}m+n}\left|\alpha\right\rangle=\left(m+n\right)!L_{m+n}(-\left|\alpha\right|^{2})$
(24)
and
$\int\frac{d^{2}z}{\pi}z^{n}z^{\ast
m}P\left(z\right)=\bar{n}_{t}^{m}m!\delta_{m,n},$ (25)
we can evaluate (for writing’s convenience, let $L_{m}$ denote
$L_{m}(-\left|\alpha\right|^{2})$)
$\left\langle
a^{{\dagger}}a\right\rangle=\frac{1+m}{L_{m}}L_{m+1}+\bar{n}_{t}-1,$ (26)
and
$\left\langle
a^{2}a^{{\dagger}2}\right\rangle=2\bar{n}_{t}^{2}+\frac{m+1}{L_{m}}\left[4\bar{n}_{t}L_{m+1}+\left(m+2\right)L_{m+2}\right].$
(27)
Substituting Eqs.(26) and (27) into (23) leads to
$\displaystyle Q$
$\displaystyle=\frac{\bar{n}_{t}\left(\bar{n}_{t}-1\right)L_{m}+\left(2\bar{n}_{t}-1\right)\left(m+1\right)L_{m+1}}{\left(1+m\right)L_{m+1}+\left(\bar{n}_{t}-1\right)L_{m}}$
$\displaystyle+\frac{(m+1)(m+2)L_{m+2}-\frac{\left(m+1\right)^{2}}{L_{m}}L_{m+1}^{2}}{\left(1+m\right)L_{m+1}+\left(\bar{n}_{t}-1\right)L_{m}}.$
(28)
At the zero-temperature limit $(\bar{n}_{t}\rightarrow 0)$, Eq.(28) just
reduces to Eq.(2.20) in Ref.1 .
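Eq.(28) is also straightforward to evaluate numerically. Below is a hedged sketch of our own (function and parameter names are not from the paper) using standard Laguerre-polynomial routines; it reproduces, for example, the threshold $\bar{n}_{t}\approx 0.414$ at $\left|\alpha\right|=0$, $m=1$ quoted in the discussion of Fig. 1:

```python
# Our own minimal sketch (not the authors' code) evaluating the Mandel Q of
# Eq. (28) with scipy's Laguerre polynomials L_m(x).
from scipy.special import eval_laguerre

def mandel_q(alpha_abs, n_t, m):
    x = -alpha_abs ** 2
    Lm, Lm1, Lm2 = (eval_laguerre(k, x) for k in (m, m + 1, m + 2))
    num = (n_t * (n_t - 1) * Lm + (2 * n_t - 1) * (m + 1) * Lm1
           + (m + 1) * (m + 2) * Lm2 - (m + 1) ** 2 * Lm1 ** 2 / Lm)
    return num / ((1 + m) * Lm1 + (n_t - 1) * Lm)

print(mandel_q(0.0, 0.414, m=1))   # ~1: the sub-/super-PPS threshold quoted below
print(mandel_q(1.0, 1e-9, m=1))    # n_t -> 0 limit: sub-Poissonian (Q < 1), cf. Ref. 1
```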
Figure 1: (Color online) The evolution of Mandel $Q$ parameter as a function
of ($n_{t},\left|\alpha\right|$) for different values $m.$
In Fig.1, we display the parameter $Q\left(n_{t},\left|\alpha\right|\right)$
as a function of $\left(n_{t},\left|\alpha\right|\right)$ for different values
$m.$ From Fig.1, we see that, for the excitation photon number $m=0$ (see
Fig.1 (a)), $Q\left(\bar{n}_{t}=0,\left|\alpha\right|\right)=1$ corresponding
to coherent state (a PPS); while $Q\left(\bar{n}_{t}\neq
0,\left|\alpha\right|\right)>1$, i.e., the SECST field exhibits a significant
amount of super-PPS due to the presence of $\bar{n}_{t}$. From Fig.1 (b) and
(c), we see that, when $m\neq 0,$ the SECST field presents the sub-PPS when
$\bar{n}_{t}$ is less than a threshold value for a given
$\left|\alpha\right|;$ the threshold value increases as $m$ increases. For
example, when $\left|\alpha\right|=0,$ the threshold values are about 0.414
and 0.481, respectively, for $m=1$ and $m=6$.
## IV Information transmitted by the SECST beam
According to the negentropy principle of Brillouin 18 , the maximum
information $I$ transmitted by a beam is
$I=S_{\max}-S_{act},$ (29)
in which $S_{\max}$ and $S_{act}$ represent the maximum entropy and the actual
entropy, respectively, possessed by the quantum mechanical system described by
a density matrix $\rho$. Here the maximum information $I$ is an ideal one
transmitted through an ideal optical communication system.
For the SECST system, the actual entropy is
$S_{act}=-\mathtt{Tr}\left(\rho\ln\rho\right)=-\sum_{N}\sigma_{N}\ln\sigma_{N},$
(30)
where $\rho=\sum_{N}\sigma_{N}\left|N\right\rangle\left\langle N\right|,$ and
$\sigma_{N}=\left\langle N\right|\rho\left|N\right\rangle.$ $\sigma_{N}$ can
be obtained from Eq.(17), i.e.,
$\displaystyle\sigma_{N}$
$\displaystyle=\frac{\bar{n}_{t}^{N}e^{-\left|\alpha\right|^{2}}C_{\alpha,m}}{\left(\bar{n}_{t}+1\right)^{N+1}}\frac{\partial^{2m}}{\partial\upsilon^{m}\partial\upsilon^{\prime
m}}$
$\displaystyle\times\left\\{e^{\lambda_{t}^{2}\upsilon\upsilon^{\prime}}L_{N}\left(\frac{-\upsilon\upsilon^{\prime}}{\bar{n}_{t}\left(\bar{n}_{t}+1\right)}\right)\right\\}_{\upsilon=\alpha,\upsilon^{\prime}=\alpha^{\ast}},$
(31)
which is independent of the phase of $\alpha.$ On the other hand, for a system
in thermal equilibrium, described by the density matrix $\rho_{th}$, with mean
photon number $\bar{n}_{t}$, its entropy is
$S=-\sum_{N}P_{N}\ln
P_{N}=\ln(1+\bar{n}_{t})+\bar{n}_{t}\ln\frac{\bar{n}_{t}+1}{\bar{n}_{t}},$
(32)
where $P_{N}=\bar{n}_{t}^{N}/\left(\bar{n}_{t}+1\right)^{N+1}$ is obtained from
Eq.(20) under the condition $m=0,\alpha=0.$ Note that the maximum entropy of
the system is equal to the entropy of a system in thermal equilibrium, with an
equal mean number of photons. The mean photon number of the SECST is given by
Eq.(26). Therefore, using Eq.(32), we have
$\displaystyle S_{\max}$
$\displaystyle=\ln\left(\left(1+m\right)\frac{L_{m+1}}{L_{m}}+\bar{n}_{t}\right)$
$\displaystyle+\left(\left(1+m\right)\frac{L_{m+1}}{L_{m}}+\bar{n}_{t}-1\right)$
$\displaystyle\times\ln\left(\frac{\left(1+m\right)L_{m+1}+\bar{n}_{t}L_{m}}{\left(1+m\right)L_{m+1}+\left(\bar{n}_{t}-1\right)L_{m}}\right).$
(33)
Figure 2: (Color online) The maximum information
$I\left(\bar{n}_{t},\left|\alpha\right|\right)$ as a function of
$\left(\bar{n}_{t},\left|\alpha\right|\right)$ for different values: (a) $m=0$,
(b) $m=1$ (truncating the infinite sum at $N_{\max}=70$). Figure 3:
(Color online) The maximum information
$I\left(\bar{n}_{t},\left|\alpha\right|=1\right)$ as a function of
$\bar{n}_{t}$ for different values: (a) $m=0$, (b) $m=1$, (c) $m=2$
(truncating the infinite sum at $N_{\max}=70$).
From Eqs.(29), (30) and (33), we can calculate the maximum information
transmitted by the SECST beam. In Fig. 2, the maximum information
$I\left(\bar{n}_{t},\left|\alpha\right|\right)$ is plotted as a function of
$\left(\bar{n}_{t},\left|\alpha\right|\right)$ for some different values $m$
(truncating the infinite sum at $N_{\max}=70$). From Fig.2, we can see that,
for a given $\bar{n}_{t},$ $I\left(\bar{n}_{t},\left|\alpha\right|\right)$
increases as the value $\left|\alpha\right|$ increases; for given
$\left|\alpha\right|,$ in general,
$I\left(\bar{n}_{t},\left|\alpha\right|\right)$ grows as the value
$\bar{n}_{t}$ increases. In order to see clearly the effect of the
parameter $m$ on $I\left(\bar{n}_{t},\left|\alpha\right|\right),$ we present
a plot in Fig.3, from which it is obvious that
$I\left(\bar{n}_{t},\left|\alpha\right|\right)$ becomes larger due to the
presence of $m$ and increases as $m$ increases. In other words, the maximum
information transmitted by the SECST beam is larger than that by the SCST
($m=0$). The ECS channel can carry more information than that of a
coherent state. In Ref. 13 , Vourdas has pointed out that coherent signals
(of known phase) can transmit more information than number-eigenstate
signals. Thus, among these three beams, the SECST beam can transmit the most
information.
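For illustration, a hedged sketch of our own (restricted to the analytically simple $m=0$ case, where $\sigma_{N}$ follows from the diagonal of Eq.(20) and $\left\langle a^{{\dagger}}a\right\rangle=\left|\alpha\right|^{2}+\bar{n}_{t}$ from Eq.(26)) evaluates $I=S_{\max}-S_{act}$ with the same truncation $N_{\max}=70$ used in the figures:

```python
# Our own hedged sketch of I = S_max - S_act for the m = 0 (SCST) case.
import numpy as np
from scipy.special import eval_laguerre

def scst_capacity(alpha_abs, n_t, n_max=70):
    N = np.arange(n_max + 1)
    a2 = alpha_abs ** 2
    sigma = (n_t ** N / (n_t + 1.0) ** (N + 1) * np.exp(-a2 / (n_t + 1.0))
             * eval_laguerre(N, -a2 / (n_t * (n_t + 1.0))))      # Eq. (20), M = N
    sigma = sigma[sigma > 0]
    s_act = -np.sum(sigma * np.log(sigma))                        # Eq. (30)
    n_bar = a2 + n_t                                              # Eq. (26) at m = 0
    s_max = np.log(1.0 + n_bar) + n_bar * np.log((n_bar + 1.0) / n_bar)   # Eq. (32)
    return s_max - s_act

print(scst_capacity(1.0, 0.5))   # increases with |alpha| and, in general, with n_t
```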
## V The Wigner function of the SECST
### V.1 The Wigner function
The Wigner function (WF) plays an important role in quantum optics, especially
since the WF can be reconstructed from measurements 19 ; 20 . The WF is a powerful
tool to investigate the nonclassicality of optical fields. The presence of
negativity in the WF of an optical field is a signature of its nonclassicality
and is often used to describe the decoherence of quantum states. In this section,
using the normally ordered form of the SECST, we evaluate its WF. For a
single-mode system, the WF is given by 21
$W\left(\gamma,\gamma^{\ast}\right)=\frac{e^{2\left|\gamma\right|^{2}}}{\pi}\int\frac{d^{2}\beta}{\pi}\left\langle-\beta\right|\rho\left|\beta\right\rangle
e^{2\left(\beta^{\ast}\gamma-\beta\gamma^{\ast}\right)},$ (34)
where $\left|\beta\right\rangle$ is the coherent state and $\gamma=x+iy$. From
Eq.(34) it is easy to see that once the normal ordered form of $\rho$ is
known, we can conveniently obtain the WF of $\rho.$
On substituting Eq.(9) into Eq.(34) we obtain the WF of the SECST
$\displaystyle W\left(\gamma,\gamma^{\ast}\right)$
$\displaystyle=\frac{\left(\lambda_{t}^{2}A_{1}^{2}\right)^{m}C_{\alpha,m}}{\pi\left(2\bar{n}_{t}+1\right)}\exp\left\\{-\frac{2\left|\alpha-\gamma\right|^{2}}{2\bar{n}_{t}+1}\right\\}$
$\displaystyle\times\left(-1\right)^{m}H_{m,m}\left(\frac{A_{2}^{\ast}}{A_{1}},-\frac{A_{2}}{A_{1}}\right)$
$\displaystyle=\frac{\left(\lambda_{t}^{2}A_{1}^{2}\right)^{m}\exp\left\\{-\frac{2\left|\alpha-\gamma\right|^{2}}{2\bar{n}_{t}+1}\right\\}}{\pi\left(2\bar{n}_{t}+1\right)L_{m}\left(-\left|\alpha\right|^{2}\right)}L_{m}\left(-\left|A_{2}\right|^{2}/A_{1}^{2}\right),$
(35)
where we have set
$\displaystyle A_{1}^{2}$
$\displaystyle=1-\frac{1}{\left(2\bar{n}_{t}+1\right)\bar{n}_{t}},$
$\displaystyle A_{2}$
$\displaystyle=\frac{\lambda_{t}\left(\bar{n}_{t}+1\right)}{\left(2\bar{n}_{t}+1\right)\bar{n}_{t}}\left(2\bar{n}_{t}\alpha-\alpha+2\gamma\right).$
(36)
Figure 4: (Color online) The evolution of the Wigner function of the SECST
with $\alpha=0.2+0.2i$ for several different values $m$ and $\bar{n}_{t}.$
Noticing that $L_{m}(-\left|\alpha\right|^{2})>0,$ and that
$L_{m}[-\left|A_{2}\right|^{2}/A_{1}^{2}]>0$ when
$1-\frac{1}{\left(2\bar{n}_{t}+1\right)\bar{n}_{t}}>0,$ we see that the WF of the
SECST is always positive under the condition $\bar{n}_{t}>1/2$. In
particular, when $m=0$, Eq.(35) becomes
$W\left(\gamma,\gamma^{\ast}\right)=\frac{1}{\pi\left(2\bar{n}_{t}+1\right)}\exp\left\\{-\frac{2\left|\alpha-\gamma\right|^{2}}{2\bar{n}_{t}+1}\right\\},$
(37)
which corresponds to the thermal state with mean photon number $\bar{n}_{t}$.
While for $\alpha=0,$ $A_{2}\rightarrow
2\gamma\lambda_{t}\left(\bar{n}_{t}+1\right)/[\left(2\bar{n}_{t}+1\right)\bar{n}_{t}],$
$\left|A_{2}\right|^{2}/A_{1}^{2}\rightarrow
4\left|\gamma\right|^{2}\left(\bar{n}_{t}+1\right)/\\{\left(2\bar{n}_{t}+1\right)\left[\left(2\bar{n}_{t}+1\right)\bar{n}_{t}-1\right]\\}\equiv\xi$,
Eq. (35) yields
$W\left(\gamma,\gamma^{\ast}\right)=\frac{\left[\left(2\bar{n}_{t}+1\right)\bar{n}_{t}-1\right]^{m}}{\pi\left(2\bar{n}_{t}+1\right)^{m+1}\left(\bar{n}_{t}+1\right)^{m}}e^{-\frac{2\left|\gamma\right|^{2}}{2\bar{n}_{t}+1}}L_{m}\left(-\xi\right).$
(38)
At the zero-temperature limit, $T\rightarrow 0,\bar{n}_{t}\rightarrow 0,$
Eq.(37) now reduces to
$\frac{1}{\pi}\exp(-2\left|\alpha-\gamma\right|^{2}),$ i.e., the WF of a
coherent state (a Gaussian form), which can be seen from Eq.(2) yielding
$\rho=\left|\alpha\right\rangle\left\langle\alpha\right|$ under the condition
$m=0$; while Eq.(38) becomes
$\frac{1}{\pi}(-1)^{m}e^{-2\left|\gamma\right|^{2}}L_{m}(4\left|\gamma\right|^{2}),$
corresponding to the WF of number state.
Using Eq.(35), the WFs of the SECST are depicted in Fig.4 in phase space with
$\alpha=0.2+0.2i$ for several different values $m$ and $\bar{n}_{t}.$ It is
easy to see that the negative region of the WF gradually disappears as $m$ and
$\bar{n}_{t}$ increase.
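A direct numerical evaluation of Eq.(35) is also straightforward. The following is a minimal sketch of our own (function names and parameter values are purely illustrative and not from the paper); for $m=0$ it reduces to the Gaussian of Eq.(37):

```python
# Hedged sketch (our own) evaluating the Wigner function of Eq. (35).
import numpy as np
from scipy.special import eval_laguerre

def wigner_secst(gamma, alpha, n_t, m):
    lam2 = n_t / (1.0 + n_t)                                   # lambda_t^2
    A1sq = 1.0 - 1.0 / ((2.0 * n_t + 1.0) * n_t)               # Eq. (36)
    A2 = (np.sqrt(lam2) * (n_t + 1.0) / ((2.0 * n_t + 1.0) * n_t)
          * (2.0 * n_t * alpha - alpha + 2.0 * gamma))
    pref = ((lam2 * A1sq) ** m
            / (np.pi * (2.0 * n_t + 1.0) * eval_laguerre(m, -abs(alpha) ** 2)))
    return (pref * np.exp(-2.0 * abs(alpha - gamma) ** 2 / (2.0 * n_t + 1.0))
            * eval_laguerre(m, -abs(A2) ** 2 / A1sq))

# Illustrative values: negativity can appear for small thermal noise only.
print(wigner_secst(0.0 + 0.0j, 0.2 + 0.2j, 0.1, 1))   # n_t < 1/2: can be negative
print(wigner_secst(0.0 + 0.0j, 0.2 + 0.2j, 0.8, 1))   # n_t > 1/2: always positive
```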
### V.2 The marginal distributions of the SECST
We now find the probability distribution of position or momentum, i.e., the
marginal distributions, by integrating the WF over the variable $y$ or
the variable $x$, respectively. Using Eqs.(35) and (36) we can derive (denoting
$\gamma=x+iy,$ $\alpha=q+ip$)
$\displaystyle\mathrm{P}\left(x,\bar{n}_{t}\right)$ $\displaystyle\equiv\int
W\left(x,y\right)dy$
$\displaystyle=\frac{\left(\lambda_{t}^{2}A_{1}^{2}\right)^{m}C_{\alpha,m}}{\sqrt{2\pi\left(2\bar{n}_{t}+1\right)}}\frac{\left[m!\right]^{2}e^{-\frac{2\left(q-x\right)^{2}}{2\bar{n}_{t}+1}}}{\left(2\bar{n}_{t}-1\right)^{m}}$
$\displaystyle\times\sum_{k=0}^{m}\frac{2^{2k-m}\bar{n}_{t}^{k}}{k!\left[(m-k)!\right]^{2}}\left|H_{m-k}\left(E_{1}\right)\right|^{2},$
(39)
where $H_{m}\left(x\right)$ is single variable Hermite polynomial and
$E_{1}=\left[\left(2\bar{n}_{t}-1\right)\alpha+2x+2ip\right]/\sqrt{2\left(2\bar{n}_{t}+1\right)}$.
Eq.(39) is the marginal distribution of the WF of the SECST in the “$x$-direction”.
On the other hand, performing the integration over $dx$ yields the other
marginal distribution in “$y$-direction”,
$\displaystyle\mathrm{P}\left(y,\bar{n}_{t}\right)$
$\displaystyle=\frac{\left(\lambda_{t}^{2}A_{1}^{2}\right)^{m}C_{\alpha,m}}{\sqrt{2\pi\left(2\bar{n}_{t}+1\right)}}\frac{\left[m!\right]^{2}e^{-\frac{2(p-y)^{2}}{2\bar{n}_{t}+1}}}{\left(2\bar{n}_{t}-1\right)^{m}}$
$\displaystyle\times\sum_{k=0}^{m}\frac{2^{2k-m}\bar{n}_{t}^{k}}{k!\left[(m-k)!\right]^{2}}\left|H_{m-k}\left(E_{2}\right)\right|^{2},$
(40)
where
$E_{2}=i\left(2\bar{n}_{t}\alpha-\alpha+2q+2iy\right)/[\sqrt{2\left(2\bar{n}_{t}+1\right)}]$.
As expected, the two marginal distributions are both real.
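As a rough cross-check (again a sketch of our own, reusing the hypothetical wigner_secst function introduced above), numerically integrating the WF of Eq.(35) over $y$ at fixed $x$ should reproduce the marginal $\mathrm{P}\left(x,\bar{n}_{t}\right)$ of Eq.(39):

```python
# Numerical cross-check of the x-marginal; parameter values are illustrative only.
import numpy as np

alpha, n_t, m = 0.2 + 0.2j, 0.8, 1
x = 0.3
y = np.linspace(-6.0, 6.0, 2001)
W = np.array([wigner_secst(x + 1j * yy, alpha, n_t, m) for yy in y])
print(np.trapz(W, y))   # compare against P(x, n_t) evaluated from Eq. (39)
```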
## VI Conclusions
In summary, we have investigated the photon statistics properties of the
SECST, described by the density matrix $\rho$ (2). We have calculated the
matrix elements $\left\langle N\right|\rho\left|M\right\rangle$ in Fock space
and the Mandel $Q$ parameter. It is found that the SECST field exhibits a
significant amount of super-PPS due to the presence of thermal noise
($\bar{n}_{t}$) for excitation photon number $m=0$ and that, for $m\neq 0$ and
a given $\left|\alpha\right|,$ the SECST field presents the sub-PPS when
$\bar{n}_{t}$ is less than a threshold value. In addition, the threshold value
increases as $m$ increases. We have presented the maximum information (channel
capacity) transmitted by the SECST beam. It is shown that the maximum
information transmitted increases as $m$ increases. This implies that among
the coherent signals, the eigen-number signals and the ECS in thermal light,
the last one can transmit the most information. Further, as one of the photon
statistical properties, the Wigner function and the marginal distributions of
the SECST have also been derived, from which one can clearly see the
nonclassicality. The negative region cannot be present when the
average photon number $\bar{n}_{t}$ of thermal noise exceeds $1/2.$ The
marginal distributions are related to the Hermite polynomial.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China
(Grant No 10775097). L.-Y. Hu’s email address is <EMAIL_ADDRESS> or
<EMAIL_ADDRESS>
## References
* (1) G. S. Agarwal, K. Tara, Phys. Rev. A 43, 492 (1991).
* (2) A. Zavatta, S. Viciani, M. Bellini, Science 306, 660 (2004).
* (3) A. Zavatta, S. Viciani, M. Bellini, Phys. Rev. A 72, (2006) 023820.
* (4) Y. Li, H. Jing and M. S. Zhan, arXiv: quant-ph/0610143v1.
* (5) D. Kalamidas, C. C. Gerry, A. Benmoussa, Phys. Lett. A 372, 1937-1940 (2008).
* (6) Truong Minh Duc, Jaewoo Noh, Opt. Commun. 281, 2842 (2008).
* (7) R. J. Glauber, Phys. Rev. 130, 2529 (1963); R. J. Glauber, Phys. Rev. 131, 2766 (1963).
* (8) J. R. Klauder and B. S. Skargerstam, Coherent States, (World Scientific, Singapore, 1985).
* (9) B. R. Mollow and R. J. Glauber, Phys. Rev. 160, 1076 (1967); G. Lachs, Phys. Rev. B 138, 1012 (1965).
* (10) A. Vourdas, Phys. Rev. A 34, 3466 (1986).
* (11) B. Saleh, Photoelectrons Statistics (Springer-Verlag, Berlin, 1978).
* (12) A. Vourdas, Phys. Rev. A 39, 206 (1989).
* (13) A. Vourdas, Phys. Rev. A 37, 3890 (1988).
* (14) H.-Y. Fan, H. R. Zai and J. R. Klauder, Phys. Rev. D 35, 1831 (1987); H.-Y. Fan, H.-L. Lu and Y. Fan, Ann. Phys. 321, 480 (2006).
* (15) A. Wünsche, J. Opt. B: Quantum Semiclass. Opt. 1, R11 (1999).
* (16) A. Wünsche, J. Phys. A: Math. and Gen. 33, 1603 (2000).
* (17) R. R. Puri, Mathematical Methods of Quantum Optics, (Springer-Verlag, Berlin, 2001), Appendix A.
* (18) L. Brillouin, J. Appl. Phys. 24, 1152 (1953); A. Wehrl, Rev. Mod. Phys. 50, 221 (1978).
* (19) K. Vogel and H. Risken, Phys. Rev. A 40, 2847 (1989).
* (20) D. T. Smithey et al., Phys. Rev. Lett. 70, 1244 (1993).
* (21) M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge: Cambridge University Press, 1997).
# Efficient Diffusion Models for Vision: A Survey
Anwaar Ulhaq, Naveed Akhtar, Ganna Pogrebna
###### Abstract
Diffusion Models (DMs) have demonstrated state-of-the-art performance in
content generation without requiring adversarial training. These models are
trained using a two-step process. First, a forward (diffusion) process
gradually adds noise to a datum (usually an image). Then, a backward (reverse
diffusion) process gradually removes the noise to turn it into a sample of
the target distribution being modelled. DMs are inspired by non-equilibrium
thermodynamics and have inherent high computational complexity. Due to the
frequent function evaluations and gradient calculations in high-dimensional
spaces, these models incur considerable computational overhead during both
training and inference stages. This can not only preclude the democratization
of diffusion-based modelling, but also hinder the adoption of diffusion models
in real-life applications. Not to mention, the efficiency of computational
models is fast becoming a significant concern due to excessive energy
consumption and environmental concerns. These factors have led to multiple
contributions in the literature that focus on devising computationally
efficient DMs. In this review, we present the most recent advances in
diffusion models for vision, specifically focusing on the important design
aspects that affect the computational efficiency of DMs. In particular, we
emphasize the recently proposed design choices that have led to more efficient
DMs. Unlike the other recent reviews, which discuss diffusion models from a
broad perspective, this survey is aimed at pushing this research direction
forward by highlighting the design strategies in the literature that are
resulting in practicable models for the broader research community. We also
provide a future outlook of diffusion models in vision from their
computational efficiency viewpoint.
###### Index Terms:
Diffusion models, Generative models, Stable diffusion, Text-to-image, Text-to-
video, Image synthesis.
## I Introduction
Deep generative modelling has emerged as one of the most exciting
computational tools that is even challenging human creativity [1]. In the last
decade, Generative Adversarial Networks (GANs) [91], [92] have received a lot
of attention due to their high quality sample generation. However, diffusion
models [2, 3, 4] have recently emerged as an even more powerful generative
technique, threatening the reign of GANs in the synthetic data generation.
Diffusion models are gaining quick popularity because of their more stable
training as compared to GANs, as well as higher quality of the generated
samples. These models are able to address some notorious limitations of GANs,
like mode collapse, overhead of adversarial learning, and convergence failure
[5]. The training process of the diffusion models uses a much different
strategy as compared to GANs, which involves contaminating the training data
with Gaussian noise and then learning to recover the original data from the
noisy one. These models are also found to be suitable from the scalability and
parallelizability perspective, which adds to their appeal. Moreover, since
their training process is based on applying small modifications to original
data and rectifying those, they learn a data distribution whose samples
closely follow the original data, thereby enabling strong realism in the
generated samples. It is thanks to these attributes that the current state-of-
the-art in image generation has been strongly influenced by the diffusion
models, achieving astonishing results [6, 7, 10].
Due to their amazing generative abilities, diffusion models are quickly
finding applications in both low- and high-level vision tasks, including but
not limited to image denoising [93], [74], inpainting [100], image super-
resolution [98], [99], [101], semantic segmentation [94], [95], [96], image-
to-image translation [4] etc. Hence, unsurprisingly, since the seminal
advancement of diffusion probabilistic models [8] over the original proposal
of diffusion modelling [46], there has been a continuous rise in the number of
research papers appearing in this direction, and new exciting models are
emerging everyday. In particular, diffusion modelling has gained a
considerable social media hype after DALL-E [7], Imagen [102], and Stable [80]
models that enabled high quality text-to-image generation. This hype has
recently been fuelled further by the text-to-video generation techniques,
where the videos appear considerably sophisticated [88], [103]. Figure 1
provides statistics and a timeline overview of the recent literature on
diffusion models to show their popularity, particularly in the vision
community.
Figure 1: (a) Timeline of notable developments (non-exhaustive) in diffusion
modelling. (b) The number of per-month and accumulative papers in diffusion
models in the last 12 months based on Google Scholar search. (c) The
proportion of research papers in terms of main application areas for diffusion
models. The applications include Image Denoising (ID), Image Generation (IG),
Time Series (TS), Semantic Segmentation (SS), Image Super-resolution (IS),
BIG-bench Machine learning (BM), Image Inpainting (II), Decision Making (DM),
and Image-to-image Translation (IT).
Diffusion models belong to a category of probabilistic models that require
excessive computational resources to model unobserved data details. Their
training process requires evaluating models that follow iterative estimation
(and gradient computations). The computational cost becomes particularly huge
while dealing with high dimensional data like images and videos [9]. For
instance, a high-end diffusion model training in [11] takes 150-1000 V100 GPU
days. Moreover, since the inference stage also requires repeated evaluations
of the noisy input space, this stage is also computationally demanding. In
[11], about 5 A100 GPU days are required to produce 50k samples. Rombach et al.
[80] rightly noted that the huge computational requirements to train effective
diffusion models present a critical bottleneck in terms of democratizing this
technology because the research community generally lacks such resources. It
is evident that the most exciting results using diffusion models are first
achieved by e.g., Meta AI [88] and Google Research [103] who have an enormous
computational power at their disposal. It is also notable that evaluating an
already trained model has a considerable time and memory cost because the
model may need to run for multiple steps (e.g., 25-1000) to generate a sample
[10]. This is a potential hindrance in the practical applications of diffusion
models, especially in resource constrained environments.
In the contemporary era of large-scale data, early works on diffusion models
focused on high-quality sample generation, largely disregarding the
computational cost [8, 11, 12]. However, after achieving reasonable quality
milestones, the more recent works have also started to consider computational
efficiency, e.g., [80], [97], [60]. In particular, to address the genuine
drawback of a slow generation process at the inference stage, a new trend is
emerging in the more recent works, focusing on efficiency gains. In this
review article, we collectively term the diffusion models evolved under the
computational efficiency perspective, efficient diffusion models. These are
the emerging models that are more valuable to the research community because
they demand accessible computational resources. Whereas progress is being
consistently made in terms of improving the computational efficiency,
diffusion models are still far slower than GANs in terms of sample generation
[13, 14]. We review the existing works concerned with efficiency without
sacrificing the high quality of sample generation. Moreover, we discuss the
trade-offs between the model speed and sampling quality.
Why model efficiency is critical? Diffusion models have been able to produce
an astonishing quality of images and videos, virtually requiring no effort on
their users’ part - see Fig. 2. This foretells a widespread usage of these
models in daily-life application domains, such as the entertainment industry.
The creative abilities of diffusion models, or any AI platform, do not come
for free. High-quality generative modeling is energy-intensive, and the higher
the quality demand, the more power it consumes. Training a sophisticated AI
model needs time, money, and power [15, 16], leaving behind a significant
carbon footprint. To put things into perspective, OpenAI trained the GPT-3 model
[17] on 45 terabytes of data. Nvidia trained the final version of MegatronLM,
a language model comparable to but smaller than GPT-3, using 512 V100 GPUs for
nine days. A single V100 GPU may consume up to 300 watts. If we assume a
power consumption of 250 watts per GPU, 512 V100 GPUs utilise 128,000 watts, or 128
kilowatts (kW) [18]. Running for nine days requires 27,648 kilowatt hours of
power for the MegatronLM. The average home consumes 10,649 kWh per year as per
the US Energy Information Administration. This implies that training MegatronLM
required nearly as much energy as three houses use in a year. Among the
currently most hyped diffusion models (due to their ability to perform text-
to-image task), e.g., DALL-E [7], Imagen [102], and Stable [80], Stable is by
far the most efficient because its diffusion process is mainly carried out in
a lower dimension latent space. However, even this model’s training requires
an energy equivalent to burning nearly 7,000 kg of coal (computed with
https://mlco2.github.io/impact/#compute). Not to mention, the text-to-image
diffusion models already rely on language models such as GPT-3 mentioned
above. Other diffusion models, especially for the more complex tasks, e.g.
text-to-video, are expected to require orders of magnitude more energy (the
carbon footprint calculations are not possible from the details provided in
the original papers). Hence, due to the fast-growing popularity of these
models, it is critical to focus on more efficient schemes.
Motivation and uniqueness of this survey: Since the diffusion models have
recently received significant attention from the research community, the
literature is experiencing a large influx of contributions in this direction.
This has also led to review articles surfacing recently. Among them, Yang et
al. [3] reviewed the broad direction of diffusion modelling from the methods
and applications viewpoint, and Cao et al. [2] also discussed diffusion models
more broadly. More related to our review is [4], which focuses on the
diffusion model in the vision domain. On one hand, all these reviews already
surfaced before this direction fully matured. For instance, the breakthrough
of high quality text-to-video generation with diffusion models [88], [103] is
actually achieved after the appearance of all these surveys. On the other
hand, none of these surveys focuses on computational efficiency of the models,
which is the central aspect in pushing this research direction forward. Hence,
these surveys leave a clear open gap. We aim at addressing that by
highlighting the underlying schemes of the techniques that are improving the
computational efficiency of diffusion models. Our comprehensive review of the
existing methods from this pragmatic perspective is expected to advance this
research direction in ways not covered by the reviews that appeared during the
preparation of this article. (We note that this manuscript is still a work in
progress, which will be updated in the future by improving its quality and
including further progress in this direction.)
The rest of the article is organised as follows: Section II provides an overview of diffusion models with a brief discussion of three representative architectures. Section III describes design choices and discusses how these choices lead to computation-efficient designs. Section IV compares representative works w.r.t. the quality and efficiency trade-off. Section V discusses future work directions, followed by the conclusion in Section VI.
Figure 2: State-of-the-art diffusion models are able to generate excellent
quality samples for different tasks with minimal effort on their user’s part.
This portends a large-scale use of these models in the future in the
applications ranging from research to entertainment. The shown images are
cropped from the original works.
## II An Overview of Diffusion Models
The original idea of the probabilistic diffusion model is to model a specific distribution starting from random noise, so that the distribution of the generated samples is as close as possible to that of the original samples. It includes a forward process (or diffusion process), in which complex data (generally an image) is progressively noised, and a reverse process (or reverse diffusion process), in which noise is transformed back into a sample from the target distribution. Here, we describe three models in particular due to their influence on efficient diffusion architectures: denoising diffusion probabilistic models (DDPM) [8], latent diffusion models (LDM) [10] and the Feature Pyramid Latent Diffusion Model (Frido) [19].
### II-A The Baseline: Denoising diffusion probabilistic models (DDPM):
Suppose we have an original data point sampled from the real data distribution $\mathbf{x}_{0}\sim q(\mathbf{x})$. We define a forward diffusion process in which we gradually add a small amount of Gaussian noise to the sample, resulting in a series of noisy samples $\mathbf{x}_{1},\dots,\mathbf{x}_{T}$. The step sizes are controlled by a variance schedule $\\{\beta_{t}\in(0,1)\\}_{t=1}^{T}$:
$q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})\quad
q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod^{T}_{t=1}q(\mathbf{x}_{t}|\mathbf{x}_{t-1})$
(1)
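For concreteness, the forward process of Eq. (1) can be simulated in a few lines; since the Gaussian steps compose, $\mathbf{x}_{t}$ can be drawn directly from $\mathbf{x}_{0}$. The following is a minimal PyTorch-style sketch, where the linear variance schedule and the tensor shapes are illustrative assumptions rather than the exact choices of [8]:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)     # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    if noise is None:
        noise = torch.randn_like(x0)
    a = alpha_bar[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1, 1, 1)
    return a * x0 + s * noise

# usage: noise a batch of 8 (stand-in) images at random timesteps
x0 = torch.randn(8, 3, 32, 32)
t = torch.randint(0, T, (8,))
x_t = q_sample(x0, t)
```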
The actual strength of diffusion models, however, is the reverse process, called reverse diffusion, as the goal of training a diffusion model is to learn this reverse process. This can be done by training a neural network to approximate the conditional probabilities needed to run the reverse diffusion process:
$p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod^{T}_{t=1}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\quad
p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\bm{\mu}_{\theta}(\mathbf{x}_{t},t),\bm{\Sigma}_{\theta}(\mathbf{x}_{t},t))$
(2)
The reverse conditional probability is tractable when conditioned on $\mathbf{x}_{0}$:
$q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};{\tilde{\bm{\mu}}}(\mathbf{x}_{t},\mathbf{x}_{0}),{\tilde{\beta}_{t}}\mathbf{I})$
(3)
A diffusion model is trained by learning the reverse Markov transitions that maximise the likelihood of the training data. In practice, training amounts to minimising a variational upper bound on the negative log-likelihood. Because this setup is very similar to a VAE, we may use the variational lower bound to optimise the negative log-likelihood:
$\text{Let
}L_{\text{VLB}}=\mathbb{E}_{q(\mathbf{x}_{0:T})}\Big{[}\log\frac{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{0:T})}\Big{]}\geq-\mathbb{E}_{q(\mathbf{x}_{0})}\log
p_{\theta}(\mathbf{x}_{0})$ (4)
To make each component in the equation analytically computable, the objective
may be reformulated as a mixture of many KL-divergence and entropy terms. Let
us label each component of the variational lower bound loss separately:
Figure 3: The directed graphical model illustrating the processes involved in a diffusion model. This is probably the simplest depiction of diffusion models, as given in the original work. (Source: [8])
$\displaystyle L_{\text{VLB}}$ $\displaystyle=L_{T}+L_{T-1}+\dots+L_{0}$ (5)
$\displaystyle\text{where }L_{T}$
$\displaystyle=D_{\text{KL}}(q(\mathbf{x}_{T}|\mathbf{x}_{0})\parallel
p_{\theta}(\mathbf{x}_{T}))$ $\displaystyle L_{t}$
$\displaystyle=D_{\text{KL}}(q(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{x}_{0})\parallel
p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1}))\text{ for }1\leq t\leq T-1$
$\displaystyle L_{0}$ $\displaystyle=-\log
p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})$
Because every KL term in $L_{\text{VLB}}$ (excluding $L_{0}$) compares two Gaussian distributions, they can be calculated in closed form. In the reverse diffusion process, a neural network is trained to approximate the conditional probability distributions. As $\mathbf{x}_{t}$ is available as input at training time, the mean of the reverse step can be reparameterized in terms of the predicted noise:
$\mathbf{x}_{t-1}\sim\mathcal{N}(\mathbf{x}_{t-1};\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\bm{\epsilon}_{\theta}(\mathbf{x}_{t},t)\Big{)},\bm{\Sigma}_{\theta}(\mathbf{x}_{t},t))$
(6)
Empirically, training the diffusion model works better with a simplified objective that ignores the weighting term:
$L_{t}=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\bm{\epsilon}_{t}}\Big{[}\|\bm{\epsilon}_{t}-\bm{\epsilon}_{\theta}(\mathbf{x}_{t},t)\|^{2}\Big{]}\\\
=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\bm{\epsilon}_{t}}\Big{[}\|\bm{\epsilon}_{t}-\bm{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\bm{\epsilon}_{t},t)\|^{2}\Big{]}$
(7)
The final simple objective is $L=L_{t}+C$, where $C$ is a constant that does not depend on $\theta$.
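The simplified objective translates directly into a training step: draw a timestep, noise the clean image in closed form, and regress the injected noise. A hedged sketch follows, where the model interface `eps_model(x_t, t)` and the plain MSE formulation are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, x0, alpha_bar):
    """L_simple: MSE between the injected noise and the predicted noise."""
    alpha_bar = alpha_bar.to(x0.device)
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alpha_bar[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1, 1, 1)
    x_t = a * x0 + s * eps                     # closed-form forward noising
    return F.mse_loss(eps_model(x_t, t), eps)  # ||eps - eps_theta(x_t, t)||^2
```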
Model Efficiency: It is very slow to generate a sample from a DDPM by following the Markov chain of the reverse diffusion process, as $T$ can be up to one or a few thousand steps. For example, it takes around 20 hours to sample 50k images of size 32 × 32 from a DDPM, but less than a minute to do so from a GAN on an Nvidia 2080 Ti GPU.
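The cost quoted above follows from the structure of ancestral sampling: every one of the $T$ reverse steps requires a full network evaluation, and the steps cannot be parallelised. A schematic sampling loop (the fixed variance choice $\sigma_{t}^{2}=\beta_{t}$ and the model interface are assumptions) makes this explicit:

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, shape, betas, alphas, alpha_bar):
    """Ancestral sampling: one network evaluation per step, T steps in sequence."""
    T = betas.shape[0]
    x = torch.randn(shape)                    # x_T ~ N(0, I)
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, dtype=torch.long)
        eps = eps_model(x, tt)
        mean = (x - (1 - alphas[t]) / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)  # sigma_t^2 = beta_t (assumed)
        else:
            x = mean
    return x
```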
### II-B Latent diffusion model (LDM):
These models perform the diffusion process in latent space rather than pixel space, lowering training costs and increasing inference speed. The design is driven by the observation that most bits of an image contribute only to perceptual details, while the semantic and conceptual composition persists even after aggressive compression. LDM thus loosely decomposes perceptual compression and semantic compression: it first removes pixel-level redundancy with an autoencoder and then manipulates/generates semantic concepts with a diffusion process on the learnt latent.
Figure 4: The architecture of the latent diffusion model (LDM), considered a revolutionary work that has been employed in Stable Diffusion and has turned the direction of research towards efficient diffusion models in general. (Source: [10])
An autoencoder model is used in the perceptual compression process. An encoder $\mathcal{E}$ compresses the input image $\mathbf{x}\in\mathbb{R}^{H\times W\times 3}$ to a smaller 2D latent vector $\mathbf{z}=\mathcal{E}(\mathbf{x})\in\mathbb{R}^{h\times w\times c}$, where the downsampling rate is $f=H/h=W/w=2^{m},m\in\mathbb{N}$. A decoder $\mathcal{D}$ then reconstructs the image from the latent vector, $\tilde{\mathbf{x}}=\mathcal{D}(\mathbf{z})$. To prevent arbitrarily large variance in the latent space, the authors investigated two kinds of regularisation in autoencoder training.
The neural backbone of the LDM model is realized as a time-conditional UNet.
The model has the ability to build the underlying UNet primarily from 2D
convolutional layers and further focus the objective on the perceptually most
relevant bits using the reweighted bound, which now reads:
$L_{\text{LDM}}=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\bm{\epsilon}_{t}}\Big{[}\|\bm{\epsilon}_{t}-\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t)\|^{2}\Big{]}$
(8)
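In code, the only change with respect to the pixel-space loss is that the noise regression is computed on the autoencoder latent $\mathbf{z}=\mathcal{E}(\mathbf{x})$ instead of on the image itself. A hedged sketch, assuming a pre-trained, frozen encoder with the interface shown:

```python
import torch
import torch.nn.functional as F

def ldm_loss(eps_model, encoder, x0, alpha_bar):
    """DDPM noise-regression loss computed in the autoencoder's latent space."""
    with torch.no_grad():                     # autoencoder assumed pre-trained and frozen
        z0 = encoder(x0)                      # latent feature map with h, w << H, W
    alpha_bar = alpha_bar.to(z0.device)
    t = torch.randint(0, alpha_bar.shape[0], (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    a = alpha_bar[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1, 1, 1)
    z_t = a * z0 + s * eps
    return F.mse_loss(eps_model(z_t, t), eps)
```

Because $h$ and $w$ are smaller than $H$ and $W$ by the factor $f$, every U-Net evaluation operates on a much smaller tensor, which is where the efficiency gain of LDMs comes from.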
On the latent vector $\mathbf{z}$, the diffusion and denoising processes take place. The denoising model is a time-conditioned U-Net that has been augmented with a cross-attention mechanism to handle flexible conditioning information for image generation (e.g. class labels, semantic maps, blurred variants of an image). The design is akin to fusing representations of various modalities into the model with a cross-attention mechanism. Each kind of conditioning information is paired with a domain-specific encoder $\tau_{\theta}$, which projects the conditioning input $y$ to an intermediate representation $\tau_{\theta}(y)\in\mathbb{R}^{M\times d_{\tau}}$ that is then mapped into the cross-attention component:
$\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}\Big{(}\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}\Big{)}\cdot\mathbf{V}\\\
$ (9)
where,
$\mathbf{Q}=\mathbf{W}^{(i)}_{Q}\cdot\varphi_{i}(\mathbf{z}_{i}),\;\mathbf{K}=\mathbf{W}^{(i)}_{K}\cdot\tau_{\theta}(y),\;\mathbf{V}=\mathbf{W}^{(i)}_{V}\cdot\tau_{\theta}(y)\\\
$ (10)
and
$\mathbf{W}^{(i)}_{Q}\in\mathbb{R}^{d\times
d^{i}_{\epsilon}},\;\mathbf{W}^{(i)}_{K},\mathbf{W}^{(i)}_{V}\in\mathbb{R}^{d\times
d_{\tau}},\;\varphi_{i}(\mathbf{z}_{i})\in\mathbb{R}^{N\times
d^{i}_{\epsilon}},\;\tau_{\theta}(y)\in\mathbb{R}^{M\times d_{\tau}}$ (11)
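Eqs. (9)-(11) amount to standard scaled dot-product attention in which the queries come from the flattened U-Net features $\varphi_{i}(\mathbf{z}_{i})$ and the keys and values from the conditioning tokens $\tau_{\theta}(y)$. A minimal sketch with illustrative (assumed) dimensions:

```python
import torch

def cross_attention(phi_z, tau_y, W_q, W_k, W_v):
    """phi_z: (B, N, d_eps) flattened U-Net features; tau_y: (B, M, d_tau) conditioning tokens."""
    Q = phi_z @ W_q                                                    # (B, N, d)
    K = tau_y @ W_k                                                    # (B, M, d)
    V = tau_y @ W_v                                                    # (B, M, d)
    d = Q.shape[-1]
    attn = torch.softmax(Q @ K.transpose(-2, -1) / d ** 0.5, dim=-1)   # (B, N, M)
    return attn @ V                        # conditioning injected into the image features

# illustrative (assumed) shapes
B, N, M, d_eps, d_tau, d = 2, 64, 16, 320, 768, 320
out = cross_attention(torch.randn(B, N, d_eps), torch.randn(B, M, d_tau),
                      torch.randn(d_eps, d), torch.randn(d_tau, d), torch.randn(d_tau, d))
```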
Figure 5: The architecture of the Feature Pyramid Diffusion Model (Frido)
encodes an image into multi-scale feature maps to improve the efficiency of
diffusion models. (Source:[19] )
### II-C Feature Pyramid Latent Diffusion Model (Frido):
Frido decomposes the input image into scale-independent quantized features and then obtains the output through coarse-to-fine gating. In short, the authors first use a multi-scale VQGAN (MS-VQGAN) to encode the input image into the latent space and then run the diffusion process in that latent space with Frido. The encoder of MS-VQGAN encodes the input image into latent variables at N scales, similar to an image pyramid but in latent space. The low-level latent variables keep lower-level visual details, while the high-level latent variables keep high-level shapes and structures. The decoder then decodes the obtained latent variables of all scales into the output image. The spatial size of this latent pyramid also decreases with the level, each level being half the size of the previous one. In this way, both high-level semantic information and low-level details can be preserved. Given an image $x_{0}$, the encoder $\mathcal{E}$ first produces a latent feature map set of N scales
$Z=\mathcal{E}(x_{0})=\\{z_{1:N}\\}.$ (12)
Next comes the diffusion model in the latent space. Previous methods apply the diffusion model directly to all latent variables, but the method in this paper involves multiple scales. The authors therefore perform diffusion on the different scales sequentially, with T steps per scale, so the full process over N scales takes N × T steps in total. The forward noising operation thus first destroys the details of the image, then the high-level shapes, and finally the structure of the entire image.
The corresponding denoising process proceeds from the high levels to the low levels. Building on the standard U-Net, the authors propose a feature pyramid U-Net (PyU-Net) [19] to realize the denoising process at multiple scales. PyU-Net introduces two innovations. First, a lightweight network at each scale maps the latent variables of that level to a common dimension so that they can serve as a unified input to the U-Net; correspondingly, another lightweight projection maps the U-Net output back to the dimension of the current scale. Second, coarse-to-fine gating is added so that the denoising of lower levels can exploit the already available high-level information. To train more effectively, the authors use a teacher-forcing trick, which maintains training efficiency while preventing overfitting and lets the U-Net access information about the current scale level and time step. Finally, another level-specific projection decodes the U-Net output to predict the noise added on $z$ with the following objective:
$L_{\text{Frido}}=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\bm{\epsilon}_{t}}\|\bm{\epsilon}_{t}-\bm{\epsilon}_{\theta}(\mathbf{z}_{t}^{n},\mathbf{z}^{n+1:N},t)\|^{2}$
(13)
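The coarse-to-fine schedule behind Eq. (13) can be summarised as a nested loop over scales and timesteps. The sketch below only illustrates the control flow; the PyU-Net step, gating, and projections of [19] are abstracted into an assumed `denoise_step` callable:

```python
def frido_reverse(denoise_step, latents, T):
    """Coarse-to-fine reverse process over N scales, T steps per scale (N*T steps total).
    `latents` is ordered from the coarsest scale (index 0) to the finest (index N-1)."""
    N = len(latents)
    for n in range(N):                          # coarse-to-fine over scales
        coarser = latents[:n]                   # already-denoised coarser scales
        z = latents[n]
        for t in reversed(range(T)):            # T denoising steps at this scale
            z = denoise_step(z, t, coarser)     # step conditioned on the coarser scales
        latents[n] = z
    return latents
```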
## III Effective Strategies for Efficient Diffusion Models:
Diffusion models need to reconstruct the data distribution through sampling. The major hurdle in the way of efficient diffusion models is the inefficiency of their sampling process, as generating samples from a DDPM is very slow. Diffusion models rely on a long Markov chain of diffusion steps to generate samples, so they can be quite expensive in terms of time and compute. Significant efforts have been made to accelerate the sampling procedure in recent years. We divide these influencing strategies into two categories: Efficient Design Strategies (EDS), which recommend modifications to the design of baseline diffusion models, and Efficient Process Strategies (EPS), which suggest ways to improve the efficiency of the diffusion process or speed up sampling. These strategies are inferred from our revision of the literature, and future work may introduce novel strategies not mentioned below.
Figure 6: The influencing strategies for efficient diffusion models can be
divided into two categories: Efficient Design Strategies (EDS), which
recommend modifications to the design of baseline diffusion models, and
Efficient Process Strategies (EPS), which suggest ways to improve the
efficiency of diffusion models or speed up the sampling process.
### III-A Efficient Design Strategies (EDS)
These strategies are based on the architecture of diffusion models. Table I lists some representative works in each architectural category. A brief description of each category and its influence on the efficiency of diffusion models is given below:
Architecture | Model | Citation | Strategy
---|---|---|---
Guided | ADM | [11] | Classifier guidance and upsampling
Not-Guided | CFDG | [20] | Classifier-free guidance
Guided | SDEDIT | [21] | Stochastic Differential Editing
Guided | VDM | [22] | Reconstruction-guided sampling
Guided | SDG | [23] | Semantic Diffusion Guidance
Not-Guided | Make-A-Scene | [24] | Classifier-free guidance
Not-Guided | VQ-Diffusion | [25] | Discrete Classifier-free Guidance
Discrete | DDPM | [8] | Discrete data
Continuous | DP stride | [26] | Continuous Time Affine Diffusion Processes
Continuous | FastDPM | [27] | Continuous diffusion steps
Continuous | DSB | [28] | Diffusion Schrödinger Bridge
Discrete | VQ-Diffusion | [25] | Discrete Classifier-free Guidance
Score Network | BDDM | [29] | Score and scheduling network
Score Network | CLD | [30] | Score matching objective
Score Network | GGDM | [33] | Sample quality scores.
SDE | DSB | [28] | Diffusion Schrödinger Bridge
Score Network | ScoreFlow | [34] | Improving likelihood of scores
SDE | EMSDE | [35] | Adaptive step sizes
Pyramidal | CDM | [36] | Cascading pipelines
Pyramidal | Frido | [19] | Pyramid diffusion
Pyramidal | PDDPM | [37] | Pyramidal reverse sampling
Non-Pyramidal | Blur diffusion | [38] | Frequency Diffusion at variable speeds
Non-Pyramidal | NWDM | [39] | Frequency domain diffusion
Latent | ILVR | [40] | Iterative Latent Variable Refinement
Latent | LDM | [10] | latent space
Latent | latent DDIM | [41] | Semantic latent interpolation
Latent | INDM | [42] | Linear diffusion on the latent space
Pixel | DDPM | [8] | image upsampling and downsampling
Pixel | DDIM | [43] | image upsampling and downsampling
TABLE I: Representative works on Efficient design strategies (EDS) with
mention of the model, architectural approach, citation and strategy used in
the existing literature. These strategies are based on the architecture of
diffusion models.
1- Classifier Guided or Unguided Design: Classifier guidance is a recently developed strategy for balancing mode coverage and sample fidelity in post-training conditional diffusion models, in the same way that low-temperature sampling or truncation is used in other forms of generative models. An example is the work by Nichol et al. [44], which trained a classifier $f_{\phi}(y|\mathbf{x}_{t},t)$ on noisy images $\mathbf{x}_{t}$ to explicitly incorporate class information into the diffusion process, and used the gradients $\nabla_{\mathbf{x}}\log f_{\phi}(y|\mathbf{x}_{t})$ to guide the diffusion sampling process toward the conditioning information $y$ (e.g. a target class label) by altering the noise prediction.
Guidance is a trade-off: it enhances adherence to the conditioning signal and overall sample quality, but at a cost to sample diversity. While classifier guidance successfully trades off quality metrics (IS and FID) as expected from truncation or low-temperature sampling, it nonetheless relies on gradients from an image classifier.
Classifier-free guidance [20] achieves the same effect without such gradients. It is an alternative method that modifies the noise prediction to obtain the same effect as classifier guidance but without a classifier, increasing sample quality while decreasing sample diversity in diffusion models.
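Both guidance variants act at sampling time by modifying the predicted noise. A hedged sketch of classifier-free guidance, where the guidance scale `w` and the conditional/unconditional model interface are illustrative assumptions and the null-conditioning convention follows [20]:

```python
def cfg_eps(eps_model, x_t, t, cond, null_cond, w=3.0):
    """Classifier-free guidance on the noise prediction:
    eps_guided = (1 + w) * eps_cond - w * eps_uncond."""
    eps_cond = eps_model(x_t, t, cond)        # prediction with the conditioning signal
    eps_uncond = eps_model(x_t, t, null_cond) # prediction with the null (dropped) condition
    return (1 + w) * eps_cond - w * eps_uncond
```

At `w = 0` the prediction reduces to the plain conditional model; larger `w` trades diversity for fidelity, mirroring the discussion above.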
2- Discrete or Continuous Design: The diffusion process is a continuous-time example that may be characterized by a stochastic differential equation; the probability flow ODE (diffusion ODE) is its continuous-time differential equation counterpart [45]. Denoising diffusion probabilistic models (DDPMs) [8] have shown impressive results in image and waveform generation in continuous state spaces, producing both remarkable log-likelihoods on numerous common image datasets and high-quality image generation in continuous settings. Many datasets are discrete, but for the convenience of modeling they are frequently embedded in a continuous space and modeled continuously. Structured corruption processes appropriate for text data instead use the similarity between tokens to enable gradual corruption and denoising. Diffusion models with discrete state spaces were first introduced by Sohl-Dickstein et al. [46], who considered a diffusion process over binary random variables with a simple 2×2 transition matrix. Hoogeboom et al. [47] later extended this to categorical variables, proposing a corresponding transition matrix.
Embedding discrete data in a continuous space, however, can lead to difficult modeling concerns such as "de-quantization" blockages, awkward gradient issues, and difficulties in interpreting log-likelihood metrics. All of these concerns are avoided by representing discrete data natively. Instead of transitioning uniformly to any other state, discrete models for ordinal data imitate a continuous-space diffusion model by using a discretized, truncated Gaussian distribution. In terms of efficient design, discrete diffusion design is preferable as it helps reduce the number of required samples. Even though diffusion models have been presented in both discrete and continuous state spaces, much current work has concentrated on Gaussian diffusion processes operating in continuous state spaces (e.g. for real-valued image and waveform data).
3- Score Matching Networks or SDEs Design: The score network may be used to construct an ODE (a "score-based diffusion ODE") for evaluating exact likelihoods [30, 48]. These models capture the data distribution by matching a parameterized score network to the first-order score function of the data. The score is defined as the gradient of the log-likelihood with respect to the random variable $\mathbf{x}$:
$\nabla_{\mathbf{x}}\log
p_{\theta}(\mathbf{x})=-\nabla_{\mathbf{x}}f_{\theta}(\mathbf{x}).$ (14)
The purpose of score matching is to reduce the difference between $p_{\text{data}}$ and the model distribution $p_{\theta}$ by optimizing the Fisher divergence. Score-based diffusion has been used in medical applications such as low-dose computed tomography (LDCT), where the low signal-to-noise ratio (SNR) can impair diagnostic performance. A conditional denoising diffusion probabilistic model has been shown to improve LDCT denoising performance with encouraging results at high computational efficiency. In particular, considering the high sampling cost of the original DDPM model, a fast ordinary differential equation (ODE) solver can be employed for greatly improved sampling efficiency. Experiments [49] show that the accelerated DDPM can achieve a 20X speedup without degrading image quality.
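The denoising variant of score matching, which underlies these score networks, can be written compactly: perturb the data with Gaussian noise and regress the score of the perturbation kernel. A hedged sketch (the fixed noise level and the `score_net(x, sigma)` interface are assumptions):

```python
import torch
import torch.nn.functional as F

def denoising_score_matching_loss(score_net, x0, sigma=0.1):
    """Perturb the data with Gaussian noise and regress the score of the
    perturbation kernel, nabla_x log q(x | x0) = -(x - x0) / sigma^2."""
    noise = torch.randn_like(x0) * sigma
    x = x0 + noise
    target = -noise / sigma ** 2
    return F.mse_loss(score_net(x, sigma), target)
```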
Process | Model | Citation | Strategy
---|---|---|---
Training | FSDM | [50] | Conditioned on a small class set
Training | VDM | [22] | Joint training from image and video
Training | MMDM | [51] | Argmax Flows and Multinomial Diffusion
Training | P2 | [52] | Perception Prioritized Weighting
Training | DDRM | [53] | Pre-trained denoising diffusion
Noise Distribution | DGM | [54] | Noise from Gamma distribution
Noise Distribution | GGDM | [33] | Differentiable Diffusion Sampler Search
Noise Distribution | NEDM | [55] | Noise Schedule Adjustments
Noise Distribution | CCDF | [56] | Non-Gaussian initialisation
Noise Distribution | Cold Diffusion | [57] | Deterministic noise degradation
Noise Distribution | BDDM | [29] | Non-isotropic noise
Noise Distribution | GDDIM | [58] | Mixture Gaussian, Gamma Distribution
Mixing | DiVAE | [59] | Input image embedding
Mixing | FastDPM | [27] | Unified framework
Mixing | DiffFlow | [60] | Normalizing flow and diffusion
Mixing | MMDM | [51] | Argmax Flows Multinomial Diffusion
Mixing | LDM+DDIM | [61] | Latent Implicit Diffusion
Mixing | Diffusion-GAN | [62] | Gaussian mixture distribution
Scheduling | DP stride | [26] | Learning time schedule by optimization
Scheduling | Bit Diffusion | [63] | Asymmetric Time Intervals
Scheduling | DiffusionTimes | [64] | Trade off on diffusion time
Scheduling | ProgressiveDistillation | [65] | Deterministic diffusion sampler
Scheduling | ES-DDPM | [66] | Early Diffusion Stoppage
Scheduling | CMDE | [67] | Multiscale diffusion
Scheduling | FHDM | [68] | Termination at a random first hitting time
Scheduling | IDSD | [69] | Stochastic sampling
Retrieval | KNN-Diffusion | [70] | KNN adapted training
Retrieval | RDM | [71] | Database subset conditioning
Retrieval | RDM | [72] | Informative samples conditioning
TABLE II: Representative works on Efficient process strategies (EPS) with
mention of the model, process, citation and the strategy used in the existing
literature. These strategies target the improvement of the diffusion process
itself.
A stochastic differential equation (SDE) [21] is a differential equation in which one or more of the terms is a stochastic process, so that the solution is itself a stochastic process. The diffusion ODE can be written in a semi-linear form, which reduces discretization errors. DPM-Solver achieved the state of the art within 50 steps on CIFAR-10 [73], and it can generate high-quality images with only ten steps, which is a substantial improvement. Compared to traditional diffusion methods with discrete steps, numerical formulations of differential equations, inspired by score SDEs and the probability flow (diffusion) ODE, achieve more efficient sampling with advanced solvers.
4- Pyramidal or Non-Pyramidal Design: Pyramidal approaches train the diffusion model such that it can understand the different scales of the input by giving coordinate information as a condition. These models concatenate an input image with the coordinate values of each pixel. Then, random resizing to the target resolution is applied to the merged input. The resized coordinate values are encoded with a sinusoidal wave, expanded to a high-dimensional space, and act as conditions during training. Benefiting from the UNet-like model structure [59], the cost function is kept invariant across all resolutions so that the optimization can be performed with only a single network. With the multi-scale score function, the sampling speed, which is the most critical disadvantage of diffusion models, can also be made much faster compared to a single full DDPM by a reverse sampling process. Therefore, the pyramidal or multiscale approach provides better efficiency for diffusion models.
5- Pixel or Latent Representation-based Design: The majority of a digital
image’s bits correspond to insignificant information. While DMs allow for the
suppression of semantically meaningless information by minimizing the
responsible loss term, gradients (during training) and the neural network
backbone (during training and inference) must still be evaluated on all
pixels, resulting in redundant computations and unnecessarily expensive
optimization and inference.
The Latent Diffusion Model (LDM) class provides efficient image generation from the latent space with a single network pass. LDMs work in a learned latent space, which exhibits better scaling properties with respect to the spatial dimensionality. Therefore, latent-based designs are more efficient than pixel-based designs.
### III-B Efficient Process Strategies (EPS)
These strategies target the improvement of the diffusion process itself. Table II lists some representative works in each process category. A brief description of each category and its influence on the efficiency of diffusion models is given below:
1- Training Strategy: To enhance sampling speed, several strategies focus on modifying the training pattern and the noise schedule. However, re-training models requires more processing and increases the risk of unstable training. Fortunately, there is a family of approaches known as training-free sampling that directly augments the sampling algorithm using a pre-trained model. The purpose of advanced training-free sampling is to offer an efficient sampling method that learns from a pre-trained model in fewer steps and with improved accuracy. The main types include analytical approaches, implicit samplers, differential-equation-solver samplers, and dynamic programming adjustment.
By using a memoization technique, dynamic programming can traverse all options to discover the optimal solution in a relatively short amount of time. In comparison to other efficient sampling approaches, dynamic programming methods find the optimal sampling path rather than constructing strong steps that decrease error more rapidly.
2- Noise Distribution Strategy: Unlike DDPM [8], which defines the noise scale as a constant, research into the effect of noise-scale learning has received a lot of interest [55], because the noise schedule matters during both diffusion and sampling. Each sampling step may be seen as a random walk along the direction heading towards the previous distribution, demonstrating that noise reduction may help the sampling operation. The random walk of random noise is guided by noise learning in both the diffusion and sampling processes, resulting in more efficient reconstruction.
The underlying noise distribution of the diffusion process is Gaussian in most known approaches. Fitting distributions with more degrees of freedom, on the other hand, may increase the performance of such generative models, and other noise distributions for the diffusion process are being researched. The Denoising Diffusion Gamma Model (DDGM) [54] demonstrates that noise from the Gamma distribution improves image and speech generation.
The sample obtained from random noise will be tweaked anew in each sampling
step to get closer to the original distribution. However, sampling with
diffusion models requires too many steps, resulting in a time-consuming procedure [74].
3- The Mixing or Unifying Strategy: Mixed-modeling entails incorporating another form of generative model into the diffusion model pipeline to exploit its high sampling speed, as with adversarial networks and autoregressive encoders, or its high expressiveness, as with normalizing flows [75, 60, 62]. Thus, extracting the strengths of two or more models by combining them in a specified pattern results in a possible upgrade known as mixed-modeling.
The goal of diffusion scheme learning is to investigate the influence of different diffusion patterns on model speed. Truncating both the diffusion and sampling processes is advantageous for lowering sampling time while enhancing generation quality. The main idea behind truncating patterns is to generate less dispersed data using other generative models such as GANs [76] and VAEs [77].
A diffusion model may also be enhanced by gradually distilling knowledge from one sampling model to another [78]. In each distillation step, the student model is initialized by re-weighting from the teacher model and is then trained to produce one-step samples as close to the teacher's as possible. As a consequence, student models can cut the number of sampling steps in half during each distillation operation.
The acceleration approach for generalized diffusion aids in the solution of a
wide range of models and provides insights into effective sampling mechanisms.
Other related research establishes the relationship between the diffusion
model and denoising score matching, which may be considered one sort of
unification.
4- Scheduling Strategy: Improving the training schedule entails updating
classic training methods such as the diffusion scheme, noise scheme, and data
distribution scheme, all of which are independent of sampling.
When solving the diffusion SDE, decreasing the number of discretization steps (i.e. taking larger steps) helps speed up the sampling operation. Such techniques, however, introduce discretization errors and can significantly impact model performance [60]. As a result, several strategies have been devised for optimizing the discretization scheme so as to minimize the number of sampling steps while maintaining excellent sample quality.
To create a prediction, the Markov process only uses the sample from the previous step, which restricts the use of the plentiful earlier information. In contrast, the transition kernel of a non-Markovian process may depend on more samples and use more of the information they carry. As a result, it can make accurate predictions with a large step size, which speeds up the sampling method.
Alternatively, by performing only certain phases of the reverse process to obtain samples, one might trade sample quality for sampling speed. Such sampling can be accomplished by pausing or truncating the forward and reverse processes early on, or by retraining student networks and bypassing partial phases through knowledge distillation.
Diffusion sampling may be accomplished in a few steps with the use of strong conditioning signals. Early-stop (ES) DDPM produces an implicit prior distribution by generating the starting data with a VAE that learns the latent space [66].
As previously stated, in DDPM [8] the generative process generally takes the same number of steps as the diffusion process to reconstruct the original data distribution. However, the diffusion model has a so-called decoupling property: it does not require the same number of steps for diffusing and sampling. The implicit sampling approach, which is based on the generative implicit model, includes deterministic diffusion and jump-step sampling. Importantly, implicit models do not require re-training, since the forward diffusion's marginal probability density is fixed at all times. DDIM [43] uses a continuous-process formulation to tackle the jump-step acceleration problem.
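The jump-step idea can be made concrete with the deterministic DDIM-style update, which visits only a short, strided subsequence of timesteps while reusing a model trained on the full schedule. A hedged sketch (the uniform stride and the model interface are assumptions):

```python
import torch

@torch.no_grad()
def ddim_sample(eps_model, shape, alpha_bar, num_steps=50):
    """Deterministic (eta = 0) DDIM-style sampling over a strided subsequence of timesteps."""
    T = alpha_bar.shape[0]
    steps = torch.linspace(T - 1, 0, num_steps).long()     # e.g. 50 out of 1000 steps
    x = torch.randn(shape)
    for i, t in enumerate(steps):
        tt = torch.full((shape[0],), int(t), dtype=torch.long)
        eps = eps_model(x, tt)
        x0_pred = (x - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
        a_prev = alpha_bar[steps[i + 1]] if i + 1 < len(steps) else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps   # no stochastic term
    return x
```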
5- Retrieval Strategy: During training, RDMs [71, 72] obtain a set of nearest neighbors from an external database, and the diffusion model is conditioned on these informative samples. Retrieval augmentation works by looking for images that are similar to the given prompt and then letting the model view them during generation.
During training, the diffusion model is fed similar visual features obtained via CLIP from the vicinity of each training instance. By using CLIP's joint image-text embedding space [79], the model delivers very competitive performance on tasks for which it has not been explicitly trained, such as class-conditional or text-to-image synthesis, and may be conditioned on both text and image embeddings, improving its performance. Retrieval-Augmented Diffusion Models [80] have recently been used for the efficient text-guided synthesis of artistic images.
The Retrieval-Augmented Text-to-Image Generator (Re-Imagen) [81] is a generative model that uses retrieved information to produce highly faithful images even for rare or unseen entities. Given a text prompt, Re-Imagen accesses an external multimodal knowledge base to retrieve the relevant (image, text) pairs and uses them as references to generate the image.
## IV Comparative Performance and Discussion
In this section, we will discuss the comparative performance of different
diffusion models, particularly in terms of sampling efficiency and the number
of parameters. We will also discuss future work directions to lead new
research into this exciting area.
As mentioned earlier, the research focus to date was heavily on increasing the
quality of generated samples, and stable diffusion has changed the course with
a focus on efficiency. Before comparative analysis, we will mention the
important quality and efficiency metrics being used in the research community
to compare diffusion models’ performance.
1- Inception Score (IS): The Inception Score is designed to assess both the diversity and the quality of generated images based on the ImageNet dataset [82]. It is split into two parts: a diversity measurement and a quality measurement. Diversity is measured in terms of the class entropy of the generated samples: the higher the entropy, the more diverse the samples. Quality is measured using the entropy of the similarity between a sample and the pictures of the relevant class, because samples will be of higher quality if they are closer to a specific class of pictures in the ImageNet dataset.
2- Frechet Inception Distance (FID): Although the Inception Score provides a suitable assessment approach, it is based on a particular dataset with 1000 classes as well as a trained network that involves sources of randomness such as the initial weights and code structure. As a result, the bias between ImageNet and real-world photos may produce an incorrect result. Furthermore, the sample batch size is substantially smaller than 1000 classes, resulting in low-confidence statistics. To address the bias from a specific reference dataset, FID was proposed [83]. Using the mean and covariance of features, the score calculates the distance between the real-world data distribution and the produced samples.
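Concretely, FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. A minimal sketch, assuming the Inception features have already been extracted into two matrices:

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}),
    where mu, S are the mean and covariance of Inception features."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s_r = np.cov(feats_real, rowvar=False)
    s_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s_r @ s_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                 # discard tiny imaginary residue
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(s_r + s_g - 2 * covmean))
```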
3- Negative Log-Likelihood (NLL): The negative log-likelihood is viewed by Razavi et al. as a common assessment metric that describes all patterns of the data distribution. There has been a lot of effort in the normalising-flow and VAE literature employing NLL as one of the assessment options. Some diffusion models, such as improved DDPM, use the NLL as the training objective.
Some efficiency metrics include the following:
1- Sampling Speed or Throughput: Fast sampling is one major efficiency target of diffusion models, alongside sampling quality metrics. It is typically reported as samples per second. A simpler measure is the number of steps needed to generate the samples, as a low number of steps is preferable.
2- Computing Workload: Modern HPC data centers are key to handling heavy computing workloads like diffusion models. As the NVIDIA Tesla V100 Tensor Core is one of the most advanced data center GPUs, some works compare the performance of diffusion models in terms of V100-days.
3- Model Complexity (Number of Parameters): The number of parameters is an important metric. However, it is hard to relate it directly to efficiency, since the newest, heaviest and best-performing models tend to be parameter-intensive. Nevertheless, if the same performance can be achieved with a lower number of parameters, that indicates model efficiency.
However, compared to well-established quality metrics, efficiency metrics are
still not standardized, and open challenges and benchmarks based on efficiency
metrics are still missing. This is another direction to contribute to
diffusion model efficiency research.
Image Inpainting has recently become an important research problem due to the
rise of generative image synthesis models [44, 40, 84, 85]. Most inpainting
solutions perform well on object removal or texture synthesis, while semantic
generation is still difficult to achieve. To address these issues, NTIRE 2022
[84] Image Inpainting Challenge was introduced with the target to develop
solutions that can achieve a robust performance across different and
challenging masks while generating compelling semantic images. The proposed
challenge consists of two tracks: unsupervised image inpainting and
semantically-guided image inpainting. For Track 1, the participants were provided with four datasets: FFHQ, Places, ImageNet, and WikiArt, and trained their models to perform mask-agnostic image inpainting. For Track 2, only FFHQ and Places were used.
Overall, diffusion models showed excellent results in image inpainting, as they can be applied to this task without direct supervision. In this challenge, these methods were tested on 7,000 images per dataset. Notably, the winner of the challenge relies on a Latent Diffusion Model (LDM) [10] system, which performs the noise reduction process on a latent representation rather than at the pixel level, drastically reducing the inference time to an average of 10 seconds per image of size 512 × 512.
Model | Year | FID | Steps | Parameters (M)
---|---|---|---|---
PGGAN | 2017 | 8.03 | NA | 23.1
DDGAN | 2021 | 7.64 | NA | -
StyleSwin | 2021 | 3.25 | NA | -
ADM | 2021 | 1.9 | 1000 | -
LDM | 2022 | 2.95 | 200 | 1.9
Model | FLOP | Parameters | Inference Time
---|---|---|---
LDM-8 | 37.1 G | 589.8 M | 0.82547
Frido | 37.3 G | 1.179 B | 1.02865
Frido-gating | 39.7 G | 697.8 M | 0.91782
TABLE III: Efficiency comparison of best performing works on challenging Image
Generation datasets ImageNet (Top) and COCO (Bottom) in terms of reported
efficiency metrics for diffusion models in the literature.
To assess the impact of the latent diffusion model [10] on emerging trends in the literature, we use bibliographic networks. For this purpose, we use a clustering approach. In cluster analysis, the number of sub-problems is set by the resolution: the greater the value of this parameter, the more clusters are created. We tried to minimize the number of clusters to focus on the most representative work in terms of relevance and impact, which resulted in only three clusters based on 50 research papers. Fig. 7 displays these clusters in primary colors, and the accompanying table lists one representative paper in each cluster. Such a visualization of a bibliometric network provides an automated insight into relevant literature that cannot be figured out manually. This visualization and its depth of understanding helped us to revise our taxonomies, which are discussed in the preceding sections.
Figure 7: Visualization of bibliometric networks for assessing the impact of latent diffusion models. It shows the top 50 papers in terms of their relevance and impact on the topic and with each other. Each cluster provides a distinct theme and is represented by dots of the same color. Mutual connections in the network are based on term similarities.
Cluster | Research Paper: The Title
---|---
Yellow | High-Resolution Image Synthesis with Latent Diffusion Models
Green | Text-Guided Synthesis of Artistic Images with Retrieval-Augmented Diffusion Models
Blue | What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Red | Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance
## V Future Work Directions
The popularity, usability, and creativity of diffusion models are attracting
new research efforts in the computer vision community, especially after the
efficient use of computing resources and open source availability of stable
diffusion. It is fair to say that stable diffusion has proved to be a game-
changing model. However, new work in literature is emerging every day that
addresses other challenges. Some of the emerging research directions are as
follows:
* •
Retrieval augmentation works by looking for images similar to the specified prompt and then letting the model see them during generation.
* •
Another emerging area is the development of Few-Shot Diffusion Models (FSDM), which present a framework for few-shot generation leveraging conditional DDPMs. These models are trained to adapt the generative process conditioned on a small set of images from a given class by aggregating image patch information using a set-based Vision Transformer (ViT). A related approach, DreamBooth [86], is the "personalization" of text-to-image diffusion models (specializing them to users' needs). Given just a few images of a subject as input, such models fine-tune a pre-trained text-to-image model so that it learns to bind a unique identifier to that specific subject. By leveraging the semantic prior embedded in the model together with a new autogenous class-specific prior preservation loss, these models enable synthesizing the subject in diverse scenes, poses, views, and lighting conditions that do not appear in the reference images. CycleDiffusion, introduced in [32], shows that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. It can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models.
* •
In the past, most text-to-image models were developed as proprietary applications. However, the open-source release of Stable Diffusion has initiated another trend that will help diffusion research evolve.
* •
Another new research direction is innovative architectures for video diffusion models [87, 3], a natural extension of the standard image architecture. Such architectures may use joint training on image and video data to generate long and higher-resolution videos. Generating video without paired text-video data can also enable efficient designs, as demonstrated in [88].
* •
Diffusion models are promising candidates for human motion generation due to their many-to-many nature, but they tend to be resource-intensive and difficult to control. The Motion Diffusion Model (MDM) [89], a carefully tuned classifier-free generative diffusion-based model, was introduced for the human motion domain. The model is based on a transformer and combines knowledge from the motion generation literature. It predicts the sample rather than the noise at each diffusion stage, which facilitates the use of established geometric losses on motion locations and velocities, such as a foot-contact loss. It is a generic approach that allows different conditioning modes and different generation tasks. A similar work is MotionDiffuse [90], which performs text-driven human motion generation with a diffusion model. These works indicate a future trend towards generating visual data of a complex nature with diffusion models.
* •
The interpretability and explainability of diffusion models will show the
internal working and learning process of these models. If the actual learning
process is well-interpreted, it can lead to efficient diffusion model designs.
An interpretability method called DAAM [31] has been introduced to produce pixel-level attribution maps based on upscaling and aggregating cross-attention activations in the latent denoising subnetwork.
## VI Conclusion
In this review, we presented the most recent advances in diffusion models and discussed the important design aspects that make DMs inefficient and computationally expensive. We focused on recently proposed design choices that have resulted in efficient diffusion models. Unlike previous works that categorized diffusion models in general terms, this review has discussed the design and process strategies that lead to efficient diffusion models. We have provided a comparative analysis of existing diffusion approaches in terms of efficiency metrics and provided new directions for future research on computationally efficient diffusion models.
## References
* [1] L. Regenwetter, A. H. Nobari, and F. Ahmed, “Deep generative models in engineering design: A review,” _Journal of Mechanical Design_ , vol. 144, no. 7, p. 071704, 2022.
* [2] H. Cao, C. Tan, Z. Gao, G. Chen, P.-A. Heng, and S. Z. Li, “A survey on generative diffusion model,” _arXiv preprint arXiv:2209.02646_ , 2022.
* [3] L. Yang, Z. Zhang, and S. Hong, “Diffusion models: A comprehensive survey of methods and applications,” _arXiv preprint arXiv:2209.00796_ , 2022.
* [4] F.-A. Croitoru, V. Hondru, R. T. Ionescu, and M. Shah, “Diffusion models in vision: A survey,” _arXiv preprint arXiv:2209.04747_ , 2022.
* [5] H. Alqahtani, M. Kavakli-Thorne, G. Kumar, and F. SBSSTC, “An analysis of evaluation metrics of gans,” in _International Conference on Information Technology and Applications (ICITA)_ , vol. 7, 2019.
* [6] A. Borji, “Generated faces in the wild: Quantitative comparison of stable diffusion, midjourney and dall-e 2,” 2022.
* [7] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchical text-conditional image generation with clip latents,” _arXiv preprint arXiv:2204.06125_ , 2022.
* [8] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” _Advances in Neural Information Processing Systems_ , vol. 33, pp. 6840–6851, 2020.
* [9] H. Zheng, P. He, W. Chen, and M. Zhou, “Truncated diffusion probabilistic models,” _arXiv preprint arXiv:2202.09671_ , 2022.
* [10] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 10 684–10 695.
* [11] P. Dhariwal and A. Nichol, “Diffusion models beat gans on image synthesis,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 8780–8794, 2021.
* [12] R. San-Roman, E. Nachmani, and L. Wolf, “Noise estimation for generative diffusion models,” _arXiv preprint arXiv:2104.02600_ , 2021.
* [13] X. Yang, S.-M. Shih, Y. Fu, X. Zhao, and S. Ji, “Your vit is secretly a hybrid discriminative-generative diffusion model,” _arXiv preprint arXiv:2208.07791_ , 2022.
* [14] B. Jing, G. Corso, R. Berlinghieri, and T. Jaakkola, “Subspace diffusion generative models,” _arXiv preprint arXiv:2205.01490_ , 2022.
* [15] J. Dodge, T. Prewitt, R. Tachet des Combes, E. Odmark, R. Schwartz, E. Strubell, A. S. Luccioni, N. A. Smith, N. DeCario, and W. Buchanan, “Measuring the carbon intensity of ai in cloud instances,” in _2022 ACM Conference on Fairness, Accountability, and Transparency_ , 2022, pp. 1877–1894.
* [16] J. M. Haut, S. Bernabé, M. E. Paoletti, R. Fernandez-Beltran, A. Plaza, and J. Plaza, “Low–high-power consumption architectures for deep-learning models applied to hyperspectral image classification,” _IEEE Geoscience and Remote Sensing Letters_ , vol. 16, no. 5, pp. 776–780, 2018.
* [17] L. Floridi and M. Chiriatti, “Gpt-3: Its nature, scope, limits, and consequences,” _Minds and Machines_ , vol. 30, no. 4, pp. 681–694, 2020.
* [18] R. Xu, F. Han, and Q. Ta, “Deep learning at scale on nvidia v100 accelerators,” in _2018 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS)_. IEEE, 2018, pp. 23–32.
* [19] W.-C. Fan, Y.-C. Chen, D. Chen, Y. Cheng, L. Yuan, and Y.-C. F. Wang, “Frido: Feature pyramid diffusion for complex scene image synthesis,” _arXiv preprint arXiv:2208.13753_ , 2022.
* [20] J. Ho and T. Salimans, “Classifier-free diffusion guidance,” _arXiv preprint arXiv:2207.12598_ , 2022.
* [21] C. Meng, Y. He, Y. Song, J. Song, J. Wu, J.-Y. Zhu, and S. Ermon, “Sdedit: Guided image synthesis and editing with stochastic differential equations,” in _International Conference on Learning Representations_ , 2021.
* [22] J. Ho, T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet, “Video diffusion models,” _arXiv preprint arXiv:2204.03458_ , 2022.
* [23] X. Liu, D. H. Park, S. Azadi, G. Zhang, A. Chopikyan, Y. Hu, H. Shi, A. Rohrbach, and T. Darrell, “More control for free! image synthesis with semantic diffusion guidance,” _arXiv preprint arXiv:2112.05744_ , 2021.
* [24] O. Gafni, A. Polyak, O. Ashual, S. Sheynin, D. Parikh, and Y. Taigman, “Make-a-scene: Scene-based text-to-image generation with human priors,” _arXiv preprint arXiv:2203.13131_ , 2022.
* [25] Z. Tang, S. Gu, J. Bao, D. Chen, and F. Wen, “Improved vector quantized diffusion models,” _arXiv preprint arXiv:2205.16007_ , 2022.
* [26] D. Watson, J. Ho, M. Norouzi, and W. Chan, “Learning to efficiently sample from diffusion probabilistic models,” _arXiv preprint arXiv:2106.03802_ , 2021.
* [27] Z. Kong and W. Ping, “On fast sampling of diffusion probabilistic models,” _arXiv preprint arXiv:2106.00132_ , 2021.
* [28] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet, “Diffusion schrödinger bridge with applications to score-based generative modeling,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 17 695–17 709, 2021.
* [29] M. W. Lam, J. Wang, R. Huang, D. Su, and D. Yu, “Bilateral denoising diffusion models,” _arXiv preprint arXiv:2108.11514_ , 2021.
* [30] T. Dockhorn, A. Vahdat, and K. Kreis, “Score-based generative modeling with critically-damped langevin diffusion,” _arXiv preprint arXiv:2112.07068_ , 2021.
* [31] R. Tang, A. Pandey, Z. Jiang, G. Yang, and K. Kumar, “What the DAAM: Interpreting Stable Diffusion Using Cross Attention,” _arXiv preprint arXiv:2210.04885_ , 2022.
* [32] C. Wu and D. Fernando, “Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance,” _arXiv preprint arXiv:2210.05559_ , 2022.
* [33] D. Watson, W. Chan, J. Ho, and M. Norouzi, “Learning fast samplers for diffusion models by differentiating through sample quality,” in _International Conference on Learning Representations_ , 2021.
* [34] Y. Song, C. Durkan, I. Murray, and S. Ermon, “Maximum likelihood training of score-based diffusion models,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 1415–1428, 2021.
* [35] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, “Gotta go fast when generating data with score-based models,” _arXiv preprint arXiv:2105.14080_ , 2021.
* [36] J. Ho, C. Saharia, W. Chan, D. J. Fleet, M. Norouzi, and T. Salimans, “Cascaded diffusion models for high fidelity image generation.” _J. Mach. Learn. Res._ , vol. 23, pp. 47–1, 2022.
* [37] D. Ryu and J. C. Ye, “Pyramidal denoising diffusion probabilistic models,” _arXiv preprint arXiv:2208.01864_ , 2022.
* [38] S. Lee, H. Chung, J. Kim, and J. C. Ye, “Progressive deblurring of diffusion models for coarse-to-fine image synthesis,” _arXiv preprint arXiv:2207.11192_ , 2022.
* [39] K.-H. Hui, R. Li, J. Hu, and C.-W. Fu, “Neural wavelet-domain diffusion for 3d shape generation,” _arXiv preprint arXiv:2209.08725_ , 2022.
* [40] J. Choi, S. Kim, Y. Jeong, Y. Gwon, and S. Yoon, “Ilvr: Conditioning method for denoising diffusion probabilistic models,” _arXiv preprint arXiv:2108.02938_ , 2021.
* [41] K. Preechakul, N. Chatthee, S. Wizadwongsa, and S. Suwajanakorn, “Diffusion autoencoders: Toward a meaningful and decodable representation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 10 619–10 629.
* [42] D. Kim, B. Na, S. J. Kwon, D. Lee, W. Kang, and I.-C. Moon, “Maximum likelihood training of implicit nonlinear diffusion models,” _arXiv preprint arXiv:2205.13699_ , 2022.
* [43] Q. Zhang, M. Tao, and Y. Chen, “gddim: Generalized denoising diffusion implicit models,” _arXiv preprint arXiv:2206.05564_ , 2022.
* [44] A. Nichol, P. Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen, “Glide: Towards photorealistic image generation and editing with text-guided diffusion models,” _arXiv preprint arXiv:2112.10741_ , 2021.
* [45] A. Vahdat, K. Kreis, and J. Kautz, “Score-based generative modeling in latent space,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 11 287–11 302, 2021.
* [46] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in _International Conference on Machine Learning_. PMLR, 2015, pp. 2256–2265.
* [47] E. Hoogeboom, A. A. Gritsenko, J. Bastings, B. Poole, R. v. d. Berg, and T. Salimans, “Autoregressive diffusion models,” _arXiv preprint arXiv:2110.02037_ , 2021.
* [48] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet, “Diffusion schrödinger bridge with applications to score-based generative modeling,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 17 695–17 709, 2021.
* [49] W. Xia, Q. Lyu, and G. Wang, “Low-dose ct using denoising diffusion probabilistic model for 20x speedup,” _arXiv preprint arXiv:2209.15136_ , 2022.
* [50] G. Giannone, D. Nielsen, and O. Winther, “Few-shot diffusion models,” _arXiv preprint arXiv:2205.15463_ , 2022.
* [51] E. Hoogeboom, D. Nielsen, P. Jaini, P. Forré, and M. Welling, “Argmax flows and multinomial diffusion: Learning categorical distributions,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 12 454–12 465, 2021.
* [52] J. Choi, J. Lee, C. Shin, S. Kim, H. Kim, and S. Yoon, “Perception prioritized training of diffusion models,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 11 472–11 481.
* [53] B. Kawar, M. Elad, S. Ermon, and J. Song, “Denoising diffusion restoration models,” _arXiv preprint arXiv:2201.11793_ , 2022.
* [54] E. Nachmani, R. S. Roman, and L. Wolf, “Denoising diffusion gamma models,” _arXiv preprint arXiv:2110.05948_ , 2021.
* [55] R. San-Roman, E. Nachmani, and L. Wolf, “Noise estimation for generative diffusion models,” _arXiv preprint arXiv:2104.02600_ , 2021.
* [56] H. Chung, B. Sim, and J. C. Ye, “Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 12 413–12 422.
* [57] A. Bansal, E. Borgnia, H.-M. Chu, J. S. Li, H. Kazemi, F. Huang, M. Goldblum, J. Geiping, and T. Goldstein, “Cold diffusion: Inverting arbitrary image transforms without noise,” _arXiv preprint arXiv:2208.09392_ , 2022.
* [58] E. Nachmani, R. S. Roman, and L. Wolf, “Non gaussian denoising diffusion models,” _arXiv preprint arXiv:2106.07582_ , 2021.
* [59] J. Shi, C. Wu, J. Liang, X. Liu, and N. Duan, “Divae: Photorealistic images synthesis with denoising diffusion decoder,” _arXiv preprint arXiv:2206.00386_ , 2022.
* [60] Q. Zhang and Y. Chen, “Diffusion normalizing flow,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 16 280–16 291, 2021.
* [61] W. H. Pinaya, P.-D. Tudosiu, J. Dafflon, P. F. da Costa, V. Fernandez, P. Nachev, S. Ourselin, and M. J. Cardoso, “Brain imaging generation with latent diffusion models,” _arXiv preprint arXiv:2209.07162_ , 2022.
* [62] Z. Wang, H. Zheng, P. He, W. Chen, and M. Zhou, “Diffusion-gan: Training gans with diffusion,” _arXiv preprint arXiv:2206.02262_ , 2022.
* [63] T. Chen, R. Zhang, and G. Hinton, “Analog bits: Generating discrete data using diffusion models with self-conditioning,” _arXiv preprint arXiv:2208.04202_ , 2022.
* [64] G. Franzese, S. Rossi, L. Yang, A. Finamore, D. Rossi, M. Filippone, and P. Michiardi, “How much is enough? a study on diffusion times in score-based generative models,” _arXiv preprint arXiv:2206.05173_ , 2022.
* [65] T. Salimans and J. Ho, “Progressive distillation for fast sampling of diffusion models,” _arXiv preprint arXiv:2202.00512_ , 2022.
* [66] Z. Lyu, X. Xu, C. Yang, D. Lin, and B. Dai, “Accelerating diffusion models via early stop of the diffusion process,” _arXiv preprint arXiv:2205.12524_ , 2022.
* [67] G. Batzolis, J. Stanczuk, C.-B. Schönlieb, and C. Etmann, “Non-uniform diffusion models,” _arXiv preprint arXiv:2207.09786_ , 2022.
* [68] M. Ye, L. Wu, and Q. Liu, “First hitting diffusion models,” _arXiv preprint arXiv:2209.01170_ , 2022.
* [69] T. Karras, M. Aittala, T. Aila, and S. Laine, “Elucidating the design space of diffusion-based generative models,” _arXiv preprint arXiv:2206.00364_ , 2022.
* [70] O. Ashual, S. Sheynin, A. Polyak, U. Singer, O. Gafni, E. Nachmani, and Y. Taigman, “Knn-diffusion: Image generation via large-scale retrieval,” _arXiv preprint arXiv:2204.02849_ , 2022.
* [71] A. Blattmann, R. Rombach, K. Oktay, and B. Ommer, “Retrieval-augmented diffusion models,” _arXiv preprint arXiv:2204.11824_ , 2022.
* [72] R. Rombach, A. Blattmann, and B. Ommer, “Text-guided synthesis of artistic images with retrieval-augmented diffusion models,” _arXiv preprint arXiv:2207.13038_ , 2022.
* [73] Y. Abouelnaga, O. S. Ali, H. Rady, and M. Moustafa, “Cifar-10: Knn-based ensemble of classifiers,” in _2016 International Conference on Computational Science and Computational Intelligence (CSCI)_. IEEE, 2016, pp. 1192–1195.
* [74] A. Q. Nichol and P. Dhariwal, “Improved denoising diffusion probabilistic models,” in _International Conference on Machine Learning_. PMLR, 2021, pp. 8162–8171.
* [75] E. Hoogeboom, A. A. Gritsenko, J. Bastings, B. Poole, R. v. d. Berg, and T. Salimans, “Autoregressive diffusion models,” _arXiv preprint arXiv:2110.02037_ , 2021.
* [76] Z. Pan, W. Yu, X. Yi, A. Khan, F. Yuan, and Y. Zheng, “Recent progress on generative adversarial networks (gans): A survey,” _IEEE Access_ , vol. 7, pp. 36 322–36 333, 2019.
* [77] A. Razavi, A. Van den Oord, and O. Vinyals, “Generating diverse high-fidelity images with vq-vae-2,” _Advances in neural information processing systems_ , vol. 32, 2019.
* [78] E. Luhman and T. Luhman, “Knowledge distillation in iterative generative models for improved sampling speed,” _arXiv preprint arXiv:2101.02388_ , 2021\.
* [79] S. Shen, L. H. Li, H. Tan, M. Bansal, A. Rohrbach, K.-W. Chang, Z. Yao, and K. Keutzer, “How much can clip benefit vision-and-language tasks?” _arXiv preprint arXiv:2107.06383_ , 2021.
* [80] R. Rombach, A. Blattmann, and B. Ommer, “Text-guided synthesis of artistic images with retrieval-augmented diffusion models,” _arXiv preprint arXiv:2207.13038_ , 2022.
* [81] W. Chen, H. Hu, C. Saharia, and W. W. Cohen, “Re-imagen: Retrieval-augmented text-to-image generator,” _arXiv preprint arXiv:2209.14491_ , 2022.
* [82] S. Barratt and R. Sharma, “A note on the inception score,” _arXiv preprint arXiv:1801.01973_ , 2018.
* [83] T. Kynkäänniemi, T. Karras, M. Aittala, T. Aila, and J. Lehtinen, “The role of imagenet classes in fr$\backslash$’echet inception distance,” _arXiv preprint arXiv:2203.06026_ , 2022.
* [84] A. Romero, A. Castillo, J. Abril-Nova, R. Timofte, R. Das, S. Hira, Z. Pan, M. Zhang, B. Li, D. He, _et al._ , “Ntire 2022 image inpainting challenge: Report,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 1150–1182.
* [85] C. Saharia, W. Chan, H. Chang, C. Lee, J. Ho, T. Salimans, D. Fleet, and M. Norouzi, “Palette: Image-to-image diffusion models,” in _ACM SIGGRAPH 2022 Conference Proceedings_ , 2022, pp. 1–10.
* [86] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman, “Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation,” _arXiv preprint arXiv:2208.12242_ , 2022.
* [87] J. Ho, T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet, “Video diffusion models,” _arXiv preprint arXiv:2204.03458_ , 2022.
* [88] U. Singer, A. Polyak, T. Hayes, X. Yin, J. An, S. Zhang, Q. Hu, H. Yang, O. Ashual, O. Gafni, _et al._ , “Make-a-video: Text-to-video generation without text-video data,” _arXiv preprint arXiv:2209.14792_ , 2022.
* [89] G. Tevet, S. Raab, B. Gordon, Y. Shafir, A. H. Bermano, and D. Cohen-Or, “Human motion diffusion model,” _arXiv preprint arXiv:2209.14916_ , 2022\.
* [90] M. Zhang, Z. Cai, L. Pan, F. Hong, X. Guo, L. Yang, and Z. Liu, “Motiondiffuse: Text-driven human motion generation with diffusion model,” _arXiv preprint arXiv:2208.15001_ , 2022.
* [91] A. Brock, J. Donahue, K. Simonyan, K., “Large scale GAN training for high fidelity natural image synthesis”. arXiv preprint arXiv:1809.11096. 2018.
* [92] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, A. and Bengio, Y.. “Generative adversarial networks”. Communications of the ACM, 63(11), pp.139-144, 2020.
* [93] Ho, J., Jain, A. and Abbeel, P., 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, pp.6840-6851.
* [94] D. Baranchuk, I. Rubachev, A. Voynov, V. Khrulkov, and A. Babenko, “Label-Efficient Semantic Segmentation with Diffusion Models,” in Proceedings of ICLR, 2022.
* [95] A. Graikos, N. Malkin, N. Jojic, and D. Samaras, “Diffusion models as plug-and-play priors,” arXiv preprint arXiv:2206.09012, 2022\.
* [96] J. Wolleb, R. Sandkuhler, F. Bieder, P. Valmaggia, and P. C. Cattin, ¨ “Diffusion Models for Implicit Image Segmentation Ensembles,” in Proceedings of MIDL, 2022.
* [97] J. Song, C. Meng, and S. Ermon, “Denoising Diffusion Implicit Models,” in Proceedings of ICLR, 2021.
* [98] Li, H., Yang, Y., Chang, M., Chen, S., Feng, H., Xu, Z., Li, Q. and Chen, Y., 2022. Srdiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479, pp.47-59.
* [99] G. Batzolis, J. Stanczuk, C.-B. Schonlieb, and C. Etmann, “Conditional image generation with score-based diffusion models,” arXiv preprint arXiv:2111.13606, 2021.
* [100] P. Esser, R. Rombach, A. Blattmann, and B. Ommer, “ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis,” in Proceedings of NeurIPS, vol. 34, pp. 3518– 3532, 2021.
* [101] Lugmayr, A., Danelljan, M., Romero, A., Yu, F., Timofte, R. and Van Gool, L., 2022. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 11461-11471).
* [102] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S. K. S. Ghasemipour, B. K. Ayan, S. S. Mahdavi, R. G. Lopes, et al., “Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding,” arXiv preprint arXiv:2205.11487, 2022.
* [103] Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko, A., Kingma, D.P., Poole, B., Norouzi, M., Fleet, D.J. and Salimans, T., 2022. Imagen Video: High Definition Video Generation with Diffusion Models. arXiv preprint arXiv:2210.02303.
Anwaar Ulhaq holds a PhD (Artificial Intelligence) from Monash University, Australia. He is a senior lecturer (AI) at the School of Computing, Mathematics, and Engineering at Charles Sturt University, Australia. He has developed national and international recognition in computer vision and image processing. His research has been featured 16 times in national and international news venues, including ABC News and IFIP (UNESCO). He is an active member of IEEE, ACS and the Australian Academy of Sciences. As Deputy Leader of the Machine Vision and Digital Health Research Group (MaViDH), he provides leadership in artificial intelligence research and promotes AI research by mentoring junior researchers and supervising HDR students, devising plans to increase research impact.
Naveed Akhtar is a Senior Lecturer (Machine Learning, AI and Data Science) at the Department of Computer Science and Software Engineering, UWA. He has published over 50 scientific papers in world-leading venues in computer vision, machine learning and AI. He serves or has served as an Area Chair for IEEE CVPR, ECCV and IEEE WACV, and as an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems (TNNLS) and IEEE Access. He has also served as a Guest Editor for the NCAA and Remote Sensing journals. His current research interests include explainable AI, deep learning, adversarial machine learning, 3D point cloud analysis and remote sensing.
Ganna Pogrebna is Director of the AI and Cyber Institute and Professor of Behavioral Analytics and Data Science at Charles Sturt University. She also serves as Lead of the Behavioral Data Science strand at the Alan Turing Institute, the national centre for AI and Data Science in London (UK), where she is a fellow working on hybrid modelling approaches between behavioral science and data science (e.g., anthropomorphic learning). She also currently serves as an associate editor of the Judgement and Decision Making journal. Her recent projects focus on smart technological and social systems, cybersecurity, behavioural change for digital security, human-computer and human-data interactions, and business models. Her work on risk analytics and modelling was recognized by the Leverhulme Research Fellowship award. In January 2020, she was also named the winner of TechWomen100, a prize awarded to leading female experts in Science, Technology, Engineering and Mathematics in the UK, which she received for her contribution to the research and practice of human aspects of cybersecurity. She has also been named one of 20+ Inspiring Data Scientists by the AI Time Journal. Her work is regularly covered by traditional as well as social media. Her research has been funded by the ESRC, EPSRC, Leverhulme Trust and industry. She has also completed projects funded by the UK MOD and GCHQ. She is the author of "Navigating New Cyber Risks". She has supervised a large number of PhD students on behavioural science and data science topics and has published extensively on risk modelling and human behaviour, as well as human behaviour and cybersecurity, in high-quality peer-refereed journals.
# Reduced cross sections of electron and neutrino charged current quasielastic
scattering on nuclei
A. V. Butkevich Institute for Nuclear Research, Russian Academy of Sciences,
Moscow 117312, Russia
###### Abstract
The semi-exclusive averaged reduced cross sections for (anti)neutrino charged
current quasi-elastic scattering on carbon, oxygen, and argon are analyzed
within the relativistic distorted wave impulse approximation. We find that
these cross sections, as functions of missing nucleon energy, are similar to
those for electron scattering and are in agreement with electron scattering
data for all three nuclei. The difference between the electron and neutrino
cross sections can be attributed to Coulomb distortion of the electron wave
function. The averaged reduced cross sections depend weakly on the incoming
lepton energy. The approach presented in this paper provides novel constraints
on nuclear models of quasi-elastic neutrino-nucleus scattering and can easily
be applied to test the spectral functions and final-state interactions
employed in neutrino event generators.
###### pacs:
25.30.-c, 25.30.Bf, 25.30.Pt, 13.15.+g
## I Introduction
For the current NOvA1 ; T2K and future DUNE ; HK2T ; SBN accelerator-based
neutrino experiments the primary physics goals are measuring the leptonic CP
violation phase, determining the neutrino mass ordering, and testing the
three-flavor paradigm. In these experiments the oscillation parameters are
evaluated by measuring the probabilities of neutrino oscillations as functions
of neutrino energy. The neutrino beams are not monoenergetic and have broad
distributions that range from tens of MeV to a few GeV. The accuracy to
which neutrino oscillation parameters can be extracted depends on the ability
of the experiments to determine the energy of each detected neutrino.
Measurements at neutrino energies around 1 GeV are critical for the T2K T2K and HK
HK2T programs, which use carbon and water (oxygen) detectors, as well as for
the SBN (argon) SBN program. Measurements from 1 to 2 GeV are important for
the NOvA (carbon, chlorine) NOvA1 experiment, and measurements spanning from
1 to 10 GeV are critical for the DUNE (argon) DUNE program. At GeV-scale
neutrino energies the neutrino can interact with a nucleus through a wide
range of reaction channels. These include charged-current (CC)
quasielastic (QE) scattering, two-body meson exchange current (MEC) channels,
resonance production, and deep inelastic scattering.
The incident neutrino energy is reconstructed using kinematic or calorimetric
methods. At energies of about 1 GeV, where CCQE scattering is dominant, the
incoming neutrino energy can be derived from the lepton kinematics alone. The
calorimetric method relies not only on the visible energy measured in the
detector, but also on the models of neutrino-nucleus interactions that are
implemented in neutrino event generators. In addition, the neutrino-nucleus
scattering model is critical for obtaining background estimates, and for correct
extrapolation of the near-detector constraints to the far detector in
analyses aimed at determining the neutrino oscillation parameters.
Unfortunately, due to the wide energy range of neutrino beams and the poor statistics
available from current experiments, it is very difficult to measure
differential neutrino-nucleus cross sections at specific energies and to test
beam energy reconstruction techniques. On the theoretical side, many studies
have been presented aiming at improving our knowledge of the lepton-nucleus
interaction BAV1 ; BAV2 ; Martini1 ; Martini2 ; Nieves1 ; Nieves2 ; BAV4 ;
Martini3 ; Simo ; Megias1 ; Megias2 ; Megias3 ; Rocco ; BAV5 ; BAV6 ; Gon1 ;
Gon2 ; Gon3 ; BAV7 ; BAV8 ; BAV9 ; Kim . However, it is extremely challenging
to provide reliable and consistent predictions for the diversity of processes
that can take place in the energy range covered by the neutrino beams. Various
contributions to the cross sections can overlap significantly with each other,
making it difficult to identify, diagnose, and remedy shortcomings of nuclear
models.
While electron and neutrino interactions are different at the primary vertex,
many of the underlying physics processes in the nucleus are the same, and electron
scattering data collected with precisely controlled kinematics (initial and
final energies and scattering angles) and large statistics allow validation
and improvement of the description of nuclear effects. There is a large body
of electron-scattering data on carbon and calcium, and only a few data sets
available for scattering on argon.
All of the above reaction mechanisms are very similar for electrons and for
neutrinos. From the nuclear point of view, the influence of nuclear medium
effects, such as the nuclear ground state and the interaction of the outgoing
nucleon with the residual nucleus, can be expected to be largely the same for
electron-induced as for neutrino-induced processes. We can exploit this similarity and
use electron scattering data with known beam energies to test the neutrino
energy reconstruction methods CLAS and interaction models. The vector part of
the electroweak interaction can be inferred directly from electron
scattering data. Because electron and neutrino scattering are strongly linked
in theory, any model of neutrino interactions (vector+axial) should also be
able to reproduce electron (vector) interactions. A model unable to reproduce
electron measurements cannot be expected to provide accurate predictions for
neutrino cross sections.
It is therefore unsurprising that recent years have seen a plethora of
analyses of electron-scattering data to test the vector-current part of the
lepton-nucleus interaction against existing inclusive electron scattering
cross sections for different target nuclei at several incident beam energies
and scattered-electron angles. The relativistic distorted wave impulse
approximation (RDWIA), initially designed for the description of exclusive
$(e,e^{\prime}p)$ data Pick ; Udias ; JKelly and then adapted for neutrino
reactions, was successfully tested against inclusive $(e,e^{\prime})$ data
BAV7 ; Gon3 . The SuSAv2 model exploits the similarities between both
interaction types to guide the description of the weak scattering process Megias2
; Megias3 . The utility of validating neutrino event generators against
inclusive electron scattering data to which they had not been tuned was
demonstrated in Refs. Ankow ; e4v1 ; NEUT ; Dytman .
Such inclusive reactions involve total hadronic cross sections and typically
are relatively insensitive to the details of the final nuclear states. Rather
simple models may yield cross sections that are not very different from those
found in the most sophisticated models. Typically, the inclusive predictions
using different models are rather similar and agree to about 10-20%, but they
cannot make predictions on both the leptons and the hadrons in the final states. The semi-
exclusive $(l,l^{\prime}p)$ lepton scattering process involves not the total
cross section but specific asymptotic states, and allows the nuclear model to be
tested in more detail. Microscopic and unfactorized models like the RDWIA
can be used to model both the lepton-boson and boson-nucleus vertices in the same
detail and to compare the results to semi-exclusive observables. The comparison
of the results of the RDWIA approach and the cascade models employed in the
neutrino event generators provides constraints on cascade models from proton-
nucleus scattering Udias .
The reduced cross section, obtained from the measured differential semi-
exclusive electron scattering cross section by dividing by the kinematic factor
and the off-shell electron-proton cross section, can be identified with the
distorted spectral function. Final-state interactions between the ejected
nucleon and the residual nucleus make the reduced cross sections depend upon
the initial and ejectile nucleon momenta and the angle between them (i.e.,
upon the momentum transfer). Thus, irrespective of the type of interaction
(electromagnetic or weak), the distorted spectral function is determined mainly
by the intrinsic properties of the target and by the interaction of the ejected
nucleon with the residual nucleus.
The purpose of the present work is to calculate the CCQE neutrino scattering
reduced cross sections averaged over phase space, as functions of the missing
nucleon momentum and incoming neutrino energy, and to compare them with
those obtained from measurements of $(e,e^{\prime}p)$ scattering on carbon,
oxygen, and argon targets. The direct comparison of the spectral functions used
in the factorized approach in neutrino event generators to the measured
reduced cross sections of electron-nucleus scattering can provide an
additional test of the nuclear models employed in these generators.
The outline of this paper is the following. In Sec.II we introduce the
formalism needed to describe the semi-exclusive lepton-nucleus CCQE scattering
process. The RDWIA model is briefly introduced in Sec.III. Results of the
calculations are presented in Sec.IV. Our conclusions are summarized in Sec.V.
## II Formalism of quasi-elastic scattering
We consider the formalism used to describe electron and neutrino quasi-elastic
exclusive
$l(k_{i})+A(p_{A})\rightarrow l^{\prime}(k_{f})+N(p_{x})+B(p_{B}),$ (1)
scattering off nuclei in the one-photon (W-boson) exchange approximation. Here
$l$ labels the incident lepton [electron or muon (anti)neutrino], and
$l^{\prime}$ represents the scattered lepton (electron or muon),
$k_{i}=(\varepsilon_{i},\mbox{\boldmath$k$}_{i})$ and
$k_{f}=(\varepsilon_{f},\mbox{\boldmath$k$}_{f})$ are the initial and final
lepton momenta, $p_{A}=(\varepsilon_{A},\mbox{\boldmath$p$}_{A})$, and
$p_{B}=(\varepsilon_{B},\mbox{\boldmath$p$}_{B})$ are the initial and final
target momenta, $p_{x}=(\varepsilon_{x},\mbox{\boldmath$p$}_{x})$ is the
ejectile nucleon momentum, $q=(\omega,\mbox{\boldmath$q$})$ is the momentum
transfer carried by the virtual photon (W-boson), and
$Q^{2}=-q^{2}=\mbox{\boldmath$q$}^{2}-\omega^{2}$ is the photon (W-boson)
virtuality.
### II.1 CCQE lepton-nucleus cross sections
In the laboratory frame the differential cross section for exclusive electron
($\sigma^{el}$) and (anti)neutrino ($\sigma^{cc}$) CC scattering can be
written as
$\displaystyle\frac{d^{6}\sigma^{el}}{d\varepsilon_{f}d\Omega_{f}d\varepsilon_{x}d\Omega_{x}}$
$\displaystyle=\frac{|\mbox{\boldmath$p$}_{x}|\varepsilon_{x}}{(2\pi)^{3}}\frac{\varepsilon_{f}}{\varepsilon_{i}}\frac{\alpha^{2}}{Q^{4}}L_{\mu\nu}^{(el)}\mathcal{W}^{\mu\nu(el)}$
(2a)
$\displaystyle\frac{d^{6}\sigma^{cc}}{d\varepsilon_{f}d\Omega_{f}d\varepsilon_{x}d\Omega_{x}}$
$\displaystyle=\frac{|\mbox{\boldmath$p$}_{x}|\varepsilon_{x}}{(2\pi)^{5}}\frac{|\mbox{\boldmath$k$}_{f}|}{\varepsilon_{i}}\frac{G^{2}\cos^{2}\theta_{c}}{2}L_{\mu\nu}^{(cc)}\mathcal{W}^{\mu\nu(cc)},$
(2b)
where $\Omega_{f}$ is the solid angle for the lepton momentum, $\Omega_{x}$ is
the solid angle for the ejectile nucleon momentum, $\alpha\simeq 1/137$ is the
fine-structure constant, $G\simeq$ 1.16639 $\times 10^{-11}$ MeV$^{-2}$ is the
Fermi constant, $\theta_{C}$ is the Cabibbo angle ($\cos\theta_{C}\approx$
0.9749), $L_{\mu\nu}$ is the lepton tensor, and $\mathcal{W}^{\mu\nu(el)}$
and $\mathcal{W}^{\mu\nu(cc)}$ are the electromagnetic and
weak CC nuclear tensors, respectively.
For exclusive reactions in which only a single discrete state or a narrow
resonance of the target is excited, it is possible to integrate over the peak
in missing energy and obtain a fivefold differential cross section of the form
$\displaystyle\frac{d^{5}\sigma^{el}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}$
$\displaystyle=R\frac{|\mbox{\boldmath$p$}_{x}|\tilde{\varepsilon}_{x}}{(2\pi)^{3}}\frac{\varepsilon_{f}}{\varepsilon_{i}}\frac{\alpha^{2}}{Q^{4}}L_{\mu\nu}^{(el)}W^{\mu\nu(el)}$
(3a)
$\displaystyle\frac{d^{5}\sigma^{cc}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}$
$\displaystyle=R\frac{|\mbox{\boldmath$p$}_{x}|\tilde{\varepsilon}_{x}}{(2\pi)^{5}}\frac{|\mbox{\boldmath$k$}_{f}|}{\varepsilon_{i}}\frac{G^{2}\cos^{2}\theta_{c}}{2}L_{\mu\nu}^{(cc)}W^{\mu\nu(cc)},$
(3b)
where $R$ is a recoil factor
$R=\int d\varepsilon_{x}\delta(\varepsilon_{x}+\varepsilon_{B}-\omega-
m_{A})={\bigg{|}1-\frac{\tilde{\varepsilon}_{x}}{\varepsilon_{B}}\frac{\mbox{\boldmath$p$}_{x}\cdot\mbox{\boldmath$p$}_{B}}{\mbox{\boldmath$p$}_{x}\cdot\mbox{\boldmath$p$}_{x}}\bigg{|}}^{-1},$
(4)
$\tilde{\varepsilon}_{x}$ is the solution of the equation
$\varepsilon_{x}+\varepsilon_{B}-m_{A}-\omega=0,$ where
$\varepsilon_{B}=\sqrt{m^{2}_{B}+\mbox{\boldmath$p$}^{2}_{B}}$,
$~{}\mbox{\boldmath$p$}_{B}=\mbox{\boldmath$q$}-\mbox{\boldmath$p$}_{x}$, and
$m_{A}$ and $m_{B}$ are the masses of the target and recoil nucleus, respectively.
Note that the missing momentum is
$\mbox{\boldmath$p$}_{m}=\mbox{\boldmath$p$}_{x}-\mbox{\boldmath$q$}$ and the
missing energy $\mbox{\boldmath$\varepsilon$}_{m}$ is defined by
$\mbox{\boldmath$\varepsilon$}_{m}=m+m_{B}-m_{A}$.
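As a numerical illustration of the recoil factor of Eq. (4) and of the missing momentum and energy defined above, a minimal Python sketch is given below. It is not part of the paper's analysis code; the nucleon and recoil-nucleus masses and the example kinematics are illustrative assumptions only.

```python
import numpy as np

def recoil_factor(p_x, q, m_B, m_N=939.0):
    """Recoil factor of Eq. (4) for 3-vectors p_x, q (MeV/c) and recoil-nucleus mass m_B (MeV)."""
    p_B = q - p_x                              # recoil-nucleus momentum
    eps_x = np.sqrt(m_N**2 + p_x @ p_x)        # ejectile nucleon energy (on shell)
    eps_B = np.sqrt(m_B**2 + p_B @ p_B)        # recoil-nucleus energy
    return 1.0 / abs(1.0 - (eps_x / eps_B) * (p_x @ p_B) / (p_x @ p_x))

def missing_kinematics(p_x, q, m_A, m_B, m_N=939.0):
    """Missing momentum p_m = p_x - q and missing energy eps_m = m + m_B - m_A."""
    return p_x - q, m_N + m_B - m_A

# Example: |q| = 1 GeV/c along z, ejectile nucleon at a small angle (illustrative values)
q   = np.array([0.0, 0.0, 1000.0])
p_x = np.array([80.0, 0.0, 950.0])
print(recoil_factor(p_x, q, m_B=10255.0))      # m_B ~ mass of 11B in MeV (assumed)
```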
All information about the nuclear structure and effects of final-state
interaction (FSI) between the ejectile nucleon and residual nucleus is
contained in the electromagnetic and weak CC hadronic tensors,
$W^{(el)}_{\mu\nu}$ and $W^{(cc)}_{\mu\nu}$, which are given by the bilinear
products of the transition matrix elements of the nuclear electromagnetic or
CC operator $J^{(el)(cc)}_{\mu}$ between the initial nucleus state $|A\rangle$
and the final state $|B_{f}\rangle$ as
$\displaystyle W^{(el)(cc)}_{\mu\nu}$ $\displaystyle=$
$\displaystyle\sum_{f}\langle B_{f},p_{x}|J^{(el)(cc)}_{\mu}|A\rangle\langle
A|J^{(el)(cc)\dagger}_{\nu}|B_{f},p_{x}\rangle,$ (5)
where the sum is taken over undetected states.
In the exclusive reaction (1) the outgoing lepton and proton are detected, and
the exclusive lepton scattering cross sections (3a) and (3b) can be written in
terms of response functions as
$\displaystyle\frac{d^{5}\sigma^{el}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}$
$\displaystyle=\frac{|\mbox{\boldmath$p$}_{x}|\tilde{\varepsilon}_{x}}{(2\pi)^{3}}\sigma_{M}R\big{(}V_{L}R^{(el)}_{L}+V_{T}R^{(el)}_{T}+V_{LT}R^{(el)}_{LT}\cos\phi+V_{TT}R^{(el)}_{TT}\cos
2\phi\big{)},$ (6a)
$\displaystyle\frac{d^{5}\sigma^{cc}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}$
$\displaystyle=\frac{|\mbox{\boldmath$p$}_{x}|\tilde{\varepsilon}_{x}}{(2\pi)^{5}}G^{2}\cos^{2}\theta_{c}\varepsilon_{f}|\mbox{\boldmath$k$}_{f}|R\big{\\{}v_{0}R_{0}+v_{T}R_{T}+v_{TT}R_{TT}\cos
2\phi+v_{zz}R_{zz}$ $\displaystyle+(v_{xz}R_{xz}-v_{0x}R_{0x})\cos\phi-
v_{0z}R_{0z}+h\big{[}v_{yz}(R^{\prime}_{yz}\sin\phi+R_{yz}\cos\phi)$
$\displaystyle-
v_{0y}(R^{\prime}_{0y}\sin\phi+R_{0y}\cos\phi)-v_{xy}R_{xy}\big{]}\big{\\}},$
(6b)
where
$\sigma_{M}=\frac{\alpha^{2}\cos^{2}\theta/2}{4\varepsilon^{2}_{i}\sin^{4}\theta/2}$
(7)
is the Mott cross section and $h$ is +1 for positive lepton helicity and -1
for negative lepton helicity. The coupling coefficients $V_{k}$ and $v_{k}$,
whose expressions are given in Ref. BAV1 , are kinematic factors
depending on the lepton kinematics. The response functions $R_{i}$ are given
in terms of components of the exclusive hadronic tensors BAV1 and depend on
the variables $(Q^{2},\omega)$ or $(|\mbox{\boldmath$q$}|,\omega)$.
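For a concrete numerical check, the Mott cross section of Eq. (7) can be evaluated directly. The short Python sketch below is illustrative only (natural units, incident energy in MeV, result in MeV$^{-2}$) and is not taken from the analysis code used in the paper.

```python
import numpy as np

ALPHA = 1.0 / 137.035999                       # fine-structure constant

def mott_cross_section(eps_i, theta):
    """Mott cross section of Eq. (7); eps_i in MeV, theta in radians, result in MeV^-2."""
    return (ALPHA**2 * np.cos(theta / 2.0)**2
            / (4.0 * eps_i**2 * np.sin(theta / 2.0)**4))

# Example: 2445 MeV electrons scattered by 20 degrees (illustrative kinematics)
print(mott_cross_section(2445.0, np.radians(20.0)))
```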
It is also useful to define a reduced cross section
$\sigma_{red}=\frac{d^{5}\sigma^{(el)(cc)}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}/K^{(el)(cc)}\sigma_{lN},$
(8)
where $K^{el}=R{p_{x}\varepsilon_{x}}/{(2\pi)^{3}}$ and
$K^{cc}=R{p_{x}\varepsilon_{x}}/{(2\pi)^{5}}$ are phase-space factors for
electron and neutrino scattering and $\sigma_{lN}$ is the corresponding
elementary cross section for the lepton scattering from the moving free
nucleon normalized to unit flux. The reduced cross section is an interesting
quantity that can be regarded as the nucleon momentum distribution modified by
FSI, i.e. as the distorted spectral function. Final-state interactions make
the reduced cross sections
$\sigma_{red}({\mbox{\boldmath$p$}}_{m},{\mbox{\boldmath$p$}}_{x})$ depend
upon the ejectile momentum ${\mbox{\boldmath$p$}}_{x}$, the angle between the initial
and final nucleon momenta, and the incident lepton energy. These cross
sections for (anti)neutrino scattering off nuclei are similar to those for electron
scattering, apart from small differences at low beam energy due to effects of
Coulomb distortion of the incoming electron wave function, as shown in Refs.
BAV1 ; BAV2 ; BAV4 .
The factorization approximation to the knockout cross section stipulates that
$\frac{d^{5}\sigma^{(el)(cc)}}{d\varepsilon_{f}d\Omega_{f}d\Omega_{x}}=K^{(el)(cc)}\times\sigma_{lN}\times\sigma_{red}(\mbox{\boldmath$\varepsilon$}_{m},{\mbox{\boldmath$p$}}_{m},{\mbox{\boldmath$p$}}_{x})$
(9)
This factorization implies that the initial nuclear state and FSI effects are
decoupled from the leptonic vertex while the correlations between the
final lepton and nucleon are preserved.
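Schematically, Eqs. (8) and (9) mean that the reduced cross section is obtained from a measured fivefold cross section by dividing out the phase-space factor and the elementary lepton-nucleon cross section. The minimal sketch below only expresses this division; the numerical inputs would come from data or from a model such as the RDWIA, and are not provided here.

```python
def reduced_cross_section(d5sigma, K, sigma_lN):
    """sigma_red = d5sigma / (K * sigma_lN), Eq. (8); all inputs are placeholders."""
    return d5sigma / (K * sigma_lN)
```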
The reduced cross section as a function of missing momentum $p_{m}$, averaged
over phase volume in $(\omega,\Omega_{f},\phi)$ coordinates, where
$\Omega_{x}=(\cos\theta_{pq},\phi)$, can be written as
$\langle\sigma^{red}(p_{m})\rangle=\frac{1}{V}\int d\phi\int
d\mbox{\boldmath$\varepsilon$}_{f}\int
d\Omega_{f}\frac{p_{m}}{p_{x}|\mbox{\boldmath$q$}|}R_{c}\sigma^{red}(\mbox{\boldmath$\varepsilon$}_{f},\Omega_{f},p_{m},\phi),$
(10)
where
$p_{m}=|\mbox{\boldmath$p$}_{m}|,~{}p_{x}=|\mbox{\boldmath$p$}_{x}|,~{}\mbox{\boldmath$p$}_{m}=\mbox{\boldmath$p$}_{x}-\mbox{\boldmath$q$}$,
and
$\displaystyle\cos\theta_{pq}$
$\displaystyle=\frac{\mbox{\boldmath$p$}^{2}_{x}+\mbox{\boldmath$q$}^{2}-\mbox{\boldmath$p$}^{2}_{m}}{2p_{x}|\mbox{\boldmath$q$}|},$
(11a) $\displaystyle R_{c}$
$\displaystyle=1+\frac{\varepsilon_{x}}{2p^{2}_{x}\varepsilon_{B}}(\mbox{\boldmath$p$}^{2}_{x}+\mbox{\boldmath$q$}^{2}-\mbox{\boldmath$p$}^{2}_{m}),$
(11b) $\displaystyle V$ $\displaystyle=\int d\phi\int
d\mbox{\boldmath$\varepsilon$}_{f}\int
d\Omega_{f}\frac{p_{m}}{p_{x}|\mbox{\boldmath$q$}|}R_{c}.$ (11c)
Precise electron reduced cross section data can be used to validate the
neutrino reduced cross sections (spectral functions) implemented in
neutrino event generators.
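The kinematic weight entering the phase-space average of Eq. (10) follows from Eqs. (11a)-(11c). A minimal Python sketch is given below; the nucleon and recoil-nucleus masses are assumed illustrative values, not inputs quoted in the paper, and the momenta are magnitudes in MeV/c.

```python
import numpy as np

def average_weight(p_m, p_x, q, m_N=939.0, m_B=10255.0):
    """Return cos(theta_pq) of Eq. (11a) and the weight (p_m/(p_x|q|)) R_c of Eqs. (10), (11b)."""
    s = p_x**2 + q**2 - p_m**2
    cos_theta_pq = s / (2.0 * p_x * q)                 # Eq. (11a)
    eps_x = np.sqrt(m_N**2 + p_x**2)
    eps_B = np.sqrt(m_B**2 + p_m**2)                   # p_B = -p_m, so |p_B| = p_m
    R_c = 1.0 + eps_x / (2.0 * p_x**2 * eps_B) * s     # Eq. (11b)
    return cos_theta_pq, (p_m / (p_x * q)) * R_c

# Example with illustrative kinematics
print(average_weight(p_m=80.0, p_x=950.0, q=1000.0))
```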
### II.2 Nuclear current
Obviously, the determination of the response tensor $W^{\mu\nu}$ requires the
knowledge of the nuclear current matrix elements in Eq.(5). We describe the
lepton-nucleon scattering in the impulse approximation, assuming that the
incoming lepton interacts with only one nucleon, which is subsequently
emitted. The nuclear current is written as the sum of single-nucleon currents.
Then, the nuclear matrix element in Eq.(5) takes the form
$\displaystyle\langle p,B|J^{\mu}|A\rangle$ $\displaystyle=$
$\displaystyle\int d^{3}r~{}\exp(i\mbox{\boldmath$t$}\cdot\mbox{{\bf
r}})\overline{\Psi}^{(-)}(\mbox{\boldmath$p$},\mbox{{\bf
r}})\Gamma^{\mu}\Phi(\mbox{{\bf r}}),$ (12)
where $\Gamma^{\mu}$ is the vertex function,
$\mbox{\boldmath$t$}=\varepsilon_{B}\mbox{\boldmath$q$}/W$ is the recoil-
corrected momentum transfer,
$W=\sqrt{(m_{A}+\omega)^{2}-\mbox{\boldmath$q$}^{2}}$ is the invariant mass,
$\Phi$ and $\Psi^{(-)}$ are relativistic bound-state and outgoing wave
functions. For electron scattering, most calculations use the CC2
electromagnetic vertex function for a free nucleon deFor
$\Gamma^{\mu}=F^{(el)}_{V}(Q^{2})\gamma^{\mu}+{i}\sigma^{\mu\nu}\frac{q_{\nu}}{2m}F^{(el)}_{M}(Q^{2}),$
(13)
where $\sigma^{\mu\nu}=i[\gamma^{\mu},\gamma^{\nu}]/2$, $F^{(el)}_{V}$ and
$F^{(el)}_{M}$ are the Dirac and Pauli nucleon form factors. Because the bound
nucleons are off shell, the vertex $\Gamma^{\mu}$ in Eq.(13) should be taken
for an off-shell nucleon. We employ the de Forest prescription for the off-
shell vertex deFor
$\tilde{\Gamma}^{\mu}=F^{(el)}_{V}(Q^{2})\gamma^{\mu}+{i}\sigma^{\mu\nu}\frac{\tilde{q}_{\nu}}{2m}F^{(el)}_{M}(Q^{2}),$
(14)
where $\tilde{q}=(\varepsilon_{x}-\tilde{E},\mbox{\boldmath$q$})$ and the
nucleon energy
$\tilde{E}=\sqrt{m^{2}+(\mbox{\boldmath$p$}_{x}-\mbox{\boldmath$q$})^{2}}$ is
placed on shell. We use the MMD parameterization of the nucleon form factors.
The Coulomb gauge is assumed for the single-nucleon current. Although the
experimental analyses usually employ the de Forest CC1 prescription for
$\sigma_{lN}$, consistency requires that the calculation of $\sigma_{red}$
employ the $\sigma_{lN}$ that corresponds to the current operator used in the
RDWIA calculations.
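For illustration, the de Forest prescription of Eq. (14) amounts to replacing the energy transfer by $\tilde{\omega}=\varepsilon_{x}-\tilde{E}$ with the struck nucleon placed on shell. A short sketch follows; the nucleon mass and the example momenta are assumptions, and this is not the paper's code.

```python
import numpy as np

def de_forest_energy_transfer(p_x, q, m_N=939.0):
    """omega_tilde = eps_x - E_tilde of Eq. (14); p_x and q are 3-vectors in MeV/c."""
    eps_x   = np.sqrt(m_N**2 + p_x @ p_x)               # on-shell ejectile energy
    E_tilde = np.sqrt(m_N**2 + (p_x - q) @ (p_x - q))   # initial nucleon energy placed on shell
    return eps_x - E_tilde

print(de_forest_energy_transfer(np.array([80.0, 0.0, 950.0]),
                                np.array([0.0, 0.0, 1000.0])))
```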
The single-nucleon charged current has $V{-}A$ structure
$J^{\mu(cc)}=J^{\mu}_{V}+J^{\mu}_{A}$. For a free nucleon vertex function
$\Gamma^{\mu(cc)}=\Gamma^{\mu}_{V}+\Gamma^{\mu}_{A}$ we use the CC2 vector current
vertex function
$\Gamma^{\mu}_{V}=F_{V}(Q^{2})\gamma^{\mu}+{i}\sigma^{\mu\nu}\frac{q_{\nu}}{2m}F_{M}(Q^{2})$
(15)
and the axial current vertex function
$\Gamma^{\mu}_{A}=F_{A}(Q^{2})\gamma^{\mu}\gamma_{5}+F_{P}(Q^{2})q^{\mu}\gamma_{5}.$
(16)
The weak vector form factors $F_{V}$ and $F_{M}$ are related to the corresponding
electromagnetic ones for the proton, $F^{(el)}_{i,p}$, and neutron, $F^{(el)}_{i,n}$,
by the conserved vector current (CVC) hypothesis
$F_{i}=F^{(el)}_{i,p}-F^{(el)}_{i,n}.$ (17)
The axial $F_{A}$ and pseudoscalar $F_{P}$ form factors in the dipole
approximation are parameterized as
$F_{A}(Q^{2})=\frac{F_{A}(0)}{(1+Q^{2}/M_{A}^{2})^{2}},\quad
F_{P}(Q^{2})=\frac{2mF_{A}(Q^{2})}{m_{\pi}^{2}+Q^{2}},$ (18)
where $F_{A}(0)=1.2724$, $m_{\pi}$ is the pion mass, and $M_{A}$ is the axial
mass. We use the de Forest prescription for the off-shell extrapolation of
$\Gamma^{\mu(cc)}$. As for the electromagnetic current, the Coulomb gauge is
applied to the vector current $J_{V}$.
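The weak form factors of Eqs. (17) and (18) are simple to evaluate numerically. The sketch below uses the dipole axial form factor with $F_A(0)=1.2724$; the axial-mass value and the expression of $Q^2$ in MeV$^2$ are illustrative assumptions, and the electromagnetic form factors entering the CVC relation are left as generic inputs rather than the MMD fit used in the paper.

```python
M_N, M_PI = 939.0, 139.57          # nucleon and pion masses (MeV)
M_A = 1030.0                       # axial mass (MeV); an assumed, commonly used value
F_A0 = 1.2724

def f_axial(Q2):
    """Dipole axial form factor of Eq. (18); Q2 in MeV^2."""
    return F_A0 / (1.0 + Q2 / M_A**2)**2

def f_pseudoscalar(Q2):
    """Pseudoscalar form factor of Eq. (18)."""
    return 2.0 * M_N * f_axial(Q2) / (M_PI**2 + Q2)

def f_weak_vector(F_el_p, F_el_n):
    """CVC relation of Eq. (17): weak vector form factor from electromagnetic ones."""
    return F_el_p - F_el_n

Q2 = 0.64e6                        # Q^2 = 0.64 (GeV/c)^2 expressed in MeV^2
print(f_axial(Q2), f_pseudoscalar(Q2))
```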
## III Model
The semi-exclusive differential and reduced cross sections for neutrino
scattering were studied in Refs. BAV1 ; BAV2 ; BAV4 ; BAV8 ; BAV9 , using the
relativistic shell model approach and taking into account the FSI effects. A
formalism for the $A(e,e^{\prime}N)B$ reaction that describes the channel
coupling in the FSI of $N+B$ system was developed in Ref. JKelly .
In this work the independent particle shell model (IPSM) is assumed for the
nuclear structure. The model space for 12C$(l,l^{\prime}N)$ consists of
$1s_{1/2}$ and $1p_{3/2}$ nucleon-hole states in the 11B and 11C nuclei. The
model space for 16O$(l,l^{\prime}N)$ consists of $1s_{1/2}$, $1p_{3/2}$, and
$1p_{1/2}$ nucleon-hole states in the 15N and 15O nuclei. The model space for
40Ar$(l,l^{\prime}N)$ consists of $1s_{1/2}$, $1p_{3/2}$, $1p_{1/2}$,
$1d_{5/2}$, $2s_{1/2}$, and $1d_{3/2}$ nucleon-hole states in the 39Cl, and
$1s_{1/2}$, $1p_{3/2}$, $1p_{1/2}$, $1d_{5/2}$, $2s_{1/2}$, $1d_{3/2}$, and
$1f_{7/2}$ nucleon-hole states in the 39Ar nuclei. All states in these nuclei
are regarded as discrete states even though their spreading widths are
actually appreciable.
In the independent particle shell model the relativistic bound-state function
$\Phi$ in Eq.(12) is obtained as the self-consistent solution of a Dirac
equation, derived within a relativistic mean-field approach, from a Lagrangian
containing $\sigma$, $\omega$, and $\rho$ mesons Serot . The nucleon bound-
state functions were calculated by the TIMORA code Horow with the
normalization factors $S(\alpha)$ relative to full occupancy of the IPSM
orbitals. According to the RDWIA analysis of the JLab 12C$(e,e^{\prime}p)$ data
Dutta ; Kelly1 , $S(1p_{3/2})=84\%$ and $S(1s_{1/2})=100\%$, with an average factor
of about $89\%$. We also use the following values of the normalization
factors for 16O: $S(1p_{3/2})=66\%$, $S(1p_{1/2})=70\%$, and
$S(1s_{1/2})=100\%$, which were obtained in the RDWIA analysis of the JLab data
Fissum . From the RDWIA analysis BAV4 of NIKHEF data Kramer1 ; Kramer2 ;
Kramer3 it follows that the occupancy of the orbitals of 40Ca and 40Ar is
approximately 87% on average. Proton and neutron binding energies and the
occupancies of the orbitals in 40Ar are given in Table II of Ref. BAV4 . In
this work we assume that the missing strength can be attributed to short-range
nucleon-nucleon $(NN)$ correlations, leading to the appearance of a
high-momentum and high-energy nucleon distribution in the target.
Figure 1: Proton momentum distributions for the different single-particle
states in the 12C nucleus. Also shown is the total proton momentum distribution
(solid line). Figure 2: Same as Fig. 1 but in the 16O nucleus. Figure 3: Total
proton and neutron momentum distributions in the 40Ar nucleus.
Figures 1 and 2 show the proton momentum distributions for occupied orbitals
in 12C and 16O, calculated within the mean-field approach. The neutron
momentum distributions in these nuclei are almost identical to proton ones.
The total proton and neutron momentum distributions in 40Ar are presented in
Fig.3. These distributions are normalized to the total number of
protons/neutrons on the IPSM shells.
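For reference, the IPSM model spaces and the normalization factors quoted above can be collected in a small lookup table. The layout below is only an illustrative way to organize these inputs and is not taken from the TIMORA or LEA codes.

```python
# IPSM model spaces (nucleon-hole states) and normalization factors S(alpha)
# quoted in the text; the dictionary structure itself is an assumption.
IPSM_MODEL_SPACE = {
    "12C":           ["1s1/2", "1p3/2"],
    "16O":           ["1s1/2", "1p3/2", "1p1/2"],
    "40Ar_protons":  ["1s1/2", "1p3/2", "1p1/2", "1d5/2", "2s1/2", "1d3/2"],
    "40Ar_neutrons": ["1s1/2", "1p3/2", "1p1/2", "1d5/2", "2s1/2", "1d3/2", "1f7/2"],
}

NORMALIZATION_FACTORS = {
    "12C": {"1p3/2": 0.84, "1s1/2": 1.00},                 # JLab analysis, average ~0.89
    "16O": {"1p3/2": 0.66, "1p1/2": 0.70, "1s1/2": 1.00},  # JLab analysis
}
```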
For the outgoing nucleon, the simplest choice is to use a plane-wave function
$\Psi$ in Eq.(12), that is, to assume no interactions between the ejected nucleon $N$
and the residual nucleus $B$; this is the so-called plane-wave impulse
approximation (PWIA). For a more realistic description, final-state
interaction effects should be taken into account. In the RDWIA the distorted-
wave function of the knocked-out nucleon $\Psi$ is evaluated as a solution of
a Dirac equation containing a phenomenological relativistic optical potential
Fissum . This potential consists of a real part, which describes the
rescattering of the ejected nucleon, and an imaginary part, which accounts for
its absorption into unobserved channels. We use the LEA program LEA for the numerical
calculation of the distorted-wave function, with the EDAD1 parameterization
Cooper of the relativistic optical potential for carbon, oxygen, and calcium.
## IV Results and analysis
The reduced cross sections of the 12C$(e,e^{\prime}p)$ reaction in the range of
missing energy corresponding to knockout of $1s$- and $1p$-shell protons
were measured at Tokyo Tokyo , Saclay SaclayC1 ; SaclayC2 , NIKHEF NIKHEFC ,
SLAC SLAC , and JLab Dutta . The knockout of $1p$-shell protons in
16O$(e,e^{\prime}p)$ was studied at Saclay SaclayC1 ; SaclayO , NIKHEF
NIKHEFO1 ; NIKHEFO2 , Mainz Mainz , and JLab Fissum . In these experiments,
cross section data for the lowest-lying fragments of each shell were measured
as functions of $p_{m}$, and normalization factors (describing how much the
measured cross sections fall below the IPSM prediction) were extracted.
The E12-14-012 experiment JLabE , performed at JLab, measured the
$(e,e^{\prime}p)$ reduced cross sections using 40Ar JLabAr and 48Ti JLabTi
targets. The reduced cross sections were measured in the missing momentum and
missing energy ranges $15\leq p_{m}\leq 300$ MeV/c and $12\leq E_{m}\leq 80$
MeV.
The distorted spectral function depends upon initial momentum
$\mbox{\boldmath$p$}_{m}$, ejectile momentum $\mbox{\boldmath$p$}_{x}$ and
angle between the initial and final nucleon momenta. Thus it depends upon
kinematical conditions and is different for parallel and perpendicular
kinematics.
Figure 4: Comparison of the RDWIA calculations for electron, neutrino, and
antineutrino reduced cross sections for the removal of nucleons from the 1$s$ and
1$p$ shells of 12C as functions of the missing momentum. The JLab data Dutta are
for beam energy $E_{beam}$=2.455 GeV, proton kinetic energy $T_{p}$=350 MeV, and
$Q^{2}$=0.64 (GeV/$c)^{2}$. The RDWIA calculations are shown for electron
scattering (dashed line) and for neutrino (solid line) and antineutrino (dashed-
dotted line) scattering. This figure is taken from Ref. BAV2 .
Furthermore, $\sigma_{red}$ depends upon initial electron energy due to
Coulomb distortion. The RDWIA approach with LEA code was successfully tested
against measured 12C$(e,e^{\prime}p)$ Kelly1 , 16O$(e,e^{\prime}p)$ Fissum ,
and 40Ca$(e,e^{\prime}p)$ BAV4 differential and reduced cross sections, and
the normalization factors $S(\alpha)$ for the IPSM orbitals were derived.
In Refs. BAV1 ; BAV2 ; BAV4 electron and CCQE (anti)neutrino scattering on
oxygen, carbon, calcium, and argon targets were studied. It was found that the
reduced cross sections for (anti)neutrino scattering are similar to those of
electron scattering, and the latter are in good agreement with electron data.
The difference between the electron and (anti)neutrino reduced cross sections
calculated for Saclay kinematics is less than 10%.
Figure 5: The RDWIA calculation of neutrino (solid line) and antineutrino
(dashed-dotted line) averaged reduced cross sections compared with measured
exclusive cross section data for the removal of nucleons from 1$p$ and 1$s+1p$
shells of 12C as functions of the missing momentum. The data are from Saclay
SaclayC1 for $1p$ and the beam energy $E_{beam}=500$ MeV, SLAC SLAC for
$1s+1p$ shells and $E_{beam}=2015$ MeV, and from JLab Dutta for $1s+1p$
shells and $E_{beam}$=2455 MeV.
This can be attributed to Coulomb distortion of the electron wave function, which
is usually described using the effective momentum approximation (EMA) EMA . In
the EMA, the electron Coulomb wave function is replaced by a plane wave
with an effective momentum whose value is larger than the value of the
electron momentum at infinity, because of the Coulomb attraction. The flux is also
increased in the interaction zone by the focusing of the electron wave. This effect is
proportional to the charge of the target and weakens as the beam energy increases.
The small difference between the neutrino and antineutrino reduced cross sections
is due to the difference in the FSI of the proton and the neutron with the
residual nucleus.
In this section we present the results of the RDWIA calculations of the
averaged reduced cross sections of Eq. (10) for (anti)neutrino scattering off
carbon, oxygen, and argon as functions of the missing momentum $p_{m}$, and
compare them with the measured $(e,e^{\prime}p)$ reduced cross sections. In
Ref. BAV2 the electron, neutrino, and antineutrino cross sections for the removal
of protons from the $1s$, $1p$, and $1s+1p$ shells of 12C as functions of
missing momentum $p_{m}$ were calculated and compared with JLab data Dutta .
For illustration, Fig. 4 shows the measured removal cross sections compared
with the LEA code calculations BAV2 .
Figure 6: The RDWIA neutrino averaged reduced cross section for removal of
nucleons from the $1s+1p$ shells of 12C as a function of neutrino energy and
missing momentum $p_{m}$.
It should be noted that negative values of $p_{m}$ correspond to $\phi=\pi$ and
positive values to $\phi=0$, where $\phi$ is the angle between the scattering
$(\mbox{\boldmath$k$}_{i},\mbox{\boldmath$k$}_{f})$ and reaction
$(\mbox{\boldmath$p$}_{x},\mbox{\boldmath$p$}_{B})$ planes. The data for beam
energy $E_{beam}=2.445$ GeV and $Q^{2}=0.64$ (GeV/c)$^{2}$ were measured in
quasi-perpendicular kinematics with constant $(\omega,\mbox{\boldmath$q$})$.
Electron and neutrino scattering off nuclei are closely interrelated,
and one can treat both processes within the same formalism. There is an
overall good agreement between the cross sections calculated in the RDWIA and
the data.
The averaged reduced cross sections for removal of nucleons from the $1p$, and
$1s+1p$ shells of 12C$(\nu_{\mu},\mu p)$ and 12C$(\bar{\nu}_{\mu},\mu n)$
reactions are shown in Fig. 5 as functions of positive $p_{m}$ values together
with Saclay SaclayC2 , SLAC Kelly1 , and JLab Dutta data.
Figure 7: Comparison of the RDWIA calculations for neutrino (dashed line) and
antineutrino (dashed-dotted line) averaged reduced cross sections for the
removal of nucleons from 1$p$ shell of 16O with Saclay SaclayC1 and NIKHEF
NIKHEFO1 data as functions of $p_{m}$. Also shown are the RDWIA calculations
of the reduced cross section for electron scattering (solid line) from Ref.
BAV2 .
The data were measured at beam energies $E_{beam}=500$, 2015, and 2445 MeV.
There is an overall agreement between the calculated averaged cross sections
and the reduced cross sections of the $(e,e^{\prime}p)$ reaction measured in
different kinematics. The RDWIA averaged reduced cross section for nucleon
removal from the $1s+1p$ shells in 12C$(\nu_{\mu},\mu p)$ is shown in Fig. 6 as a
function of incoming neutrino energy and missing momentum $p_{m}$. In the
region of the maximum, at $60\leq p_{m}\leq 90$ MeV/c, the cross section
increases slowly with neutrino energy, and it changes only slightly at $p_{m}\geq 120$
MeV/c and $p_{m}\leq 40$ MeV/c.
The averaged reduced cross sections for the removal of nucleons from the $1p$
shell in 16O$(\nu_{\mu},\mu p)$ and 16O$(\bar{\nu}_{\mu},\mu n)$ reactions are
shown in Fig. 7 as functions of $p_{m}$ together with Saclay SaclayC1 and
NIKHEF NIKHEFO1 data. There is an overall agreement between the calculated cross
sections and the data, but the value of the calculated cross sections at the maximum
is systematically higher (by about 15-20%) than the measured one for the NIKHEF
kinematics. Unfortunately, there are no data for the removal of protons from the $1s$
and $1s+1p$ shells of 16O. Therefore, models of the lepton-nucleus interaction
that do not take into account the shell structure of the nucleus cannot be tested
against the available reduced cross sections measured in the 16O$(e,e^{\prime}p)$
reaction.
Figure 8: Same as Fig. 6 but for 16O.
The RDWIA averaged reduced cross section for nucleon removal from the $1s+1p$
shells in 16O$(\nu_{\mu},\mu p)$ is shown in Fig. 8 as a function of incoming
neutrino energy and missing momentum $p_{m}$. As can be seen from Figs. 6
and 8, the dependences of these cross sections on neutrino energy and $p_{m}$
are very similar.
Figure 9: Comparison of the RDWIA calculations for electron (solid line),
neutrino (dashed line), and antineutrino (dashed-dotted line) reduced cross
sections for the removal of nucleons from 1$d_{3/2}$, 2$s_{1/2}$, and
1$d_{5/2}$ shells of 40Ca with NIKHEF data Kramer3 . The cross sections are
presented as functions of missing momentum $p_{m}$. The figure is taken from Ref.
BAV4 .
The structure of calcium and argon nuclei is similar, although unlike
${}^{40}_{18}$Ar, ${}^{40}_{20}$Ca is a symmetric and closed-shell nucleus.
Figure 10: Missing momentum distribution in argon obtained by integrating
over the missing energy range of 0 - 30 MeV (left panel) and 30 - 54 MeV
(right panel), presented with the geometrical factor of $4\pi p_{m}^{2}$. The
gray band shows the measured spectral function including the full error.
In Ref. BAV4 the $(e,e^{\prime}p)$ reduced cross sections were calculated for the
removal of a proton from the $1d_{3/2}$ shell, for the transition to the 1/2+ excited
state of the 39K nucleus at excitation energy $E_{x}=2.522$ MeV, and for the
transitions to the 5/2+ excited states at $E_{x}$=5.258 MeV and $E_{x}$=6.328 MeV,
the latter two obtained by knocking out protons from the 2$s_{1/2}$ and 1$d_{5/2}$
orbitals, respectively. The calculated reduced cross sections are
shown in Fig. 9 together with the NIKHEF data Kramer1 ; Kramer2 and provide a good
description of the shape and magnitude of the measured distribution. The calculated
neutrino and antineutrino reduced cross sections of the
40Ca$(\nu,\mu^{-}p){}^{39}$Ca and 40Ca$(\bar{\nu},\mu^{+}n){}^{39}$K reactions
are also shown in Fig. 9. There is an overall good agreement between the
calculated cross sections, but the value of the electron cross sections at
the maximum is systematically higher than that for (anti)neutrinos. This can
be attributed to Coulomb distortion of the incident electron wave function.
The JLab experiment JLabE measured the $(e,e^{\prime}p)$ cross sections
using argon and titanium targets JLabAr ; JLabTi . The reduced cross sections
were obtained in the missing momentum range $15\leq p_{m}\leq 300$ MeV/c and missing
energy range $12\leq E_{m}\leq 80$ MeV.
Figure 11: Missing momentum distribution in titanium obtained by integrating
over the missing energy range of 0 - 30 MeV, presented with the geometrical
factor of $4\pi p_{m}^{2}$. The gray band shows the measured spectral function
including the full error.
The procedure to obtain information on the neutron distribution in argon is
based on the observation that the neutron spectrum of ${}^{40}_{18}$Ar is mirrored
by the proton spectrum of titanium, which has charge $Z=22$. Therefore
one can expect that the proton spectral function obtained from
Ti$(e,e^{\prime}p)$ data provides information on the neutron spectral function
of argon.
The ${}^{40}_{18}$Ar and ${}^{48}_{22}$Ti data were analyzed to obtain the
spectral functions describing the energy and momentum distributions of
protons in the argon and titanium ground states. The effect of FSI, which is
known to be significant in $(e,e^{\prime}p)$ reactions, was taken into account
within the distorted-wave impulse approximation approach. Figure 10 shows the
missing momentum distributions of protons in argon obtained by integrating the
data over the missing energy ranges $0-30$ MeV and $30-50$ MeV. The proton
missing momentum distribution in titanium, obtained by integrating the data
over the missing energy range $0-30$ MeV, is shown in Fig. 11. Also shown in
Figs. 10 and 11 are the results obtained without FSI effects in the
relativistic plane wave impulse approximation (RPWIA), with normalization
factors $S_{\alpha}$ from Ref. BAV4 . There is an overall agreement between
the RPWIA calculations and the data within the sizable uncertainties of the measured
proton momentum distributions. A more accurate determination of the distorted
spectral functions for the different shells of 40Ar and 48Ti would improve the
testing of models used for the description of neutrino interactions with these
nuclei.
Figure 12: Same as Fig. 6 but for 40Ar.
The averaged reduced cross section of the 40Ar$(\nu_{\mu},\mu p)$ reaction
calculated in the RDWIA approach is shown in Fig. 12 as a function of neutrino
energy and missing momentum $p_{m}$. It has to be pointed out that, unlike in 12C
and 16O, in 40Ar the maximum of this cross section is shifted to lower missing
momentum, $p_{m}\approx 15$ MeV/c. The cross section increases very slowly with
neutrino energy.
Neutrino event generators employ the factorization approach to make
predictions about the lepton and also the outgoing nucleon kinematics from
inclusive models. These models aim to describe the inclusive cross section,
which is a function of the final lepton kinematics only. This factorization
uses spectral functions generated from different nucleon
distributions in the initial nuclear state (local Fermi gas, shell model,
etc.). While the behavior of the cross section with respect to the lepton kinematics
may be described correctly, there is no guarantee that the correlations
between the final lepton and nucleon for a given event are preserved. The
comparison of the employed spectral function with the measured reduced cross
sections allows an estimation of the accuracy of the nuclear-effect
calculations. On the other hand, effective spectral functions can be
obtained within microscopic and unfactorized models, like the RDWIA, that
successfully describe exclusive $(e,e^{\prime}p)$ cross sections, and can then
be employed in the neutrino event generators.
## V Conclusions
In this article, we studied within the RDWIA approach the semi-exclusive
reduced cross sections of CCQE (anti)neutrino scattering on carbon, oxygen,
and argon. We calculated the phase-space-averaged reduced cross sections for
the removal of nucleons from the $\mathrm{1p}$ and $\mathrm{1s+1p}$ shells in the
12C$(\nu_{\mu},\mu p)$, 12C$(\bar{\nu}_{\mu},\mu n)$, 16O$(\nu_{\mu},\mu
p)$, and 16O$(\bar{\nu}_{\mu},\mu n)$ reactions as functions of missing momentum
and incoming neutrino energy, and compared them with the reduced cross
sections obtained from measurements of $(e,e^{\prime}p)$ scattering on 12C and
16O. We also calculated, in the relativistic plane wave impulse approximation,
the averaged reduced cross sections for single-nucleon knockout in the
40Ar$(\nu_{\mu},\mu p)$ and 48Ti$(\bar{\nu}_{\mu},\mu n)$ reactions as
functions of $p_{m}$ and $\varepsilon_{\nu}$ and compared them with the proton
momentum distributions in argon and titanium obtained from $(e,e^{\prime}p)$
scattering in the JLab experiments.
We found that the shape and magnitude of the averaged reduced cross sections
for (anti)neutrino scattering as functions of missing momentum are similar to
those of the measured reduced cross sections of electron scattering. The averaged
removal cross sections calculated for argon and titanium within the RPWIA approach
are mostly consistent with the data within the sizable uncertainties of the measured
proton momentum distributions. The difference of less than 10% between the
electron and (anti)neutrino cross sections can be attributed to Coulomb
distortion of the incoming electron wave function. The small difference between
the neutrino and antineutrino reduced cross sections is due to the difference in the
FSI of the proton and the neutron with the residual nucleus. The averaged reduced
cross sections for nucleon removal from the $\mathrm{1s+1p}$ shells in carbon and
oxygen have a maximum in the missing momentum range $60\leq p_{m}\leq 90$
MeV/c, while in 40Ar the maximum is shifted to $p_{m}\approx 15$
MeV/c. The cross sections increase very slowly with neutrino energy.
Some neutrino event generators employ the factorization approach to make
predictions about the lepton and outgoing nucleon kinematics, using different
nucleon distributions in the nuclear ground state. In this context, the direct
comparison of the implemented spectral functions with precise electron
reduced cross section data allows one to estimate the accuracy of the calculation
of nuclear effects, such as the nuclear ground state and FSI.
## Acknowledgments
The author gratefully acknowledges A. Habig for fruitful discussions and a
critical reading of the manuscript, and would like to thank S. Luchuk for his
constructive comments and suggestions.
## References
* (1) M. A. Acero et al., (NOvA Collaboration), Phys. Rev. Lett. 123, 151803 (2019).
* (2) K. Abe et al., (T2K Collaboration), Phys. Rev. Lett. 121, 171802 (2018).
* (3) R. Acciarri et al., (DUNE Collaboration), FERMILAB-DESIGN-2016-03.
* (4) K. Abe et al., (Hyper-Kamiokande Collaboration) arXiv:1805.04163 [physics.ins-det].
* (5) M. Antonello et al. (MicroBooNE, LAr1-ND, ICARUS-WA104 Collaboration), arXiv:1503.01520 [physics.ins-det].
* (6) A. V. Butkevich and S. A. Kulagin, Phys. Rev. C 76, 045502 (2007).
* (7) A. V. Butkevich, Phys. Rev. C 80, 014610 (2009).
* (8) M. Martini, M. Ericson, and G. Chanfray, Phys. Rev. C 84, 055502 (2011).
* (9) M. Martini, and M. Ericson, Phys. Rev. C 87, 065501 (2013).
* (10) J. Nieves, I. Ruiz Simo, and M. J. Vicente Vacas, Phys. Lett. B 707, 72 (2012).
* (11) J. Nieves, I. Ruiz Simo, and M. J. Vicente Vacas, Phys. Lett. B 721, 90 (2013).
* (12) A. V. Butkevich, Phys. Rev. C 85, 065501 (2012).
* (13) M. Martini, N. Jachowicz, M. Ericson, V. Pandey, T. Van Cuyck, and N. Van Dessel, Phys. Rev. C 94, 015501 (2016).
* (14) I. Ruiz Simo, J. E. Amaro, M. B. Barbaro, A. De Pace, J. A. Caballero, J. Phys. G 44, 065105 (2017).
* (15) G. D. Megias, T. W. Donnelly, O. Moreno, C. F. Williamson, J. A. Caballero, R. Gonzalez-Jimenez, A. De Pace, M. B. Barbaro, W. M. Alberico, M. Nardi, and J. E. Amaro, Phys. Rev. D 91, 073004 (2015).
* (16) G. D. Megias, J. E. Amaro, M. B. Barbaro, J. A. Caballero, T. W. Donnelly, Phys. Rev. D 94, 013012 (2016).
* (17) G. D. Megias, J. E. Amaro, M. B. Barbaro, J. A. Caballero, T. W. Donnelly, and I. R. Simo, Phys. Rev. D 94, 093004 (2016).
* (18) Noemi Rocco, Carlo Barbieri, Omar Benhar, Arturo De Pace, and Alessandro Lovato, Phys. Rev. C 99, 025502 (2019).
* (19) A. V. Butkevich and S. V. Luchuk, Phys. Rev. C 97, 045502 (2018).
* (20) A. V. Butkevich and S. V. Luchuk, Phys. Rev. D 99, 093001 (2019).
* (21) M. B. Barbaro, J. A. Caballero, A. De Pace, T. W. Donnelly, R. Gonzalez-Jimenez, G. D. Megias, Phys. Rev. C 99, 042501(R) (2019).
* (22) R. Gonzalez-Jimenez, A. Nikolakopoulos, N. Jachowicz, J. M. Udias, Phys. Rev. C 100, 045501 (2019).
* (23) R. Gonzalez-Jimenez, M. B. Barbaro, J. A. Caballero, T. W. Donnelly, N. Jachowicz, G. D. Megias, K. Niewczas, A. Nikolakopoulos, J. M. Udias, Phys. Rev. C 101, 015503 (2020).
* (24) A. V. Butkevich and S. V. Luchuk, Phys. Rev. C 102, 024602 (2020).
* (25) A. V. Butkevich, Phys. Rev. C 105, 025501 (2022).
* (26) A. V. Butkevich, Phys. Rev. D 107, 073001 (2023).
* (27) K. S. Kim, S. Choi, T. Miyatsu, M. K. Cheoun, H. Kim, and W. Y. So, Phys. Rev. C 107, 024607 (2023).
* (28) M. Khachatryan, A. Papadopoulou, A. Ashkenazi, F. Hauenstein, L. B. Weinstein, O. Hen, E. Piasetzky (CLAS and e4v Collaboration), Nature 599, 565 (2021).
* (29) A. Picklesimer, J. W. Van Orden, S. J. Wallace, Phys. Rev. C 32, 1312 (1985).
* (30) J. M. Udias, P. Sarriguren, E. Moya de Guerra, E. Garrido, and J. A. Caballero, Phys. Rev. C 51, 3246 (1995).
* (31) James J. Kelly, Phys. Rev. C 59, 3256 (1999).
* (32) A. M. Ankowski and A. Friedland, Phys. Rev. D 102, 053001 (2020).
* (33) A. Papadopoulou et al. (e4v Collaboration), Phys. Rev. D 103, 113003 (2021).
* (34) S. Dolan, J. McElwee, S. Bolognesi, Y. Hayato, K. McFarland, G. Megias, K. Niewczas, L. Pickering, J. Sobczyk, L. Thompson, G. Wret, arXiv:2301.09195 [hep-ex].
* (35) S. Dytman, Y. Hayato, R. Raboanary, J. T. Sobczyk, J. Tena Vidal, and N. Vololoniaina, Phys. Rev. D 104, 053006 (2021).
* (36) A. Nikolakopoulos, R. Gonzalez-Jimenez, N. Jachowicz, K. Niewczas, F. Sanchez, J. M. Udias, Phys. Rev. C 105, 054603 (2022).
* (37) T. de Forest, Nucl. Phys. A392, 232 (1983).
* (38) P. Mergell, U.-G. Meissner, and D. Drechsel, Nucl. Phys. A596, 367 (1996).
* (39) B. Serot, J. Walecka, Adv. Nucl. Phys. 16, 1 (1986).
* (40) C. J. Horowitz, D. P. Murdock, and B. D. Serot, in Computational Nuclear Physics 1: Nuclear Structure, edited by K. Langanke, J. A. Maruhn, and S. E. Koonin (Springer-Verlag, Berlin, 1991), p. 129.
* (41) D. Dutta et al., Phys. Rev. C 68, 064603 (2003).
* (42) J. J. Kelly, Phys. Rev. C 71, 064610 (2005).
* (43) K. G. Fissum et al., Phys. Rev. C 70, 034606 (2004).
* (44) G. J. Kramer, https://inis.iaea.org/search/search.aspx?orig_q=RN:22024922.
* (45) G. J. Kramer et al., Phys. Lett. B 227, 199 (1989).
* (46) G. J. Kramer, H. P. Blok, and L. Lapikas, Nucl. Phys. A679, 267 (2001).
* (47) J. J. Kelly, http://www.physics.umd.edu/enp/jjkelly/LEA
* (48) E. D. Cooper, S. Hama, B. C. Clark, and R. L. Mercer, Phys. Rev. C 47, 297 (1993).
* (49) K. Nakamura, and N. Izutsu, Nucl. Phys. A259, 301 (1976).
* (50) M. Bernheim et al., Nucl. Phys. A375, 381 (1982).
* (51) J. Mougey, M. Bernheim, A. Bussiere, A. Gillebert, P. X. Ho, M. Priou, D. Royer, I. Sick, and G. Wagner, Nucl. Phys. A262, 461 (1976).
* (52) M. C. R. Makins et al., Phys. Rev. Lett. 72, 1986 (1994).
* (53) G. van der Steenhoven, H. P. Block, E. Jans, M. de Jong, L. Lapikas, E. N. Quint, and P. K. A. Witt Huberts, Nucl. Phys. A480, 547 (1988).
* (54) L. Chinitz et al., Phys. Rev. Lett. 67, 568 (1991).
* (55) K. I. Blomqvist et al., Phys. Lett. B 344, 85 (1995).
* (56) M. Leuschner et al., Phys. Rev. C 49, 955 (1994).
* (57) K. I. Blomqvist et al., Z. Phys. A351, 353 (1995).
* (58) L. Gu et al. (Jefferson Lab Hall A Collaboration), Phys. Rev. D 103, 034604 (2021).
* (59) L. Jiang et al. (Jefferson Lab Hall A Collaboration), Phys. Rev. D 105, 112002 (2022).
* (60) L. Jiang et al. (Jefferson Lab Hall A Collaboration), Phys. Rev. D 107, 012005 (2023).
* (61) L. L. Schiff et al., Phys. Rev. 103, 443 (1956).
# Quantum state tomography of an itinerant squeezed microwave field
F. Mallet JILA, National Institute of Standards and Technology and the
University of Colorado, Boulder, CO 80309, USA M. A. Castellanos-Beltran
JILA, National Institute of Standards and Technology and the University of
Colorado, Boulder, CO 80309, USA Department of Physics, University of
Colorado, Boulder, CO 80309, USA H. S. Ku JILA, National Institute of
Standards and Technology and the University of Colorado, Boulder, CO 80309,
USA Department of Physics, University of Colorado, Boulder, CO 80309, USA S.
Glancy National Institute of Standards and Technology, Boulder, Colorado
80305, USA E. Knill National Institute of Standards and Technology, Boulder,
Colorado 80305, USA K. D. Irwin National Institute of Standards and
Technology, Boulder, Colorado 80305, USA G. C. Hilton National Institute of
Standards and Technology, Boulder, Colorado 80305, USA L. R. Vale National
Institute of Standards and Technology, Boulder, Colorado 80305, USA K. W.
Lehnert JILA, National Institute of
Standards and Technology and the University of Colorado, Boulder, CO 80309,
USA Department of Physics, University of Colorado, Boulder, CO 80309, USA
###### Abstract
We perform state tomography of an itinerant squeezed state of the microwave
field prepared by a Josephson parametric amplifier (JPA). We use a second JPA
as a pre-amplifier to improve the quantum efficiency of the field quadrature
measurement (QM) from 2 % to $36\pm 4~{}\%$. Without correcting for the
detection inefficiency we observe a minimum quadrature variance which is
$68^{+9}_{-7}~{}\%$ of the variance of the vacuum. We reconstruct the state’s
density matrix by a maximum likelihood method and infer that the squeezed
state has a minimum variance less than 40 % of the vacuum, with uncertainty
mostly caused by calibration systematics.
Josephson parametric amplifier, squeezed state, quantum state tomography
###### pacs:
42.50.Dv, 42.50.Lc, 03.67.Bg
Fundamental quantum optics experiments at microwave frequencies have been
recently performed with superconducting qubits or Rydberg atoms inside high-
quality microwave cavities. Examples include the reconstruction of the Wigner
functions of Fock states from one Houck et al. (2007) to a few photons and
coherent superpositions of few photons Deléglise et al. (2008); Hofheinz et
al. (2008, 2009). States such as these, which are manifestly nonclassical
light states, are crucial for quantum information processing, because they can
be used to generate entanglement. However, in the cited experiments, these
states are confined in cavities. Therefore distributing entanglement to
separate parties, as required in quantum communication protocols, remains
challenging for microwave implementations. In contrast to the discrete Fock
state approach, the continuous-variable quantum information (CVQI) strategy uses
another type of nonclassical state, squeezed states, which are readily
created in itinerant modes. These states exhibit reduced noise, below the
vacuum fluctuations, in one of their quadrature components and amplified noise
in the other one. They are also easily generated at optical frequencies in the
itinerant output modes of parametric amplifiers made of optically nonlinear
crystals. At optical frequencies, CVQI has progressed rapidly from the initial
creation of squeezed states Slusher et al. (1985) and tomographic
reconstruction Smithey et al. (1993); Schiller et al. (1996); Breitenbach et
al. (1997) of those states to teleportation Furusawa et al. (1998); Yonezawa
et al. (2007) and quantum error correction Aoki et al. (2009); Lassen et al.
(2010).
At microwave frequencies, the field is less advanced. The generation of
microwave squeezed states using the nonlinear electrical response of
superconducting Josephson junctions has been reported Yurke et al. (1988),
with inferred squeezing down to $10~{}\%$ of vacuum variance Castellanos-
Beltran et al. (2008). Such states can be powerful tools for quantum
information processing and communication because microwaves and
superconducting qubits can mimic useful light–atom interactions, as
demonstrated in Wallraff et al. (2004). Furthermore, these devices are made of
compact and integrable electrical circuits, with much promise for building
complex quantum information processors. The lack of an efficient quadrature
measurement (QM) for itinerant modes has slowed the advancement of CVQI.
However, as demonstrated recently in Teufel et al. (2009), it is possible with
a JPA to realize an efficient single QM.
In this Letter, we report the tomography of an itinerant squeezed microwave
field. We demonstrate that our JPA based measurement scheme has a quantum
efficiency $20$ times greater than a QM employing state-of-the-art semiconductor
amplifiers. We infer the quantum state prepared by maximum likelihood
tomography, correcting for inefficiency in our QM. We discuss the achieved
degree of squeezing, from the perspective of generating entanglement on chip.
Homodyne tomography is a standard experimental tool to infer the quantum state
of a single mode of light. It was proposed in Vogel and Risken (1989) and
pioneered on a squeezed optical field in Smithey et al. (1993). Its principle
is depicted in the Fig. 1. A homodyne detection apparatus measures the value
of the quadrature $X_{\theta}$, where $\theta$ is set by adjusting the phase
of the local oscillator. The probability density function
$\textrm{pr}(X_{\theta})$ for measuring a particular value of $X_{\theta}$ is
the marginal density function of the Wigner function, i.e.
$\textrm{pr}(X_{\theta})=\int
dX_{\theta+\pi/2}W(X_{\theta},X_{\theta+\pi/2})$, as shown in Fig. 1 (b). Thus
by performing measurements of $X_{\theta}$ on many identical copies of the
state and varying $\theta$, the “hidden” quantum object can be seen from
different angles and its state inferred. Losses and other Gaussian noise
sources in the homodyne detector can be modeled with the insertion of a
fictitious beam splitter of transmissivity $\eta$, as shown in Fig. 1 (a). In
such a case, the measured $\textrm{pr}(X_{\theta})$ are no longer the
projections of the desired Wigner function $W$, but of a smoother distribution
which is the convolution of $W$ with a Gaussian Wigner function Leonhardt and
Paul (1993). However methods like maximum likelihood quantum state tomography
can be used to deconvolve the effect of inefficiency Lvovsky and Raymer
(2009).
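As a rough numerical illustration of this measurement model (our own sketch, not part of any published analysis code), the following Python snippet draws inefficient quadrature samples from a Gaussian squeezed state, assuming quadrature variances v_min and v_max in units where the vacuum variance is 1/2; histogramming the output approximates pr(X_theta):

import numpy as np

def sample_quadratures(theta, n, v_min, v_max, eta, rng=np.random.default_rng(0)):
    # Quadrature variance of the (Gaussian) squeezed state at phase theta
    v_theta = v_min * np.cos(theta) ** 2 + v_max * np.sin(theta) ** 2
    x = rng.normal(0.0, np.sqrt(v_theta), n)   # state quadrature
    y = rng.normal(0.0, np.sqrt(0.5), n)       # vacuum entering the loss port
    # Beam-splitter model of detection loss: X = sqrt(eta)*x + sqrt(1-eta)*y
    return np.sqrt(eta) * x + np.sqrt(1.0 - eta) * y

# e.g. 20,000 samples at a single theta, with illustrative (not measured) variances
samples = sample_quadratures(0.0, 20000, v_min=0.1, v_max=2.5, eta=0.36)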
Figure 1: Principle of the experiment. (a-left): The squeezer (SQ, in red)
prepares a squeezed state whose quadrature distributions are measured for
different phases $\theta$ with an efficiency $\eta$. (a-right): Simulated
measurement results for 20,000 realizations of creating the squeezed state and
measuring it at a single $\theta$. The top graph shows the measured quadrature
value versus realization number. The bottom plot is a histogram (blue circles)
and Gaussian probability distribution $\textrm{pr}(X_{\theta})$ (red curve) of
this random process. (b): Graphical Interpretation: the probability
distribution $\textrm{pr}(X_{\theta})$ is simply the projection of the Wigner
function.
At optical frequencies, $\eta\geq 90~{}\%$ is routinely obtained using a pair
of balanced photodiodes Lvovsky and Raymer (2009). Such detectors are not
available for microwaves and until recently the best setup was a chain of
phase-insensitive amplifiers followed by a mixer, or two such chains in
parallel Menzel et al. (2010); Mariantoni et al. (2010); Bozyigit et al.
(2010). In such a case, noise $A_{n}$ greater than $1/2$ (the vacuum variance)
must be added to the QM Caves (1982). This noise can be modeled as an
effective efficiency by the relation $\eta=1/(1+2A_{n})$ Leonhardt and Paul
(1994), so the QM efficiency using phase-insensitive amplifiers is limited to
$50~{}\%$. State of the art microwave amplifiers, high-electron-mobility
transistors (HEMTs), have $A_{n}\approx 10-20$. In practice, the unavoidable
losses present in a microwave experiment typically result in $\eta\approx 2$%.
However, as demonstrated in Teufel et al. (2009), inserting a JPA used as a
single quadrature preamplifier before the HEMT increases the experimentally
achieved $\eta$ by a factor of approximately 20.
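For orientation, plugging typical HEMT noise numbers into the relation above gives a back-of-the-envelope check (our own arithmetic, not a statement from the original analysis):

# eta = 1/(1 + 2*A_n): a HEMT with A_n ~ 10-20 quanta limits the QM to a few percent
for A_n in (10, 20):
    print(A_n, 1.0 / (1.0 + 2 * A_n))   # ~0.048 and ~0.024
# additional losses in a real microwave chain push this down to ~2 %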
To perform a high-quality reconstruction of the Wigner function of a squeezed
microwave state we operate two JPAs in series, as shown in Fig. 2 (b). The
first JPA, referred to as the squeezer (SQ), prepares the squeezed state. The
second JPA, referred to as the pre-amplifier (AMP), amplifies the quadrature
of the squeezed state determined by the phase difference $\theta$ between the
AMP and the SQ pump tones. We vary $\theta$ by applying to the two cavities
pump tones slightly detuned from one another. The SQ stage is pumped at $7.45$
GHz, while the AMP stage is pumped at a frequency $100$ kHz higher; $\theta$
therefore sweeps through $2\pi$ every $10$ µs.
Our implementation of an SQ or an AMP at microwaves, as shown in Fig. 2 (a),
requires three elements: (i) a JPA used in reflection, (ii) a directional
coupler and (iii) a circulator. As described in Castellanos-Beltran et al.
(2008), the JPAs are nonlinear resonant cavities built from coplanar
waveguides whose central conductor has been replaced by a series of many
Josephson junctions. The Josephson junctions’ nonlinearity causes the cavity’s
phase velocity to be intensity dependent. Therefore when the cavity is pumped
it becomes a phase sensitive amplifier for input modes whose frequencies lie
within the bandwidth of the JPA centered on the pump frequency. Such microwave
modes incident on the JPA are reflected and exit the cavity with one
quadrature amplified and the other squeezed, depending on their phase relative
to the pump’s phase. A directional coupler is used to add the pump tone to the
incident signal and remove the pump tone from the reflected signal. Finally
the incident and reflected modes are separated into different cables using a
circulator.
Figure 2: (a): To implement a SQ or AMP at microwaves, three microwave
components are required: (i) a JPA, (ii) a directional coupler and (iii) a
circulator. Taking port (1) of the directional coupler as reference, (2) is
the weakly coupled port, (3) the isolated port and (4) the direct port. Port
(2) is used to pump the JPA. Port (3) is used to apply a cancelation tone
(adjusted with a room temperature attenuator and phase shifter) that nulls the
pump and displaces the output of the JPA back to the origin of the phase
space. (b): Schematic of the experiment. In this figure, all the microwave
components and cables are considered lossless; their imperfections are
absorbed into the experimentally determined total transmissivities $\xi$,
$\alpha$ and $\beta$.
Following Fig. 2 (b), in the limit of large HEMT power gain $G_{H}$, our
quantum efficiency can be cast as
$\eta=\frac{\alpha}{2+2A_{A}-\alpha+[2A_{H}-(1-\beta)]/G_{A}\beta},$ (1)
where $A_{A}$ ($A_{H}$) is the AMP (HEMT) added noise, $\alpha$ ($\beta$) is
the fraction of power transmitted by the microwave circuitry between the SQ
and the AMP (the AMP and the HEMT), and $G_{A}$ is the power gain of the AMP
stage. A detailed description of how we calibrate each of these parameters is
in the supplementary information. Briefly, we inject different amounts of
thermal noise into the amplifier chain while operating each JPA either as an
amplifier (ON) or as a noiseless element with unit gain (OFF). We then infer
the added noise and loss of the elements by observing the variation in the
noise at the output of the measurement chain. The thermal noise is varied by
connecting the input of the SQ through a switch to either a “hot load” (50
$\Omega$ microwave termination at $4.1$ K) or a “cold load” (at $20$ mK).
Although the tomography is only performed with the “cold load”, both are
required for calibration. We obtain $A_{A}=0.25\pm 0.06$, $A_{H}=17.3\pm 0.1$,
$\alpha=68\pm 2~{}\%$ and $\beta=74\pm 5~{}\%$. However, as the switch is
operated at the 4.1 K stage and is slightly lossy, the state presented at the
input of the SQ with the “cold load” is not pure quantum vacuum, but a low
occupancy thermal state with average photon number $\overline{n}\simeq 0.15\pm
0.15$. One quadrature of the resulting squeezed state is then amplified at the
AMP stage with sufficient gain $G_{A}=180$ such that the noise in the
amplified quadrature exceeds $A_{H}$ for any $\theta$. From Eq. (1), we
obtained an overall quantum efficiency of $36\pm 4~{}\%$, which can be
compared to $\eta\approx 2~{}\%$ without the AMP stage.
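A rough numerical check of Eq. (1), using the central calibration values quoted above (our own arithmetic; the small difference from the quoted best estimate reflects the calibration uncertainties):

alpha, beta = 0.68, 0.74          # transmissivities
A_A, A_H = 0.25, 17.3             # added noise (quanta)
G_A = 180.0                       # AMP power gain
eta = alpha / (2 + 2 * A_A - alpha + (2 * A_H - (1 - beta)) / (G_A * beta))
print(eta)   # ~0.33, within the quoted 36 +/- 4 % once calibration uncertainties are included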
In this experiment our uncertainties in $\eta$ and $\overline{n}$ create a
systematic source of error. We thus perform our data analysis under three
assumptions: (1) high efficiency ($\eta=0.40$) and high mean photon number
($\overline{n}=0.30$), (2) best estimate for both efficiency ($\eta=0.36$) and
mean photon number ($\overline{n}=0.15$), and (3) low efficiency ($\eta=0.33$)
and low mean photon number ($\overline{n}=0$). These three cases give us
“pessimistic”, “best-guess”, and “optimistic” analyses, in terms of the purity
of the squeezed state estimated by the tomography. Using a lower estimate for
$\eta$ and $\overline{n}$ as inputs to the tomography algorithm causes it to
return a more pure, more squeezed, and therefore a more “optimistic” estimate
of the squeezed state. Associated with each of these three cases, we also have
statistical uncertainty, so the given error bounds cover an interval that
includes both uncertainties around the “best-guess” estimate. They are
reported in the form $X_{-L}^{+U}$, where $X$ is the statistical mean using
the “best-guess” calibration and $L$ and $U$ are respectively the lower and
upper bounds of the one standard deviation uncertainty in the “pessimistic”
and “optimistic” cases.
We must also calibrate the QM to convert the measured voltage noise into units
of noise quanta. In optical homodyne tomography, this is usually done by
inserting the vacuum and observing the quadrature noise. Analogously, we
insert the weak thermal state with mean photons $\overline{n}$ (by simply
turning the SQ stage OFF) and measure voltages proportional to quadrature
values at many $\theta$, as shown (in blue) in Fig. 3. As expected this
voltage noise is $\theta$ independent, with a variance $\Delta
V_{\mathrm{SQ,OFF}}^{2}=3.2\times 10^{-5}~{}\textrm{mV}^{2}$. Under the
convention that vacuum has variance $1/2$ in unitless quadrature space (or in
units of “quanta”), we calibrate this voltage variance to $\Delta
X_{\mathrm{SQ,OFF}}^{2}=(1-\eta)/2+\eta(1/2+\overline{n})=0.55_{-0.05}^{+0.07}$
quanta. Therefore the desired conversion factor $\Delta
X_{\mathrm{SQ,OFF}}^{2}/\Delta
V_{\mathrm{SQ,OFF}}^{2}=1.71_{-0.17}^{+0.20}\times
10^{4}~{}\mathrm{quanta/mV^{2}}$ is used to rescale the variances in Fig. 3
(c).
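The calibration arithmetic can be reproduced directly; a short sketch using the quoted best-guess values (our own check):

eta, nbar = 0.36, 0.15
dX2_off = (1 - eta) / 2 + eta * (0.5 + nbar)   # ~0.55 quanta
dV2_off = 3.2e-5                               # mV^2, measured variance with SQ OFF
print(dX2_off / dV2_off)                       # ~1.7e4 quanta per mV^2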
Figure 3: (a): Density plot of number of occurrences in a $1~{}$µV bin size
of the amplified quadrature voltage $V_{\theta}$ versus $\theta/2\pi$, with
the SQ pump OFF (top) and ON (bottom). (b): In particular, histograms of
$V_{\theta}$ at the maximum of squeezing: data ($\circ$) and Gaussian fit
(continuous lines) for the SQ pump OFF (blue) and ON (red). (c): Noise
variance $\Delta X^{2}_{\theta}$ in quanta units on a log scale versus
$\theta/2\pi$ for the SQ pump ON (red) and OFF (blue). The (black) line
indicates our estimate of the vacuum noise level under the “best-guess”
calibration.
In Fig. 3 (a), we show QM data of the squeezed state. With SQ ON (red) we
observe the characteristic phase dependent noise for a squeezed state. At the
phase for which the variance is minimum, we show the histogram of quadrature
measurements in Fig. 3 (b). The SQ OFF histogram is clearly wider than the SQ
ON histogram, demonstrating our ability to observe squeezing directly at the
output of our measurement chain. In Fig. 3 (c) we plot the variance of the QM
with SQ ON and OFF as a function of $\theta$, expressed in units of quanta,
clearly showing squeezing below the vacuum level. Without correcting for
$\eta$, we observe a minimum quadrature variance which is $\Delta
X^{2}_{\mathrm{SQ,MIN}}=68_{-7}^{+9}~{}\%$ of the vacuum variance.
To infer the quantum state created by the squeezer, correcting for loss during
the QM, we used maximum likelihood quantum state tomography Hradil et al.
(2004). For each of the three calibration cases, we performed 35
reconstructions using independent subsets each containing 10,000 QMs of the
total measured data. We estimated statistical uncertainty from the spread of
properties (such as fidelity or minimum variance) of the set of 35
reconstructions. The statistical uncertainty was significantly lower than the
systematic uncertainty. In Fig. 4 we show the Wigner function of the “best-
guess” reconstructed state $\rho$. The pure squeezed vacuum state
$|\psi\rangle$ that has the highest fidelity with $\rho$ has minimum
quadrature variance $6.0_{-1.1}^{+1.4}~{}\%$ of the vacuum variance, and that
maximum fidelity is $F=\langle\psi|\rho|\psi\rangle=0.81_{-0.17}^{+0.16}$. As
explained in the supplementary information, the minimum variance of $\rho$ is
biased by an amount comparable to our systematic uncertainty, so we infer the
minimum variance $\Delta x^{2}_{\mathrm{SQ,MIN}}$ directly from the observed
minimum variance as $\Delta x^{2}_{\mathrm{SQ,MIN}}=(1/\eta)(\Delta
X^{2}_{\mathrm{SQ,MIN}}-(1-\eta)/2)$. We find $\Delta
x^{2}_{\mathrm{SQ,MIN}}=12^{+30}_{-12}~{}\%$ of the vacuum variance. For
comparison, the most highly squeezed optical state ever made has a variance of
only $7~{}\%$ of the vacuum variance Mehmet et al. (2010).
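The linear correction for detection inefficiency amounts to a one-line calculation; the sketch below uses the best-guess values, and the small difference from the quoted $12~\%$ comes from rounding of the $68~\%$ figure:

eta = 0.36
dX2_min = 0.68 * 0.5                       # observed minimum variance, in quanta
dx2_min = (dX2_min - (1 - eta) / 2) / eta  # ~0.056 quanta
print(dx2_min / 0.5)                       # ~0.11-0.12 of the vacuum variance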
Figure 4: Mean of 35 reconstructions of the Wigner function of the state
exiting the SQ, inferred by maximum likelihood under the “best-guess”
assumption, in quanta units. The faint pattern of ripples extending from the
origin is caused by truncation at 30 photons of the density matrix used to
represent the state. The white circle at the origin shows the full-width at
half-maximum of the vacuum state.
Producing squeezed states of itinerant modes allows the generation of
distributable entanglement by sending two copies of a squeezed vacuum state
through the two input ports of a balanced beam splitter. The coherent
information Schumacher and Nielsen (1996) is one useful way to characterize
the entanglement between the two output modes. The asymptotic number of
maximally entangled qubit pairs (e-bits) that can be distilled per copy of the
noisy entangled state, by using local operations and one-way classical
communication, is at least as large as the coherent information Devetak and
Winter (2005). Given two copies of $\rho$, one could make two entangled modes
with $2.5^{+1.0}_{-0.4}$ e-bits of coherent information.
In conclusion, we have reconstructed the Wigner function of an itinerant
squeezed microwave field generated at the output of a Josephson Parametric
Amplifier. Using a second JPA as a preamplifier has increased the quantum
efficiency of the microwave homodyne detection from approximately $2~{}\%$ to
$36~{}\%$. The level of squeezing is primarily limited by noise added to the
squeezed state by the JPA. Improving the performance of the JPAs (as both
squeezers and phase-sensitive amplifiers) will require more detailed
investigation of the source of this noise. We used maximum likelihood quantum
state tomography to deconvolve the QM inefficiency in order to precisely
characterize the state generated. This is an important step toward generating
easily distributable microwave entanglement on chip.
Notes: A different method was recently used to obtain a similar state
reconstruction Eichler et al. (2010).
###### Acknowledgements.
The authors acknowledge support from the DARPA/MTO QuEST program.
## References
* Houck et al. (2007) A. A. Houck et al., Nature 449, 328 (2007).
* Deléglise et al. (2008) S. Deléglise et al., Nature 455, 510 (2008).
* Hofheinz et al. (2008) M. Hofheinz et al., Nature 454, 310 (2008).
* Hofheinz et al. (2009) M. Hofheinz et al., Nature 459, 546 (2009).
* Slusher et al. (1985) R. E. Slusher et al., Phys. Rev. Lett. 55, 2409 (1985).
* Smithey et al. (1993) D. T. Smithey et al., Phys. Rev. Lett. 70, 1244 (1993).
* Schiller et al. (1996) S. Schiller et al., Phys. Rev. Lett. 77, 2933 (1996).
* Breitenbach et al. (1997) G. Breitenbach, S. Schiller, and J. Mlynek, Nature 387, 471 (1997).
* Furusawa et al. (1998) A. Furusawa et al., Science 282, 706 (1998).
* Yonezawa et al. (2007) H. Yonezawa, S. L. Braunstein, and A. Furusawa, Phys. Rev. Lett. 99, 110503 (2007).
* Aoki et al. (2009) T. Aoki et al., Nature Physics 5, 541 (2009).
* Lassen et al. (2010) M. Lassen et al., Nature Photonics 4, 700 (2010).
* Yurke et al. (1988) B. Yurke et al., Phys. Rev. Lett. 60, 764 (1988).
* Castellanos-Beltran et al. (2008) M. A. Castellanos-Beltran et al., Nature Physics 4, 929 (2008).
* Wallraff et al. (2004) A. Wallraff et al., Nature 431, 162 (2004).
* Teufel et al. (2009) J. D. Teufel et al., Nature Nanotechnology 4, 820 (2009).
* Vogel and Risken (1989) K. Vogel and H. Risken, Phys. Rev. A 40, 2847 (1989).
* Leonhardt and Paul (1993) U. Leonhardt and H. Paul, Phys. Rev. A 48, 4598 (1993).
* Lvovsky and Raymer (2009) A. I. Lvovsky and M. G. Raymer, Rev. Mod. Phys 81, 299 (2009).
* Menzel et al. (2010) E. P. Menzel et al., Phys. Rev. Lett. 105, 100401 (2010).
* Mariantoni et al. (2010) M. Mariantoni et al., Phys. Rev. Lett. 105, 133601 (2010).
* Bozyigit et al. (2010) D. Bozyigit et al., Nature Physics, Advance Online Publication (2010).
* Caves (1982) C. M. Caves, Phys. Rev. D 26, 1817 (1982).
* Leonhardt and Paul (1994) U. Leonhardt and H. Paul, Phys. Rev. Lett. 72, 4086 (1994).
* Hradil et al. (2004) Z. Hradil et al., in _Quantum State Estimation_ (2004), pp. 59–112.
* Mehmet et al. (2010) M. Mehmet et al., Phys. Rev. A 81, 013814 (2010).
* Schumacher and Nielsen (1996) B. Schumacher and M. A. Nielsen, Phys. Rev. A 54, 2629 (1996).
* Devetak and Winter (2005) I. Devetak and A. Winter, Proc. R. Soc. A 462, 207 (2005).
* Eichler et al. (2010) C. Eichler et al. (2010), eprint arXiv:1011.6668v1 [quant-ph].
* Castellanos-Beltran et al. (2009) M. A. Castellanos-Beltran et al., IEEE Transactions on Applied Superconductivity 19, 944 (2009).
* Glancy et al. (2009) S. Glancy et al., Perimeter Institute Recorded Seminar Archive (2009), URL http://pirsa.org/09090003.
## I Supplementary Materials for “Quantum state tomography of an itinerant
squeezed microwave field”
## II Data acquisition and calibration
Determining the amplifier added noise and loss requires several calibration
steps that permit us to isolate the effect of a specific loss or added-noise
contribution to the overall efficiency of the homodyne measurement. The
crucial aspect that makes this calibration possible is that the JPA cavities
have widely-tunable resonance frequencies, adjusted by imposing a magnetic
flux Castellanos-Beltran et al. (2008, 2009). Far from resonance the JPA
cavities behave as open circuits. They are simply mirrors that reflect the
microwave field without otherwise transforming it; therefore, either the SQ or
AMP or both stages can effectively be bypassed.
We begin with both JPA stages bypassed, so that they have
$G_{\rm{S}}=G_{\rm{A}}=1$. If the switch were lossless, when it is connected
to the cold load, the noise power exiting the HEMT amplifier would be
$S=G_{H}(A_{H}+S_{f})$, where
$S_{f}=(1/2)+n_{f}=(1/2)+[\exp(\hbar\omega/k_{B}T_{f})-1]^{-1}$ and $T_{f}$ is
the refrigerator’s temperature. Notice that the result doesn’t depend on the
transmissivities $\alpha$, $\beta$, or $\xi$ because these are at the same
temperature as the cold load, consequently each loss component emits as much
power as it absorbs. However, with the switch connected to the hot load, the
expression for the total power at the output becomes
$S=G_{H}(A_{H}+(\xi\alpha\beta)S_{h}+(1-\xi\alpha\beta)S_{f})$, with
$S_{h}=(1/2)+n_{h}=(1/2)+[\exp(\hbar\omega/k_{B}T_{h})-1]^{-1}$ and
$T_{h}=4.1$ K. In both cases, we expect and observe that $S$ depends linearly
on $S_{f}$ with an offset. By fitting these linear dependencies we can extract
$G_{H}$, $A_{H}$, and the product $\xi\alpha\beta$.
We cannot assume that the switch is lossless. Because its loss sits at 4.1 K,
it will always emit noise power $S_{h}(1-\lambda)+S_{in}\lambda$, where
$S_{in}$ is the incident noise and $\lambda$ is the switch transmissivity. So,
even when $n_{f}\ll 1/2$, the state presented at the SQ stage will have
average thermal occupancy $\bar{n}=(1-\lambda)\xi n_{h}$. We write the noise
power at the output as function of $S_{f}$, for switches in both positions as
$S_{1c}=G_{H}A_{H}+S_{h}G_{H}(1-\lambda)\xi\alpha\beta+S_{f}[G_{H}\lambda\xi\alpha\beta+G_{H}(1-\xi\alpha\beta)]=b_{1c}+m_{1c}S_{f}$
(2)
$S_{1h}=G_{H}A_{H}+S_{h}G_{H}(\xi\alpha\beta)+S_{f}[G_{H}(1-\xi\alpha\beta)]=b_{1h}+m_{1h}S_{f},$
(3)
where the subscript $1c$ ($1h$) corresponds to the switch connected to the
cold (hot) load. Fitting our noise data to the right-hand side of Eqs. 2 and 3,
we can obtain the four parameters $b_{1h},b_{1c},m_{1h}$ and $m_{1c}$. However
as these parameters are not independent,
$S_{h}=(b_{1h}-b_{1c})/(m_{1c}-m_{1h})$, we cannot extract the switch loss
independently. We can nevertheless bound this unknown loss by taking a worst
case estimate as the manufacturer’s minimum specified transmission (at room
temperature) $\lambda=0.83$ and assuming it is less lossy at 4.1 K. We
moreover confirmed that at room temperature the frequency-dependent loss of
the switch is within the manufacturer’s specification. Then by using
$0.83<\lambda<1$, we can bound the desired parameters using Eqs. 2 and 3, with
the expressions $(\xi\alpha\beta)^{-1}=1+m_{1h}S_{h}\lambda/(b_{1h}-b_{1c})$,
$G_{H}=m_{1h}/(1-\xi\alpha\beta)$, and
$A_{H}=(b_{1c}/G_{H})-(1-\lambda)S_{h}(\xi\alpha\beta)$.
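This inversion of Eqs. (2)-(3) can be written compactly; the sketch below is ours (not the authors' analysis code) and takes the four fitted parameters plus an assumed switch transmissivity lambda:

def bounded_calibration(b1c, b1h, m1c, m1h, lam):
    """Invert Eqs. (2)-(3) for a given switch transmissivity lam."""
    S_h = (b1h - b1c) / (m1c - m1h)                     # hot-load noise power
    xab = 1.0 / (1.0 + m1h * S_h * lam / (b1h - b1c))   # xi*alpha*beta
    G_H = m1h / (1.0 - xab)                             # HEMT power gain
    A_H = b1c / G_H - (1.0 - lam) * S_h * xab           # HEMT added noise
    return S_h, xab, G_H, A_H

# Evaluating at lam = 0.83 and lam = 1 brackets the calibration parameters.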
We then perform the same analysis, finding the linear dependence of the output
noise on $S_{f}$ and on the switch position, with the AMP ON and SQ OFF. From
these fits and knowledge of $A_{H}$ and $G_{H}$ we find $\xi\alpha$, $A_{A}$,
and $G_{A}\beta$. Finally, we operate the experiment with AMP OFF and SQ ON. A
third time we fit the linear dependence of $S$ on $S_{f}$ with the switch in
both positions, determining $\xi$, $\alpha$ and $\beta$ separately (Fig. 5b).
We evaluate the expressions for $\alpha$, $\beta$, $A_{A}$, $A_{H}$, $G_{H}$
and $\bar{n}$ at the bounds on $\lambda$, finding the range of values in the
main text. We also find $\xi=-9.9\pm 1$ dB, of which 6 dB arises from an
attenuator that has been placed at the input of the SQ stage.
Figure 5: The noise density $S$ in arbitrary units at the output of the
measurement versus refrigerator temperature $T_{f}$. a.) Data acquired with
the AMP and SQ OFF and the switch connected to the cold load (circles) and hot
load (squares). The lines are linear fits to $S$ versus $S_{f}$ for the case
of the switch connected to the cold load (solid) and hot load (dashed). b.)
Data acquired with the AMP ON and SQ OFF (blue) and AMP OFF and SQ ON (red),
with the switch connected to the cold load (circles) with a linear fit (solid)
and hot load (squares) with a linear fit (dashed). The arbitrary y-scale is
consistent between the six plots. The linear fits do not appear as lines
because we plot $S$ versus $T_{f}$ rather than $S_{f}$.
To acquire these calibration data sets, we regulate the refrigerator’s
temperature at 10 values between base temperature ($T<50$ mK) and 800 mK,
which requires about 7 hours to complete. For each temperature point we
measure the noise at the output under all six conditions, 2 switch positions,
and 3 amplifier configurations (AMP OFF SQ OFF, AMP ON SQ OFF, and AMP OFF SQ
ON). We inject a tone detuned from the AMP pump by 20 kHz. By dividing the
noise power at the output of the chain by the power in this tone, we become
insensitive to any variation in $G_{H}$ over the time needed to acquire the
data. At the end of the calibration, we immediately operate the experiment
with SQ ON and AMP ON, to acquire the data in the paper. In addition, we use
the tone to ensure that we do not saturate the amplifier chain.
Data is acquired by digitizing the output (IF port) of the mixer at a rate of
$10^{7}$ samples per second. We filter the IF port with a 5 MHz anti-aliasing
low-pass filter. The digitized data is digitally filtered with a 3rd-order
Butterworth high-pass filter with a 500 kHz corner (3 dB) frequency. The noise
density $S$ is the average noise density in the frequency range between 500
kHz and 5 MHz.
## III Maximum likelihood analysis of the squeezed state
Table 1: Inferred properties of the squeezed state, under our three analysis assumptions.

| | Pessimistic | Best guess | Optimistic |
|---|---|---|---|
| Fidelity | $0.66\pm 0.02$ | $0.807\pm 0.016$ | $0.960\pm 0.005$ |
| Min. var. of comparison pure state $|\psi\rangle$ [1] | $0.065\pm 0.009$ | $0.060\pm 0.003$ | $0.0493\pm 0.0006$ |
| $\rho$’s purity | $0.62\pm 0.02$ | $0.74\pm 0.02$ | $0.96\pm 0.01$ |
| $\rho$’s sq. var. [2] | $0.918\pm 0.002$ | $0.484\pm 0.013$ | $0.304\pm 0.008$ |
| $\rho$’s anti-sq. var. [2] | $25.54\pm 0.07$ | $20.17\pm 0.06$ | $19.18\pm 0.05$ |
| Coherent info. [3] | $2.19\pm 0.08$ | $2.46\pm 0.09$ | $3.42\pm 0.05$ |
| Linear sq. var. [4] | $0.40\pm 0.02$ | $0.12\pm 0.02$ | $-0.18\pm 0.02$ |

[1] Ratio of the variance of the squeezed quadrature of the pure squeezed vacuum state with highest fidelity to the variance of the vacuum.
[2] Ratio of the variance of the most likely state $\rho$’s squeezed or anti-squeezed quadrature to the variance of the vacuum.
[3] Coherent information (in e-bits) that could be produced with two copies of the squeezed state and a beam splitter.
[4] Direct linear inference of the squeezed state’s minimum variance, relative to the vacuum variance.
Table 1 shows the statistical errors in our estimates of inferred parameters
characterizing the squeezed state, for the three analysis cases, based upon
our systematic calibration uncertainties. The first line presents the fidelity
$F=\langle\psi|\rho|\psi\rangle$, where $\rho$ is the maximum likelihood
reconstructed density matrix of the field exiting the SQ and $|\psi\rangle$ is
the pure vacuum squeezed state that maximizes the fidelity. The second line
gives the ratio of the minimum variance of $|\psi\rangle$ to the variance of
vacuum. The third line gives the purity $\mathrm{Tr}({\rho^{2}})$ of $\rho.$
The fourth and fifth lines give the ratios of the squeezed and anti-squeezed
variances of the reconstructed state to the variance of vacuum. The sixth line
presents the coherent information that could be obtained by combining two
copies of $\rho$ on a beam splitter. The last line gives our estimate of the
experimental state’s minimum variance based on direct linear inference.
We have stated three variances that characterize the state created in this
experiment: the linear estimate of the experimental state’s minimum variance
($12~{}\%$), the most likely state $\rho$’s minimum variance ($48~{}\%$), and
the minimum variance of the pure squeezed vacuum state
$\left|\psi\right\rangle$ that maximizes the fidelity with $\rho$
($6.0~{}\%$). Here we give more discussion of these variances.
The quadrature measurements we observe are the linear combination of the
quantum state created by the squeezer and vacuum fluctuations:
$X_{\theta}=\sqrt{\eta}x_{\theta}+\sqrt{(1-\eta)}y_{\theta},$
where $x_{\theta}$ is the quadrature of the squeezed state, and $y_{\theta}$
is the quadrature of the vacuum state. Solving for $x_{\theta}$ gives
$x_{\theta}=\frac{1}{\sqrt{\eta}}\left(X_{\theta}-\sqrt{\left(1-\eta\right)}y_{\theta}\right).$
Therefore the inferred variance of the squeezed state’s quadrature $\Delta
x_{\theta}^{2}$ is
$\Delta x_{\theta}^{2}=\frac{1}{\eta}\left[\Delta
X_{\theta}^{2}-\left(1-\eta\right)\Delta y_{\theta}^{2}\right].$
The vacuum variance $\Delta y_{\theta}^{2}=1/2$, and we can easily calculate
an unbiased estimate of $\Delta X_{\theta}^{2}$ for every phase $\theta$. This
gives us an unbiased estimate of $\Delta x_{\theta}^{2}$ that does not depend
on the details (for example, Gaussianity) of the quantum state. We calculate
$\Delta x_{\theta}^{2}$ using 20,000 quadrature measurements at each of 100
evenly spaced $\theta$ and calculate the minimum value $\Delta
x_{\mathrm{SQ,MIN}}^{2}$, in Table 1. The statistical uncertainties show one
standard deviation in the estimate of $\Delta x_{\mathrm{SQ,MIN}}^{2}$. For
the “optimistic case” we calculate a negative variance, which is clearly
unphysical. This is a sign of inconsistency in the “optimistic” calibration
parameters. Because the “optimistic” estimate for the squeezed state is
computed using the lower bounds on $\eta$ and $\overline{n}$, this negative
variance is evidence that the detector’s true $\eta$ and / or effective gain
($\Delta X_{\mathrm{SQ,OFF}}^{2}/\Delta V_{\mathrm{SQ,OFF}}^{2}$) must be
larger than the lower bounds set by calibration.
The minimum variance of $\rho$ is significantly higher than this linear
estimate. This is caused by bias in the maximum likelihood method. Quantum
state estimation by maximum likelihood is biased toward more mixed states, and
the amount of bias increases with increasing purity of the state from which
the measurements are drawn Glancy et al. (2009). Based on numerical
experiments, the bias in our estimates of the fidelity should be well below
the uncertainty level set by systematic effects. However, the bias in our
estimates of the minimum variance of the inferred state could be larger. To
attempt to quantify this effect, we simulated measuring and performing maximum
likelihood tomography on a Gaussian state. This Gaussian state is chosen to
have minimum and maximum variances equal to those calculated by the linear
method described above for the “best-guess” case. By computer we simulate
10,000 quadrature measurements (the same number we used for ML analysis of the
true experiment) from this Gaussian state and perform maximum likelihood
tomography on those measurements. The inferred state has minimum variance
$40~{}\%$. Therefore it is possible that the experimental state has smaller
minimum variance than the most likely state inferred from only 10,000
measurements. Because we have some independent evidence for non-Gaussian
effects in the experiment, we cannot quantify the size of this bias using
this Gaussian simulation. Other numerical simulations have confirmed that this
bias decreases as the number of measurements analyzed increases and that this
bias is not caused by truncation of the Hilbert space at 30 photons.
The apparent discrepancy between the $6~{}\%$ for the variance of
$|\psi\rangle$ and the $48~{}\%$ for the variance of $\rho$ also deserves some
comments. It is important to note that one would not expect the minimum
variance of a mixed state to equal the minimum variance of its highest
fidelity pure state. The fidelity between a mixed Gaussian state (centered at
the origin of phase space) whose minimum and maximum variances are $v_{x}$ and
$v_{p}$ and a pure squeezed vacuum state with minimum variance $v_{s}$ is
given by
$F_{\mathrm{Gauss}}=\frac{2}{\sqrt{\frac{(1+4v_{s}v_{p})(v_{s}+v_{x})}{v_{s}}}}.$
The highest fidelity pure state has minimum variance
$v_{s}=\frac{1}{2}\sqrt{\frac{v_{x}}{v_{p}}}$, and the fidelity between these
two states is
$F_{\mathrm{Gauss,max}}=\frac{2}{1+2\sqrt{v_{x}v_{p}}}.$
Consider the state $\sigma$ to be a Gaussian state with minimum variance of
$48~{}\%$ and maximum variance $2017~{}\%$. ($\sigma$ has variances equal to
those of our state $\rho$, but unlike $\rho$, $\sigma$ is guaranteed to be
Gaussian.) Then let $|\psi\rangle$ be the pure squeezed vacuum state that has
maximum fidelity with $\sigma$.
$F_{\mathrm{Gauss,max}}=\langle\psi|\sigma|\psi\rangle=0.49$, and the minimum
variance of $|\psi\rangle$ is $7.7~{}\%$. The difference between the minimum
variances of $\rho$ and $\left|\psi\right\rangle$ is to be expected. However,
the maximum fidelity of $\rho$ is significantly larger than we would expect if
it was perfectly Gaussian. This non-Gaussianity could be caused by bias in the
maximum likelihood inference and / or genuine non-Gaussian effects in the
experiment.
Tomographic reconstruction of a quantum state requires that the experimental
device always creates the same (potentially mixed) quantum state, that the
measurements are well described by inefficient quadrature measurements, and
that the calibration of those measurements is consistent. In this experiment
we have observed some evidence that at least one of these assumptions is
violated. The likelihood of the maximum likelihood state is significantly
lower than one should expect from simulated measurements on that state. That
is, if the tomographic assumptions above were true, we expect to find a
significantly higher value for the maximum likelihood. We believe this effect
could be caused by an interaction between the state preparation and
measurement stages of the experiment, such as a phase dependent efficiency of
the measurement JPA, and/or nonlinear processes in the measurement.
# Easy asteroid phase curve fitting for the Python ecosystem: Pyedra
Milagros R. Colazo Juan B. Cabral Martín Chalela Bruno O. Sánchez
Instituto de Astronomía Teórica y Experimental - Observatorio Astronómico de
Córdoba (IATE, UNC–CONICET), Córdoba, Argentina. Facultad de Matemática,
Astronomía y Física, Universidad Nacional de Córdoba (FaMAF–UNC) Bvd. Medina
Allende s/n, Ciudad Universitaria, X5000HUA, Córdoba, Argentina Centro
Internacional Franco Argentino de Ciencias de la Información y de Sistemas
(CIFASIS, CONICET–UNR), Ocampo y Esmeralda, S2000EZP, Rosario, Argentina.
Department of Physics, Duke University, 120 Science Drive, Durham, NC, 27708,
USA
###### Abstract
A trending astronomical phenomenon to study is the variation in brightness
of asteroids, caused by their rotation about their own axes, their non-spherical
shapes, albedo variations across their surfaces, and their position relative to
the Sun. The latter behavior can be visualized on a “Phase Curve” (phase angle
vs. reduced magnitude). To enable the comparison between several models proposed
for this curve we present a Python package called Pyedra. Pyedra implements
three phase-curve models and also provides capabilities for visualization as
well as integration with external datasets. The package is fully documented and
tested following a strict quality-assurance workflow, with a user-friendly
programmatic interface. In future versions, we will include more models and the
estimation of quantities derived from the parameters, such as diameter and
different types of albedo, as well as tools to correlate physical and orbital
parameters.
###### keywords:
minor planets, asteroids: general ; planets and satellites: fundamental
parameters ; Python Package
††journal: astronomy & computing
## 1 Introduction
The brightness variation of asteroids is a fascinating astronomical phenomenon
to study. One cause of the object’s varying magnitude is its rotation about
its own axis. This is because asteroids have non-spherical shapes and albedo
differences along their surface. On the other hand, the brightness of an
asteroid will also vary just by moving in its orbit around the Sun. When the
object is close to opposition, i.e. at angles close to $0^{\circ}$, sunlight
hitting the asteroid and light reflected from the object’s surface will come
from the same direction, causing the object to have a maximum in apparent
brightness. As it moves in its orbit and its phase angle increases, the Sun’s
light will begin to cast shadows on the asteroid’s surface causing a decrease
in its brightness. In summary, the magnitude of an asteroid drops as it
approaches opposition, and as it moves away the magnitude begins to grow
again. This behavior can be visualized on a phase angle ($\alpha$) vs.
reduced magnitude $V$ diagram, known as the “Phase Curve”. Although several
models have been proposed to describe this curve, there is no comprehensive
tool available that provides the necessary fitting procedures and enables a
reliable comparison between these models.
In 1989, Bowell et al. proposed the $G$ model (also known as $H,G$ model), a
semi-empirical model derived from the basic principles of radiative transfer
theory with some assumptions (Waszczak et al. 2015). Shevchenko (1996)
proposed an empirical tri-parametric phase function model valid for phase
angles in the range of $0^{\circ}-40^{\circ}$. There is a third model for the
phase function proposed by Muinonen et al. in 2010. It is a model similar to
Bowell’s but replaces the $G$ parameter with two parameters $G_{1}$ and
$G_{2}$, making it also a tri-parametric model. According to Muinonen et al.,
the $G$ model is a good approximation in the region of
$10^{\circ}{\sim}60^{\circ}$, while the $G_{1}$ and $G_{2}$ model works well
also for angles close to the opposition (${\sim}0^{\circ}$).
Many large sky surveys are currently in operation. Thousands of asteroids are
observed by these telescopes, providing a unique opportunity to study them.
Some of these large sky surveys are Gaia (Gaia Collaboration et al. 2018),
TESS (Ricker et al. 2015) and, in the near future, the Vera Rubin Observatory’s
Legacy Survey of Space and Time (LSST, Schwamb et al. 2019). This poses the
opportunity to characterize the phase curves of a large number of asteroids,
and we must be prepared to take full advantage of this information. One of the
main analyses with these datasets would be the calculation of the absolute
magnitude $H$ of hundreds or thousands of these objects, which also enables the
estimation of their diameters. The parameters $G$, $G_{1}$, $G_{2}$, $b$ can provide a good
estimate of the albedo of the asteroids and, even more, can help in the
taxonomic classification (Shevchenko 1996; Belskaya & Shevchenko 2000;
Carbognani et al. 2019).
In this context of “big data for asteroids” we developed Pyedra. Pyedra
enables the analysis of large amounts of data: it provides the parameters of
the selected phase-function model that best fit the observations. Thus, it
can quickly create parameter catalogs for large databases as well as work
with non-survey data, i.e. personal observations.
This paper is organized as follows: in Section 2 we provide a brief
description of the algorithm. In Section 3 we introduce technical details
about the Pyedra package. In Section 4 we present the conclusions and future
perspectives.
## 2 The Algorithm
In this section, we present the three phase function models implemented in
Pyedra and for each one of them, we provide details on the method used for
parameter estimation. In general we adopt the procedure proposed by Muinonen
et al. (2010), hereafter M10.
### 2.1 H, G model
The $H,G$ phase function model for asteroids can be described analytically
through the following equation (Muinonen et al. 2010):
$V(\alpha)=H-2.5\log_{10}[(1-G)\Phi_{1}(\alpha)+G\Phi_{2}(\alpha)],$ (1)
where $H$ and $G$ are the two free parameters of the model, $\alpha$ is the
phase angle, $V(\alpha)$ is the reduced $V$ magnitude (brightness on Johnson’s
filter $V$ normalized at 1 AU from the Sun and the observer),
$\Phi_{1}$($\alpha$) and $\Phi_{2}$($\alpha$) are two basis functions
normalized to unity at $\alpha=0^{\circ}$. The basis functions can be
accurately approximated by:
$\Phi_{1}(\alpha)=\exp\left(-3.33\tan^{0.63}\tfrac{1}{2}\alpha\right),\qquad\Phi_{2}(\alpha)=\exp\left(-1.87\tan^{1.22}\tfrac{1}{2}\alpha\right).$ (2)
To obtain the values of $H$ and $G$, M10 proposes to write the reduced
magnitude as:
$10^{-0.4V(\alpha)}=a_{1}\Phi_{1}(\alpha)+a_{2}\Phi_{2}(\alpha),$ (3)
then we can write the absolute magnitude $H$ and the coefficient $G$ as:
$H=-2.5\,\log_{10}(a_{1}+a_{2}),\qquad G=\frac{a_{2}}{a_{1}+a_{2}}.$ (4)
The coefficients $a_{1}$ and $a_{2}$ are estimated from the observations using
the standard method of least squares.
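This linearisation reduces the fit to an ordinary linear least-squares problem; a minimal numpy sketch (illustrative only, not the Pyedra implementation; phase angles are assumed to be in radians) is:

import numpy as np

def hg_fit(alpha, V):
    # Basis functions of the H,G model, Eq. (2)
    phi1 = np.exp(-3.33 * np.tan(alpha / 2) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(alpha / 2) ** 1.22)
    # Linear least squares on 10**(-0.4 V) = a1*phi1 + a2*phi2, Eq. (3)
    y = 10 ** (-0.4 * V)
    coeffs, *_ = np.linalg.lstsq(np.column_stack([phi1, phi2]), y, rcond=None)
    a1, a2 = coeffs
    return -2.5 * np.log10(a1 + a2), a2 / (a1 + a2)   # H, G  (Eq. 4)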
### 2.2 H, G1, G2 model
This three parameter magnitude phase function can be described as in M10:
$\begin{split}V(\alpha)&=H-2.5\log_{10}[G_{1}\Phi_{1}(\alpha)+G_{2}\Phi_{2}(\alpha)\\\
&+(1-G_{1}-G_{2})\Phi_{3}(\alpha)],\end{split}$ (5)
where $\Phi_{1}$(0°)=$\Phi_{2}$(0°)=$\Phi_{3}$(0°)=1. $H$, $G_{1}$ and $G_{2}$
are the parameters of the model, $\alpha$ is the phase angle, $V(\alpha$) is
the reduced magnitude and $\Phi_{1}$, $\Phi_{2}$, $\Phi_{3}$ are basis
functions.
These basis functions are defined piecewise along the phase angle, using linear
terms as well as cubic splines (Penttilä et al. 2016).
In this case we write the reduced magnitude as:
$10^{-0.4V(\alpha)}=a_{1}\Phi_{1}(\alpha)+a_{2}\Phi_{2}(\alpha)+a_{3}\Phi_{3}(\alpha).$
(6)
For this calculation we use the tabulated values for the basis functions
presented in Penttilä et al. (2016). The model free parameters can be obtained
from:
$H=-2.5\,\log_{10}(a_{1}+a_{2}+a_{3}),\qquad G_{1}=\frac{a_{1}}{a_{1}+a_{2}+a_{3}},\qquad G_{2}=\frac{a_{2}}{a_{1}+a_{2}+a_{3}}.$ (7)
The coefficients $a_{1}$, $a_{2}$ and $a_{3}$ are estimated from observations
using the method of least squares.
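The H, G1, G2 fit proceeds in the same way with a third basis column; in the sketch below (again illustrative, not the package code) phi1, phi2 and phi3 are the basis-function values at the observed phase angles, e.g. spline-interpolated from the tables of Penttilä et al. (2016):

import numpy as np

def hg1g2_fit(V, phi1, phi2, phi3):
    # Linear least squares on 10**(-0.4 V) = a1*phi1 + a2*phi2 + a3*phi3, Eq. (6)
    y = 10 ** (-0.4 * V)
    coeffs, *_ = np.linalg.lstsq(np.column_stack([phi1, phi2, phi3]), y, rcond=None)
    a1, a2, a3 = coeffs
    s = a1 + a2 + a3
    return -2.5 * np.log10(s), a1 / s, a2 / s   # H, G1, G2  (Eq. 7)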
### 2.3 Shevchenko model
This model is described in the following equation (Shevchenko 1996):
$V(1,\alpha)=V(1,0)-\frac{a}{1+\alpha}+b\cdot\alpha,$ (8)
where $a$ characterizes the amplitude of the so-called “opposition effect”,
$b$ is a parameter describing the linear term of the phase-magnitude
relationship, $\alpha$ is the phase angle and $V(1,0)$ is the absolute
magnitude.
Although M10 does not present this model in its work, we have extended the use
of least squares for this case as well given that Shevchenko’s formula is
already written in the form of a linear equation.
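Since the relation is linear in $(V(1,0), a, b)$, the same least-squares machinery applies; a minimal sketch (ours, not the package code, with phase angles in the same units as the data, typically degrees) is:

import numpy as np

def shev_fit(alpha, V):
    # V(1, alpha) = V(1,0) - a/(1+alpha) + b*alpha  is linear in (V0, a, b)
    A = np.column_stack([np.ones_like(alpha), -1.0 / (1.0 + alpha), alpha])
    (V0, a, b), *_ = np.linalg.lstsq(A, V, rcond=None)
    return V0, a, b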
## 3 Technical details about the Pyedra package
### 3.1 User functionalities and application example
The Pyedra package consists of 3 main functions to perform the fitting of
observations. Each of these functions corresponds to one of the models
mentioned in Section 2. These functions are:
* •
HG_fit(): fits the model of Section 2.1 to the observations.
* •
HG1G2_fit(): fits the model of Section 2.2 to the observations.
* •
Shev_fit(): fits the model of Section 2.3 to the observations.
All these functions return an object that we call PyedraFitDataFrame,
containing the parameters obtained from the fit in a format analogous to a
pandas DataFrame. The plot() method of this object returns:
* •
a graph of our observations in the plane ($\alpha$, $V$) together with the
fitted model.
* •
any of the plots that pandas can create.
Pyedra also offers the possibility to add Gaia observations to the user’s
sample. This is done by using:
1. 1.
.load_gaia() to read the files containing Gaia observations.
2. 2.
.merge_obs() to merge the user and Gaia tables.
3. 3.
One can apply any of the above functions to this new dataframe.
This is a very interesting feature because, in general, observations from the
ground correspond to small phase angles. In contrast, Gaia can only observe
for phase angles $\alpha>10$° (Gaia Collaboration et al. 2018). Both sets of
data are complementary, thus achieving a more complete coverage of the phase
angle space. This also leads to a better determination of the phase function
parameters.
As a simple usage application we show how to calculate the parameters of the
$H,G$ model and how to plot the observations with the fit obtained using the
dataset of Carbognani et al. (2019). The respective plots are shown in Figs. 1
& 2. This dataset is provided with Pyedra for the user to test the
functionalities.
⬇
>>> import pyedra
>>> import pandas as pd
>>> import matplotlib.pyplot as plt
# load the data
>>> df = pyedra.datasets.load_carbognani2019()
# fit the data
>>> HG = pyedra.HG_fit(df)
id H error_H G error_G R
0 85 7.492423 0.070257 0.043400 0.035114 0.991422
1 208 9.153433 0.217270 0.219822 0.097057 0.899388
2 236 8.059719 0.202373 0.104392 0.094382 0.914150
3 306 8.816185 0.122374 0.306459 0.048506 0.970628
4 313 8.860208 0.098102 0.170928 0.044624 0.982924
5 338 8.465495 0.087252 -0.121937 0.048183 0.992949
6 522 8.992164 0.063690 0.120200 0.028878 0.991757
PyedraFitDataFrame - 7 rows x 6 columns
# take the mean value of H
>>> HG.H.mean()
8.54851801238607
# plot the data and the fit
>>> HG.plot(df=df, ax=None)
>>> plt.show()
<AxesSubplot:title={’center’:’Phase curves’},
xlabel=’Phase angle’, ylabel=’V’>
# scatter plot of G vs H
>>> HG.plot(x=’G’, y=’H’, kind=’scatter’)
>>> plt.show()
<AxesSubplot:xlabel=’G’, ylabel=’H’>
Figure 1: Example of the figure obtained for the $H,G$ model. The points
correspond to the observations and the dotted lines to the best fit. Points of
the same color correspond to the same object.
Figure 2: Example of the ’scatter’ plot of a pandas DataFrame, also available
for the PyedraFitDataFrame.
### 3.2 Quality assurance
Software quality assurance refers to the set of standards and procedures that
must be used in order to verify that the software meets certain subjective
quality criteria. The most common procedures to carry out this task are unit-
testing and code-coverage.
The purpose of unit-testing is to check that each of the individual components
of the software works as expected (Jazayeri 2007). That is, we isolate a
function from our code and verify that it works correctly. On the other hand,
code-coverage is a measure of how much of our software has been tested (Miller
& Maloney 1963). In this way, we can identify parts of the code that we have
not verified. In the Pyedra package we provide five suites of unit-tests that
evaluate different sections of the code, reaching 99$\%$ of code-coverage. The
testing suites are tested for Python versions 3.7, 3.8 and 3.9. We are also
interested in the maintainability of Pyedra, therefore we have adopted PEP 8 –
Style Guide for Python Code (Van Rossum et al. 2001) so that our project meets
current coding standards and readability. For this purpose, we use the flake8
tool (https://flake8.pycqa.org/en/latest/), which automatically detects any case
where we are not respecting the style imposed by PEP 8 as well as programming
errors, such as “library imported but unused”.
The entire source code is MIT-licensed and available in a public repository
(https://github.com/milicolazo/Pyedra). All changes and new versions of the
package committed to this repository are automatically tested with
continuous-integration services (https://travis-ci.com/milicolazo/Pyedra,
https://github.com/milicolazo/Pyedra/actions). Documentation is automatically
generated from Pyedra docstrings and made public in the read-the-docs service
(https://pyedra.readthedocs.io/en/latest/?badge=latest). Finally, Pyedra is
available for installation from the Python Package Index
(PyPI, https://pypi.org/project/Pyedra/) and is currently going through the
registration process to appear in the Astrophysics Source Code Library
(ASCL.net, Grosbol & Tody 2010).
### 3.3 Integration with the Python scientific–stack
Python has become an important programming language within the astronomical
community (Stansby et al. 2020). This is mainly because it is a simple to use,
free and versatile language for manipulating and visualizing data (Faes 2012).
Pyedra is built on top of the Python scientific stack: Pandas (McKinney et al.
2010), since the main object on which Pyedra operates is a dataframe; Scipy
(Virtanen et al. 2020) for function interpolation and least-squares fitting;
Numpy (Walt et al. 2011) to manipulate arrays; Matplotlib (Hunter 2007) for
data visualization; and attrs (https://www.attrs.org) to facilitate the
implementation of classes.
#### 3.3.1 Short comparison with other similar packages
Pyedra’s main objective is to calculate the parameters of different phase
function models for large and small volumes of data. The sbpy package
(https://sbpy.org/, Mommert et al. 2019) offers the possibility to model phase
curves. In this subsection, we will make a brief contrast between both
projects.
Regarding the available models, sbpy and Pyedra share the HG and HG1G2 model.
In the case of sbpy, the models HG12 (Revised H, G12 model by Penttilä et al.)
and a linear model are also available. Although these models were not
considered in Pyedra (but will be implemented in the next release), we have
included Shevchenko’s model which is not present in sbpy.
On the other hand, sbpy does not provide the functionality to estimate the
best fit model parameters (as Pyedra does) but returns other quantities
derived from these parameters. In addition, Pyedra has an error estimate for
each calculated value, something that is not present in sbpy.
Finally, Pyedra’s main strength compared to sbpy is its simplicity of use. With
sbpy we have not found a quick way to obtain phase-function parameter catalogs
for databases with large numbers of entries. With Pyedra, the user can
accomplish this task by writing just one line of code. The same is true for
graphic capabilities: since plotting phase functions is one of Pyedra’s
features, a single method call is enough to obtain a visualization of the phase
function. It is also worth noticing that with Pyedra not only phase-curve plots
can be easily obtained; all pandas visualization tools are also available,
enabling a more comprehensive analysis of the resulting catalog. Moreover, as it
is based on pandas dataframe manipulation, the output catalog is simple to
visualize, modify and perform further calculations on.
## 4 Conclusions
In this paper, we present Pyedra, a python implementation for asteroid phase
curve fitting. This package allows the user to fit three different models of
phase functions to observations of asteroid phase angle and photometry.
Pyedra is suitable for the analysis of private datasets, of one or more
asteroids, as well as large volumes of information from any public survey data
release such as TESS, Gaia, K2, among others. Consequently, Pyedra is a tool
that will enable the creation of phase-curve model parameter catalogs for
hundreds of thousands of asteroids.
Pyedra also offers the possibility of producing numerous visualization plots.
Not only can it produce a graph of the phase functions, it also makes available
all the plots natively offered for pandas dataframes. In this way, we provide
the possibility of a complete analysis of the results obtained.
As we have already mentioned, we are living in an era of big surveys. We must
be able to have tools capable of processing the vast amount of data that these
surveys constantly provide to the scientific community.
### 4.1 Future Work
Pyedra is still in development process, so there are still topics to be
improved.
The first thing to consider would be to have more phase function models added
to those already offered by the package. In addition, it would be convenient
to be able to estimate certain quantities derived from the parameters
obtained, such as the diameter, the integral function, the different types of
albedo, etc. It would be interesting to have a tool that (in the case of a
database containing several asteroids) allows combining information on
physical parameters with orbital parameters. For example, the possibility of
studying which $G$ values the asteroids have for different semi-axes $a$.
Finally, we intend to add more large survey data for the user to combine with
their observations, such as TESS, SDSS, etc.
## 5 Acknowledgments
The authors would like to thank their families and friends, as well as the
IATE astronomers for useful comments and suggestions.
This work was partially supported by the Consejo Nacional de Investigaciones
Científicas y Técnicas (CONICET, Argentina). M.R.C., J.B.C and M.Ch. were
supported by a fellowship from CONICET.
This research employed the NASA Astrophysics Data System
(http://adsabs.harvard.edu/), the Cornell University xxx.arxiv.org repository,
the Python programming language, and the Numpy and Scipy libraries; the other
packages utilized can be found at the GitHub webpage for Pyedra.
## References
* Belskaya & Shevchenko (2000) Belskaya, I. N. & Shevchenko, V. G. 2000, Icarus, 147, 94
* Bowell et al. (1989) Bowell, E., Hapke, B., Domingue, D., et al. 1989, in Asteroids II, ed. R. P. Binzel, T. Gehrels, & M. S. Matthews, 524–556
* Carbognani et al. (2019) Carbognani, A., Cellino, A., & Caminiti, S. 2019, Plan. Space Sci., 169, 15
* Faes (2012) Faes, D. 2012, Journal of Colloid and Interface Science, 3, E1
* Gaia Collaboration et al. (2018) Gaia Collaboration, Spoto, F., Tanga, P., et al. 2018, A&A, 616, A13
* Grosbol & Tody (2010) Grosbol, P. & Tody, D. 2010, arXiv preprint arXiv:1004.4430
* Hunter (2007) Hunter, J. D. 2007, Computing in science & engineering, 9, 90
* Jazayeri (2007) Jazayeri, M. 2007, in Future of Software Engineering (FOSE’07), IEEE, 199–213
* McKinney et al. (2010) McKinney, W. et al. 2010, in Proceedings of the 9th Python in Science Conference, Vol. 445, Austin, TX, 51–56
* Miller & Maloney (1963) Miller, J. C. & Maloney, C. J. 1963, Communications of the ACM, 6, 58
* Mommert et al. (2019) Mommert, M., Kelley, M. S., Val-Borro, M., et al. 2019, Journal of Open Source Software
* Muinonen et al. (2010) Muinonen, K., Belskaya, I. N., Cellino, A., et al. 2010, Icarus, 209, 542
* Penttilä et al. (2016) Penttilä, A., Shevchenko, V. G., Wilkman, O., & Muinonen, K. 2016, Plan. Space Sci., 123, 117
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Schwamb et al. (2019) Schwamb, M. E., Hsieh, H., Bannister, M. T., et al. 2019, Research Notes of the AAS, 3, 51
* Shevchenko (1996) Shevchenko, V. G. 1996, in Lunar and Planetary Science Conference, Vol. 27, Lunar and Planetary Science Conference, 1193
* Stansby et al. (2020) Stansby, D., Yeates, A., & Badman, S. 2020, The Journal of Open Source Software, 5, 2732
* Van Rossum et al. (2001) Van Rossum, G., Warsaw, B., & Coghlan, N. 2001, Python. org, 1565
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Walt et al. (2011) Walt, S. v. d., Colbert, S. C., & Varoquaux, G. 2011, Computing in science & engineering, 13, 22
* Waszczak et al. (2015) Waszczak, A., Chang, C.-K., Ofek, E. O., et al. 2015, AJ, 150, 75
# The Adoption of Image-Driven Machine Learning for Microstructure
Characterization and Materials Design: A Perspective
This pre-print is currently undergoing peer review for publication in JOM.
Arun Baskaran ${}^{1,6}$, Elizabeth J. Kautz ${}^{2}$, Aritra Chowdhary ${}^{3}$, Wufei Ma ${}^{4}$,
Bulent Yener ${}^{5}$, Daniel J. Lewis ${}^{1}$
(1Department of Materials Sci. & Engg., Rensselaer Polytechnic Institute, NY,
USA
2Pacific Northwest National Laboratory, WA, USA
3Artificial Intelligence, GE Research, NY, USA
4Department of Computer Science, Purdue University, IN, USA
5Department of Computer Science, Rensselaer Polytechnic Institute, NY, USA
6Currently at the Center for Nanoscale Materials, Argonne National Laboratory,
IL, USA )
###### Abstract
The recent surge in the adoption of machine learning techniques for materials
design, discovery, and characterization has resulted in an increased interest
and application of Image Driven Machine Learning (IDML) approaches. In this
work, we review the application of IDML to the field of materials
characterization. A hierarchy of six action steps is defined which
compartmentalizes a problem statement into well-defined modules. The studies
reviewed in this work are analyzed through the decisions adopted by them at
each of these steps. Such a review permits a granular assessment of the field,
for example the impact of IDML on materials characterization at the nanoscale,
the number of images in a typical dataset required to train a semantic
segmentation model on electron microscopy images, the prevalence of transfer
learning in the domain, etc. Finally, we discuss the importance of
interpretability and explainability, and provide an overview of two emerging
techniques in the field: semantic segmentation and generative adversarial
networks.
Keywords: IDML, Microstructure Characterization, Machine Learning, Machine
Learning in Material Science
## 1 Introduction
Microstructure is a broad term that encompasses the description of a
material’s structure at spatial length scales ranging from millimeters to
nanometers. A material’s property and performance is strongly influenced by
its microstructure, which is often studied by capturing and analyzing images
using various microscopy methods. Such micrographs are rich in information
about the material’s origin and processing history and its chemical make-up.
Hence, microstructure visualization, analysis, and interpretation has become
ubiquitous in materials science research. Significant advances in both
experimental and computational methods have enabled advances in how we
visualize and describe material microstructures. For example, three
dimensional microstructures can be visualized with techniques such as serial
sectioning and atom probe tomography [1, 2], and in situ methods have been
developed for direct visualization of atomic-scale phenomena, including oxide
growth mechanisms and element redistribution [3, 4]. Computational methods
have been at the forefront of many materials research studies due to the need
for accelerated materials discovery, design, and development emphasized in the
Materials Genome Initiative [5]. Methods including ab initio calculations,
density functional theory (DFT), n-point statistics, and machine learning have
recently gained more widespread use, enabled by modern (often, open source)
algorithms and high-performance computing. More recently, the application of
artificial intelligence (AI) has grown significantly in engineering and
science domains, particularly in the materials science area. This boom in AI
has been referred to as the fourth paradigm of science [6] and the fourth
industrial revolution [7], and has enormous potential in altering how
materials scientists discover new material systems, predict properties, and
even interpret and analyze micrographs.
The topic area of machine learning in materials science, or material
informatics, is broad. Reviews of the overall field can be found in [8, 9].
Within this broad field, a specific area of importance is the characterization
and interpretation of image data using machine learning methods, referred to
here as image driven machine learning (IDML). Image data is ubiquitous in
materials characterization efforts, thus, there is a strong need to develop
models and approaches that can accurately, effectively, and reliably link
image data to other parameters of interest (i.e. processing parameters,
properties, etc.). In the specific area of microstructure image analysis,
machine learning is proving highly valuable for characterizing diverse
microstructure data sets paramount to the materials design and discovery
process [10, 11]. Yet, there is still much to be learned about how to apply
machine learning methods to the important tasks of microstructure recognition,
characterization, development of processing-microstructure-property
relationships, and microstructure design. The motivation of the current review
is to provide a perspective on the different aspects involved in the
application of IDML techniques for material characterization, by looking at
the existing literature. The text is likely to provide a high level overview
of the field to material scientists who are interested in the field of applied
machine learning, as well as enable the experienced practitioners to obtain a
summary of the field or an insight into an allied field.
In this article, we begin by outlining the current state-of-the-art of IDML in
materials science. The scope of this review includes image datasets obtained
from cameras (imaging the sample surface at the resolution of the naked eye), optical
microscopy, electron microscopy, spectroscopy, diffraction patterns, and from
simulations of material structure at different length scales. Following this,
we define a canonical hierarchy of procedural stages which is used to
modularize the IDML studies reviewed in this work. We briefly discuss the
importance of interpretability and explainability, towards a widespread
adoption of IDML in materials science. Finally, we briefly discuss two
emerging techniques that have gained importance in the last couple of years:
generative adversarial network and semantic segmentation.
## 2 Overview of the field
Although it is common for materials scientists to relate microstructure and
properties, the availability of microstructural information is a relatively
recent discovery compared to the length of time materials technology has been
known. C. S. Smith[12] has written extensively on the subject of the discovery
of microstructure in materials and cites what is considered to be the first
observations of metallic crystallinity through the use of chemical etchants in
meteorites by Thompson in 1804 and in Damascus steel by Breant in 1821. The
need to quantify the description of these structures was immediately apparent
and we refer the reader to C. S. Smith’s writings on the history of technology
and structure of metals. A significant contribution of Smith’s was in the
description of polycrystalline structures and an early application of a
computational image processing technique to a microscopy image was the use of
the intercept method [13] to calculate the average grain size of a
polycrystal. Since then, the field has experienced the use of progressively
advanced techniques to extract complex insights from microscopy images. While
the next section will review journal articles in this field on the basis of
their technical aspects, this section will focus on the materials science
problems that have been addressed so far using IDML.
In the last decade, IDML has augmented the human expert’s understanding of
material characterization at a range of length scales spanning multiple orders
of magnitudes. At the lower magnification levels, IDML techniques have been
applied to image datasets with a field of view in the range of 10-100 mm.
Examples of such studies include porous microstructure reconstruction using
generative methods [14], in-situ defect detection in melt pools for additive
manufacturing [15, 16], and segmentation of images generated from x-ray
computed tomography and serial sectioning [17]. At comparatively higher
magnification levels, IDML has been used to analyze optical and electron
microscopy image datasets with a field of view in the range of 10-500 $\mu m$.
A few examples of such studies include high level classification of images
into an appropriate microstructure class([11, 10]), semantic segmentation of
images generated from x-ray computed tomography ([18, 19]), semantic
segmentation of images generated from Scanning Electron Microscopy ([20, 21]),
quantitative metallographic analysis of microscopy images ([22, 23]). At the
high end of magnification, IDML has been used to analyze images with a field
of view in the range of 10-500 nm. Examples of such studies include high-level
classification of chirality of carbon nanotubes from images generated by High-
Resolution Transmission Electron Microscopy [24], semantic segmentation of
crystallographic defects from images generated by Scanning Transmission
Electron Microscopy ([25, 26]), nanoparticle segmentation from images generated
by liquid-phase Transmission Electron Microscopy [27], and analyzing the grain
boundary character [28]. A visualization of characteristic images spanning the
length scales discussed above has been shown in Figure 1.
A different perspective about the field can be obtained by extracting the
domain specific problem statements from the studies. The most extensive
application of IDML in the domain has been towards phase segmentation and
quantitative analysis of microstructures. The objectives of a segmentation
study include the isolation of specific phases [21, 19] and the extraction of
quantitative information, such as volume fraction, about a specific phase [23,
29]. In [132], a supervised approach for binary edge detection allowed for
segmentation of phases with different morphologies than were present in the
training dataset. Another common application of IDML in this field is the
assignment to an image of a feature label that best describes it, from a
pre-determined candidate set of features exhibited by the material. The label
can indicate a morphology present in a microstructure image ([30, 11, 10]),
the chirality exhibited by the sample [24], processing history of a given
microstructure [31], or crystal structure identification from a Transmission
Electron Microscopy image [32]. Assigning a class label can provide the
human expert with a high-level overview of a microstructure or be an integral
part of a multi-stage machine learning pipeline. IDML has also been used for
the purpose of defect detection at different length scales and quality
control. Examples of such studies include in-situ defect detection in additive
manufacturing [16, 15], fracture surface analysis from electron microscopy
images [20], and segmentation of crystallographic defects from electron
microscopy images [25, 26].
Figure 1: IDML has influenced material analysis across a range of
magnifications. (Top) On the smaller end of the spatial length scale, studies
have targeted microscopy images with a field of view spanning between 10-500
nm. Figure reproduced with permission from [25]; Copyright American Chemical
Society (2017). (Bottom) On the larger end of the spatial length scale,
studies have targeted microscopy images with a field of view of 10 $\mu m$ or
more. Figure reproduced with permission from [21]; Creative Commons CC BY.
### 2.1 Related Work
Machine learning, and more specifically the IDML framework, is considered here
to be a subset of the larger domain of Materials Informatics. It is important
to note that the terms data-driven, machine learning, and materials
informatics are typically used interchangeably. Materials informatics,
however, is a large field that encompasses machine learning and statistical
models for the quantitative description of microstructures. Several prior
works have implemented statistical characterization and reconstruction of
microstructures [33, 34, 35, 36, 37]. Another important component of
microstructure quantification is the uncertainty quantification, specifically
the processing-induced variability in features and its subsequent effect on
the material properties. Examples of studies on this topic include [38, 39].
While these studies are incredibly valuable and insightful, the application of
ML methods for microstructure quantification offer some unique advantages over
these statistical techniques, including the ability to incorporate large
amounts of data about the material microstructure and analysis of never-before
seen data.
Machine learning is now being increasingly used in the materials science
domain for a wide range of applications, beyond analysis/classification of
images. Although IDML is the focus of this article, it is important to mention
that image classification and analysis is merely a subset of the broader field
of machine learning in materials science. Hence, we briefly summarize related
work on machine learning methods (e.g., neural networks, Support Vector
Machines, k-means clustering, random forests, generative networks, and more)
applied to several diverse challenges in molecular and materials science
fields. In particular, active research areas for ML in materials science
include (but are not limited to): accelerated materials design and property
prediction [40, 41, 42, 43, 44, 45, 46], process optimization [47, 48],
discovery of structure-property relationships [49, 50], construction of
potential energy surfaces for molecular dynamics simulations [51, 52, 53],
prediction of atomic scale properties [54], text mining for knowledge
extraction[55], microstructure and materials characterization [56, 10, 57, 30,
11, 58, 25], and generation of synthetic microstructure images [31]. Such
applications span multiple length scales and a variety of material systems
(metals and alloys, oxides, polymers) [59].
The merger of machine learning with materials science is a relatively new and
growing field that contributes to the evolution from traditional
methodologies, where experimental characterization techniques were used to
understand processing-structure-property relationships, to one that is data-
driven. This paradigm shift can, in many cases, help accelerate materials
science research through a more autonomous, objective, and reliable design and
characterization process, which is impacted less by researcher bias and chance
discovery [60].
The ability to analyze smaller data sets is important in materials science
where a limited number of micrographs are usually available for any single
study. Recent advances have led to developments that allow human-level
performance in one-shot, or few-shot learning problems [60, 61]. Typically,
data analysis via deep learning methods require large amounts of data (such as
the large image data available through the ImageNet database [62, 63]).
Although in many materials science studies, researchers are limited to a few
data points, an advantage of many micrographs is that several microstructural
features of interest are available for analysis in a single image (depending
on magnification). This one-shot or few-shot learning concept has significant
implications for future materials science studies, for example
characterization of neutron irradiation or corrosion effects, where limited
material is available for analysis due to long lead-time experimentation. In
other cases, there exists data from previous studies that may be very limited,
or not well understood, for which advanced data analysis methods could be
applied, as was done in prior work [46, 64].
This relatively recent surge in research efforts at the intersection between
the materials science and machine learning domains has been enabled by the ever-
decreasing cost of computing resources. In addition, several frameworks such
as Keras[65], PyTorch [66], and TensorFlow [67] have helped materials
scientists (amongst other domain scientists such as nuclear engineers,
chemists, physicists, etc.) and data scientists readily apply sophisticated
machine learning algorithms with relatively low/minimal developmental efforts.
The low cost associated with application of machine learning algorithms to
various domain-specific research challenges, and the aforementioned benefits
enable studies such as those discussed in this article.
## 3 Component-based breakdown of IDML research efforts
This section provides a modularized approach to discuss the published
literature in the field of IDML for materials characterization. A series of
six canonical action steps is defined in Figure 2. The review is divided into
six components, one for each of the steps outlined in the figure. The
granularity of the modularization is chosen so as to keep the individual
modules comparable to application of ML in allied fields.
Each action step in the series is approached sequentially, from the bottom-up,
in the logical order required to successfully implement a ML technique to the
domain. The first step in the series is the definition of a domain-specific
goal, i.e., the problem specific to materials science that is being answered
by the adoption of IDML. An appropriate problem definition is important to
assess the degree of success of the implementation. A review of the field from
the perspective of this step is likely to help a beginner to the field identify
the different components that together make up a rigorous problem
definition. For an experienced practitioner, this component of the review
provides a summary of the domain-specific problems that have been approached
through IDML methods.
The second step in the series is dataset acquisition. Data collection,
augmentation, and pre-processing are important steps for a project
implementing both computer vision and machine learning techniques. In
addition, the allocation of data into training, validation, and testing have
also been discussed where appropriate.
Following this, an overview is provided of the classification models and image
processing algorithms used among the published literature in the field.
Specific focus is provided to the rationale behind model selection that is
appropriate for a particular domain-specific goal and dataset. Specific model-
based tuning to address domain-specific requirements is highlighted in this
subsection.
The fourth component of the review is the training and optimization methods
employed by the published studies in the domain. Here as well, the specific
modifications to the training process to address domain-specific constraints
are highlighted.
The fifth component of the review is the evaluation metrics adopted in the
field to assess the performance of the IDML techniques. Evaluation metrics are
an assessment of the extent to which the technique addresses a given materials
characterization analysis, and hence a greater focus is given to this section.
This section is aimed at giving the reader an overview of the benchmark
results currently in the field, as well as providing an introduction to a set
of metrics that are commonly used in the field.
The final component of the review pertains to a discussion of ways by which
research efforts have integrated IDML into their characterization workflows.
Figure 2: The canonical series of procedural stages used to review the studies
in the field of IDML for material characterization. Each stage can be
considered as being independent of the others, but crucial to the final
performance of the model or algorithm that is being implemented.
### 3.1 Domain-specific goals
Table 1 lists the functional goals defined in the 30 papers reviewed in
this study.
Table 1: Domain-specific goals adopted by a subset of studies that have been reviewed in this work.
Paper | Functional goals
---|---
DeCost2015 [10] | Classify microstructure images into one of seven image classes, and develop a bag of visual features
Chowdhury2016 [11] | Binary classification of microstructural features (dendrites) and feature orientations
Zhang2021 [14] | Generate isotropic and anisotropic synthetic 3D porous structures using 2D slices as input
Gobert2018 [15] | In situ defect detection using supervised machine learning, for powder bed fusion (PBF) additive manufacturing
Scime2019 [16] | In-situ defect detection in additive manufacturing melt pools
Stan2020 [17] | Semantic segmentation of computed tomography and serial sectioning images
Strohmann2019 [18] | Semantic segmentation of 3D microstructure of Al-Si
Evsevleev2020 [19] | Deep-learning based semantic segmentation of individual phases from synchrotron x-ray computed tomography images
Tsopanidis2019 [20] | Semantic segmentation of fracture images of MgAl2O4, using a deep-cnn
Azimi2018 [21] | Pixel-wise segmentation of steel microstructure datasets
Campbell2018 [22] | Automated extraction of quantitative data from material microstructures using advanced image processing technique
Agbozo2019 [23] | Quantitative metallographic analysis through object segmentation of SEM images
Forster2020 [24] | Classification of HRTEM images of Carbon Nanotubes into its appropriate chirality
Ziatdinov2017 [25] | Semantic segmentation of defects and subsequent defect structure identification from atomic scale STEM images
Roberts2019 [26] | Semantic segmentation of crystallographic defects from electron micrographs
Yao2020 [27] | Nanoparticle segmentation from liquid-phase TEM images by U-Net
Chan2020 [29] | Automated quantitative analysis of microstructure using unsupervised algorithms
Baskaran2020 [30] | Contextual segmentation of morphological features in titanium alloys using a two-stage machine learning pipeline
Aguiar2019 [32] | Classification of TEM data, atomic resolution images and diffraction data into crystal structures at family level and genera level
Zhu2017 [69] | Particle recognition from cryo-EM datasets
Chen2020 [70] | Instance semantic segmentation of Al alloy metallographic images
DeCost2018 [71] | Semantic segmentation of ultrahigh carbon steel microstructures through deep learning techniques
Furat2019 [72] | Semantic segmentation of computed tomography data of Al-Cu specimens
Vuola2019 [73] | Nuclei segmentation from microscopy images for biomedical imaging using an ensemble model
Hwang2020 [74] | Semantic segmentation of multi-phase composite microstructures
Ma2018 [75] | Semantic segmentation of Al-La microstructure images
Decost2017 [77] | Classify AM feedstock powder images into their respective material system
Yang2019 [85] | Prediction of the microscale elastic strain field in a 3D voxel-based microstructure
Cang2016 [86] | Microstructure reconstruction from a low-dimensional representation
Arganda-Carreras2017 [87] | Develop a library for integration of machine learning methods available in WEKA into image processing toolkit Fiji for biological and non-biological specimens.
Ciresan2012 [88] | Binary membrane segmentation of neuronal structures in stacks of electron microscopy images
Wang2013 [89] | Binary membrane segmentation of neuronal structures in stacks of electron microscopy images
Haan 2019 [90] | Super-resolving low-resolution SEM images through a generative adversarial network
Kaufmann2020 [91] | Crystal structure identification from EBSD diffraction patterns using deep learning
Madsen2018 [92] | Semantic segmentation of HRTEM images for local structure recognition
One of the main goals of material scientists in relation to the use of
computer vision techniques has been towards microstructure classification. The
research efforts in this area can be divided into two broad categories, coarse
classification and dense classification, depending on the number of pixels per
classification label. The first category includes classification models that
assign one label per image in the dataset. The label that is chosen is the one
that best represents the microstructure in the image, such as “dendritic”,
“lamellar”, etc. Examples of previous studies that perform such a
classification include [10], [11], [21], [30], and [68]. Dense classification
models typically assign one label per pixel in a given image. In this
case, the label describes the phase of the microstructure at the spatial
location represented by the pixel. Examples of research articles that perform
this classification include [17, 18, 19, 20, 22, 23, 29, 69, 70, 71, 72, 73,
74, 75, 76]. A fast emerging technique in this category is semantic
segmentation, which can be summarized as pixel classification performed in the
context of the whole image. It is noted that there also exist models that
perform classification at a scale that is in between the two extremes stated
above, such as the prediction of bounding boxes using a Region Proposal
Network (RPN) in [70]. Figure 3 shows characteristic examples of application
of the different types of classification discussed above to characterization
analysis.
These styles of classification have been used by the domain experts towards
two broad objectives, object/class detection and extraction of quantitative
information from characterization results. The object (for example,
precipitate) or class (for example, dendritic) detection is typically
performed by either coarse-grained classification or semantic segmentation.
Another use for object detection has been the detection of defects and
abnormalities. The goal for such a study would be binary classification into
“defect” and “no defect”. The classification result is typically integrated
with the feedback for process-correction or quality-control. Examples of
research articles with this goal include [15, 16, 26, 77, 78, 79, 80]. Among
the research articles cited above as examples of pixel-level classification,
one of the goals is to aggregate the classification into developing a
quantitative finger-print of the microstructure, such as area fraction of a
particular phase or the number of precipitates in the field of view
represented by the image. Both object detection and quantitative analysis of
the microstructure potentially aid the study of a process-structure
relationship for the material.
Among the studies that have performed quantitative microstructure analysis, a
subset of them have developed a lower-dimensional representation for
microstructures. Techniques like Principal Component Analysis are used to
identify the most discriminative features for the dataset. Such a
representation could be an end-goal in itself, like in [86, 93, 81, 82, 68,
83, 84], or could be developed in order to improve the classification
performance, like in [11, 31].
Figure 3: Distinction in the types of classification that can be performed,
based on the number of pixels per classification label. (Top) Coarse
classification, in which a label is assigned to each image in the dataset.
Figure reproduced with permission from [30]; Copyright Elsevier (2020). The
example image is extracted from [30]. (Bottom) Dense classification (or pixel-
based classification), in which a label is assigned to each pixel in the
image. Figure reproduced with permission from [20]; Copyright Elsevier (2020).
### 3.2 Datasets
The dataset characteristics, such as its size and distribution, have a strong
influence on the overall performance of the model and/or algorithm. Most
applications of IDML in the materials science domain (not including the
studies involving microstructure reconstruction) were performed on real data,
i.e. data generated from real material samples. A few exceptions to this can
be found in studies such as [85, 31, 24, 32, 25], where simulated data either
comprised the entire training dataset or supplemented the data obtained from a
real sample. The real data were generated from a wide variety of sources,
which have been briefly summarized in Section 2. One can observe a significant
variation in dataset sizes among the published literature in this field. There
is no rigid rule to determine the appropriate size of the training dataset. It
depends on factors such as the problem statement, variation in the dataset,
choice of model, required level of performance, etc. Naturally, coarse
classification and dense classification studies require different numbers of
images for training. Since the former has a greater number of pixels per
classification label than the latter, it requires a comparatively larger
dataset. Even among studies that have employed a coarse classification model,
a significant variation in dataset size can be observed. For example, while
[10] uses 105 images to train a SVM for classification into 7 classes, [11]
employs 528 images for a binary classification task using different models,
and [30] uses 1225 images to train a CNN for 3-class classification. To the
best of our knowledge, the largest dataset used in the domain of materials
characterization has $1.3\times 10^{6}$ images and is reported in [24]. Similarly, a
variation in dataset sizes can also be observed among studies that have used
dense classification models. For example, while the reported dataset sizes in
[23, 70, 71] are 50, 100, and 24 images, respectively, the reported dataset
sizes in [21, 25] are 5093 and 2000 images, respectively.
In addition to the size of the overall dataset, it is also important to
appropriately split the dataset for the purposes of training, validation, and
testing. A good evaluation of the performance of a trained model is its
performance on data that was not used for training. A general trend observed
in the studies reviewed in this work is that the data split depends on the
size of the overall dataset. When the dataset size is comparatively large,
such as in [21, 32], the authors have allocated data for a held-out dataset to
evaluate the performance of their trained models. However when the dataset is
comparatively small, like in [71, 11], the performance of the trained model is
evaluated using a cross-validation technique. The evaluation metrics are
discussed in detail in the next subsection. A common challenge faced by the
data acquisition methods in the field is that it is time-consuming and
requires expertise to assemble a large, high-quality, image dataset that can
effectively train a machine learning model. Data augmentation has been widely
used to overcome this challenge. Among studies reviewed in this work, some of
the common augmentation techniques include cropping [30, 20, 21], rotation
[18, 23, 69, 19], and translation [18]. As mentioned previously, simulated
data have also been used to supplement the data from real imaging sources.
Figure 4 lists the dataset characteristics observed in the studies reviewed in
the current work.
Figure 4: A representation of the dataset sizes used by a subset of studies
reviewed in this work. Since coarse classification and pixel-level
classification have dataset requirements spanning different orders of
magnitude, they are represented in separate charts
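As a complement to the augmentation techniques cited above, the following
minimal sketch applies random cropping, rotation, and translation to a single
grayscale micrograph using NumPy and SciPy. The image and parameter values are
placeholders; published studies typically rely on library pipelines (for
example, Keras' ImageDataGenerator) rather than hand-rolled code.
```python
# Minimal data-augmentation sketch: crop, rotate, and translate a micrograph.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((512, 512))        # stand-in for a real micrograph

def random_crop(img, size=256):
    """Extract a random square patch of side `size`."""
    top = rng.integers(0, img.shape[0] - size)
    left = rng.integers(0, img.shape[1] - size)
    return img[top:top + size, left:left + size]

def random_rotation(img, max_deg=20):
    """Rotate by a random angle, filling exposed corners by reflection."""
    angle = rng.uniform(-max_deg, max_deg)
    return ndimage.rotate(img, angle, reshape=False, mode="reflect")

def random_translation(img, max_shift=20):
    """Shift the image by a random (dy, dx) offset."""
    shift = rng.integers(-max_shift, max_shift, size=2)
    return ndimage.shift(img, shift, mode="reflect")

# Eight augmented patches derived from one original image.
augmented = [random_translation(random_rotation(random_crop(image)))
             for _ in range(8)]
```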
### 3.3 Model selection and training
The selection of the appropriate model for a given application depends on
factors such as the problem statement, distribution and size of the
dataset, the computational resources available, etc. The field of IDML for
material characterization has adopted a wide variety of models, ranging from
shallow to deep learning. Reflecting the current state of the field, the
models can be categorized into two distinct categories: non-neural network and
neural network. The former belongs to a shallow learning paradigm, whereas the
latter belongs to a deep learning paradigm. Shallow learning refers to a
learning paradigm in which the model parameters (for example, the weights of
support vector machine model) are learnt directly from the input features. In
other words, the input features are connected directly to the output via the
model. In contrast, deep learning refers to a learning paradigm in which the
input features are sequentially fed to one or more intermediate layers (also
known as hidden layers) before an output is obtained. In this paradigm, only
the first intermediate layer learns directly from the input features. The
hidden layers learn an alternate, often lower-dimensional, representation of
the dataset. Owing to model simplicity and the fewer number of trainable
parameters, shallow learning models are easy to train and can provide
reasonable performance with a small training dataset. However, the training
process is strictly empirical and it is less likely to learn hidden patterns
in the dataset. On the contrary, a deep learning method is computationally
expensive to train and requires a relatively larger dataset for training
compared to shallow learning models. However, the trained hidden layers can
provide an insight into the underlying patterns in the dataset, and can be
used to explain the predictions of the model.
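As a concrete example of the shallow-learning paradigm described above, the
sketch below trains a support vector machine on hand-crafted features (here,
simple intensity histograms) with scikit-learn. The images, labels, and feature
choice are illustrative assumptions rather than a reproduction of any reviewed
study.
```python
# Minimal shallow-learning sketch: hand-crafted features fed to an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
images = rng.random((200, 64, 64))        # 200 placeholder micrographs
labels = rng.integers(0, 2, size=200)     # binary class labels

def histogram_features(img, bins=32):
    """Describe an image by its normalized intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return hist

X = np.array([histogram_features(img) for img in images])
clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, labels, cv=5)   # 5-fold cross-validation
print("mean accuracy:", scores.mean())
```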
Coarse-grained classification tasks that generate one class label per image
have been demonstrated with shallow learning algorithms such as Support Vector
Machine [11, 16, 77, 15] and Random Forest Classifier [11, 31]. Reflecting the
growing popularity of the convolutional neural network (CNN) in related
fields, there has been a push in the application of CNNs for analyzing
microstructure images. This trend can be attributed to two main factors:
improved performance for multi-label classification and its application for
semantic segmentation. It is important to note that CNNs and deep CNNs (DCNNs)
are often selected, as opposed to multi-layered perceptron networks (MLPs)
because the image data input is conducive to CNNs, which require a spatially-
dependent data format [94]. CNNs and DCNNs take input image data and convolve
intermediate image data with learned kernels in several successive layers,
allowing for the network to learn highly nonlinear features.
The design of a typical CNN is such that it learns about inherent patterns,
through trainable filters, at different spatial resolutions compared to the
input image. Libraries, such as Keras [65], enable a visualization of these
patterns [30] and thus can provide insight into the rationale behind the
model’s predictions. A few examples that perform coarse classification using
CNNs include studies detailed in [32, 30, 24]. The rising popularity of deep
learning techniques for semantic segmentation applications in allied
fields[97] is reflected in an increased adoption of neural networks for pixel-
based classification of material images. It should be noted a typical neural
network architecture for coarse classification is different from a typical
neural network architecture for pixel-level classification. For the former,
the neural network (CNN) progressively downsamples the input image and
provides as output a vector of class probabilities. The part of the model that
is responsible for downsampling an input image is typically known as an
encoder. For a pixel-level classification, the encoder is connected to a
decoder which upsamples the lower resolution image to provide an output of the
original resolution. Examples of studies that have used an encoder-decoder
structure include [23, 21, 70, 71, 72, 73]. The emerging field of semantic
segmentation is discussed in a later section. Notable advancements in neural
network architecture that have been adopted by the material science community
include the use of Mask R-CNN for instance segmentation [70], use of an
architecture with atrous convolution layers [74, 75], use of skip connections
[26], etc. A few examples of model architectures used in the field have been
highlighted in Figure 5. The availability of trainable filters has also
helped in incorporating domain-specific knowledge into the training process
[95, 96].
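The encoder-decoder layout described above can be summarized in a few lines of
Keras. The sketch below is a deliberately small model (layer counts and widths
are arbitrary assumptions) that downsamples the input and upsamples it back so
that every pixel receives a class probability; published architectures such as
U-Net are considerably deeper.
```python
# Minimal encoder-decoder sketch for pixel-level classification in Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_encoder_decoder(input_shape=(256, 256, 1), n_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two downsampling stages.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)

    # Decoder: two upsampling stages back to the original resolution.
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                               activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                               activation="relu")(x)

    # One softmax probability vector per pixel.
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = tiny_encoder_decoder()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```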
Depending on the availability of data, two different training modes can be
implemented. In the first mode, the model is trained from a random initial
state, i.e., the parameters of the model such as the filter weights are
initialized to a specified random distribution. During the training period,
these parameters converge to values that are optimal for the dataset. The
number of training samples for this mode scales with the number of parameters,
and hence it may not be feasible to implement complex deep learning models
from scratch to data-deficient applications. Previously implemented works
which have performed training from scratch include [30, 25, 69, 85]. The
second training mode is referred to as transfer learning, and takes advantage
of models that have been pre-trained on large datasets. In this mode, the
model parameters are initialized to the values that were obtained at the end
of the pre-training. Subsequently, a part or whole of the model is fine-tuned
with the dataset in hand. It is not common to directly use the pre-trained
model parameters without fine-tuning, unless the current dataset and the
dataset for pre-training are similar in nature. A major advantage of the
transfer learning is that complex deep learning models can be implemented
using comparatively fewer training data. Previously implemented works which
have performed transfer learning include [23, 71, 73, 20]. An emerging sub-
field within transfer learning is few-shot learning [129], in which a pre-
trained model is fine-tuned using a very small training dataset (often 5
images). Few shot learning in particular, and transfer learning in general, is
especially beneficial in cases where images dissimilar to the training dataset
are encountered. In addition, transfer learning also reduces the demand for
computational resources imposed by large models, and hence is likely to help
in implementing machine learning at the edge (for example, in individual
computers connected to microscopes). The reader can refer to open source
libraries such as [128] which have made it easy to load pre-trained states of
commonly used models.
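A minimal transfer-learning sketch in Keras follows: a network pre-trained on
ImageNet is frozen and reused as a feature extractor, and only a small
classification head is trained on the target micrograph dataset. The backbone
choice, class count, and input size are placeholder assumptions rather than a
prescription from the reviewed studies.
```python
# Minimal transfer-learning sketch with a frozen ImageNet-pretrained backbone.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False                 # freeze the pre-trained encoder

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(7, activation="softmax")(x)   # e.g. 7 microstructure classes

model = Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```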
In some cases, the bottleneck may be in the generation of ground truth data
for supervised training. One way to overcome this is to explore the
possibility of unsupervised training, i.e., training without the presence of
ground truth labels. Unsupervised training enables models to learn to separate
the data into distinct classes without explicitly learning what these classes
represent. This method is particularly useful for cases where it is difficult
to ascertain the number of classes beforehand. Examples of models based on
unsupervised learning can be found in [29, 130, 131].
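As a simple illustration of the unsupervised route, the sketch below clusters
the pixels of a micrograph with k-means so that the cluster index acts as a
pseudo phase label. The number of clusters and the per-pixel features are
assumptions chosen only for demonstration.
```python
# Minimal unsupervised-segmentation sketch: k-means on per-pixel features.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
micrograph = rng.random((256, 256))            # stand-in for a real image

# Describe each pixel by its intensity and a local-mean feature.
local_mean = ndimage.uniform_filter(micrograph, size=9)
features = np.stack([micrograph.ravel(), local_mean.ravel()], axis=1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
segmentation = kmeans.labels_.reshape(micrograph.shape)   # pseudo phase map
```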
Table 2 lists the classification models and image processing algorithms
employed by the 20 papers reviewed in this study.
Figure 5: A selection of models employed by studies reviewed in this work. From top left, the figures were reproduced with permission from [26]; Creative Commons CC BY, [21]; Creative Commons CC BY, [85]; Copyright Elsevier (2019), and [70]; Creative Commons CC BY.
Table 2: The class of model and the corresponding training process implemented in a subset of studies reviewed in this work.
Training type | SVM | Neural network | Others
---|---|---|---
Training from scratch | [11, 10, 77, 15, 16] | [32, 25, 69, 14, 27, 85, 86, 30, 70, 88, 19, 24, 72, 18, 17, 26, 92, 75, 91, 90] | [29, 11, 89]
Transfer learning | NIL | [23, 21, 11, 71, 73, 20, 74] | NIL
### 3.4 Model evaluation
Choosing the appropriate evaluation metric is important in order to obtain a
fair assessment of the trained model. A variety of evaluation metrics can be
seen among the published literature in the field. The choice of a metric is
dependent on factors such as the domain-specific problem, size of the dataset,
distribution of data among the different candidate classes, etc. For example,
pixel-based classification studies incorporate a distinct set of metrics to
evaluate the segmentation performance of different classes. It is unfair to
compare the performance of two different IDML models with similar domain-
specific problem statements, if the models are implemented on different
datasets or if they are evaluated using different metrics. However, it is
useful to understand the different metrics employed by the studies and the
rationale behind the choice. Some of the common evaluation metrics are listed
below and are defined based on the following statistics:
* •
True positives, TPi, are data of class label i that were correctly classified
as i;
* •
True negatives, TNi, are data not belonging to class label i that were not
classified as i;
* •
False positives, FPi, are data not belonging to class label i that were
incorrectly classified as i; and
* •
False negatives, FNi, are data of class label i that were not classified as i.
* •
For binary classification, average accuracy is defined as
$\frac{TP+TN}{TP+TN+FP+FN}.$ (1)
This is the most common evaluation metric used, and provides an overall
effectiveness of the classifier. This metric has been used for coarse
classification ([30, 11, 32, 77, 24, 15]) as well as for pixel-level
classification ([18, 17, 26, 70]). The average accuracy is an appropriate
metric when data is uniformly distributed across class labels.
* •
For binary classification, precision (P) and recall (R) are defined as
follows:
$P=\frac{TP}{(TP+FP)}$ (2)
and
$R=\frac{TP}{(TP+FN)}$ (3)
For multi-class classification, two different ways of averaging are followed:
micro-averaging and macro-averaging. Micro-averaged precision and recall are
defined as the application of the respective metric on the cumulative sum of
TP, TN, FP, and FN over all the individual classes:
$P_{\mu}=\frac{\sum_{i}TP_{i}}{\sum_{i}(TP_{i}+FP_{i})}$ (4)
Macro-averaged precision and recall are defined as the average of the
respective metric applied to each of the individual classes:
$R_{M}=\frac{\sum_{i}R_{i}}{l}$ (5)
where l is the number of classes. Precision and recall are more appropriate
metrics than average accuracy when there is an imbalance of data points across
different classes. For example, in a pixel-level classification to segment
defects, the number of defect pixels is likely to be small compared to the
number of non-defect pixels or background pixels. Examples of studies that
have used these two metrics include [69, 70, 71, 15, 73, 26].
* •
The F-score metric, $F_{score}$, derives from the precision and recall values
and is defined as follows:
$\frac{(\beta^{2}+1)PR}{(\beta^{2}P+R)}$ (6)
A common value that is used for $\beta$ is 1, and the corresponding F1 score
is the harmonic mean of the precision and recall. Examples of studies that
have used the F1 metric include [69, 70, 20, 17]. The F-score is useful when
designing a model with a specific trade-off between precision and recall (a
short numerical sketch of these metrics is given after this list).
* •
Intersection over Union (IoU): This metric is used in applications of object
or phase segmentation. When segmentation is performed through pixel-level
classification, IoU can be defined as follows:
$IoU=\frac{TP}{(TP+FP+FN)}.$ (7)
Examples of studies that have used this metric include [21, 27, 70, 71, 20,
17, 26].
* •
Other metrics: Some of the studies reviewed in this work have employed
evaluation metrics that are unique to the specific problem at hand. While the
use of such metrics make it difficult to compare the model’s performance with
similar work, they are a useful measure of the effectiveness of IDML in
solving the material science problem. Examples of such metrics include
difference in predicted and actual interphase connectivity [18], relative
errors in the predicted center of mass and volume of grains as compared to the
ground truth values [72], comparison of synthesized and original 3-dimensional
structures based on spatial correlation metrics such as two-point correlation
function and cluster function [14], etc.
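The sketch below evaluates the confusion-matrix-based metrics defined in the
list above (Eqs. 1-3, 6 with $\beta=1$, and 7) for a toy binary pixel
classification; the label arrays are invented for illustration.
```python
# Minimal sketch: accuracy, precision, recall, F1, and IoU from TP/TN/FP/FN.
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / (tp + tn + fp + fn)                  # Eq. (1)
precision = tp / (tp + fp)                                  # Eq. (2)
recall = tp / (tp + fn)                                     # Eq. (3)
f1 = 2 * precision * recall / (precision + recall)          # Eq. (6), beta = 1
iou = tp / (tp + fp + fn)                                   # Eq. (7)

print(accuracy, precision, recall, f1, iou)
```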
Table 3 lists the evaluation criteria used by each of the 20 papers reviewed
in this study.
Table 3: Evaluation methods adopted by a subset of studies reviewed in this work.
Evaluation metric | References
---|---
Average accuracy | [32, 25, 30, 70, 11, 10, 77, 24, 15, 18, 17, 16, 26, 75, 91]
Precision and Recall | [69, 70, 71, 15, 73, 26, 92]
F1 | [69, 70, 20, 17]
IoU | [21, 27, 70, 71, 20, 17, 26, 74]
Material specific metrics | [23, 14, 85, 86, 29, 22, 88, 89, 72, 74, 90]
### 3.5 Integration with the existing workflow
The final step in leveraging the capability of artificial intelligence to
improve materials characterization is integrating the analysis with existing
instrumentation or simulation workflows, and thereby handling the
implementation challenges. Most of the studies reviewed in the work have
focused on implementing state of the art machine learning or computer vision
techniques to a representative dataset of images that exemplify their domain
of study. In contrast, examples of integration of IDML with existing workflows
can be seen in the works of Gobert et al. [15] and Scime et al.[16]. Both
these studies have incorporated an imaging source to their additive
manufacturing infrastructure in order to perform in-situ analysis of melt
pools. In general, the integration of such techniques with existing workflows
present a different set of challenges as compared to the process of adoption
of IDML to material datasets. For example, the domain of electron microscopy
is fast emerging as multi-modal and data-intensive [98]. The use of high-
performance computing infrastructure could enable the speed of analysis to
keep pace with the data generation, and also augment the efforts to automate
data acquisition through real-time control of instrumentation parameters. As
an example, Patton et al.[99] demonstrate the implementation of a deep
learning network on a GPU supercomputer to extract structural information from
atomic-resolution microscopy data. A software pipeline, built over a
supercomputing architecture, to perform segmentation of three-dimensional
electron microscopy data directly from the source is discussed in Vescovi et
al.[100]. Seal et al.[101] report the work in progress towards the use of
high-performance computing for segmenting large-scale images. Factors such as
an increasing access to powerful computing resources as well as a focus on
developing models that can be run at the edge, i.e., connected to existing
instrumentation capabilities, are likely to accelerate the integration.
## 4 Challenges with IDML in the Materials Science Domain
Although there are several advantages of using an IDML approach in many
materials science studies, there remain several challenges that are crucial to
the growth and more widespread application of IDML. Among them, small or
imbalanced data sets, noisy data, or standard validation techniques are
prominent. In addition, there are very limited benchmark data sets widely
available for the purpose of comparing different models.
In part, the rapid growth in the machine learning in materials science field
has contributed to these challenges, and made it difficult to
understand/identify appropriate methods for domain-specific goals, best
practices, and opportunities [102]. To address some of these areas, emerging
techniques, best practices, and potential new paths are subsequently
discussed.
## 5 Emerging techniques and potential new paths
Microstructure characterization using methods from ML and AI has been a
nascent endeavour as indicated in the previous sections. The revolution of
deep learning to solve seemingly complex computer vision tasks like image
recognition has fueled researchers in material science to transfer some of
these successes to the field. Section 3 has presented a broad overview of
this work from the perspective of application of state of the art machine
learning technologies to materials characterization. A lot of this work
involves using image datasets of microstructures and other materials
characteristics to solve controlled classification problems. Even though this
represents a major advancement in the field, several problems still remain as
discussed in Section 4. These challenges can be addressed by emerging
methodologies in AI that have the potential to alleviate some of the concerns
raised in this study.
One of the challenges in materials characterization is the apparent dearth of
labelled datasets. Deep learning problems require massive amounts of labelled
data to produce results. Unfortunately, annotating data is time consuming and
expensive. Recent advancements in AI promise methods to tackle this problem
such as active learning. Active learning is inspired by the idea that a
predictor that is trained on a smaller set of well-chosen samples can perform
as efficiently as a predictor that is trained on a much larger number of
randomly sampled data points. In particular, active learning can be
implemented as a form of expert-machine interaction. Starting from a small
non-optimally sampled batch of data, the algorithm presents the user with the
images or pixels whose inclusion in the training set improves the performance
of the predictor. The user or domain expert interacts with the machine in the
process of annotation. The procedure is iterated until a stopping criterion is
met. Approaches for active learning have been demonstrated in image
classification [103], deep learning [104], remote sensing [105], medical
imaging [106] and image retrieval [107]. A similar approach can be taken for
materials characterization where active learning can be used to reduce
annotation burden of material scientists.
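A minimal sketch of this kind of pool-based active learning by uncertainty
sampling is given below; the classifier, feature vectors, and simulated oracle
are all placeholder assumptions.
```python
# Minimal active-learning sketch: query the most uncertain unlabelled samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X_labelled = rng.random((20, 32))             # small initial labelled pool
y_labelled = rng.integers(0, 2, size=20)
X_unlabelled = rng.random((500, 32))          # descriptors of unannotated images

for annotation_round in range(5):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_labelled, y_labelled)

    # Rank unlabelled samples by predictive entropy (higher = less certain).
    proba = clf.predict_proba(X_unlabelled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    query = np.argsort(entropy)[-10:]         # ten most uncertain samples

    # In practice the domain expert labels these; here a fake oracle does.
    new_labels = rng.integers(0, 2, size=query.size)
    X_labelled = np.vstack([X_labelled, X_unlabelled[query]])
    y_labelled = np.concatenate([y_labelled, new_labels])
    X_unlabelled = np.delete(X_unlabelled, query, axis=0)
```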
An alternative set of methods for reducing the problem of small datasets is known
as semi-supervised learning. It considers the problem of supervised learning
when only a small subset of the data is labelled, which is exactly
the challenge that we are looking to overcome. Semi-supervised learning
algorithms address this problem by learning from unlabelled data as well as
labelled data to build better predictors [108]. Semi-supervised approaches
have been explored for multi-model image classification [109] and for deep
generative modelling [110]. Material characterization will improve with the
help of semi-supervised learning algorithms that learn from unlabelled data
which is much easier to curate.
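As a minimal illustration, the sketch below uses scikit-learn's LabelSpreading
to propagate a handful of expert labels through an otherwise unlabelled set of
feature vectors; the data and kernel settings are placeholders.
```python
# Minimal semi-supervised sketch: propagate a few labels with LabelSpreading.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)
X = rng.random((300, 16))              # descriptors of 300 micrograph patches
y = np.full(300, -1)                   # -1 marks an unlabelled sample
labelled_idx = rng.choice(300, size=15, replace=False)
y[labelled_idx] = rng.integers(0, 3, size=15)   # 15 expert-labelled patches

model = LabelSpreading(kernel="rbf", gamma=20)
model.fit(X, y)
predicted = model.transduction_        # labels inferred for every sample
```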
A common problem with modern ML and DL methods is that they are inherently
inscrutable. Even though they can produce results with
astonishing accuracy, they fundamentally lack the ability to provide
explanations for their predictions. As a consequence, it is very difficult to
interpret the results of a classification or segmentation model. A number of
approaches have been proposed recently to tackle this problem. A survey of
common techniques for ML model explainability is provided in [111]. Another way is
to visualize the model [112, 113] by observing the values of filters in
trained convolutional neural networks. Examples of studies which have used
these techniques to explain model predictions on material characterization
datasets include [135] and [134]. In [135], a CNN was used for prediction of
electromagnetic response of nanophotonic structures, following which
relevancy-based heatmaps were used to highlight the spatial regions in the
input that were important for the prediction. Similarly, the technique of
computing saliency maps was utilized in [134] to gain insight into the
property prediction (short-circuit current) from morphology maps of organic
photovoltaic films. Using such techniques to understand the latent
representation of data and the trained state of hidden layers could accelerate
the process of material design and discovery. Another viewpoint of
interpretability is by reducing model complexity. The core idea is that
simpler models are easier to understand than larger esoteric models. For
example, [114] tries to approximate the complex model with a locally linear
model. A related idea is using feature importance [115] for interpreting
predictions. Symbolic approaches [116, 117, 118] generate symbolic expressions
to provide a semantic understanding of the predictions. An example of an
interpretable workflow for material characterization is [136], in which a
regression model is developed to predict peak stress from a morphology image
generated from SEM. Using a generative method, structural features (such as
porosity) are systematically modified to study their effect on the stress. This
analysis enables one to identify the morphological features that are key for a
targeted output. As discussed before, these challenges involving
interpretability need to be addressed before materials characterization using
IDML is to be widely accepted.
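As one concrete example of the visualization techniques mentioned above, the
sketch below computes a simple gradient-based saliency map for a trained Keras
image classifier; the model and the micrograph are assumed inputs, and more
elaborate attribution methods exist.
```python
# Minimal saliency-map sketch: gradient of the top class score w.r.t. pixels.
import numpy as np
import tensorflow as tf

def saliency_map(model, image):
    """Return |d(top-class score)/d(pixel)| with the same height/width as `image`."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        score = tf.reduce_max(preds[0])    # probability of the top class
    grads = tape.gradient(score, x)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()

# Hypothetical use with a trained classifier and a single micrograph:
# heatmap = saliency_map(trained_cnn, micrograph)
```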
### 5.1 Generative methods
Before GANs, there have been many generative models, such as the Deep
Boltzmann Machine (DBM) and Variational Auto-encoder (VAE). However, they had
less of an impact due to the difficulty of approximating intractable
probabilistic computations in maximum likelihood estimation, known as the
“maximum likelihood training paradigm”, and leveraging the benefits of
piecewise linear units in the generative context [119]. Unlike previous
generative models, the Generative Adversarial Network (GAN) breathed new life into
deep generative models with its simple structure and effectiveness in
image generation. With the help of convolutional neural networks, GANs can
generate realistic images, as large as 1024 by 1024 pixels.
A GAN framework consists of a generator, $G$, that generates samples from a
noise variable, $z$, and a discriminator, $D$, that aims to distinguish between
samples from the real data distribution and those from the synthetic data
distribution (from the generator). The generator can be thought of as
counterfeiters trying to produce fake but realistic images, while the
discriminator aims to distinguish the fake images from the real ones.
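The adversarial setup described above can be sketched compactly in Keras: a
small generator and discriminator are trained against each other with the
standard binary cross-entropy objectives. The architecture sizes below are
placeholders and bear no relation to the networks used in the cited studies.
```python
# Minimal GAN sketch: generator, discriminator, and one adversarial train step.
import tensorflow as tf
from tensorflow.keras import Sequential, layers

LATENT_DIM = 64

generator = Sequential([
    tf.keras.Input(shape=(LATENT_DIM,)),
    layers.Dense(8 * 8 * 32, activation="relu"),
    layers.Reshape((8, 8, 32)),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])  # maps z to a 32 x 32 synthetic image

discriminator = Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),   # probability that the image is real
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    z = tf.random.normal((tf.shape(real_images)[0], LATENT_DIM))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(z, training=True)
        real_score = discriminator(real_images, training=True)
        fake_score = discriminator(fake, training=True)
        # Discriminator: call real images real and fake images fake.
        d_loss = (bce(tf.ones_like(real_score), real_score)
                  + bce(tf.zeros_like(fake_score), fake_score))
        # Generator: fool the discriminator into calling fakes real.
        g_loss = bce(tf.ones_like(fake_score), fake_score)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```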
GANs have been proven successful for many image synthesis and unsupervised
learning tasks. It is a popular framework for representation learning, such as
disentangling pose from lighting in 3D rendered images [120], and image
completion, where large missing regions are synthesized utilizing the
surrounding image features [121]. Variants of GANs have surpassed many other
generative models in the quality of samples as well as their underlying
representation. GANs have also been applied to the task of image-to-image
translation, which learns a mapping between an input image and an output
image. Several examples of this image-to-image translation exist in the
literature. For example, [122] used CycleGAN to transfer images of one style to
another, such as between landscape images in summer and in winter, and between
photographs and paintings of Monet; [123] used GauGAN to create photorealistic
images from segmentation maps, which are labeled sketches that depict the
layout of a scene; and [124] used Pix2Pix for many applications of image-to-image
translation, such as mapping from aerial photos to maps and from edges to
photos.
Recently, GANs have emerged as a promising methodology for application in
computational materials design, for the purpose of developing structure-
property and structure-performance relations via physical simulations [125,
126, 31]. With the help of a CNN backbone, GANs are able to capture complex
microstructural characteristics. In [133], a GAN is used to generate high-
resolution three-dimensional images of the pore space at different scales. In
[125], the authors proposed a deep adversarial learning methodology to
overcome the limitations of existing microstructure characterization and
reconstruction (MCR) techniques. In this work, the GAN is used to identify the
mapping between latent codes and microstructures. In [126], GANs are used
to learn a mapping between latent variables and microstructures, which is
later used as the design variables to obtain microstructures of desired
material property with the help of a Bayesian optimization framework. In [31],
the authors generated high-resolution microstructure images with the advanced
progressive-growing GAN (pgGAN), and interpret the task of microstructure
generation as microstructure distribution characterization and image-to-image
translation. Figure 6 highlights the two types of applications that have
benefited from using generative models.
Figure 6: Two types of applications in the field of materials characterization
that have benefited from using generative models. (Top) Example synthetic
images generated by the trained Progressive Growing GAN. The generated images
show varying microstructural features, specifically different extents of lamellar transformation products and distributions of carbides. Figure reproduced with permission from [31]; Creative Commons CC BY. (Bottom) The use of a generative model to enhance the resolution of an image acquired from a Scanning Transmission Electron Microscope. The benefit of this application lies in a reduced imaging time as well as a reduced likelihood of electron beam damage to the samples. Figure reproduced with permission from [90]; Creative Commons Attribution 4.0 International.
### 5.2 Merging data from multiple characterization techniques
Materials characterization studies typically involve the application of
multiple experimental techniques, sometimes in combination with modelling or
computation, in order to develop microstructure-processing-property
relationships. The multi-length scale nature of such studies necessitates
varying data types to characterize any microstructure, which is schematically
shown in Figure 7. The multiple data types shown here span from atom probe tomography data (sub-nanometer scale spatial resolution, typically analyzing volumes of approximately 50 nm by 50 nm by 100 nm) to TEM images, to SEM and optical micrographs. These are only a few examples of the data types used in materials characterization. Consider a more specific example: analyzing an alloy after casting. This alloy may be subjected to a homogenization anneal and subsequently to several thermo-mechanical processing steps to achieve the desired dimensions and properties. To examine the
microstructures produced after processing, a researcher may begin with cross-
sectioning and polishing the material and examining the cross-section using an
optical microscope. Some microstructural features may be visible, depending on
the microstructure (i.e. grain boundaries, inclusions, etc.), but others may
not; hence, the researcher may turn to the SEM to examine the same microstructure at higher resolution. Secondary electron (SE) and backscattered electron (BSE) images may be collected, each providing different detail about the microstructure (i.e., topography for SE images versus atomic number contrast for BSE images). Energy dispersive spectroscopy
(EDS) may be performed to obtain element distributions or composition
information from varying regions of the microstructure. EDS data can be
collected in points, or as maps. Now, multiple image and data types have been
collected, all providing unique detail about the microstructure in question.
Next, electron backscatter diffraction (EBSD) data may be collected to provide grain orientation information, adding yet another data type.
For even higher resolution characterization, transmission electron microscopy
(TEM) may be employed to identify structure, and perform EDS mapping and point
analysis in small volumes, with a typical sample size of 10µm by 10µm with
less than 100 nm thickness (to be electron transparent). Higher resolution
chemical and isotopic mapping can also be performed using atom probe
tomography (APT), yielding a three dimensional point cloud data set with the
x,y,z position and elemental and isotopic identity of each atom detected. The
workflow detailed here is just one example, with limited techniques. Several
other techniques exist and are selected based on the material of interest and
characterization challenge faced. The methods discussed here are limited to
characterization of a material via microscopy methods that yield data
primarily in the form of images, without regard to thermal or mechanical
properties. Yet, the key concept demonstrated through this example is sound,
and remains a critical component to the future of machine learning in
materials characterization and design: a single material condition can be
described by multi-lengthscale, multi-modal data sets. While many IDML
research works have been performed using images from a single imaging
modality, the concept of fusing multiple imaging modalities is still in its
infancy in materials science. In multimodal learning, algorithms relate information from multiple data collection sources, as detailed by Ngiam et al. [138], who focused on relating speech audio to videos of lip movement.
In the materials science domain, a multimodal learning problem could involve
data collected using multiple different chemical imaging modalities in order
to relate information such as crystal structure to composition, for example.
Independent of the application area, multimodal data fusion aims to capture correlations across modalities, such as between SEM images that detail microstructural features of interest and TEM and APT data that detail the structure and composition of the phases present, grain boundary character, etc. Multimodal
chemical imaging, correlative and complementary microscopy experiments are
uniquely powerful since they allow for correlation of properties with chemical
composition and structure, which is central to understanding how these drive
material performance [137].
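As a purely illustrative sketch of the fusion idea discussed above, the snippet below combines embeddings from two hypothetical modality-specific encoders (e.g., an SEM image branch and a composition branch) by concatenation before a joint prediction head; the encoder architectures, feature sizes, and task are assumptions for illustration, not taken from the cited works.

```python
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    """Toy late-fusion model: separate encoders per modality, concatenated features."""
    def __init__(self, n_classes=4):
        super().__init__()
        # Branch 1: small CNN encoder for an SEM image patch (1 x 64 x 64, hypothetical size).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32-dim feature
        )
        # Branch 2: MLP encoder for a composition vector (e.g., 8 measured element fractions).
        self.comp_encoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
        # Joint head operating on the fused (concatenated) representation.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image, composition):
        fused = torch.cat([self.image_encoder(image), self.comp_encoder(composition)], dim=1)
        return self.head(fused)

model = MultimodalFusionNet()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 8))  # batch of 2 samples
print(logits.shape)  # torch.Size([2, 4])
```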
Figure 7: Schematic of data from different image types that span across length
scales and are instrumental to materials characterization studies. From left
to right: atom probe tomography data, STEM bright field image, SEM
micrographs, optical micrographs.
## 6 Summary
In this work, we have reviewed the field of image driven machine learning (IDML) for the analysis of material structure characterization. The scope
of the discussion included studies based on sample surface images from
cameras, digital images from optical and electron microscopes, diffraction
pattern images, and simulated images of material structures at various length
scales. A canonical series of six action steps was defined, recognizing the
importance of multiple independent stages required for a successful
application of IDML in the field: problem definition, dataset building, model
selection, model training, model evaluation, and integration with the existing
workflow. Two of the most widely adopted characterization problems have been
morphology class detection using image classification (one class label per
image) and phase segmentation for quantitative analysis using pixel-level
classification (one label per pixel). An emerging sub-field in the last couple
of years has been the adoption of generative models to augment the image
acquisition process, especially useful for high-resolution and destructive
characterization techniques such as electron microscopy. In the last few
years, the field has seen the widespread adoption of neural network models
such as the convolutional neural network for image classification as well as
for semantic segmentation through pixel-level classification. Though the
difficulty in acquiring high quality data in the field is a challenge to
training complex models, the rise in adoption of the training paradigm of
transfer learning promises to partially alleviate this limitation. In large part, the materials science community has thus far demonstrated the utility of image driven machine learning as a useful paradigm for characterizing material structures. With increasing access to high-performance computing and data architectures, and with robust model development through the incorporation of interpretability and explainability, the next stage of progress for the field will entail the integration of IDML into existing instrumentation and simulation workflows.
## 7 Acknowledgements
This work was in part supported by the U.S. Department of Energy (DOE)
National Nuclear Security Administration, and in part supported by the
National Science Foundation; project award No. CMMI-1729336. A portion of
this work was performed at the Pacific Northwest National Laboratory (PNNL),
which is operated for the U.S. DOE by Battelle Memorial Institute under
Contract No. DE-AC05-76RLO1830.
## References
* [1] J.Alkemper, P.Voorhees, J. Microsc., 201 (2001). https://doi.org/10.1046/j.1365-2818.2001.00832.x
* [2] A.Devaraj, D.E. Perea, J.Liu, L. M. Gordon, T. J. Prosa, P. Parikh, D. R. Diercks, S. Meher, R. P. Kolli, Y. S. Meng, and S. Thevuthasan., Int. Mater. Rev. 63 (2018). https://doi.org/10.1080/09506608.2016.1270728
* [3] L.Luo, L.Li, D.K. Schreiber, Y. He, D. R. Baer, S. M. Bruemmer, and C. Wang., Sci. Adv., 6(17) (2020). http://doi.org/10.1126/sciadv.aay8491
* [4] E.J. Kautz, S.V. Lambeets, D.E. Perea, A.Y.Gerard, J.Han, J.R.Scully, J.E.Saal, and D.K.Schreiber, Scr. Mater., 194 (2021). https://doi.org/10.1016/j.scriptamat.2020.10.051
* [5] D.L. McDowell, S.R. Kalidindi, MRS Bull., 41(4) (2016). https://doi.org/10.1557/mrs.2016.61
* [6] A. Agrawal, A. Choudhary, APL Mater., 4(5) (2016). http://doi.org/10.1063/1.4946894
* [7] K. Schwab, The Fourth Industrial Revolution — Foreign Affairs, http://www.foreignaffairs.com/articles/2015-12-12/fourth-industrial-revolution. Accessed on Dec 20, 2020
* [8] J.M. Rickman, T.Lookman, S.V. Kalinin, Acta Mater., 168(473) (2019). http://doi.org/10.1016/j.actamat.2019.01.051
* [9] R.Ramprasad, R.Batra, G.Pilania, A. Mannodi-Kanakkithodi, and C.Kim, npj Comput. Mater., 54 (2017). http://doi.org/10.1038/s41524-017-0056-5
* [10] B.L. DeCost, E.A. Holm, Comput. Mater. Sci., 110, (2015). http://doi.org/10.1016/j.commatsci.2015.08.011
* [11] A.Chowdhury, E.Kautz, B.Yener, D.Lewis, Comput. Mater. Sci., 123, 176 (2016). http://doi.org/10.1016/j.commatsci.2016.05.034
* [12] C. S. Smith, “A History of Metallography”, University of Chicago Press, 1960.
* [13] H.Abrams, Metallography, 4(1), 59 (1971). https://doi.org/10.1016/0026-0800(71)90005-X
* [14] F.Zhang, Q.Tenga, H.Chen, X. He, X. Dong., Comput. Mater. Sci. 186 (2021). https://doi.org/10.1016/j.commatsci.2020.110018
* [15] C.Gobert, E.W. Reutzel, J.Petrich, A.R. Nassar, S.Phoha, Addit. Manuf., 21 (517) (2018). http://doi.org/10.1016/j.addma.2018.04.005
* [16] L.Scime, J.Beuth, Addit. Manuf., 25 (2019). http://doi.org/10.1016/j.addma.2018.11.010
* [17] T.Stan, Z.T. Thompson, P.W. Voorhees, Mater. Charact. 160 (2020).
* [18] T.Strohmann, K.Bugelnig, E.Breitbarth, F. Wilde, T. Steffens, H. Germann, G. Requena, Sci. Rep., 9 (2019). https://doi.org/10.1038/s41598-019-56008-7
* [19] S.Evsevleev, S.Paciornik, G.Bruno, Adv. Eng. Mater., 22(4) (2020). https://doi.org/10.1002/adem.201901197
* [20] S.Tsopanidis, R.H. Morenz, S.Osovski, Eng. Fract. Mech. 231 (2020)
* [21] S.M. Azimi, D.Britz, M.Engstler, M. Fritz, F. Mucklich, Sci. Rep., 8 (2018). http://doi.org/10.1038/s41598-018-20037-5
* [22] A.Campbell, P.Murray, E.Yakushina, S.Marshall, W. Ion, Mater. Des., 141 395 (2018). https://doi.org/10.1016/j.matdes.2017.12.049
* [23] R.Agbozo, W.Jin, J. Korean Soc. Precis. Eng., 37(5) 361 (2019).
* [24] G.D. Forster, A.Castan, A.Loiseau, J. Nelayah, D. Alloyeau, F. Fossard, C. Bichara, H. Amara, Carbon, 169, 465 (2020). https://doi.org/10.1016/j.carbon.2020.06.086
* [25] M.Ziatdinov, O.Dyck, A.Maksov, X. Li, X. Sang, K. Xiao, R. R. Unocic, R. Vasudevan, S. Jesse and S. V. Kalinin, ACS Nano, 11 (2017) https://doi.org/10.1021/acsnano.7b07504
* [26] G.Roberts, S.Y. Haile, R.Sainju, D. J. Edwards, B. Hutchinson, Y. Zhu, Sci. Rep., 9(1) (2019). https://doi.org/10.1038/s41598-019-49105-0
* [27] L.Yao, Z.Ou, B.Luo, C. Xu, and Q. Chen, ACS Cent. Sci., 6 (2020). https://doi.org/10.1021/acscentsci.0c00430
* [28] Y.Wei, Z.Peng, M.Kuhbach, A. Breen, M. Legros, M. Larranaga, F. Mompiou, B. Gault, PLoS One, 14(11) (2019). https://doi.org/10.1371/journal.pone.0225041
* [29] H.Chan, M.Cherukara, T.D. Loeffler, B. Narayanan, and S. K. R. S. Sankaranarayanan, npj Comput. Mater., 6(1) (2020). https://doi.org/10.1038/s41524-019-0267-z
* [30] A.Baskaran, G.Kane, K.Biggs, R. Hull, D.Lewis, Comput. Mater. Sci., 177 (2020). https://doi.org/10.1016/j.commatsci.2020.109593
* [31] W.Ma, E.J. Kautz, A.Baskaran, A. Chowdhury, V. Joshi, B. Yener, and D. J. Lewis, J. Appl. Phys., 128(13) (2020). https://doi.org/10.1063/5.0013720
* [32] J.A. Aguiar, M.L. Gong, R.R. Unocic, T. Tasdizen, and B. D. Miller, Sci. Adv., 5(10) (2019). http://doi.org/10.1126/sciadv.aaw1949
* [33] R.Bostanabad, Y.Zhang, X.Li, T. Kearney, L. C. Brinson, D. W. Apley, W. K. Liu, W. Chen, Prog. Mater. Sci., 95 (2018). https://doi.org/10.1016/j.pmatsci.2018.01.005
* [34] N.H. Paulson, M.W. Priddy, D.L. McDowell, S.R. Kalidindi, Acta Mater., 129, 428 (2017). https://doi.org/10.1016/j.actamat.2017.03.009
* [35] S.Torquato, G.Stell, J. Chem. Phys., 77(4) (1982).
* [36] S.R. Kalidindi, S.R. Niezgoda, A.A. Salem, JOM, 63(4) (2011). https://doi.org/10.1007/s11837-011-0057-7
* [37] P.E. Chen, W.Xu, N.Chawla, Y. Ren, Y. Jiao, Acta Mater., 179, 317 (2019). https://doi.org/10.1016/j.actamat.2019.08.045
* [38] P. Acar and V. Sundararaghavan, AIAA, 55 (8), (2017)
* [39] T.Huang, J. Gao, Q. Sun, D. Zeng, X. Su, W. K. Liu, W. Chen, Comp. Struc., 260(113470) (2021)
* [40] P.Nikolaev, D.Hooper, F.Webber, R. Rao, K. Decker, M. Krein, J. Poleski, R. Barto, and B. Maruyama , npj Comput. Mater., 2 (2016). http://doi.org/10.1038/npjcompumats.2016.31
* [41] W.Ye, C.Chen, S.Dwaraknath, A. Jain, S. P. Ong, and K. A. Persson , MRS Bull., 43(9), 664 (2018). http://doi.org/10.1557/mrs.2018.202
* [42] C.Oses, C.Toher, S.Curtarolo, MRS Bull., 43(9), 670 (2018). http://doi.org/10.1557/mrs.2018.207
* [43] C.Draxl, M.Scheffler, MRS Bull., 43(9), 676 (2018). http://doi.org/10.1557/mrs.2018.208
* [44] J.J. Plata, P.Nath, D.Usanmaz, J. Carrete, C. Toher, M. de Jong, M. Asta, M. Fornari, M. B. Nardelli, and S. Curtarolo, npj Comput. Mater., 3(1), 45 (2017). https://doi.org/10.1038/s41524-017-0046-7
* [45] S.V. Kalinin, B.G. Sumpter, R.K. Archibald, Nat. Mater., 14, (2015). https://doi.org/10.1038/nmat4395
* [46] E.J. Kautz, A.R. Hagen, J.M. Johns, D. E.Burkes.,Comput. Mater. Sci., 161, 107 (2019). https://doi.org/10.1016/j.commatsci.2019.01.044
* [47] Q.Liu, H.Wu, M.J. Paul, P. He, Z. Peng, B. Gludovatz, J. J. Kruzic, C. H. Wang, X. Li,, Acta Mater., 201 316 (2020). https://doi.org/10.1016/j.actamat.2020.10.010
* [48] S.F.Fang, M.P.Wang, M.Song, Mater. Des., 30(7), 2460 (2009). http://doi.org/10.1016/j.matdes.2008.10.008
* [49] Z.Yang, Y.C.Yabansu, R Al-Bahrani, W. Liao, A. N. Choudhary, S. R. Kalidindi, A. Agrawal,Comput. Mater. Sci. 151, (2018). http://doi.org/10.1016/j.commatsci.2018.05.014
* [50] A.Seko, K.Toyoura, S.Muto, T. Mizoguchi, and S. Broderick , MRS Bull., 43(9), 690 (2018). http://doi.org/10.1557/mrs.2018.206
* [51] A.J. Ballard, R.Das, S.Martiniani, D. Mehta, L. Sagun, J. D. Stevensond, and D. J. Wales, Phys. Chem. Chem. Phys., 19(20) (2017). http://doi.org/10.1039/C7CP01108C
* [52] S.Chmiela, H.E. Sauceda, K.R. Maller, A. Tkatchenko, Nat. Commun., 9(1) (2018). http://doi.org/10.1038/s41467-018-06169-2
* [53] K.J. Jose, N.Artrith, J.Behler, J. Chem. Phys., 136(19) (2012).
* [54] A.P. Bartok, S.De, C.Poelking, N. Bernstein, J. R Kermode, G. Csányi, M. Ceriotti, Sci. Adv., 3(12) (2017). https://doi.org/10.1126/sciadv.1701816
* [55] E.Kim, K.Huang, A.Saunders, A. McCallum, G. Ceder, E. Olivetti, Chem. Mater. , 29(21) (2017). https://doi.org/10.1021/acs.chemmater.7b03500
* [56] J.Ling, M.Hutchinson, E.Antono, B. DeCost, E. A.Holm, B. Meredig, Mat. Disc., 10 (2017). http://doi.org/10.1016/j.md.2018.03.002
* [57] E.Kautz, W.Ma, S.Jana, A. Devaraj, V. Joshi, B. Yener, D. Lewis, Mater. Charact., 166 (2020). https://doi.org/10.1016/j.matchar.2020.110379
* [58] W.B. Park, J.Chung, J.Jung, K. Sohn, S. P. Singh, M. Pyo, N. Shin, K-S Sohn, IUCrJ, 4(4) (2017). https://doi.org/10.1107/S205225251700714X
* [59] Y.Liu, T.Zhao, W.Ju, S. Shi, J Materiomics., 3(3) (2017). http://doi.org/10.1016/j.jmat.2017.08.002
* [60] K.T. Butler, D.W. Davies, H.Cartwright, O. Isayev, A.Walsh, Nature, 559(7715) (2018). http://doi.org/10.1038/s41586-018-0337-2
* [61] B.M. Lake, R.Salakhutdinov, J.B. Tenenbaum, Science, 350(6266) (2015). http://doi.org/10.1126/science.aab3050
* [62] A.Krizhevsky, I.Sutskever, G.E. Hinton, Adv. in Neu. Inf. Proc. Sys, 25
* [63] J.Deng, W.Dong, R.Socher, L-J. Li, K. Li, L. Fei-Fei , in _In CVPR_ (2009)
* [64] E.K.Mace, J.D.Ward, C.E.Aalseth, J. Radioanal. Nucl. Chem., 318 (2018). http://doi.org/10.1007/s10967-018-5983-1
* [65] keras-team/keras: Deep Learning for humans. https://github.com/fchollet/keras. Accessed on Dec 20, 2020
* [66] A. Paszke, S. Gross, F. Massa, A. Lerer, J.B. et al.,arXiv:1912.01703
* [67] M.Abadi, A.Agarwal, P.Barham, et al., arXiv:1603.04467
* [68] K.Tsutsui, H.Terasaki, T.Maemura, K. Hayashi, K. Moriguchi, S. Morito, Comput. Mater. Sci. 159 (2019). https://doi.org/10.1016/j.commatsci.2018.12.003.
* [69] Y.Zhu, Q.Ouyang, Y.Mao, BMC Bioinf., 18(348) (2017).
* [70] D.Chen, D.Guo, S.Liu, F.Liu, Symmetry, 12, 639 (2020)
* [71] T.F. B.L.DeCost, E.Holm, Microsc. Microanal., 25 21 (2019)
* [72] O.Furat, M.Wang, M.Neumann, L. Petrich, M. Weber, C. E. Krill and V. Schmidt, Front. Mater., 6 (2019). https://doi.org/10.3389/fmats.2019.00145
* [73] A.O. Vuola, S.U. Akram, J.Kannala, in _IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)_ (2019), pp. 208-212
* [74] H.Hwang, S.M. Choi, J.Oh, S-M Bae, J-H. Lee, J-P. Ahn, J-O. Lee, K-S. An, Y. Yoon, J-H. Hwang, J. Power Sources, 471 (2020). https://doi.org/10.1016/j.jpowsour.2020.228458
* [75] B.Ma, X.Ban, H.Huang, Y. Chen, W. Liu, Y. Zhi, Symmetry, 10(4) (2018). https://doi.org/10.3390/sym10040107
* [76] H.Wang, IOP Conf. Ser.: Mater. Sci. Eng. 652 (2019)
* [77] B.L. DeCost, H.Jain, A.D. Rollett, E.A.Holm, JOM 69(3), 456 (2017). https://doi.org/10.1007/s11837-016-2226-1
* [78] F.Burger, C.Buck, J.Pauli, W.Luther, Intl. Conf. on Comp. Vis. Theory and App., (VISAPP) pp. 143–152 (2014)
* [79] R.Lorenzoni, I.Curosu, S.Paciornik, V. Mechtcherine, M. Oppermann, F. Silva, Cem. Concr. Compos., 108 (2020). https://doi.org/10.1016/j.cemconcomp.2020.103551
* [80] L.YiHao, H.ZiHeng, S.ZhiGuang, et al., Sci. China: Technol. Sci., 62(4) 521 (2019)
* [81] N.Lubbers, T.Lookman, K.Barros, Phys. Rev. E, 96 (2017). https://doi.org/10.1103/PhysRevE.96.052111
* [82] S.R. Niezgoda, A.K. Kanjarla, S.R. Kalidindi, Integr. Mater. Manuf. Innov., 2(1) (2013). https://doi.org/10.1186/2193-9772-2-3
* [83] H.Xu, D.A. Dikin, C.Burkhart, W.Chen, Comput. Mater. Sci., 85 (2014). https://doi.org/10.1016/j.commatsci.2013.12.046
* [84] H.Xu, R.Lu, A.Choudhary, W.Chen, J. Mech. Des, 137 (2015). https://doi.org/10.1115/1.4029768
* [85] Z.Yang, Y.C. Yabansu, D.Jha, W. Liao, A. N. Choudhary, S. R. Kalidindi, A. Agrawal,, Acta Mater., 166 (2019). https://doi.org/10.1016/j.actamat.2018.12.045
* [86] R.Cang, M.Y. Ren, ASME 2016 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, (2016). http://doi.org/10.1115/DETC2016-59404
* [87] I.Arganda-Carreras, V.Kaynig, C.Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, H.S. Seung, Bioinformatics, 33 (2017). http://doi.org/10.1093/bioinformatics/btx180
* [88] D.Cirean, A.Giusti, L.M. Gambardella, J.Schmidhuber, Proc. Neural Inf. Proc. Sys. 25 (2012)
* [89] S.Wang, G.Cao, B.Wei, Y.Yin, G.Yang, C.Li Biomed. Eng. Online, 12 (2013). https://doi.org/10.1186/1475-925X-12-59
* [90] K.de Haan, Z.S. Ballard, Y.Rivenson, Y.Wu, A.Ozcan, Sci. Rep., 9(12050) (2019). https://doi.org/10.1038/s41598-019-48444-2
* [91] K.Kaufmann, C.Zhu, A.S. Rosengarten, D. Maryanovsky, T. J. Harrington, E. Marin, K. S. Vecchio, Science, 367 (2020). http://doi.org/10.1126/science.aay3062
* [92] J.Madsen, P.Liu, J.Kling, J. B. Wagner, T. W. Hansen, O. Winther, J. Schiøtz, Adv. Theor. Simul., 1(8) (2018). https://doi.org/10.1002/adts.201800037
* [93] Y.Han, R.J. Griffiths, H.Z. Yu, Y.Zhu, J. Mater. Res., 35(15) (2020). http://doi.org/10.1557/jmr.2020.120
* [94] I.Goodfellow, Y.Bengio, A.Courville, _Deep Learning_ (MIT Press, 2016)
* [95] B.Chidester, T.Zhou, M.N. Do, J.Ma, Bioinformatics, 35 (2019). https://doi.org/10.1093/bioinformatics/btz353
* [96] D.Marcos, M.Volpi, D.Tuia, in _23rd International Conference on Pattern Recognition (ICPR)_ (2016)
* [97] A.Garcia-Garcia, S.Orts-Escolano, S.Oprea, V. Villena-Martinez, J. Garcia-Rodriguez, arXiv:1704.06857 (2017)
* [98] S. Spurgeon, C. Ophus, L. Jones, et al., Nat. Mater., 20 (2020). https://doi.org/10.1038/s41563-020-00833-z
* [99] R. M. Patton; J. T. Johnston, S. R. Young, C.D. Schuman, D.D. March, T.E, Potok, D.C. Rose, S-H.Lim, T.P. Karnowski, M.A. Ziatdinov, S.V. Kalinin, SC18: Intl. Conf. for High Perf. Comp., Networking, Storage and Analysis, (IEEE, 2018) http://doi.org/10.1109/SC.2018.00053
* [100] R.Vescovi, H.Li, J.Kinnison, M. Keceli, M. Salim, N. Kasthuri, T. D. Uram, N. Ferrier, arXiv:2011.03204 (2020)
* [101] S.K.Seal, S. Lim, D.Wang, J.Hinkle, D.Lunga, A.Tsaris, in _49th Intl. Conf. on Parallel Proc. - ICPP_ , (ACM, 2020). https://doi.org/10.1145/3404397.3404468
* [102] D.Morgan, R.Jacobs, Annu. Rev. Mater. Res., 50(1) 71 (2020). http://doi.org/10.1146/annurev-matsci-070218-010015
* [103] A.J. Joshi, F.Porikli, N.Papanikolopoulos, in _2009 IEEE Conf. on Comp. Vis. and Pat. Recog._ (2009)
* [104] K.Wang, D.Zhang, Y.Li, et al., IEEE Transactions on Circuits and Systems for Video Technology, 27(12) 2591 (2016)
* [105] D.Tuia, F.Ratle, F.Pacifici, M. Kanevski, and W. J. Emery, IEEE Trans. Geosci. Electron., 47(7) 2218 (2009)
* [106] A.Chowdhury, S.K. Biswas, S.Bianco, (2017). https://doi.org/10.1101/211060
* [107] S.Tong, E.Chang, in _Proceed. of the ninth ACM intl. conf. on multimedia_ (2001)
* [108] X.J. Zhu, Tech. rep.. http://digital.library.wisc.edu/1793/60444. (2005)
* [109] M.Guillaumin, J.Verbeek, C.Schmid, in _2010 IEEE Computer society conference on computer vision and pattern recognition_ (IEEE, 2010), pp. 902–909
* [110] D.P. Kingma, S.Mohamed, D.Jimenez-Rezende, M. Welling, Adv. Neural Inf. Process. Syst. 27, 3581 (2014)
* [111] L.H. Gilpin, D.Bau, B.Z. Yuan, A. Bajwa, M. Specter, L. Kagal, in _2018 IEEE 5th International Conference on data science and advanced analytics (DSAA)_ (IEEE, 2018), pp. 80–89
* [112] Q.Zhang, Y.Nian-Wu, S.C. Zhu, in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ (2018), pp. 8827–8836
* [113] Q.s. Zhang, S.C. Zhu, Front. of Inf. Tech. & Elec. Eng., 19(1), 27 (2018)
* [114] M.T. Ribeiro, S.Singh, C.Guestrin, in _Proc. of the 22nd ACM SIGKDD Intl. Conf. on knowledge discovery and data mining_ (2016), pp. 1135–1144
* [115] S.M. Lundberg, S.I. Lee, in _Advances in neural information processing systems_ (2017), pp. 4765–4774
* [116] A.Lazaridou, A.Peysakhovich, M.Baroni, arXiv:1612.07182 (2016)
* [117] A.Chowdhury, J.R. Kubricht, A.Sood, P. Tu, A. Santamaria-Pang, in _2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI)_ (IEEE, 2020), pp. 1604–1607
* [118] A.Santamaria-Pang, J.Kubricht, A.Chowdhury, C. Bhushan, P. Tu, in _Intl. Conf. on Medical Image Comp. and Computer-Assisted Intervention_ (Springer, 2020), pp. 326–334
* [119] I.Goodfellow, J.Pouget-Abadie, M.Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, in _Advances in Neural Information Processing Systems 27_ , (Curran Associates, Inc., 2014), pp. 2672–2680.
* [120] X.Chen, Y.Duan, R.Houthooft, John Schulman, I. Sutskever, P. Abbeel, in _Adv. in Neural Inf. Proc. Sys. 29_ , pp. 2172–2180. arXiv:1606.03657
* [121] J.Yu, Z.Lin, J.Yang, X. Shen, X. Lu, T. S. Huang, in _2018 IEEE/CVF Conf. on Comp. Vis and Pat. Recog_ (2018), pp. 5505–5514
* [122] J.Zhu, T.Park, P.Isola, A.A. Efros, in _2017 IEEE Intl. Conf. on Comp. Vis (ICCV)_ (2017), pp. 2242–2251
* [123] T.Park, M.Liu, T.Wang, J. Zhu, in _2019 IEEE/CVF Conf. on Comp. Vis. and Pat. Recog. (CVPR)_ (2019), pp. 2332–2341
* [124] P.Isola, J.Zhu, T.Zhou, A. A. Efros, in _2017 IEEE Conf. on Comp. Vis. and Pat. Recog. (CVPR)_ (2017), pp. 5967–5976
* [125] Z.Yang, X.Li, L.Catherine-Brinson, A. N. Choudhary , W. Chen , A. Agrawal, J. of Mech. Des., 140(11) (2018)
* [126] X.Li, Z.Yang, L.C. Brinson, A. Choudhary , A. Agrawal , W. Chen, in _ASME 2018 Intl. Des. Eng. Tech. Conf. and Comp. and Info. in Eng. Conf._ (American Society of Mechanical Engineers Digital Collection, 2018)
* [127] L.Mosser, O.Dubrule, M.J. Blunt, arXiv:1704.03225 (2017)
* [128] qubvel/segmentation_models: Segmentation models with pretrained backbones. Keras and TensorFlow Keras. https://github.com/qubvel/segmentation_models. Accessed 02 April 2021.
* [129] S. Akers, E. Kautz, A. Trevino-Gavito, M. Olszta, B. Matthews, L. Wang, Y. Du,S. Spurgeon, (2021) https://doi.org/10.21203/rs.3.rs-346102/v1
* [130] S. V. Kalinin, O. Dyck, A. Ghosh, Y. Liu, R. Proksch, B. G. Sumpter, M. Ziatdinov, arXiv:2010.09196 (2020)
* [131] X. Wang, J. Li, H. D. Ha, J. C. Dahl, J. C. Ondry, I. Moreno-Hernandez, T. Head-Gordon, and A. P. Alivisatos, JACS Au, 1 (2021) https://doi.org/10.1021/jacsau.0c00030
* [132] S. Madireddy, D.W. Chung, T. Loeffler, S.K. Sankaranarayanan, D.N. Seidman, P. Balaprakash, O. Heinonen, Sci. Rep. 9 (1) 1 (2019)
* [133] L. Mosser, O. Dubrule, M.J. Blunt, arXiv:1704.03225
* [134] B.S.S. Pokuri, S. Ghosal, A. Kokate, S. Sarkar, and B. Ganapathysubramanian, npj Comp. Mater. 5 (95) (2019)
* [135] C. Yeung, J.M. Tsai, B. King, Y. Kawagoe, D. Ho, M. W. Knight, and A.P. Raman, ACS Photonics, 7 (2309-2318) (2020)
* [136] S. Liu, B. Kailkhura, J. Zhang, A. M. Hiszpanski, E. Robertson, D. Loveland, T. Yong-Jin Han, arXiv:2007.08631 (2020)
* [137] A. Belianinov, A. V. Ievlev, M. Lorenz, N. Borodinov, B. Doughty, S. V. Kalinin, F. M. Fernández, and O. S. Ovchinnikova, ACS Nano, 12 (12), (2018)
* [138] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee and A. Y. Ng, Multimodal Deep Learning. ICML (2011)
# Rawgment: Noise-Accounted RAW Augmentation Enables Recognition in a Wide
Variety of Environments
Masakazu Yoshimura Junji Otsuka Atsushi Irie Takeshi Ohashi
Sony Group Corporation
{masakazu.yoshimura, junji.otsuka, atsushi.irie<EMAIL_ADDRESS>
###### Abstract
Image recognition models that can work in challenging environments (e.g., extremely dark, blurry, or high dynamic range conditions) would be highly useful. However, creating a training dataset for such environments is expensive and hard due to the difficulties of data collection and annotation. It would be desirable to obtain a robust model without such hard-to-obtain datasets. One simple approach is to apply data augmentation such as color
jitter and blur to standard RGB (sRGB) images in simple scenes. Unfortunately,
this approach struggles to yield realistic images in terms of pixel intensity
and noise distribution because it does not consider the non-linearity of the Image Signal Processor (ISP) and the noise characteristics of the image sensor. Instead,
we propose a noise-accounted RAW image augmentation method. In essence, color
jitter and blur augmentation are applied to a RAW image before applying non-
linear ISP, yielding realistic intensity. Furthermore, we introduce a noise
amount alignment method that calibrates the domain gap in noise property
caused by the augmentation. We show that our proposed noise-accounted RAW
augmentation method doubles the image recognition accuracy in challenging
environments only with simple training data.
### 1 Introduction
Although image recognition has been actively studied recently, the performance
in challenging environments still needs improvement [15]. In sensitive applications such as mobility sensing and head-mounted wearables, the devices have to be robust against various kinds of difficulties such as low light, high dynamic range (HDR) illuminance, motion blur, and camera shake. One possible solution is to use image enhancement and restoration methods. Many DNN-based low-light image enhancement [29, 54, 20, 46, 12, 30], denoising [53, 32, 43], and deblurring [52, 48, 43] methods have been proposed and improve the quality of pre-captured sRGB images. They are quite useful for improving pre-captured image quality, but a recent work [15] showed that the accuracy gain from using them as preprocessing for image recognition models is limited, since some information has already been lost and restoring it is difficult.
Figure 1: The concept of the proposed noise-accounted RAW augmentation. The
conventional augmentation (a) is applied to the output of the ISP. It generates images that cannot be captured under any ambient light intensity due to the nonlinear operations in the ISP. Instead, ours (b) applies augmentation before the ISP. It generates a realistic pixel intensity distribution that could be captured under a different light intensity. Moreover, the noise amount is corrected to minimize the domain gap between real and augmented images.
Another possible solution is to prepare a dataset of the difficult environment
[33, 3]. However, such datasets only cover one or a few difficulties, and creating datasets in various environments is too expensive. In particular, manual annotation of challenging scenes is difficult and time-consuming. For
example, we can see almost nothing in usual sRGB images under extremely low-
light environments due to heavy noise. In addition, some regions in HDR scenes
suffer from halation or blocked-up shadows because the 8-bit dynamic range of
usual sRGB images cannot fully capture the real world, e.g. 0.000001
$[cd/m^{2}]$ under the starlight and 1.6 billion $[cd/m^{2}]$ under the direct
sunlight [37]. Heavy motion blur and camera shake also make annotation
difficult. Some works took paired short- and long-exposure images, and the long-exposure clean image was used for annotation or as ground truth [15, 16, 17, 19].
The limitation is that the target scene needed to be motionless if the pairs
are taken sequentially with one camera [15] or positional calibration is
needed if the pairs are taken with synchronized cameras [16, 17]. Some works
used a beam splitter to capture challenging images and their references
without calibration [45, 19]. However, this is difficult to apply to dark and blurry scenes because one of the paired images becomes blurry or dark. Moreover, HDR images cannot be captured in the same way because some regions become overexposed or underexposed in both cameras.
To this end, we aim to train image recognition models that work in various
environments using only a training dataset from simple environments, i.e., bright, low-dynamic-range, and blur-free scenes. In this case, image augmentation or domain adaptation is important to overcome the domain gap between easy training data and difficult test data. However, we believe the usual augmentation in sRGB space is ineffective because it does not take the nonlinear mapping of the ISP into consideration. In particular, tone mapping drastically changes the RAW image values, which are roughly proportional to physical brightness [44]. Contrast, brightness, and hue augmentation in sRGB space result in unrealistic images that cannot be captured under any different ambient light intensity, as shown in Fig. 1(a). In contrast, we propose augmentation on RAW images. In other words, augmentation is applied before the ISP to diminish the domain shift, as shown in
Fig. 1(b).
Another possible source of domain gap is the noise amount and noise
distribution difference. To tackle these problems, we propose a method to
align both light intensity and noise domain. In fact, recent works showed that
adding physics-based realistic noise improves the performance of DNN-based
denoisers [44, 47, 2, 50] and dark image recognition [15, 4]. Although their
proposed sensor noise models were accurate, they assumed that the original bright images were noise-free. In contrast, we propose to modify the noise amount after contrast, brightness, and hue conversion while considering the noise amount of the original images. This enables a more accurate alignment of the noise domain. Even in bright images, there might be some dark parts due to shadows or the color of objects, and the prior noise in these regions cannot be ignored. Another merit of our method is that it can take as input a dark image that already contains a lot of noise. In addition to noise amount alignment after color jitter augmentation, we show the importance of noise alignment after blur augmentation, which is first proposed in this paper.
Our contributions are as follows:
* •
To the best of our knowledge, this is the first work to emphasize the importance of augmentation before the ISP for image recognition.
* •
A noise amount alignment method is proposed to reduce the noise domain gap after RAW image augmentation. In contrast to previous works, our proposed method takes the prior noise in the input image into account. This enables more accurate alignment, the use of any strength of augmentation, and even already-noisy inputs.
* •
We show qualitative analyses of the validity of our sensor noise modeling and of the corresponding noise-accounted augmentation. We demonstrate that our proposed noise-accounted RAW image augmentation has an edge over previous methods.
### 2 Related Works
#### 2.1 Recognition in Difficult Environment
Many works have tackled image recognition in difficult environments. For low-
light environments, several works improved the accuracy by replacing a
traditional ISP with a powerful DNN-based ISP to create clean images for
downstream image recognition models [5, 33, 25]. Even though these methods are promising because there is no information loss, the computational cost is a problem. Another approach is direct RAW image recognition without an ISP [39, 15]. These image recognition models benefit from the richest information and improve the accuracy under low light. However, several works reported that ISPs, and especially tone mapping, are helpful for machine vision [49, 13]. We argue that direct RAW image recognition works well if the images have a low dynamic range.
Another approach is domain adaptation or related methods which support low-
light recognition with bright images [38, 15, 4].
For HDR environments, some works have proposed DNN-based auto-exposure control
[42, 35] to improve downstream recognition. Also, multi-frame HDR synthesis methods [6, 1] can be used as preprocessing, but camera motion makes them challenging. A luminance normalization method was also introduced to improve recognition performance under varying illumination conditions [18].
For blurry environments, deblurring methods were actively studied [48, 51].
These DNN-based methods successfully restored clear images from heavily blurred ones.
Differently from the above, we aim to perform image recognition under all of the above difficulties using simple-scene training data with our proposed augmentation method. We did not use domain adaptation methods since these methods are usually used in settings where the target domain is equal to or smaller than the source domain [10]. On the contrary, in our setting, the distribution of the target domain is much wider than that of the source domain.
#### 2.2 Image Conversion on RAW
Recently, several methods [2, 29, 4] converted bright sRGB images into realistic dark images by the following procedure. First, they inverted an ISP pipeline to generate RAW-like images, followed by an illumination change in the RAW data space with plausible sensor noise. Afterward, the degraded sRGB image was generated by applying the forward ISP pipeline. By this operation, the ISP's non-linear operations could be avoided, and short exposure or a dark environment could be simulated. With a similar intention, we propose to apply augmentation before the ISP to train image recognition models.
#### 2.3 Noise Modeling and Noise Amount Alignment
In the electronic imaging sensor community, detailed noise models based on electric currents and circuits have been studied [41, 7, 23, 11]. They are precise but difficult to apply to image-to-image conversion. Thus, in the machine vision community, simplified pixel-value-based noise models were proposed based on these electric noise models [44, 47, 2]. Although the noise model of [47] was well designed with the high-degree-of-freedom Tukey lambda distribution [21], we base ours on the well-established heteroscedastic Gaussian model [2, 36, 9, 50] because it is still well fitted to real sensor noise and it allows us to consider the prior noise in the original images, as will be explained later.
Recently, adding realistic, model-based sensor noise to clean ground-truth images was proven helpful for training DNN-based denoisers [44, 50, 47, 2] and low-light object detection models [15, 4]. Although these works use highly consistent noise models, they regard the original images as noise-free. In contrast, we propose to modify the noise amount after image conversion while considering the noise amount of the original images. This enables a more accurate alignment of the noise domain and allows the use of any intensity of augmentation and of already noisy images as input.
### 3 Methodology
In this section, we introduce our noise model, calibration procedure, and
proposed noise-accounted RAW image augmentation.
#### 3.1 Noise Model
First of all, we briefly introduce our noise model for later explanation, although it is based on the well-established heteroscedastic Gaussian model [2, 36, 9, 50]. The number of photons $u$ that hit the photodiode of each pixel is converted to a voltage with quantum efficiency $\alpha$. This is followed by several processes to read out the voltage, in which noise $n_{d}$ is inevitably mixed. Then, an analog gain $g$ is applied to amplify the value. Lastly, the voltage is converted to a digital value. We simplify and summarize the noise added after the analog gain as $n_{r}$. Since it is common to use analog gain, which yields a better signal-to-noise ratio (SNR), we omit the digital gain term in our noise model. To sum up, the photon-to-RAW-pixel-value conversion can be formulated as
$x=g\left(\alpha u+n_{d}\right)+n_{r}.$ (1)
We approximate $n_{d}$ and $n_{r}$ as Gaussian noise $\mathcal{N}\left(0,\sigma_{d}^{2}\right)$ and $\mathcal{N}\left(0,\sigma_{r}^{2}\right)$, respectively, and the number of photons $u$ itself obeys the Poisson distribution $\mathcal{P}\left(\bar{u}\right)$, where
$\bar{u}$ is the expected number of photons. If $\bar{u}$ is large enough, we
can approximate as
$\mathcal{P}\left(\bar{u}\right)\fallingdotseq\mathcal{N}\left(\bar{u},\bar{u}\right)$
[9]. Thus, our noise model is as follows:
$x\sim
g\left(\alpha\mathcal{N}\left(\bar{u},\bar{u}\right)+\mathcal{N}\left(0,\sigma_{d}^{2}\right)\right)+\mathcal{N}\left(0,\sigma_{r}^{2}\right).$
(2)
We show the validity of the Gaussian approximation of $n_{d}$, $n_{r}$, and $\mathcal{P}\left(\bar{u}\right)$ in Section 4.3. We do not follow the further development of the formula in [9], as it is not needed for our purpose.
The Gaussian distribution has the following convenient properties:
$\begin{cases}X\sim\mathcal{N}\left(\mu_{X},\sigma_{X}^{2}\right)\\\
Y\sim\mathcal{N}\left(\mu_{Y},\sigma_{Y}^{2}\right)\\\
X+Y\sim\mathcal{N}\left(\mu_{X}+\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2}\right)\\\
cX\sim\mathcal{N}\left(c\mu_{X},c^{2}\sigma_{X}^{2}\right)\end{cases},$ (3)
if $X$ and $Y$ are independent; this is why we choose the simple Gaussian approximation instead of the recently proposed, more expressive noise model [47]. These properties enable the proposed noise-accounted RAW augmentation to account for prior noise in the input images. Furthermore, they simplify our noise model to
$x\sim\mathcal{N}\left(g\alpha\bar{u},g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).$
(4)
Because the expected number of photons $\bar{u}$ is inconvenient to use in image-to-image conversion, we replace it with the expected pixel value $\mu_{x}=g\alpha\bar{u}$, and our final noise model is defined as
$x\sim\mathcal{N}\left(\mu_{x},\sigma_{x}^{2}=g\alpha\mu_{x}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).$
(5)
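To make Eq. (5) concrete, here is a small NumPy sketch that draws noisy RAW pixel values from the heteroscedastic Gaussian model; the gain and the parameters $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$ below are made-up illustration values, not calibrated sensor constants.

```python
import numpy as np

def sample_raw_pixels(mu_x, g, alpha, sigma_d2, sigma_r2, rng=None):
    """Draw noisy RAW values x ~ N(mu_x, g*alpha*mu_x + g^2*sigma_d^2 + sigma_r^2), as in Eq. (5)."""
    rng = np.random.default_rng() if rng is None else rng
    var = g * alpha * mu_x + g**2 * sigma_d2 + sigma_r2   # signal-dependent + read-noise variance
    return rng.normal(loc=mu_x, scale=np.sqrt(var))

# Hypothetical parameters for illustration only.
g, alpha, sigma_d2, sigma_r2 = 4.0, 0.5, 1.0, 2.0
mu_x = np.array([10.0, 100.0, 1000.0])                    # expected pixel values
print(sample_raw_pixels(mu_x, g, alpha, sigma_d2, sigma_r2))
```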
#### 3.2 Noise Model Calibration
(Figure 2 panels: (a) burst capture with several analog gains, computing the mean and variance of N burst images along the time dimension; (b) estimate the linear relationship between the mean (expected) pixel value and the noise variance; (c) estimate the sensor-specific parameters by solving the simultaneous equations.)
Figure 2: Our noise model calibration procedure for a target sensor.
Our sensor noise model shown in Eq. (5) has three parameters, $\alpha$,
$\sigma_{d}^{2}$, and $\sigma_{r}^{2}$, which have to be calibrated per target
sensor. We capture a series of raw images of a color checker as shown in Fig.
2(a). We then calculate the mean $\mu_{x}$ and variance $\sigma_{x}^{2}$ along
the time direction of each pixel position. We calculate them along the time
direction instead of the spatial direction as performed in [44] since lens
distortion changes the luminance of the same color patch. These operations are
performed several times by changing the analog gain and exposure time.
Eventually, we get various sets of $\left\\{\mu_{x},\sigma_{x}^{2}\right\\}$
for each analog gain. Note that we calculate the mean and variance without separating the RGB channels because there is no significant difference in their noise properties.
In Eq. (5), $\mu_{x}$ and $\sigma_{x}^{2}$ have a linear relationship per
analog gain $g_{n}$,
$\sigma_{x}^{2}=a_{g_{n}}\mu_{x}+b_{g_{n}}.$ (6)
Therefore, we perform linear regression to estimate $a_{g_{n}}$ and $b_{g_{n}}$ per gain, as in Fig. 2(b). In addition, we use RANSAC [8] to robustly handle outlier $\left\{\mu_{x},\sigma_{x}^{2}\right\}$ pairs.
Finally, we estimate $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$ from the
following redundant simultaneous equations by least-squares method,
$\begin{cases}a_{g_{1}}=g_{1}\alpha\\\ \>\>\>\vdots\\\
a_{g_{n}}=g_{n}\alpha\end{cases}$ (7)
$\begin{cases}b_{g_{1}}=g_{1}^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\\\
\>\>\>\vdots\\\ b_{g_{n}}=g_{n}^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\end{cases}.$
(8)
Following the procedure above, we can calibrate the sensor noise model without using any special devices. We later show that our sensor model and calibration method represent the real sensor noise with sufficient precision.
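As a minimal sketch of this calibration, assuming the per-gain $\left\{\mu_{x},\sigma_{x}^{2}\right\}$ pairs have already been measured from burst captures, the snippet below fits Eq. (6) per gain with a plain least-squares line (the paper additionally uses RANSAC for outlier rejection) and then solves the overdetermined systems of Eqs. (7) and (8) for $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$. Function and variable names, as well as the synthetic example data, are illustrative.

```python
import numpy as np

def calibrate_noise_model(per_gain_stats):
    """per_gain_stats: dict mapping analog gain g_n -> (mu array, var array).

    Returns (alpha, sigma_d2, sigma_r2) of Eq. (5).
    """
    gains, slopes, intercepts = [], [], []
    for g, (mu, var) in per_gain_stats.items():
        # Fit var = a_g * mu + b_g (Eq. (6)); RANSAC could be used here instead.
        a_g, b_g = np.polyfit(mu, var, deg=1)
        gains.append(g); slopes.append(a_g); intercepts.append(b_g)

    gains = np.asarray(gains)
    # Eq. (7): a_g = g * alpha  ->  least squares for alpha.
    alpha = np.linalg.lstsq(gains[:, None], np.asarray(slopes), rcond=None)[0][0]
    # Eq. (8): b_g = g^2 * sigma_d^2 + sigma_r^2  ->  least squares for (sigma_d^2, sigma_r^2).
    A = np.stack([gains**2, np.ones_like(gains)], axis=1)
    sigma_d2, sigma_r2 = np.linalg.lstsq(A, np.asarray(intercepts), rcond=None)[0]
    return alpha, sigma_d2, sigma_r2

# Tiny synthetic example: stats generated from known parameters to sanity-check the fit.
rng = np.random.default_rng(0)
true_alpha, true_sd2, true_sr2 = 0.5, 1.0, 2.0
stats = {}
for g in (2.0, 4.0, 8.0):            # analog gains (illustrative)
    mu = np.linspace(10, 1000, 50)
    var = g * true_alpha * mu + g**2 * true_sd2 + true_sr2 + rng.normal(0, 2, mu.shape)
    stats[g] = (mu, var)
print(calibrate_noise_model(stats))   # should be close to (0.5, 1.0, 2.0)
```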
#### 3.3 Noise-Accounted RAW Augmentation
We propose augmentation before the ISP instead of the usual augmentation after the ISP to generate realistic images. Furthermore, we improve the realism of the augmented images by considering the sensor noise model. Unlike previous works [44, 47, 4, 15, 2, 50], ours takes the prior noise amount of the input images into account. It generates more realistic noise since even bright images have some amount of noise; in particular, dark parts due to shadows or the color of objects might have a non-negligible amount of noise. Moreover, unlike previous works, it allows input images of any brightness. Specifically, we introduce how to adjust the noise amount after contrast, brightness, hue, and blur augmentation.
##### 3.3.1 Color Jitter Augmentation
Contrast, brightness, and hue augmentation simulate different exposure times, light intensities, and analog gains. Hence, we first assume that the exposure time, light intensity, and analog gain are multiplied by $p_{e}$, $p_{i}$, and $p_{g}$, respectively. Because $p_{e}$ and $p_{i}$ change the number of photons $u$ in the same way under our noise model, we combine them as $p_{u}=p_{e}p_{i}$. Then, pixel values $x_{new}$ captured under these settings can be written as
$x_{new}\sim\mathcal{N}\left((p_{g}g)\alpha(p_{u}\bar{u}),\ (p_{g}g)^{2}\alpha^{2}(p_{u}\bar{u})+(p_{g}g)^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).$ (9)
Based on Eq. (3), it can be expanded as
$x_{new}\sim\mathcal{N}\left((p_{g}p_{u})\alpha g\bar{u},\ (p_{g}p_{u})^{2}(g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\right)+\mathcal{N}\left(0,\ -(p_{g}p_{u})^{2}(g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})+(p_{g}g)^{2}\alpha^{2}(p_{u}\bar{u})+(p_{g}g)^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).$ (10)
By substituting $\mu_{x}=g\alpha\bar{u}$ and the original pixel value $x_{pre}\sim\mathcal{N}\left(\mu_{x},\ g\alpha\mu_{x}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right)$, it can be expressed as a pixel-value-based equation as follows:
$x_{new}\sim p_{u}p_{g}x_{pre}+\mathcal{N}\left(0,\ p_{u}(1-p_{u})p_{g}^{2}g\alpha\mu_{x}+(1-p_{u}^{2})p_{g}^{2}g^{2}\sigma_{d}^{2}+(1-p_{u}^{2}p_{g}^{2})\sigma_{r}^{2}\right).$ (11)
Because the expected original pixel value $\mu_{x}$ in the Gaussian term is impossible to obtain, we approximate it as $\mu_{x}=x_{pre}$. Based on this equation, we can precisely simulate images as if the exposure time, light intensity, and analog gain had been multiplied by $p_{e}$, $p_{i}$, and $p_{g}$. Let us now come back to contrast, brightness, and hue augmentation. When the contrast is multiplied by $p_{c}$ and the brightness is shifted by $p_{b}$, the augmentation can be expressed as
$x_{new}=p_{c}x_{pre}+p_{b}.$ (12)
This operation corresponds to multiplying $x_{pre}$ by $\frac{p_{c}x_{pre}+p_{b}}{x_{pre}}$. Therefore, noise-accounted contrast and brightness augmentation is finally defined as
$\begin{cases}\text{randomly sample }p_{c},\ p_{b}\\ \text{randomly sample }p_{u},\ p_{g}\text{ such that }p_{u}p_{g}=\frac{p_{c}x_{pre}+p_{b}}{x_{pre}},\ p_{u},p_{g}>0\\ \text{apply Eq. (11) with }\mu_{x}\leftarrow x_{pre}\end{cases}$ (13)
We can also convert the hue by changing $p_{c}$ and $p_{b}$ per color filter position in the RAW Bayer pattern.
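A minimal NumPy sketch of the noise-accounted color jitter of Eqs. (11)-(13) is given below, assuming a single-channel RAW array and already-calibrated parameters $g$, $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$; the way the overall factor is split between $p_{u}$ and $p_{g}$ and the usage values are illustrative assumptions.

```python
import numpy as np

def noise_accounted_color_jitter(x_pre, g, alpha, sigma_d2, sigma_r2,
                                 p_c, p_b, rng=None):
    """Apply x_new = p_c * x_pre + p_b with noise aligned via Eq. (11).

    x_pre: RAW pixel array; p_c, p_b: contrast factor and brightness offset.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 1e-6
    # Overall multiplicative factor p_u * p_g from Eq. (13); split arbitrarily below.
    pupg = (p_c * x_pre + p_b) / np.maximum(x_pre, eps)
    p_g = 1.0                      # attribute the whole factor to exposure/light (p_u)
    p_u = pupg / p_g

    mu_x = x_pre                   # approximation mu_x <- x_pre from the text
    extra_var = (p_u * (1.0 - p_u) * p_g**2 * g * alpha * mu_x
                 + (1.0 - p_u**2) * p_g**2 * g**2 * sigma_d2
                 + (1.0 - (p_u * p_g)**2) * sigma_r2)
    # Negative variance would correspond to brightening; clip to zero as discussed in Sec. 3.3.2.
    extra_var = np.clip(extra_var, 0.0, None)
    return p_u * p_g * x_pre + rng.normal(0.0, np.sqrt(extra_var))

# Illustrative usage: darken a RAW image to 20% brightness with aligned noise.
raw = np.full((4, 4), 500.0)
dark = noise_accounted_color_jitter(raw, g=4.0, alpha=0.5, sigma_d2=1.0, sigma_r2=2.0,
                                    p_c=0.2, p_b=0.0)
```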
##### 3.3.2 Blur Augmentation
Next, we introduce noise-accounted blur augmentation. Usual blur augmentation produces less noise than real blur because the noise terms $n_{d}$ and $n_{r}$ are smoothed, although in reality their amounts are unrelated to how fast the camera shakes or how objects move. Only the photon-number-related noise is smoothed in real motion blur. A blurred pixel can be expressed as
$x_{new}\sim\mathcal{N}\left(g\alpha\sum_{k}w_{k}\bar{u_{k}},\>g^{2}\alpha^{2}\sum_{k}w_{k}\bar{u_{k}}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right),$
(14)
where $w_{k}$ is the blur kernel with $\sum_{k}w_{k}=1$. With equation manipulations similar to those from Eq. (9) to Eq. (11), noise-accounted blur augmentation is
$x_{new}\sim\mathcal{N}\left(\sum_{k}w_{k}g\alpha\bar{u}_{k},\ \sum_{k}w_{k}^{2}(g^{2}\alpha^{2}\bar{u}_{k}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\right)+\mathcal{N}\left(0,\ -\sum_{k}w_{k}^{2}(g^{2}\alpha^{2}\bar{u}_{k}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})+g^{2}\alpha^{2}\sum_{k}w_{k}\bar{u}_{k}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right)=\sum_{k}w_{k}x_{pre,k}+\mathcal{N}\left(0,\ g\alpha\sum_{k}(1-w_{k})w_{k}x_{pre,k}+(1-\sum_{k}w_{k}^{2})(g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\right).$ (15)
We account for prior noise but not for prior blur because most images in usual datasets are not blurred very much; furthermore, estimating the prior blur amount is difficult.
In addition, note that augmentation that makes images cleaner (i.e., brighter or less blurred) is inevitably difficult with these noise-accounted RAW image augmentations. Clipping the noise variance in Eq. (11) to zero forcibly enables brightening, but brightening too much causes a mismatch in the noise domain.
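Under the same calibrated-parameter assumptions as above, the following sketch blurs a RAW image and adds the compensating noise of Eq. (15); the kernel and the image size are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def noise_accounted_blur(x_pre, kernel, g, alpha, sigma_d2, sigma_r2, rng=None):
    """Blur a RAW image and align its noise to real motion blur, following Eq. (15)."""
    rng = np.random.default_rng() if rng is None else rng
    kernel = kernel / kernel.sum()                       # enforce sum_k w_k = 1
    blurred = convolve(x_pre, kernel, mode="reflect")    # sum_k w_k * x_pre,k

    # Photon-related term: g*alpha * sum_k (1 - w_k) * w_k * x_pre,k
    photon_var = g * alpha * convolve(x_pre, (1.0 - kernel) * kernel, mode="reflect")
    # Read-noise term: (1 - sum_k w_k^2) * (g^2 * sigma_d^2 + sigma_r^2)
    read_var = (1.0 - np.sum(kernel**2)) * (g**2 * sigma_d2 + sigma_r2)

    extra_var = np.clip(photon_var + read_var, 0.0, None)
    return blurred + rng.normal(0.0, np.sqrt(extra_var))

# Illustrative usage: horizontal motion blur with a 5-tap box kernel.
raw = np.random.default_rng(0).uniform(50, 500, size=(32, 32))
kernel = np.ones((1, 5)) / 5.0
blurred = noise_accounted_blur(raw, kernel, g=4.0, alpha=0.5, sigma_d2=1.0, sigma_r2=2.0)
```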
### 4 Evaluation
#### 4.1 Dataset
Figure 3: Some examples from our introduced dataset. The upper row shows the training dataset collected in simple environments, while the lower row shows the challenging test dataset collected in various environments, from dark to bright, HDR, and with or without handshake blur.
Although our method is applicable to any computer vision task, we chose human detection as the target task because of its wide usage. We prepared a RAW image dataset for the human detection task captured with an internally developed sensor. As mentioned earlier, our objective is to train image recognition models that work in various environments despite only using a training dataset from simple environments. Therefore, most of the training images were taken under normal light conditions with a fixed camera position in several environments. Note that moderately dark and HDR images are also included to some extent in the training dataset. The analog gain was set to 6 dB for outdoors, 12 dB for indoors, and 32 dB for moderately dark nights to generate realistic easy images without auto-exposure. On the other hand, the test images were taken under HDR or extremely dark environments. In addition, about 50% of them were taken with strong camera shake. Moreover, the analog gain was chosen from 3 dB, 6 dB, 12 dB, and 24 dB regardless of the environment. Both datasets were captured at around 1 fps to increase the diversity between images.
We manually annotated the human bounding boxes of both training and test data.
Because precise annotation of test data on sRGB was impossible due to the
noise and blur, we applied an offline ISP per image and then annotated the
bounding boxes. We manually set adequate ISP parameters per image and had to
change the parameters several times to grasp the entire image. To avoid annotating large training datasets in this way, it is desirable to train the model with a simple dataset. In total, we collected 18,880 images for training and 2,800 images for testing. Examples are shown in Fig. 3.
#### 4.2 Implementation Details
We mainly tested with TTFNet [27], whose backbone was ResNet18 [14]. The network was trained for 48 epochs from scratch using the Adam optimizer [22] and a cosine decay learning rate scheduler with a linear warmup [28] for the first 1,000 iterations; the maximum and minimum learning rates were 1e-3 and 1e-4.
We implemented a simple software ISP consisting of only a gamma tone mapping.
We implemented two types of gamma tone mapping. One was the simplest gamma
tone mapping, $y=x^{\frac{1}{\gamma}}\ (0\leq x\leq 1)$. The $\gamma$ was set to 5 after tuning in a rough grid-search manner. The other was a gamma tone mapping parameterized by three parameters [34]. Because a grid search over three parameters is time-consuming, we tuned these parameters by backpropagation together with the detector's weights, as performed in [35, 49].
We did not use other ISP functions because they were known to have less impact
on image recognition compared with tone mapping [13]. We also prepared an
elaborated black-box ISP consisting of many functions in addition to a tone
mapping function. The parameters were tuned for human perceptual quality by
experts. We only used the elaborated black-box ISP under the conventional training pipeline; in other words, due to a hardware limitation, we experimented with the ISP-augmentation-detection order. If contrast augmentation was
used, the hue was also changed with a probability of 50%. In detail, the contrast factor per color channel $p_{c,c}$ was randomized around the base contrast factor $p_{c,base}$ within $(-0.2\,p_{c,base},\ 0.2\,p_{c,base})$ after $p_{c,base}$ was randomly decided. If blur augmentation was used, we applied a random-sized blur kernel with a probability of 50%. Random shift and random scale augmentation, whose maximum transformations were 10% and 3% of the input size, were also applied with a probability of 80% before the color jitter augmentation. The input size to the detector was $\left(576,\,352,\,3\right)$.
We evaluated the performance of the detector with average precision
([email protected]:0.95) [24].
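For reference, the simplest gamma ISP described above amounts to a one-line operation on a normalized RAW image; $\gamma=5$ follows the grid-searched value reported here, while the white level used for normalization is an assumption.

```python
import numpy as np

def simple_gamma_isp(raw, white_level=4095.0, gamma=5.0):
    """Simplest software ISP: normalize to [0, 1] and apply y = x^(1/gamma)."""
    x = np.clip(raw / white_level, 0.0, 1.0)   # white_level depends on the sensor (12-bit assumed here)
    return x ** (1.0 / gamma)
```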
#### 4.3 Calibration of the Noise Model
For each analog gain of 6 dB, 12 dB, and 24 dB, we captured two burst sequences with different illumination. Each sequence consisted of 100 images. $24\times 24$ Bayer pixels were sampled from each of the 24 color patches to calculate the mean and variance. In total, $2\times 24\times 24\times 24$ pairs of mean and variance values were obtained per analog gain to estimate the noise model. Thanks to the various color filters, exposure values, and color patches, two sequences were enough to ensure diversity. The lines in Fig. 4 show the estimated linear relationships of Eq. (6). The coefficients of determination, $R^{2}$, for these line estimates were 0.9833, 0.9884, and 0.9862 for 6, 12, and 24 dB, respectively. The high $R^{2}$ values indicate that the noise intensity was well modeled against the illumination intensity. Also, the $R^{2}$ values for Eq. (7) and Eq. (8) were $1.0000$ and $0.9984$, which means the noise intensity was also well modeled against the analog gain. Based on the above, our noise model and calibration method are well suited to the sensor in terms of noise intensity.
Figure 4: The calibration result of the sensor noise model. The dots in the left graph are the mean and variance pairs of each pixel, and the lines are the estimated linear relationship per analog gain. Since we plot a large number of dots, they appear widely spread. On the other hand, the histogram plot per analog gain (right) shows a clear difference between them.
(Figure 5 panels: (a) Shapiro-Wilk test: p-value versus expected pixel value (number of pixels); (b) pixel value distributions of the small p-value cases.)
Figure 5: The result of the Shapiro-Wilk [40] test per expected pixel value. We tested whether the 100 pixel values at each position follow a Gaussian distribution, and most of the p-values were larger than 0.05 (a). The right panel (b) shows the cases where p < 0.05, indicating that the small p-values came from the sparsity of the values rather than from skew.
Then, we checked the validity of the shape of the distribution. All of the noise sources were assumed to follow a Gaussian distribution. In particular, it is unclear whether the approximation $\mathcal{P}\left(\bar{u}\right)\fallingdotseq\mathcal{N}\left(\bar{u},\bar{u}\right)$ [9] holds. Therefore, the Shapiro-Wilk test [40] was performed. If the p-value of the test is higher than 0.05, the null hypothesis that the data are normally distributed cannot be rejected at the 95% confidence level. Fig. 5(a) shows that most of the p-values were higher than 0.05, but some results for dark pixels were less than 0.05. However, the distributions of these dark pixels looked like Fig. 5(b): they were not very skewed, and it is the sparsity of the values that causes the small p-values. Thereby, we concluded that all the noise sources can be regarded as Gaussian noise.
Based on the above, our sensor noise model and the calibration method
represent the sensor noise well in terms of both intensity and distribution.
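As a sketch of this per-pixel normality check, assuming a burst of 100 frames of a static scene stacked along the first axis, the Shapiro-Wilk test can be run with SciPy as follows; the array shape and the synthetic data are illustrative.

```python
import numpy as np
from scipy.stats import shapiro

# burst: 100 frames of a static scene, shape (100, H, W); synthetic data used here for illustration.
rng = np.random.default_rng(0)
burst = rng.normal(loc=200.0, scale=10.0, size=(100, 8, 8))

p_values = np.empty(burst.shape[1:])
for i in range(burst.shape[1]):
    for j in range(burst.shape[2]):
        # Null hypothesis: the 100 temporal samples at (i, j) are normally distributed.
        stat, p = shapiro(burst[:, i, j])
        p_values[i, j] = p

print((p_values > 0.05).mean())  # fraction of pixels where normality is not rejected
```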
#### 4.4 Statistical Validation of the Noise Alignment
Before checking the effectiveness of the proposed noise-accounted RAW augmentation for computer vision applications, the statistical validity was evaluated once more by utilizing the sequential images of the color checker. The evaluation method was as follows. First, the contrast of the sequential images was changed with or without noise consideration. Second, mean and variance pairs along the sequential dimension were calculated. Third, the distributions of the real and converted pairs were examined. If the real and converted pairs matched well, it means that the converted images have the same noise amount as the original real images.
We then compared contrast conversion using three different noise alignment methods, i.e., no noise consideration, the usual noise-accounted method that disregards prior noise in the input [15, 4, 44, 47], and our proposed noise alignment method. Fig. 6 shows the comparison result. It indicates that prior noise consideration is unnecessary if pixels are darkened considerably. However, a milder contrast reduction caused a noise domain gap even if the input was bright, as in Fig. 6 (right). In contrast, our noise-accounted conversion always converted images with realistic noise. This implies that our proposed method is suited to various strengths of augmentation. If prior noise is not accounted for, the inputs always have to be darkened considerably, and already dark images are difficult to use.
(Figure 6 legend: real data; w/o noise-accounted; w/o prior-noise-accounted; w/ prior-noise-accounted (ours). Axes: mean pixel value versus variance. Panels: (a) dark-to-extreme-dark conversion; (b) bright-to-dark conversion.)
Figure 6: The statistical validation of the noise alignment. The green dots
represent real image data and the others are converted from it. The
conversions are ×0.1 and ×0.5 contrast conversions with several methods. Panel
(a) is a dark-to-extreme-dark conversion and panel (b) is a bright-to-dark
conversion.
#### 4.5 Augmentation Parameters Tuning
The optimal augmentation parameters should differ between augmentation
before and after the ISP. To make a fair comparison, we roughly tuned both sets of
augmentation parameters. The strategy was as follows. First, we searched for the
appropriate ranges of the contrast factor $p_{c}$ and the brightness perturbation
$p_{b}$ successively. To be robust to any illumination changes, we re-
parameterized $p_{b}$ as $p_{b}=\hat{p_{b}}\min(x)$ and randomized
$\hat{p_{b}}$ instead of $p_{b}$. Lastly, we searched for the appropriate max
blur distance $p_{d}$. In these experiments, we did not account for sensor
noise. We used the elaborated black-box ISP for the augmentation-after-ISP
setting and the simplest gamma function for the augmentation-before-ISP
setting.
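As an illustration of this search space, the sketch below samples one set of augmentation parameters with the re-parameterized brightness $p_{b}=\hat{p_{b}}\min(x)$; the log-uniform draw for the contrast factor and the ranges shown are placeholder assumptions standing in for the candidates in Table 1:
import numpy as np

def sample_aug_params(raw, contrast_range=(0.1, 10.0), b_hat_range=(-0.5, 0.5),
                      max_blur_dist=9, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # contrast factor p_c, drawn log-uniformly so darkening and brightening are balanced
    p_c = np.exp(rng.uniform(np.log(contrast_range[0]), np.log(contrast_range[1])))
    # brightness perturbation re-parameterized as p_b = p_b_hat * min(x), so the shift
    # scales with the darkest pixel and stays robust to illumination changes
    p_b_hat = rng.uniform(*b_hat_range)
    p_b = p_b_hat * raw.min()
    # max blur distance p_d in pixels
    p_d = rng.uniform(0, max_blur_dist)
    return p_c, p_b, p_d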
The results are shown in Table 1. The best parameter settings were used in
the next section.
Table 1: The augmentation hyperparameter tuning for a fair comparison between
before and after ISP augmentation. We tuned the range of contrast, brightness,
and blur distance one by one, and the previous best parameters were carried
over.
| [email protected]:0.95 [%]
---|---
| augmentation after ISP (tuned for the black-box ISP) | augmentation before ISP (tuned for the simplest ISP)
contrast | 0.2-1 | 0.1-1 | 0.2-5 | 0.1-10 | 0.05-20 | 0.1-1 | 0.02-1 | 0.01-1 | 0.005-1 | 0.01-1.1
41.9 | 41.8 | 43.2 | 45.1 | 44.5 | 36.9 | 39.8 | 40.4 | 35.8 | 38.9
brightness | 0-0 | -0.1-0.1 | -0.2-0.2 | -0.5-0.5 | -0.7-0.7 | 0-0 | -0.1-0.1 | -0.2-0.2 | -0.5-0.5 | -0.7-0.7
45.1 | 44.5 | 44.5 | 45.2 | 44.7 | 40.4 | 40.9 | 39.9 | 39.4 | 39.4
blur distance | 0-0 | 0-3 | 0-5 | 0-9 | 0-13 | 0-0 | 0-3 | 0-5 | 0-9 | 0-13
45.2 | 46.8 | 45.7 | 45.9 | 37.9 | 40.9 | 39.8 | 39.6 | 40.6 | 43.3
#### 4.6 Evaluation of the Noise-Accounted RAW Augmentation
In this section, the proposed noise alignment was also applied. As Table 2
shows, noise alignment for both color jitter and blur augmentation improves
the accuracy on difficult test data. It suggests that the noise model
reduced the noise domain gap well and that the noise domain is an important
factor for the DNN detector. In the color-jitter-only setting, we
improved the accuracy over the general noise alignment method [9, 31, 15, 4,
26, 47, 36, 50, 2] by considering prior noise. In the color jitter plus blur
augmentation setting, noise-accounted color jitter augmentation combined with
normal blur augmentation did not improve much over the settings without noise
accounting. Instead, noise alignment in both color jitter and blur augmentation
improved the accuracy. It indicates that random noise is not effective and
realistic noise is important. Comparing the accuracy under the same simplest
gamma tone mapping setting, our proposed noise-accounted RAW image augmentation
doubled the accuracy of conventional augmentation after ISP. Furthermore, when
parameterized gamma tone mapping was used as our simple ISP, the accuracy was
even superior to the elaborated black-box ISP consisting of many functions in
addition to a tone mapping function. As the visualization results in the Appendix
show, the elaborated black-box ISP outputs more perceivable images. It
suggests that minimizing the domain gap caused by augmentation is more
important than the superiority of the ISP. We might improve the accuracy further
by combining an elaborated ISP with the proposed augmentation.
Table 2: Evaluation of the noise-accounted RAW augmentation. The color
augmentation contains default hue augmentation plus tuned contrast and
brightness augmentation. The _w/o prior_ means the prior input noise was
disregarded like many of the previous noise-accounted image conversion methods
[9, 31, 15, 4, 26, 47, 36, 50, 2]. Because we adopt the well-established
heteroscedastic Gaussian model Eq. (2), it is identical to the noise alignment
of [2, 36, 9, 50]. In this experiment, we also used the parameterized gamma
tone mapping as the simple ISP, although it cannot be used in the
augmentation-after-ISP setting because the gradient from the detection loss is
needed to tune it.
| | [email protected]:0.95 [%]
---|---|---
| | | black-box | simple ISP
augmentation | noise | ISP | simplest | parameterized
Color | after | - | 45.2 | 19.3 | -
before (ours) | - | - | 40.9 | 44.4
w/o prior | - | 43.5 | 47.7
ours | - | 44.6 | 48.1
Color + Blur | after | - | 46.8 | 20.4 | -
before (ours) | - | - | 43.3 | 43.8
ours$\dagger$ | - | 43.4 | 47.9
ours | - | 45.3 | 48.3
$\dagger$: The noise alignment was only applied to the color jitter
augmentation.
As mentioned earlier, there are works dealing with noise in noise-related fields
such as denoising. We compared ours with these methods on the detection task. One
was the K-Sigma transform [44], a kind of noise domain generalization. It
normalizes images based on the linear relation between the pixel value and the
noise amount. The other was noise amount notification via concatenation of a
noise variance map [2]. Following the previous settings, direct
RAW input without an ISP was also compared. Color jitter and blur augmentation
were also applied to these methods, unlike in the original papers, for a fair
comparison. Table 3 shows the comparison results. As for the K-Sigma transform,
simply applying “aug.” before or after the transform gives better
results. However, there is a theoretical problem in both cases. If “aug.” is
applied after the transform, the linear relation between pixel value and noise
amount is retained but pixel intensity becomes inconsistent. On the other
hand, applying “aug.” before the transform makes the noise amount unrealistic.
Changing the augmentation to “our aug.” makes the intensity and noise
realistic and improved the accuracy. From the experiment, we found the
proposed augmentation boosts previous noise-dealing methods when an ISP is used.
However, when no ISP was used, noise-accounted augmentation slightly
degraded the accuracy. We argue that this is because the intensity
distribution was too difficult and unrealistically clean images might help
training. However, the overall accuracy was lower than with an ISP. Also, for the
detection task, our proposed method alone was enough. It might be an indication
that, unlike the denoising task, which should focus on noise, it is important
to make the detector focus on the pixel intensity distribution for the
detection task.
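For orientation, a minimal sketch of the K-Sigma idea as we read it from [44] follows: under a noise model $\mathrm{Var}(x)=kx+\sigma^{2}$, the mapping rescales values so that the noise level relates to the transformed value independently of the analog gain. The function names are ours and readers should consult [44] for the exact formulation:
def k_sigma_transform(x, k, sigma2):
    # map x -> x/k + sigma^2/k^2; under Var(x) = k*x + sigma^2 the transformed value
    # then satisfies Var(f(x)) ~ f(x), independent of the analog gain
    return x / k + sigma2 / (k ** 2)

def k_sigma_inverse(y, k, sigma2):
    # undo the transform to return to the original pixel scale
    return (y - sigma2 / (k ** 2)) * k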
Table 3: The comparison results with other noise-dealing techniques. “aug.” means contrast and blur augmentation without noise accounting and “our aug.” means with noise accounting. We used the simplest gamma function in the ISP. All values are [email protected]:0.95 [%].
method | w/o ISP | w/ ISP
---|---|---
concat [2] | 16.5 | 21.5
aug. + concat [2] | 35.0 | 31.6
our aug. + concat [2] | 33.7 | 40.4
K-Sigma [44] | 14.3 | 27.5
K-Sigma [44] \+ aug. | 25.0 | 34.1
aug. + K-Sigma [44] | 26.6 | 42.1
our aug. + K-Sigma [44] | 26.3 | 44.0
our aug. | 32.8 | 45.3
### 5 Conclusion
We proposed a noise-accounted RAW augmentation method in which augmentation is
applied before the ISP to minimize the luminance domain gap and a sensor noise
model is taken into account to minimize the noise domain gap. Unlike previous
noise-accounted methods, ours takes the prior input noise into account. It
minimizes the domain gap more and enables the use of already noisy images as
training data. Thanks to the realistic augmentation, our method improved the
detection accuracy in difficult scenes compared to the conventional methods.
In the future, we would like to investigate whether the proposed augmentation
with an elaborate ISP improves computer vision performance even further.
We hope this work sheds light on the importance of RAW images.
### References
* [1] Goutam Bhat, Martin Danelljan, Fisher Yu, Luc Van Gool, and Radu Timofte. Deep reparametrization of multi-frame super-resolution and denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2460–2470, 2021.
* [2] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11036–11045, 2019.
* [3] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3291–3300, 2018.
* [4] Ziteng Cui, Guo-Jun Qi, Lin Gu, Shaodi You, Zenghui Zhang, and Tatsuya Harada. Multitask aet with orthogonal tangent regularity for dark object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2553–2562, 2021.
* [5] Steven Diamond, Vincent Sitzmann, Frank Julca-Aguilar, Stephen Boyd, Gordon Wetzstein, and Felix Heide. Dirty pixels: Towards end-to-end image processing and perception. ACM Transactions on Graphics (TOG), 40(3):1–15, 2021.
* [6] Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Burst image restoration and enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5759–5768, 2022.
* [7] Abbas El Gamal, Boyd A Fowler, Hao Min, and Xinqiao Liu. Modeling and estimation of fpn components in cmos image sensors. In Solid State Sensor Arrays: Development and Applications II, volume 3301, pages 168–177. SPIE, 1998.
* [8] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
* [9] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissonian-gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing, 17(10):1737–1754, 2008.
* [10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
* [11] Ryan D Gow, David Renshaw, Keith Findlater, Lindsay Grant, Stuart J McLeod, John Hart, and Robert L Nicol. A comprehensive tool for modeling cmos image-sensor-noise performance. IEEE Transactions on Electron Devices, 54(6):1321–1329, 2007.
* [12] Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1780–1789, 2020.
* [13] Patrick Hansen, Alexey Vilkin, Yury Krustalev, James Imber, Dumidu Talagala, David Hanwell, Matthew Mattina, and Paul N Whatmough. Isp4ml: The role of image signal processing in efficient deep learning vision systems. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 2438–2445. IEEE, 2021.
* [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [15] Yang Hong, Kaixuan Wei, Linwei Chen, and Ying Fu. Crafting object detection in very low light. In Proceedings of the British Machine Vision Virtual Conference, 2021.
* [16] Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, and Luc Van Gool. Dslr-quality photos on mobile devices with deep convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3277–3285, 2017.
* [17] Andrey Ignatov, Luc Van Gool, and Radu Timofte. Replacing mobile camera isp with a single deep learning model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 536–537, 2020.
* [18] Tomas Jenicek and Ondrej Chum. No fear of the dark: Image retrieval under varying illumination conditions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9696–9704, 2019.
* [19] Haiyang Jiang and Yinqiang Zheng. Learning to see moving objects in the dark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7324–7333, 2019.
* [20] Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang. Enlightengan: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing, 30:2340–2349, 2021.
* [21] Brian L Joiner and Joan R Rosenblatt. Some properties of the range in samples from tukey’s symmetric lambda distributions. Journal of the American Statistical Association, 66(334):394–399, 1971.
* [22] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [23] Mikhail Konnik and James Welsh. High-level numerical simulations of noise in ccd and cmos photosensors: review and tutorial. arXiv preprint arXiv:1412.4031, 2014.
* [24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [25] Shuai Liu, Chaoyu Feng, Xiaotao Wang, Hao Wang, Ran Zhu, Yongqiang Li, and Lei Lei. Deep-flexisp: A three-stage framework for night photography rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1211–1220, 2022.
* [26] Xinhao Liu, Masayuki Tanaka, and Masatoshi Okutomi. Practical signal-dependent noise parameter estimation from a single noisy image. IEEE Transactions on Image Processing, 23(10):4361–4371, 2014.
* [27] Zili Liu, Tu Zheng, Guodong Xu, Zheng Yang, Haifeng Liu, and Deng Cai. Training-time-friendly network for real-time object detection. In proceedings of the AAAI conference on artificial intelligence, volume 34, pages 11685–11692, 2020.
* [28] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
* [29] Feifan Lv, Feng Lu, Jianhua Wu, and Chongsoon Lim. Mbllen: Low-light image/video enhancement using cnns. In BMVC, volume 220, page 4, 2018.
* [30] Long Ma, Tengyu Ma, Risheng Liu, Xin Fan, and Zhongxuan Luo. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5637–5646, 2022.
* [31] Markku Makitalo and Alessandro Foi. Optimal inversion of the generalized anscombe transformation for poisson-gaussian noise. IEEE transactions on image processing, 22(1):91–103, 2012.
* [32] Kristina Monakhova, Stephan R Richter, Laura Waller, and Vladlen Koltun. Dancing under the stars: video denoising in starlight. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16241–16251, 2022.
* [33] Igor Morawski, Yu-An Chen, Yu-Sheng Lin, Shusil Dangi, Kai He, and Winston H Hsu. Genisp: Neural isp for low-light machine cognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 630–639, 2022.
* [34] Ali Mosleh, Avinash Sharma, Emmanuel Onzon, Fahim Mannan, Nicolas Robidoux, and Felix Heide. Hardware-in-the-loop end-to-end optimization of camera image processing pipelines. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7529–7538, 2020.
* [35] Emmanuel Onzon, Fahim Mannan, and Felix Heide. Neural auto-exposure for high-dynamic range object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7710–7720, 2021.
* [36] Abhijith Punnappurath, Abdullah Abuolaim, Abdelrahman Abdelhamed, Alex Levinshtein, and Michael S Brown. Day-to-night image synthesis for training nighttime neural isps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10769–10778, 2022.
* [37] Erik Reinhard, Wolfgang Heidrich, Paul Debevec, Sumanta Pattanaik, Greg Ward, and Karol Myszkowski. High dynamic range imaging: acquisition, display, and image-based lighting. Morgan Kaufmann, 2010.
* [38] Yukihiro Sasagawa and Hajime Nagahara. Yolo in the dark-domain adaptation method for merging multiple models. In European Conference on Computer Vision, pages 345–359. Springer, 2020.
* [39] Eli Schwartz, Alex Bronstein, and Raja Giryes. Isp distillation. arXiv preprint arXiv:2101.10203, 2021.
* [40] Samuel S Shapiro and RS Francia. An approximate analysis of variance test for normality. Journal of the American statistical Association, 67(337):215–216, 1972.
* [41] Sungho Suh, Shinya Itoh, Satoshi Aoyama, and Shoji Kawahito. Column-parallel correlated multiple sampling circuits for cmos image sensors and their noise reduction effects. Sensors, 10(10):9139–9154, 2010.
* [42] Justin Tomasi, Brandon Wagstaff, Steven L Waslander, and Jonathan Kelly. Learned camera gain and exposure control for improved visual feature detection and matching. IEEE Robotics and Automation Letters, 6(2):2028–2035, 2021.
* [43] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxim: Multi-axis mlp for image processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5769–5780, 2022.
* [44] Yuzhi Wang, Haibin Huang, Qin Xu, Jiaming Liu, Yiqun Liu, and Jue Wang. Practical deep raw image denoising on mobile devices. In European Conference on Computer Vision, pages 1–16. Springer, 2020.
* [45] Zhixiang Wang, Xiang Ji, Jia-Bin Huang, Shin’ichi Satoh, Xiao Zhou, and Yinqiang Zheng. Neural global shutter: Learn to restore video from a rolling shutter camera with global reset feature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17794–17803, 2022.
* [46] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560, 2018.
* [47] Kaixuan Wei, Ying Fu, Jiaolong Yang, and Hua Huang. A physics-based noise formation model for extreme low-light raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2758–2767, 2020.
* [48] Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G Dimakis, and Peyman Milanfar. Deblurring via stochastic refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16293–16303, 2022.
* [49] Chyuan-Tyng Wu, Leo F Isikdogan, Sushma Rao, Bhavin Nayak, Timo Gerasimow, Aleksandar Sutic, Liron Ain-kedem, and Gilad Michael. Visionisp: Repurposing the image signal processor for computer vision applications. In 2019 IEEE International Conference on Image Processing (ICIP), pages 4624–4628. IEEE, 2019.
* [50] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Cycleisp: Real image restoration via improved data synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2696–2705, 2020.
* [51] Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5978–5986, 2019.
* [52] Meina Zhang, Yingying Fang, Guoxi Ni, and Tieyong Zeng. Pixel screening based intermediate correction for blind deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5892–5900, 2022.
* [53] Yi Zhang, Dasong Li, Ka Lung Law, Xiaogang Wang, Hongwei Qin, and Hongsheng Li. Idr: Self-supervised image denoising via iterative data refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2098–2107, 2022.
* [54] Yonghua Zhang, Jiawan Zhang, and Xiaojie Guo. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM international conference on multimedia, pages 1632–1640, 2019.
## Part I Visualization Results
In Fig. 7, the detection results are drawn on the corresponding ISP output.
The proposed method shows a significant improvement in accuracy when the
simplest gamma tone mapping is used as the ISP. In addition, thanks to the
effective noise-accounted RAW augmentation, the accuracy of the proposed method
is the best despite using a simple ISP with limited visibility, compared against
a rich black-box ISP.
[Figure 7 columns (ISP / augmentation order / noise-accounted): parameterized gamma / before ISP (ours) / w/ (ours); black-box ISP / after ISP / w/o; simplest gamma / before ISP (ours) / w/ (ours); simplest gamma / before ISP (ours) / w/o prior; simplest gamma / before ISP (ours) / w/o; simplest gamma / after ISP / w/o.]
Figure 7: The visualization of the detection results. To make a fair
comparison, we set an adequate confidence threshold per model. Specifically,
we adjusted the threshold to achieve an <EMAIL_ADDRESS> value of 80%. The darker
bounding boxes represent the ground truth while the brighter ones represent
the prediction results.
# Missing Data Infill with Automunge
Nicholas J. Teague
Automunge
Altamonte Springs, FL 32714
<EMAIL_ADDRESS>
https://www.automunge.com
###### Abstract
Missing data is a fundamental obstacle in the practice of data science. This
paper surveys a few conventions for imputation as available in the Automunge
open source python library platform for tabular data preprocessing, including
“ML infill” in which auto ML models are trained for target features from
partitioned extracts of a training set. A series of validation experiments
were performed to benchmark imputation scenarios towards downstream model
performance, in which it was found for the given benchmark sets that in many
cases ML infill outperformed for both numeric and categoric target features,
and was otherwise at minimum within noise distributions of the other
imputation scenarios. Evidence also suggested supplementing ML infill with the
addition of support columns with boolean integer markers signaling presence of
infill was usually beneficial to downstream model performance. We consider
these results sufficient to recommend defaulting to ML infill for tabular
learning, and further recommend supplementing imputations with support columns
signaling presence of infill, each as can be prepared with push-button
operation in the Automunge library. Our contributions include an auto ML
derived missing data imputation library for tabular learning in the python
ecosystem, fully integrated into a preprocessing platform with an extensive
library of feature transformations, with a novel production friendly
implementation that bases imputation models on a designated train set for
consistent basis towards additional data.
_Keywords_ Tabular $\cdot$ Missing Data $\cdot$ Software
## 1 Introduction
Missing data is a fundamental obstacle for data science practitioners. Missing
data refers to feature sets in which a portion of entries do not have samples
recorded, which may interfere with model training and/or inference. In some
cases, the missing entries may be randomly distributed within the samples of a
feature set, a scenario known as missing at random. In other cases, certain
segments of a feature set’s distribution may have a higher prevalence of
missing data than other portions, a scenario known as missing not at random.
In some cases, the presence of missing data may even correlate with label set
properties, resulting in a kind of data leakage for a supervised training
operation.
In a tabular data set (that is a data set aggregated as a 2D matrix of feature
set columns and collected sample rows), missing data may be represented by a
few conventions. A common one is for missing entries to be received as a NaN
value, which is a special numeric data type representing “not a number”. Some
dataframe libraries may have other special data types for this purpose. In
another configuration, missing data may be represented by some particular
value (like a string configuration) associated with a feature set.
When a tabular data set with missing values present is intended to serve as a
target for supervised training, machine learning (ML) libraries may require as
a prerequisite some kind of imputation to ensure the set has all valid
entries, which for most libraries means all numeric entries (although there
are some libraries that accept designated categoric feature sets in their
string representations). Conventions for imputation may follow a variety of
options to target numeric or categoric feature sets [Table 1], many of which
apply a uniform infill value, which may either be arbitrary or derived as a
function of other entries in the feature set.
Table 1: Imputation Conventions
Imputation Value | Numeric | Categoric
---|---|---
mean | ✓ |
median | ✓ |
mode | ✓ | ✓
adjacent cell | ✓ | ✓
arbitrary (e.g. 0 or 1) | ✓ | ✓
distinct activation | | ✓
ML infill | ✓ | ✓
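For reference, each of the simpler conventions in Table 1 is a one-liner in Pandas; the column names below are illustrative:
import pandas as pd

df = pd.DataFrame({"num": [1.0, None, 3.0, 4.0], "cat": ["a", None, "b", "a"]})

df["num_mean"] = df["num"].fillna(df["num"].mean())        # mean imputation
df["num_median"] = df["num"].fillna(df["num"].median())    # median imputation
df["cat_mode"] = df["cat"].fillna(df["cat"].mode()[0])     # mode imputation
df["num_adjacent"] = df["num"].ffill()                     # adjacent cell (forward fill)
df["num_zero"] = df["num"].fillna(0)                       # arbitrary value
df["cat_distinct"] = df["cat"].fillna("missing")           # distinct activation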
Other, more sophisticated conventions for infill may derive an imputation
value as a function of corresponding samples of the other features. For
example, one of many learning algorithms (like random forest, gradient
boosting, neural networks, etc.) may be trained for a target feature where the
populated entries in that feature are treated as labels and surrounding
features sub-aggregated as features for the imputation model, and where the
model may serve as either a classification or regression operation based on
properties of the target feature.
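The sketch below illustrates this convention in generic Scikit-Learn terms for a numeric target, training a random forest on the rows where the target feature is populated and inferring the missing entries; it is a simplified illustration (the surrounding features are assumed to be numerically encoded with no missing entries) rather than the Automunge implementation:
from sklearn.ensemble import RandomForestRegressor

def learned_imputation(df, target):
    # rows with a populated target serve as labels, the other features as training data
    mask = df[target].isna()
    features = df.drop(columns=[target])
    model = RandomForestRegressor()
    model.fit(features[~mask], df.loc[~mask, target])
    out = df.copy()
    if mask.any():
        out.loc[mask, target] = model.predict(features[mask])
    return out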
This paper documents a series of validation experiments that were
performed to compare downstream model performance under a few of
these different infill conventions. We crafted a contrived set of scenarios
representing paradigms like missing at random or missing not at random as
injected in either a numeric or categoric target feature selected for
influence toward downstream model performance. Along the way we will offer a
brief introduction to the Automunge library for tabular data preprocessing,
particularly those aspects of the library associated with missing data infill.
The results of these experiments summarized below may serve as a validation of
defaulting to ML infill for tabular learning even when faced with different
types of missing data, and further defaulting to supplementing imputations
with support columns signaling presence of infill.
Our contributions include an auto ML derived missing data imputation library
for tabular learning in the python ecosystem, fully integrated into a
preprocessing platform with an extensive library of feature transformations,
extending the ML imputation capabilities of R libraries like MissForest [1] to
a more production friendly implementation that bases imputation models on a
designated train set for consistent basis towards additional data.
## 2 Automunge
Automunge [2], put simply, is a python library platform for preparing tabular
data for machine learning, built on top of the Pandas dataframe library [3]
and open sourced under a GNU GPL v3.0 license. The interface is channeled
through two master functions: automunge(.) for the initial preparation of
training data, and postmunge(.) for subsequent efficient preparation of
additional “test” data on the train set basis. In addition to returning
transformed data, the automunge(.) function also populates and returns a
compact dictionary recording all of the steps and parameters of
transformations and imputations, which dictionary may then serve as a key for
consistently preparing additional data in the postmunge(.) function on the
train set basis.
Under automation the automunge(.) function performs an evaluation of feature
set properties to derive appropriate simple feature engineering
transformations that may serve to normalize numeric sets and binarize (or
hash) categoric sets. A user may also apply custom transformations, or even
custom sets of transformations, assigned to distinct columns. Such
transformations may be sourced from an extensive internal library, or even may
be custom defined. The resulting transformed data log the applied stages of
derivations by way of suffix appenders on the returned column headers.
Missing data imputation is handled automatically in the library, where each
transformation applied includes a default imputation convention to serve as a
precursor to imputation model training, one that may also be overridden for
use of other conventions by assignment.
Included in the library of infill options is an auto ML solution we refer to
as ML infill, in which a distinct model is trained for each target feature and
saved in the returned dictionary for a consistent imputation basis of
subsequent data in the postmunge(.) function. The model architecture defaults
to random forest [4] by Scikit-Learn [5], and other auto ML library options
are also supported.
The ML infill implementation works by first collecting a ‘NArw’ support column
for each received feature set containing boolean integer markers (1’s and 0’s)
with activations corresponding to entries with missing or improperly formatted
data. The types of data to be considered improperly formatted are tailored to
the root transformation category to be applied to the column, where for
example for a numeric transform non-numeric entries may be subject to infill,
or for a categoric transform invalid entries may just be special data types
like NaN or None. Other transforms may have other configurations, for example
a power law transform may only accept positive numeric entries, or an integer
transform may only accept integer entries. This NArw support column can then
be used to perform a target feature specific partitioning of the training data
for use to train a ML infill model [Fig 1]. The partitioning segregates rows
between those corresponding to missing data in the target feature versus those
rows with valid entries, with the target feature valid entries to serve as
labels for a supervised training and the other corresponding features’ samples
to serve as training data. Feature samples corresponding to the target feature
missing data are grouped for an inference operation. For cases where a
transformation set has prepared a target input feature in multiple
configurations, those derivations other than the target feature are omitted
from the partitions to avoid data leakage. Features with correlated missing
entries are also excluded from basis. A similar partitioning is performed for
test data sets for ML infill imputation, although in this case only the rows
corresponding to entries of missing data in the target feature are utilized
for inference. As a further variation available for any of the imputation
methods, the NArw support columns may themselves be appended to the returned
data sets as a signal to training of entries that were subject to infill.
Figure 1: ML Infill partitioning
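A minimal sketch of this partitioning step for a single-column target follows, using a boolean NArw-style mask; the names are illustrative and the library additionally handles multi-column targets and features excluded from basis:
def partition_for_ml_infill(df, target, narw):
    # df: numerically encoded training dataframe; target: header of the target feature
    # narw: boolean Series marking entries of the target that require infill
    others = df.drop(columns=[target])
    X_train = others[~narw]           # feature samples for rows with valid target entries
    y_train = df.loc[~narw, target]   # valid target entries serve as labels
    X_infer = others[narw]            # feature samples for rows needing imputation
    return X_train, y_train, X_infer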
There is a categorization associated with each preprocessing transformation
category to determine the type of ML infill training operation, for example a
target feature set derived from a transform that returns a numeric form may be
a target for a regression operation or a target feature set derived from a
transform that returns an ordinal encoding may be a target for a
classification operation. In some cases a target feature may be composed of a
set of more than one column, like in the case of a set returned from a one-hot
encoding. For cases where a learner library does not accept some particular
form of encoding as valid labels there is a conversion of the target feature
set for training and an inverse conversion after any inference, for example it
may be necessary to convert a binarized target feature set to one-hot encoding
or ordinal encoding for use as labels in different auto ML frameworks.
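As a small illustration of such a conversion, one-hot label sets can be collapsed to an ordinal form before training and expanded back after inference; this is a generic sketch rather than the library's internal routine:
import numpy as np

def onehot_to_ordinal(onehot):
    # collapse a one-hot label matrix of shape (n_samples, n_classes) to integer labels
    return np.argmax(onehot, axis=1)

def ordinal_to_onehot(ordinal, n_classes):
    # expand integer labels back to a one-hot matrix after inference
    out = np.zeros((len(ordinal), n_classes), dtype=int)
    out[np.arange(len(ordinal)), ordinal] = 1
    return out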
The sequential training of feature imputation models may be iterated through
repeated rounds of imputations. For instance in the first round of model
trainings and imputations the models’ performance may be degraded by high
prevalence of missing data, but after that first round of imputations a second
iteration of model trainings may have slight improvement of performance due to
the presence of ML infill imputations, and similarly ML infill may benefit
from any additional iterations of model trainings and imputations. In each
iteration the imputations are applied column by column, in order from the
features with the highest prevalence of missing data to the least. The library
defaults to a single round of imputations, with the option to set a max and
halt once reaching a stability tolerance. Stochasticity is injected into the
derived imputations to make them non-deterministic.
The final trained models for each target feature, as derived from properties
of a designated train set passed to the automunge(.) function, are
collectively saved and returned to the user in a dictionary that may serve as
a key for consistent imputation basis to additional data in the postmunge(.)
function, with such dictionary also serving as a key for any applied
preprocessing transformations.
## 3 Preprocessing
The utility of the library extends well beyond missing data infill. Automunge
is intended as a platform for all of the tabular learning steps following
receipt of tidy data [6] (meaning one column per feature and one row per
sample) and immediately preceding the application of machine learning. We
found that by integrating the imputations directly into a preprocessing
library, benefits included that imputations can be applied to returned multi-
column categoric representations like one-hot encodings or binarized
encodings, can account for potential data leakage between redundantly encoded
feature sets, and can accept raw data as input as may include string encoded
and date-time entries with only the minimal requirement of data received in a
tidy form.
Under automation, Automunge normalizes numeric sets by z-score normalization
and binarizes categoric sets (where binarize refers to a multi-column boolean
integer representation where each categoric unique entry is represented by a
distinct set of zero, one, or more simultaneous activations). We have a
separate kind of binarization for categoric sets with two unique entries,
which returns a single boolean integer encoded column (available as a single
column by not having a distinct encoding set for missing data which is instead
grouped with the most common entry). High cardinality categoric sets with
unique entry count above a configurable heuristic threshold are instead
applied with a hashing trick transform [7, 8], and for highest cardinality
approaching all unique entries features are given a parsed hashing [9] which
accesses distinct words found within entries. Further automated encodings are
available for date-time sets in which entries are segregated by time scale and
subject to separate sets of sine and cosine transforms at periodicity of time
scale and additionally supplemented by binned activations for business hours,
weekdays, and holidays. Designated label sets are treated a little
differently, where numeric sets are z-score normalized and categoric sets are
ordinal encoded (a single column of integer activations). All of the defaults
under automation are custom configurable.
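The train set basis convention can be pictured with something as simple as z-score normalization: parameters are fit once on the designated train set and reused verbatim on any additional data, the same pattern the returned postprocess_dict enables. The snippet below is an illustration, not library internals:
def fit_zscore(train_col):
    # record normalization parameters from the designated train set
    std = train_col.std()
    return {"mean": train_col.mean(), "std": std if std > 0 else 1.0}

def apply_zscore(col, params):
    # apply the recorded train set basis to train, validation, or test data alike
    return (col - params["mean"]) / params["std"]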
A user need not defer to automation. There is a built in extensive library of
feature transformations to choose from. Numeric features may be assigned to
any range of transformations, normalizations, and bin aggregations [10].
Sequential numeric features may be supplemented by proxies for derivatives
[10]. Categoric features may be subject to encodings like ordinal, one-hot,
binarization, hashing, or even parsed categoric encoding [11] with an
increased information retention in comparison to one-hot encoding by a
vectorization as a function of grammatical structure shared between entries.
Categoric sets may be collectively aggregated into a single common
binarization. Categoric labels may have label smoothing applied [12], or
fitted smoothing where null values are fit to class distributions. Data
augmentation transformations [10] may be applied which make use of noise
injection, including several variants for both numeric and categoric features.
Sets of transformations to be directed at a target feature can be assembled
which include generations and branches of derivations by making use of our
“family tree primitives” [13], as can be used to redundantly encode a feature
set in multiple configurations of varying information content. Such
transformation sets may be accessed from those predefined in an internal
library for simple assignment or alternatively may be custom configured. Even
the transformation functions themselves may be custom defined with only
minimal requirements of simple data structures. Through application, statistics
of the features are recorded to facilitate detection of distribution drift.
Inversion is available to recover the original form of data found preceding
transformations, as may be useful to recover the original form of labels after
inference.
Or of course if the data is received already numerically encoded the library
can simply be applied as a tool for missing data infill.
## 4 Code Demonstration
Jupyter notebook install and imports are as follows:
!pip install Automunge
from Automunge import *
am = AutoMunge()
The automunge(.) function accepts as input a Pandas dataframe or tabular Numpy
array of training data and optionally also corresponding test data. If any of
the sets include a label column that header should be designated, similarly
with any index header or list of headers to exclude from the ML infill basis.
For Numpy, headers are the index integer and labels should be positioned as
the final column.
import pandas as pd
df_train = pd.read_csv(’train.csv’)
df_test = pd.read_csv(’test.csv’)
labels_column = ’<labels_column_header>’
trainID_column = ’<ID_column_header>’
These data sets can be passed to automunge(.) to automatically encode and
impute. The function returns 10 sets (9 dataframes and 1 dictionary) which in
some cases may be empty based on parameter settings; we suggest the following
optional naming convention. The final set, the “postprocess_dict”, is the key
for consistently preparing additional data in postmunge(.). Note that if a
validation set is desired it can be partitioned from df_train with valpercent
and prepared on the train set basis. Shuffling is on by default for train data
and off by default for test data, the associated parameter is shown for
reference. Here we demonstrate with the assigncat parameter assigning the root
category of a transformation set to some target column which will override the
default transform under automation. We also demonstrate with the assigninfill
parameter assigning an alternate infill convention to a column. The ML infill
and NArw column aggregation are on by default, their associated activation
parameters are shown for reference. Note that if the data is already
numerically encoded and user just desires infill, they can pass parameter
`powertransform = ’infill’`.
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
df_test = df_test,
labels_column = labels_column,
trainID_column = trainID_column,
valpercent = 0.2,
shuffletrain = True,
assigncat = {’or23’ : [’<parsed_categoric_target_column>’] },
assigninfill = {’modeinfill’ : [’<infill_target_column>’] },
MLinfill = True,
NArw_marker = True)
A list of columns returned from some particular input feature can be accessed
with `postprocess_dict[’column_map’][’<input_feature_header>’]`. A report
classifying the returned column types (such as continuous, boolean, ordinal,
onehot, binary, etc.) and their groupings can be accessed with
`postprocess_dict[’columntype_report’]`.
If the returned train set is to be used for training a model that may go into
production, the postprocess_dict should be saved externally, such as with the
pickle library.
We can then prepare additional data on the train set basis with postmunge(.).
test, test_ID, test_labels, \
postreports_dict = \
am.postmunge(postprocess_dict,
df_test)
## 5 Related Work
The R ecosystem has long enjoyed access to missing data imputation libraries
that apply learned models to predict infill based on other features in a set,
such as MissForest [1] and mice [14], where MissForest differs from mice as a
deterministic imputation built on top of random forest and mice applies
chained equations with pooled linear models and sampling from a conditional
distribution. One of the limitations of these libraries is that the
algorithms must be run through both training and inference for each separate
data set, as may be required if test data is not available at time of
training, which practice may not be amenable to production environments.
Automunge on the other hand bases imputations on a designated train set,
returning from application a collected dictionary of feature set specific
models that can then be applied as a key for consistently preparing additional
data on the train set basis.
Automunge’s ML infill also differs from these R libraries by providing
multiple auto ML options for imputation models. We are continuing to build out
a range that currently includes Catboost [15], AutoGluon [16], and FLAML [17]
libraries. Our default configuration is built on top of Scikit-Learn [5]
random forest [4] models and may be individually tuned to each target feature
with grid or random search by passing fit parameters to ML infill as lists or
distributions.
There are of course several other variants of machine learning derived
imputations that have been demonstrated elsewhere. Imputations from generative
adversarial networks [18] may improve performance compared to ML infill (at a
cost of complexity). Gaussian copula imputation [19] has a benefit of being
able to estimate uncertainty of imputations. There are even imputation
solutions built around causal graphical models [20]. Towards the other end of
the complexity spectrum, k-Nearest Neighbor imputation [21] for continuous data is
available in common frameworks like Scikit-Learn.
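For instance, a minimal Scikit-Learn k-NN imputation over numeric features looks like the following:
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0]])
# each missing entry is filled from its nearest rows by distance over observed features
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)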
Being built on top of the Pandas library, there is an inherent limitation that
Automunge operations are capped at in-memory scale data sets. Other dataframe
libraries like Spark [22] have the ability to operate on distributed datasets.
We believe this is not a major limitation because the in-memory scale is only
associated with datasets passed to automunge(.) to serve as the basis for
transformations and imputations. Once the basis has been established,
transformations to any scale of data can be applied by passing partitions to
the postmunge(.) function. We expect there may be potential to parallelize
such an operation with a library like Dask [23] or Ray [24], such an
implementation is currently intended as a future direction of research.
Another limitation associated with Pandas dataframes is that operations take
place on the CPU. There are emerging dataframe platforms like Rapids [25]
which are capable of GPU accelerated operations, which may be of particular
benefit when accounting for the elimination of a handoff step between
main and GPU memory during training. Although the Pandas aspects of
Automunge are CPU bound, the range of auto ML libraries incorporated are in
some cases capable of GPU training for ML infill.
There will always be a simplicity advantage to deep learning libraries like
Tensorflow [26] or PyTorch [27] which can integrate preprocessing as a layer
directly into a model’s architecture, eliminating the need to consider
preprocessing in inference. We believe the single added inference step of
passing data to the postmunge(.) function is an acceptable tradeoff because by
keeping the preprocessing operations separate it facilitates a ML framework
agnostic tabular preprocessing platform.
## 6 Experiments
Some experiments were performed to evaluate efficacy of a few different
imputation methods in different scenarios of missing data. To amplify the
impact of imputations, each of two data sets were pared down to a reduced set
of the top 15 features based on an Automunge feature importance evaluation
[11] by shuffle permutation [28]. (This step had the side benefit of reducing
the training durations of experiments.) The top ranked importance categoric
and numeric features were selected to separately serve as targets for
injections of missing data, with such injections simulating scenarios of both
missing at random and missing not at random.
To simulate cases of missing not at random, and also again to amplify the
impact of imputation, the target features were evaluated to determine the most
influential segments of the features’ distributions [29], which for the target
categoric features was one of the activations and for the target numeric
features turned out to be the far right tail for both benchmark data sets.
Further variations were aggregated by varying either the ratio of the full
feature or the ratio of the distribution segment injected with missing data,
ranging from no injections to full replacement.
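A sketch of the two injection modes follows, with illustrative helper names: missing at random draws entries uniformly, while missing not at random draws only from the influential distribution segment (here the right tail of a numeric feature):
import numpy as np

def inject_mar(col, ratio, rng):
    # missing at random: blank a uniformly drawn fraction of all entries
    out = col.copy()
    idx = rng.choice(col.index, size=int(ratio * len(col)), replace=False)
    out.loc[idx] = np.nan
    return out

def inject_mnar_right_tail(col, ratio, tail_quantile, rng):
    # missing not at random: blank a fraction of the right-tail entries only
    out = col.copy()
    tail = col.index[col > col.quantile(tail_quantile)]
    idx = rng.choice(tail, size=int(ratio * len(tail)), replace=False)
    out.loc[idx] = np.nan
    return out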
Finally, for each of these scenarios, variations were assembled associated
with the type of infill applied by Automunge, including scenarios for defaults
(mean imputation for numeric or distinct activations for categoric),
imputation with mode, adjacent cell, and ML infill. The ML infill scenario was
applied making use of the CatBoost library to take advantage of GPU
acceleration.
Having prepared the data in each of these scenarios with an automunge(.) call,
the final step was to train a downstream model to evaluate impact, again here
with the CatBoost library. The performance metric applied was root mean
squared error for the regression applications. Each scenario was repeated 100
or more times with the metrics averaged to de-noise the results.
Finally, the ML infill scenarios were repeated again with the addition of the
NArw support columns to supplement the target features.
## 7 Results
The results of the various scenarios are presented [Fig 2, 3, 4, 5]. Here the
y axes show the performance metrics and the x axes show the ratio of entries with
missing data injections, which were given as {0, 0.1, 0.33, 0.67, 1.0}, where
in the 0.0 case no missing data was injected and with 1.0 the entire feature
or feature segment was injected. Because the 0.0 cases had equivalent entries
between infill types, their spread across the four infill scenarios are a good
approximation for the noise inherent in the learning algorithm. An additional
source of noise for the other ratios was from the stochasticity of injections,
with a distinct set for each trial. Consistent with common sense, as the
injection ratio was ramped up the trend across infill scenarios was a
degradation of the performance metric.
We did find that with increased repetitions incorporated the spread of the
averaged performance metrics tightened, leading us to repeat the
experiments at increased scale for improved statistical significance.
For the missing at random injections [Fig 2, 3], ML infill was at or near top
performance for the numeric feature and closer to falling within noise
tolerance for the categoric feature. In many of the setups, mode imputation
and adjacent cell trended as reduced performance in comparison to ML infill or
the default imputations (mean for numeric sets and distinct activation set for
categoric).
Figure 2: Missing at Random - Numeric Target Feature
Figure 3: Missing at Random - Categoric Target Feature
For not at random injections to the right tail of numeric sets [Fig 4], it
appears that ML infill had a pronounced benefit to the Ames Housing data set
[30], especially as the injection ratio increased, and more of an intermediate
performance to the Allstate Claims data set [31]. We speculate that ML infill
had some degree of variability across these demonstrations due to correlations
(or lack thereof) between the target feature and the other features, without
which ML infill may struggle to establish a basis for inference. In the final
scenario of not at random injections to categoric [Fig 5] we believe default
performed well because it served as a direct replacement for the single
missing activation.
Figure 4: Not at Random - Numeric Target Feature
Figure 5: Not at Random - Categoric Target Feature
An additional comparable series of injections were conducted with ML infill
and the added difference of appending the NArw support columns corresponding
to the target columns for injections. Again these NArw support columns are the
boolean integer markers for presence of infill in the corresponding entries
which support the partitioning of sets for ML infill. The expectation was that
by using these markers to signal to the training operation which of the
entries were subjected to infill, there would be some benefit to downstream
model performance. For many of the scenarios the visible impact was that
supplementing with the NArw support column improved the ML infill performance,
demonstrated here for missing at random [Fig 6, 7] and missing not at random
[Fig 8, 9] with the other imputation scenarios shown again for context.
## 8 Discussion
One of the primary goals of this experiment was to validate the efficacy of ML
infill as evidenced by improvements to downstream model performance. For the
Ames Housing benchmark data set, there was a notable demonstration of ML
infill benefiting model performance in both scenarios of missing at random and
also missing not at random for the numeric target feature. We speculate that
this advantage for the numeric target columns under missing not at random may
partly be attributed to the fact that the downstream model
was also a regression application, so that the other features selected for
label correlation may by proxy have correlations with the target numeric
feature. The corollary is that the more mundane performance of ML infill
toward a few of the Allstate data set scenarios may be a result of these
having less correspondence with the surrounding features. The fact that even
in these cases the ML infill still fell within noise distribution of the other
imputation scenarios we believe presents a reasonable argument for defaulting
to ML infill for tabular learning.
Note that another argument for defaulting to ML infill as opposed to static
imputations is that the imputation model may serve as a hedge against
imperfections in subsequent data streams, particularly if one of the features
experiences downtime in a streaming application for instance.
The other key finding of the experiment was that including the NArw support
column in the returned data set as a supplement to ML infill was usually of
benefit to downstream model performance. This finding was consistent with our
intuition, which was that increased information retention about infill points
should help model performance. Note there is some small tradeoff, as the added
training set dimensionality may increase training time. Another benefit to
including NArw support columns may be for interpretability in inspection of
imputations. We recommend including the NArw support columns for model
training based on these findings, with the one caveat that care should be
taken to avoid inclusion in the data leakage scenario where there is some kind
of correlation between presence of missing data and label set properties that
won’t be present in production.
Figure 6: NArw comparison - Missing at Random - Numeric Target Feature
Figure 7: NArw comparison - Missing at Random - Categoric Target Feature
Figure 8: NArw comparison - Not at Random - Numeric Target Feature
Figure 9: NArw comparison - Not at Random - Categoric Target Feature
## 9 Conclusion
Automunge offers a push-button solution to preparing tabular data for ML, with
automated data cleaning operations like normalizations, binarizations, and
auto ML derived missing data imputation aka ML infill. Transformations and
imputations are fit to properties of a designated train set, and with
application of automunge(.) a compact dictionary is returned recording
transformation parameters and trained imputation models, which dictionary may
then serve as a key for consistently preparing additional data on the train
set basis with postmunge(.).
We hope that these experiments may serve as a kind of validation of defaulting
to ML infill with supplemented NArw support columns in tabular learning for
users of the Automunge library, as even if in our experiments the material
benefits towards downstream model performance were not demonstrated for all
target feature scenarios, in other cases there did not appear to be any
material penalty. Note that ML infill can be activated for push-button
operation by the automunge(.) parameter MLinfill=True and the NArw support
columns included by parameter NArw_marker=True. Based on these findings these
two parameter settings are now cast as defaults for the Automunge platform.
### Acknowledgments
A thank you is owed to the facilitators behind Stack Overflow, Python, Numpy,
Scipy Stats, PyPI, GitHub, Colaboratory, Anaconda, VSCode, and Jupyter.
Special thanks to Scikit-Learn and Pandas.
## References
[1] Daniel J. Stekhoven, Peter Bühlmann. MissForest - nonparametric missing
value imputation for mixed-type data (2011) arXiv:1105.0828
[2] N. Teague. Automunge, GitHub repository
https://github.com/Automunge/AutoMunge
[3] W. McKinney. Data structures for statistical computing in python.
Proceedings of the 9th Python in Science Conference, pages 51–56, 2010.
[4] L. Breiman. Random Forests. Machine Learning, 45(1), 2001.
[5] Pedregosa et al., Scikit-learn: Machine Learning in Python, JMLR 12, pp.
2825-2830, 2011.
[6] H. Wickham. Tidy data. Journal of Statistical Software, 59(10), 2014.
[7] John Moody. Fast Learning in Multi-Resolution Hierarchies. NIPS
Proceedings, 1989
[8] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, Josh
Attenberg. Feature Hashing for Large Scale Multitask Learning. ICML
Proceedings, 2009
[9] N. Teague. Hashed Categoric Encodings with Automunge (2020)
https://medium.com/automunge/hashed-categoric-encodings-with-
automunge-92c0c4b7668c
[10] N. Teague. Numeric Encoding Options with Automunge (2020)
https://medium.com/automunge/a-numbers-game-b68ac261c40d
[11] N. Teague. Parsed Categoric Encodings with Automunge (2020)
https://medium.com/automunge/string-theory-acbd208eb8ca
[12] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew
Wojna. Rethinking the Inception Architecture for Computer Vision. IEEE
conference on computer vision and pattern recognition, 2016
[13] N. Teague. Specification of Derivations with Automunge (2020)
https://medium.com/automunge/specification-of-derivations-with-
automunge-6174ca227184
[14] Stef van Buuren, Karin Groothuis-Oudshoorn. mice: Multivariate Imputation
by Chained Equations in R (2011)
https://www.jstatsoft.org/article/view/v045i03
[15] Anna Veronika Dorogush, Vasily Ershov, Andrey Gulin. CatBoost: gradient
boosting with categorical features support (2018) arXiv:1810.11363
[16] Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro
Larroy, Mu Li, and Alexander Smola. AutoGluon-Tabular: Robust and Accurate
AutoML for Structured Data (2020) arxiv:2003.06505
[17] Chi Wang, Qingyun Wu, Markus Weimer, Erkang Zhu. FLAML: A Fast and
Lightweight AutoML Library (2019) arXiv:1911.04706
[18] Jinsung Yoon, James Jordon, Mihaela van der Schaar. GAIN: Missing Data
Imputation using Generative Adversarial Nets (2018 International Conference of
Machine Learning), arXiv:1806.02920
[19] Yuxuan Zhao, Madeleine Udell. Missing Value Imputation for Mixed Data via
Gaussian Copula (KDD 2020), arXiv:1910.12845
[20] K. Mohan, J. Pearl. Graphical Models for Processing Missing Data (2019),
arXiv:1801.03583
[21] Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor
Hastie, Robert Tibshirani, David Botstein and Russ B. Altman. Missing value
estimation methods for DNA microarrays, BIOINFORMATICS Vol. 17 no. 6, 2001
Pages 520-525.
[22] Matei Zaharia, Reynold S. Xin, Patrick Wendell, Tathagata Das, Michael
Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman,
Michael J. Franklin, Ali Ghodsi, Joseph Gonzalez, Scott Shenker, Ion Stoica.
Apache Spark: a unified engine for big data processing. Communications of the
ACM, 59(11), 2016
[23] Dask Development Team. Dask: Library for dynamic task scheduling (2016)
https://dask.org
[24] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard
Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I.
Jordan, Ion Stoica. Ray: A Distributed Framework for Emerging AI Applications.
13th USENIX Symposium on Operating Systems Design and Implementation (2018),
arXiv:1712.05889
[25] Rapids Development Team. Open GPU Data Science | RAPIDS https://rapids.ai
[26] Abadi, Martín, Barham P, Chen J, Chen Z, Davis A, Dean J, et al.
Tensorflow: A system for large-scale machine learning. 12th USENIX Symposium
on Operating Systems Design and Implementation (2016) p. 265–83.
[27] Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and
Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and
Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and Kopf, Andreas
and Yang, Edward and DeVito, Zachary and Raison, Martin and Tejani, Alykhan
and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and
Chintala, Soumith. PyTorch: An Imperative Style, High-Performance Deep
Learning Library. NeurIPS Proceedings, 2019
[28] Terrence Parr, Kerem Turgutlu, Christopher Csiszar, and Jeremy Howard.
Beware default random forest importances. Explained.ai (blog), 2018.
https://explained.ai/rf-importance/.
[29] N. Teague. Automunge Influence (2020) https://medium.com/p/382d44786e43
[30] Dean De Cock. Ames, Iowa: Alternative to the Boston Housing Data as an
End of Semester Regression Project, Journal of Statistics Education, Volume
19, Number 3 (2011)
[31] Kaggle: Allstate Claims Severity, https://www.kaggle.com/c/allstate-
claims-severity
## Appendix A Assigning Infill
Each transformation category has a default infill convention that serves as a
precursor to ML infill. When a user wishes to override the defaults,
assignments can be passed in the assigninfill parameter. ML infill is applied
to columns not otherwise assigned, a default that can be deactivated with the
MLinfill parameter. When MLinfill is deactivated, columns not explicitly
assigned receive infill per the default initialization associated with their
transformation category. Here we demonstrate deactivating the ML infill
default and assigning zero infill to column1 and ML infill to column2.
Although not shown here, if the data includes a label set or other features
appropriate for exclusion from the ML infill basis, such as an index column,
they should be designated with labels_column or trainID_column.
assigninfill = {'zeroinfill' : ['column1'],
                'MLinfill' : ['column2']}
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
             MLinfill = False,
             assigninfill = assigninfill)
Note that column headers can be assigned in assigninfill using the received
column headers, to apply consistent infill to all sets derived from an input
column, or alternatively using the returned column headers with transformation
suffix appenders, to assign infill to distinct returned columns; the latter
take precedence.
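As a minimal illustration of the returned-header form, the '_nmbr' suffix
shown here is a hypothetical suffix appender; actual suffixes depend on the
transforms applied.
#received header targets all sets derived from column1,
#returned header targets one specific derived column
assigninfill = {'zeroinfill' : ['column1'],
                'MLinfill' : ['column2_nmbr']}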
## Appendix B ML Infill Parameters
The default ML infill architecture is a Scikit-Learn random forest with
default parameters. Alternate auto ML options are currently available as
CatBoost, FLAML, and AutoGluon. Parameters can be passed to the models with
ML_cmnd.
First we'll demonstrate applying ML infill with the CatBoost library. Note
that we can either defer to the library's default parameters or pass
parameters to the model initializations or fit operations. Here we also
demonstrate assigning a particular GPU device number.
ML_cmnd = {'autoML_type' : 'catboost'}
#GPU device assignment takes place in model initialization
ML_cmnd.update({'MLinfill_cmnd' :
                {'catboost_classifier_model' :
                 {'task_type' : 'GPU', 'devices' : 0},
                 'catboost_regressor_model' :
                 {'task_type' : 'GPU', 'devices' : 0}}})
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
MLinfill = True,
ML_cmnd = ML_cmnd)
The FLAML library is also available. Here we demonstrate setting a training
time budget (in seconds) for each imputation model.
ML_cmnd = {'autoML_type' : 'flaml'}
ML_cmnd.update({'MLinfill_cmnd' :
                {'flaml_classifier_fit' : {'time_budget' : 60},
                 'flaml_regressor_fit' : {'time_budget' : 60}}})
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
MLinfill = True,
ML_cmnd = ML_cmnd)
As another demonstration, here is an example of applying the AutoGluon library
for ML infill with the best_quality option, which causes AutoGluon to train
extra models for its aggregated ensembles. (Note this will likely result in
large disk space usage, especially when applied to every column, so we
recommend reserving it for final production runs, if at all.)
ML_cmnd = {'autoML_type' : 'autogluon'}
ML_cmnd.update({'MLinfill_cmnd' :
                {'AutoGluon' :
                 {'presets' : 'best_quality'}}})
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
MLinfill = True,
ML_cmnd = ML_cmnd)
To be complete, here we'll demonstrate passing parameters to the Scikit-Learn
random forest models. Note that for random forest there are built-in methods
to perform grid search or random search hyperparameter tuning when parameters
are passed as lists or distributions instead of static values. Here we'll
demonstrate tuning the n_estimators parameter (which otherwise would default
to 100).
ML_cmnd = {'autoML_type' : 'randomforest'}
#by passing random forest parameters as a list
#each imputation model will perform a grid search
ML_cmnd.update({'MLinfill_cmnd' :
                {'RandomForestClassifier' :
                 {'n_estimators' : [100, 222, 444]},
                 'RandomForestRegressor' :
                 {'n_estimators' : [100, 222, 444]}}})
train, train_ID, labels, \
val, val_ID, val_labels, \
test, test_ID, test_labels, \
postprocess_dict = \
am.automunge(df_train,
MLinfill = True,
ML_cmnd = ML_cmnd)
## Appendix C Leakage Detection
To bidirectionally exclude particular features from each other's imputation
model bases (as may be desired in expectation of data leakage), a user can
populate ML_cmnd['leakage_sets'], which accepts either a list of column
headers or a list of lists of column headers; within each such list, the
entries are excluded from each other's imputation model basis.
To unidirectionally exclude particular features from another feature's
imputation model basis, a user can populate ML_cmnd['leakage_dict'], which
accepts a dictionary with target feature keys and, as values, sets of features
to exclude from the target feature's basis.
To exclude a feature from the ML infill and PCA basis of all other features, a
list of entries can be passed to ML_cmnd['full_exclude']. Passthrough features
from the 'excl' transform are automatically excluded.
The example features shown can be populated as input column headers, to
exclude all derivations returned from an input feature, or as specific
returned column headers with suffix appenders included.
ML_cmnd.update({'leakage_sets' :
                [['feature1', 'feature2'], ['feature2', 'feature3']],
                'leakage_dict' :
                {'feature1' : {'feature3', 'feature4'}},
                'full_exclude' :
                ['feature5', 'feature6']})
Leakage detection for a target feature is performed automatically in cases of
correlated missing data between features, as a function of the NArw
aggregation sets associated with the target feature and a surrounding feature.
$\frac{\left(\left(NArw_{1}+NArw_{2}\right)==2\right).sum\left(\right)}{NArw_{1}.sum\left(\right)}>tolerance$
The tolerance value can be edited from the shown default or passed as 1 to
deactivate.
ML_cmnd.update({'leakage_tolerance' : 0.85})
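As a rough illustration of the ratio above, here is a standalone pandas
sketch (not the library's internal routine), assuming narw1 and narw2 are
boolean NArw aggregations marking missing entries for the target and a
surrounding feature:
import pandas as pd

#illustrative NArw aggregations (True marks a missing entry)
narw1 = pd.Series([True, True, False, True, False])
narw2 = pd.Series([True, True, False, False, False])

tolerance = 0.85
#share of the target feature's missing entries also missing in the surrounding feature
ratio = (narw1 & narw2).sum() / narw1.sum()
#when the ratio exceeds tolerance, the surrounding feature is excluded
#from the target feature's imputation model basis
exclude = ratio > tolerance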
## Appendix D Halting Criteria
In cases where the max number of ML infill iterations is set higher than 1
with the infilliterate automunge(.) parameter, early stopping is available
based on a comparison of the imputations of a current iteration to those of
the preceding one, with a halt once both tolerances are reached: one
associated with numeric features in aggregate and one with categoric features
in aggregate. Early stopping evaluation can be activated by passing
ML_cmnd['halt_iterate']=True. The tolerances can be updated from the shown
defaults as:
ML_cmnd.update({'halt_iterate' : True,
                'categoric_tol' : 0.05,
                'numeric_tol' : 0.01})
In further detail, the numeric halting criterion compares, for each numeric
feature, the ratio of max(abs(delta)) between the current and prior imputation
iterations to the mean(abs(entries)) of the current iteration; these ratios
are then weighted between features by the quantity of imputations associated
with each feature and compared to the numeric tolerance value. The categoric
halting criterion compares the ratio of the number of unequal imputations
between iterations to the total number of imputations across categoric
features against the categoric tolerance value. Early stopping is applied when
the tolerances are met for both numeric and categoric features.
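For illustration, here is a simplified sketch of these two criteria (not the
library's internal implementation), assuming dictionaries mapping each feature
to its array of imputed values in the current and prior iterations:
import numpy as np

def numeric_halt(current, prior, numeric_tol=0.01):
    #per-feature ratio of max(abs(delta)) to mean(abs(entries)),
    #weighted by the number of imputations per feature
    ratios, weights = [], []
    for feature in current:
        cur, pri = np.asarray(current[feature]), np.asarray(prior[feature])
        ratios.append(np.max(np.abs(cur - pri)) / np.mean(np.abs(cur)))
        weights.append(len(cur))
    return np.average(ratios, weights=weights) < numeric_tol

def categoric_halt(current, prior, categoric_tol=0.05):
    #share of categoric imputations that changed between iterations
    changed = sum(np.sum(np.asarray(current[f]) != np.asarray(prior[f])) for f in current)
    total = sum(len(current[f]) for f in current)
    return changed / total < categoric_tol

#early stopping applies when both numeric_halt and categoric_halt return True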
Our numeric criteria has some similarity to an approach in Scikit-Learn’s
IterativeImputer, although we apply a different denominator in the formula (we
believe the IterativeImputer formula may lose validity in scenario of presence
of distribution outliers in their denominator set), and our categoric stopping
criteria has some similarity to a MissForest criteria, although we take a
different approach of evaluating for metric within a tolerance instead of the
sign of rate of change.
## Appendix E Stochastic Injections
There is a default option to inject stochastic noise into derived
imputations, which can be deactivated for numeric or categoric features as:
ML_cmnd.update({'stochastic_impute_numeric' : False,
                'stochastic_impute_categoric' : False})
Numeric noise injections sample from either a default normal distribution or
optionally a laplace distribution. The default noise profile is mu=0,
sigma=0.03, and flip_prob=0.06 (where flip_prob is the ratio of a feature
set's imputations receiving injections), each of which can be custom
configured. Please note that this noise scale applies to a min/max scaled
representation of the imputations, which after injection are converted back to
their prior form, where the min/max scaling is based on properties of the
feature in the training set. Note also that noise outliers are capped and the
noise distribution is scaled to ensure the range of resulting imputation
values remains consistent with the feature's properties in the training set.
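A simplified numpy sketch of these numeric injection mechanics follows (the
function and scaling steps shown are illustrative approximations, not the
library's internal routine):
import numpy as np

def inject_numeric_noise(imputations, train_min, train_max,
                         mu=0.0, sigma=0.03, flip_prob=0.06, rng=None):
    rng = rng or np.random.default_rng()
    #min/max scale based on the feature's properties in the training set
    scaled = (imputations - train_min) / (train_max - train_min)
    #sample which imputations receive noise and sample the noise itself
    mask = rng.random(len(scaled)) < flip_prob
    noise = rng.normal(mu, sigma, len(scaled))
    #cap outliers so results stay within the scaled range of the training data
    scaled = np.clip(scaled + mask * noise, 0.0, 1.0)
    #convert back to the prior form
    return scaled * (train_max - train_min) + train_min

result = inject_numeric_noise(np.array([3.1, 4.7, 2.2]), train_min=0.0, train_max=10.0)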
Categoric noise injections draw uniformly at random from the set of unique
activation sets in the training data (which may include one or more columns of
activations for categoric representations); for the ratio of a feature set's
imputations determined by flip_prob (defaulting to 0.03 for categoric), each
targeted imputation's activation set is replaced with the randomly drawn
activation set. We make use of numpy.random for distribution sampling in each
case.
To update these settings, values can be replaced from the shown defaults:
ML_cmnd.update({'stochastic_impute_numeric_mu' : 0,
                'stochastic_impute_numeric_sigma' : 0.03,
                'stochastic_impute_numeric_flip_prob' : 0.06,
                'stochastic_impute_numeric_noisedistribution' : 'normal'})
## Appendix F Broader Impacts
The following discussions are somewhat speculative in nature. At the time of
this writing Automunge has yet to establish what we would consider a
substantial user base, and there may be a bias towards optimism in how we have
been proceeding, which we believe is the only bias at play.
From an ethical standpoint, we believe the potential benefits of our platform
far outweigh any negative aspects. We have sought to optimize the postmunge(.)
function for speed, used as a proxy for computational efficiency and carbon
intensity. As a rule of thumb, processing times for equivalent data in the
postmunge(.) function, such as could be applied to streams of data in
inference, have shown to operate on the order of twice the speed of initial
preparations in the automunge(.) function, although for some specific
transforms like those implementing string parsing that advantage may be
considerably higher. While the overhead may prevent achieving the speed of
directly applying manually specified transformations to a dataframe, the
postmunge(.) speed gets close to manual transformations with increasing data
size.
We believe too that the impact to the machine learning community of a
formalized open source standard to tabular data preprocessing could have
material benefits to ensuring reproducibility of results. There for some time
has been a gap between the wide range of open source frameworks for training
neural networks in comparison to options for prerequisites of data pipelines.
We found some validation for this point in the tone of the audience Q&A at a
certain 2019 NeurIPS keynote presentation by the founder of a commercial data
wrangling package. In fact, a potential negative impact of this research lies
in the risk to the commercial models of such vendors, as Automunge's GNU GPL
3.0 license coupled with patent pending status on the various inventions
behind our library (including parsed categoric encodings, family tree
primitives, ML infill, etc.) will preclude commercial platforms from offering
comparable functionality. We expect that the benefits to the machine
learning community in aggregate will far outweigh the potential commercial
impacts to this narrow segment. Further, benefits of automating machine
learning derived infill to missing data may result in a material impact to the
mainstream data science workflow. That old rule of thumb often thrown around
about how 80% of a machine learning project is cleaning the data may need to
be revised to a lower figure.
Regarding consequence of system failure, it should be noted that Automunge is
an industry agnostic toolset, with intention to establish users across a wide
array of tabular data domains, potentially ranging from the trivial to mission
critical. We recognize that with this exposure comes additional scrutiny and
responsibility. Our development has been performed by a professional engineer,
and we have sought to approach validations, which have been an ongoing
process, with a commensurate degree of rigor.
Our development has followed an incremental and one might say evolutionary
approach to systems engineering, with frequent and sequential updates as we
iteratively added functionality and transforms to the library within defined
boundaries of the data science workflow. The intent has always been to
transition to a more measured pace at such time as we may establish a more
substantial user base.
## Appendix G Intellectual Property Disclaimer
Automunge is released under GNU General Public License v3.0. Full license
details available on GitHub. Contact available via automunge.com. Copyright
(C) 2021 - All Rights Reserved. Patent Pending, including applications
16552857 and 17021770.
# Invariance principle and McKean-Vlasov limit for randomized load balancing
in heavy traffic
Rami Atar, Viterbi Faculty of Electrical and Computer Engineering,
Technion – Israel Institute of Technology, and Gershon Wolansky,
Department of Mathematics, Technion – Israel Institute of Technology
(Date: February 28, 2024)
###### Abstract.
We consider a load balancing model where a Poisson stream of jobs arrives at a
system of many servers whose service time distribution possesses a finite
second moment. A small fraction of arrivals pass through the so-called power-
of-choice algorithm, which assigns a job to the shortest among $\ell$,
$\ell\geq 2$, randomly chosen queues, and the remaining jobs are assigned to
queues chosen uniformly at random. The system is analyzed at critical load in
an asymptotic regime where both the number of servers and the usual heavy
traffic parameter associated with individual queue lengths grow to infinity.
The first main result is a hydrodynamic limit, where the empirical measure of
the diffusively normalized queue lengths is shown to converge to a path in
measure space whose density is given by the unique solution of a parabolic PDE
with nonlocal coefficients.
Further, two forms of an invariance principle are proved, corresponding to two
different assumptions on the initial distribution, where individual normalized
queue lengths converge weakly to solutions of SDE. In one of these results,
the limit is given by a McKean-Vlasov SDE, and propagation of chaos holds. The
McKean-Vlasov limit is closely related to limit results for Brownian particles
on ${\mathbb{R}}_{+}$ interacting through their rank (with a specific
interaction). However, an entirely different set of tools is required, as the
collection of $n$ prelimit particles does not obey a Markovian evolution on
${\mathbb{R}}_{+}^{n}$.
The PDE has a stationary solution expressed explicitly, which in particular
provides a quantification of the balance achieved by the algorithm. For fixed
$\ell$, as the intensity of the load balanced stream varies between its
limits, this solution varies from exponential distribution to a Dirac
distribution, demonstrating that the results cover the whole range from
independence to full coordination.
###### Key words and phrases:
Randomized load balancing, power of choice, low sampling rate, diffusion
limit, hydrodynamic limit, McKean-Vlasov limit, propagation of chaos,
parabolic initial boundary value problem, viscous scalar conservation law,
Daley-Miyazawa semimartingale representation.
###### 2010 Mathematics Subject Classification:
68M20, 90B22, 60F17, 35K20
## 1\. Introduction
This paper is concerned with a load balancing model where a Poisson stream of
arrivals faces a system of $n$ servers working in parallel, with a general
common service time distribution having a finite second moment. Routing is
based on the join the shortest of $\ell$ queues (abbreviated JSQ($\ell$))
algorithm, which assigns a job to the shortest among $\ell$, $\ell\geq 2$,
queues chosen uniformly at random. Following a setting proposed in [6],
motivated by maintaining low communication overhead, the stream undergoes
thinning, where a small fraction of the arrivals are routed via JSQ($\ell$)
while the other arrivals are assigned to a queue chosen uniformly at random
among all queues.
The focus is on the asymptotics where the number of servers, $n$, becomes
large, while individual queues are kept at their diffusive regime. This is
achieved by taking $n$ to also serve as the heavy traffic parameter of
individual queue lengths, letting the per-server processing rate and arrival
intensity scale like $n$, and critically loading the servers (to first order).
In order for the load balanced stream to have a non-negligible effect, its
average intensity per queue should scale (at least) like $n^{1/2}$. Hence it
is assumed that the intensity of the overall arrival stream, the overall load
balanced stream, and the per-server arrival stream scale like $n^{2}$,
$n^{3/2}$ and $n$, respectively (see Fig. 1). Specifically, we take the
overall load balanced intensity asymptotic to $bn^{3/2}$, and the average per-
server arrival asymptotic to $\lambda n$, where $b>0$ and $\lambda>0$ are
constants. Moreover, the average load on each queue, defined as the rate of
arrival of work per server divided by a server’s processing rate, is
asymptotic to $1+\rho n^{-1/2}$, where $\rho\in{\mathbb{R}}$, and stability is
expressed by the condition $\rho<0$. The $i$-th queue length process at the
$n$-th system is denoted by $X^{n}_{i}(t)$, and its normalized version by
$\hat{X}^{n}_{i}(t)=n^{-1/2}X^{n}_{i}(t)$, $i\in[n]:=\\{1,\ldots,n\\}$. The
empirical distribution at time $t$ is denoted by
$\bar{\xi}^{n}_{t}=n^{-1}\sum_{i\in[n]}\delta_{\hat{X}^{n}_{i}(t)}$.
The parameters $b$ and $\ell$ determine the volume of communication between
the servers and the dispatcher. They also have dramatic influence on the
degree to which the system is load balanced. One of the goals of this work is
to quantify the level of achieved load balancing, which could be defined e.g.
by the empirical standard deviation, or other functionals of
$\bar{\xi}^{n}_{t}$, as a function of these parameters. Assuming
$\bar{\xi}^{n}_{0}\to\xi_{0}$ in probability, $\xi_{0}$ a Borel probability
measure on ${\mathbb{R}}_{+}$, our first main result addresses this by
providing a hydrodynamic limit, which shows that $\bar{\xi}^{n}_{t}$ converges
in probability, as a measure-valued process, to a deterministic measure-valued
path given by $\xi_{t}(dx)=u(x,t)dx$, $t>0$. Here, the density $u$ is the
unique classical solution of the initial-boundary value problem
(1.1) $\begin{cases}u_{t}=-[(b\ell
v^{\ell-1}-c_{1})u]_{x}+au_{xx}&(x,t)\in(0,\infty)^{2}\\\ \displaystyle
v(x,t)=\int_{x}^{\infty}u(y,t)dy&(x,t)\in(0,\infty)^{2}\\\
(c_{1}-b\ell)u(0,t)+au_{x}(0,t)=0&t\in(0,\infty)\\\
u(\cdot,t)dx\to\xi_{0}(dx)&\text{weakly as $t\downarrow 0$.}\end{cases}$
Above, $c_{1}=-\lambda\rho+b$, and $a>0$ is a variance parameter. The equation
has a non-local drift coefficient that captures the algorithm’s instantaneous
effect on the macroscopic distribution, and a Robin boundary condition
expressing zero flux through the origin.
When $\rho<0$, (1.1) has a unique stationary solution given by
(1.2) $\begin{split}v_{\rm stat}(x)&=w(x)^{-1/(\ell-1)},\qquad u_{\rm
stat}(x)=\frac{c_{1}}{a}(1-\alpha)e^{\frac{c_{1}}{a}(\ell-1)x}w(x)^{-\ell/(\ell-1)},\\\
w(x)&=(1-\alpha)e^{\frac{c_{1}}{a}(\ell-1)x}+\alpha,\qquad\alpha=\frac{b}{c_{1}}=\Big{(}1-\lambda\frac{\rho}{b}\Big{)}^{-1}\in(0,1),\end{split}$
for $x\in{\mathbb{R}}_{+}$. It can be seen that for fixed
$(\ell,\rho,\lambda,a)$, as $b\to 0$ and, respectively, $b\to\infty$, the
measure $u_{\rm stat}(x)dx$ converges to the exponential distribution and to
$\delta_{0}(dx)$, showing that the setting covers the entire range from queues
operating independently to a state space collapse where the load balancing
algorithm achieves complete coordination.
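To make the first of these limits concrete, here is a short verification of
the $b\to 0$ case directly from (1.2): since $\rho<0$, one has
$\alpha=(1-\lambda\rho/b)^{-1}\to 0$ and $c_{1}=-\lambda\rho+b\to\lambda|\rho|$,
hence $w(x)\to e^{\frac{c_{1}}{a}(\ell-1)x}$ and
$u_{\rm stat}(x)\to\frac{c_{1}}{a}e^{\frac{c_{1}}{a}(\ell-1)x}e^{-\frac{c_{1}}{a}\ell x}=\frac{\lambda|\rho|}{a}e^{-\frac{\lambda|\rho|}{a}x},$
namely the exponential density one expects when the queues operate
independently in heavy traffic.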
Our second goal is to provide an invariance principle. Under the additional
assumption that, for a fixed $k$,
$(\hat{X}^{n}_{i}(0))_{i\in[k]}\Rightarrow(X_{i}(0))_{i\in[k]}$, it is shown
that $(\hat{X}^{n}_{i})_{i\in[k]}\Rightarrow(X_{i})_{i\in[k]}$ where the
latter is the solution to the SDE in ${\mathbb{R}}_{+}^{k}$
(1.3)
$X_{i}(t)=X_{i}(0)+b_{1}t+b_{0}\int_{0}^{t}v(X_{i}(s),s)^{\ell-1}ds+\sigma
W_{i}(t)+L_{i}(t),\qquad i\in[k],$
in which $L_{i}$ are boundary terms having continuous nondecreasing sample
paths, satisfying $\int_{[0,\infty)}X_{i}(t)dL_{i}(t)=0$. Alternatively, if
the initial queue lengths are assumed exchangeable, then the limit in
distribution of the $k$-tuple is given by $k$ independent copies of
(1.4) $X(t)=X(0)+b_{1}t+b_{0}\int_{0}^{t}v(X(s),s)^{\ell-1}ds+\sigma
W(t)+L(t),$
with $\int_{[0,\infty)}X(t)dL(t)=0$. In both cases, the diffusion limits
depend on the service time distribution only through its first two moments.
The last result, combined with the characterization of the limiting empirical
distribution in terms of (1.1), shows that the pair $(X,\bar{F})$, describing
the limiting stochastic dynamics of a rescaled queue length and its law,
satisfies
(1.5) $X(t)=X(0)+\int_{0}^{t}\mathbf{b}(X(s),\bar{F}(\cdot,s))ds+\sigma
W(t)+L(t),\qquad\bar{F}(x,t)={\mathbb{P}}(X(t)>x),$
where $\mathbf{b}(x,\bar{F})=b_{1}+b_{0}\bar{F}(x)^{\ell-1}$. Viewed this way,
the result is closely related to the literature on interacting diffusion
models, specifically diffusions interacting through their ranks, and is an
instance of a McKean-Vlasov limit. For background on McKean-Vlasov limits and
propagation of chaos we refer to [41, 21]. To explain the relation, consider a
parametric regime in which the heavy traffic parameter is taken to its limit
first. In this case, each normalized queue length behaves as a Brownian
particle on ${\mathbb{R}}_{+}$, with an interaction among the particles caused
by the load balancing algorithm. The algorithm affects the dynamics by
selecting a particle with probability depending on its rank. In the original
model, the selected queue increases by one job. In the particle system this
can be imitated by imposing a positive rank-dependent drift. It therefore
comes as no surprise that the McKean-Vlasov SDE (1.5) is a special case of the
one that arises in the study of rank-dependent diffusions. Indeed, existence
and uniqueness of solutions to (1.5), as well as convergence results, follow
from work on interacting diffusions, specifically [29, Sec. 2.4]. Clearly, the
above explanation is only heuristic. In the regime under consideration, the
prelimit objects are queue lengths, which do not follow a Markovian evolution
when considered as an $n$-tuple with state space ${\mathbb{R}}_{+}^{n}$. As
the basic assumptions of the interacting diffusion setting do not hold, the
treatment requires a completely different set of tools.
### 1.1. Motivation and related work
A large body of work has been devoted to subcritically loaded systems, where
on average queues are short. Other works analyzed critically loaded systems
for which either state space collapse or near optimal performance were
achieved; see references below. However, as far as many-server limits are
concerned, non perfect balancing has not been quantified before and very few
papers have addressed asymptotic behaviors in which individual queue lengths
exhibit diffusive fluctuations (in a setting with a fixed number of servers,
[6] considered both aspects). These two aspects are of great practical
relevance.
Invariance principles are desirable as they express robustness of performance
to underlying distributions. In the current context, an additional important
motivation for an invariance principle stems from the fact that, to the best
of our knowledge, all earlier diffusion limit results on load balancing in
many-server settings cover only the case of exponential service times, a
rather restrictive assumption from a modeling viewpoint.
This paper is not the first to study a regime in which the number of servers
scales like $n$ and individual queues are critically loaded, with each queue
length scaling like $n^{1/2}$. This precise parameterization was considered
before in [9], which studied a multi-agent game rather than a load balancing
model.
The first paper to study an asymptotic regime that combines heavy traffic
scaling with many-server scaling was [26], where a single-queue pooled
$n$-server system was considered. The regime introduced there, known as the
Halfin-Whitt regime, is one where the probability of a job to meet an
available server upon arrival is bounded away from $0$ and $1$, and is, in
particular, distinct from the regime studied here.
The literature on load balancing in asymptotic regimes is vast; see [18] for a
recent survey. The asymptotic regime where individual queues undergo heavy
traffic scaling and JSQ($\ell$) is applied on a small fraction of the arrival
stream was introduced in the aforementioned [6], in a setting where the number
of servers is fixed. This could model two different scenarios, that are also
relevant for our model. One is where a single stream is split by the
dispatcher. Another is where each server has a dedicated stream due, for
example, to geographical or compatibility constraints, and an additional
stream is shared by all servers. Under an exponential service assumption, it
was proved in [6] that the collection of rescaled queue lengths converges to a
rank-based diffusion.
Perhaps the most well known load balancing algorithm is join-the-shortest-
queue (JSQ), which routes jobs to the shortest among all queues, known to be
delay optimal under exponential service and asymptotically delay optimal in
various limiting regimes; see, for example, [15]. Its variation JSQ($\ell$),
which is typically applied with $\ell$ much smaller than $n$, significantly
reduces the communication overhead while maintaining good performance. It was
introduced in [43], where under the exponential service time assumption and
subcritical load, the $n\to\infty$ limit of the length of a typical queue in
stationarity was shown to have a doubly exponential tail decay, a dramatic
improvement over routing uniformly at random in which case decay is
exponential. Related results appeared in [34]. The result was extended in [23]
where empirical measures were considered and propagation of chaos was
obtained. Functional central limit theorems (CLT) were obtained in [24] and
strong approximation results, including law of large numbers (LLN) and CLT, in
[32]. (These CLT results were not invariance principles and were not concerned
with the asymptotic regime studied in this paper). Results on the mixing rate
and the size of the maximal queue length were obtained in [31], and aspects of
stability and performance under server heterogeneity were investigated in
[36]. The paper [35] studied how $\ell(n)$ should grow so that JSQ$(\ell(n))$
with exponential servers would perform like JSQ, and thus achieve asymptotic
delay optimality.
A series of papers [11, 12, 13] extended [43] to several families of general
service time distributions, based on an approach that first establishes
propagation of chaos and then uses it to compute queue length distribution in
equilibrium. Another line of research treating JSQ($\ell$) with general
service times is [2, 3] and [1]. In [2, 3], the dynamical behavior was studied
under general load and the hydrodynamic limit was shown to be given by an
infinite system of coupled PDE. A numerical method was developed to solve
these PDE. In [1], an infinite system of PDE was constructed and shown to
constitute the invariant state of the aforementioned system of PDE under a
subcriticality condition. Our setting, where individual queues undergo heavy
traffic scaling, as well as our results, are quite different from both these
lines of work.
The diffusion limit in heavy traffic of JSQ in a many server setting was
established in [19], where, assuming exponential servers, a rescaled empirical
measure of queue lengths was shown to converge to a process expressed in terms
of a 2d diffusion. Convergence of the steady state at the same scale was
proved in [14], and properties of the diffusion process were investigated in
[7, 8]. Under the regime considered in this line of work, the number of queues
that are of length $0$, $1$ and $2$ is of order $n^{1/2}$, $n$ and,
respectively, $n^{1/2}$, and only a negligible fraction exceeds length $2$.
Thus this regime captures quite a different behavior from what is described in
this paper. The papers [25] and [46] considered JSQ in other diffusive
regimes, namely the nondegenerate slowdown regime and, respectively, the
super-Halfin-Whitt regime.
Numerous load balancing algorithms besides JSQ and JSQ($\ell$) have been
proposed. Ones that emphasize sparse communication, where the messaging rate
often goes far below that of JSQ($\ell$), include [42, 45, 20, 4]. The
practical importance of sparse messaging is widely acknowledged; see e.g.
[33]. Further asymptotic results on load balancing with nonexponential service
times include join-the-idle-queue [20], pull-based load distribution [39, 40],
zero waiting algorithms [30], and join-the-shortest-estimated-queue [5].
McKean-Vlasov limits of diffusions interacting through their ranks were
addressed in [38]. Observing that in these models the dependence of each
particle’s coefficients on the empirical law is discontinuous, a situation not
covered by the classical treatment such as [21], this paper gave convergence
results assuming that the coefficients are merely measurable. The paper [29]
mentioned above addresses a setting of which rank-based interaction is a
special case, and attains convergence results in a stronger topology than weak
convergence.
### 1.2. Notation
Denote ${\mathbb{R}}_{+}=[0,\infty)$. Let
$\iota:{\mathbb{R}}_{+}\to{\mathbb{R}}_{+}$ denote the identity map. In
${\mathbb{R}}^{N}$, denote the Euclidean norm by $\|\cdot\|$. For
$({\mathbf{X}},d_{\mathbf{X}})$ a Polish space, let
$C({\mathbb{R}}_{+},{\mathbf{X}})$ and $D({\mathbb{R}}_{+},{\mathbf{X}})$
denote the space of continuous and, respectively, càdlàg paths, endowed with
the topology of uniform convergence on compacts and, respectively, the $J_{1}$
topology. Denote by ${\mathcal{M}}_{1}$ the space of probability measures on
${\mathbb{R}}_{+}$ equipped with the topology of weak convergence. Denote by
$C^{\uparrow}$ the set of members of $C({\mathbb{R}}_{+},{\mathbb{R}}_{+})$
that are nondecreasing and start at $0$. For $\xi\in
D({\mathbb{R}}_{+},{\mathbb{R}}^{N})$, an interval $I\subset{\mathbb{R}}_{+}$,
and $0\leq\delta\leq T$, denote
$\mathnormal{\Delta}\xi(t)=\xi(t)-\xi(t-)$ for $t>0$,
$\mathnormal{\Delta}\xi(0)=\xi(0)$,
${\rm osc}(\xi,I)=\sup\\{\|\xi(s)-\xi(t)\|:s,t\in I\\},$
$w_{T}(\xi,\delta)=\sup\\{\|\xi(t)-\xi(s)\|:s,t\in[0,T],|s-t|\leq\delta\\},$
$\|\xi\|^{*}_{T}=\sup\\{\|\xi(t)\|:t\in[0,T]\\},$
and by $|\xi|(t)$ the total variation of $\xi$ in $[0,t]$.
For $\mathfrak{D}\subset{\mathbb{R}}^{N}$, denote by $\mathfrak{D}^{o}$ and
$\bar{\mathfrak{D}}$ its interior and, respectively, closure. Denote by
$C(\mathfrak{D})$ the set of continuous functions
$f:\mathfrak{D}\to{\mathbb{R}}$ and by $C_{0}(\mathfrak{D})$ the set of $f\in
C(\mathfrak{D})$ whose support is a compact subset of ${\mathbb{R}}^{N}$. For
$k,l\in{\mathbb{N}}$ and an open set $\mathfrak{D}\subset{\mathbb{R}}$ (resp.,
$\mathfrak{D}\subset{\mathbb{R}}\times{\mathbb{R}}_{+}$), denote by
$C^{k}(\mathfrak{D})$ (resp., $C^{k,l}(\mathfrak{D})$) the set of functions
$f:\mathfrak{D}\to{\mathbb{R}}$ possessing continuous derivatives up to and
including $k$ (resp., $(k,l)$). For $\mathfrak{D}\subset{\mathbb{R}}$,
$C^{k}(\mathfrak{D})$ denotes the set of functions
$f:\mathfrak{D}\to{\mathbb{R}}$ whose restriction to $\mathfrak{D}^{o}$ lies
in $C^{k}(\mathfrak{D}^{o})$, and whose derivatives up to and including $k$
have continuous extensions to $\mathfrak{D}$. Define $C^{k,l}(\mathfrak{D})$
analogously. For $\mathfrak{D}\subset{\mathbb{R}}$ and
$\mathfrak{D}\subset{\mathbb{R}}\times{\mathbb{R}}_{+}$ denote
$C^{\infty}(\mathfrak{D})=\bigcap_{k\in{\mathbb{N}}}C^{k}(\mathfrak{D})$ and
$C^{\infty}(\mathfrak{D})=\bigcap_{k,l\in{\mathbb{N}}}C^{k,l}(\mathfrak{D})$,
resp. A subscript $0$ denotes compact support, e.g.
$C^{k}_{0}(\mathfrak{D})=C^{k}(\mathfrak{D})\cap C_{0}(\mathfrak{D})$. A
subscript $b$ denotes boundedness, e.g. $C^{k}_{b}(\mathfrak{D})$ are
functions in $C^{k}(\mathfrak{D})$ whose derivatives of order $0\leq l\leq k$
are bounded. For $f$ defined on (a subset of) ${\mathbb{R}}$, $f^{\prime}$
denotes derivative, and for $f$ defined on (a subset of)
${\mathbb{R}}\times{\mathbb{R}}_{+}$, $f_{x}$ and $f_{t}$ denote spatial and
temporal derivatives, resp.
For an interval $I\subset{\mathbb{R}}_{+}$ and a normed space $S$, denote by
$\mathbb{L}^{p}(I;S)$, $1\leq p\leq\infty$, the usual $\mathbb{L}^{p}$ space
defined in terms of the Lebesgue measure on $I$. Denote by
$\mathbb{L}^{p}_{\rm loc}({\mathbb{R}}_{+};S)$ the set of
$f:{\mathbb{R}}_{+}\to S$ such that $f\in\mathbb{L}^{p}([0,T];S)$ for all
finite $T$. $\mathbb{L}^{p}({\mathbb{R}}_{+};{\mathbb{R}})$ is abbreviated to
$\mathbb{L}^{p}({\mathbb{R}}_{+})$.
For $f,g:{\mathbb{R}}_{+}\to{\mathbb{R}}$ and a measure $m$ on
${\mathbb{R}}_{+}$, denote $\langle
f,m\rangle=\int_{{\mathbb{R}}_{+}}f(x)m(dx)$ and $\langle
f,g\rangle=\int_{{\mathbb{R}}_{+}}f(x)g(x)dx$.
If $V\in{\mathbb{R}}^{n}$ then $V_{i}$, $i\in[n]$ denote its components in the
standard basis, and vice versa: Given $V_{i}$, $i\in[n]$, $V$ denotes the
vector $(V_{1},V_{2},\ldots,V_{n})$. Both these conventions hold also for
random variables $V_{i}$ and processes $V_{i}(\cdot)$. Occasionally, with a
slight abuse of standard terminology, a sequence of random elements (random
variables or processes) is referred to as tight when their laws form a tight
sequence of probability measures. The term with high probability (w.h.p.)
means “holds, for each $n$, on $\mathnormal{\Omega}_{n}$, where
$\lim_{n}{\mathbb{P}}(\mathnormal{\Omega}_{n})=1$”. Convergence in
distribution is denoted by $\Rightarrow$. $c$ denotes a positive constant
whose value may change from one expression to another.
Throughout the paper, superscript $n$ attached to scalars, random variables or
processes, denotes dependence on the index $n$ rather than power.
### 1.3. Paper organization
The load balancing model and the main results are presented in §2.1 and §2.2,
respectively. A discussion of the results appears in §2.3. An outline of the
proof is given in §2.4. Several tools required for the proof are developed in
§3. The proof of the main results appears in §4. For a detailed description of
the content of §§3–4 see §2.4.
Figure 1. Orders of magnitude of the various streams.
## 2\. Load balancing in heavy traffic
### 2.1. The load balancing model
#### 2.1.1. Arrivals, queue lengths and basic relations
In the model there are $n$ servers and a queue in front of each. There is a
dedicated stream of arrivals into each queue and an additional stream of
arrivals, called the load balancing stream (LBS), that go through the
JSQ$(\ell,n)$ algorithm. These $n+1$ arrival streams are modeled as mutually
independent Poisson processes. Clearly, this could be recast as a single
Poisson stream, out of which $n+1$ thinned streams are created by means of
random selection.
In what follows, the queueing systems will be indexed by $n\in{\mathbb{N}}$,
the number of servers. The processes $X^{n}_{i}$, $E^{n}_{i}$, $D^{n}_{i}$ and
$T^{n}_{i}$ represent the $i$-th queue length process, dedicated arrival
process, departure process and cumulative busy time process, respectively. Denote by
$A^{n}_{0}$ the LBS arrival process, and by $A^{n}_{i}$ the process counting
LBS arrivals routed to server $i$. For each $i$, $E^{n}_{i}$ is a Poisson
process of parameter $\lambda^{n}$, and $A^{n}_{0}$ is Poisson of parameter
$\lambda^{n}_{0}$, all having right-continuous sample paths. We have
(2.1)
$X^{n}_{i}(t)=X^{n}_{i}(0-)+E^{n}_{i}(t)+A^{n}_{i}(t)-D^{n}_{i}(t),\qquad
i\in[n],\ t\in{\mathbb{R}}_{+}.$
Work conservation is assumed, hence
(2.2) $T^{n}_{i}(t)=\int_{0}^{t}1_{\\{X^{n}_{i}(s)>0\\}}ds.$
#### 2.1.2. The load balancing algorithm
Upon each LBS arrival, $\ell$ out of the $n$ queues, chosen uniformly at
random, are sampled. The arrival is routed to the queue that is shortest among
the $\ell$; if there are ties, the queue with the smaller index is preferred.
Given $x\in{\mathbb{R}}^{n}$, let
(2.3) ${\rm rank}(i;x)=\\#\\{j:x_{j}<x_{i}\\}+\\#\\{j\leq i:x_{j}=x_{i}\\}.$
Then the probability that a LBS arrival is routed to the queue whose rank,
defined by (2.3), is $r$ is given by
(2.4) $p_{n,r}=\frac{{n-r\choose\ell-1}}{{n\choose\ell}},\qquad r\in[n],$
with ${k\choose j}=0$ when $j>k$. Note that
(2.5) $p_{n,r}=0\text{ for }r\geq
n-\ell+2,\qquad\max_{r\in[n]}p_{n,r}=p_{n,1}=\frac{\ell}{n}.$
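For instance, the second identity in (2.5) follows from a direct computation,
$p_{n,1}=\frac{{n-1\choose\ell-1}}{{n\choose\ell}}=\frac{(n-1)!}{(\ell-1)!\,(n-\ell)!}\cdot\frac{\ell!\,(n-\ell)!}{n!}=\frac{\ell}{n},$
and the weights $p_{n,r}$ indeed sum to one since
$\sum_{r\in[n]}{n-r\choose\ell-1}={n\choose\ell}$ by the hockey-stick identity.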
Thus the randomization mechanism can equivalently be achieved by letting, for
each $n$, $\theta^{n}_{k}$, $k\in{\mathbb{N}}$ be IID random variables with
${\mathbb{P}}(\theta^{n}_{1}=r)=p_{n,r}$, $r\in[n]$, and routing the $k$-th
LBS arrival to the queue whose rank is $\theta^{n}_{k}$. That is,
(2.6)
$A^{n}_{i}(t)=\int_{[0,t]}1_{\\{{\mathcal{R}}^{n}_{i}(s-)=\theta^{n}_{A^{n}_{0}(s)}\\}}dA^{n}_{0}(s),\qquad{\mathcal{R}}^{n}_{i}(t)={\rm
rank}(i;X^{n}(t)),\qquad i\in[n],\ t\in{\mathbb{R}}_{+}.$
Of course, this randomization scheme is not used in practice as it requires
keeping track of the queue lengths of the entire system. However, it is
convenient for carrying out the analysis.
#### 2.1.3. The initial condition.
We allow quite a general initial condition, where residual times of jobs
already being processed at time $0$ may be dependent and have unspecified
distributions. Denote the (random) set of queues that at time $0$ contain no
jobs and, respectively, at least one job, by
${\mathcal{N}}^{n}=\\{i\in[n]:X^{n}_{i}(0)=0\\}$ and
${\mathcal{P}}^{n}=\\{i\in[n]:X^{n}_{i}(0)>0\\}$. For $i\in{\mathcal{P}}^{n}$,
let $Z^{n}_{i}(0)$ denote the initial residual time of the head-of-the-line
job in queue $i$. For $i\in{\mathcal{N}}^{n}$ we add fictitious jobs having
zero processing time. To this end, rather than specifying $X^{n}(0)$ as the
initial queue length, $X^{n}(0-)$ is specified; and for each
$i\in{\mathcal{N}}^{n}$, the queue length is set to $X^{n}_{i}(0-)=1$ and the
residual processing time to $Z^{n}_{i}(0)=0$. Obviously, this results in
$X^{n}_{i}(0)=0$. This convention allows us to greatly simplify notation when
we later construct counting processes for service and departure. The initial
condition is thus a tuple
${\mathcal{I}}^{n}=(\\{X^{n}_{i}(0-),Z^{n}_{i}(0),i\in[n]\\},{\mathcal{N}}^{n},{\mathcal{P}}^{n}),$
where $({\mathcal{N}}^{n},{\mathcal{P}}^{n})$ partitions $[n]$, and
$\begin{split}X^{n}_{i}(0-)&=1,\ Z^{n}_{i}(0)=0,\ i\in{\mathcal{N}}^{n},\\\
X^{n}_{i}(0-)&\geq 1,\ Z^{n}_{i}(0)>0,\ i\in{\mathcal{P}}^{n}.\end{split}$
#### 2.1.4. Service times
Let $\Phi_{\rm ser}$ be a Borel probability measure on $[0,\infty)$ with mean
1 and standard deviation $\sigma_{\rm ser}\in(0,\infty)$, such that $\Phi_{\rm
ser}(\\{0\\})=0$. Let $\Phi^{n}_{\rm ser}$ be defined as a scaled version of
$\Phi_{\rm ser}$ uniquely specified via $\Phi^{n}_{\rm ser}[0,x]=\Phi_{\rm
ser}[0,\mu^{n}x]$, $x\in{\mathbb{R}}_{+}$. Here, $\mu^{n}>0$ is the service
rate in the $n$-th system. For $k\geq 1$, let $Z^{n}_{i}(k)$ denote the
service time of the $k$-th job to be served by server $i$ after the head-of-
the-line job at time $0-$ there (for $i\in{\mathcal{N}}^{n}$ this means the
$k$-th job after the fictitious one). For every $i$,
${\mathcal{Z}}^{n}_{i}=(Z^{n}_{i}(k),k\geq 1)$ is an IID sequence with common
distribution $\Phi^{n}_{\rm ser}$.
Next, the potential service process $S^{n}_{i}$, evaluated at $t$, gives the
number of jobs completed by server $i$ by the time it has worked $t$ units of
time. It is given, with $\sum_{0}^{-1}=0$, by
$S^{n}_{i}(t)=\max\Big{\\{}k\in{\mathbb{Z}}_{+}:\sum_{j=0}^{k-1}Z^{n}_{i}(j)\leq
t\Big{\\}},\qquad t\geq 0.$
The departure processes are given by $D^{n}_{i}(t)=S^{n}_{i}(T^{n}_{i}(t))$.
This is the number of jobs completed by time $t$ by server $i$. Note that the
first departure counted by $D^{n}_{i}$ is the one initially processed if
$i\in{\mathcal{P}}^{n}$ and the fictitious job if $i\in{\mathcal{N}}^{n}$.
#### 2.1.5. Dependence structure.
For each $n$, the $2n+3$ stochastic elements
(2.7) $E^{n}_{i}$, $i\in[n]$, ${\mathcal{Z}}^{n}_{i}$, $i\in[n]$,
${\mathcal{I}}^{n}$, $A^{n}_{0}$, and $\\{\theta^{n}_{k}\\}$ are mutually
independent.
#### 2.1.6. Scaling and critical load condition.
The arrival and service rates are assumed to satisfy the following. There are
constants $\lambda>0$ and $\hat{\lambda}\in{\mathbb{R}}$ such that
(2.8) $\hat{\lambda}^{n}:=n^{-1/2}(\lambda^{n}-n\lambda)\to\hat{\lambda}\text{
as }n\to\infty,$
a constant $b>0$ such that
(2.9) $\hat{\lambda}^{n}_{0}:=n^{-3/2}\lambda^{n}_{0}\to b\text{ as
}n\to\infty,$
and constants $\mu>0$ and $\hat{\mu}\in{\mathbb{R}}$ such that
(2.10) $\hat{\mu}^{n}:=n^{-1/2}(\mu^{n}-n\mu)\to\hat{\mu}\text{ as
}n\to\infty.$
The critical load condition is assumed, namely
(2.11) $\lambda=\mu.$
Some further notation used throughout is
$\hat{b}^{n}_{1}=\hat{\lambda}^{n}-\hat{\mu}^{n},\qquad
b_{1}=\hat{\lambda}-\hat{\mu},\qquad c_{1}=-b_{1},\qquad
b_{0}=b\ell,\qquad\sigma^{2}=\lambda(1+\sigma_{\rm ser}^{2}),\qquad
a=\frac{\sigma^{2}}{2}.$
Note that the average load on a queue, defined as the rate of arrival of work
per server, $\lambda^{n}+n^{-1}\lambda^{n}_{0}$, divided by a server’s
processing rate, $\mu^{n}$, that is,
$\frac{n\lambda+n^{1/2}\hat{\lambda}+n^{1/2}b+o(n^{1/2})}{n\mu+n^{1/2}\hat{\mu}+o(n^{1/2})},$
is asymptotic to
$1+n^{-1/2}\rho,\qquad\rho:=\frac{\hat{\lambda}+b-\hat{\mu}}{\lambda}.$
Hence the load parameter $\rho$ is given by $\rho=(b_{1}+b)/\lambda$.
Let rescaled versions of the queue length processes and cumulative idle time
processes be defined by
(2.12)
$\hat{X}^{n}_{i}(t)=n^{-1/2}X^{n}_{i}(t),\qquad\hat{L}^{n}_{i}(t)=n^{-1/2}\mu^{n}(t-T^{n}_{i}(t)),$
and denote
$\bar{\xi}^{n}_{t}=n^{-1}\sum_{i\in[n]}\delta_{\hat{X}^{n}_{i}(t)}$. Further,
assume that $\bar{\xi}^{n}_{0-}\to\xi_{0}$ (equivalently
$\bar{\xi}^{n}_{0}\to\xi_{0}$) in probability, as $n\to\infty$, where
$\xi_{0}$ is a deterministic Borel probability measure on ${\mathbb{R}}_{+}$.
For the rescaled residual times $Z^{n}_{i}(0)$, assume that for every
$\varepsilon>0$,
(2.13)
$\lim_{n\to\infty}\max_{i\in[n]}{\mathbb{P}}(\tilde{Z}^{n}_{i}(0)>\varepsilon)=0\quad\text{
where}\quad\ \tilde{Z}^{n}_{i}(0):=\tilde{\mu}^{n}Z^{n}_{i}(0),\
\tilde{\mu}^{n}:=n^{-1/2}\mu^{n},$
and
(2.14) $\sup_{n}\max_{i\in[n]}{\mathbb{E}}[\tilde{Z}^{n}_{i}(0)^{2}]<\infty.$
Because $\tilde{\mu}^{n}$ scales like $n^{1/2}$, (2.13) and (2.14) impose the
condition that $Z^{n}_{i}(0)$ scale like $o(n^{-1/2})$. This is mild compared
to the condition on $Z^{n}_{i}(k)$, $k\geq 1$, which have been assumed to
scale like $n^{-1}$. Finally, assume that
(2.15) $\sup_{n}\max_{i\in[n]}{\mathbb{E}}[\hat{X}^{n}_{i}(0-)^{2}]<\infty.$
Note that, by Fatou’s lemma, this imposes a condition on $\xi_{0}$, namely
$\xi_{0}$ necessarily satisfies $\int x^{2}\xi_{0}(dx)<\infty$.
All assumptions made thus far are in force throughout the paper.
### 2.2. Main results
First we address well-posedness of the relevant PDE, starting with classical
solutions to (1.1).
###### Theorem 2.1.
Within the class of functions $u\in
C^{2,1}((0,\infty)\times{\mathbb{R}}_{+};{\mathbb{R}}_{+})$ satisfying, for
every $T<\infty$, $\sup_{t\in(0,T]}\int_{{\mathbb{R}}_{+}}xu(x,t)dx<\infty$,
there exists a unique solution to equation (1.1). Moreover, for each $t>0$,
$u(\cdot,t)$ is a probability density.
A function $u_{\rm stat}\in C^{2}({\mathbb{R}}_{+})$ is said to be a
stationary solution associated with (1.1), if it is a probability density
possessing a finite second moment, and, setting $\xi_{0}(dx)=u_{\rm
stat}(x)dx$, the solution to (1.1) is given by $u(x,t)=u_{\rm stat}(x)$ for
all $x,t$.
###### Proposition 2.2.
Assume $\rho<0$. Then there exists a unique stationary solution to (1.1). It
is given by (1.2).
Theorem 2.1 is a consequence of a result stated next, that gives uniqueness of
weak solutions to a class of viscous scalar conservation laws, and provides
one of the main tools used in this paper. This is a parabolic equation of the
type
(2.16)
$\begin{cases}v_{t}=\left(\mathfrak{f}(v)\right)_{x}+av_{xx},&(x,t)\in{\mathbb{R}}_{+}^{2},\\\
v(0,t)=1,&t>0,\\\ v(\cdot,0)=v_{0}.\end{cases}$
A key point is that this result does not require any regularity assumption in
the $x$ variable on the class of solutions. The definition of a weak solution
is as follows.
###### Definition 2.3.
A function $v\in\mathbb{L}_{\rm
loc}^{\infty}({\mathbb{R}}_{+};\mathbb{L}^{\infty}({\mathbb{R}}_{+}))\cap\mathbb{L}_{\rm
loc}^{\infty}({\mathbb{R}}_{+};\mathbb{L}^{1}({\mathbb{R}}_{+}))$ is a weak
solution of (2.16) if for any $t\in(0,\infty)$ and any $\phi\in
C^{\infty}_{0}({\mathbb{R}}_{+})$ satisfying $\phi(0)=0$,
(2.17) $\displaystyle\langle v(\cdot,t),\phi\rangle-\langle
v_{0},\phi\rangle=-\int_{0}^{t}\langle\mathfrak{f}(v(\cdot,s)),\phi^{\prime}\rangle
ds+a\int_{0}^{t}\langle v(\cdot,s),\phi^{\prime\prime}\rangle
ds+a\phi^{\prime}(0)t.$
The choice of $\mathfrak{f}$ that will be of interest is
$\mathfrak{f}(z)=c_{1}z-bz^{\ell}$. In particular, (1.1) is related to (2.16)
when the latter takes the special form
(2.18)
$\begin{cases}v_{t}=(c_{1}v-bv^{\ell})_{x}+av_{xx},&(x,t)\in{\mathbb{R}}_{+}^{2},\\\
v(0,t)=1,&t>0,\\\ v(\cdot,0)=\xi_{0}(\cdot,\infty).\end{cases}$
The existence and uniqueness of a classical solution of (2.16) is well known,
and there is a vast literature on weak solutions in the $W^{1,p}_{\rm loc}$
sense. However, we found no uniqueness result of a weak solution as given in
Definition 2.3 under the mere assumption $v\in\mathbb{L}_{\rm
loc}^{\infty}({\mathbb{R}}_{+};\mathbb{L}^{\infty}({\mathbb{R}}_{+}))\cap\mathbb{L}_{\rm
loc}^{\infty}({\mathbb{R}}_{+};\mathbb{L}^{1}({\mathbb{R}}_{+}))$.
###### Theorem 2.4.
Assume $\mathfrak{f}\in C^{\infty}({\mathbb{R}})$ and
$v_{0}\in\mathbb{L}^{1}({\mathbb{R}}_{+})\cap\mathbb{L}^{\infty}({\mathbb{R}}_{+})$.
Then there exists a unique weak solution to (2.16) (in particular, to (2.18))
in the sense of Definition 2.3. This solution is in
$C^{\infty}({\mathbb{R}}_{+}\times(0,\infty))$. If $v$ denotes the weak
solution to (2.18) then $u=-v_{x}$ is the classical solution of (1.1).
Next is a hydrodynamic limit result.
###### Theorem 2.5.
Let $\xi_{0}$ be extended to a trajectory
$\xi=\\{\xi_{t},t\in{\mathbb{R}}_{+}\\}$ in
${\mathcal{M}}_{1}({\mathbb{R}}_{+})$ by setting
$\xi_{t}(dx)=u(x,t)dx,\qquad t>0,$
where $u$ is the unique solution to (1.1). Then $\xi\in
C({\mathbb{R}}_{+},{\mathcal{M}}_{1}({\mathbb{R}}_{+}))$, and one has
$\bar{\xi}^{n}\to\xi$ in probability in
$D({\mathbb{R}}_{+},{\mathcal{M}}_{1}({\mathbb{R}}_{+}))$, as $n\to\infty$.
We next provide two versions of an invariance principle under two different
assumptions on the initial conditions.
###### Theorem 2.6.
Fix $k\in{\mathbb{N}}$ and assume that
$(\hat{X}^{n}_{i}(0))_{i\in[k]}\Rightarrow(X_{i}(0))_{i\in[k]}$. Let $v$ be
the weak solution to (2.18). Then
$(\hat{X}^{n}_{i},\hat{L}^{n}_{i})_{i\in[k]}\Rightarrow(X_{i},L_{i})_{i\in[k]}$
in $(D({\mathbb{R}}_{+},{\mathbb{R}}_{+})\times
C({\mathbb{R}}_{+},{\mathbb{R}}_{+}))^{k}$, where the latter tuple is the
unique in law solution to the system (1.3), driven by a $k$-dimensional
standard BM $(W_{i})_{i\in[k]}$ independent of $(X_{i}(0))_{i\in[k]}$.
###### Theorem 2.7.
Assume that, for each $n$, the ${\mathbb{R}}_{+}$-valued random variables
$X^{n}_{i}(0-)$, $i\in[n]$, are exchangeable (necessitating that the limit law
of each $X^{n}_{i}(0-)$ is $\xi_{0}$). Let $v$ be the weak solution to (2.18).
Then for $k\in{\mathbb{N}}$,
$(\hat{X}^{n}_{i},\hat{L}^{n}_{i})_{i\in[k]}\Rightarrow(X_{i},L_{i})_{i\in[k]}$
in $(D({\mathbb{R}}_{+},{\mathbb{R}}_{+})\times
C({\mathbb{R}}_{+},{\mathbb{R}}_{+}))^{k}$, where the latter tuple is given by
$k$ independent copies of the unique in law solution $(X,L)$ to (1.4) driven
by a standard BM $W$ independent of $X(0)$, and the latter is distributed
according to $\xi_{0}$.
In view of Theorem 2.5, the SDE in Theorem 2.7 is a McKean-Vlasov SDE, as it
can be seen to take the form (1.5). Note that we do not assume that the entire
initial data is exchangeable, nor that the tie breaking rule is symmetric
w.r.t. exchanging $i$’s. The result implies that these issues have negligible
effect upon taking the limit.
### 2.3. Discussion
The hydrodynamic limit result establishes that the dynamics of the model are
given at the macroscopic scale by the PDE (1.1). One can use this to express
basic performance criteria in terms of the solution $u$. In particular,
denoting the macroscopic mean by
$m_{\rm mac}(t):=\int_{0}^{\infty}xu(x,t)dx,$
a first order approximation to the mean queue length is given by $m_{\rm
mac}(t)n^{1/2}$. Accordingly, the mean delay is, to first order, given by
$\lambda^{-1}m_{\rm mac}(t)n^{-1/2}$. Similarly, letting
$\sigma_{\rm mac}^{2}(t):=\int_{0}^{\infty}(x-m_{\rm mac}(t))^{2}u(x,t)dx,$
the macroscopic standard deviation, $\sigma_{\rm mac}(t)$, may be taken as an
index of balance. As a second order parabolic equation in one spatial
variable, (1.1) can be solved numerically with standard tools. Some examples
of solutions are plotted in Fig. 2.
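For instance, a minimal explicit finite-difference sketch for (1.1) on a
truncated domain could take the following form (parameter values are
illustrative, roughly those of Fig. 2; this is not the scheme used to produce
the figures):
import numpy as np

ell, b, c1, a = 4, 0.2, 0.21, 1.0
L, nx = 40.0, 400
dx = L / nx
x = np.linspace(0.0, L, nx + 1)
dt = 0.2 * dx**2 / a                      #conservative explicit time step

u = np.where(x <= 10.0, 1.0, 0.0)          #unif[0,10] initial density
u /= u.sum() * dx                          #normalize to a probability density

def step(u):
    #tail function v(x,t) = int_x^infty u(y,t) dy via a right Riemann sum
    v = np.concatenate((np.cumsum(u[::-1][:-1] * dx)[::-1], [0.0]))
    flux = (b * ell * v**(ell - 1) - c1) * u
    un = u.copy()
    #interior update: u_t = -(flux)_x + a u_xx with centered differences
    un[1:-1] = u[1:-1] + dt * (
        -(flux[2:] - flux[:-2]) / (2 * dx)
        + a * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2)
    #Robin condition (c1 - b*ell) u(0,t) + a u_x(0,t) = 0 via a one-sided difference
    un[0] = un[1] / (1.0 + dx * (b * ell - c1) / a)
    un[-1] = 0.0                           #far-field truncation
    return np.maximum(un, 0.0)

for _ in range(5000):                      #evolve the density forward in time
    u = step(u)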
Whereas the macroscopic dynamics can be solved numerically, the macroscopic
equilibrium state has an explicit formula, namely (1.2). One would like to
show rigorously that all solutions $u$ converge to $u_{\rm stat}$ as
$t\to\infty$, and further, that the invariant distribution of the stochastic
model’s dynamics converges to $u_{\rm stat}$ as $n\to\infty$; these will be
addressed in the future. However, even without establishing these limit
results, $u_{\rm stat}$ constitutes a legitimate solution to the PDE. In
particular, a combination of Theorem 2.5 and Proposition 2.2 implies that
$\bar{\xi}^{n}\to\xi$ in probability holds provided that $\xi_{0}(dx)=u_{\rm
stat}(x)dx$. Here, $\xi_{t}=\xi_{0}$ for all $t$. Fig. 3 shows graphs of
$u_{\rm stat}$ for different values of $\ell$ and $b$, based on (1.2).
We can also use $u_{\rm stat}$ in place of $u(\cdot,t)$ in the above
macroscopic performance criteria, and define analogously $m_{\rm mac}$ and
$\sigma_{\rm mac}$ which are now time independent quantities. Fig. 4 shows
their dependence on $\ell$ and $b$.
Figure 2. Solution of (1.1) for initial condition $\rm unif[0,10]$ (left) and
$\rm unif[15,30]$ (right) at different time instances, with $\rho=-.01$,
$c_{1}=.21$, $b=.2$, $\ell=4$, $a=1$.
Figure 3. Stationary solution for different values of $\ell$, with $b=.2$
(left), and for different values of $b$, with $\ell=4$ (right). In all cases,
$\rho=-.01,a=1,\lambda=1$.
Figure 4. Stationary mean and standard deviation for $2\leq\ell\leq 9$, with
$b=2$, $a=1$ (left) and for $0.1\leq b\leq 2$, with $\ell=4$, $a=1$ (right).
The nature of the drift in equation (1.1) is akin to the dynamics of a mass
distribution subjected to a gravitational field [44], where the force acting
on an infinitesimal mass depends on the amount of mass that piles above it.
Here it is the mass accumulated below it, specifically, a nonlinear function
of $v(x,t)$, that determines the force.
### 2.4. Outline of the proof
The approach used to prove the hydrodynamic limit result is based on PDE
uniqueness. Thus the first main tool required is the uniqueness of weak
solutions to (2.16). The crux of the argument is as follows. If $v_{1},v_{2}$
are two weak solutions then one can show that $w=v_{1}-v_{2}$ satisfies the
equality
$\langle
w(\cdot,t),\psi(\cdot,t)\rangle=\int_{0}^{t}\Big{\langle}w(\cdot,s),\Big{[}\psi_{s}(\cdot,s)-\Big{[}\frac{\mathfrak{f}(v_{1})-\mathfrak{f}(v_{2})}{v_{1}-v_{2}}\Big{]}\psi_{x}+\psi_{xx}\Big{]}\Big{\rangle}ds$
for any $t>0$ and test function $\psi$ satisfying $\psi\in
C_{0}^{\infty}(\mathbb{R}_{+}\times[0,t])$, $\psi(0,s)=0$ for $s\in[0,t]$. One
can then find an $\mathbb{L}^{\infty}$ approximation of the solution to the
linear, backward equation
$\psi_{s}(\cdot,s)-\Big{[}\frac{\mathfrak{f}(v_{1})-\mathfrak{f}(v_{2})}{v_{1}-v_{2}}\Big{]}\psi_{x}+\psi_{xx}=0$
on $\mathbb{R}_{+}\times[0,t]$ satisfying $\psi(\cdot,t)=\phi(\cdot)$, given
any smooth $\phi$. As a result, $\langle w(\cdot,t),\phi\rangle=0$ for any $t$
and any such $\phi$, hence $w\equiv 0$.
Next, $\bar{\xi}^{n}$ is shown to form a tight sequence, and then the next
main step is to prove that limit points satisfy (2.16) in the weak sense.
Achieving this goal relies on two key elements. One is a semimartingale
decomposition for point processes, introduced in [17]. This decomposition is
particularly convenient in our setting, where the number of point processes
involved, given by the departure processes, grows to infinity. The other is an
estimate showing $C$-tightness of $\hat{X}^{n}_{i}$ uniformly in $i\in[n]$.
With this toolbox we can then show the following. Let $\tilde{\phi}$ be a test
function as in Definition 2.3 and $\phi$ its antiderivative. Then every limit
point $\xi$ of $\bar{\xi}^{n}$ satisfies
(2.19) $\displaystyle\langle\phi,\xi_{t}\rangle$
$\displaystyle=\langle\phi,\xi_{0}\rangle+\int_{0}^{t}\langle
b_{1}\phi^{\prime}+a\phi^{\prime\prime},\xi_{s}\rangle
ds+\frac{b_{0}}{\ell}\int_{0}^{t}\int_{{\mathbb{R}}_{+}}\phi^{\prime}(x)\mathfrak{S}(\xi_{s}[x,\infty),\xi_{s}(x,\infty))\xi_{s}(dx)ds,$
where
(2.20)
$\mathfrak{S}(a,b)=a^{\ell-1}+a^{\ell-2}b+\cdots+ab^{\ell-2}+b^{\ell-1},\qquad
a,b\in{\mathbb{R}}.$
To see the relation to equation (2.17) required by Definition 2.3, let
$v(x,t)=\xi_{t}(x,\infty)$. Then
$\langle\phi,\xi_{t}\rangle=\int_{{\mathbb{R}}_{+}}(v(x,t)-1)\tilde{\phi}(x)dx$.
Suppose that $\xi_{t}$ has no atoms for every $t>0$. Then, owing to the fact
that $\mathfrak{S}(a,a)=\ell a^{\ell-1}$, (2.19) reduces to (2.17) in the case
of interest $\mathfrak{f}(z)=c_{1}z-bz^{\ell}$. However, an a priori proof of
the atomless property is not required, as a calculation via integration by
parts shows that (2.19) and (2.17) are equivalent even in presence of atoms.
As a result, the existence of weak limit of $\bar{\xi}^{n}$ satisfying the PDE
follows, completing the proof of the hydrodynamic limit.
Equation (2.16) is well known to have $C^{\infty}$ solutions, and as a
consequence of the above result the relation of the hydrodynamic limit to the
PDE (1.1) for the density follows.
Finally, the two invariance principles follow by combining the $C$-tightness
of individual rescaled queue lengths with the hydrodynamic limit result, where
the atomless property, that by now has been proved, gives a control over the
interaction term.
The structure of §§3–4 is as follows. Preliminary steps are taken in §3,
starting from §3.1 where the uniqueness of weak solutions to the PDE (2.16) is
proved. The aforementioned semimartingale representations are provided in
§3.2. §3.3 gives uniform estimates on rescaled processes, including
$C$-tightness and second moment. Tightness of $\bar{\xi}^{n}$ is shown in
§3.4. With this set of tools, the proof is then carried out in §4, which
starts by writing down an equation for $\langle\phi,\bar{\xi}^{n}\rangle$. In
§4.1 it is shown that the linear term in this equation converges to the linear
term in (2.19), and in §4.2, that the interaction term converges to that in
(2.19). The reduction, alluded to above, of (2.19) to (2.17), is shown in
§4.3. Finally, all the main results are proved, based on the above, in §4.4.
## 3\. Preliminaries
### 3.1. PDE uniqueness
Here we prove the uniqueness part of Theorem 2.4, stated as Lemma 3.2 below.
In preparation for proving this result, we provide an extension of the class
of test functions allowed in Definition 2.3, which is relatively standard but
is given for completeness.
###### Lemma 3.1.
If $v$ is a weak solution of (2.16) (in the sense of Definition 2.3) then for
any $t\in(0,\infty)$ and any $\psi\in
C_{0}^{\infty}\left({\mathbb{R}}_{+}\times[0,t]\right)$ satisfying
$\psi(0,s)=0$, $0\leq s\leq t$, one has
$\displaystyle\langle v(\cdot,t),\psi(\cdot,t)\rangle-\langle
v_{0},\psi(\cdot,0)\rangle$ $\displaystyle=\int_{0}^{t}\langle
v(\cdot,s),\psi_{s}(\cdot,s)\rangle
ds-\int_{0}^{t}\langle\mathfrak{f}(v(\cdot,s)),\psi_{x}\rangle
ds+a\int_{0}^{t}\langle v(\cdot,s),\psi_{xx}\rangle ds$ (3.1)
$\displaystyle\qquad+a\int_{0}^{t}\psi_{x}(0,s)ds.$
Proof. Observe by (2.17) that for any $\phi$ satisfying Definition 2.3 and for
any $0\leq\tau_{1}<\tau_{2}\leq t$,
(3.2) $\langle v(\cdot,\tau_{2}),\phi\rangle=\langle
v(\cdot,\tau_{1}),\phi\rangle-\int_{\tau_{1}}^{\tau_{2}}\langle\mathfrak{f}(v(\cdot,s)),\phi^{\prime}\rangle
ds+a\int_{\tau_{1}}^{\tau_{2}}\left(\langle
v(\cdot,s),\phi^{\prime\prime}\rangle+\phi^{\prime}(0)\right)ds.$
Since $v$ and $\mathfrak{f}(v)$ are in $\mathbb{L}^{\infty}({\mathbb{R}}_{+})$
uniformly in $t$, it follows, in particular, that $t\mapsto\langle
v(\cdot,t),\phi\rangle$ is continuous and, moreover,
(3.3) $|\langle
v(\cdot,\tau_{2})-v(\cdot,\tau_{1}),\phi\rangle|\leq|\tau_{1}-\tau_{2}|K$
where $K=K(\phi,v)$.
Let now $\psi\in C_{0}^{\infty}({\mathbb{R}}_{+}\times[0,t])$ be a function
that satisfies the conditions of the lemma. Let $N\in\mathbb{N}$. Let
$t_{k}:=tk/N$ where $k=0,\ldots,N$. Define $\phi_{k}(x)=\psi(x,t_{k})$. Then,
from (3.2)
$\langle v(\cdot,t_{k+1}),\phi_{k}\rangle-\langle
v(\cdot,t_{k}),\phi_{k}\rangle=-\int_{t_{k}}^{t_{k+1}}\langle\mathfrak{f}(v(\cdot,s)),\phi^{\prime}_{k}\rangle
ds+a\int_{t_{k}}^{t_{k+1}}\left(\langle
v(\cdot,s),\phi^{\prime\prime}_{k}\rangle+\phi^{\prime}_{k}(0)\right)ds.$
Summing over $k=0,\ldots,N-1$ and using $\phi_{N}=\psi(\cdot,t)$,
$\phi_{0}=\psi(\cdot,0)$ we get
(3.4) $\langle v(\cdot,t),\psi(\cdot,t(1-1/N))\rangle-\langle
v_{0},\psi(\cdot,0)\rangle-\sum_{k=1}^{N-1}\langle
v(\cdot,t_{k+1}),\phi_{k+1}-\phi_{k}\rangle\\\
=\sum_{k=0}^{N-1}\left[-\int_{t_{k}}^{t_{k+1}}\langle\mathfrak{f}(v(\cdot,s)),\phi^{\prime}_{k}\rangle
ds+a\int_{t_{k}}^{t_{k+1}}\left(\langle
v(\cdot,s),\phi^{\prime\prime}_{k}\rangle+\phi^{\prime}_{k}(0)\right)ds\right].$
Evidently, $\langle v(\cdot,t),\psi(\cdot,t(1-1/N))\rangle\rightarrow\langle
v(\cdot,t),\psi(\cdot,t)\rangle$ as $N\rightarrow\infty$. Next, by (3.3)
$\langle
v(\cdot,t_{k+1}),\phi_{k+1}-\phi_{k}\rangle=\int_{t_{k}}^{t_{k+1}}\langle
v(\cdot,s),\psi_{s}(\cdot,s)\rangle ds+\int_{t_{k}}^{t_{k+1}}\langle
v(\cdot,t_{k+1})-v(\cdot,s),\psi_{s}(\cdot,s)\rangle ds$
$=\int_{t_{k}}^{t_{k+1}}\langle v(\cdot,s),\psi_{s}(\cdot,s)\rangle
ds+O(N^{-2}).$
Thus
$\sum_{k=1}^{N-1}\langle
v(\cdot,t_{k+1}),\phi_{k+1}-\phi_{k}\rangle\rightarrow\int_{0}^{t}\langle
v(\cdot,s),\psi_{s}(\cdot,s)\rangle ds$
as $N\rightarrow\infty$. Likewise, the right side of (3.4) converges to
$-\int_{0}^{t}\langle\mathfrak{f}(v(\cdot,s)),\psi_{x}\rangle
ds+a\int_{0}^{t}\left(\langle
v(\cdot,s),\psi_{xx}\rangle+\psi_{x}(0,s)\right)ds.$
This proves the lemma. ∎
###### Lemma 3.2.
Assume $\mathfrak{f}\in C^{\infty}({\mathbb{R}})$ and
$v_{0}\in\mathbb{L}^{1}({\mathbb{R}}_{+})\cap\mathbb{L}^{\infty}({\mathbb{R}}_{+})$.
Then there is at most one weak solution to (2.16) in the sense of Definition
2.3.
Proof. As can be seen by performing a change of variables $x\mapsto
a^{-1/2}x$, we may and will assume w.l.o.g. that $a=1$. Suppose $v_{1},v_{2}$
are two solutions. The goal is to show that $v_{1}=v_{2}$. Let $w=v_{1}-v_{2}$
and note that $w\in\mathbb{L}^{\infty}_{\rm
loc}({\mathbb{R}}_{+},\mathbb{L}^{\infty}({\mathbb{R}}_{+}))\cap\mathbb{L}^{\infty}_{\rm
loc}({\mathbb{R}}_{+},\mathbb{L}^{1}({\mathbb{R}}_{+}))$. Let
$J(x,t):=\begin{cases}\displaystyle\frac{\mathfrak{f}(v_{1}(x,t))-\mathfrak{f}(v_{2}(x,t))}{v_{1}(x,t)-v_{2}(x,t)}&\text{if}\
\ w(x,t)\not=0\\\ \mathfrak{f}^{\prime}(v_{1}(x,t))&\text{if}\ \
w(x,t)=0.\end{cases}$
Then for any test function $\psi$ satisfying the conditions of Lemma 3.1,
(3.5) $\langle w(\cdot,t),\psi(\cdot,t)\rangle=\int_{0}^{t}\langle
w(\cdot,s),\left[\psi_{s}(\cdot,s)-J(\cdot,s)\psi_{x}+\psi_{xx}\right]\rangle
ds.$
Fix $T$. By the assumptions on $\mathfrak{f}$ and the definition of a solution
we know
$J\in\mathbb{L}^{\infty}([0,T];\mathbb{L}^{\infty}({\mathbb{R}}_{+}))$.
Moreover, for any $x_{0}>0$,
$|\mathfrak{f}^{\prime}(x)-\mathfrak{f}^{\prime}(0)|\leq c|x|$ holds provided
that $|x|<x_{0}$, with $c=c(x_{0})$. Hence
$J-\mathfrak{f}^{\prime}(0)\in\mathbb{L}^{\infty}([0,T];\mathbb{L}^{1}({\mathbb{R}}_{+}))$.
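In more detail, when $w(x,t)\neq 0$ the mean value theorem gives $J(x,t)=\mathfrak{f}^{\prime}(\theta(x,t))$ for some $\theta(x,t)$ lying between $v_{1}(x,t)$ and $v_{2}(x,t)$, whence
$\|J\|_{\infty}\leq\sup\\{|\mathfrak{f}^{\prime}(z)|:|z|\leq\|v_{1}\|_{\infty}\vee\|v_{2}\|_{\infty}\\}\qquad\text{and}\qquad|J(x,t)-\mathfrak{f}^{\prime}(0)|\leq c\big(|v_{1}(x,t)|+|v_{2}(x,t)|\big),$
with $c=c(\|v_{1}\|_{\infty}\vee\|v_{2}\|_{\infty})$ as above and the sup norms taken over ${\mathbb{R}}_{+}\times[0,T]$; the same bounds hold when $w(x,t)=0$, since then $J=\mathfrak{f}^{\prime}(v_{1})$.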
Let now $J_{N}\in C^{\infty}({\mathbb{R}}_{+}\times[0,T])$ be a sequence that
is bounded uniformly in $\mathbb{L}^{\infty}({\mathbb{R}}_{+}\times[0,T])$ and
satisfies $\|J_{N}-J\|_{1}=O(N^{-1})$, where
$\|\cdot\|_{1}=\|\cdot\|_{\mathbb{L}^{1}({\mathbb{R}}_{+}\times[0,T])}$. Given
$t\in(0,T]$, let $\tilde{\psi}^{N}$ be the classical solution of the backward
linear problem on the time interval $[0,t]$:
(3.6)
$\tilde{\psi}^{N}_{s}(\cdot,s)-J_{N}(\cdot,s)\tilde{\psi}^{N}_{x}+\tilde{\psi}^{N}_{xx}=0,\
\ \tilde{\psi}^{N}(\cdot,t)=\phi,\ \ \tilde{\psi}^{N}(0,s)=0,\qquad 0\leq
s<t,$
where $\phi\in C^{\infty}_{0}({\mathbb{R}}_{+})$. Since $\tilde{\psi}^{N}$ and
all its derivatives decay to zero as $x\rightarrow\infty$, uniformly in
$s\in[0,t]$, we may replace $\tilde{\psi}^{N}$ by functions $\psi^{N}$ which satisfy the conditions of
Lemma 3.1 and
$\|\psi^{N}-\tilde{\psi}^{N}\|_{\infty}+\|\psi^{N}_{x}-\tilde{\psi}^{N}_{x}\|_{\infty}+\|\psi^{N}_{xx}-\tilde{\psi}^{N}_{xx}\|_{\infty}+\|\psi^{N}_{t}-\tilde{\psi}^{N}_{t}\|_{\infty}=O(N^{-1})$
uniformly on $[0,t]$. It follows from (3.5) that
$\langle w(\cdot,t),\phi\rangle=\int_{0}^{t}\langle
w(\cdot,s),[\tilde{\psi}^{N}_{s}(\cdot,s)-J(\cdot,s)\tilde{\psi}^{N}_{x}+\tilde{\psi}^{N}_{xx}]\rangle
ds\ =\int_{0}^{t}\langle
w(\cdot,s),\left[J_{N}(\cdot,s)-J\right]\tilde{\psi}^{N}_{x}\rangle ds$
$=\int_{0}^{t}\langle
w(\cdot,s),\left[J_{N}(\cdot,s)-J\right]\psi^{N}_{x}\rangle ds+O(N^{-1})$
for any such $\phi$. Suppose, for the moment, that
(3.7)
$\sup_{N}\sup_{s\in[0,t]}\|\tilde{\psi}^{N}_{x}(\cdot,s)\|_{\infty}<\infty$
holds provided that $t$ is small enough. Since
$w\in\mathbb{L}^{\infty}([0,T];\mathbb{L}^{\infty}({\mathbb{R}}_{+}))$ and
$\|J_{N}-J\|_{1}=O(N^{-1})$, this implies $\langle
w(\cdot,t),\phi\rangle=O(N^{-1})$ for any $\phi\in
C^{\infty}_{0}({\mathbb{R}}_{+})$ and any $N$, hence $w(\cdot,t)=0$. As a
consequence, $w(\cdot,t)=0$ for all $t\in[0,t_{0}]$, some $t_{0}>0$. Moreover,
$t_{0}$ does not depend on the initial condition $v_{0}$. Thus, iterating the
argument shows that $w=0$.
It thus suffices to show (3.7). To this end, denote
$M:=\sup_{N}\|J_{N}\|_{\infty}<\infty$. Let $\psi^{0}$ be the solution of
$\psi^{0}_{s}(\cdot,s)+\psi^{0}_{xx}=0,\ \ \psi^{0}(\cdot,t)=\phi,\ \
\psi^{0}(0,s)=0,\ \ 0\leq s\leq t.$
Let $m_{N}(y,\tau):=J_{N}(y,\tau)\tilde{\psi}^{N}_{y}(y,\tau)$. Since, as
mentioned earlier, $\|\tilde{\psi}^{N}_{y}\|_{\infty}<\infty$ for all $N$, one
also has $\|m_{N}\|_{\infty}<\infty$ for all $N$. Moreover,
$\tilde{\psi}^{N}=\psi^{0}+\hat{\psi}^{N}$ where $\hat{\psi}^{N}$ is the
solution of
$\hat{\psi}^{N}_{s}+\hat{\psi}^{N}_{xx}=m_{N},\ \ \
\hat{\psi}^{N}(\cdot,t)=0,\ \hat{\psi}^{N}(0,s)=0,\qquad 0\leq s\leq t.$
The solution $\hat{\psi}^{N}$ is given by Duhamel’s principle:
$\hat{\psi}^{N}(x,s)=\frac{1}{2\sqrt{\pi}}\int_{s}^{t}d\tau(t-\tau)^{-1/2}\left[\int_{0}^{\infty}m_{N}(y,t-\tau)e^{-\frac{(x-y)^{2}}{4(t-\tau)}}dy-\int_{0}^{\infty}m_{N}(y,t-\tau)e^{-\frac{(x+y)^{2}}{4(t-\tau)}}dy\right].$
Then $\hat{\psi}^{N}_{x}(x,s)=-A_{N}(x,s)+B_{N}(x,s)$, where
$\displaystyle A_{N}(x,s)$
$\displaystyle=\frac{1}{4\sqrt{\pi}}\int_{s}^{t}d\tau(t-\tau)^{-3/2}\int_{0}^{\infty}m_{N}(y,t-\tau)(x-y)e^{-\frac{(x-y)^{2}}{4(t-\tau)}}dy,$
$\displaystyle B_{N}(x,s)$
$\displaystyle=\frac{1}{4\sqrt{\pi}}\int_{s}^{t}d\tau(t-\tau)^{-3/2}\int_{0}^{\infty}m_{N}(y,t-\tau)(x+y)e^{-\frac{(x+y)^{2}}{4(t-\tau)}}dy.$
Both $|A_{N}(x,s)|$ and $|B_{N}(x,s)|$ are bounded by
$\frac{\|m_{N}\|_{\infty}}{4\sqrt{\pi}}\int_{s}^{t}d\tau(t-\tau)^{-3/2}\int_{-\infty}^{\infty}|x+y|e^{-\frac{(x+y)^{2}}{4(t-\tau)}}dy.$
Changing variables of integration,
$|A_{N}(x,s)|\vee|B_{N}(x,s)|\leq\frac{\|m_{N}\|_{\infty}}{2\sqrt{\pi}}\int_{s}^{t}d\tau(t-\tau)^{-1/2}\int_{-\infty}^{\infty}|z|e^{-\frac{z^{2}}{2}}dz\leq
C\|m_{N}\|_{\infty}t^{1/2},$
for a constant $C$. Recall that $\|m_{N}\|_{\infty}\leq
M\sup_{s^{\prime}\in[0,t]}\|\tilde{\psi}^{N}_{x}(\cdot,s^{\prime})\|_{\infty}$ and $\tilde{\psi}^{N}=\psi^{0}+\hat{\psi}^{N}$.
Thus
$\|\tilde{\psi}^{N}_{x}(\cdot,s)\|_{\infty}\leq\|\psi^{0}_{x}\|_{\infty}+2C\|m_{N}\|_{\infty}t^{1/2}\leq\|\psi^{0}_{x}\|_{\infty}+2CM\sup_{s^{\prime}\in[0,t]}\|\tilde{\psi}^{N}_{x}(\cdot,s^{\prime})\|_{\infty}\,t^{1/2},\qquad 0<s\leq t.$
Taking the supremum over $s\in[0,t]$ (which is finite for each fixed $N$) and rearranging shows that
$\|\tilde{\psi}^{N}_{x}(\cdot,s)\|_{\infty}\leq 2\|\psi^{0}_{x}\|_{\infty}$ for all $0\leq s\leq t$ whenever $t\leq(4CM)^{-2}$. This establishes (3.7) for $t$ small enough, completing the
proof that $v_{1}=v_{2}$. ∎
### 3.2. Martingale toolbox
In addition to the delayed renewal processes $S^{n}_{i}$ it will also be
useful to introduce the corresponding non-delayed renewal processes
$S^{0,n}_{i}(t)=\max\Big{\\{}k\in{\mathbb{Z}}_{+}:\sum_{j=1}^{k-1}Z^{n}_{i}(j)\leq
t\Big{\\}},\qquad t\geq 0.$
Note that $S^{0,n}_{i}(0)=1$ and that these processes are IID (unlike
$S^{n}_{i}$). Additional rescaled processes are denoted as follows
(3.8)
$\hat{E}^{n}_{i}(t)=n^{-1/2}(E^{n}_{i}(t)-\lambda^{n}t),\qquad\hat{S}^{n}_{i}(t)=n^{-1/2}(S^{n}_{i}(t)-\mu^{n}t),\qquad\hat{S}^{0,n}_{i}(t)=n^{-1/2}(S^{0,n}_{i}(t)-\mu^{n}t),$
(3.9)
$\hat{A}^{n}_{0}(t)=n^{-1/2}(A^{n}_{0}(t)-\lambda^{n}_{0}t),\qquad\hat{A}^{n}_{i}(t)=n^{-1/2}A^{n}_{i}(t).$
A semimartingale decomposition of counting processes was introduced in [17],
which deviates from the classical Doob-Meyer decomposition and is convenient
for our purposes. For the renewal process $S^{n}_{i}$, this decomposition is
constructed as follows. Denote by
$R^{n}_{i}(t)=\inf\\{s>0:S^{n}_{i}(t+s)>S^{n}_{i}(t)\\},\qquad t\geq 0,$
the residual time to the next counting instant (note that it is right-
continuous, hence at a time of counting it already shows the time until the
next counting). One has
(3.10) $t+R^{n}_{i}(t)=\sum_{k=0}^{S^{n}_{i}(t)}Z^{n}_{i}(k).$
Hence with
$M^{{\rm
ser},n}_{i}(t)=\sum_{k=1}^{S^{n}_{i}(t)}\zeta^{n}_{i}(k),\qquad\zeta^{n}_{i}(k)=1-\mu^{n}Z^{n}_{i}(k),\qquad
k\geq 1,$
and $\sum_{1}^{0}=0$, one has the Daley-Miyazawa semimartingale
representation,
(3.11) $S^{n}_{i}(t)=\mu^{n}(t-Z^{n}_{i}(0)+R^{n}_{i}(t))+M^{{\rm
ser},n}_{i}(t),\qquad t\geq 0.$
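For completeness, (3.11) is obtained from (3.10) by multiplying both sides by $\mu^{n}$ and separating the $k=0$ term:
$\mu^{n}(t+R^{n}_{i}(t))=\mu^{n}Z^{n}_{i}(0)+\sum_{k=1}^{S^{n}_{i}(t)}\mu^{n}Z^{n}_{i}(k)=\mu^{n}Z^{n}_{i}(0)+S^{n}_{i}(t)-M^{{\rm ser},n}_{i}(t),$
and rearranging.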
Denoting ${\mathcal{F}}^{{\rm
ser},n}_{i}(t)=\sigma\\{S^{n}_{i}(s),R^{n}_{i}(s),s\leq t\\}$, the first term
on the right is predictable on the filtration $\\{{\mathcal{F}}^{{\rm
ser},n}_{i}(t)\\}$ while $M^{{\rm ser},n}_{i}$ is a martingale on it. A form of
this decomposition that will be useful here is based on a filtration
${\mathcal{F}}^{n}_{t}$ that, for each $t$, contains all relevant information
about the system by time $t$. It will be obtained once a time change
transformation of (3.11) is performed, to represent
$D^{n}_{i}(t)=S^{n}_{i}(T^{n}_{i}(t))$. Thus let
$\tilde{R}^{n}_{i}(t)=n^{-1/2}\mu^{n}R^{n}_{i}(t)=\tilde{\mu}^{n}R^{n}_{i}(t),\qquad\hat{M}^{{\rm
ser},n}_{i}(t)=n^{-1/2}M^{{\rm ser},n}_{i}(t),$
and
(3.12) $M^{{\rm
dep},n}_{i}(t)=\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k),\qquad\hat{M}^{{\rm
dep},n}_{i}(t)=n^{-1/2}M^{{\rm dep},n}_{i}(t).$
Then, by (3.11),
(3.13)
$D^{n}_{i}(t)=\mu^{n}(T^{n}_{i}(t)-Z^{n}_{i}(0)+R^{n}_{i}(T^{n}_{i}(t)))+M^{{\rm
dep},n}_{i}(t),$
and
(3.14)
$\hat{S}^{n}_{i}(T^{n}_{i}(t))=-\tilde{Z}^{n}_{i}(0)+\tilde{R}^{n}_{i}(T^{n}_{i}(t))+\hat{M}^{{\rm
dep},n}_{i}(t).$
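Indeed, (3.14) follows from (3.13) and (3.8), using $D^{n}_{i}(t)=S^{n}_{i}(T^{n}_{i}(t))$ and the definitions of $\tilde{R}^{n}_{i}$ and $\tilde{Z}^{n}_{i}(0)$:
$\hat{S}^{n}_{i}(T^{n}_{i}(t))=n^{-1/2}\big(D^{n}_{i}(t)-\mu^{n}T^{n}_{i}(t)\big)=n^{-1/2}\mu^{n}\big(R^{n}_{i}(T^{n}_{i}(t))-Z^{n}_{i}(0)\big)+n^{-1/2}M^{{\rm dep},n}_{i}(t),$
which is the right side of (3.14).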
The tuple
${\mathcal{S}}^{n}(t)=(E^{n}_{i}(t),A^{n}_{i}(t),D^{n}_{i}(t),X^{n}_{i}(t),T^{n}_{i}(t),R^{n}_{i}(T^{n}_{i}(t)),\,i\in[n],\,A^{n}_{0}(t),\theta^{n}_{A^{n}_{0}(t)})$
is referred to as the state of the system at time $t$. Denote the
corresponding filtration by
${\mathcal{F}}^{n}_{t}=\sigma\\{{\mathcal{I}}^{n},{\mathcal{S}}^{n}(s),s\in[0,t]\\}.$
Note that at times $t$ when server $i$ is active, $R^{n}_{i}(T^{n}_{i}(t))$ is
the residual time till the completion of service of the job being processed by
that server, whereas at times when the server is idle,
$R^{n}_{i}(T^{n}_{i}(t))$ gives the service duration of the job that will be
processed next by this server.
###### Lemma 3.3.
i. The processes $\hat{M}^{{\rm dep},n}_{i}$, $\hat{E}^{n}_{i}$, and
$\hat{A}^{n}_{0}$ are $\\{{\mathcal{F}}^{n}_{t}\\}$-martingales, with optional
quadratic variations given by
$[\hat{M}^{{\rm
dep},n}_{i}](t)=n^{-1}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)^{2},\qquad[\hat{E}^{n}_{i}](t)=n^{-1}E^{n}_{i}(t),\qquad[\hat{A}^{n}_{0}](t)=n^{-1}A^{n}_{0}(t),$
and one has
(3.15) ${\mathbb{E}}\\{[\hat{M}^{{\rm dep},n}_{i}](t)\\}=n^{-1}\sigma_{\rm
ser}^{2}{\mathbb{E}}\\{D^{n}_{i}(t)\\}<\infty.$
ii. For distinct $i,j\in[n]$,
(3.16) ${\mathbb{E}}\\{[\hat{M}^{{\rm dep},n}_{i},\hat{M}^{{\rm
dep},n}_{j}](t)\\}=0.$
iii. The process
$\hat{M}^{A,n}_{i}(t)=\hat{A}^{n}_{i}(t)-\hat{C}^{A,n}_{i}(t),\quad\text{
where
}\quad\hat{C}^{A,n}_{i}(t)=\lambda^{n}_{0}n^{-1/2}\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s)}ds,$
is an $\\{{\mathcal{F}}^{n}_{t}\\}$-martingale, whose optional quadratic
variation is $[\hat{M}^{A,n}_{i}](t)=n^{-1}A^{n}_{i}(t)$.
Proof. i. For adaptedness of $\hat{M}^{{\rm dep},n}_{i}$ it suffices to prove
that $Z^{n}_{i}(k)1_{k\leq D^{n}_{i}(t)}\in{\mathcal{F}}^{n}_{t}$ for all $k$.
This is shown as follows. By (3.10),
$T^{n}_{i}(s)+R^{n}_{i}(T^{n}_{i}(s))=\sum_{k=0}^{D^{n}_{i}(s)}Z^{n}_{i}(k)$.
Hence $\\{Z^{n}_{i}(k),k\leq D^{n}_{i}(t)\\}$ can all be recovered from the
tuple $\\{T^{n}_{i}(s),R^{n}_{i}(T^{n}_{i}(s)),D^{n}_{i}(s)\\}$, as $s$ varies
between $0$ and $t$. Since the latter is
$\\{{\mathcal{F}}^{n}_{t}\\}$-adapted, this proves the claim.
Next it is shown that $\hat{M}^{{\rm dep},n}_{i}(t)\in L_{1}(d{\mathbb{P}})$.
Note first that as a renewal process, $S^{n}_{i}(t)$ has finite expectation
for every $t$. Since $T^{n}_{i}(t)\leq t$ this gives
${\mathbb{E}}[D^{n}_{i}(t)]<\infty$. Let
$t^{n}_{i}(k)=\inf\\{t\geq 0:D^{n}_{i}(t)\geq k\\},\qquad k=1,2,\ldots.$
These are clearly stopping times on $\\{{\mathcal{F}}^{n}_{t}\\}$. Hence
(3.17) $t^{n}_{i}(k)\in{\mathcal{F}}^{n}_{t^{n}_{i}(k)-},\qquad k\geq 1,$
where we recall that for a stopping time $\tau$,
${\mathcal{F}}^{n}_{\tau-}={\mathcal{F}}^{n}_{0}\vee\sigma\\{A\cap\\{\tau<t\\}:A\in{\mathcal{F}}^{n}_{t},t\geq
0\\}$
(see [27, I.1.11 and I.1.14]). The state of the system up to $t^{n}_{i}(k)-$,
namely $\\{{\mathcal{S}}^{n}(t),t<t^{n}_{i}(k)\\}$, can be recovered from the
tuple ${\mathcal{I}}^{n}$, $(E^{n}_{i}(t),t\in{\mathbb{R}}_{+},i\in[n])$,
$(A^{n}_{0}(t),t\in{\mathbb{R}}_{+})$, $(\theta^{n}_{j},j\in{\mathbb{N}})$,
$(Z^{n}_{\ell}(j),j\in{\mathbb{N}})$, $\ell\in[n]\setminus\\{i\\}$ and
$(Z^{n}_{i}(j),j<k)$, as follows by the construction of the model. By our
assumptions, $Z^{n}_{i}(k)$ is independent of this tuple. As a result, it is
independent of ${\mathcal{F}}^{n}_{t^{n}_{i}(k)-}$. It follows from (3.17)
that
$\\{D^{n}_{i}(t)\leq k\\}=\\{t^{n}_{i}(k)\geq
t\\}\in{\mathcal{F}}^{n}_{t^{n}_{i}(k)-}.$
The structure just established, namely
(3.18)
$Z^{n}_{i}(k^{\prime})\in{\mathcal{F}}^{n}_{t^{n}_{i}(k)-},k^{\prime}<k,\text{
whereas $Z^{n}_{i}(k)$ is independent of
${\mathcal{F}}^{n}_{t^{n}_{i}(k)-}$},$
together with the facts that $D^{n}_{i}(t)$ is a stopping time on the discrete parameter filtration
$\\{{\mathcal{F}}^{n}_{t^{n}_{i}(k)-},k\in{\mathbb{N}}\\}$, that
${\mathbb{E}}[D^{n}_{i}(t)]<\infty$, and that the $Z^{n}_{i}(k)$
are IID with ${\mathbb{E}}[Z^{n}_{i}(k)]<\infty$, allows us to use Wald’s
identity, showing
${\mathbb{E}}\Big{[}\sum_{k=1}^{D^{n}_{i}(t)}Z^{n}_{i}(k)\Big{]}={\mathbb{E}}\\{D^{n}_{i}(t)\\}{\mathbb{E}}\\{Z^{n}_{i}(1)\\}={\mathbb{E}}\\{D^{n}_{i}(t)\\}(\mu^{n})^{-1}<\infty.$
Since $|\zeta^{n}_{i}(k)|\leq 1+\mu^{n}Z^{n}_{i}(k)$, this shows that
${\mathbb{E}}\\{|\hat{M}^{{\rm dep},n}_{i}(t)|\\}<\infty$.
To show the martingale property, note that by the independence stated in
(3.18), we have
(3.19)
${\mathbb{E}}[\zeta^{n}_{i}(k)|{\mathcal{F}}^{n}_{t^{n}_{i}(k)-}]=0,\qquad
k\geq 1.$
Arguing now along the lines of the proof of [17, Lemma 2.1],
$\displaystyle\hat{M}^{{\rm dep},n}_{i}(t)$
$\displaystyle=n^{-1/2}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)=n^{-1/2}\sum_{k=1}^{\infty}\zeta^{n}_{i}(k)1_{\\{t^{n}_{i}(k)\leq
t\\}}.$
Hence for $s<t$,
$\displaystyle{\mathbb{E}}[\hat{M}^{{\rm
dep},n}_{i}(t)|{\mathcal{F}}^{n}_{s}]-\hat{M}^{{\rm dep},n}_{i}(s)$
$\displaystyle=n^{-1/2}\sum_{k=1}^{\infty}{\mathbb{E}}[\zeta^{n}_{i}(k)1_{\\{s<t^{n}_{i}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{s}]$
$\displaystyle=n^{-1/2}\sum_{k=1}^{\infty}{\mathbb{E}}[{\mathbb{E}}[\zeta^{n}_{i}(k)1_{\\{s<t^{n}_{i}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{t^{n}_{i}(k)-}]\,|{\mathcal{F}}^{n}_{s}]$
$\displaystyle=0,$
where we used (3.17) and (3.19).
For the martingale property of $\hat{E}^{n}_{i}$ one only needs to show that
$E^{n}_{i}(t)-E^{n}_{i}(s)$ is independent of ${\mathcal{F}}^{n}_{s}$ when
$s<t$. Again, this follows from the fact that all the processes comprising
${\mathcal{S}}^{n}_{u}$, $u\leq s$, can be recovered from the tuple
${\mathcal{I}}^{n}$, $(E^{n}_{i}(u),u\in[0,s],i\in[n])$,
$(A^{n}_{0}(u),u\in[0,s])$, $(\theta^{n}_{k},k\in{\mathbb{N}})$,
$(Z^{n}_{i}(k),k\in{\mathbb{N}},i\in[n])$; but the increment
$E^{n}_{i}(t)-E^{n}_{i}(s)$ is independent of this tuple.
A similar proof holds for $\hat{A}^{n}_{0}$.
The expressions for the quadratic variation are straightforward.
To show (3.15) we can use Wald’s identity as before, now with the IID sequence $\zeta^{n}_{i}(k)^{2}$, which gives
${\mathbb{E}}\Big{[}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)^{2}\Big{]}={\mathbb{E}}\\{D^{n}_{i}(t)\\}{\mathbb{E}}\\{\zeta^{n}_{i}(1)^{2}\\},$
and (3.15) follows.
ii. Similar to the argument following (3.17), one can recover the state of the
system up to $t^{n}_{ijkl}-$, where
$t^{n}_{ijkl}=\min(t^{n}_{i}(k),t^{n}_{j}(l)),$
namely $\\{{\mathcal{S}}^{n}(t),t<t^{n}_{ijkl}\\}$, from ${\mathcal{I}}^{n}$,
$(E^{n}_{i}(t),t\in{\mathbb{R}}_{+},i\in[n])$,
$(A^{n}_{0}(t),t\in{\mathbb{R}}_{+})$, $(\theta^{n}_{j},j\in{\mathbb{N}})$,
$(Z^{n}_{i^{\prime}}(k),k\in{\mathbb{N}})$,
$i^{\prime}\in[n]\setminus\\{i,j\\}$, $(Z^{n}_{i}(k^{\prime}),k^{\prime}<k)$
and $(Z^{n}_{j}(l^{\prime}),l^{\prime}<l)$. However, by our assumptions, the
pair $(Z^{n}_{i}(k),Z^{n}_{j}(l))$ is independent of this tuple, hence it is
independent of ${\mathcal{F}}^{n}_{t^{n}_{ijkl}-}$. Since $Z^{n}_{i}(k)$ and
$Z^{n}_{j}(l)$ are mutually independent, this gives
(3.20)
${\mathbb{E}}[\zeta^{n}_{i}(k)\zeta^{n}_{j}(l)|{\mathcal{F}}^{n}_{t^{n}_{ijkl}-}]=0,\qquad
k,l\geq 1.$
We have
$\displaystyle[\hat{M}^{{\rm dep},n}_{i},\hat{M}^{{\rm dep},n}_{j}](t)$
$\displaystyle=n^{-1}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)\sum_{l=1}^{D^{n}_{j}(t)}1_{\\{t^{n}_{j}(l)=t^{n}_{i}(k)\\}}\zeta^{n}_{j}(l)$
$\displaystyle=n^{-1}\sum_{k=1}^{\infty}\sum_{l=1}^{\infty}\zeta^{n}_{i}(k)\zeta^{n}_{j}(l)1_{\\{t^{n}_{j}(l)=t^{n}_{i}(k)\leq
t\\}}.$
In view of (3.17), $1_{\\{t^{n}_{j}(l)=t^{n}_{i}(k)\leq
t\\}}\in{\mathcal{F}}^{n}_{t^{n}_{ijkl}-}$. Hence by (3.20),
${\mathbb{E}}[\zeta^{n}_{i}(k)\zeta^{n}_{j}(l)1_{\\{t^{n}_{j}(l)=t^{n}_{i}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{t^{n}_{ijkl}-}]=0,\qquad k,l\geq 1,$
and (3.16) follows.
iii. Clearly $A^{n}_{i}$, defined in (2.6), is adapted and $A^{n}_{i}(t)$ is
integrable for all $t$. Hence the same is true for $\hat{M}^{A,n}_{i}$. Next,
let
$s^{n}(k)=\inf\\{t\geq 0:A^{n}_{0}(t)\geq k\\},\qquad k=1,2,\ldots.$
As in (i), these are stopping times and
$s^{n}(k)\in{\mathcal{F}}^{n}_{s^{n}(k)-}$. To show the martingale property,
we can write, using
$\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s-)}ds=\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s)}ds$,
$\displaystyle n^{1/2}\hat{M}^{A,n}_{i}(t)$
$\displaystyle=\int_{[0,t]}1_{\\{{\mathcal{R}}^{n}_{i}(s-)=\theta^{n}_{A^{n}_{0}(s)}\\}}dA^{n}_{0}(s)-\lambda^{n}_{0}\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s)}ds$
$\displaystyle=\int_{[0,t]}(1_{\\{{\mathcal{R}}^{n}_{i}(s-)=\theta^{n}_{A^{n}_{0}(s)}\\}}-p_{n,{\mathcal{R}}^{n}_{i}(s-)})dA^{n}_{0}(s)+\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s-)}(dA^{n}_{0}(s)-\lambda^{n}_{0}ds)$
$\displaystyle=:M^{n}_{i,1}(t)+M^{n}_{i,2}(t).$
For $M^{n}_{i,1}$, write
$A^{n}_{i}(t)=\sum_{k=1}^{A^{n}_{0}(t)}1_{\\{{\mathcal{R}}^{n}_{i}(s^{n}(k)-)=\theta^{n}_{k}\\}}.$
An argument similar to the one given before shows that $\theta^{n}_{k}$ is
independent of ${\mathcal{F}}^{n}_{s^{n}(k)-}$. Therefore, for $0\leq s<t$, we
have
$\displaystyle{\mathbb{E}}[A^{n}_{i}(t)|{\mathcal{F}}^{n}_{s}]-A^{n}_{i}(s)$
$\displaystyle=\sum_{k=1}^{\infty}{\mathbb{E}}[1_{\\{{\mathcal{R}}^{n}_{i}(s^{n}(k)-)=\theta^{n}_{k}\\}}1_{\\{s<s^{n}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{s}]$
$\displaystyle=\sum_{k=1}^{\infty}{\mathbb{E}}[{\mathbb{E}}[1_{\\{{\mathcal{R}}^{n}_{i}(s^{n}(k)-)=\theta^{n}_{k}\\}}1_{\\{s<s^{n}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{s^{n}(k)-}]|{\mathcal{F}}^{n}_{s}]$
$\displaystyle=\sum_{k=1}^{\infty}{\mathbb{E}}[p_{n,{\mathcal{R}}^{n}_{i}(s^{n}(k)-)}1_{\\{s<s^{n}(k)\leq
t\\}}|{\mathcal{F}}^{n}_{s}]$
$\displaystyle={\mathbb{E}}[C^{n}_{i}(t)|{\mathcal{F}}^{n}_{s}]-C^{n}_{i}(s),$
where
$C^{n}_{i}(t)=\sum_{k=1}^{A^{n}_{0}(t)}p_{n,{\mathcal{R}}^{n}_{i}(s^{n}(k)-)}=\int_{[0,t]}p_{n,{\mathcal{R}}^{n}_{i}(s-)}dA^{n}_{0}(s),$
showing that $A^{n}_{i}-C^{n}_{i}=M^{n}_{i,1}$ is a martingale.
In the expression for $M^{n}_{i,2}$, the integrand is
$\\{{\mathcal{F}}^{n}_{t}\\}$-adapted and has LCRL sample paths, while the
integrator is a martingale on the filtration. As a result, $M^{n}_{i,2}$ is a
local martingale [37, Theorem II.20]; using the estimate
$\|M^{n}_{i,2}\|^{*}_{t}\leq A^{n}_{0}(t)+c$ shows it is in fact a martingale.
As a result, so is $\hat{M}^{A,n}_{i}$. Finally, the expression for the
quadratic variation is straightforward. ∎
We will also need the following simple fact.
###### Lemma 3.4.
Let $M_{N}$, $N\in{\mathbb{Z}}_{+}$ be a martingale with $M_{0}=0$, for which
the increments $\mathnormal{\Delta}_{N}=M_{N}-M_{N-1}$ satisfy
${\mathbb{E}}(|\mathnormal{\Delta}_{N}|1_{\\{|\mathnormal{\Delta}_{N}|>a\\}})\leq\bar{r}(a)$,
$a\geq 0$, $N\in{\mathbb{N}}$, and $\bar{r}(a)\to 0$ as $a\to\infty$. Then
$N^{-1}{\mathbb{E}}\|M\|^{*}_{N}<A(N)\to 0$, where $\\{A(N)\\}$ depend only on
$\bar{r}$.
Proof. Let
$b_{N}={\mathbb{E}}[\mathnormal{\Delta}_{N}1_{\\{|\mathnormal{\Delta}_{N}|\leq
a\\}}]$ and note that
$b_{N}=-{\mathbb{E}}[\mathnormal{\Delta}_{N}1_{\\{|\mathnormal{\Delta}_{N}|>a\\}}]$
hence $|b_{N}|\leq\bar{r}(a)$. Write
$M_{N}=P_{N}+Q_{N},\qquad
P_{N}=\sum_{i=1}^{N}\\{\mathnormal{\Delta}_{i}1_{\\{|\mathnormal{\Delta}_{i}|>a\\}}+b_{i}\\},\qquad
Q_{N}=\sum_{i=1}^{N}\\{\mathnormal{\Delta}_{i}1_{\\{|\mathnormal{\Delta}_{i}|\leq
a\\}}-b_{i}\\}.$
The quadratic variation of the martingale $Q_{N}$ is bounded by
$(a+\bar{r}(a))^{2}N$, giving ${\mathbb{E}}[\|Q\|^{*}_{N}]\leq
c(a+\bar{r}(a))N^{1/2}$, where $c^{2}$ is the constant from the Burkholder-
Davis-Gundy (BDG) inequality with $p=2$. For $P_{N}$,
$|P_{N}|\leq\sum_{i=1}^{N}|\mathnormal{\Delta}_{i}|1_{\\{|\mathnormal{\Delta}_{i}|>a\\}}+N\bar{r}(a),$
thus ${\mathbb{E}}[\|P\|^{*}_{N}]\leq 2N\bar{r}(a)$. This gives
$N^{-1}{\mathbb{E}}[\|M\|^{*}_{N}]\leq c(a+\bar{r}(a))N^{-1/2}+2\bar{r}(a)$.
Taking $a=N^{1/4}$ completes the proof. ∎
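As a simple illustration, if the increments $\mathnormal{\Delta}_{N}$ are IID with mean zero, a finite first moment and a law not depending on $N$, then the hypothesis holds with $\bar{r}(a)={\mathbb{E}}(|\mathnormal{\Delta}_{1}|1_{\\{|\mathnormal{\Delta}_{1}|>a\\}})$, which tends to zero by dominated convergence, and the lemma yields $N^{-1}{\mathbb{E}}\|M\|^{*}_{N}\to 0$ at a rate determined by the law of $\mathnormal{\Delta}_{1}$ alone. This is essentially the form in which the lemma is invoked in Step 5 of the proof of Lemma 4.1 below, where the increments are dominated by IID integrable variables.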
### 3.3. Uniform estimates on individual queue length processes
The goal here is to calculate the dynamics of the individual rescaled
processes and develop estimates showing that they are $C$-tight uniformly in
$i\in[n]$.
In the following lemma, part (i) provides an equation, (3.21), for the
dynamics of individual queue lengths, and part (ii) shows that, at the cost of
introducing an error term, one can replace the term
$\hat{S}^{n}_{i}(T^{n}_{i})$ in that equation by a martingale. Both
representations (3.21) and (3.24) are used in this paper, where the former is
used for estimates on each $\hat{X}^{n}_{i}$ that are uniform in $i$, and the
latter is convenient for representations of $\bar{\xi}^{n}$, as the martingale
terms add up to a martingale. The last part of the lemma uses (3.21) and gives
uniform second moment and tightness estimates.
###### Lemma 3.5.
i. One has
(3.21) $\displaystyle\hat{X}^{n}_{i}(t)$
$\displaystyle=\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}(t)+\hat{A}^{n}_{i}(t)-\hat{S}^{n}_{i}(T^{n}_{i}(t))+\hat{b}^{n}_{1}t+\hat{L}^{n}_{i}(t).$
Moreover, the sample paths of $\hat{L}^{n}_{i}$ are in $C^{\uparrow}$ and
(3.22) $\int_{0}^{\infty}\hat{X}^{n}_{i}(t)d\hat{L}^{n}_{i}(t)=0.$
ii. One has
(3.23) $\hat{X}^{n}_{i}(t)=\hat{X}^{1,n}_{i}(t)+e^{1,n}_{i}(t),$
where
(3.24)
$\hat{X}^{1,n}_{i}(t)=\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}(t)+\hat{A}^{n}_{i}(t)-\hat{M}^{{\rm
dep},n}_{i}(t)+\hat{b}^{n}_{1}t+\hat{L}^{n}_{i}(t),$
$e^{1,n}_{i}(t)=\tilde{Z}^{n}_{i}(0)-\tilde{R}^{n}_{i}(T^{n}_{i}(t))=\tilde{\mu}^{n}(Z^{n}_{i}(0)-R^{n}_{i}(T^{n}_{i}(t))).$
iii. For
$H^{n}_{i}=\hat{S}^{n}_{i},\hat{E}^{n}_{i},\hat{A}^{n}_{i},\hat{L}^{n}_{i}$
and $\hat{X}^{n}_{i}$, one has
(3.25)
$\sup_{n}\sup_{i\in[n]}{\mathbb{E}}[(\|H^{n}_{i}\|^{*}_{t})^{2}]<\infty,\qquad
t\geq 0,$
and for every $t$, $\varepsilon>0$ and $\eta>0$ there is $\delta>0$ such that
(3.26)
$\limsup_{n}\max_{i\in[n]}{\mathbb{P}}(w_{t}(H^{n}_{i},\delta)>\varepsilon)<\eta.$
Proof. i. By (2.1) and (2.12),
$\displaystyle n^{-1/2}X^{n}_{i}(t)$
$\displaystyle=n^{-1/2}X^{n}_{i}(0-)+n^{-1/2}(E^{n}_{i}(t)-\lambda^{n}t)+n^{-1/2}(\lambda^{n}-n\lambda)t+n^{1/2}\lambda
t+n^{-1/2}A^{n}_{i}(t)$
$\displaystyle\quad-n^{-1/2}(S^{n}_{i}(T^{n}_{i}(t))-\mu^{n}T^{n}_{i}(t))-n^{-1/2}\mu^{n}T^{n}_{i}(t),$
and
$-n^{-1/2}\mu^{n}T^{n}_{i}(t)=-n^{-1/2}(\mu^{n}-n\mu)t-n^{1/2}\mu
t+\hat{L}^{n}_{i}(t).$
Using (2.11), (3.8) and (3.9) gives (3.21). The properties of
$\hat{L}^{n}_{i}$ and (3.22) follow from (2.2).
ii. Using (3.14) in (3.21) gives (3.23).
iii. By the central limit theorem for renewal processes [10, §17], and the
fact that, by (2.8) and (2.10), $n^{-1}(\lambda^{n},\mu^{n})\to(\lambda,\mu)$,
for each $i$, $(\hat{E}^{n}_{i},\hat{S}^{0,n}_{i})$ converge in law to
$(E,S)$, a pair of mutually independent BM starting at zero, with zero drift,
and diffusion coefficients $\lambda^{1/2}$ and $\mu^{1/2}\sigma_{\rm ser}$,
respectively (where we recall $\lambda=\mu$).
We prove that (3.25) and (3.26) hold for $\hat{S}^{n}_{i}$ by relating these
processes to $\hat{S}^{0,n}_{i}$, whose laws do not depend on $i$. To this
end, note that
$S^{n}_{i}(t)=S^{0,n}_{i}((t-Z^{n}_{i}(0))^{+})-1_{\\{t<Z^{n}_{i}(0)\\}}=S^{0,n}_{i}(t-Z^{\\#,n}_{i})-1_{\\{t<Z^{n}_{i}(0)\\}},$
where $Z^{\\#,n}_{i}=Z^{\\#,n}_{i}(t):=Z^{n}_{i}(0)\wedge t$. Hence by (3.8),
$\hat{S}^{n}_{i}(t)=\hat{S}^{0,n}_{i}(t-Z^{\\#,n}_{i})-\tilde{\mu}^{n}Z^{\\#,n}_{i}-n^{-1/2}1_{\\{t<Z^{n}_{i}(0)\\}}.$
As a result,
(3.27)
$\|\hat{S}^{n}_{i}\|^{*}_{t}\leq\|\hat{S}^{0,n}_{i}\|^{*}_{t}+\tilde{\mu}^{n}Z^{n}_{i}(0)+n^{-1/2}=\|\hat{S}^{0,n}_{i}\|^{*}_{t}+\tilde{Z}^{n}_{i}(0)+n^{-1/2},$
and
$w_{t}(\hat{S}^{n}_{i},\delta)\leq
w_{t}(\hat{S}^{0,n}_{i},\delta)+\tilde{\mu}^{n}Z^{n}_{i}(0)=w_{t}(\hat{S}^{0,n}_{i},\delta)+\tilde{Z}^{n}_{i}(0).$
The latter inequality and the fact that the law of $\hat{S}^{0,n}_{i}$ does
not depend on $i$ gives
$\max_{i\in[n]}{\mathbb{P}}(w_{t}(\hat{S}^{n}_{i},\delta)>\varepsilon)\leq{\mathbb{P}}\Big{(}w_{t}(\hat{S}^{0,n}_{1},\delta)>\frac{\varepsilon}{2}\Big{)}+\max_{i\in[n]}{\mathbb{P}}\Big{(}\tilde{Z}^{n}_{i}(0)>\frac{\varepsilon}{2}\Big{)}.$
Using (2.13) and the $C$-tightness of $\hat{S}^{0,n}_{1}$, $n\in{\mathbb{N}}$,
shows that (3.26) is satisfied by $\hat{S}^{n}_{i}$.
Next, under our second moment assumptions on the inter-renewal times, it is
well known that the rescaled non-delayed renewal processes satisfy
$\sup_{n}{\mathbb{E}}[(\|\hat{S}^{0,n}_{1}\|^{*}_{t})^{2}]<\infty,$
for every $t$ [28, Appendix 1]. Since the law of $\hat{S}^{0,n}_{i}$ does not
depend on $i$, it follows from (3.27) and our assumption (2.14) that
$\hat{S}^{n}_{i}$ satisfy (3.25).
It follows now that $\hat{E}^{n}_{i}$ satisfies both estimates, for the law of
$E^{n}_{i}$ is merely a special case of the law of the non-delayed renewal
processes $S^{0,n}_{i}$.
As for the processes $A^{n}_{i}$, recall from Lemma 3.3.iii that
$\hat{M}^{A,n}_{i}$ is a martingale and that
$[\hat{M}^{A,n}_{i}](t)=n^{-1}A^{n}_{i}(t)$. One has
${\mathbb{E}}[n^{-1}A^{n}_{i}(t)]=n^{-1}\lambda^{n}_{0}\int_{0}^{t}{\mathbb{E}}[p_{n,{\mathcal{R}}^{n}_{i}(s)}]ds\leq
cn^{-1}n^{3/2}n^{-1}t,$
where we used (2.5) and (2.9). Since this bound does not depend on $i$, it
follows by the BDG inequality that
(3.28)
$\lim_{n}\max_{i\in[n]}{\mathbb{E}}[(\|\hat{M}^{A,n}_{i}\|^{*}_{t})^{2}]=0.$
Again by the bound on $p_{n,r}$, the Lipschitz constant of $\hat{C}^{A,n}_{i}$
is bounded by $cn^{3/2}n^{-1/2}n^{-1}=c$. It follows that both estimates
(3.25) and (3.26) are satisfied by $\hat{A}^{n}_{i}$.
To treat $\hat{L}^{n}_{i}$, recall Skorohod’s lemma [16, §8], stating that for
a trajectory $y\in D({\mathbb{R}}_{+},{\mathbb{R}})$, if $(x,z)\in
D({\mathbb{R}}_{+},{\mathbb{R}}_{+})^{2}$ are such that $x=y+z$, $z$ is
nondecreasing and, with the convention $z(0-)=0$, $\int_{[0,\infty)}xdz=0$,
then $z$ is given by
$z(t)=\sup_{s\in[0,t]}y^{-}(s),\qquad t\geq 0.$
In particular,
(3.29) $z(t)\leq\|y\|_{t},\qquad w_{t}(z,\delta)\leq w_{t}(y,\delta),\qquad
t>0,\,\delta>0.$
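Both bounds in (3.29) are immediate from the explicit formula: since $u\mapsto u^{-}$ is $1$-Lipschitz on ${\mathbb{R}}$, for $0\leq t_{1}\leq t_{2}$ one has
$z(t)=\sup_{s\in[0,t]}y^{-}(s)\leq\|y\|_{t}\qquad\text{and}\qquad 0\leq z(t_{2})-z(t_{1})\leq\sup_{t_{1}\leq s\leq t_{2}}\big(y^{-}(s)-y^{-}(t_{1})\big)^{+}\leq\sup_{t_{1}\leq s\leq t_{2}}|y(s)-y(t_{1})|,$
and taking the supremum over pairs with $t_{2}-t_{1}\leq\delta$ gives $w_{t}(z,\delta)\leq w_{t}(y,\delta)$.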
In view of (3.21) and (3.22), it follows that
(3.30) $\hat{L}^{n}_{i}(t)\leq\|\hat{U}^{n}_{i}\|_{t},\qquad
w_{t}(\hat{L}^{n}_{i},\delta)\leq w_{t}(\hat{U}^{n}_{i},\delta),$
where
(3.31)
$\hat{U}^{n}_{i}(t)=\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}(t)+\hat{A}^{n}_{i}(t)-\hat{S}^{n}_{i}(T^{n}_{i}(t))+\hat{b}^{n}_{1}t,$
and
(3.32) $\hat{X}^{n}_{i}=\hat{U}^{n}_{i}+\hat{L}^{n}_{i}.$
Because $T^{n}_{i}$ are $1$-Lipschitz, we have that
$\hat{S}^{n}_{i}(T^{n}_{i}(\cdot))$ satisfy (3.25) and (3.26). Hence by the
second moment assumption on initial condition (2.15) and the boundedness of
$\hat{b}^{n}_{1}$, the same holds for $\hat{U}^{n}_{i}$. Finally, by (3.30)
and (3.32), this is also true for $\hat{L}^{n}_{i}$ and $\hat{X}^{n}_{i}$. ∎
### 3.4. Empirical process tightness
###### Lemma 3.6.
$\bar{\xi}^{n}$ are $C$-tight in
$D({\mathbb{R}}_{+},{\mathcal{M}}_{1}({\mathbb{R}}_{+}))$.
Proof. Denote by $C^{\varepsilon}$ the $\varepsilon$-neighborhood, in
${\mathbb{R}}_{+}$, of a set $C\subset{\mathbb{R}}_{+}$, and let
$d_{\rm L}(p,q)=\inf\\{\varepsilon>0:p(C^{\varepsilon})+\varepsilon\geq
q(C)\text{ and }q(C^{\varepsilon})+\varepsilon\geq p(C)\text{ for all
}C\in{\mathcal{B}}({\mathbb{R}}_{+})\\},$
$p,q\in{\mathcal{M}}_{1}({\mathbb{R}}_{+})$, denote the Levy-Prohorov metric,
which induces the topology of weak convergence on
${\mathcal{M}}_{1}({\mathbb{R}}_{+})$. Since $\bar{\xi}^{n}_{0}$ converge in
probability, proving $C$-tightness of $\bar{\xi}^{n}$ amounts to showing that
for every $\varepsilon>0$ and $\eta>0$ there exists $\delta>0$ such that
(3.33)
$\limsup_{n}{\mathbb{P}}(w_{T}(\bar{\xi}^{n},\delta)>\varepsilon)<\eta.$
Here and below we use the same notation $w_{T}$ for the modulus of continuity computed with respect to the metric $d_{\rm L}$ and with respect to the usual metric on ${\mathbb{R}}$.
We show that (3.33) is a consequence of Lemma 3.5.iii. Fix $\varepsilon>0$ and
$\eta>0$. Given $n$ and $\delta>0$, consider the event
$\mathnormal{\Omega}^{n}_{\delta}=\\{w_{T}(\bar{\xi}^{n},\delta)>\varepsilon\\}$.
On this event there are $0\leq s\leq t\leq T$, $t-s\leq\delta$, and
$C\in{\mathcal{B}}({\mathbb{R}}_{+})$, such that
$\xi^{n}_{t}(C^{\varepsilon})+n\varepsilon<\xi^{n}_{s}(C)\quad\text{ or
}\quad\xi^{n}_{s}(C^{\varepsilon})+n\varepsilon<\xi^{n}_{t}(C).$
In both cases, the number of trajectories $\hat{X}^{n}_{i}$ whose displacement
between times $s$ and $t$ exceeds $\varepsilon$ is greater than
$n\varepsilon$. Therefore
$\\#\\{i\in[n]:w_{T}(\hat{X}^{n}_{i},\delta)\geq\varepsilon\\}>n\varepsilon.$
Hence by Chebyshev’s inequality,
${\mathbb{P}}(\mathnormal{\Omega}^{n}_{\delta})\leq(n\varepsilon)^{-1}n\max_{i}{\mathbb{P}}(w_{T}(\hat{X}^{n}_{i},\delta)>\varepsilon).$
In view of (3.26), given any $\eta_{1}>0$ there is $\delta>0$ such that for
all large $n$, the maximum over $i$ in the above display is $<\eta_{1}$.
Hence, for such $\delta$, the above is $\leq\varepsilon^{-1}\eta_{1}$.
Choosing $\eta_{1}$ such that $\varepsilon^{-1}\eta_{1}<\eta$ and the
corresponding $\delta$ gives (3.33). ∎
## 4\. Proof
### 4.1. An equation for the empirical process
This section provides an equation for the empirical measure that is a
precursor to the parabolic PDE. Fix a test function $\tilde{\phi}$ as in
Definition 2.3, that is, $\tilde{\phi}\in C^{\infty}_{0}({\mathbb{R}}_{+})$,
$\tilde{\phi}(0)=0$. Let
(4.34) $\phi(x)=\int_{0}^{x}\tilde{\phi}(y)dy,\qquad x\in{\mathbb{R}}_{+}.$
Then $\phi\in C^{\infty}_{b}({\mathbb{R}}_{+})$ and
$\phi(0)=\phi^{\prime}(0)=0$. Moreover, $\phi,\phi^{\prime}$ and
$\phi^{\prime\prime}$ are uniformly continuous on ${\mathbb{R}}_{+}$. Apply
$\phi$ to the dynamics (3.24), noting that, unlike $\hat{X}^{n}_{i}$, $\hat{X}^{1,n}_{i}$
may assume negative values. On the r.h.s. of (3.24), the terms
$\hat{A}^{n}_{i}$ and $\hat{M}^{{\rm dep},n}_{i}$ are piecewise constant. Thus
$\displaystyle\phi(\hat{X}^{1,n}_{i}(t))$
$\displaystyle=\phi(\hat{X}^{n}_{i}(0-))+\int_{0}^{t}\phi^{\prime}(\hat{X}^{1,n}_{i}(s))(d\hat{E}^{n,c}_{i}(s)+\hat{b}^{n}_{1}ds)+e^{2,n}_{i}(t)$
(4.35) $\displaystyle\quad+\sum_{s\leq
t}(\phi(\hat{X}^{1,n}_{i}(s))-\phi(\hat{X}^{1,n}_{i}(s-))),$
where $\hat{E}^{n,c}_{i}$ is the continuous part of $\hat{E}^{n}_{i}$,
$e^{2,n}_{i}(t)=\int_{0}^{t}(\phi^{\prime}(\hat{X}^{1,n}_{i}(s))-\phi^{\prime}(\hat{X}^{n}_{i}(s)))d\hat{L}^{n}_{i}(s),$
and we have used (3.22) and the continuity of the sample paths of
$\hat{L}^{n}_{i}$ to write
$\int\phi^{\prime}(\hat{X}^{n}_{i}(s))d\hat{L}^{n}_{i}(s)=\phi^{\prime}(0)\hat{L}^{n}_{i}(t)=0.$
In the last term of (4.35), by Taylor’s expansion, jumps according to
$\hat{E}^{n}_{i}$ can be expressed as
$\phi^{\prime}(\hat{X}^{1,n}_{i}(s-))\mathnormal{\Delta}\hat{E}^{n}_{i}(s)+\frac{1}{2}\phi^{\prime\prime}(\hat{X}^{2,n}_{i}(s))\mathnormal{\Delta}\hat{E}^{n}_{i}(s)^{2},$
where $\hat{X}^{2,n}_{i}(s)$ is an intermediate value between
$\hat{X}^{1,n}_{i}(s-)$ and $\hat{X}^{1,n}_{i}(s)$ (we leave unspecified the
value of the processes $\hat{X}^{2,n}_{i}$ away from times of jumps of
$\hat{X}^{1,n}_{i}$). Similarly, jumps according to $\hat{M}^{{\rm
dep},n}_{i}$ are expressed as
$-\phi^{\prime}(\hat{X}^{1,n}_{i}(s-))\mathnormal{\Delta}\hat{M}^{{\rm
dep},n}_{i}(s)+\frac{1}{2}\phi^{\prime\prime}(\hat{X}^{2,n}_{i}(s))\mathnormal{\Delta}\hat{M}^{{\rm
dep},n}_{i}(s)^{2},$
where again $\hat{X}^{2,n}_{i}(s)$ are intermediate points, and because, a.s.,
jumps of $\hat{E}^{n}_{i}$ and $\hat{M}^{{\rm dep},n}_{i}$ do not occur
simultaneously (for the same $i$), we may express the intermediate values by
the same process. Finally, jumps according to $\hat{A}^{n}_{i}$ – here we only
need first order approximation – are expressed as
$\phi^{\prime}(\hat{X}^{2,n}_{i}(s))\mathnormal{\Delta}\hat{A}^{n}_{i}(s).$
Note that jumps of $\hat{E}^{n}_{i}$ and $\hat{A}^{n}_{0}$ are of size
$n^{-1/2}$. Hence in all three cases,
(4.36)
$|\hat{X}^{2,n}_{i}(t)-\hat{X}^{1,n}_{i}(t-)|\leq|\mathnormal{\Delta}\hat{X}^{1,n}_{i}(t)|\leq\max\\{|\mathnormal{\Delta}\hat{M}^{{\rm
dep},n}_{i}(t)|,n^{-1/2}\\},\qquad t\in{\mathcal{J}}^{n}_{i},$
where ${\mathcal{J}}^{n}_{i}$ is the set of jump times of $\hat{X}^{1,n}_{i}$.
By (4.35), using
$\hat{E}^{n}_{i}=\hat{E}^{n,c}_{i}+\mathnormal{\Delta}\hat{E}^{n}_{i}$ and
$\int_{0}^{t}\phi^{\prime}(\hat{X}^{1,n}_{i}(s))d\hat{E}^{n,c}_{i}(s)=\int_{0}^{t}\phi^{\prime}(\hat{X}^{1,n}_{i}(s-))d\hat{E}^{n,c}_{i}(s)$,
we have
(4.37)
$\displaystyle\langle\phi,\bar{\xi}^{n}_{t}\rangle=\langle\phi,\bar{\xi}^{n}_{0-}\rangle+\int_{0}^{t}\langle
b_{1}\phi^{\prime}+\frac{\sigma^{2}}{2}\phi^{\prime\prime},\bar{\xi}^{n}_{s}\rangle
ds+\mathnormal{\Gamma}^{n}(t)+\sum_{j=1}^{6}f^{j,n}(t),$
where
(4.38)
$\mathnormal{\Gamma}^{n}(t)=\frac{b_{0}}{n}\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))\Big{(}\frac{n-{\mathcal{R}}^{n}_{i}(s)}{n}\Big{)}^{\ell-1}ds$
is the interaction term,
$\displaystyle f^{1,n}(t)$ $\displaystyle=f^{1,n}_{1}(t)+f^{1,n}_{2}(t)$
$\displaystyle:=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{1,n}_{i}(s-))d\hat{E}^{n}_{i}(s)-\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{1,n}_{i}(s-))d\hat{M}^{{\rm
dep},n}_{i}(s),$ $\displaystyle f^{2,n}(t)$
$\displaystyle=\langle\phi,\bar{\xi}^{n}_{t}\rangle-\langle\phi,\bar{\xi}^{1,n}_{t}\rangle,$
$\displaystyle f^{3,n}(t)$ $\displaystyle=\frac{1}{n}\sum_{i}e^{2,n}_{i}(t),$
$\displaystyle f^{4,n}(t)$
$\displaystyle=\hat{b}^{n}_{1}\int_{0}^{t}\langle\phi^{\prime},\bar{\xi}^{1,n}_{s}\rangle
ds-b_{1}\int_{0}^{t}\langle\phi^{\prime},\bar{\xi}^{n}_{s}\rangle ds,$
$\displaystyle f^{5,n}(t)$
$\displaystyle=\frac{1}{2n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{2,n}_{i}(s))\\{d[\hat{E}^{n}_{i}](s)+d[\hat{M}^{{\rm
dep},n}_{i}](s)\\}-\frac{\sigma^{2}}{2}\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{n}_{s}\rangle
ds,$ $\displaystyle f^{6,n}(t)$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{2,n}_{i}(s))d\hat{A}^{n}_{i}(s)-\mathnormal{\Gamma}^{n}(t)$
are “error” terms, and
$\bar{\xi}^{1,n}_{t}=n^{-1}\sum_{i\in[n]}\delta_{\hat{X}^{1,n}_{i}(t)}.$
Note that the terms $f^{5,n}$ and $f^{6,n}$, which involve
$\hat{X}^{2,n}_{i}$, are well defined, for their evaluation requires the
values of $\hat{X}^{2,n}_{i}(t)$ only at ${\mathcal{J}}^{n}_{i}$.
###### Lemma 4.1.
With $\phi$ as in (4.34), for $1\leq j\leq 6$, $f^{j,n}\to 0$ in probability
in $D({\mathbb{R}}_{+},{\mathbb{R}})$.
We note that one of the reasons the proof is somewhat involved is that we work
under minimal moment assumptions. For example, estimates on the second order
term $f^{5,n}$ use martingales that are driven by squares of the primitive
data $\zeta^{n}_{i}(k)$. Since the latter are assumed only to possess a second
moment, estimates based on the corresponding quadratic variation are not
applicable, for it does not, in general, have finite expectation.
Proof. Step 1. Estimating $f^{1,n}$. The cross variation between independent
Poisson processes is zero, hence $[\hat{E}^{n}_{i},\hat{E}^{n}_{j}](t)=0$ for
$i\neq j$. Hence by Lemma 3.3,
$[f^{1,n}_{1}](t)\leq cn^{-2}\sum_{i}[\hat{E}^{n}_{i}](t)\leq
cn^{-3}\sum_{i}E^{n}_{i}(t).$
Since each $E^{n}_{i}$ is a Poisson process of intensity $\leq cn$,
${\mathbb{E}}\\{[f^{1,n}_{1}](t)\\}\leq ctn^{-1}$, and it follows by the BDG
inequality that $f^{1,n}_{1}\to 0$ in probability. As for $f^{1,n}_{2}$, we
have by Lemma 3.3.ii that $[\hat{M}^{{\rm dep},n}_{i},\hat{M}^{{\rm
dep},n}_{j}](t)$, $i\neq j$, has zero mean. Hence by Lemma 3.3.i,
${\mathbb{E}}\\{[f^{1,n}_{2}](t)\\}\leq
cn^{-2}\sum_{i}{\mathbb{E}}\\{[\hat{M}^{{\rm dep},n}_{i}](t)\\}\leq
cn^{-3}\sum_{i}{\mathbb{E}}\\{D^{n}_{i}(t)\\}\leq cn^{-1}t.$
Thus $f^{1,n}_{2}\to 0$ in probability, and we conclude that $f^{1,n}\to 0$ in
probability.
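In both cases the passage from the bound on the expected quadratic variation to convergence in probability is through the BDG inequality (with $p=1$) and Jensen’s inequality; for example,
${\mathbb{E}}[\|f^{1,n}_{1}\|^{*}_{t}]\leq c\,{\mathbb{E}}\\{[f^{1,n}_{1}](t)^{1/2}\\}\leq c\big({\mathbb{E}}\\{[f^{1,n}_{1}](t)\\}\big)^{1/2}\leq c(tn^{-1})^{1/2}\to 0.$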
Step 2. We show that for all $t>0$ and $\varepsilon>0$,
(4.39)
$\lim_{n\to\infty}\max_{i\in[n]}{\mathbb{P}}(\|e^{1,n}_{i}\|^{*}_{t}>\varepsilon)=0.$
In accordance with (2.13) we denote
$\tilde{Z}^{n}_{i}(k)=\tilde{\mu}^{n}Z^{n}_{i}(k)$, $k\geq 1$. By the
definition of $e^{1,n}_{i}(t)$ and $T^{n}_{i}(t)\leq t$,
$\displaystyle|e^{1,n}_{i}(t)|\leq\tilde{Z}^{n}_{i}(0)\vee\|\tilde{R}^{n}_{i}\|^{*}_{t}$
$\displaystyle\leq\max\\{\tilde{Z}^{n}_{i}(k):0\leq k\leq S^{n}_{i}(t)\\}$
(4.40) $\displaystyle\leq\max\\{\tilde{Z}^{n}_{i}(k):0\leq k\leq
S^{0,n}_{i}(t)\\}.$
Thus, by (2.13) and the fact that the law of $((\tilde{Z}^{n}_{i}(k),k\geq
1),S^{0,n}_{i}(t))$ does not depend on $i$, it suffices to prove that
$\tilde{Y}^{n}:=\max\\{\tilde{Z}^{n}_{1}(k):1\leq k\leq S^{0,n}_{1}(t)\\}\to
0\text{ in probability.}$
This can be argued via the $C$-tightness of $\\{\hat{S}^{0,n}_{1}\\}$ as
follows. Given $n$ and $t_{1}<t_{2}$, we have
(4.41) $\text{if $S^{0,n}_{1}(t_{1})=S^{0,n}_{1}(t_{2})$ then
}\hat{S}^{0,n}_{1}(t_{1})-\hat{S}^{0,n}_{1}(t_{2})=\tilde{\mu}^{n}(t_{2}-t_{1}).$
This shows that
$\max\\{Z^{n}_{1}(k):1\leq k\leq S^{0,n}_{1}(t)-1\\}\leq
2(\tilde{\mu}^{n})^{-1}\|\hat{S}^{0,n}_{1}\|^{*}_{t}.$
To include also $k=S^{0,n}_{1}(t)$, let the residual time process
$R^{0,n}_{1}$ be defined analogously to $R^{n}_{1}$, for $S^{0,n}_{1}$ in
place of $S^{n}_{1}$. Note that if $R^{0,n}_{1}(t)>1$ holds then (4.41) holds
with $t_{1}=t$ and $t_{2}=t+1$ hence
$\hat{S}^{0,n}_{1}(t)-\hat{S}^{0,n}_{1}(t+1)=\tilde{\mu}^{n}$. Because
$\tilde{\mu}^{n}\to\infty$ and $\|\hat{S}^{0,n}_{1}\|^{*}_{t+1}$ are tight,
this shows that w.h.p., $R^{0,n}_{1}(t)\leq 1$. As a result, again by (4.41),
w.h.p.,
$Y^{n}:=\max\\{Z^{n}_{1}(k):1\leq k\leq S^{0,n}_{1}(t)\\}\leq
2(\tilde{\mu}^{n})^{-1}\|\hat{S}^{0,n}_{1}\|^{*}_{t+1}.$
This shows that $Y^{n}\to 0$ in probability. Next, given $\delta>0$, on the
event $Y^{n}<\delta$, we have by (4.41),
$\displaystyle w_{t+1}(\hat{S}^{0,n}_{1},\delta)$
$\displaystyle\geq\sup\\{|\hat{S}^{0,n}_{1}(t_{1})-\hat{S}^{0,n}_{1}(t_{2})|:0\leq
t_{1}<t_{2}\leq t+1,\,S^{0,n}_{1}(t_{1})=S^{0,n}_{1}(t_{2})\\}$
$\displaystyle\geq\tilde{\mu}^{n}\sup\\{t_{2}-t_{1}:0\leq t_{1}<t_{2}\leq
t+1,\,S^{0,n}_{1}(t_{1})=S^{0,n}_{1}(t_{2})\\}$
$\displaystyle\geq\tilde{\mu}^{n}Y^{n}.$
Hence, w.h.p., $\tilde{Y}^{n}=\tilde{\mu}^{n}Y^{n}\leq
w_{t+1}(\hat{S}^{0,n}_{1},\delta)$. Using the $C$-tightness of
$\hat{S}^{0,n}_{1}$, sending $n\to\infty$ and then $\delta\to 0$ shows that
$\tilde{Y}^{n}\to 0$ in probability. This proves (4.39).
Step 3. We can now control $f^{2,n}$ and $f^{4,n}$. Recall (3.23). Then,
denoting by $m_{\phi}(\cdot)$ the modulus of continuity of $\phi$, for any
$\delta>0$,
$\|f^{2,n}\|^{*}_{t}\leq\frac{1}{n}\sum_{i}\|\phi(\hat{X}^{n}_{i})-\phi(\hat{X}^{1,n}_{i})\|^{*}_{t}\leq
m_{\phi}(\delta)+\frac{2\|\phi\|_{\infty}}{n}\\#\\{i:\|e^{1,n}_{i}\|^{*}_{t}>\delta\\}.$
Given $\varepsilon>0$ let $\delta>0$ be such that
$m_{\phi}(\delta)<\varepsilon/2$. Then
${\mathbb{P}}(\|f^{2,n}\|^{*}_{t}>\varepsilon)\leq
4\|\phi\|_{\infty}\varepsilon^{-1}\max_{i}{\mathbb{P}}(\|e^{1,n}_{i}\|^{*}_{t}>\delta).$
Sending $n\to\infty$, the above converges to $0$ by (4.39), proving that
$f^{2,n}\to 0$ in probability.
Next, similarly,
$\langle\phi^{\prime},\bar{\xi}^{1,n}\rangle-\langle\phi^{\prime},\bar{\xi}^{n}\rangle\to
0$ in probability, and
$\|f^{4,n}\|^{*}_{t}\leq
t\,|\hat{b}^{n}_{1}|\,\|\langle\phi^{\prime},\bar{\xi}^{1,n}\rangle-\langle\phi^{\prime},\bar{\xi}^{n}\rangle\|^{*}_{t}+|\hat{b}^{n}_{1}-b_{1}|\int_{0}^{\cdot}\langle\phi^{\prime},\bar{\xi}^{n}_{s}\rangle
ds.$
The first term on the right converges to $0$ in probability by the argument
shown for $f^{2,n}$ and the boundedness of $\hat{b}^{n}_{1}$. In the second
term, the integral is bounded by $t\|\phi^{\prime}\|_{\infty}$, hence the
convergence of this term to zero follows from $\hat{b}^{n}_{1}\to b_{1}$. This
proves $f^{4,n}\to 0$ in probability.
Step 4. To estimate $f^{3,n}$, note that, for any $\delta>0$,
$\|f^{3,n}\|^{*}_{t}\leq\frac{m_{\phi^{\prime}}(\delta)}{n}\sum_{i}\hat{L}^{n}_{i}(t)+\frac{2\|\phi^{\prime}\|_{\infty}}{n}\sum_{i}1_{\\{\|e^{1,n}_{i}\|^{*}_{t}>\delta\\}}\hat{L}^{n}_{i}(t).$
Hence
${\mathbb{E}}[\|f^{3,n}\|^{*}_{t}]\leq\frac{m_{\phi^{\prime}}(\delta)}{n}\sum_{i}{\mathbb{E}}[\hat{L}^{n}_{i}(t)]+\frac{2\|\phi^{\prime}\|_{\infty}}{n}\sum_{i}{\mathbb{P}}(\|e^{1,n}_{i}\|^{*}_{t}>\delta)^{1/2}({\mathbb{E}}[\hat{L}^{n}_{i}(t)^{2}])^{1/2}.$
Since by Lemma 3.5.iii,
$\max_{i\in[n]}{\mathbb{E}}[\hat{L}^{n}_{i}(t)^{2}]<c$, it follows that
${\mathbb{E}}[\|f^{3,n}\|^{*}_{t}]\leq
cm_{\phi^{\prime}}(\delta)+c\max_{i\in[n]}{\mathbb{P}}(\|e^{1,n}_{i}\|^{*}_{t}>\delta).$
In view of (4.39), if we take $n\to\infty$ and then $\delta\to 0$, the
expression on the right converges to $0$, hence $f^{3,n}\to 0$ in probability.
Step 5. Recall
$f^{5,n}(t)=\frac{1}{2n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{2,n}_{i}(s))\\{d[\hat{E}^{n}_{i}](s)+d[\hat{M}^{{\rm
dep},n}_{i}](s)\\}-\frac{\sigma^{2}}{2}\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{n}_{s}\rangle
ds,$ $[\hat{E}^{n}_{i}](t)=n^{-1}E^{n}_{i}(t)\qquad[\hat{M}^{{\rm
dep},n}_{i}](t)=n^{-1}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)^{2}.$
Let
$\displaystyle f^{5,n}_{1}$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s-))d[\hat{E}^{n}_{i}](s)-\lambda\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{1,n}_{s}\rangle
ds,$ $\displaystyle f^{5,n}_{2}$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s-))d[\hat{M}^{{\rm
dep},n}_{i}](s)-\lambda\sigma_{\rm
ser}^{2}\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{1,n}_{s}\rangle ds,$
$\displaystyle f^{5,n}_{3}$
$\displaystyle=\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{n}_{s}\rangle
ds-\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{1,n}_{s}\rangle ds$
$\displaystyle f^{5,n}_{4}$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}(\phi^{\prime\prime}(\hat{X}^{2,n}_{i}(s))-\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s-)))\\{d[\hat{E}^{n}_{i}](s)+d[\hat{M}^{{\rm
dep},n}_{i}](s)\\}.$
Then $f^{5,n}=\sum_{j}f^{5,n}_{j}$. For $f^{5,n}_{1}$, write
$\displaystyle f^{5,n}_{1}$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s-))n^{-1}dE^{n}_{i}(s)-\lambda\int_{0}^{t}\langle\phi^{\prime\prime},\bar{\xi}^{1,n}_{s}\rangle
ds$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s-))n^{-1/2}d\hat{E}^{n}_{i}(s)+\frac{1}{n}\sum_{i}\int_{0}^{t}\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(s))n^{-1/2}\hat{\lambda}^{n}ds.$
In the first sum, each term is a martingale, with expected quadratic variation
$\leq cn^{-2}\lambda^{n}t\leq cn^{-1}t$ where we use Lemma 3.3.i, and $c$ does
not depend on $i$ or $n$. Hence the first normalized sum is a martingale whose
expected quadratic variation is $\leq cn^{-2}t$. In particular, it converges
to zero in probability. In the second sum, each term is bounded in absolute
value by $ctn^{-1/2}$, hence the second normalized sum converges to zero. This
shows $f^{5,n}_{1}\to 0$ in probability.
Using Lemma 3.3 for the expression for $[\hat{M}^{{\rm dep},n}_{i}]$ and
denoting
$P^{n}_{i}(t)=\phi^{\prime\prime}(\hat{X}^{1,n}_{i}(t)),\qquad
Q^{n}_{i}(t)=\sum_{k=1}^{D^{n}_{i}(t)}q^{n}_{i}(k),\qquad
q^{n}_{i}(k)=\zeta^{n}_{i}(k)^{2}-\sigma_{\rm ser}^{2},$
write $f^{5,n}_{2}$ as
(4.42) $\displaystyle f^{5,n}_{2}$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}P^{n}_{i}(s-)n^{-1}dQ^{n}_{i}(s)+\frac{\sigma_{\rm
ser}^{2}}{n}\sum_{i}\int_{[0,t]}P^{n}_{i}(s-)\\{n^{-1}dD^{n}_{i}(s)-\lambda
ds\\}.$
To estimate the first term above, note that, by (3.8),
$D^{n}_{i}(t)=S^{n}_{i}(T^{n}_{i}(t))\leq
S^{n}_{i}(t)=n^{1/2}\hat{S}^{n}_{i}(t)+\mu^{n}t\leq
n^{1/2}\hat{S}^{n}_{i}(t)+c_{1}nt,$
for a constant $c_{1}$. For any $\beta>2c_{1}t$,
$\max_{i\in[n]}{\mathbb{P}}(D^{n}_{i}(t)>\beta
n)\leq\max_{i\in[n]}{\mathbb{P}}(\hat{S}^{n}_{i}(t)>n^{1/2}\beta/2)\leq
cn^{-1}\beta^{-2},$
by Lemma 3.5.iii, where $c=c(t)$. Denote the jump times of $D^{n}_{i}$ by
$0\leq t^{n}_{i}(1)<t^{n}_{i}(2)<\cdots$, and
$m^{n}_{i}(N)=\sum_{k=1}^{N}P^{n}_{i}(t^{n}_{i}(k)-)q^{n}_{i}(k)$. Then the
first term in (4.42) is given by $n^{-2}\sum_{i}m^{n}_{i}(D^{n}_{i}(t))$,
which we write as
(4.43) $\frac{1}{n^{2}}\sum_{i}m^{n}_{i}(D^{n}_{i}(t)\wedge(\beta
n))+\frac{1}{n^{2}}\sum_{i}\\{m^{n}_{i}(D^{n}_{i}(t))-m^{n}_{i}(D^{n}_{i}(t)\wedge(\beta
n))\\}.$
Now, $m^{n}_{i}(\cdot)$ is a discrete parameter martingale with increments
bounded by $\|\phi^{\prime\prime}\|_{\infty}|q^{n}_{i}(k)|$. Moreover,
$q^{n}_{i}(k)$ are IID in $k$ and $i$, and possesses a first moment in view of
our second moment assumption on the service times. Hence Lemma 3.4 is
applicable, showing that
$N^{-1}{\mathbb{E}}\|m^{n}_{i}\|^{*}_{N}<A(N)\to 0\qquad\text{as }N\to\infty,$
where $A(\cdot)$ does not depend on $n$ or $i$. Hence
${\mathbb{E}}\Big{|}\frac{1}{n^{2}}\sum_{i}m^{n}_{i}(D^{n}_{i}(t)\wedge(\beta
n))\Big{|}\leq\frac{1}{n}\sum_{i}n^{-1}{\mathbb{E}}[\|m^{n}_{i}\|^{*}_{\beta
n}]\leq n^{-1}\beta nA(\beta n)=\beta A(\beta n),$
where throughout this paragraph $\beta n$ should be read as $\lfloor\beta
n\rfloor$. Hence the limit of the above expression is $0$ for any $\beta$.
Next, using $D^{n}_{i}\leq S^{n}_{i}$,
$\displaystyle{\mathbb{E}}\Big{|}\frac{1}{n^{2}}\sum_{i}\\{m^{n}_{i}(D^{n}_{i}(t))-m^{n}_{i}(D^{n}_{i}(t)\wedge(\beta
n))\\}\Big{|}$
$\displaystyle\leq\|\phi^{\prime\prime}\|_{\infty}n^{-2}\sum_{i}{\mathbb{E}}\Big{[}1_{\\{S^{n}_{i}(t)>\beta
n\\}}\sum_{k=\beta n+1}^{S^{n}_{i}(t)}|q^{n}_{i}(k)|\Big{]}$
$\displaystyle\leq cn^{-2}\sum_{i}{\mathbb{E}}\Big{[}\sum_{k=1}^{(S^{n}_{i}(t)-\beta
n)^{+}}|q^{n}_{i}(k+\beta n)|\Big{]}$
$\displaystyle=cn^{-2}\sum_{i}{\mathbb{E}}[(S^{n}_{i}(t)-\beta
n)^{+}]{\mathbb{E}}[|q^{n}_{i}(1)|]$ $\displaystyle\leq
cn^{-1}\max_{i\in[n]}{\mathbb{E}}[(S^{n}_{i}(t)-\beta n)^{+}],$
where Wald’s identity is used on the third line. Since it follows from Lemma
3.5.iii that $n^{-1}S^{n}_{i}(t)$ are uniformly integrable in $i$ and $n$,
$\lim_{\beta\to\infty}\limsup_{n}n^{-1}\max_{i\in[n]}{\mathbb{E}}[(S^{n}_{i}(t)-\beta
n)^{+}]=0$. As a result, the expression in (4.43) converges to zero in
probability.
As for the second term in (4.42), in view of Lemma 3.5.iii and (2.8), one has
$\max_{i\in[n]}\|n^{-1}S^{n}_{i}-\lambda\iota\|^{*}_{t}\to 0\qquad\text{in
probability}.$
Also from Lemma 3.5.iii and the relation (2.12) between $\hat{L}^{n}_{i}$ and
$T^{n}_{i}$, $\max_{i\in[n]}\|T^{n}_{i}-\iota\|^{*}_{t}\to 0$ in probability.
It follows that $D^{n}_{i}=S^{n}_{i}(T^{n}_{i})$ also satisfies
$\kappa_{n}:=\max_{i\in[n]}\|n^{-1}D^{n}_{i}-\lambda\iota\|^{*}_{t}\to 0$ in
probability. Denote $\varepsilon=t/j$, $s_{k}=k\varepsilon$ and
$\tilde{P}^{n}_{i}(s)=P^{n}_{i}(s-)$. Then
$\displaystyle\int_{(0,t]}\tilde{P}^{n}_{i}(s)\\{n^{-1}dD^{n}_{i}(s)-\lambda
ds\\}$
$\displaystyle=\sum_{k=0}^{j-1}\Big{\\{}\int_{(s_{k},s_{k+1}]}(\tilde{P}^{n}_{i}(s)-\tilde{P}^{n}_{i}(s_{k}))\\{n^{-1}dD^{n}_{i}(s)-\lambda
ds\\}$
$\displaystyle\qquad\qquad+\tilde{P}^{n}_{i}(s_{k})\\{n^{-1}D^{n}_{i}(s_{k+1})-n^{-1}D^{n}_{i}(s_{k})-\lambda\varepsilon\\}\Big{\\}}.$
This gives
(4.44)
$\displaystyle\Big{|}\int_{(0,t]}\tilde{P}^{n}_{i}(s)\\{n^{-1}dD^{n}_{i}(s)-\lambda
ds\\}\Big{|}$ $\displaystyle\leq
w_{T}(\tilde{P}^{n}_{i},\varepsilon)(n^{-1}D^{n}_{i}(t)+\lambda
t)+\|\tilde{P}^{n}_{i}\|^{*}_{t}2j\kappa_{n}$ $\displaystyle\leq
w_{T}(\tilde{P}^{n}_{i},\varepsilon)(2\lambda
t+\kappa_{n})+\|\phi^{\prime\prime}\|_{\infty}2j\kappa_{n}.$
We recall from Lemma 3.5.iii that $\hat{X}^{n}_{i}$ are uniformly $C$-tight.
Along with the uniform estimate (4.39) on $e^{1,n}_{i}$, this shows that so
are $\tilde{P}^{n}_{i}$. In particular, these processes satisfy (3.26). Hence,
on sending $n\to\infty$ and then $\varepsilon\to 0$, it follows that the right
side of (4.44) converges to zero in probability, uniformly in $i\in[n]$. Hence
the second term in $f^{5,n}_{2}$ converges to zero in probability, and we
conclude that $f^{5,n}_{2}\to 0$ in probability.
Next, recalling the result regarding $f^{2,n}$ and replacing $\phi$ by
$\phi^{\prime\prime}$ shows that $f^{5,n}_{3}\to 0$ in probability.
To bound $f^{5,n}_{4}$, fix $\varepsilon>0$ and let $\delta>0$ be such that
$|x-y|<\delta$ implies
$|\phi^{\prime\prime}(x)-\phi^{\prime\prime}(y)|<\varepsilon$. Let
${\mathcal{J}}^{n,\delta}_{i}=\\{s\in{\mathcal{J}}^{n}_{i}:|\hat{X}^{2,n}_{i}(s)-\hat{X}^{1,n}_{i}(s-)|>\delta\\}$.
For $n$ sufficiently large, the size of jumps of $\hat{E}^{n}$ (which is
$n^{-1/2}$) is smaller than $\delta$, hence, by (4.36), the corresponding jump
times are not members of ${\mathcal{J}}^{n,\delta}_{i}$. Thus the $i$-th term
in $f^{5,n}_{4}$ is bounded, in absolute value, by
$\displaystyle\varepsilon([\hat{E}^{n}_{i}](t)+[\hat{M}^{{\rm
dep},n}_{i}](t))+2\|\phi^{\prime\prime}\|_{\infty}\sum_{s\in{\mathcal{J}}^{n,\delta}_{i},s\leq
t}\mathnormal{\Delta}[\hat{M}^{{\rm dep},n}_{i}](s)$
$\displaystyle\leq\varepsilon
n^{-1}\Big{(}E^{n}_{i}(t)+\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)^{2}\Big{)}+cn^{-1}\sum_{k=1}^{D^{n}_{i}(t)}\zeta^{n}_{i}(k)^{2}1_{\\{n^{-1/2}|\zeta^{n}_{i}(k)|>\delta\\}}$
where we used the fact that for $s\in{\mathcal{J}}^{n,\delta}_{i}$,
$|\mathnormal{\Delta}\hat{X}^{1,n}_{i}(s)|=|\mathnormal{\Delta}\hat{M}^{{\rm
dep},n}_{i}(s)|$ by (3.24). Using (3.15), the expected value of the above
expression is
$\leq\varepsilon n^{-1}(\lambda^{n}t+\sigma_{\rm
ser}^{2}{\mathbb{E}}[D^{n}_{i}(t)])+cn^{-1}{\mathbb{E}}[\zeta^{n}_{1}(1)^{2}1_{\\{n^{-1/2}|\zeta^{n}_{1}(1)|>\delta\\}}]{\mathbb{E}}[D^{n}_{i}(t)],$
where for the last term we used Wald’s identity in exactly the same way as in
the proof of Lemma 3.3. In view of Lemma 3.5.iii,
${\mathbb{E}}[S^{n}_{i}(t)]<cn$, where $c$ does not depend on $i$ or $n$.
Since $D^{n}_{i}(t)\leq S^{n}_{i}(t)$, it follows that
${\mathbb{E}}[|f^{5,n}_{4}|]\leq
c\varepsilon+c{\mathbb{E}}[\zeta^{n}_{1}(1)^{2}1_{\\{n^{-1/2}|\zeta^{n}_{1}(1)|>\delta\\}}].$
Taking $n\to\infty$ then $\varepsilon\to 0$ shows that $f^{5,n}_{4}\to 0$ in
probability.
Step 6. Finally,
$f^{6,n}(t)=\frac{1}{n}\sum_{i}\int_{[0,t]}[\phi^{\prime}(\hat{X}^{2,n}_{i}(s))-\phi^{\prime}(\hat{X}^{n}_{i}(s))]d\hat{A}^{n}_{i}(s)+\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{n}_{i}(s))d\hat{A}^{n}_{i}(s)-\mathnormal{\Gamma}^{n}(t).$
By Lemma 3.3.iii,
$\displaystyle\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{n}_{i}(s-))d\hat{A}^{n}_{i}(s)$
$\displaystyle=\lambda^{n}_{0}n^{-3/2}\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))p_{n,{\mathcal{R}}^{n}_{i}(s)}ds+\bar{M}^{A,n}(t),$
$\displaystyle\bar{M}^{A,n}(t)$
$\displaystyle=\frac{1}{n}\sum_{i}\int_{[0,t]}\phi^{\prime}(\hat{X}^{n}_{i}(s-))d\hat{M}^{A,n}_{i}(s).$
By Lemma 3.3.iii, $[\bar{M}^{A,n}](t)\leq
cn^{-3}\sum_{i}A^{n}_{i}(t)=cn^{-3}A^{n}_{0}(t)$. Since $A^{n}_{0}$ is a
Poisson process of intensity $\lambda^{n}_{0}\leq cn^{3/2}$ (by (2.9)), it
follows that $\bar{M}^{A,n}\to 0$ in probability.
Now, writing
$p_{n,r}=\frac{\ell}{n}\,\frac{(n-r)(n-r-1)\cdots(n-r-\ell+2)}{(n-1)(n-2)\cdots(n-\ell+1)}$
shows that
(4.45)
$p_{n,r}=\frac{\ell}{n}\Big{[}\Big{(}1-\frac{r}{n}\Big{)}^{\ell-1}+\alpha_{n,r}\Big{]},\qquad\alpha_{n}:=\max_{r\in[n]}|\alpha_{n,r}|\to
0.$
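For instance, when $\ell=2$ the product formula gives
$p_{n,r}=\frac{2(n-r)}{n(n-1)}=\frac{2}{n}\Big[\Big(1-\frac{r}{n}\Big)+\alpha_{n,r}\Big],\qquad\alpha_{n,r}=\frac{n-r}{n(n-1)}\leq\frac{1}{n-1},$
so that indeed $\alpha_{n}\to 0$ in this case.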
Hence
$\displaystyle\lambda^{n}_{0}n^{-3/2}\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))p_{n,{\mathcal{R}}^{n}_{i}(s)}ds-\mathnormal{\Gamma}^{n}(t)$
$\displaystyle\qquad=\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))\Big{[}\lambda^{n}_{0}n^{-3/2}p_{n,{\mathcal{R}}^{n}_{i}(s)}-\frac{b_{0}}{n}\Big{(}\frac{n-{\mathcal{R}}^{n}_{i}(s)}{n}\Big{)}^{\ell-1}\Big{]}ds$
$\displaystyle\qquad=\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))\Big{(}\frac{n-{\mathcal{R}}^{n}_{i}(s)}{n}\Big{)}^{\ell-1}\Big{[}\lambda^{n}_{0}n^{-3/2}\frac{\ell}{n}-\frac{b_{0}}{n}\Big{]}ds+\sum_{i}\int_{0}^{t}\phi^{\prime}(\hat{X}^{n}_{i}(s))\lambda^{n}_{0}n^{-3/2}\frac{\ell}{n}\alpha_{n,{\mathcal{R}}^{n}_{i}(s)}ds.$
We have $\lambda^{n}_{0}n^{-3/2}\ell\to b\ell=b_{0}$, which shows that the
first sum converges to $0$. From the fact $\alpha_{n}\to 0$ we also have that
the last sum converges to $0$.
To bound the first term in $f^{6,n}$, note that $\hat{A}^{n}_{i}$ are
nondecreasing. Also, since the jumps of $\hat{A}^{n}_{i}$ are of size
$n^{-1/2}$, one has
$|\phi^{\prime}(\hat{X}^{2,n}_{i}(s))-\phi^{\prime}(\hat{X}^{1,n}_{i}(s))|\leq
m_{\phi^{\prime}}(n^{-1/2})$ at any jump time $s$ of $\hat{A}^{n}_{i}$.
Moreover,
$\max_{i\in[n]}|\phi^{\prime}(\hat{X}^{1,n}_{i}(s))-\phi^{\prime}(\hat{X}^{n}_{i}(s))|\leq\max_{i\in[n]}m_{\phi^{\prime}}(\|e^{1,n}_{i}\|^{*}_{t})\to
0$ in probability, by (4.39). Hence
$\max_{i\in[n]}\Big{|}\int_{[0,t]}[\phi^{\prime}(\hat{X}^{2,n}_{i}(s))-\phi^{\prime}(\hat{X}^{n}_{i}(s))]d\hat{A}^{n}_{i}(s)\Big{|}\leq\max_{i\in[n]}\\{(m_{\phi^{\prime}}(n^{-1/2})+m_{\phi^{\prime}}(\|e^{1,n}_{i}\|^{*}_{t}))\hat{A}^{n}_{i}(t)\\}\to
0$
in probability, where Lemma 3.5.iii is used for a uniform estimate on
$\hat{A}^{n}_{i}(t)$. It follows that $f^{6,n}\to 0$ in probability. ∎
### 4.2. Interaction term under limit
In this subsection we prove the following.
###### Lemma 4.2.
Given a subsequence along which $\bar{\xi}^{n}\Rightarrow\xi$, one has
$\mathnormal{\Gamma}^{n}\Rightarrow\mathnormal{\Gamma}$ in
$C({\mathbb{R}}_{+},{\mathbb{R}})$ along the subsequence, where
$\mathnormal{\Gamma}(t)=\frac{b_{0}}{\ell}\int_{0}^{t}\int_{{\mathbb{R}}_{+}}\phi^{\prime}(x)\mathfrak{S}(\xi_{s}[x,\infty),\xi_{s}(x,\infty))\xi_{s}(dx)ds.$
Recall that if $\xi_{t}$ is atomless for all $t>0$ then the integrand in the
above expression simplifies to
$\ell\phi^{\prime}(x)\xi_{s}[x,\infty)^{\ell-1}$, in which case
$\mathnormal{\Gamma}$ is directly related to the form of the interaction term
in equations (1.1), (1.3) and (2.16). However, the atomless property has not
been established at this stage of the proof.
Proof. Invoking the Skorohod representation theorem, assume without loss of
generality that $\bar{\xi}^{n}\to\xi$ a.s. Because
$0\leq{\mathcal{R}}^{n}_{i}\leq n$, the integrands in (4.38) are bounded by
$\|\phi^{\prime}\|_{\infty}$. Hence by bounded convergence, it suffices to
prove that, a.s., for every $t$,
(4.46)
$\gamma^{n}(t):=\frac{1}{n}\sum_{i}\phi^{\prime}(\hat{X}^{n}_{i}(t))\bar{\mathcal{R}}^{c,n}_{i}(t)^{\ell-1}\to\frac{1}{\ell}\int_{{\mathbb{R}}_{+}}\phi^{\prime}(x)\mathfrak{S}(\xi_{t}[x,\infty),\xi_{t}(x,\infty))\xi_{t}(dx),$
where
${\mathcal{R}}^{c,n}_{i}(t)=n-{\mathcal{R}}^{n}_{i}(t),\qquad\bar{\mathcal{R}}^{c,n}_{i}(t)=n^{-1}{\mathcal{R}}^{c,n}_{i}(t).$
Fix $t$ and $\varepsilon>0$. The function $x\mapsto\xi_{t}(x,\infty)$ has at
most countably many discontinuities. Hence one can find a finite sequence
$x_{*}=y_{0}<y_{1}<\ldots<y_{K}=x^{*}$ such that $[x_{*},x^{*}]$ contains the
compact support of $\phi^{\prime}$ (recall (4.34)),
$y_{k}-y_{k-1}<\varepsilon$, $k=1,\ldots,K$, and $\xi_{t}(\\{y_{k}\\})=0$,
$k=0,\ldots,K$. Because $y_{k}$ are not charged, both
$\bar{\xi}^{n}_{t}(y_{k-1},y_{k})$ and $\bar{\xi}^{n}_{t}[y_{k-1},y_{k}]$
converge a.s. to $\xi_{t}(y_{k-1},y_{k})$. Define the complementary rank by
${\rm rank}^{c}(i;x)=n-{\rm rank}(i;x)$, $x\in{\mathbb{R}}^{n}$, $i\in[n]$.
Then $0\leq{\rm rank}^{c}(i;x)\leq n-1$ and by (2.3),
${\rm rank}^{c}(i;x)=\\#\\{j:x_{j}>x_{i}\\}+\\#\\{j>i:x_{j}=x_{i}\\}.$
Denote $x_{i}=\hat{X}^{n}_{i}(t)$, $x=(x_{1},\ldots,x_{n})$, and for
$k=1,\ldots,K$,
$I_{k}=[y_{k-1},y_{k}),\qquad V_{k}=\\{i:x_{i}\in I_{k}\\},\qquad
P_{k}=\xi^{n}_{t}(I_{k}),\qquad Q_{k}=\xi^{n}_{t}[y_{k},\infty).$
Fix $k$, and for $i\in V_{k}$ let $j(i)={\rm rank}(i;(x_{l})_{l\in V_{k}})$.
This is a relabeling of $V_{k}$ according to the rank of its members within
$V_{k}$. With this notation we have
${\rm rank}^{c}(i;x)=Q_{k}+P_{k}-j(i),\qquad i\in V_{k}.$
Because $j(i)$ takes all values between $1$ and $P_{k}$ as $i$ varies in
$V_{k}$, we have
$\sum_{i\in V_{k}}{\rm
rank}^{c}(i;x)^{\ell-1}=\sum_{j=1}^{P_{k}}(Q_{k}+P_{k}-j)^{\ell-1}=\sum_{j=0}^{P_{k}-1}(Q_{k}+j)^{\ell-1}.$
If $p_{n}$ is a sequence satisfying $n^{-1}p_{n}\to p\geq 0$ then
$n^{-1}\sum_{j=0}^{p_{n}}(j/n)^{\ell-1}\to\int_{0}^{p}z^{\ell-1}dz=\ell^{-1}p^{\ell}$.
Noting, as mentioned, that $n^{-1}P_{k}\to\xi_{t}(I_{k})$, we get
(4.47) $\frac{1}{n}\sum_{i\in
V_{k}}\bar{\mathcal{R}}^{c,n}_{i}(t)^{\ell-1}\to\frac{1}{\ell}(\xi_{t}(y_{k-1},\infty)^{\ell}-\xi_{t}(y_{k},\infty)^{\ell})=\frac{1}{\ell}\xi_{t}(I_{k})\mathfrak{S}(\xi_{t}(y_{k-1},\infty),\xi_{t}(y_{k},\infty)).$
Fix a sequence $\varepsilon_{m}>0$, $\varepsilon_{m}\to 0$. For each $m$, let
the points constructed as the $y_{k}$ above, corresponding to $\varepsilon_{m}$, be denoted by
$y_{k}^{m}$. For $x\in[x_{*},x^{*}]$ let $x_{(m)}$ and $x^{(m)}$ be the unique
two points $y_{k-1}^{m}$ and $y_{k}^{m}$, respectively, for which
$x\in[y_{k-1}^{m},y_{k}^{m})$. This gives $x^{(m)}\to x$, $x^{(m)}>x$. By
right-continuity of $x\mapsto\xi_{t}(x,\infty)$, it follows that
$\xi_{t}(x^{(m)},\infty)\to\xi_{t}(x,\infty)$ for every $x\in[x_{*},x^{*}]$.
Similarly, by left continuity of $x\mapsto\xi_{t}[x,\infty)$, and $x_{(m)}\to
x$, $x_{(m)}\leq x$, we obtain $\xi_{t}(x_{(m)},\infty)\to\xi_{t}[x,\infty)$.
Let
$h_{(m)}(x)=\inf\\{\phi^{\prime}(z):z\in[x_{(m)},x^{(m)}]\\},\qquad
h^{(m)}(x)=\sup\\{\phi^{\prime}(z):z\in[x_{(m)},x^{(m)}]\\}.$
Summing over $k$ in (4.47), we have for every $m$,
(4.48) $\displaystyle\liminf_{n}\gamma^{n}(t)$
$\displaystyle\geq\frac{1}{\ell}\int_{[x_{*},x^{*}]}h_{(m)}(x)\mathfrak{S}(\xi_{t}(x_{(m)},\infty),\xi_{t}(x^{(m)},\infty))\xi_{t}(dx)$
(4.49) $\displaystyle\limsup_{n}\gamma^{n}(t)$
$\displaystyle\leq\frac{1}{\ell}\int_{[x_{*},x^{*}]}h^{(m)}(x)\mathfrak{S}(\xi_{t}(x_{(m)},\infty),\xi_{t}(x^{(m)},\infty))\xi_{t}(dx).$
It remains to show the expressions of the r.h.s. of (4.48) and (4.49) both
converge to the r.h.s. of (4.46) as $m\to\infty$. By bounded convergence, it
suffices that the integrands in both these integrals converge, for every $x$,
to $\phi^{\prime}(x)\mathfrak{S}(\xi_{t}[x,\infty),\xi_{t}(x,\infty))$. The
latter convergence holds because, by continuity of $\phi^{\prime}$,
$h_{(m)}(x)\to\phi^{\prime}(x)$ and $h^{(m)}(x)\to\phi^{\prime}(x)$, and
moreover $\xi_{t}(x_{(m)},\infty)\to\xi_{t}[x,\infty)$ and
$\xi_{t}(x^{(m)},\infty)\to\xi_{t}(x,\infty)$. This completes the proof. ∎
### 4.3. PDE satisfied by the limit
We take limits along an arbitrary convergent subsequence. By (4.37), (4.38),
Lemma 4.1 and Lemma 4.2, we have now established that every limit point $\xi$
satisfies (2.19).
Again, fix a convergent subsequence and denote its limit by $\xi$. Some
notation used below is as follows. For a right-continuous nondecreasing
function $V:{\mathbb{R}}\to[0,1]$ with $V(0-)=0$, let $V(dx)$ denote the
Stieltjes measure induced on ${\mathbb{R}}_{+}$. Let
$v(x,t)=\xi_{t}(x,\infty),\qquad\tilde{v}(x,t)=1-v(x,t)=\xi_{t}[0,x].$
These are right continuous functions for every $t$. Thus
$\tilde{v}(dx,t)=\xi_{t}(dx)$. Next, for $V$ as above, let the pure jump part
and continuous part be denoted, respectively, by
$V^{\rm jmp}(x)=\sum_{y\in[0,x]}\mathnormal{\Delta}V(y),\qquad V^{\rm
cts}(x)=V(x)-V^{\rm jmp}(x).$
###### Lemma 4.3.
$v$ is a weak solution to (2.18).
Proof. By equations (4.37) and (4.38) and Lemmas 4.1 and 4.2,
$\langle\phi,\xi_{t}\rangle=\langle\phi,\xi_{0}\rangle+\int_{0}^{t}\langle
b_{1}\phi^{\prime}+a\phi^{\prime\prime},\xi_{s}\rangle
ds+\mathnormal{\Gamma}(t).$
Write $\mathnormal{\Gamma}(t)$ as $\frac{b_{0}}{\ell}\int_{0}^{t}A(s)ds$ where
$A(t)=\int\phi^{\prime}(x)\mathfrak{S}(v(x-,t),v(x,t))\tilde{v}(dx,t).$
Let $\tilde{v}(x,t)=\tilde{v}^{\rm cts}(x,t)+\tilde{v}^{\rm jmp}(x,t)$ denote
the decomposition alluded to above. Let $U(x,t)=1-v(x,t)^{\ell}$. Then
$U(dx,t)=\ell v(x,t)^{\ell-1}\tilde{v}(dx,t).$
Hence
$U^{\rm cts}(dx,t)=\ell v(x,t)^{\ell-1}\tilde{v}^{\rm cts}(dx,t),\qquad U^{\rm
jmp}(dx,t)=\ell v(x,t)^{\ell-1}\tilde{v}^{\rm jmp}(dx,t).$
Now,
$A(t)=\int\phi^{\prime}(x)\ell v(x,t)^{\ell-1}\tilde{v}^{\rm
cts}(dx,t)+\int\phi^{\prime}(x)\frac{v(x-,t)^{\ell}-v(x,t)^{\ell}}{v(x-,t)-v(x,t)}\tilde{v}^{\rm
jmp}(dx,t).$
The second integral on the right is
$\sum_{x\in[0,\infty)}\phi^{\prime}(x)(v(x-,t)^{\ell}-v(x,t)^{\ell})=\sum_{x\in[0,\infty)}\phi^{\prime}(x)(U(x,t)-U(x-,t))=\int\phi^{\prime}(x)U^{\rm
jmp}(dx,t).$
Therefore, using $\phi^{\prime}(0)=0$,
$\displaystyle A(t)$ $\displaystyle=\int\phi^{\prime}(x)(U^{\rm
cts}(dx,t)+U^{\rm jmp}(dx,t))=\int\phi^{\prime}(x)U(dx,t)$
$\displaystyle=-\int\phi^{\prime\prime}(x)U(x,t)dx=\int\phi^{\prime\prime}(x)v(x,t)^{\ell}dx.$
Using integration by parts we have
$\langle\phi,\xi_{t}\rangle=\int_{{\mathbb{R}}_{+}}(v(x,t)-1)\phi^{\prime}(x)dx$.
Recalling $\phi^{\prime}=\tilde{\phi}$ and combining the above calculations,
we obtain
$\displaystyle\int\tilde{\phi}(x)(v(x,t)-1)dx$
$\displaystyle=\int\tilde{\phi}(x)(v(x,0)-1)dx+\int_{0}^{t}\int(b_{1}\tilde{\phi}^{\prime}(x)+a\tilde{\phi}^{\prime\prime}(x))(v(x,s)-1)dxds$
$\displaystyle\qquad+\frac{b_{0}}{\ell}\int_{0}^{t}\int\tilde{\phi}^{\prime}(x)v(x,s)^{\ell}dxds.$
Thus
$\displaystyle\int\tilde{\phi}(x)v(x,t)dx$
$\displaystyle=\int\tilde{\phi}(x)v(x,0)dx+\int_{0}^{t}\int(b_{1}\tilde{\phi}^{\prime}(x)+a\tilde{\phi}^{\prime\prime}(x))v(x,s)dxds+a\int_{0}^{t}\tilde{\phi}^{\prime}(0)ds$
$\displaystyle\qquad+\frac{b_{0}}{\ell}\int_{0}^{t}\int\tilde{\phi}^{\prime}(x)v(x,s)^{\ell}dxds.$
According to Definition 2.3, this shows that $v$ is a weak solution of (2.18)
once it is verified that $v\in\mathbb{L}^{\infty}_{\rm
loc}({\mathbb{R}}_{+},\mathbb{L}^{1}({\mathbb{R}}_{+}))$.
This property can otherwise be stated as
$\sup_{t\in(0,T]}\int_{0}^{\infty}\xi_{t}[x,\infty)dx<\infty$ a.s., for every
$T<\infty$. Invoking Skorohod’s representation theorem we may assume w.l.o.g.
that, along the chosen convergent subsequence, one has $\bar{\xi}^{n}\to\xi$
a.s. Arguing by contradiction, let there exist a sequence $t_{N}\in(0,T]$ and
an event of positive ${\mathbb{P}}$ measure on which
$\int_{0}^{\infty}\xi_{t_{N}}[x,\infty)dx\to\infty$. Now, for each fixed
$t\in(0,T]$, by Fatou’s lemma,
$\displaystyle{\mathbb{E}}\int_{0}^{\infty}\xi_{t}[x,\infty)dx$
$\displaystyle\leq\liminf_{n}{\mathbb{E}}\int_{0}^{\infty}\bar{\xi}^{n}_{t}[x,\infty)dx$
$\displaystyle\leq\liminf_{n}\frac{1}{n}\sum_{i\in[n]}{\mathbb{E}}\hat{X}^{n}_{i}(t)$
$\displaystyle\leq\sup_{n}\max_{i\in[n]}{\mathbb{E}}\|\hat{X}^{n}_{i}\|^{*}_{T}\leq
c<\infty,$
where $c=c(T)$ and the last assertion follows from Lemma 3.5.iii. This
contradicts the assumption, and hence the uniform $\mathbb{L}^{1}$ property
follows. ∎
### 4.4. Proof of main results
We can now prove Theorems 2.1, 2.4, 2.5, 2.6, 2.7 and Proposition 2.2.
Proof of Theorem 2.4. Having proved uniqueness in Lemma 3.2, we proceed to the
remaining assertions. Existence of a
$C^{\infty}({\mathbb{R}}_{+}\times(0,\infty))$ solution is well known; see
e.g. [22, Lemma 3.2, p. 68].
In the case $\mathfrak{f}(z)=c_{1}z-bz^{\ell}$ and
$v_{0}(x)=\xi_{0}(x,\infty)$, we have proved in Lemma 4.3 that the function
$v$ induced by any subsequential limit $\xi$ is a weak solution. By uniqueness
it must be equal to the smooth function mentioned above. Since $x\mapsto
v(x,t)$ is automatically nonincreasing, the function $u:=-v_{x}$ is
nonnegative and smooth. Using this in (2.17), an integration by parts gives
the first 3 parts of (1.1) as well as that $u$ integrates to $1$ for each
$t>0$.
As for the initial condition $u(\cdot,t)dx\to\xi_{0}(dx)$ in (1.1), again
using (2.17) with $-v_{x}=u$ gives
$\int\phi^{\prime}(x)u(x,t)dx=\int\phi^{\prime}(x)\xi_{0}(dx)+O(t)$ as $t\to
0$, which proves that the initial condition is satisfied. This shows that $u$
thus defined forms a classical solution to (1.1). ∎
Proof of Proposition 2.2. The fact that $u_{\rm stat}$ given in (1.2) is a
stationary solution is verified directly. As for uniqueness, for any solution
$u$, the function $v=\int_{\cdot}^{\infty}u(x)dx$ takes values in $[0,1]$, is
monotone, and satisfies
$\mathfrak{f}(v)^{\prime}+av^{\prime\prime}=0,\qquad
v(0)=1,\qquad\lim_{x\to\infty}v(x)=0.$
Integrating we obtain $\mathfrak{f}(v)+av^{\prime}=c_{1}$, where $c_{1}$ is a
constant. Since $\mathfrak{f}(0)=0$, we obtain by the condition at infinity
that $\lim_{x\to\infty}v^{\prime}(x)=c_{1}/a$, which can only hold if
$c_{1}=0$. Thus $v$ must satisfy the ODE
$v^{\prime}=-a^{-1}\mathfrak{f}(v),\qquad\text{ on }{\mathbb{R}}_{+},\qquad
v(0)=1.$
Since $\mathfrak{f}$ is Lipschitz on $[0,1]$, the result follows by ODE
uniqueness. ∎
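For readers who want a quick numerical picture, the ODE above is straightforward to integrate. The sketch below is ours (not part of the proof); it assumes the particular form $\mathfrak{f}(z)=c_{1}z-bz^{\ell}$ used in Theorem 2.4, with arbitrary illustrative constants, and recovers the stationary profile as $u_{\rm stat}=-v^{\prime}=\mathfrak{f}(v)/a$.

```python
# Illustrative sketch only: integrate v' = -f(v)/a with v(0) = 1 for f(z) = c1*z - b*z**l,
# then recover u_stat = -v' = f(v)/a. All constants are arbitrary choices for illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, c1, b, l = 0.5, 1.0, 0.4, 2
f = lambda z: c1 * z - b * z**l

sol = solve_ivp(lambda x, v: -f(v) / a, t_span=(0.0, 20.0), y0=[1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

x = np.linspace(0.0, 20.0, 2001)
v = sol.sol(x)[0]
u_stat = f(v) / a

print(v[-1])                # close to 0: the condition at infinity
print(np.trapz(u_stat, x))  # close to v(0) - v(infinity) = 1
```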
Proof of Theorem 2.1. The existence of a classical solution has been shown in
the proof of Theorem 2.4 above. The property $\sup_{t\in(0,T]}\int
xu(x,t)dx<\infty$ follows from the fact that $v\in\mathbb{L}^{\infty}_{\rm
loc}({\mathbb{R}}_{+};\mathbb{L}^{1}({\mathbb{R}}_{+}))$. The uniqueness in
the class of $C^{2,1}$ functions satisfying the above uniform integrability
follows directly from uniqueness of weak solutions to (2.16), via integration
by parts. ∎
Proof of Theorem 2.5. The $C$-tightness of $\bar{\xi}^{n}$ stated in Lemma
3.6, the fact stated in Lemma 4.3 that for any limit point $\xi$,
$v(x,t):=\xi_{t}(x,\infty)$ is a weak solution of (2.18), and the uniqueness
stated in Lemma 3.1 imply that $\bar{\xi}^{n}$ possesses a deterministic weak
limit $\xi$, and that $\xi\in
C({\mathbb{R}}_{+},{\mathcal{M}}_{1}({\mathbb{R}}_{+}))$. The assertion that
$\xi$ is given by $\xi_{t}(dx)=u(x,t)dx$, where the latter is the unique
classical solution to (1.1), follows now from Lemma 4.3 and Theorem 2.4. ∎
Proof of Theorem 2.6. By Lemma 3.3.iii and Lemma 3.5.i, for $i\in[k]$,
(4.50)
$\hat{X}^{n}_{i}(t)=\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}(t)-\hat{S}^{n}_{i}(T^{n}_{i}(t))+\hat{b}^{n}_{1}t+\hat{\lambda}^{n}_{0}n^{-1/2}\int_{0}^{t}p_{n,{\mathcal{R}}^{n}_{i}(s)}ds+\hat{M}^{A,n}_{i}(t)+\hat{L}^{n}_{i}(t),$
and $\int\hat{X}^{n}_{i}(t)d\hat{L}^{n}_{i}(t)=0$. From Step 5 in the proof of
Lemma 4.1 we have that $T^{n}_{i}\to\iota$ in probability. Recall that
$(\hat{E}^{n}_{i},\hat{S}^{n}_{i})\Rightarrow(E_{i},S_{i})$ and that
$E_{i}-S_{i}$ are equal in law to $\sigma_{i}W_{i}$, where $W_{i}$ are
mutually independent standard BMs. Since, by assumption,
$(\hat{X}^{n}_{i}(0-))_{i\in[k]}\Rightarrow(X_{i}(0))_{i\in[k]}$, using the
dependence structure (2.7), we obtain
$(\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}-\hat{S}^{n}_{i}(T^{n}_{i}))\Rightarrow(X_{i}(0)+\sigma_{i}W_{i})_{i\in[k]}$,
where $(X_{i}(0))_{i\in[k]}$ is independent of $(W_{i})_{i\in[k]}$.
From (3.28) in the proof of Lemma 3.5 we have that $\hat{M}^{A,n}_{i}\to 0$ in
probability. From Lemma 3.5.iii, we have $C$-tightness of
$(\hat{X}^{n}_{i},\hat{L}^{n}_{i})_{i\in[k]}$. If we denote the integral term
in (4.50) by $\hat{K}^{n}_{i}$, we have tightness of the tuple
$(\hat{X}^{n}_{i},\hat{X}^{n}_{i}(0-)+\hat{E}^{n}_{i}-\hat{S}^{n}_{i}(T^{n}_{i}),\hat{K}^{n}_{i},\hat{L}^{n}_{i})_{i\in[k]}$,
and denoting a subsequential weak limit point by $(X_{i},X_{i}(0)+\sigma
W_{i},K_{i},L_{i})_{i\in[k]}$, one has, for $i\in[k]$,
$X_{i}(t)=X_{i}(0)+\sigma W_{i}(t)+b_{1}t+K_{i}(t)+L_{i}(t),\qquad\int
X_{i}dL_{i}=0.$
By uniqueness in law of the system of SDE (1.3), the result will be proved
once it is shown that $K_{i}(t)=b_{0}\int_{0}^{t}v(X_{i}(s),s)^{\ell-1}ds$.
Using Skorohod’s representation theorem we assume without loss that the
convergence along the subsequence is a.s. Thus, in view of (4.45), it suffices
to show that, along the subsequence, one has
(4.51)
$(\bar{\mathcal{R}}^{c,n}_{i})_{i\in[k]}\to(v(X_{i}(\cdot),\cdot))_{i\in[k]},$
a.s., in the uniform topology on $[t_{0},T]$, for any $0<t_{0}<T$.
To this end, recall that Theorem 2.5 establishes, in particular, that
$\bar{\xi}^{n}\to\xi$ in $D({\mathbb{R}}_{+},{\mathcal{M}}_{1})$, where
$\xi\in C({\mathbb{R}}_{+},{\mathcal{M}}_{1})$, and $\xi_{t}$ is atomless for
every $t>0$. Again it may be assumed that the convergence is a.s. Hence, with
$d_{\rm L}$ the Lévy–Prohorov metric, $\|d_{\rm
L}(\bar{\xi}^{n},\xi)\|^{*}_{T}\to 0$ a.s. Because for any $t_{0}>0$ and
$x_{0}>0$, $v$ is uniformly continuous on $[0,x_{0}]\times[t_{0},T]$, this
gives
(4.52) $\sup_{t_{0}\leq t\leq T}\sup_{x\leq
x_{0}}|\bar{\xi}^{n}_{t}[x,\infty)-\xi_{t}[x,\infty)|\to 0,\quad\text{a.s.}$
By the atomless property of $\xi_{t}$, $t>0$, we know that
$\bar{k}_{n}:=\sup\\{\bar{\xi}^{n}_{t}(\\{x\\}):(x,t)\in[0,x_{0}]\times[t_{0},T]\\}\to
0,\quad\text{a.s.}$
Now, $\bar{\mathcal{R}}^{c,n}_{i}=n^{-1}(n-{\mathcal{R}}^{n}_{i})$ satisfies
$\bar{\xi}^{n}_{t}[\hat{X}^{n}_{i}(t),\infty)-\bar{k}_{n}\leq\bar{\mathcal{R}}^{c,n}_{i}(t)\leq\bar{\xi}^{n}_{t}[\hat{X}^{n}_{i}(t),\infty)+\bar{k}_{n},\qquad(x,t)\in[0,x_{0}]\times[t_{0},T],\
i\in[k].$
Using this in (4.52) shows
$\sup_{t_{0}\leq t\leq
T}|\bar{\mathcal{R}}^{c,n}_{i}(t)-v(\hat{X}^{n}_{i}(t),t)|1_{\\{\hat{X}^{n}_{i}(t)\leq
x_{0}\\}}\to 0,\quad\text{a.s.}$
Hence (4.51) follows by the a.s. convergence $\hat{X}^{n}_{i}\to X_{i}$ and
a.s. finiteness of $\|X_{i}\|^{*}_{T}$, $i\in[k]$, completing the proof. ∎
Proof of Theorem 2.7. The assumed exchangeability of $\hat{X}^{n}_{i}(0-)$ for
every $n$, along with the convergence in probability of their empirical law
$\bar{\xi}^{n}_{0-}$ to the deterministic limit $\xi_{0}$, imply the
convergence $(\hat{X}^{n}_{i}(0-))_{i\in[k]}\Rightarrow(X_{i}(0))_{i\in[k]}$
where the latter are mutually independent and each is $\xi_{0}$-distributed
[41, Proposition 2.2]. Hence the hypotheses of Theorem 2.6 hold. Because
Theorem 2.6 asserts that $(\hat{X}^{n}_{i},\hat{L}^{n}_{i})_{i\in[k]}$
converge in law to the solution of the system (1.3), in which $(W_{i})$ are
mutually independent and are independent of $(X_{i}(0))$, the additional
mutual independence of $(X_{i}(0))$ that has just been shown completes the
proof. ∎
Acknowledgment. Research of RA supported by ISF (grant 1035/20).
## References
* [1] P. Agarwal and K. Ramanan. Invariant states of hydrodynamic limits of randomized load balancing networks. arXiv preprint arXiv:2008.08510, 2020.
* [2] R. Aghajani, X. Li, and K. Ramanan. The PDE method for the analysis of randomized load balancing networks. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 1(2):1–28, 2017.
* [3] R. Aghajani and K. Ramanan. The hydrodynamic limit of a randomized load balancing network. The Annals of Applied Probability, 29(4):2114–2174, 2019.
* [4] R. Atar, I. Keslassy, G. Mendelson, A. Orda, and S. Vargaftik. Persistent-idle load-distribution. Stochastic Systems, 10(2):152–169, 2020.
* [5] R. Atar and D. Lipshutz. Heavy traffic limits for join-the-shortest-estimated-queue policy using delayed information. Mathematics of Operations Research, 46(1):268–300, 2021.
* [6] S. Banerjee, A. Budhiraja, and B. Estevez. Load balancing in parallel queues and rank-based diffusions. arXiv preprint arXiv:2302.10317, 2023.
* [7] S. Banerjee and D. Mukherjee. Join-the-shortest queue diffusion limit in Halfin–Whitt regime: Tail asymptotics and scaling of extrema. The Annals of Applied Probability, 29(2):1262–1309, 2019.
* [8] S. Banerjee and D. Mukherjee. Join-the-shortest queue diffusion limit in Halfin–Whitt regime: Sensitivity on the heavy-traffic parameter. The Annals of Applied Probability, 30(1):80–144, 2020.
* [9] E. Bayraktar, A. Budhiraja, and A. Cohen. Rate control under heavy traffic with strategic servers. The Annals of Applied Probability, 29(1):1–35, 2019.
* [10] P. Billingsley. Convergence of Probability Measures. John Wiley & Sons, 2013.
* [11] M. Bramson, Y. Lu, and B. Prabhakar. Randomized load balancing with general service time distributions. ACM SIGMETRICS performance evaluation review, 38(1):275–286, 2010.
* [12] M. Bramson, Y. Lu, and B. Prabhakar. Asymptotic independence of queues under randomized load balancing. Queueing Systems, 71:247–292, 2012.
* [13] M. Bramson, Y. Lu, and B. Prabhakar. Decay of tails at equilibrium for FIFO join the shortest queue networks. The Annals of Applied Probability, 23(5):1841–1878, 2013.
* [14] A. Braverman. Steady-state analysis of the join-the-shortest-queue model in the Halfin–Whitt regime. Mathematics of Operations Research, 45(3):1069–1103, 2020.
* [15] H. Chen and H.-Q. Ye. Asymptotic optimality of balanced routing. Operations Research, 60(1):163–179, 2012.
* [16] K. L. Chung and R. J. Williams. Introduction to Stochastic Integration, volume 2. Springer, 1990.
* [17] D. J. Daley and M. Miyazawa. A martingale view of Blackwell’s renewal theorem and its extensions to a general counting process. Journal of Applied Probability, 56(2):602–623, 2019.
* [18] M. V. der Boor, S. C. Borst, J. S. Van Leeuwaarden, and D. Mukherjee. Scalable load balancing in networked systems: A survey of recent advances. SIAM Review, 64(3):554–622, 2022.
* [19] P. Eschenfeldt and D. Gamarnik. Join the shortest queue with many servers. The heavy-traffic asymptotics. Mathematics of Operations Research, 43(3):867–886, 2018.
* [20] S. Foss and A. L. Stolyar. Large-scale join-idle-queue system with general service times. Journal of Applied Probability, 54(4):995–1007, 2017.
* [21] J. Gärtner. On the McKean-Vlasov limit for interacting diffusions. Mathematische Nachrichten, 137(1):197–248, 1988.
* [22] E. Godlewski and P.-A. Raviart. Hyperbolic Systems of Conservation Laws. Number 3-4. Ellipses, 1991.
* [23] C. Graham. Chaoticity on path space for a queueing network with selection of the shortest queue among several. Journal of Applied Probability, 37(1):198–211, 2000.
* [24] C. Graham. Functional central limit theorems for a large network in which customers join the shortest of several queues. Probability Theory and Related Fields, 131:97–120, 2005.
* [25] V. Gupta and N. Walton. Load balancing in the nondegenerate slowdown regime. Operations Research, 67(1):281–294, 2019.
* [26] S. Halfin and W. Whitt. Heavy-traffic limits for queues with many exponential servers. Operations Research, 29(3):567–588, 1981.
* [27] J. Jacod and A. Shiryaev. Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin, 1987.
* [28] E. V. Krichagina and M. I. Taksar. Diffusion approximation for GI/G/1 controlled queues. Queueing Systems, 12:333–367, 1992.
* [29] D. Lacker. On a strong form of propagation of chaos for McKean-Vlasov equations. Electron. Commun. Probab., 23(45):1–11, 2018.
* [30] X. Liu, K. Gong, and L. Ying. Large-system insensitivity of zero-waiting load balancing algorithms. ACM SIGMETRICS Performance Evaluation Review, 50(1):101–102, 2022.
* [31] M. J. Luczak and C. McDiarmid. On the maximum queue length in the supermarket model. The Annals of Probability, 34(2):493–527, 2006.
* [32] M. J. Luczak and J. Norris. Strong approximation for the supermarket model. The Annals of Applied Probability, 15(3):2038–2061, 2005.
* [33] G. Mendelson and K. Xu. Care: Resource allocation using sparse communication. arXiv preprint arXiv:2206.02410, 2022.
* [34] M. Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094–1104, 2001.
* [35] D. Mukherjee, S. C. Borst, J. S. Van Leeuwaarden, and P. A. Whiting. Universality of power-of-d load balancing in many-server systems. Stochastic Systems, 8(4):265–292, 2018.
* [36] A. Mukhopadhyay and R. R. Mazumdar. Analysis of randomized join-the-shortest-queue (JSQ) schemes in large heterogeneous processor-sharing systems. IEEE Transactions on Control of Network Systems, 3(2):116–126, 2015.
* [37] P. Protter. Stochastic Integration and Differential Equations. A New Approach. Springer, Berlin, 1990.
* [38] M. Shkolnikov. Large systems of diffusions interacting through their ranks. Stochastic Processes and their Applications, 122(4):1730–1747, 2012.
* [39] A. L. Stolyar. Pull-based load distribution in large-scale heterogeneous service systems. Queueing Systems, 80:341–361, 2015.
* [40] A. L. Stolyar. Pull-based load distribution among heterogeneous parallel servers: the case of multiple routers. Queueing Systems, 85:31–65, 2017.
* [41] A.-S. Sznitman. Topics in propagation of chaos. Lecture Notes in Mathematics, (1464):165–251, 1991.
* [42] M. van der Boor, M. Zubeldia, and S. Borst. Zero-wait load balancing with sparse messaging. Operations Research Letters, 48(3):368–375, 2020.
* [43] N. D. Vvedenskaya, R. L. Dobrushin, and F. I. Karpelevich. Queueing system with selection of the shortest of two queues: An asymptotic approach. Problemy Peredachi Informatsii, 32(1):20–34, 1996.
* [44] G. Wolansky. Comparison between two models of self-gravitating clusters: Conditions for gravitational collapse. Nonlinear Analysis: Theory, Methods & Applications, 24(7):1119–1129, 1995.
* [45] L. Ying, R. Srikant, and X. Kang. The power of slightly more than one sample in randomized load balancing. Mathematics of Operations Research, 42(3):692–722, 2017.
* [46] Z. Zhao, S. Banerjee, and D. Mukherjee. Many-server asymptotics for join-the-shortest queue in the super-Halfin-Whitt scaling window. arXiv preprint arXiv:2106.00121, 2021.
AABI 2024. Under review for the Workshop at the 6th Symposium on Advances in
Approximate Bayesian Inference, 2024.
# Towards One Model for Classical Dimensionality Reduction: A Probabilistic
Perspective on UMAP and t-SNE
Aditya Ravuri
University of Cambridge
Neil D. Lawrence
University of Cambridge
###### Abstract
This paper shows that the dimensionality reduction methods, UMAP and t-SNE,
can be approximately recast as MAP inference methods corresponding to a
generalized Wishart-based model introduced in Ravuri et al. (2023). This
interpretation offers deeper theoretical insights into these algorithms, while
introducing tools with which similar dimensionality reduction methods can be
studied.
## 1 Introduction
In the realm of single-cell biology and various other domains with complex,
high-dimensional data, dimensionality reduction (DR) algorithms are essential
tools for uncovering the underlying structure of data. These algorithms, which
include very widely-used techniques like t-SNE (van der Maaten and Hinton,
2008) and UMAP (McInnes et al., 2020), are especially valuable for visualizing
data manifolds, enabling downstream processing, and the discovery of
insightful patterns. A deeper comprehension of these algorithms and their
theoretical underpinnings is crucial for advancing their applicability
(particularly when prior information is available) and for improving their
interpretability. Our work builds on (and aims to fully unify) the ProbDR
framework, which interprets classical DR methods through a probabilistic lens
to enable the communication of assumptions, the integration of prior knowledge,
and the handling of noise and confounders.
Ravuri et al. (2023) introduced ProbDR as a framework with two main
interpretations: UMAP and t-SNE corresponding to inference over an adjacency
matrix, and other classical DR algorithms that utilize eigendecomposition of a
PSD matrix as inference using a Wishart generative model on the
covariance/precision matrix.
In this work, we further simplify the framework, moving away from the
variational interpretation and propose that all algorithms with ProbDR
interpretations (and hence most classical DR methods) can be written as MAP
inference algorithms given the model,
$\mathbf{S}|\mathbf{X}\sim\mathcal{W}^{\\{-1\\}}(\mathbf{XX}^{T}+\epsilon\mathbf{H}K_{t}(\mathbf{X})\mathbf{H}+\gamma\mathbf{I},\nu),$
(1)
where $\mathbf{S}\in S^{+}_{n}$ is an estimate of a covariance matrix,
$\mathbf{X}\in\mathbb{R}^{n,q}$ corresponds to the set of low ($q$)
dimensional latent variables, $\mathbf{H}=\mathbf{I}-\mathbf{11}^{T}/n$ is a
centering matrix, and
$K_{t}(\mathbf{X}_{i},\mathbf{X}_{j})=(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})^{-1}$
is the Student-t kernel.
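To make the ingredients of Equation 1 concrete, the following minimal sketch (ours, not code from the paper; $\epsilon$, $\gamma$ and the data sizes are illustrative) assembles the scale matrix $\mathbf{XX}^{T}+\epsilon\mathbf{H}K_{t}(\mathbf{X})\mathbf{H}+\gamma\mathbf{I}$.

```python
# Minimal sketch: build the scale matrix of Equation 1 from latents X.
import numpy as np

def student_t_kernel(X):
    # K_t(X_i, X_j) = (1 + ||X_i - X_j||^2)^{-1}
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return 1.0 / (1.0 + sq_dists)

def eq1_scale_matrix(X, epsilon=1.0, gamma=1e-3):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix H = I - 11^T / n
    K = student_t_kernel(X)
    return X @ X.T + epsilon * H @ K @ H + gamma * np.eye(n)

X = np.random.default_rng(0).normal(size=(100, 2))   # toy latents, q = 2
C = eq1_scale_matrix(X)
print(np.all(np.linalg.eigvalsh(C) > 0))              # PD whenever gamma > 0
```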
We aim to interpret these DR algorithms as MAP inference methods instead of
variational methods as studied in Ravuri et al. (2023); Van Assel et al.
(2022). This unifies t-SNE and UMAP-like algorithms with the other algorithms
and provides semantic interpretation to UMAP and t-SNE. Additionally, we hope
that the tools introduced in this paper will provide researchers with more
machinery to understand the behaviour of latent variable models.
## 2 Background
The ProbDR framework showed that many classical DR algorithms can be expressed
as inference algorithms corresponding to a probabilistic model. Algorithms
that set the embedding of a high-dimensional dataset
$\mathbf{Y}\in\mathbb{R}^{n,d}$ in terms of the eigenvectors of a positive-
semi-definite matrix were shown in Ravuri et al. (2023) to correspond to a
two-step inference process, where,
1. 1.
one first estimates a covariance matrix $\mathbf{S}(\mathbf{Y})$ or a
precision matrix $\mathbf{\Gamma}(\mathbf{Y})$ (which a graph Laplacian
$\mathbf{L}$ can be an estimate of),
2. 2.
then estimates the embedding via maximum a-posteriori given one of the two
following models,
$\displaystyle\mathbf{S}|\mathbf{X}\sim\mathcal{W}\left(\mathbf{X}\mathbf{X}^{T}+\sigma^{2}\mathbf{I}_{n},d\right)\text{
or,}$
$\displaystyle\mathbf{\Gamma}|\mathbf{X}\sim\mathcal{W}\left((\mathbf{X}\mathbf{X}^{T}+\beta\mathbf{I}_{n})^{-1},d\right).$
PCoA, for example, is recovered using the first of these formulations with
$\mathbf{S}\equiv\mathbf{Y}\mathbf{Y}^{T}$. Setting $\epsilon=0$ in Equation 1
recovers these results.
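As a concrete instance of this two-step recipe, the sketch below (our own illustration of the standard PCoA construction, not code from Ravuri et al. (2023)) estimates $\mathbf{S}\equiv\mathbf{YY}^{T}$ from centred data and reads off the embedding from the leading eigenpairs.

```python
# Minimal sketch: step 1 estimates S = Y Y^T, step 2 takes the top-q eigenpairs (PCoA).
import numpy as np

def pcoa_embedding(Y, q=2):
    Yc = Y - Y.mean(axis=0, keepdims=True)     # centre the data
    S = Yc @ Yc.T                              # step 1: covariance estimate
    evals, evecs = np.linalg.eigh(S)           # step 2: eigendecomposition (ascending order)
    idx = np.argsort(evals)[::-1][:q]
    return evecs[:, idx] * np.sqrt(np.clip(evals[idx], 0.0, None))

Y = np.random.default_rng(0).normal(size=(200, 10))
X = pcoa_embedding(Y, q=2)                     # n x q embedding
print(X.shape)
```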
In the case of UMAP and t-SNE, ProbDR did not specify a generative model for
either the data or the covariance but only a generative model for the
adjacency matrices that describe the nearest neighbour graph.
Specifically, ProbDR showed that the generative models corresponding to UMAP
and t-SNE can be seen as models for adjacency matrices $\mathbf{A}^{\prime}$,
$\small
p(\mathbf{A}^{\prime}|\mathbf{X})=\begin{cases}\text{Categorical}(\text{vec}(\mathbf{A}^{\prime})|w^{t}_{ij}(\mathbf{X}))&\text{t-SNE}\\\
\prod_{i>j}^{n}\text{Bernoulli}(\mathbf{A}^{\prime}_{ij}|w^{U}_{ij}(\mathbf{X}_{i},\mathbf{X}_{j}))&\text{UMAP}\end{cases}$
(2)
where,
$\displaystyle
w^{t}_{ij}=\dfrac{(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})^{-1}}{\sum_{k\neq
l}(1+\|\mathbf{X}_{k}-\mathbf{X}_{l}\|^{2})^{-1}},\text{ and
}w^{U}_{ij}=\dfrac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}.$
The inference can be done as maximum a-posteriori estimation for $\mathbf{X}$,
given the binary, empirical, nearest-neighbour adjacency matrix
$\mathbf{A}^{\prime}(\mathbf{Y})_{ij}=\mathcal{I}(j\in N(i))$, where $N(i)$
represents the set of nearest neighbours of data point $\mathbf{Y}_{i}$. UMAP
and t-SNE are typically interpreted in a variational way, however the
inference trivially becomes MAP inference when we use
$\mathbf{A}^{\prime}(\mathbf{Y})$ as the variational data-dependent
distribution (due to Ravuri et al. (2023), Appendix B.7, Lemma 13). This
interpretation is also presented in Damrich et al. (2022).
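In practice, $\mathbf{A}^{\prime}(\mathbf{Y})$ can be built from a $k$-nearest-neighbour search; the sketch below (ours, with illustrative parameter values and a symmetrised adjacency) also evaluates the low-dimensional weights appearing in Equation 2.

```python
# Minimal sketch: empirical kNN adjacency A'(Y) and the UMAP/t-SNE low-dimensional weights.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_adjacency(Y, n_neighbors=15):
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(Y)
    _, idx = nn.kneighbors(Y)                      # first neighbour of each point is itself
    n = Y.shape[0]
    A = np.zeros((n, n))
    for i, neigh in enumerate(idx[:, 1:]):
        A[i, neigh] = 1.0                          # A'_ij = I(j in N(i))
    return np.maximum(A, A.T)                      # symmetrise

def low_dim_weights(X):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    w_umap = 1.0 / (1.0 + sq)                      # w^U_ij
    w_tsne = w_umap / (w_umap.sum() - np.trace(w_umap))   # w^t_ij, normalised over k != l
    return w_umap, w_tsne
```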
This simplification is reasonable due to the findings of Damrich and Hamprecht
(2021), where it was found that the relatively complex calculation of the
variational probabilities in t-SNE and UMAP can be replaced with simply the
adjacency matrices without loss of performance. Our initial experiments also
closely aligned with these findings.
Crucially, however, Becht et al. (2019); Damrich et al. (2022) note that the
optimisation process is equally as important. As part of an extensive study on
the nature of the t-SNE and UMAP loss functions, Damrich et al. (2022) then
show how the stochastic optimisation of t-SNE and UMAP can be interpreted to
be contrastive estimation with the loss
$\mathcal{L}(\mathbf{X})\propto-\mathbb{E}_{ij\sim
p}\log\left(\frac{w_{ij}(\mathbf{X})}{w_{ij}(\mathbf{X})+1}\right)-m\mathbb{E}_{ij\sim\xi}\log\left(1-\frac{w_{ij}(\mathbf{X})}{w_{ij}(\mathbf{X})+1}\right),$
(3)
where $p$ represents a discrete distribution that is uniform across the
nearest neighbour pairs and zero everywhere else. Similarly, $\xi$ represents
a uniform distribution over non-neighbours. $m$ (set to $2n_{-}$) is a
multiplicative hyperparameter proportional to the number of contrastive
negatives that affects the strength of repulsion. In Damrich et al. (2022),
$w_{ij}(\mathbf{X})=\frac{1}{\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}$ recovers
UMAP. This bound is important, as we found that a naive optimisation of the
Bernoulli likelihood in Equation 2 leads to a poor embedding (although
Appendix B offers more commentary on this bound as a Bernoulli likelihood and
provides more evidence for our claims).
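For reference, a minimal sketch of the loss in Equation 3 is given below (our own illustration; it uses the Cauchy similarity $w_{ij}=(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})^{-1}$, draws the negatives uniformly at random, and takes the neighbour pairs from an adjacency matrix such as $\mathbf{A}^{\prime}(\mathbf{Y})$ above).

```python
# Minimal sketch of the contrastive loss in Equation 3: attraction over neighbour pairs,
# repulsion over uniformly sampled negative pairs, weighted by m.
import numpy as np

def cauchy_w(X, pairs):
    d2 = np.sum((X[pairs[:, 0]] - X[pairs[:, 1]]) ** 2, axis=1)
    return 1.0 / (1.0 + d2)

def contrastive_loss(X, pos_pairs, m=10, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    n = X.shape[0]
    neg_pairs = rng.integers(0, n, size=(len(pos_pairs), 2))   # approximately uniform negatives
    w_pos = cauchy_w(X, pos_pairs)
    w_neg = cauchy_w(X, neg_pairs)
    attract = -np.mean(np.log(w_pos / (w_pos + 1.0)))
    repel = -m * np.mean(np.log(1.0 - w_neg / (w_neg + 1.0)))
    return attract + repel
```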
For this work, we aim to work with such a contrastive loss function and
interpret it as a likelihood, but over the latents $\mathbf{X}$. This is
because we were particularly inspired by Nakamura et al. (2023), who showed
that contrastive learning methods could be seen as variational algorithms
(hence suggesting a link between t-SNE and contrastive learning) and by
Gutmann and Hyvärinen (2010), which shows that contrastive losses are
estimators of negative log-likelihoods. Damrich et al. (2022) also greatly
simplify the UMAP optimisation process.
## 3 Discussion
This section argues that inference with the generative model in Equation 1
approximately recovers UMAP and t-SNE-like algorithms.
### 3.1 The proposed model
We first consider MAP inference for $\mathbf{X}$ given the model,
$\displaystyle\nu\mathbf{L}|\mathbf{X}$
$\displaystyle\sim\mathcal{W}((-\alpha\mathbf{H}\mathbf{D}^{\prime}\mathbf{H}+\gamma\mathbf{I})^{-1},\nu),$
(4) $\displaystyle\mathbf{X}$
$\displaystyle\sim\text{Uniform}(-\infty,\infty),$
where $\mathbf{D^{\prime}}$ is a squared distance matrix, with elements
$\mathbf{D}^{\prime}_{ij}=\log(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})$. Let
$\mathbf{M}=-\mathbf{H}\mathbf{D}^{\prime}\mathbf{H}$. $\mathbf{M}$ is PSD
(with some interesting properties relating to isometric Euclidean embeddings,
see Section A.2). The graph Laplacian is computed as
$\mathbf{L}=(\mathbf{D-A})/\bar{d}$, with $\bar{d}$ being the average
degree (so that the inverse of the Laplacian, i.e. the covariance, has diagonal
elements around one). The adjacency matrix is defined by $\mathbf{A}_{ij}=1$ if
$p(ij)>0$, where $p$ is the neighbour-pair distribution of Equation 3.
Then, the log-likelihood given the model in Equation 4 is as follows (we focus
on just the data-dependent term),
$\displaystyle\log p(\mathbf{X}|\mathbf{L})$
$\displaystyle=-0.5\nu\text{tr}(\mathbf{L}(\alpha\mathbf{M}+\gamma\mathbf{I}))+0.5\nu\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+c$
$\displaystyle\propto-\alpha\text{tr}(\mathbf{L}\mathbf{M})+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k$
$\displaystyle=\alpha\text{tr}(\mathbf{L}\mathbf{H}\mathbf{D^{\prime}}\mathbf{H})+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k$
$\displaystyle=\alpha\text{tr}\left(\mathbf{L}\mathbf{D}^{\prime}\right)+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k\qquad\text{(trace
cyclic and }\mathbf{L}\text{ centered)}$
$\displaystyle=-\alpha\text{tr}(\mathbf{A}\mathbf{D}^{\prime})+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k\qquad\text{tr}(\mathbf{D^{\prime}})=0$
$\displaystyle=-\alpha\sum_{i}\mathbf{A}_{i}^{T}\mathbf{D}^{\prime}_{i}+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k$
$\displaystyle=-\alpha\sum_{i}\sum_{j}a_{ij}\log(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k$
$\displaystyle=\alpha\dfrac{n_{+}}{\bar{d}}\mathbb{E}_{ij\sim
p}\log\left(\dfrac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k$
$\displaystyle=\alpha n\mathbb{E}_{ij\sim
p}\log\left(\dfrac{w_{ij}(\mathbf{X}_{i},\mathbf{X}_{j})}{1+w_{ij}(\mathbf{X}_{i},\mathbf{X}_{j})}\right)+\log\det(\alpha\mathbf{M}+\gamma\mathbf{I})+k,$
with $n_{+}=n\bar{d}\approx 1.5nn_{\\#}$ being the total number of
edges and $n_{\\#}$ the number of neighbours per point (typically set to
15). An important note here is that the model is misspecified (for example,
the variance implied by the covariance parameter and the data are quite
different, with the variance of the Wishart being much lower than the data
estimate).
Therefore, by minimising the first term of Equation 3, we maximise the data-
dependent term of the likelihood of Equation 4, and so, Equation 4 defines a
model for dimensionality reduction that in some ways is similar to t-SNE and
UMAP-like algorithms. Figure 1 shows embeddings of 30,000 digits from the
MNIST dataset obtained using this model. We also run this model on a suite of
other datasets (from Pedregosa et al. (2011); Deng (2012); Statlog (Shuttle); Krumsiek et
al. (2011)) to show that we recover roughly similar embeddings as minimisation
of Equation 3 in Figure 2.
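The data-dependent objective just derived can be evaluated directly. The sketch below (ours, illustrative) computes the unnormalised log-posterior of Equation 4 for a candidate embedding $\mathbf{X}$ and a binary adjacency $\mathbf{A}$, using the hyper-parameter values of Figure 1 and an arbitrary $\nu$; in practice it would be maximised with a gradient-based optimiser via automatic differentiation.

```python
# Minimal sketch: unnormalised log-posterior of Equation 4 for a candidate embedding X,
# with L = (D - A)/dbar and D'_ij = log(1 + ||X_i - X_j||^2).
import numpy as np

def eq4_log_posterior(X, A, alpha=1.0 / 50, gamma=1.0 / 5, nu=2.0):
    n = X.shape[0]
    dbar = A.sum() / n                                  # average degree
    L = (np.diag(A.sum(axis=1)) - A) / dbar             # normalised graph Laplacian
    Dp = np.log1p(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    H = np.eye(n) - np.ones((n, n)) / n
    M = -H @ Dp @ H
    C = alpha * M + gamma * np.eye(n)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * nu * np.trace(L @ C) + 0.5 * nu * logdet
```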
Figure 1: Embeddings of 30k MNIST digits obtained using inference within
Equation 4. Left: A run with both $\alpha=1/50$ and $\gamma=1/5$ treated as
hyper-parameters. Right: A run with $\alpha=1/50$ treated as a hyperparameter
and $\gamma$ optimised using maximum likelihood inference. Appendix B sheds
light on why this choice of hyper-parameters is performant, leading to the
embedding of Figure 4.
Figure 2: Left: embeddings on various datasets obtained using optimisation of
the CNE bound (Equation 3) compared with right: inference results using our
model.
### 3.2 CNE bounds can sometimes be approximated using the proposed model
Consider the negative CNE loss,
$\displaystyle-\mathcal{L}$ $\displaystyle\propto\dfrac{\alpha\nu
n}{2}\mathbb{E}_{ij\sim
p}\log\left(\frac{w_{ij}(\mathbf{X})}{w_{ij}(\mathbf{X})+1}\right)+\dfrac{\alpha\nu
n}{2}m\mathbb{E}_{ij\sim\xi}\log\left(1-\frac{w_{ij}(\mathbf{X})}{w_{ij}(\mathbf{X})+1}\right)$
$\displaystyle=-\dfrac{\nu}{2}\text{tr}(\mathbf{L}(\alpha\mathbf{M}+\gamma\mathbf{I}))+\dfrac{\alpha\nu
n}{2}m\mathbb{E}_{ij\sim\xi}\log\left(\frac{\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)$
Sec. 3.1
Let the first term be represented as $\mathcal{T}_{a}$. We first approximate
the distances $\mathbf{D}^{\prime}_{ij}=\log(1+d_{ij}^{2})\approx
0.5(1+d_{ij}^{2}-1/(1+d_{ij}^{2}))$ within $\mathcal{T}_{a}$ as this is a good
approximation for the log1p function near 0. Therefore,
$\displaystyle\mathbf{M}\approx-\mathbf{H}(\mathbf{11^{T}}+0.5\mathbf{D}^{2}-0.5K_{t})\mathbf{H}=\mathbf{X}\mathbf{X}^{T}+\mathbf{H}K_{t}\mathbf{H}.$
App. A.1
This approximation sheds light on the behaviour of the covariance parameter of
Equation 4 and draws the link to Equation 1.
Then,
$\displaystyle-\mathcal{L}$
$\displaystyle\propto\mathcal{T}_{a}+\dfrac{\alpha\nu
n}{2}m\mathbb{E}_{ij\sim\xi}\log\left(\frac{\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)$
$\displaystyle\approx\mathcal{T}_{a}-\dfrac{\alpha\nu
mn}{2}\mathbb{E}_{ij\sim\xi}\left(\frac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)\qquad(\text{for
large distances})$ $\displaystyle\approx\mathcal{T}_{a}-\dfrac{\alpha\nu
mn}{2}\mathbb{E}_{ij\sim\mathcal{U}}\left(\frac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)\qquad\text{as
}n\gg n_{\\#}$ $\displaystyle=\mathcal{T}_{a}-\dfrac{\alpha\nu
mn}{2n^{2}}\sum_{ij}\frac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}$
$\displaystyle=\mathcal{T}_{a}+\dfrac{\alpha\nu
m}{2}\text{tr}\left(-\dfrac{1}{n}\mathbf{11^{T}}K_{t}-\dfrac{1}{n}K_{t}\mathbf{11^{T}}+\dfrac{1}{n^{2}}\mathbf{11^{T}}K_{t}\mathbf{11^{T}}\right)$
$\displaystyle=\mathcal{T}_{a}+\dfrac{\nu
m}{2}\text{tr}\left(\alpha\mathbf{H}K_{t}\mathbf{H}\right)+k$
$\displaystyle=-\dfrac{\nu}{2}\text{tr}(\mathbf{L}(\alpha\mathbf{M}_{\text{aprx}}+\gamma\mathbf{I}))+\dfrac{\nu}{2}\text{tr}\left(\alpha
m\mathbf{M}_{\text{aprx}}+\gamma m\mathbf{I}\right)-\dfrac{\alpha
m\nu}{2}\text{tr}(\mathbf{X}\mathbf{X}^{T})+k$
$\displaystyle\propto-\dfrac{m\nu}{2}\text{tr}(m^{-1}\mathbf{L}(\dfrac{\alpha}{\gamma}\mathbf{M}_{\text{aprx}}+\mathbf{I}))+\dfrac{m\nu}{2}\text{tr}\left(\dfrac{\alpha}{\gamma}\mathbf{M}_{\text{aprx}}+\mathbf{I}\right)-\dfrac{\alpha
m\nu}{2\gamma}\text{tr}(\mathbf{X}\mathbf{X}^{T})+k$
$\displaystyle\approx-\dfrac{m\nu}{2}\text{tr}(m^{-1}\mathbf{L}(\dfrac{\alpha}{\gamma}\mathbf{M}_{\text{aprx}}+\mathbf{I}))+\dfrac{m\nu}{2}\log\det\left(\dfrac{\alpha}{\gamma}\mathbf{M}_{\text{aprx}}+\mathbf{I}\right)-\dfrac{\alpha
m\nu}{2\gamma}\text{tr}(\mathbf{X}\mathbf{X}^{T})+k\quad(\text{large }\gamma)$
$\displaystyle=\log\mathcal{W}(\nu\mathbf{L}|m^{-1}(\alpha\mathbf{M}_{\text{aprx}}+\gamma\mathbf{I})^{-1},m\nu)+\log\mathcal{N}(\mathbf{X}|\mathbf{0},\gamma\mathbf{I}/\alpha
m\nu)$
This shows the CNE bound can be approximated by the likelihood of our proposed
model with a diffuse prior on $\mathbf{X}$. Note that the model is still
misspecified. A comparison of embeddings obtained with this approximation and
with the CNE bound is shown in Figure 3.
Figure 3: Left: Embedding of 30k digits from MNIST obtained using the CNE
bound and right: using our approximation. For this, we set $n_{-}=1$, which
was the best performing case.
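The scalar approximation $\log(1+x)\approx 0.5\,(1+x-1/(1+x))$ used in Section 3.2 is easy to check numerically; the snippet below (ours) reports its worst absolute error on a grid of small squared distances.

```python
# Quick numerical check of log(1 + x) ≈ 0.5 * (1 + x - 1/(1 + x)) near 0.
import numpy as np

x = np.linspace(0.0, 1.0, 1001)              # x plays the role of a squared distance
exact = np.log1p(x)
approx = 0.5 * (1.0 + x - 1.0 / (1.0 + x))
print(np.max(np.abs(exact - approx)))         # about 0.06 at x = 1; the error grows for larger x
```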
## 4 Conclusion
We present a probabilistic interpretation of t-SNE and UMAP-like algorithms,
showing that they correspond to inference within misspecified generative
models for the covariance/precision with fixed scale parameters, and a choice
of a covariance that describes non-linear functions within a Gaussian process
context. We hope that this serves as a foundation to further refine these
models (particularly based on their regularisation terms) such that they
emulate results from t-SNE and UMAP-like algorithms.
Figure 4: Embedding of 30k MNIST digits resulting from ProbDR inference using
a scaled Student-t kernel. See Appendix B.
AR would like to thank Francisco Vargas for helpful discussions and a
studentship from the Accelerate Programme for Scientific Discovery.
## References
* (1) Statlog (Shuttle). UCI Machine Learning Repository. URL https://doi.org/10.24432/C5WS31.
* Becht et al. (2019) Etienne Becht, Leland McInnes, John Healy, Charles-Antoine Dutertre, Immanuel W. H. Kwok, Lai Guan Ng, Florent Ginhoux, and Evan W. Newell. Dimensionality reduction for visualizing single-cell data using UMAP. _Nature Biotechnology_ , 37(1):38–44, Jan 2019. ISSN 1546-1696. 10.1038/nbt.4314. URL https://doi.org/10.1038/nbt.4314.
* Bhatia (2007) Rajendra Bhatia. _Positive Definite Matrices_. Princeton University Press, 2007. ISBN 9780691129181. URL http://www.jstor.org/stable/j.ctt7rxv2.
* Damrich and Hamprecht (2021) Sebastian Damrich and Fred A Hamprecht. On umap's true loss function. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_ , volume 34, pages 5798–5809. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf.
* Damrich et al. (2022) Sebastian Damrich, Niklas Böhm, Fred A Hamprecht, and Dmitry Kobak. From $t$-sne to umap with contrastive learning. In _The Eleventh International Conference on Learning Representations_ , 2022.
* Deng (2012) Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web]. _IEEE Signal Processing Magazine_ , 29(6):141–142, 2012. 10.1109/MSP.2012.2211477.
* Gupta and Nagar (2018) Arjun K Gupta and Daya K Nagar. _Matrix variate distributions_. Chapman and Hall/CRC, 2018.
* Gutmann and Hyvärinen (2010) Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In _Proceedings of the thirteenth international conference on artificial intelligence and statistics_ , pages 297–304. JMLR Workshop and Conference Proceedings, 2010.
* Khare (2019) Apoorva Khare. Schoenberg: from metric geometry to matrix positivity, Apr 2019. URL https://math.iisc.ac.in/seminar-slides/2019/2019-04-12-ApoorvaKhare.pdf.
* Krumsiek et al. (2011) Jan Krumsiek, Carsten Marr, Timm Schroeder, and Fabian J Theis. Hierarchical differentiation of myeloid progenitors is encoded in the transcription factor network. _PLoS One_ , 6(8):e22649, August 2011.
* McInnes et al. (2020) Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction, 2020.
* Nakamura et al. (2023) Hiroki Nakamura, Masashi Okada, and Tadahiro Taniguchi. Representation uncertainty in self-supervised learning as variational inference. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 16484–16493, 2023.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830, 2011.
* Ravuri et al. (2023) Aditya Ravuri, Francisco Vargas, Vidhi Lalchand, and Neil D Lawrence. Dimensionality reduction as probabilistic inference. In _Fifth Symposium on Advances in Approximate Bayesian Inference_ , 2023. URL https://arxiv.org/pdf/2304.07658.pdf.
* Schoenberg (1935) I. J. Schoenberg. Remarks to Maurice Fréchet’s article “Sur la définition axiomatique d’une classe d’espaces distanciés vectoriellement applicables sur l’espace de Hilbert”. _Annals of Mathematics_ , 36(3):724–732, 1935. ISSN 0003486X. URL http://www.jstor.org/stable/1968654.
* Van Assel et al. (2022) Hugues Van Assel, Thibault Espinasse, Julien Chiquet, and Franck Picard. A probabilistic graph coupling view of dimension reduction. _Advances in Neural Information Processing Systems_ , 35:10696–10708, 2022.
* van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. _Journal of Machine Learning Research_ , 9(86):2579–2605, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.html.
## Appendix A Centered Distance Based Results
### A.1 The MDS Matrix is a Gram matrix
Assume centered $\mathbf{X}$ (i.e. the column means of $\mathbf{X}$ are zero).
Then,
$\displaystyle-\mathbf{H}\mathbf{D}\mathbf{H}$
$\displaystyle=-\left(\mathbf{I}-\frac{1}{n}\mathbf{11}^{T}\right)\mathbf{D}\left(I-\frac{1}{n}\mathbf{11}^{T}\right)$
$\displaystyle=-\left(\mathbf{I}-\frac{1}{n}\mathbf{11}^{T}\right)\left(\tilde{\mathbf{D}}\mathbf{11}^{T}+\mathbf{11}^{T}\tilde{\mathbf{D}}-2\mathbf{X}\mathbf{X}^{T}\right)\left(I-\frac{1}{n}\mathbf{11}^{T}\right)$
$\displaystyle...$
$\displaystyle=2\mathbf{X}\mathbf{X}^{T}-\frac{2}{n}\mathbf{X}\mathbf{X}^{T}\mathbf{11}^{T}-\frac{2}{n}\mathbf{11}^{T}\mathbf{X}\mathbf{X}^{T}+\frac{2}{n^{2}}\mathbf{11}^{T}\mathbf{X}\mathbf{X}^{T}\mathbf{11}^{T}$
$\displaystyle=2\mathbf{X}\mathbf{X}^{T}$
### A.2 $-\mathbf{H}\mathbf{D}^{\prime}\mathbf{H}$ is PSD
$(\mathbf{D}^{\prime}_{ij})^{1/2}$ (and $\mathbf{D}^{\prime}_{ij}$) are valid
distance metrics. We only need to show that $-\mathbf{D^{\prime}}$ is CPSD;
then there exists an isometric Euclidean embedding (Khare, 2019; Schoenberg,
1935).
$-\mathbf{D}^{\prime}_{ij}=\log\dfrac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}$.
Note that the inner term is a kernel, hence is PSD. The $\log$ function
preserves CPSD-ness (Bhatia, 2007).
## Appendix B A different interpretation of Equation 3
If the negative samples coefficient of the second term of Equation 3 is
absorbed into the logarithm, and the first term ignored, this implies that the
edge probability is,
$\displaystyle\mathbb{P}(\mathbf{A}_{ij}=1)$
$\displaystyle=1-\left(1-\dfrac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)^{1.5n_{\\#}n_{-}/n}$
$\displaystyle\approx\dfrac{1.5n_{\\#}n_{-}}{n}\log\left(1+\dfrac{1}{\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}+\epsilon}\right).$
We further approximate this probability as the following based on empirical
observations that it did not seem to affect the embeddings negatively,
$\displaystyle\mathbb{P}(\mathbf{A}_{ij}=1)$
$\displaystyle\approx\dfrac{1.5n_{\\#}n_{-}/n}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}.$
Using this definition, we estimate the parameter of a Wishart distribution
over the normalized graph Laplacian $\mathbf{L}$ by moment matching
(specifically such that the functional terms of the Wishart variance match the
Bernoulli distribution’s variances). This leads to the covariance estimate,
$\mathbf{M}_{ij}=\dfrac{\mathbf{p}_{ij}}{\sum_{a}\mathbf{p}_{ia}\sum_{b}\mathbf{p}_{jb}},$
where
$\mathbf{p}_{ij}=\dfrac{1.5n_{\\#}n_{-}/n}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}$.
It’s easy to see that this is PD. Plugging this in place of $\alpha\mathbf{M}$
into our model in Equation 4 leads to the embedding in Figure 4. Note
that the coefficient is around $0.01$ when $n=10k$, which might shed light
on the choice for $\alpha$ that is most performant in Figure 4.
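A minimal sketch of this moment-matched estimate is given below (ours; we take the second normalising sum to run over row $j$, so that the estimate is symmetric, and use illustrative values of $n_{\\#}$ and $n_{-}$).

```python
# Minimal sketch: M_ij = p_ij / (sum_a p_ia * sum_b p_jb), with
# p_ij = (1.5 * n_hash * n_neg / n) / (1 + ||X_i - X_j||^2).
import numpy as np

def appendix_b_covariance(X, n_hash=15, n_neg=1):
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    p = (1.5 * n_hash * n_neg / n) / (1.0 + sq)
    row = p.sum(axis=1)
    return p / np.outer(row, row)              # symmetric; PD whenever p is PD

X = np.random.default_rng(0).normal(size=(50, 2))
M = appendix_b_covariance(X)
print(np.allclose(M, M.T), np.linalg.eigvalsh(M).min() > 0)
```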
## Appendix C A Matrix-t Perspective
The density of the matrix-t distribution, with the column covariance set to
$\mathbf{I}$ is expressed as (Gupta and Nagar, 2018),
$\displaystyle\log p(\mathbf{Y}|\Sigma)$
$\displaystyle=-(\alpha+n/2)\log\left|\mathbf{I}_{n}+\dfrac{\beta}{2}\Sigma^{-1}\mathbf{Y}\mathbf{Y}^{T}\right|-\dfrac{d}{2}\log|\Sigma|+c.$
Let $\mathbf{S}=\mathbf{\Gamma}^{-1}$ be an invertible estimator of
$\mathbf{Y}\mathbf{Y}^{T}$. Note that,
$\displaystyle\log\left|\mathbf{I}_{n}+\dfrac{\beta}{2}\Sigma^{-1}\mathbf{S}\right|$
$\displaystyle=\log\left|\dfrac{\beta}{2}\Sigma^{-1}\mathbf{S}\right|\left|\dfrac{2}{\beta}\mathbf{\Gamma}\Sigma+\mathbf{I}_{n}\right|$
$\displaystyle=\log\left|\dfrac{2}{\beta}\mathbf{\Gamma}\Sigma+\mathbf{I}_{n}\right|-\log\left|\Sigma\right|+k.$
Therefore the density can be rewritten in terms of the precision matrix
$\mathbf{\Gamma}$,
$\displaystyle\log p(\mathbf{Y}|\Sigma)$
$\displaystyle=\dfrac{2\alpha+n-d}{2}\log|\Sigma|-(\alpha+n/2)\log\left|\dfrac{2}{\beta}\mathbf{\Gamma}\Sigma+\mathbf{I}_{n}\right|+c.$
For reference, the negative log-density of a matrix Cauchy distribution can be
written as follows (adapted from ),
$\displaystyle\mathcal{L}_{t_{1}}(\Sigma)=\dfrac{d+n}{2}\log|I+\mathbf{L}\Sigma|-\dfrac{n}{2}\log|\Sigma|+c.$
The first term of the CNE bound can also be approximated as,
$\displaystyle-\mathbb{E}_{ij\sim
p}\log\left(\frac{w_{ij}(\mathbf{X})}{w_{ij}(\mathbf{X})+1}\right)$
$\displaystyle=-\mathbb{E}_{ij\sim
p}\log\left(\frac{1}{1+\frac{1}{w_{ij}(\mathbf{X})}}\right)$
$\displaystyle=-\mathbb{E}_{ij\sim
p}\log\left(\frac{1}{2+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}\right)$
$\displaystyle=\mathbb{E}_{ij\sim
p}\log\left(2+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}\right)$
$\displaystyle\approx
n^{-2}\sum_{ij}a_{ij}\log\left(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}\right)$
$\displaystyle\leq\log(2+n^{-2}\sum_{ij}a_{ij}\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2})$
$\displaystyle=\log(1+2n^{-2}\text{tr}(\mathbf{L}\mathbf{X}\mathbf{X}^{T}+\mathbf{I}))$
$\displaystyle\approx\log|\mathbf{I}+\mathbf{L}(\mathbf{X}\mathbf{X}^{T}+\mathbf{I})|.$
Following the methodology of Section 3.1 however leads to a very Laplacian-
Eigenmaps-like solution. The regularisation term is paramount to the quality
of embeddings obtained using such methods.
## Appendix D An Analysis of the Adjacency Probabilities
The generative probabilities of t-SNE, in some sense, model the probability
with which an edge is the shortest edge on the entire distance graph. The
probability of this is proportional to
$w_{ij}(\mathbf{X})=\frac{1}{1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}}$
(implied by the contrastive loss used in Damrich et al. (2022); this is also
evident from the construction of the algorithm).
In this section, we assume that there’s a latent dataset described by a
generative model $\mathbf{X}\rightarrow\mathbf{Y}$ that’s unobserved, but
whose statistics then affect the data adjacency matrix $\mathbf{A}$ through
$\mathbf{Y}\rightarrow\mathbf{A}$, that is observed.
Assuming that a generative model for data $\mathbf{Y}$ given latents
$\mathbf{X}$ exists, we show by simulation that
$\mathbb{P}(\mathcal{A}_{ij}(\mathbf{Y})=1|\mathbf{X})=\mathbb{P}\left(\mathcal{I}\left[\|\mathbf{Y}_{i}-\mathbf{Y}_{j}\|^{2}=\min_{k>l}\|\mathbf{Y}_{k}-\mathbf{Y}_{l}\|^{2}\right]=1\,\middle|\,\mathbf{X}\right)$
cannot be proportional to the form assumed in t-SNE if a Gaussian process
model,
$\mathbf{Y}|\mathbf{X}\sim\mathcal{MN}(\mathbf{0},\mathbf{X}\mathbf{X}^{T}+\sigma^{2}\mathbf{I},\mathbf{I}),$
is assumed, but can be achieved by both matrix Cauchy distributions with a
linear kernel and Gaussian processes with a kernel that is the sum of linear
and smooth kernels. This is intuitive, as for these probabilities to decay to
zero, in the Gaussian case, the kernel must be non-stationary. In the case of
a matrix Cauchy distribution, this is due to its extreme-value properties.
Monte-Carlo simulations were done where a matrix Cauchy distribution (left on
Figure 5) and a Gaussian process (right) with dot product kernels are used to
generate high-dimensional data, and hence distance matrices between the high-
dimensional data points. Then, the log probability that the $ij^{th}$ element
of the distance matrix is the smallest is computed and plotted against the
Euclidean distances of the corresponding $\mathbf{X}$ points. These show that
using a Gaussian process with a linear kernel produces probabilities that are
linear functions of the latent Euclidean distance. A matrix Cauchy prior on
the other hand, induces proximity probabilities of the right shape.
Figure 5: Monte-Carlo simulations showing the probability with which a
distance $d_{ij}(\mathbf{Y})$ is the minimum throughout the entire distance
matrix, plotted as a function of the Euclidean distance between the
$\mathbf{X}$.
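A minimal version of the Gaussian-process simulation (right panel of Figure 5) is sketched below; this is our own illustrative code with small sizes, and the matrix Cauchy case is analogous but requires heavier-tailed sampling.

```python
# Minimal Monte-Carlo sketch: estimate how often pair (i, j) attains the minimal pairwise
# distance when Y | X ~ MN(0, X X^T + sigma^2 I, I_d), and relate this to ||X_i - X_j||.
import numpy as np

rng = np.random.default_rng(0)
n, q, d, sigma2, n_trials = 20, 2, 50, 0.1, 20000

X = rng.normal(size=(n, q))
K = X @ X.T + sigma2 * np.eye(n)
Lchol = np.linalg.cholesky(K)

counts = np.zeros((n, n))
iu = np.triu_indices(n, k=1)
for _ in range(n_trials):
    Y = Lchol @ rng.normal(size=(n, d))                   # one draw of the data
    sq = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, -1)
    k_min = np.argmin(sq[iu])                             # index of the minimal off-diagonal distance
    counts[iu[0][k_min], iu[1][k_min]] += 1

log_prob = np.log((counts[iu] + 1.0) / n_trials)          # smoothed log-frequencies
latent_dist = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, -1))[iu]
# log_prob can now be plotted against latent_dist (cf. the right panel of Figure 5).
```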
As the linear kernel induces extremely small probabilities for larger
distances in $\mathbf{X}$, we also use importance sampling to simulate tail
probabilities and ensure that these behave linearly as a function of the
distances in $\mathbf{X}$. For this, we need a full joint distribution of
distances, which is approximated below.
###### Theorem D.1 (Distribution of normal distances).
Assume that $\mathbf{Y}$ is distributed as,
$\begin{bmatrix}\mathbf{Y}_{i}\\\
\mathbf{Y}_{j}\end{bmatrix}\sim\mathcal{MN}\left(\mu,\begin{bmatrix}k_{ii}&k_{ij}\\\
k_{ji}&k_{jj}\end{bmatrix},\mathbf{I}_{d}\right).$
Then, the following hold. Firstly, denoting
$d_{ij}^{2}=\|\mathbf{Y}_{i}-\mathbf{Y}_{j}\|^{2}$, the marginal distribution
is given by,
$d_{ij}^{2}\sim\Gamma\left(k=\dfrac{d}{2},\theta=2(k_{ii}+k_{jj}-2k_{ij})\right).$
As a consequence, $\mathbb{E}(d_{ij}^{2})=d*\tilde{k}_{ij}\text{ and
}\mathbb{V}(d_{ij}^{2})=2d*\tilde{k}_{ij}^{2}$, where
$\tilde{k}_{ij}=k_{ii}+k_{jj}-2k_{ij}$.
Additionally,
$\mathbb{C}(d_{ij}^{2},d_{mn}^{2})=2d*(k_{im}+k_{jn}-k_{in}-k_{jm})^{2}.$
This is a useful fact as the upper triangle of the distance matrix is
approximately normal due to the central limit theorem with increasing $d$.
Proved in Appendix E.
Results of this experiment are given in Figure 6, which confirms that a
Gaussian process with a linear (dot product) kernel produces probabilities
that are approximately linear functions of the latent distance, which are
unlike the assumed t-SNE adjacency probabilities.
Figure 6: Illustration of $\mathbb{P}(argmin(d_{ij}^{2})=ij)$ through
importance sampling for a linear GP.
Along different lines, we can think of the UMAP generative model as a model
for adjacencies
$\mathbf{A}_{ij}\sim\text{Bernoulli}(1/(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}))$
as stated in ProbDR. Using just the mean and the variance in Theorem D.1, we can
find the parameters of a covariance
$\mathbf{S}=\alpha\mathbf{X}\mathbf{X}^{T}+\beta
K_{t}(\mathbf{X})+\gamma\mathbf{I}$, using a black-box optimiser such that the
probability of a pair of points $ij$ having a distance smaller than
$\epsilon$,
$\mathbb{P}(d_{ij}^{2}<\epsilon)\approx\Phi\left(\dfrac{\epsilon-d\tilde{k}_{ij}}{\sqrt{2d}\tilde{k}_{ij}}\right),$
with
$\tilde{k}_{ij}=\alpha\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}+2\gamma+2\beta(1-1/(1+\|\mathbf{X}_{i}-\mathbf{X}_{j}\|^{2}))$,
is as in UMAP. This results in the parameters $\alpha\approx 0.1,\beta\approx
0.3,\gamma\approx 0.6$, with the fit being near-perfect. However, we reason
that this is quite a coarse approximation and a naive application of this
covariance within ProbDR results in a mediocre embedding.
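The fit described above can be reproduced in a few lines; the sketch below (ours) evaluates the normal approximation of $\mathbb{P}(d_{ij}^{2}<\epsilon)$ implied by Theorem D.1, using the quoted values $\alpha\approx 0.1$, $\beta\approx 0.3$, $\gamma\approx 0.6$ and illustrative choices of $\epsilon$ and $d$.

```python
# Minimal sketch: normal approximation of P(d_ij^2 < eps) for S = alpha*X X^T + beta*K_t + gamma*I,
# with k_tilde_ij = alpha*||X_i - X_j||^2 + 2*gamma + 2*beta*(1 - 1/(1 + ||X_i - X_j||^2)).
import numpy as np
from scipy.stats import norm

def p_close(dist2, d=100, eps=1.0, alpha=0.1, beta=0.3, gamma=0.6):
    k_tilde = alpha * dist2 + 2.0 * gamma + 2.0 * beta * (1.0 - 1.0 / (1.0 + dist2))
    return norm.cdf((eps - d * k_tilde) / (np.sqrt(2.0 * d) * k_tilde))

dist2 = np.linspace(0.0, 25.0, 6)     # squared latent distances ||X_i - X_j||^2
print(p_close(dist2))                  # decays with the latent distance
```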
## Appendix E An Approximate Joint Distribution of Euclidean Distances
The first part of the theorem is given in Ravuri et al. (2023), reproduced
below.
$\displaystyle\forall k:d^{\prime}_{ij}\equiv y_{i}^{k}-y_{j}^{k}$
$\displaystyle\sim\mathcal{N}(0,k_{ii}+k_{jj}-2k_{ij})\overset{d}{=}\sqrt{k_{ii}+k_{jj}-2k_{ij}}Z$
$\displaystyle\Rightarrow
d_{ij}^{2}\equiv\|\mathbf{Y}_{i}-\mathbf{Y}_{j}\|^{2}=\sum_{k}^{d}(y^{k}_{i}-y^{k}_{j})^{2}$
$\displaystyle\overset{d}{=}(k_{ii}+k_{jj}-2k_{ij})\sum_{k}^{d}Z_{k}^{2}$
$\displaystyle\overset{d}{=}(k_{ii}+k_{jj}-2k_{ij})\chi^{2}_{d}$
$\displaystyle\overset{d}{=}\Gamma(k=d/2,\theta=2(k_{ii}+k_{jj}-2k_{ij})).$
The covariance between $d_{ij}^{2}$ and $d_{mn}^{2}$ can be computed as
follows. Let,
$\displaystyle
d^{\prime}_{ij}=\mathbf{Y}_{id}-\mathbf{Y}_{jd}\;\;\text{and}\;\;d^{\prime}_{mn}=\mathbf{Y}_{md}-\mathbf{Y}_{nd}.$
We can then derive some important moments as follows,
$\mathbb{E}(d^{\prime}_{ij})=0,$
$\mathbb{V}(d^{\prime}_{ij})=\mathbb{E}(d^{{}^{\prime}2}_{ij})=k_{ii}+k_{jj}-2k_{ij},$
$\displaystyle\mathbb{C}(d^{\prime}_{ij},d^{\prime}_{mn})$
$\displaystyle=\mathbb{C}(Y_{id}-Y_{jd},Y_{md}-Y_{nd})$
$\displaystyle=\mathbb{C}(Y_{id},Y_{md})-\mathbb{C}(Y_{id},Y_{nd})-\mathbb{C}(Y_{jd},Y_{md})+\mathbb{C}(Y_{jd},Y_{nd})$
$\displaystyle=k_{im}+k_{jn}-k_{in}-k_{jm}.$
Then,
$\displaystyle\mathbb{C}(d_{ij}^{2},d_{mn}^{2})$
$\displaystyle=\mathbb{C}\left(\sum_{d_{1}}(\mathbf{Y}_{id_{1}}-\mathbf{Y}_{jd_{1}})^{2},\sum_{d_{2}}(\mathbf{Y}_{md_{2}}-\mathbf{Y}_{nd_{2}})^{2}\right)$
$\displaystyle=\sum_{d_{1}}\sum_{d_{2}}\mathbb{C}((\mathbf{Y}_{id_{1}}-\mathbf{Y}_{jd_{1}})^{2},(\mathbf{Y}_{md_{2}}-\mathbf{Y}_{nd_{2}})^{2})$ (linearity)
$\displaystyle=\sum_{d}\mathbb{C}(d^{{}^{\prime}2}_{ij},d^{{}^{\prime}2}_{mn})$ (independence across dimensions)
$\displaystyle=d\,\mathbb{C}(d^{{}^{\prime}2}_{ij},d^{{}^{\prime}2}_{mn})$
$\displaystyle=d\left[\mathbb{E}[d^{{}^{\prime}2}_{ij}d^{{}^{\prime}2}_{mn}]-\mathbb{E}[d^{{}^{\prime}2}_{ij}]\mathbb{E}[d^{{}^{\prime}2}_{mn}]\right]$
$\displaystyle=d\left[\mathbb{E}(d^{{}^{\prime}2}_{ij})\mathbb{E}(d^{{}^{\prime}2}_{mn})+2\mathbb{E}^{2}(d^{\prime}_{ij}d^{\prime}_{mn})-\mathbb{E}[d^{{}^{\prime}2}_{ij}]\mathbb{E}[d^{{}^{\prime}2}_{mn}]\right]$ (Isserlis’ theorem)
$\displaystyle=2d\,\mathbb{E}^{2}(d^{\prime}_{ij}d^{\prime}_{mn})$
$\displaystyle=2d\,(k_{im}+k_{jn}-k_{in}-k_{jm})^{2}.$
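This identity is easy to verify numerically; the sketch below uses an arbitrary kernel matrix (not one from our experiments) and compares the Monte Carlo covariance of squared distances against $2d(k_{im}+k_{jn}-k_{in}-k_{jm})^{2}$.

```python
# Numerical sanity check of the covariance identity above (illustrative kernel
# matrix; not taken from the paper's experiments).
import numpy as np

rng = np.random.default_rng(0)
P, D = 6, 3                                  # number of points and output dimensions
A = rng.normal(size=(P, P))
K = A @ A.T + P * np.eye(P)                  # an arbitrary valid kernel matrix
L = np.linalg.cholesky(K)

i, j, m, n = 0, 1, 2, 3
S = 200_000
Z = rng.normal(size=(S, P, D))
Y = np.einsum("ab,sbd->sad", L, Z)           # each output dimension ~ N(0, K), iid across D

d_ij2 = ((Y[:, i] - Y[:, j]) ** 2).sum(-1)
d_mn2 = ((Y[:, m] - Y[:, n]) ** 2).sum(-1)

empirical = np.cov(d_ij2, d_mn2)[0, 1]
theoretical = 2 * D * (K[i, m] + K[j, n] - K[i, n] - K[j, m]) ** 2
print(empirical, theoretical)                # should agree up to Monte Carlo error
```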
# Is Your AI-Generated Code Really Safe? Evaluating Large Language Models on
Secure Code Generation with CodeSecEval
Jiexin Wang (South China University of Technology, China), Xitong Luo (South China University of Technology, China), Liuwen Cao (South China University of Technology, China), Hongkui He (South China University of Technology, China), Hailin Huang (South China University of Technology, China), Jiayuan Xie (South China University of Technology, China), Adam Jatowt (University of Innsbruck, Austria) and Yi Cai (South China University of Technology, China)
###### Abstract.
Large language models (LLMs) have brought significant advancements to code
generation and code repair, benefiting both novice and experienced developers.
However, their training using unsanitized data from open-source repositories,
like GitHub, raises the risk of inadvertently propagating security
vulnerabilities. Despite numerous studies investigating the safety of code
LLMs, there remains a gap in comprehensively addressing their security
features. In this work, we present a comprehensive study aimed at precisely
evaluating and enhancing the security aspects of code LLMs. To
support our research, we introduce CodeSecEval, a meticulously curated dataset
designed to address 44 critical vulnerability types with 180 distinct samples.
CodeSecEval serves as the foundation for the automatic evaluation of code
models in two crucial tasks: code generation and code repair, with a strong
emphasis on security. Our experimental results reveal that current models
frequently overlook security issues during both code generation and repair
processes, resulting in the creation of vulnerable code. In response, we
propose different strategies that leverage vulnerability-aware information and
insecure code explanations to mitigate these security vulnerabilities.
Furthermore, our findings highlight that certain vulnerability types
particularly challenge model performance, influencing their effectiveness in
real-world applications. Based on these findings, we believe our study will
have a positive impact on the software engineering community, inspiring the
development of improved methods for training and utilizing LLMs, thereby
leading to safer and more trustworthy model deployment.
Large Language Models, Code Generation, Code Repair, Security, Dataset
## 1\. Introduction
Large language models (LLMs) such as PALM (Chowdhery et al., 2022), LLaMA
(Touvron et al., 2023), GPT-4 (OpenAI, 2023), and Claude 3 (Anthropic, 2024)
have demonstrated remarkable performance in code generation, enabling
developers to quickly transform ideas into functional code. This capability
reduces development time and effort significantly, as evidenced by the
popularity of GitHub’s Copilot (Friedman, 2021), a cloud-based AI assistant
that has attracted over 1.2 million users. However, since these code LLMs are
often trained on data from open-source repositories like GitHub, they may
inadvertently learn and replicate code that contains software faults, bugs,
and security vulnerabilities. The 2022 Open Source Security and Risk Analysis
(OSSRA) report (Synopsys, 2022) highlights that 81% of the 2,049 codebases
analyzed contain at least one vulnerability, with 49% harboring high-risk
vulnerabilities. Consequently, there is a risk that these models could
perpetuate these vulnerabilities in their code generation process, potentially
producing code that is not just flawed but also highly susceptible to
exploitation and malicious attacks. For instance, Pearce et al. (2022) reveal
that Copilot generates insecure code about 40% of the time, while Khoury et
al. (2023) observe that only 5 of the 21 programs produced by ChatGPT were
initially secure. Furthermore, Perry et al. (2023) find that participants who
had access to an AI assistant wrote significantly less secure code than those
without access to an assistant. As AI-driven programming becomes increasingly
prevalent in real-world software development, ensuring both the correctness
and security of the generated code is crucial to foster trust in AI solutions
and safeguard software systems against potential attacks.
Table 1. Comparison of related datasets. Abbreviations: PAE - ”Precise Automatic Evaluation”, CG - ”Code Generation”, CR - ”Code Repair”. Note: The ”PAE” column indicates whether the dataset supports precise automatic evaluation like Pass@k, and the ”Complete & Executable Code” column indicates whether the Secure/Insecure Code is fully complete and runnable without the need for additional context, such as helper functions for full functionality. Additionally, we include the HumanEval dataset, which is widely used for the general code generation task but does not specifically address code security concerns.
Dataset | Size | Problem Len(Avg.) | Insecure Code Lines(Avg.) | Secure Code Lines(Avg.) | Test Cases Num(Avg.) | Complete & Executable Code | CWE Types Num | PAE
---|---|---|---|---|---|---|---|---
CG | CR
HumanEval (Chen et al., 2021) | 164 | 67.85 | - | 7.49 | 7.20 | ✔ | - | ✔ | ✘
SecurityEval (Siddiq and Santos, 2022) | 121 | 40.90 | 11.60 | - | - | ✘ | 69 | ✘ | ✘
LLMSecEval(Tony et al., 2023) | 150 | 55.01 | - | 21.90 | - | ✔ | 18 | ✘ | ✘
CyberSecEval (Bhatt et al., 2023) | 1916 | 70.24 | 15.34 | - | - | ✘ | 50 | ✘ | ✘
CodeSecEval | 180 | 78.73 | 6.73 | 10.21 | 3.61 | ✔ | 44 | ✔ | ✔
While multiple studies (Pearce et al., 2022; Khoury et al., 2023; Perry et
al., 2023; Asare et al., 2023; Siddiq and Santos, 2022; Bhatt et al., 2023)
have investigated code LLMs from a safety perspective, their limitations are
noteworthy: (i) Most research tends to focus on either a select few LLMs or a
narrow range of vulnerability types. For instance, studies such as (Pearce et
al., 2022; Perry et al., 2023; Asare et al., 2023) exclusively focus on
Copilot, whereas (Khoury et al., 2023; Nascimento et al., 2023) primarily
examine ChatGPT. (ii) Although these studies identify security vulnerabilities
in LLM-generated code, they often fall short in exploring or sufficiently
validating strategies for generating more secure code. Moreover, the
capability of code LLMs to repair insecure code, another vital aspect of
improving code security, has been largely neglected. (iii) Existing datasets
(Pearce et al., 2022; Tony et al., 2023; Bhatt et al., 2023; Siddiq and
Santos, 2022) designed for evaluating code security exhibit significant
limitations, such as small size, partial and non-executable codes, or even
lack of insecure/secure code examples. Furthermore, for security assessment,
they typically rely on rule-based static analyzers, which have proven to be
inaccurate, or on manual checks that are only practical for a small, sampled
set of results and may overlook the correctness of the code. These issues
underscore a critical gap in the existing research landscape, highlighting the
need for more comprehensive studies that address a broader range of code
security challenges posed by large language models.
In response to these limitations, this study revolves around five critical
research questions, with a twofold objective: firstly, to more accurately
identify security vulnerabilities in code generation and code repair by
current code LLMs; and secondly, to offer strategies for mitigating the
security risks associated with these tasks. To support our research, we
introduce CodeSecEval (uploaded as supplemental material; it will be made publicly available after publication), a meticulously
curated dataset comprising 180 samples that cover 44 critical vulnerability
types. This dataset represents a significant improvement over existing
datasets (Tony et al., 2023; Bhatt et al., 2023; Siddiq and Santos, 2022) by
enabling automated evaluations of code generation and repair tasks. It
includes complete and executable code and a set of test cases, which reduces
the reliance on labor-intensive manual assessments and imprecise analytical
tools. Table 1 provides detailed statistics and comparisons with four related
datasets (i.e., HumanEval (Chen et al., 2021), SecurityEval (Siddiq and
Santos, 2022), LLMSecEval (Tony et al., 2023), and CyberSecEval (Bhatt et al., 2023)), highlighting its distinct features and advantages. (In Table 1, the 150 instances in the LLMSecEval dataset actually correspond to only 51 unique problems, because a large proportion of the ”NL Prompt” entries (equivalent to ”Problem” in this study) are rephrased versions of the same issue, essentially requiring identical code solutions.) Leveraging the CodeSecEval
dataset, we assess the performance of 7 state-of-the-art code LLMs in the
tasks of secure code generation and insecure code repair. (It is worth noting that the CodeSecEval dataset can also be easily adapted to other code-related tasks like code completion (Izadi et al., 2022; Lu et al., 2022) and vulnerability classification (Dong et al., 2023; Wang et al., 2023), with a particular focus on code security.) Our findings indicate that current models
often overlook security concerns during code generation or repair processes.
In response, we propose and validate strategies that significantly enhance
code security during generation and repair by integrating vulnerability-aware
information and explanations of insecure code. Therefore, this study aims to
encourage the development of more robust methods for training and deploying
LLMs, leading to safer and more reliable code generation and repair solutions.
In summary, our contributions are as follows:
1. (1)
We introduce CodeSecEval, a carefully curated dataset consisting of 180
samples covering 44 critical vulnerability types. This dataset represents a
substantial improvement over existing resources by enabling more efficient and
automated evaluations for code security analysis.
2. (2)
Through an extensive evaluation of seven cutting-edge code LLMs, our work
sheds light on their common neglect of security considerations during code
generation and repair. This analysis offers a detailed critique of the models’
vulnerabilities, providing a deeper insight into their limitations.
3. (3)
We devise and validate effective strategies to enhance the security of code
generated or repaired by incorporating vulnerability-aware information and
explanations of insecure code. These strategies, aimed at significantly
mitigating vulnerabilities, offer valuable insights into safer model training
methodologies and more secure program deployment practices.
## 2\. Related Work
### 2.1. Security Issue of LLMs
Beyond natural language understanding, large language models (LLMs) have
greatly advanced the field of programming languages. Leveraging vast code
repositories, LLMs have achieved significant success across various code-
related tasks including code repair (Joshi et al., 2023; Xia and Zhang, 2022;
Pearce et al., 2023), code completion (Izadi et al., 2022; Lu et al., 2022),
code summarization (MacNeil et al., 2023, 2022), and code generation (Wang et
al., 2021a; Chen et al., 2021; Nijkamp et al., 2022). Moreover, advancements
in pre-training techniques have also led to the creation of specialized models
like CodeBERT (Feng et al., 2020), CodeT5 (Wang et al., 2021b), PyCodeGPT (Zan
et al., 2022), AlphaCode (Li et al., 2022), and InCoder (Fried et al., 2022).
However, the frequent neglect of security issues in both generic LLMs and
specialized models poses substantial risks.
Recent research highlights the security vulnerabilities associated with code
generated by LLMs (Pearce et al., 2022; Khoury et al., 2023; Perry et al.,
2023; Asare et al., 2023; Siddiq and Santos, 2022; Bhatt et al., 2023). For
instance, Khoury et al. (2023) discovered that ChatGPT produced insecure code
in 16 out of 21 security-relevant scenarios, with only 7 cases being self-
corrected after further prompting. Pearce et al. (2022) reported that Copilot,
evaluated using CodeQL and manual checks, generated insecure code about 40% of
the time. Moreover, Perry et al. (2023) found that developers using AI model
assistance tended to generate more vulnerabilities, particularly in string
encryption and SQL injection, when interacting with OpenAI’s Codex model (Chen
et al., 2021).
In addition to generating more secure code, enhancing code security through
code repair (or automatic program repair, APR) presents another viable
solution. Although many studies (Gazzola et al., 2018; Le Goues et al., 2021;
Ye et al., 2021; Jiang et al., 2021; Sobania et al., 2023) have primarily
focused on bug fixes with less emphasis on security, recent research has
started to explore LLMs’ ability to address vulnerabilities (Wu et al., 2023;
Pearce et al., 2023; Chen et al., 2022; Prenner et al., 2022). For example, Wu
et al. (2023) conducted a pioneering study evaluating both LLMs and APR models
for their effectiveness in repairing Java vulnerabilities, revealing that they
only fix very few Java vulnerabilities.
While previous research has identified security issues in code generated or
repaired by LLMs, these studies often exhibit significant limitations: (1)
Most studies focus on a narrow selection of LLMs—for instance, Khoury et al.
(2023) and (Sobania et al., 2023) only evaluate ChatGPT, and (Wu et al., 2023)
overlooks advanced models such as GPT-4 (OpenAI, 2023) or CodeLlama (Roziere
et al., 2023). Additionally, some studies like (Pearce et al., 2023) are
limited to a few specific vulnerability types, examining only seven. (2) Many
of these studies primarily identify security challenges (Pearce et al., 2022,
2023), but do not sufficiently explore or validate strategies for generating
secure code or repairing insecure code. (3) There is a heavy reliance on
security tools like CodeQL (CodeQL, 2022) to validate code security, despite
their known inaccuracies (Siddiq and Santos, 2022; Xiong et al., 2023; Shin et
al., 2023). For example, (Shin et al., 2023) revealed that static bug
detectors identified only a negligible fraction of all bugs, accounting for
only 6 out of 410 bugs (roughly 1.5%). Moreover, while some studies employ manual
assessment to focus on security, this method can sometimes overlook the
overall correctness of the code.
### 2.2. Datasets for code security
Various datasets have been developed for code generation tasks, including
JuICe (Agashe et al., 2019), CONCODE (Iyer et al., 2018), DS-1000 (Lai et al.,
2022), HumanEval (Chen et al., 2021) and APPS (Hendrycks et al., 2021).
However, these datasets primarily focus on general code generation and do not
specifically evaluate the ability to generate secure code. In terms of
datasets related to security concerns, most are designed for evaluating
techniques in vulnerability detection and prediction (Arzt et al., 2014;
Nikitopoulos et al., 2021; Ponta et al., 2019). For code repair tasks,
QuixBugs (Lin et al., 2017) includes programs translated to both Python and
Java, each containing a single-line bug. Despite its relevance, this dataset
is relatively small, comprising only 40 instances. Big-Vul (Fan et al., 2020) contains 3,754 code vulnerabilities spanning 91 different vulnerability types, all extracted from 348 GitHub projects. CVEfixes (Bhandari et al., 2021)
provides a comprehensive categorization of vulnerabilities, utilizing the
Common Weakness Enumeration (CWE) types, and further enhances the assessment
of their impact by incorporating CVSS severity scores. This dataset comprises
a collection of 18,249 files and 50,322 functions, encompassing both pre-
repair and post-repair code. Both of these datasets contain vulnerability
information along with code before and after fixes, rendering them invaluable
resources in the field of vulnerability analysis. However, due to the lack of
test cases, automated assessment of the repair code generated by the models
from a security standpoint proves challenging. Focusing on secure code
generation, three notable datasets have been introduced: SecurityEval (Siddiq
and Santos, 2022), LLMSecEval (Tony et al., 2023), and CyberSecEval (Bhatt et
al., 2023). SecurityEval, introduced first, comprises 130 Python code samples
across 75 vulnerability types. LLMSecEval followed with 150 instances covering
18 types, and the most recent one, CyberSecEval, provides a significantly
larger collection of 1,916 instances across 50 types.
Despite the availability of these datasets, significant gaps remain in their
ability to comprehensively address code security, as highlighted in Table 1.
These datasets often fail to provide comparative examples of insecure and
secure code. For instance, each SecurityEval sample only includes an ‘ID’, a
‘Prompt’ (equivalent to ”Problem” in this study), and an ‘Insecure Code’, but
lacks corresponding secure code examples. Additionally, the code in
SecurityEval and CyberSecEval is not executable as-is, often requiring
additional helper functions or specific configurations. In contrast, while the
code in LLMSecEval is complete, it presents a redundancy issue. Its 150
instances only represent 51 unique problems, as many of the ”NL Prompt”
entries are merely rephrased versions of the same issue. Furthermore, these
datasets do not support precise automatic evaluation like the Pass@k metric,
forcing reliance on imprecise rule-based static analyzers or manual checks,
the shortcomings of which were discussed previously.
To overcome these limitations, we introduce CodeSecEval, a meticulously
curated dataset designed specifically to evaluate the security awareness of
large language models in code generation and repair tasks. CodeSecEval
includes a broad spectrum of critical vulnerability types and provides
detailed attributes for each data instance, enabling precise automatic
evaluations. By utilizing CodeSecEval, we aim to more accurately investigate
the capabilities of state-of-the-art LLMs in code generation and repair, while
also proposing effective strategies to enhance security in both tasks.
(a) Example data instance of the SecEvalBase, with ”ID” attribute of
”CWE-020_author_1”, ”Entry_Point” attribute of ”yaml_load”.
(b) Example data instance of the SecEvalPlus, with ”ID” attribute of
”CWE-78_01”, ”Entry_Point” attribute of ”find_files”.
Figure 1. Illustrative examples of the CodeSecEval dataset, comprising two
data instances from its two sub-datasets. The attributes displayed with a
white background correspond to the standard attributes of the CodeSecEval
dataset. In contrast, the attributes with a gray background are those introduced specifically for our investigation, which aims to validate whether they can effectively mitigate vulnerabilities, as discussed in Section 3.2.
## 3\. Study Design
In this work, we aim to evaluate the efficacy of code LLMs in managing
security concerns during code generation and repair. Additionally, we seek to
bolster the security of these processes by proposing and assessing effective
strategies. To achieve this, we formulate several research questions that
guide our investigation:
* •
RQ1: How effective are LLMs in addressing security concerns during code
generation?
* •
RQ2: What strategies can be devised to improve the security of code generation
by LLMs, and to what extent can they mitigate security vulnerabilities? Are
certain vulnerability types more likely to be successfully mitigated?
* •
RQ3: How well do LLMs perform in repairing insecure code?
* •
RQ4: What strategies can be devised to improve the security of code repaired
by LLMs, and to what extent can these proposed approaches repair security
vulnerabilities? Are certain vulnerability types more likely to be
successfully repaired?
* •
RQ5: What are the implications of the research findings for the broader
software engineering community, and how can developers and researchers
leverage LLMs more securely in real-world applications?
Following this, we present CodeSecEval and elaborate on its construction
process. We then describe the experimental setup, including five experiments
tailored to probe the outlined research questions. Subsequently, we introduce
the code LLMs tested in these experiments and the evaluation metrics used to
assess the security-related performance.
### 3.1. CodeSecEval
#### 3.1.1. Dataset Introduction
We now introduce CodeSecEval (uploaded as supplemental material; it will be made publicly available after publication), a dataset
meticulously curated to evaluate the tasks of secure code generation and
insecure code repair. Comprising 180 samples across 44 vulnerability types,
CodeSecEval offers a robust framework for assessing code security in the
Python language. As shown in Table 1, this dataset distinguishes itself from
existing datasets such as SecurityEval (Siddiq and Santos, 2022), LLMSecEval
(Tony et al., 2023), and CyberSecEval (Bhatt et al., 2023). Notably,
CodeSecEval includes both executable secure and insecure codes, as well as
incorporates test cases, facilitating automated and precise evaluations using
the Pass@k metric. The dataset is structured with six distinct attributes for
each instance, which are as follows:
* •
ID: A unique identifier for each data instance, which also indicates a
specific vulnerability type. For example, ”CWE-434_03” refers to a sample of
the CWE-434 vulnerability type.
* •
Problem: A description of a moderately complex programming problem that needs
to be solved.
* •
Insecure Code: An example of insecure code that exhibits the specified
vulnerability.
* •
Secure Code: An example of secure code that addresses the specified
vulnerability.
* •
Test: A set of test cases designed to validate both the functional correctness
and the security of the code, encapsulated in a function named ”check”.
* •
Entry_Point: Name of the function to be implemented.
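For illustration, a single CodeSecEval-style record with these six attributes could be serialized as below; the field values are shortened, hypothetical stand-ins rather than an actual instance from the dataset.

```python
# Hypothetical, abbreviated record illustrating the six attributes above
# (not an actual CodeSecEval instance).
example_instance = {
    "ID": "CWE-78_01",
    "Problem": "Implement find_files(directory, pattern) that returns the matching file names ...",
    "Insecure Code": "def find_files(directory, pattern):\n    ...",
    "Secure Code": "def find_files(directory, pattern):\n    ...",
    "Test": "def check(candidate):\n    assert ...",
    "Entry_Point": "find_files",
}
```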
Based on the characteristics of the vulnerabilities addressed and the
resources utilized, CodeSecEval is further divided into the following two
distinct subsets:
1. (1)
SecEvalBase: This subset is constructed using the SecurityEval dataset (Siddiq
and Santos, 2022), which aggregates instances from four external sources:
CodeQL (CodeQL, 2022), The Common Weakness Enumeration (CWE) ((2022), MITRE),
SonarSource (S.A., 2022), and Pearce et al.(Pearce et al., 2022). The original
SecurityEval dataset, however, does not include annotations for ”Secure Code”,
”Test”, and ”Entry_Point”, and its ”Insecure Code” instances are often
incomplete, necessitating additional context such as helper functions or
specific configurations to ensure full functionality. Therefore, the selection
of instances for SecEvalBase was guided by the practicality of completing the
insecure code and providing necessary annotations for the missing attributes.
Finally, SecEvalBase includes 67 instances covering 37 vulnerability types.
2. (2)
SecEvalPlus: This subset focuses on the ”2023 CWE Top 25 Most Dangerous Software Weaknesses” (https://cwe.mitre.org/top25/archive/2023/2023_top25_list.html). We excluded eight types from this list due to their rarity in Python, such as ”CWE-476: NULL Pointer Dereference”, or the specific configurations required to conduct testing, like ”CWE-918: Server-Side Request Forgery (SSRF)”. We
merged ”CWE-287”, ”CWE-863”, ”CWE-862”, and ”CWE-306” into a single category
addressing similar authorization issues. Finally, SecEvalPlus comprises 113
instances across 14 types, providing at least 8 instances for each type (only the merged authorization-related type includes 9 instances), ensuring a robust sample for each category.
Figure 1 showcases two example data instances from the SecEvalBase and
SecEvalPlus of CodeSecEval (displayed with a white background), each
displaying four attributes, with ”ID” and ”Entry_Point” noted in the subfigure
captions. In SecEvalPlus (Figure 1b), targeting the CWE-78 vulnerability (”OS
Command Injection”), the ”Insecure Code” illustrates a risk where attackers
could inject harmful commands, such as ”rm -rf”. In contrast, the ”Secure
Code” effectively mitigates this vulnerability. The ”Test” attribute includes
various test cases designed to assess both the correctness and security of the
code, such as checking for the presence of harmful commands like
‘Test/CWE-78_01/dir1; rm MyImportantFile.txt’. Furthermore, while SecEvalPlus
employs a more natural language description for the ”Problem”, SecEvalBase
features code statements combined with a docstring. This deliberate
differentiation in dataset construction aims to evaluate the performance of
LLMs across different presentation formats.
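To make the CWE-78 example concrete, the sketch below contrasts the vulnerable and the safe patterns described above; it is an illustrative reconstruction, not the dataset's actual ”Insecure Code” and ”Secure Code”.

```python
# Illustrative reconstruction of the CWE-78 pattern discussed above
# (not the dataset's actual code).
import fnmatch
import os
import subprocess

def find_files_insecure(directory: str, pattern: str) -> list:
    # Vulnerable: an input such as "dir1; rm MyImportantFile.txt" is executed by the shell.
    out = subprocess.run(f"ls {directory} | grep {pattern}",
                         shell=True, capture_output=True, text=True)
    return out.stdout.splitlines()

def find_files_secure(directory: str, pattern: str) -> list:
    # Safe: no shell is involved, so metacharacters in the input are never interpreted.
    return sorted(f for f in os.listdir(directory) if fnmatch.fnmatch(f, pattern))
```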
#### 3.1.2. Dataset Construction
This subsection outlines the construction process of the CodeSecEval dataset.
To ensure its high-quality, we engaged eight students specializing in software
engineering, including four Ph.D. and four M.S. students, with research
expertise in areas such as code generation and code summarization. They were
grouped into four pairs, each consisting of one Ph.D. and one M.S. student, to
foster collaboration and leverage diverse skills. Subsequently, these pairs
were tasked with generating instances for the two subsets of CodeSecEval.
For the SecEvalBase dataset, each group was allocated approximately 35 records
from the existing SecurityEval dataset, representing about a quarter of its
total records. The team members were tasked with closely collaborating to
analyze the assigned instances, focusing on the executability of the insecure
code and the feasibility of constructing various test cases. Following the
initial assessment, they next annotated the three missing data elements: ”Secure Code”, ”Test”, and ”Entry_Point”. Moreover, teams were instructed to add some input-output examples in the ”Problem” and make necessary adjustments to the
”Insecure Code” to facilitate testing and better match the vulnerability
contexts. Each record then underwent a rigorous manual checking process within
the group, following these steps:
1. (1)
The ”Problem” should be clear, moderately complex, distinct from previously
collected ”Problem”, and include input-output examples.
2. (2)
The ”Insecure Code” must exhibit the designated vulnerability.
3. (3)
The ”Secure Code” needs to effectively address the vulnerability present in
the ”Insecure Code”.
4. (4)
The ”Test” should comprise various cases that assess both the correctness and
security of the code, with the ”Secure Code” passing all tests while the
”Insecure Code” fails.
5. (5)
The ”Entry_Point” should solely contain the name of the function to be
implemented.
Figure 2. The flowchart of the manual filtering process.
If any step does not fulfill the requirement, the students are asked to either
correct it to be valid or omit it and generate another new record. Figure 2
depicts a clear flowchart outlining the manual filtering steps. Finally, to
further ensure the quality of the dataset, we hired 2 additional M.S. students
to thoroughly check and clean each instance in the collected data.
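Requirement (4) above also lends itself to a mechanical check: run the record's ”check” function against both code variants and require the secure version to pass while the insecure one fails. The sketch below is one possible illustration of such a check, not our actual validation harness.

```python
# Illustrative check for requirement (4): the "Secure Code" must pass all tests
# and the "Insecure Code" must fail them.  This is a sketch, not the authors'
# validation harness.
def passes_all_tests(code: str, test: str, entry_point: str) -> bool:
    env = {}
    try:
        exec(code, env)                 # define the candidate function
        exec(test, env)                 # define check(candidate)
        env["check"](env[entry_point])  # raises AssertionError on any failing case
        return True
    except Exception:
        return False

def record_is_valid(record: dict) -> bool:
    secure_ok = passes_all_tests(record["Secure Code"], record["Test"], record["Entry_Point"])
    insecure_ok = passes_all_tests(record["Insecure Code"], record["Test"], record["Entry_Point"])
    return secure_ok and not insecure_ok
```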
For SecEvalPlus, each group was assigned 3 or 4 vulnerability types from the
selected 14 types listed in the ”2023 CWE Top 25 Most Dangerous Software
Weaknesses”. The teams were tasked with generating at least eight instances
for each type. Unlike SecEvalBase, no predefined ”Insecure Code” or ”Problem”
was provided, requiring groups to either identify real-world scenarios or
create new ones exemplifying these vulnerabilities, inspired by studies like
(Khoury et al., 2023; Pearce et al., 2022). Finally, each SecEvalPlus record
underwent the same meticulous verification and filtering process as
SecEvalBase.
### 3.2. Assumptions for Vulnerability Mitigation in Code Generation and Code
Repair
This subsection outlines our assumptions designed to potentially enhance the
security of code generated and repaired by LLMs. We hypothesize that
incorporating vulnerability-aware information into problem descriptions and
providing explanations of vulnerabilities in insecure code can foster more
secure coding practices.
Vulnerability-aware Problem: Inspired by findings from (Khoury et al., 2023),
which demonstrated that further prompting could correct security flaws in
several coding scenarios, we hypothesize that making problem descriptions
vulnerability-aware can also assist LLMs. This strategy involves explicitly
emphasizing the importance of recognizing and addressing vulnerabilities. We
propose that by integrating security concerns into problem descriptions, LLMs
might be better prepared to identify and mitigate potential security risks.
Insecure Code Explanation: Considering that it might be too difficult for models to repair code accurately when given only the insecure code and the problem description as input, we
assume that providing a brief explanation of the vulnerabilities present in
the insecure code could improve repair outcomes. This additional information
is intended to provide some context that enables LLMs to focus more precisely
on the security flaws needing correction.
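As a simple illustration of these two strategies, the snippets below show how vulnerability-aware information and an insecure-code explanation might be attached to a ”Problem”; the exact wording in the dataset annotations may differ.

```python
# Hypothetical prompt construction illustrating the two strategies; the exact
# wording in the dataset annotations may differ.
problem = "Implement find_files(directory, pattern) that returns the matching file names."

vulnerability_aware_problem = (
    problem
    + " Be careful to avoid OS command injection (CWE-78): never pass"
      " unsanitized input to a shell."
)

insecure_code_explanation = (
    "The given code builds a shell command by string concatenation, so an input such as"
    " 'dir1; rm MyImportantFile.txt' would execute an attacker-controlled command."
)

repair_prompt = (
    f"{problem}\n\nInsecure code:\n<insecure code here>\n\n"
    f"Explanation of the vulnerability: {insecure_code_explanation}\n\n"
    "Please provide a secure, functionally equivalent fix."
)
```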
To test these assumptions, the students responsible for constructing the
dataset were specifically instructed to develop both vulnerability-aware
problems and insecure code explanations. (This enriched contextual information has also been uploaded as supplemental material, aiming to enhance its utility and accessibility for further research.) Figure 1 illustrates these enhancements with examples from the data instances with IDs ”CWE-020_author_1” and ”CWE-78_01”, displayed on a gray background.
### 3.3. Experimental Setup
#### 3.3.1. Designed Experiments
To answer the five formulated research questions, we conduct comprehensive
evaluations of the models using CodeSecEval across code generation and code
repair. We have designed four different experiments to thoroughly investigate
the performance and validate the effectiveness of strategies applied by LLMs
in both tasks:
1. (1)
Direct Code Generation: This experiment evaluates the capability of LLMs to
generate secure code directly from problem statements, aiming to answer RQ1.
It explores how effectively current models address vulnerabilities during code
generation.
2. (2)
Code Generation with Vulnerability-aware Problem: This experiment examines the
impact of incorporating vulnerability-aware information during code
generation. It seeks to determine if enhanced problem descriptions with
security details can lead to fewer vulnerabilities, addressing RQ2.
3. (3)
Direct Code Repair: This experiment addresses RQ3 and focuses on assessing how
well existing large language models perform in directly repairing insecure
code. We aim to understand the models’ capabilities in automatically
identifying and fixing security vulnerabilities in existing code.
4. (4)
Code Repair with Insecure Code Explanation: This experiment provides LLMs with
explanations of the vulnerabilities present in the insecure code during code
repair. This test addresses RQ4 and explores whether supplying detailed
vulnerability context improves or hinders the repair process.
#### 3.3.2. Tested Models
We test the following seven models:
* •
InCoder (Fried et al., 2022): InCoder is pre-trained on a mixture of
multilingual code data from GitHub and StackOverflow posts, utilizing a causal
masking objective. For our experiments, we utilized the InCoder model with
6.7B parameters.
* •
CodeGen (Nijkamp et al., 2022): CodeGen is a family of code language models
available in different parameter sizes (350M, 2.7B, 6.1B, and 16.1B). For fair
comparison with the InCoder model, we used the mono version with parameter
size 6B.
* •
StarCoder (Li et al., 2023a): StarCoder is a 15B parameter model with an 8K
window size and FIM (Fill In the Middle, or infilling) capability. It
outperforms many previous open-source large language models that support
generating code from natural language descriptions and even matches the OpenAI
code-cushman-001 model on the HumanEval (Chen et al., 2021) and MBPP
benchmarks (Austin et al., 2021).
* •
CodeLlama-Instruct (Roziere et al., 2023): CodeLlama-Instruct is a specialized
model crafted for precise instruction comprehension and secure deployment. By
leveraging a dataset from Llama 2 prompts to solve coding challenges and
leveraging CodeLlama to generate relevant unit tests and solutions, CodeLlama-
Instruct significantly enhances security and usability through fine-tuning. We
used the version with parameter size 7B.
* •
GPT-3.5 (OpenAI, 2023): GPT-3.5 has 175 billion parameters and has been
trained on a diverse range of internet text, enabling it to demonstrate
impressive understanding and generation capabilities.
* •
GPT-4 (OpenAI, 2023): GPT-4 has been trained on extensive and diverse data,
surpassing the capabilities of its predecessor GPT-3.5.
* •
Claude 3 Opus (Anthropic, 2024): Claude 3 Opus, with 137 billion parameters,
stands as a cutting-edge large language model engineered by Anthropic,
showcasing exceptional performance across a spectrum of AI benchmarks
evaluating expert knowledge, reasoning, and mathematical prowess.
Demonstrating near-human comprehension on intricate tasks, Claude 3 Opus
excels in analysis, forecasting, nuanced content creation, coding, and
multilingual conversation.
#### 3.3.3. Metrics
For code generation and code repair, we utilize the execution-based metric
Pass@k, which is widely acknowledged as a more reasonable measure than match-
based methods such as BLEU (Papineni et al., 2002). Pass@k is used for
measuring the exact functional correctness of generated code, where k code
samples are generated for each problem. A problem is considered solved if any
sample passes all the unit tests. Since this computation of Pass@k can have
high variance, we follow (Chen et al., 2021) and use the unbiased version of
Pass@k:
(1) $Pass@k=E_{problems}[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}]$
where $n$ is the total number of generated samples per problem, $k\leq n$, and $c\leq n$ is the number of samples that pass all test cases. $1-\frac{\binom{n-c}{k}}{\binom{n}{k}}$ is the
estimated Pass@k for a single problem. $E$ is the expectation of Pass@k over
all problems. In practice, we compute the average pass@k across all problems,
considering k values equal to 1, 3, 5, 7, and 10.
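A numerically stable implementation of this estimator, following the formulation of Chen et al. (2021), is sketched below; the per-problem (n, c) counts shown are illustrative.

```python
# Unbiased pass@k estimator (numerically stable form, following Chen et al., 2021).
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated per problem, c: samples passing all tests, k: budget."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Average over problems; the (n, c) pairs below are illustrative.
per_problem = [(10, 3), (10, 0), (10, 7)]
print(np.mean([pass_at_k(n, c, k=5) for n, c in per_problem]))
```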
## 4\. Results Discussion
Table 2. Comparative results of code generation across various models on CodeSecEval and its two subsets (SecEvalBase, SecEvalPlus), under two different experimental settings.
| CodeSecEval | | SecEvalBase | | SecEvalPlus
---|---|---|---|---|---
| Pass@K | | Pass@K | | Pass@K
Model | k=1 | k=3 | k=5 | k=7 | k=10 | | k=1 | k=3 | k=5 | k=7 | k=10 | | k=1 | k=3 | k=5 | k=7 | k=10
Direct Code Generation
Incoder | 0.39 | 0.84 | 1.11 | 1.33 | 1.67 | | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | 0.62 | 1.34 | 1.77 | 2.12 | 2.65
CodeGen | 5.89 | 8.00 | 9.14 | 10.03 | 11.11 | | 2.09 | 3.97 | 4.86 | 5.42 | 5.97 | | 8.14 | 10.38 | 11.68 | 12.77 | 14.16
StarCoder | 4.33 | 7.32 | 8.72 | 9.63 | 10.56 | | 1.19 | 2.91 | 3.90 | 4.35 | 4.48 | | 6.19 | 9.93 | 11.58 | 12.76 | 14.16
CodeLlama-Instruct | 9.22 | 12.28 | 13.16 | 13.54 | 13.89 | | 9.55 | 12.86 | 13.92 | 14.45 | 14.93 | | 9.03 | 11.93 | 12.71 | 13.00 | 13.27
GPT-3.5 | 10.56 | 14.64 | 16.18 | 17.02 | 17.78 | | 10.75 | 13.23 | 14.23 | 14.73 | 14.93 | | 10.44 | 15.48 | 17.34 | 18.38 | 19.47
GPT-4 | 12.44 | 15.24 | 16.28 | 17.00 | 17.78 | | 13.43 | 15.86 | 16.80 | 17.36 | 17.91 | | 11.86 | 14.88 | 15.97 | 16.78 | 17.70
Claude 3 Opus | 13.83 | 15.58 | 15.96 | 16.07 | 16.11 | | 13.13 | 13.43 | 13.43 | 13.43 | 13.43 | | 14.25 | 16.85 | 17.46 | 17.64 | 17.70
Code Generation using Vulnerability-aware Problem
Incoder | 0.61 | 1.30 | 1.70 | 1.98 | 2.22 | | 0.30 | 0.80 | 1.16 | 1.39 | 1.49 | | 0.80 | 1.59 | 2.02 | 2.33 | 2.65
CodeGen | 13.50 | 19.82 | 22.12 | 23.31 | 24.44 | | 7.46 | 12.59 | 14.69 | 15.75 | 16.42 | | 17.08 | 24.12 | 26.53 | 27.80 | 29.20
StarCoder | 14.11 | 21.32 | 23.90 | 25.32 | 26.67 | | 4.18 | 6.38 | 7.34 | 8.05 | 8.96 | | 20.00 | 30.18 | 33.72 | 35.57 | 37.17
CodeLlama-Instruct | 24.33 | 33.04 | 36.17 | 37.87 | 39.44 | | 27.01 | 33.53 | 35.77 | 37.25 | 38.81 | | 22.74 | 32.75 | 36.40 | 38.23 | 39.82
GPT-3.5 | 28.89 | 43.69 | 48.75 | 51.61 | 54.44 | | 29.85 | 41.03 | 44.01 | 45.25 | 46.27 | | 28.32 | 45.27 | 51.56 | 55.38 | 59.29
GPT-4 | 31.89 | 41.62 | 44.46 | 46.10 | 47.78 | | 34.48 | 42.77 | 45.02 | 46.31 | 47.76 | | 30.35 | 40.93 | 44.13 | 45.98 | 47.79
Claude 3 Opus | 39.89 | 46.63 | 49.55 | 51.42 | 53.33 | | 38.81 | 45.07 | 47.49 | 49.10 | 50.75 | | 40.53 | 47.56 | 50.76 | 52.79 | 54.97
Figure 3. Code Generation performance results of GPT-4 across 14 vulnerability types on the SecEvalPlus sub-dataset, under two different experimental settings.
Table 3. Code Generation performance results of GPT-4 Using Different Types of Vulnerability-aware Problem.
| Vulnerability-aware Problem With Steps | | Vulnerability-aware Problem Without Steps
---|---|---|---
| Pass@K | | Pass@K
Model | k=1 | k=3 | k=5 | k=7 | k=10 | | k=1 | k=3 | k=5 | k=7 | k=10
Incoder | 0.94 | 1.76 | 2.09 | 2.27 | 2.35 | | 0.32 | 0.88 | 1.35 | 1.72 | 2.11
CodeGen | 13.76 | 20.06 | 22.32 | 23.54 | 24.71 | | 13.26 | 19.61 | 21.94 | 23.11 | 24.21
StarCoder | 13.65 | 19.17 | 21.11 | 22.30 | 23.53 | | 14.53 | 23.25 | 26.40 | 28.03 | 29.47
CodeLlama-Instruct | 22.47 | 30.98 | 33.87 | 35.24 | 36.47 | | 26.00 | 34.89 | 38.22 | 40.22 | 42.11
GPT-3.5 | 32.00 | 46.93 | 51.49 | 54.01 | 56.47 | | 26.11 | 40.80 | 46.29 | 49.46 | 52.63
GPT-4 | 36.82 | 47.64 | 51.38 | 53.80 | 56.47 | | 27.47 | 36.23 | 38.27 | 39.21 | 40.00
Claude 3 Opus | 42.12 | 48.32 | 51.04 | 53.00 | 55.29 | | 37.89 | 45.12 | 48.21 | 50.01 | 51.58
Table 4. Comparative results of code repair across various models on CodeSecEval and its two subsets (SecEvalBase, SecEvalPlus), under two different experimental settings.
| CodeSecEval | | SecEvalBase | | SecEvalPlus
---|---|---|---|---|---
| Pass@K | | Pass@K | | Pass@K
Model | k=1 | k=3 | k=5 | k=7 | k=10 | | k=1 | k=3 | k=5 | k=7 | k=10 | | k=1 | k=3 | k=5 | k=7 | k=10
Direct Code Repair
Incoder | 0.28 | 0.51 | 0.55 | 0.56 | 0.56 | | 0.75 | 1.37 | 1.49 | 1.49 | 1.49 | | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
CodeGen | 3.17 | 4.28 | 4.60 | 4.80 | 5.00 | | 2.39 | 3.38 | 3.37 | 4.03 | 4.48 | | 3.63 | 4.81 | 5.11 | 5.25 | 5.31
StarCoder | 0.61 | 1.02 | 1.27 | 1.46 | 1.67 | | 1.19 | 1.49 | 1.49 | 1.49 | 1.49 | | 0.27 | 0.74 | 1.13 | 1.45 | 1.77
CodeLlama-Instruct | 9.28 | 13.17 | 14.66 | 15.59 | 16.67 | | 9.55 | 12.60 | 13.43 | 14.03 | 14.93 | | 9.12 | 13.51 | 15.39 | 16.51 | 17.70
GPT-3.5 | 10.67 | 15.16 | 17.16 | 18.44 | 20.00 | | 12.09 | 14.04 | 14.68 | 14.90 | 14.93 | | 9.82 | 15.83 | 18.63 | 20.55 | 23.01
GPT-4 | 20.44 | 26.65 | 29.23 | 30.92 | 32.78 | | 17.91 | 24.88 | 28.20 | 30.39 | 32.84 | | 21.95 | 27.71 | 29.84 | 31.23 | 32.74
Claude 3 Opus | 20.72 | 24.69 | 26.23 | 27.37 | 28.89 | | 19.55 | 23.79 | 25.12 | 25.95 | 26.87 | | 21.42 | 25.23 | 26.89 | 28.22 | 30.09
Code Repair using Insecure Code Explanation
Incoder | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
CodeGen | 2.61 | 3.62 | 3.83 | 3.88 | 3.89 | | 2.09 | 2.96 | 2.30 | 2.30 | 2.30 | | 2.92 | 4.01 | 4.33 | 4.42 | 4.42
StarCoder | 0.67 | 1.31 | 1.70 | 1.98 | 2.22 | | 1.04 | 1.48 | 1.49 | 1.49 | 1.49 | | 0.44 | 1.21 | 1.82 | 2.27 | 2.65
CodeLlama-Instruct | 15.67 | 21.42 | 23.68 | 25.07 | 26.67 | | 20.00 | 24.23 | 26.53 | 28.56 | 31.34 | | 13.10 | 19.76 | 22.00 | 23.01 | 23.89
GPT-3.5 | 16.59 | 23.44 | 26.48 | 28.35 | 30.17 | | 19.24 | 23.33 | 25.29 | 26.86 | 28.79 | | 15.04 | 23.50 | 27.17 | 29.23 | 30.97
GPT-4 | 23.44 | 28.85 | 30.84 | 31.91 | 32.78 | | 21.64 | 27.48 | 29.52 | 30.60 | 31.34 | | 24.51 | 29.67 | 31.62 | 32.69 | 33.63
Claude 3 Opus | 24.28 | 27.49 | 28.44 | 29.13 | 30.00 | | 22.09 | 24.92 | 26.53 | 27.96 | 29.85 | | 25.58 | 29.00 | 29.57 | 29.82 | 30.09
Figure 4. Code Repair performance results of GPT-4 across 14 vulnerability
types on the SecEvalPlus sub-datasets, under two different experimental
settings.
RQ1 _How effective are LLMs in addressing security concerns during code
generation?_
To address RQ1, we evaluate the models’ performance in generating code based
on ”Problem” information, with results presented in the upper section of Table
2. Among relatively small models (Incoder, CodeGen, StarCoder, and CodeLlama-
Instruct), our analysis reveals that CodeLlama-Instruct achieves the best
results in terms of Pass@k scores across various k values and datasets most of
the time, with CodeGen ranking second. In contrast, Incoder generally
underperforms in different settings, possibly due to its focus on code
completion tasks and the use of causal masking objectives during pre-training,
which may limit its effectiveness in broader code generation tasks. More
interestingly, despite being more than twice the size of CodeGen and
CodeLlama-Instruct, StarCoder yields inferior results. When considering models
with significantly larger parameters, our analysis reveals that the Claude 3
Opus model achieves superior results when k is small on the entire CodeSecEval
dataset and SecEvalPlus dataset. However, as k increases, GPT-4 or GPT-3.5
outperforms Claude 3 Opus. Additionally, on the SecEvalBase dataset, GPT-4
emerges as the best performer, while showing less effectiveness on the
SecEvalPlus dataset, which features problems in the form of natural language
descriptions.
Overall, these findings highlight the nuanced performance of large language models (LLMs) in code generation tasks, underlining the importance of
considering both k-values and dataset characteristics for optimal results.
While smaller models like CodeGen or CodeLlama-Instruct show promising
outcomes, larger models such as GPT-4 or Claude 3 Opus demonstrate superior
performance under certain conditions. These insights emphasize the ongoing
need for fine-tuning LLMs and tailoring their application to specific
requirements in addressing security concerns during code generation.
RQ2 _What strategies can be devised to improve the security of code generation
by LLMs, and to what extent can they mitigate security vulnerabilities? Are
certain vulnerability types more likely to be successfully mitigated?_
Next, we aim to explore methods for bolstering the security of code generation
by LLMs. While it’s intuitive to assume that formulating problems to highlight
potential vulnerabilities may prompt LLMs to avoid generating insecure code,
this assumption lacks robust validation in existing studies. To address this,
we introduce ”Vulnerability-aware Problems” to assess whether incorporating
vulnerability information improves code generation security. Results presented
in the lower section of Table 2 demonstrate a notable performance boost across
Pass@k for all models, except Incoder. Notably, the relatively smaller model CodeLlama-Instruct shows substantial gains, with Pass@1 and Pass@5 increasing from 9.22 to 24.33 and from 13.16 to 36.17 on the CodeSecEval dataset, respectively. Particularly striking is the performance of Claude 3 Opus, which shows remarkable improvements with Pass@1 and Pass@5 increasing
from 13.83 to 39.89 and from 15.96 to 49.55 on the CodeSecEval dataset,
respectively, and even outperforming GPT-4 on the SecEvalBase dataset.
Next, we analyze the performance of LLMs across various vulnerability types,
with a particular focus on GPT-4’s performance on the SecEvalPlus dataset
using the Pass@5 metric. Each of the 14 types in this subset contains a more
evenly distributed number of instances. As indicated in the blue column of
Figure 3, direct code generation using GPT-4 generally struggled to generate
secure code for the SecEvalPlus dataset, with only the Pass@5 for CWE-502
surpassing 50%. Notably, vulnerability types such as CWE-20, CWE-79, CWE-77,
CWE-434, and CWE-787 achieved a 0.0 score. However, by incorporating
”Vulnerability-aware Problem” descriptions, there was a significant
improvement in Pass@k rates across most types, with seven types exceeding a
50.0 score in the Pass@5 metric. Despite these gains, some vulnerability
types, like CWE-22 and CWE-276, showed minimal improvement. Interestingly,
types related to injection vulnerabilities, specifically CWE-78 (”OS Command
Injection”) and CWE-89 (”SQL Injection”), experienced worse results. This
suggests that GPT-4 may struggle with addressing injection vulnerabilities, or
that the vulnerability-aware information provided may inadvertently complicate
the model’s performance in these scenarios.
Finally, our analysis of the Vulnerability-aware Problems reveals that they
can be categorized into two types: one with detailed procedural steps, as
shown in Figure 1 (a), and one without detailed steps, as shown in Figure 1
(b). We manually classified these and found that 85 instances included
procedural steps, while 95 did not. Further analysis of GPT-4’s performance on
these two types, presented in Table 3, indicates that problems including steps
achieved better performance than those without. This finding aligns with the
results of studies such as (Jiang et al., 2023) and (Li et al., 2023b), which
suggest that using LLMs to plan and then implement code step-by-step can
significantly enhance code generation performance. However, these studies
primarily focused on general code generation without considering the security
aspect. Nevertheless, while introducing security-relevant step information
significantly aids in secure code generation, providing explanations of
vulnerabilities, even without a stepwise format, also contributes positively
to generating secure code. This indicates that both detailed procedural
guidance and straightforward vulnerability explanations can effectively
improve security in code generation tasks.
RQ3 _How well do LLMs perform in repairing insecure code?_
Next, we focus on the performance of code LLMs in the code repair task, where
models are tasked with repairing ”Insecure Code” based on the ”Problem” input.
The results of this experiment are detailed in the upper part of Table
4. Comparing these results with those from the direct code
generation task shown in Table 2, we observe a general decline in performance
among the three smaller models in the CodeSecEval dataset, namely Incoder,
CodeGen, and StarCoder. This trend suggests that these models may be less
effective at code repair. Specifically, both Incoder and StarCoder exhibit a
notable drop in effectiveness, with StarCoder experiencing the most
significant decline, where the Pass@1 score falls from 4.33 to 0.61.
Conversely, GPT-3.5, GPT-4, and Claude 3 Opus show enhanced performance in the
code repair task relative to code generation. Particularly striking is GPT-4,
whose Pass@5 score improves from 16.28 to 29.23 on the CodeSecEval dataset, achieving
the best results in most cases.
RQ4 _What strategies can be devised to improve the security of code repaired
by LLMs, and to what extent can these proposed approaches repair security
vulnerabilities? Are certain vulnerability types more likely to be
successfully repaired?_
We then explore whether including Insecure Code Explanation improves the
repair of insecure code. The results are shown in the lower part of Table
4\. Surprisingly, as in the direct code repair setting, we observe a general decline in performance among the same three smaller models (Incoder, CodeGen, and StarCoder) when compared with direct code generation. The other four models, including the relatively smaller CodeLlama-Instruct, all demonstrate improvements. Again, GPT-4 achieves the best results in most cases, with Claude 3 Opus as the second best model.
Similar to code generation, we next analyze the performance of LLMs across
various vulnerability types in code repair task, focusing particularly on
GPT-4’s performance on the SecEvalPlus dataset using the Pass@5 metric. As
depicted in Figure 4, although GPT-4 shows an overall improvement when using Insecure Code Explanation in Table 4, its performance still
varies significantly across different vulnerability types. For some types,
there is no improvement or even a decline when using the insecure code
explanations. These findings highlight the complexities involved in repairing
insecure code with current models and underline the need for advanced
approaches in code repair to bolster security in software development
practices.
RQ5 _What are the implications of the research findings for the broader
software engineering community, and how can developers and researchers
leverage LLMs more securely in real-world applications?_
The research findings presented in this study have several implications for
the broader software engineering community and offer insights on leveraging
large language models more securely in real-world applications.
1. (1)
Firstly, the CodeSecEval dataset introduced in this paper serves as a valuable
resource for evaluating code LLMs from a software security perspective. It
provides a curated collection of vulnerable and secure code instances,
enabling researchers to benchmark and improve the security-awareness
capabilities of code LLMs. The dataset can aid in evaluating more secure and
robust models for code generation, repair, and vulnerability classification
tasks.
2. (2)
Secondly, our study highlights the potential risks associated with using large
language models for code generation and code repair. It emphasizes the
importance of considering and mitigating security concerns when employing
these models in software development tasks. Understanding the varying
performance of different models across different vulnerability types can guide
developers in selecting appropriate models for specific use cases, considering
security requirements.
3. (3)
Finally, our findings underscore the need for further research and
advancements in code repair approaches to enhance security in software
engineering practices. As large language models continue to evolve, addressing
the challenges of repairing insecure code effectively is crucial for building
more trustworthy and secure software systems.
To leverage large language models more securely in real-world applications,
developers and researchers should consider:
* •
Incorporate Security Awareness: When utilizing large language models for code
generation tasks, developers should incorporate potential vulnerability
information into input prompts to encourage the models to generate more secure
code. Furthermore, research on transforming a plain Problem into a Vulnerability-aware Problem can also aid in generating more secure code.
* •
Validate Repair Capabilities: Before deploying large language models for code
repair tasks, thorough validation of their repair capabilities, especially
concerning security vulnerabilities, is essential to avoid introducing new
security risks.
* •
Dataset Curation: Building comprehensive datasets like CodeSecEval that
encompass various vulnerability types and provide clear explanations of
insecure code can facilitate the development of more robust and secure models.
* •
Continuous Model Improvements: Researchers and developers should continuously
work on improving large language models’ security-awareness capabilities,
addressing the limitations identified in our study and other related research.
In conclusion, the findings from this research provide valuable guidance for
enhancing the security of large language models in code generation and repair
tasks, contributing to the overall improvement of secure software engineering
practices. By understanding the implications of these findings, developers and
researchers can leverage large language models more securely in real-world
applications and mitigate potential security risks associated with code
generation tasks.
## 5\. Conclusions And Future Work
This paper provides a comprehensive study that aims to evaluate and enhance
code LLMs from a software security perspective. Extensive experiments on our
curated CodeSecEval dataset yield valuable insights into the strengths and
limitations of large language models in security-critical software engineering
tasks. Our proposed approaches for code generation have demonstrated their
effectiveness in enhancing code security and mitigating security
vulnerabilities. However, we also identified specific weaknesses in existing
LLMs’ capabilities, particularly in code repair for certain vulnerability
types. To advance the field of secure code generation, future research should
explore the generalizability of our approaches to other programming languages.
Moreover, improving the code repair capabilities of LLMs remains a promising
direction, and further research could investigate the effectiveness of
integrating domain-specific knowledge and feedback mechanisms to produce more
robust and secure code repairs. Overall, this study contributes to a better
understanding of LLMs’ potential and limitations in addressing security
concerns.
## References
* Synopsys (2022) Synopsys. 2022. Open Source Security and Risk Analysis Report. Technical report, Synopsys Inc.
* Agashe et al. (2019) Rajas Agashe, Srinivasan Iyer, and Luke Zettlemoyer. 2019. JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. Association for Computational Linguistics, 5436–5446. https://aclanthology.org/D19-1546
* Anthropic (2024) Anthropic. 2024. Introducing the next generation of Claude. Accessed: March 13, 2024. https://www.anthropic.com/news/claude-3-family.
* Arzt et al. (2014) Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexandre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick McDaniel. 2014. Flowdroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for android apps. _Acm Sigplan Notices_ 49, 6 (2014), 259–269.
* Asare et al. (2023) Owura Asare, Meiyappan Nagappan, and N Asokan. 2023. Is github’s copilot as bad as humans at introducing vulnerabilities in code? _Empirical Software Engineering_ 28, 6 (2023), 129.
* Austin et al. (2021) Jacob Austin, Augustus Odena, Maxwell I. Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. 2021. Program Synthesis with Large Language Models. _CoRR_ abs/2108.07732 (2021). arXiv:2108.07732 https://arxiv.org/abs/2108.07732
* Bhandari et al. (2021) Guru Bhandari, Amara Naseer, and Leon Moonen. 2021. CVEfixes: automated collection of vulnerabilities and their fixes from open-source software. In _Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering_. 30–39.
* Bhatt et al. (2023) Manish Bhatt, Sahana Chennabasappa, Cyrus Nikolaidis, Shengye Wan, Ivan Evtimov, Dominik Gabi, Daniel Song, Faizan Ahmad, Cornelius Aschermann, Lorenzo Fontana, et al. 2023\. Purple llama cyberseceval: A secure coding benchmark for language models. _arXiv preprint arXiv:2312.04724_ (2023).
* Chen et al. (2021) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021\. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_ (2021).
* Chen et al. (2022) Zimin Chen, Steve Kommrusch, and Martin Monperrus. 2022. Neural transfer learning for repairing security vulnerabilities in c code. _IEEE Transactions on Software Engineering_ 49, 1 (2022), 147–165.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022\. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_ (2022).
* CodeQL (2022) CodeQL. 2022. CodeQL. https://github.com/github/codeq.
* Dong et al. (2023) Yukun Dong, Yeer Tang, Xiaotong Cheng, and Yufei Yang. 2023. DeKeDVer: A deep learning-based multi-type software vulnerability classification framework using vulnerability description and source code. _Information and Software Technology_ 163 (2023), 107290.
* Fan et al. (2020) Jiahao Fan, Yi Li, Shaohua Wang, and Tien N Nguyen. 2020. AC/C++ code vulnerability dataset with code changes and CVE summaries. In _Proceedings of the 17th International Conference on Mining Software Repositories_. 508–512.
* Feng et al. (2020) Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020\. Codebert: A pre-trained model for programming and natural languages. _arXiv preprint arXiv:2002.08155_ (2020).
* Fried et al. (2022) Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. _arXiv preprint arXiv:2204.05999_ (2022).
* Friedman (2021) Nat Friedman. 2021. Introducing GitHub Copilot: your AI pair programmer. _URL https://github. blog/2021-06-29-introducing-github-copilot-ai-pair-programmer_ (2021).
* Gazzola et al. (2018) Luca Gazzola, Daniela Micucci, and Leonardo Mariani. 2018. Automatic software repair: A survey. In _Proceedings of the 40th International Conference on Software Engineering_. 1219–1219.
* Hendrycks et al. (2021) Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021\. Measuring Coding Challenge Competence With APPS. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)_.
* Iyer et al. (2018) Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping Language to Code in Programmatic Context. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 1643–1652. https://doi.org/10.18653/v1/D18-1192
* Izadi et al. (2022) Maliheh Izadi, Roberta Gismondi, and Georgios Gousios. 2022. Codefill: Multi-token code completion by jointly learning from structure and naming sequences. In _Proceedings of the 44th International Conference on Software Engineering_. 401–412.
* Jiang et al. (2021) Nan Jiang, Thibaud Lutellier, and Lin Tan. 2021. Cure: Code-aware neural machine translation for automatic program repair. In _2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)_. IEEE, 1161–1173.
* Jiang et al. (2023) Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, and Ge Li. 2023. Self-planning code generation with large language model. _arXiv preprint arXiv:2303.06689_ (2023).
* Joshi et al. (2023) Harshit Joshi, José Cambronero Sanchez, Sumit Gulwani, Vu Le, Gust Verbruggen, and Ivan Radiček. 2023. Repair is nearly generation: Multilingual program repair with llms. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 37. 5131–5140.
* Khoury et al. (2023) Raphaël Khoury, Anderson R Avila, Jacob Brunelle, and Baba Mamadou Camara. 2023. How Secure is Code Generated by ChatGPT? _arXiv preprint arXiv:2304.09655_ (2023).
* Lai et al. (2022) Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida Wang, and Tao Yu. 2022. DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation. _arXiv preprint arXiv:2211.11501_ (2022).
* Le Goues et al. (2021) Claire Le Goues, Michael Pradel, Abhik Roychoudhury, and Satish Chandra. 2021. Automatic program repair. _IEEE Software_ 38, 4 (2021), 22–27.
* Li et al. (2023a) Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023a. StarCoder: may the source be with you! _arXiv preprint arXiv:2305.06161_ (2023).
* Li et al. (2023b) Xin-Ye Li, Jiang-Tian Xue, Zheng Xie, and Ming Li. 2023b. Think outside the code: Brainstorming boosts large language models in code generation. _arXiv preprint arXiv:2305.10679_ (2023).
* Li et al. (2022) Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022\. Competition-level code generation with alphacode. _Science_ 378, 6624 (2022), 1092–1097.
* Lin et al. (2017) Derrick Lin, James Koppel, Angela Chen, and Armando Solar-Lezama. 2017. QuixBugs: A multi-lingual program repair benchmark set based on the Quixey Challenge. In _Proceedings Companion of the 2017 ACM SIGPLAN international conference on systems, programming, languages, and applications: software for humanity_. 55–56.
* Lu et al. (2022) Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, and Alexey Svyatkovskiy. 2022. Reacc: A retrieval-augmented code completion framework. _arXiv preprint arXiv:2203.07722_ (2022).
* MacNeil et al. (2023) Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from using code explanations generated by large language models in a web software development e-book. In _Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1_. 931–937.
* MacNeil et al. (2022) Stephen MacNeil, Andrew Tran, Dan Mogil, Seth Bernstein, Erin Ross, and Ziheng Huang. 2022. Generating diverse code explanations using the gpt-3 large language model. In _Proceedings of the 2022 ACM Conference on International Computing Education Research-Volume 2_. 37–39.
* (36) The MITRE Corporation (MITRE). 2022. Common Weakness Enumeration.
* Nascimento et al. (2023) Nathalia Nascimento, Paulo Alencar, and Donald Cowan. 2023. Comparing software developers with chatgpt: An empirical investigation. _arXiv preprint arXiv:2305.11837_ (2023).
* Nijkamp et al. (2022) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis. _arXiv preprint arXiv:2203.13474_ (2022).
* Nikitopoulos et al. (2021) Georgios Nikitopoulos, Konstantina Dritsa, Panos Louridas, and Dimitris Mitropoulos. 2021. CrossVul: a cross-language vulnerability dataset with commit data. In _Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 1565–1569.
* OpenAI (2023) OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_. 311–318.
* Pearce et al. (2022) Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri. 2022. Asleep at the keyboard? assessing the security of github copilot’s code contributions. In _2022 IEEE Symposium on Security and Privacy (SP)_. IEEE, 754–768.
* Pearce et al. (2023) Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt. 2023. Examining zero-shot vulnerability repair with large language models. In _2023 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2339–2356.
* Perry et al. (2022) Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2022. Do users write more insecure code with AI assistants? _arXiv preprint arXiv:2211.03622_ (2022).
* Perry et al. (2023) Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2023. Do users write more insecure code with AI assistants?. In _Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security_. 2785–2799.
* Ponta et al. (2019) Serena Elisa Ponta, Henrik Plate, Antonino Sabetta, Michele Bezzi, and Cédric Dangremont. 2019. A manually-curated dataset of fixes to vulnerabilities of open-source software. In _2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR)_. IEEE, 383–387.
* Prenner et al. (2022) Julian Aron Prenner, Hlib Babii, and Romain Robbes. 2022. Can OpenAI’s codex fix bugs? an evaluation on QuixBugs. In _Proceedings of the Third International Workshop on Automated Program Repair_. 69–75.
* Roziere et al. (2023) Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023\. Code llama: Open foundation models for code. _arXiv preprint arXiv:2308.12950_ (2023).
* S.A. (2022) SonarSource S.A. 2022. SonarSource static code analysis. https://rules.sonarsource.com.
* Shin et al. (2023) Jiho Shin, Junjie Wang, Song Wang, Nachiappan Nagappan, et al. 2023\. Automatic static bug detection for machine learning libraries: Are we there yet? _arXiv preprint arXiv:2307.04080_ (2023).
* Siddiq and Santos (2022) Mohammed Latif Siddiq and Joanna CS Santos. 2022. SecurityEval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques. In _Proceedings of the 1st International Workshop on Mining Software Repositories Applications for Privacy and Security_. 29–33.
* Sobania et al. (2023) Dominik Sobania, Martin Briesch, Carol Hanna, and Justyna Petke. 2023. An analysis of the automatic bug fixing performance of chatgpt. In _2023 IEEE/ACM International Workshop on Automated Program Repair (APR)_. IEEE, 23–30.
* Tony et al. (2023) Catherine Tony, Markus Mutas, Nicolás E Díaz Ferreyra, and Riccardo Scandariato. 2023. Llmseceval: A dataset of natural language prompts for security evaluations. In _2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR)_. IEEE, 588–592.
* Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023\. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_ (2023).
* Wang et al. (2023) Qian Wang, Yuying Gao, Jiadong Ren, and Bing Zhang. 2023. An automatic classification algorithm for software vulnerability based on weighted word vector and fusion neural network. _Computers & Security_ 126 (2023), 103070.
* Wang et al. (2021a) Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021a. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 8696–8708. https://doi.org/10.18653/v1/2021.emnlp-main.685
* Wang et al. (2021b) Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021b. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. _arXiv preprint arXiv:2109.00859_ (2021).
* Wu et al. (2023) Yi Wu, Nan Jiang, Hung Viet Pham, Thibaud Lutellier, Jordan Davis, Lin Tan, Petr Babkin, and Sameena Shah. 2023. How effective are neural networks for fixing security vulnerabilities. In _Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis_. 1282–1294.
* Xia and Zhang (2022) Chunqiu Steven Xia and Lingming Zhang. 2022. Less training, more repairing please: revisiting automated program repair via zero-shot learning. In _Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 959–971.
* Xiong et al. (2023) Yiheng Xiong, Mengqian Xu, Ting Su, Jingling Sun, Jue Wang, He Wen, Geguang Pu, Jifeng He, and Zhendong Su. 2023. An empirical study of functional bugs in android apps. In _Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis_. 1319–1331.
* Ye et al. (2021) He Ye, Matias Martinez, Thomas Durieux, and Martin Monperrus. 2021. A comprehensive study of automatic program repair on the QuixBugs benchmark. _Journal of Systems and Software_ 171 (2021), 110825.
* Zan et al. (2022) Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, and Jian-Guang Lou. 2022. CERT: Continual Pre-training on Sketches for Library-oriented Code Generation. In _The 2022 International Joint Conference on Artificial Intelligence_.
|
# Comparative Analysis of Contextual Relation Extraction based on Deep Learning Models
R. Priyadharshini*
Department of Computer Science
Pondicherry University
Puducherry, India
<EMAIL_ADDRESS>
G. Jeyakodi
Department of Computer Science
Pondicherry University
Puducherry, India
<EMAIL_ADDRESS>
P. Shanthi Bala
Department of Computer Science
Pondicherry University
Puducherry, India
<EMAIL_ADDRESS>
###### Abstract
Contextual Relation Extraction (CRE) is mainly used for constructing a
knowledge graph with the help of an ontology. It supports various tasks such as
semantic search, query answering, and textual entailment. Relation extraction
identifies the entities in raw text and the relations among them. An
efficient and accurate CRE system is essential for creating domain knowledge
in the biomedical industry. Existing Machine Learning and Natural Language
Processing (NLP) techniques are not suitable for efficiently predicting complex
relations from sentences that contain more than two relations and unspecified
entities. In this work, deep learning techniques have been used to identify
the appropriate semantic relation based on the context from multiple
sentences. Even though various machine learning models have been used for
relation extraction, they provide better results only for binary relations,
i.e., relations that occur between exactly two entities in a sentence.
Machine learning models are also not suited for complex sentences containing
words with multiple meanings. To address these issues, hybrid deep
learning models have been used to extract relations from complex sentences
effectively. This paper presents an analysis of various deep learning models
that are used for relation extraction.
_Keywords_ Contextual Relation Extraction $\cdot$ Word Embeddings $\cdot$ BERT
$\cdot$ Deep Learning Model
## 1 Introduction
Contextual Relation Extraction (CRE) helps to understand the meaning of the
entities and their relationships in a sentence. It can improve the performance
of Natural Language Processing (NLP) tasks such as information retrieval,
question answering, and semantic search [1]. Named Entity Recognition (NER)
aims to automatically identify and classify objects such as people, products,
organizations, and locations. The process of identifying terms in a text
and assigning them to appropriate groups is the basis of named entity
recognition and a key component of text analysis. The analysis of common
syntactic patterns is an important part of NER. Many deep learning models
address entity recognition applications such as indexing documents, finding
relationships among entities, and building ontologies [2-4]. The combination of
NER and CRE can provide a rich understanding of a text by identifying both the
entities and their relationships based on context. The joint modeling of entity
recognition and relation classification has attracted more attention recently
[5], and such end-to-end models have proliferated and improved results.
Information Extraction (IE) begins with the creation of knowledge graphs that
transform unstructured text into structured data. Entity extraction
and relation extraction are the two subtasks of IE.
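To make the NER step concrete, the following is a minimal sketch using the spaCy library; the model name and example sentence are illustrative assumptions, not taken from the surveyed papers:

```python
# Minimal NER sketch with spaCy; requires: pip install spacy
# and: python -m spacy download en_core_web_sm (assumed model).
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Aspirin was approved by the FDA in 1899."  # illustrative sentence
doc = nlp(text)

# Each recognized entity carries its surface text and a coarse type label.
for ent in doc.ents:
    print(ent.text, ent.label_)
```

The recognized entity mentions become the candidate arguments that a relation extraction model classifies in the next stage.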
Relation extraction has been an active research area in recent years.
Neural-network-based technology is used to efficiently classify entities and
relations. Natural Language Understanding (NLU) represents the associated
relationships among existing objects and the distinct relationships between two
or more entities. Entity relationships are the basis for automatically creating
a knowledge graph. During semantic relationship extraction, relation extraction
detects and categorizes the entities in the text. Examples of binary and n-ary
relations are shown in Figure 1.
Figure 1: Examples of binary and n-ary relations.
A binary relation consists of two entities and one relation, while an n-ary
relation consists of more than two entities and many relations. Binary relation
extraction models may have trouble handling larger sentences and can take a lot
of time for processing. Some of the common issues in binary relation extraction
are ambiguity, incomplete data, and noise in the text. To find and understand
the connections between different established categories, RE makes use of a
range of technologies. Recent joint extraction models work with a fixed
word-vector format for word embedding, which is unsuitable for words that have
multiple semantic meanings. To address this problem, Bo Qiao et al. developed
a dynamic fine-tuning method to overcome the issues of static word embedding,
building on the LSTM-LSTM-Bias method proposed by Zheng et al. [6].
Bidirectional Encoder Representations from Transformers (BERT) is a
pre-training model for language representation. BERT jointly conditions on the
context of each word in both the forward and backward directions. The BERT
model can be adapted by adding a single additional output layer for tasks
such as question answering and language inference; it does not require major
changes to the architecture. Devlin et al. demonstrated the significance of
bidirectional pre-training for language representations, eliminating the
requirement of multiple task-specific architectures. The BERT model is based
on fine-tuned representations that outperform many task-specific
architectures and reach state-of-the-art performance on a variety of task
levels, including the token and sentence levels. The pre-training and
fine-tuning steps in the BERT architecture help to capture the semantic meaning
of words effectively. Before solving the joint extraction task, the BERT model
is pre-trained on another corpus. BERT can be used for a wide range of
linguistic tasks and primarily adds a thin layer on top of the base model [7].
Figure 2 shows the categorization of various BERT (fine-tuning) based
applications.
Figure 2: BERT (Fine Tuning) based applications.
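As an illustration of the thin task-specific layer described above, the sketch below loads a pre-trained BERT encoder and attaches a sentence-level classification head for relation labels. It uses the Hugging Face `transformers` library; the label set, entity-marker convention, and input sentence are illustrative assumptions rather than the setup of any surveyed paper:

```python
# Hedged sketch: BERT with a classification head for relation labels.
# Requires: pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["no_relation", "treats", "causes"]  # assumed relation inventory
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# Marking the candidate entity pair in the input is a common convention (assumption here).
sentence = "[E1] Aspirin [/E1] is commonly used to treat [E2] headaches [/E2]."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])  # head is untrained, so this label is arbitrary
```

After fine-tuning on a labeled relation extraction corpus, the same forward pass yields the predicted relation for each marked entity pair.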
## 2 RELATED WORK
In this section, various models for Relation Extraction (RE) are explored.
Relation extraction is used to understand the relationships among the various
entities in an unlabeled text. There are various methods to perform relation
extraction, from a simple string extraction to automated models.
### 2.1 Models for Relation Extraction
Recently, many approaches such as document-level, pipelined, and joint models
have been proposed to solve relation extraction tasks.
* •
Pipelined Method: The pipeline method treats NER and relation classification
as distinct operations. Zexuan et al. established a new state of the art for
entity and relation extraction using a straightforward pipelined strategy,
obtaining a relative improvement over earlier joint models with a similar
pre-trained encoder [8].
* •
Joint model: Joint extraction models recognize entities and relations
simultaneously, extracting both within a single task. Feature-based structured
systems make up the majority of joint techniques. Zheng et al. suggested a
tagging scheme that converts the joint extraction of entities and relations
into a sequence tagging problem [9].
* •
Document-level Relation Extraction Models: Compared with sentence-level
relation extraction, document-level relation extraction is a complex process,
because a document may contain entity pairs with multiple relationships. The
Sentence Importance Estimation and Focusing (SIEF) framework was presented for
document-level relation extraction; across various disciplines, SIEF enhances
the performance of basic models [10]. Zeng et al. proposed an architecture
that separates intra- and inter-sentential reasoning for document-level
relation extraction [11].
### 2.2 Contextual Word Embeddings for Relation Extraction
Word embeddings are a method for finding similarities between words in a
corpus by modeling the co-occurrence of words in a text. Word embeddings became
well known in the field of automated text analysis when it was shown that they
could be used to find analogies. Table 1 illustrates various word embedding
techniques.
Table 1: Word Embedding Techniques S. No | Word Embeddings | Explanation | Feature
---|---|---|---
1 | TF-IDF | A statistical technique for determining a word’s relevance to the corpus of text. It doesn’t record word associations with semantic meaning. | Perform well on retrieving information and extracting keywords from documents.
2 | Word2Vec | CBOW and Skip-gram architectures based on neural networks are superior at capturing semantic information. | Suitable for smaller and larger datasets.
3 | GloVe | Global word-word co-occurrence-based matrix factorization. It resolves Word2Vec’s local context issues. | Better in tasks that involve word analogies and named-entity recognition. Word2Vec is commonly used in semantic analysis tasks.
4 | BERT | High-quality contextual information can be captured via a transformer-based attention method. | Translation services and a question-and-answer platform are used in the Google Search engine to interpret search keywords.
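To ground the first two rows of Table 1, the following is a small sketch contrasting a TF-IDF representation with a trained Word2Vec embedding using scikit-learn and Gensim; the toy corpus is an illustrative assumption:

```python
# Hedged sketch: TF-IDF vs. Word2Vec on a toy corpus.
# Requires: pip install scikit-learn gensim
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec

corpus = ["aspirin treats headaches", "ibuprofen treats fever", "aspirin causes nausea"]

# TF-IDF: sparse, frequency-based vectors with no notion of word similarity.
tfidf = TfidfVectorizer().fit_transform(corpus)
print(tfidf.shape)  # (number of documents, vocabulary size)

# Word2Vec: dense vectors learned from co-occurrence; similar words end up close together.
sentences = [doc.split() for doc in corpus]
w2v = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)
print(w2v.wv.most_similar("aspirin", topn=2))
```

On such a tiny corpus the Word2Vec similarities are not meaningful, but the sketch shows the interface difference: TF-IDF assigns one weight per (document, term) pair, while Word2Vec assigns one dense vector per word.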
Contextual embeddings represent each word based on its context, capture word
usage across a range of situations, and can encode cross-lingual knowledge.
Contextual embeddings, such as ELMo and BERT, perform significantly better
than generic word representations. ELMo, built on a bidirectional language
model, combines the representations from its intermediate layers according to
the task at hand. Because ELMo only concatenates the representations of the
forward and backward LSTMs, the interactions between the left and right
contexts are not taken into consideration [12]. BERT offers Masked Language
Modeling (MLM), which randomly masks some of the tokens in the input sequence,
together with Next Sentence Prediction (NSP); it employs a Transformer encoder
during pre-training so that every token can attend to context in both
directions. Relation extraction with distant supervision and Transformers,
suggested by Despina et al., predicts better embeddings by fine-tuning BERT
[13]. ELMo and BERT perform better than Word2Vec and offer ground-breaking
performance in a range of NLP applications. Given two input sentences, NSP
determines whether the second sentence follows the first, which facilitates
tasks that require sentence-pair analysis.
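A quick way to see the MLM objective in action is the `fill-mask` pipeline from the `transformers` library; the example sentence is an illustrative assumption:

```python
# Hedged sketch: querying BERT's masked-language-model head.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("Aspirin is used to [MASK] headaches."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

Each candidate is a token the pre-trained model considers likely in the masked position, which is exactly the contextual signal that downstream relation extraction models exploit.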
### 2.3 Datasets for Relation Extraction
Several datasets for relation extraction have been developed recently to
advance relation extraction systems. Two examples of RE datasets created
through human annotation with relation types are SemEval-2010 Task 8 and
ACE05. A crowdsourcing method was used to build the TACRED dataset to meet the
demand for a large-scale dataset. To enhance document-level RE research,
DocRED was developed. Ten thousand annotated examples and more than one
hundred relations are included in FewRel. The issues with few-shot relation
extraction have been addressed with the development of FewRel and FewRel 2.0.
HacRED consists of 65,225 relational facts identified from 9,231
documents [14-18].
## 3 ANALYSIS OF DEEP LEARNING MODELS
Deep learning uses artificial neural networks together with representation
learning. It can be supervised, semi-supervised, or unsupervised. The rapid
growth and use of Artificial Intelligence based systems have elevated concerns
regarding understandability [19]. Rahman et al. constructed an artificial
neural network model to effectively forecast solar radiation [20].
Representation learning helps to reduce the data dimension, which simplifies
identifying patterns and anomalies. A neural network instructs computers to
scrutinize data in a way loosely inspired by the human brain. The term "deep"
refers to the number of hidden layers. Deep neural networks can have on the
order of 150 hidden layers, compared to the two or three layers that
traditional neural networks normally have. The structure of a deep neural
network is depicted in Figure 3.
Figure 3: Structure of Deep Neural Network.
Deep learning models learn classifications from inputs of various kinds, such
as images, text, and sound. They can also attain high accuracy, occasionally
even surpassing human performance. Large labeled datasets and multi-layered
architectures are used to train the models to learn data characteristics
automatically. Deep learning has the ability to achieve high levels of
accuracy when trained on huge amounts of data. There are many complex problems
to solve in natural language, and for some specific natural language problems
deep learning achieves the best results. Table 2 illustrates some of the deep
learning techniques that are widely used in the task of RE. A survey of
existing RE models based on deep learning techniques and various datasets is
presented in Table 3.
Table 2: Deep Learning Techniques S. No | Models | Working Principle | Benefits | Issues
---|---|---|---|---
1 | CNN [21-23] | Convolutional Neural Networks have multiple layers to process and extract features. | Human supervision is not required for feature recognition. | Overfitting, exploding gradients, and class imbalance.
2 | Bi-GRU [24] | Model that combines the Gated Recurrent Unit (GRU) with a bidirectional Recurrent Neural Network. | Simpler than LSTM. | Has fewer gates than an LSTM (no separate output gate).
3 | LSTM [25] | Long Short-Term Memory learns and remembers long-term dependencies, retaining past knowledge over long spans. | Offers parameters such as learning rates and input and output biases. | Overfitting.
4 | CRF [26] | A discriminative model that predicts labels using contextual information. | Performs well on NLP tasks such as part-of-speech tagging and NER. | More accurate but difficult to train.
5 | BiLSTM [27] | A combination of two separate RNNs; the network accesses both forward and backward information. | Better predictions compared to Auto Regressive Integrated Moving Average (ARIMA). | Slower and requires more time.
6 | RNN [28] | RNN has connections that form directed cycles, allowing the current step to take previous outputs as inputs. | Remembers every piece of information through time. | Exploding gradient problem and long-term dependency of words.
7 | MLPs [29] | Made up of multiple layers of perceptrons with activation functions; input and output layers are fully connected through hidden layers. | Used to solve complex nonlinear problems. | Feature scaling and computational complexity.
8 | DBNs [30] | Made up of many layers of latent, stochastic variables; the latent variables, often called hidden units, typically take binary values, and adjacent layers are connected as in Boltzmann machines. | Powerful; learn complex patterns and process large amounts of data quickly. | Hardware requirements; expensive to train.
9 | RBM [31] | Consists of visible and hidden units, with every hidden unit linked to every visible unit. | Computationally efficient and faster to train than a general Boltzmann Machine. | Hard to evaluate or simulate.
Table 3: Comparison of Deep Learning Relation Extraction Models S. No | Authors and Year | Objective | Techniques | Issues | Dataset
---|---|---|---|---|---
1 | Chen Gao et al 2022 [32] | It extracts the semantic mutuality between entity and relation extraction. | HBT, WDec and CasREL | Overlapping entities in the sentence cannot be resolved by this technique. | New York Times (NYT), WebNLG
2 | O.A. Tarasova et al 2022 [33] | Method to extract clinical named entities from texts which combines the naive Bayes classifier with specially built filters. | Naïve Bayes classifier | The result of CNER using the naive Bayes method is slightly worse. | CHEMDNER
3 | T. Bai et al 2022 [34] | Segment attention method based on CNN to extract local semantic properties through word embeddings. | SVM, KNN, CNN, and SEGATT-CNN | This model applies only to supervised methods. | Herbal-Disease and Herbal-Chemistry (HD-HC)
4 | Qingbang W et al 2022 [35] | This model efficiently predicts the information and semantic context of the current text. | BERT-BiLSTM, BiLSTM-ATT | The BERT-BiLSTM network does not function well when dealing with partial entity overlap. | Food public opinion field data
5 | Hailin Wang et al 2022 [36] | Supervised and distant supervision methods for Relation Extraction. | DNN, RNN and PCNN | Error propagation in supervised methods. | SemEval 2010-task8, ACE series and NYT+Freebase
6 | Yang Yang et al 2022 [37] | Basics of IE and DL, mainly concentrating on DL technologies in the field of IE. | RNN, CNN and BiLSTM | DNN models cannot handle all the knowledge in huge database. | COVID-19 news
7 | Zhiyun Z et al 2022 [38] | Distant Supervised Relation Extraction (DSRE) model using residual network. | CNN-ATT, PCNN-ATT, and DSRE | Noise label reducing. | Freebase + NYT
8 | Guangyao Wang et al 2022 [39] | Weighted graph convolutional network (WGCN) model to extract the nontaxonomic relationships. | LSTM, CNN and BiLSTM | When the feature graph is used as an input to the GCN, the directed graph’s effect is not better. | Human- annotated RE data, NYT data
9 | Chantrapornchai et al 2021 [40] | BERT and spaCy models to extract specific information from entire texts based on machine learning. | BERT, spaCy | The performance of spaCy is poor. | Tourism Data
10 | W. Zhou et al 2021 [41] | The multi-label and multi-entity problems are solved using adaptive thresholding and localized context pooling. | BERT- ATLOP, BERT-E | Adaptive thresholding only works when the model is optimized. | DocRED, CDR
11 | Prashant S et al 2021 [42] | Attention Retrieval Model to improve the applicability of attention-based model for RE. | LSTM, RNN, and GRU | The ARM technique must test the model rather than categorize the text. | Atlas of Inflammation Resolution (AIR), BioGRID, and ChemProt.
12 | Liu Kang et al 2020 [43] | Neural relation extraction, with a specific focus on training neural relation extraction models. | BERT, LSTM, and BiLSTM | Unable to meet demand in practical applications. | ACE, SemEval 2010, TACRED
13 | Boran Hao et al 2020 [44] | A novel joint training technique is used to develop language model pre-training for clinical corpora. | Clinical BERT + BiLSTM, Clinical KB-ALBERT | The improvement for ALBERT is less significant. | MIMIC-III and UMLS Knowledge Base
14 | Diana Sousa et al 2020 [45] | BiOnt uses four different kinds of biomedical ontologies to perform relation extraction. | BO-LSTM, BioBERT | This approach does not allow for the integration of ontological knowledge. | DDI corpus, PGR corpus, BC5CDR corpus
15 | Rakesh Patra et al 2019 [46] | A model for automatic generation of named entity distractors; a combination of statistical and semantic similarity is used. | Frequency-based, Co-occurrence-based | Existing techniques focus on language learning and vocabulary testing; these metrics are not applicable for evaluating named entity distractors. | 200 cricket-related MCQ-key pairs
16 | Veera Ragavendra et al 2018 [47] | A rule-based method for relationship classes. | SVM, BiLSTM | Suitable only for a smaller number of samples. | I2b2 2010
17 | S. Zeng et al 2018 [48] | Separate Intra- and Inter-sentential Reasoning for Document-level Relation Extraction (SIRE) architecture. | BiLSTM, BERT, and SIRE-BERT | This model primarily enhances intra-sentential relations’ performance. | DocRED, CDR, and GDA
18 | Jing Qiu et al 2018 [49] | SGNRI model to extract non-taxonomic associations using an automated multi-phase correlation search system. | SGNRI (Word2Vec), SGNRI (LDA) | The performance of the Word2Vec-based model is poor. | Concept Pairs
19 | Henghui et al 2018 [50] | Developed a model for clinical feature extraction using a contextual word embedding approach. | BiLSTM-CRF, ELMo | Difficulties in creating a language model on a big corpus of domain-specific data. | I2B2 2010
20 | Linfeng Song et al 2018 [51] | Graph-state LSTM model for displaying discourse and relationship structures. | Bidirection al DAG LSTM, GLSTM | Word sense confusion | Biomedical domain
## 4 DISCUSSION
The comparison of existing relation extraction models with various techniques
shows that BERT-based relation extraction models provide significantly better
performance than other models such as CNN, RNN, and KNN. BERT reads text
input in both the left-to-right and right-to-left directions at once. Using
this bidirectional capability, BERT is pre-trained on two NLP tasks, namely
Masked Language Modeling and Next Sentence Prediction. It is observed that
the model can be used for various domains such as clinical, tourism, and
agriculture. Table 4 shows the performance evaluation of existing relation
extraction models based on deep learning techniques. Table 5 lists the
performance (F1 score) of BERT-, CNN-, and RNN-based models on the
SemEval 2010 dataset.
Table 4: Performance evaluation of existing relation extraction models Model | Dataset | F1 score | Reference
---|---|---|---
SIRE-BERT | DocRED | 62.05 | [9]
RoBERT-ATLOP | DocRED | 63.40 | [41]
MDL-J3E | COVID-19 News | 70.96 | [28]
KNN | CDR | 71.49 | [34]
BiDAG LSTM | Biomedical | 75.6 | [51]
BERT | Tourism data | 77.96 | [40]
SGNRI | Concept pairs | 81.4 | [49]
KB-BERT | I2b2 2010 | 84.4 | [44]
BiLSTM | SemEval 2010 | 84.7 | [6]
WCGN | NYT Data | 84.47 | [27]
BERT-BiLSTM | Food public opinion field data | 87.44 | [35]
ELMo+BiLSTM-CRF | I2b2/VA 2010 | 88.60 | [50]
BERT-BiLSTM-CRF | Clinical data | 96.73 | [52]
Table 5: Comparison of CNN, RNN, and BERT based RE models (SemEval 2010 dataset) Model | F1 Score | Reference
---|---|---
Att+CNN | 88.0 | [53]
CR-CNN | 84.1 | [54]
MVRNN | 82.4 | [55]
BRCNN | 85.4 | [56]
Att+BiLSTM | 84.0 | [57]
R-BERT | 89.2 | [58]
BERT-GCN | 90.2 | [59]
The F1 metric is employed to estimate the accuracy of a deep learning model.
From the literature survey, it has been identified that the BERT-BiLSTM-CRF
model achieves better results for extracting breast cancer concepts and their
attributes. Even though several BERT-based relation extraction models have
been developed for different fields, relation overlap and partial entity
overlap remain open problems. Relation extraction offers a wide range of
applications, including information retrieval, question answering, and
knowledge base construction. Creating models that can extract relationships in
multilingual and cross-lingual settings is another important area of focus.
Additionally, the combination of relation extraction with other NLP tasks such
as named entity recognition and event extraction is expected to lead to more
wide-ranging and sophisticated NLP systems. Contexts such as syntax and
pragmatics can be considered for improving relation prediction accuracy. BERT
variants such as RoBERTa, DistilBERT, and XLNet can be incorporated to enhance
contextual relation prediction.
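For reference, the F1 scores reported in Tables 4 and 5 are the harmonic mean of precision and recall, $F_1 = 2PR/(P+R)$; the snippet below computes it for a toy set of predictions (the labels are illustrative, not from any of the surveyed datasets):

```python
# Hedged sketch: computing the F1 score used throughout Tables 4 and 5.
# Requires: pip install scikit-learn
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = ["treats", "no_relation", "causes", "treats", "no_relation"]
y_pred = ["treats", "treats", "causes", "no_relation", "no_relation"]

# Macro averaging weights every relation type equally, a common choice in RE evaluation.
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall:   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1:       ", f1_score(y_true, y_pred, average="macro", zero_division=0))
```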
## 5 CONCLUSION
This paper provides information on contextual relation extraction and the
various techniques used for it. It surveys deep learning models that are used
in different tasks such as building classification models, developing
recommendation systems, and predicting learning behavior. It has been
identified that BERT-based models can provide better accuracy for identifying
relations based on their context across multiple sentences. Compared to other
models, BERT-BiLSTM-CRF achieved approximately 97% accuracy with limited
information. In future work, the overlapping-relations problem can be
addressed to improve prediction accuracy.
## References
* [1] X. Chen and R. Badlani, “Relation Extraction with Contextualized Relation Embedding (CRE),” in Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. [Online]. Available: https://developers.google.com/
* [2] P. Li, M. Wang, and J. Wang, “Named entity translation method based on machine-translation lexicon,” Neural Comput Appl, vol. 33, no. 9, pp. 3977–3985, May 2021, doi: 10.1007/s00521-020-05509-y.
* [3] C. Gao, X. Zhang, M. Han, and H. Liu, “A review on cyber security named entity recognition,” Frontiers of Information Technology and Electronic Engineering, vol. 22, no. 9. Zhejiang University, pp. 1153–1168, Sep. 01, 2021. doi: 10.1631/FITEE.2000286.
* [4] R. Patra and S. K. Saha, “A hybrid approach for automatic generation of named entity distractors for multiple choice questions,” Educ Inf Technology (Dordr), vol. 24, no. 2, pp. 973–993, Mar. 2019, doi: 10.1007/s10639- 018-9814-3.
* [5] B. Qiao, Z. Zou, Y. Huang, K. Fang, X. Zhu, and Y. Chen, “A joint model for entity and relation extraction based on BERT,” Neural Comput Appl, vol. 34, no. 5, pp. 3471–3481, Mar. 2022, doi: 10.1007/s00521- 021-05815-z.
* [6] H. Zhu, I. Ch. Paschalidis, and A. Tahmasebi, “Clinical Concept Extraction with Contextual Word Embedding,” Oct. 2018, [Online]. Available: http://arxiv.org/abs/1810.10566
* [7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Oct. 2018, [Online]. Available: http://arxiv.org/abs/1810.04805
* [8] Z. Zhong and D. Chen, “A Frustratingly Easy Approach for Entity and Relation Extraction,” Oct. 2020, [Online]. Available: http://arxiv.org/abs/2010.12812
* [9] S. Zheng, F. Wang, H. Bao, Y. Hao, P. Zhou, and B. Xu, “Joint extractionof entities and relations based on a novel tagging scheme,” in ACL2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 2017, vol.1, pp. 1227– 1236. doi: 10.18653/v1/P17-1113.
* [10] W. Xu, K. Chen, L. Mou, and T. Zhao, “Document-Level Relation Extraction with Sentences Importance Estimation and Focusing.” [Online]. Available: https://github.
* [11] S. Zeng, Y. Wu, and B. Chang, “SIRE: Separate Intra- and Inter- sentential Reasoning for Document- level Relation Extraction,” Jun. 2021, [Online]. Available: http://arxiv.org/abs/2106.01709
* [12] M. E. Peters et al., “Deep contextualized word representations,” Feb. 2018, [Online]. Available: http://arxiv.org/abs/1802.05365.
* [13] D. Christou and G. Tsoumakas, “Improving Distantly-Supervised Re- lation Extraction through BERT-Based Label and Instance Embed- dings,” IEEE Access, vol. 9, pp. 62574–62582, 2021, doi:10.1109/AC- CESS.2021.3073428.
* [14] I. Hendrickx et al., “SemEval-2010 Task 8: Multi-Way Classificationof Semantic Relations Between Pairs of Nominals,” Association for Computational Linguistics, 2010. [Online]. Available: http://docs.
* [15] Y. Yao et al., “DocRED: A Large-Scale Document-Level Relation Extraction Dataset.” [Online]. Available: https://spacy.io
* [16] X. Han et al., “FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation.” [Online]. Avail-able: http://zhuhao.me/fewrel
* [17] T. Gao et al,“FewRel 2.0: Towards More Challenging Few-Shot Relation Classification.” [Online]. Available: https://www.ncbi.nlm.nih.gov/pubmed/
* [18] Q. Cheng et al., “HacRED: A Large-Scale Relation Extraction Dataset Toward Hard Cases in Practical Applications.” [Online]. Available: http://lic2019.ccf.org.cn/kg
* [19] Haque, A. B., Islam, A. N., and Mikalef, P. (2023). Explainable Artificial Intelligence (XAI)from a user perspective: A synthesis of prior literature and problematizing avenues for future research. Technological Forecasting and Social Change, 186, 122120.
* [20] Rahman, S., Rahman, S., and Bahalul Haque, A. K. M. (2022). Automated detection of cardiac arrhythmia based on a hybrid CNN-LSTM network. In Emergent Converging Technologies and Biomedical Systems: Select Proceedings of ETBS 2021 (pp. 395-414). Singapore: Springer Singapore.
* [21] G. Kim, C. Lee, J. Jo, and H. Lim, “Automatic extraction of named entities of cyber threats using a deep Bi-LSTM-CRF network,” Inter- national Journal of Machine Learning and Cybernetics, vol. 11, no. 10, pp. 2341–2355, Oct. 2020, doi: 10.1007/s13042-020-01122-6.
* [22] Navid, S. M. A., Priya, S. H., Khandakar, N. H., Ferdous, Z., and Haque, A. B. (2019). Signature verification using convolutional neural network. In 2019 IEEE International Conference on Robotics, Automation, Artificial-intelligence and Internet-of-Things (RAAICON) (pp. 35-39). IEEE.
* [23] Siam, S. C., Faisal, A., Mahrab, N., Haque, A. B., and Suvon, M. N. I. (2021, February). Automated student review system with computer vision and convolutional neural network. In 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS) (pp. 493-497). IEEE.
* [24] P. Srivastava, S. Bej, K. Schultz, K. Yordanova, and O. Wolkenhauer, “Attention Retrieval Model for Entity Relation Extraction from Biolog- ical Literature,” IEEE Access, vol. 10, pp. 22429–22440, 2022, doi: 10.1109/ACCESS.2022.3154820.
* [25] S. Banerjee and K. Tsioutsiouliklis, “Relation Extraction Using Multi- Encoder LSTM Network on a Distant Supervised Dataset,” in Proceedings - 12th IEEE International Conference on Semantic Computing, ICSC 2018, Apr. 2018, vol. 2018-January, pp. 235–238. doi: 10.1109/ICSC.2018.00040.
* [26] H. Zhu, I. Ch. Paschalidis, and A. Tahmasebi, “Clinical Concept Extraction with Contextual Word Embedding,” Oct. 2018, [Online]. Available: http://arxiv.org/abs/1810.10566
* [27] G. Wang, S. Liu, and F. Wei, “Weighted graph convolution over dependency trees for nontaxonomic relation extraction on public opinioninformation,” Applied Intelligence, vol. 52, no. 3, pp. 3403– 3417, Feb. 2022, doi: 10.1007/s10489-021-02596-9.
* [28] Y. Yang, Z. Wu, Y. Yang, S. Lian, F. Guo, and Z. Wang, “A Survey of Information Extraction Based on Deep Learning,” Applied Sciences (Switzerland), vol. 12, no. 19. MDPI, Oct. 01, 2022. doi:10.3390/app12199691.
* [29] S. Mitra, S. Saha, and M. Hasanuzzaman, “A multi-view deep neural network model for chemical- disease relation extraction from imbalanceddatasets,” IEEE J Biomed Health Inform, vol. 24, no. 11, pp. 3315–3325,Nov. 2020, doi: 10.1109/JBHI.2020.2983365.
* [30] Y. Chen, W. Li, Y. Liu, D. Zheng, and T. Zhao, “Exploring DeepBelief Network for Chinese Relation Extraction.” [Online]. Available: http://www.nist.gov/speech/tests/ace/.
* [31] T. M. Alam and M. J. Awan, “Domain Analysis of Information Extraction Techniques,” INTERNATIONAL JOURNAL OF MULTIDISCI-PLINARY SCIENCES AND ENGINEERING, vol. 9, no. 6, 2018, [On-line]. Available: https://www.researchgate.net/publication/326463350.
* [32] C. Gao, X. Zhang, H. Liu, W. Yun, and J. Jiang, “A joint extraction model of entities and relations based on relation decomposition,” Inter- national Journal of Machine Learning and Cybernetics, vol. 13, no. 7,pp. 1833–1845, Jul. 2022, doi: 10.1007/s13042-021-01491-6.
* [33] O. A. Tarasova, A. V. Rudik, N. Y. Biziukova, D. A. Filimonov, and V. V. Poroikov, “Chemical named entity recognition in the texts of scientific publications using the naïve Bayes classifier approach,” J Cheminform, vol. 14, no. 1, Dec. 2022, doi: 10.1186/s13321-022-00633-4.
* [34] T. Bai, H. Guan, S. Wang, Y. Wang, and L. Huang, “Traditional Chinese medicine entity relation extraction based on CNN with segmentattention,” Neural Comput Appl, vol. 34, no. 4, pp. 2739– 2748, Feb. 2022, doi: 10.1007/s00521-021-05897-9.
* [35] Q. Wang, Q. Zhang, M. Zuo, S. He, and B. Zhang, “An Entity Relation Extraction Model with Enhanced Position Attention in Food Domain,” Neural Process Lett, vol. 54, no. 2, pp. 1449–1464, Apr. 2022, doi: 10.1007/s11063-021-10690-9.
* [36] H. Wang, K. Qin, R. Y. Zakari, G. Lu, and J. Yin, “Deep neural network-based relation extraction: an overview,” Neural Comput Appl, vol. 34, no. 6, pp. 4781–4801, Mar. 2022, doi: 10.1007/s00521- 021-06667-3.
* [37] Y. Yang, Z. Wu, Y. Yang, S. Lian, F. Guo, and Z. Wang, “A Survey of Information Extraction Based on Deep Learning,” Applied Sciences (Switzerland), vol. 12, no. 19. MDPI, Oct. 01, 2022. doi: 10.3390/app12199691.
* [38] Z. Zheng, Y. Liu, D. Li, and X. Zhang, “Distant supervised relation extraction based on residual attention,” Frontiers of Computer Science, vol. 16, no. 6. Higher Education Press Limited Company, Dec. 01, 2022. doi: 10.1007/s11704-021-0474-x
* [39] G. Wang, S. Liu, and F. Wei, “Weighted graph convolution over dependency trees for nontaxonomic relation extraction on public opinion information,” Applied Intelligence, vol. 52, no. 3, pp. 3403– 3417, Feb. 2022, doi: 10.1007/s10489-021-02596-9.
* [40] C. Chantrapornchai and A. Tunsakul, “Information extraction on tourism domain using SpaCy and BERT,” ECTI Transactions on Computer and Information Technology, vol. 15, no. 1, pp. 108–122, Apr. 2021, doi: 10.37936/ecti-cit.2021151.228621.
* [41] W. Zhou, K. Huang, T. Ma, and J. Huang, “Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling,” 2021. [Online]. Available: www.aaai.org
* [42] P. Srivastava, S. Bej, K. Schultz, K. Yordanova, and O. Wolkenhauer, “Attention Retrieval Model for Entity Relation Extraction from Biolog- ical Literature,” IEEE Access, vol. 10, pp. 22429–22440, 2021, doi: 10.1109/ACCESS.2022.3154820.
* [43] K. Liu, “A survey on neural relation extraction,” Science China Technological Sciences, vol. 63, no. 10. Springer Verlag, pp. 1971–1989, Oct. 01, 2020. doi: 10.1007/s11431-020-1673-6.
* [44] B. Hao, H. Zhu, and I. Ch Paschalidis, “Enhancing Clinical BERT Embedding using a Biomedical Knowledge Base,” Online, 2020. [Online]. Available: https://github.com/noc-lab/clinical-kb-bert
* [45] D.Sousa and F. M. Couto, “BiOnt: Deep learning using multiple biomed-ical ontologies for relation extraction,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, vol. 12036 LNCS, pp. 367–374. doi: 10.1007/978-3-030-45442-546.
* [46] R. Patra and S. K. Saha, “A hybrid approach for automatic generation of named entity distractors for multiple choice questions,” Educ Inf Technol (Dordr), vol. 24, no. 2, pp. 973–993, Mar. 2019, doi: 10.1007/s10639-018-9814-3.
* [47] Chikka, Veera Raghavendra, and Kamalakar Karlapalem. ”A hybrid deep learning approach for medical relation extraction.” arXiv preprint arXiv:1806.11189 (2018).
* [48] S. Zeng, Y. Wu, and B. Chang, “SIRE: Separate Intra- and Inter- sentential Reasoning for Document- level Relation Extraction,” Jun. 2021, [Online]. Available: http://arxiv.org/abs/2106.01709
* [49] J. Qiu, Y. Chai, Y. Liu, Z. Gu, S. Li, and Z. Tian, “Automatic Non-Taxonomic Relation Extraction from Big Data in Smart City,” IEEE Access, vol. 6, pp. 74854–74864, 2018, doi: 10.1109/AC- CESS.2018.2881422.
* [50] H. Zhu, I. Ch. Paschalidis, and A. Tahmasebi, “Clinical Concept Extraction with Contextual Word Embedding,” Oct. 2018, [Online]. Available: http://arxiv.org/abs/1810.10566
* [51] L. Song, Y. Zhang, Z. Wang, and D. Gildea, “N-ary Relation Extraction using Graph State LSTM.” [Online]. Available: https://github.com/
* [52] Zhang X, Zhang Y, Zhang Q, Ren Y, Qiu T, Ma J, Sun Q, “ Extracting comprehensive clinical information for breast cancer using deep learning methods.” Int J Med Inform. 2019 Dec;132:103985. doi: 10.1016/j.ijmedinf.2019.103985. Epub 2019 Oct 2. PMID: 31627032.
* [53] L. Wang, Z. Cao, G. de Melo, and Z. Liu, “Relation Classification via Multi-Level Attention CNNs.”
* [54] S. Wu and Y. He, “Enriching Pre-trained Language Model with Entity Information for Relation Classification,” May 2019, [Online]. Available: http://arxiv.org/abs/1905.08284
* [55] D. Zeng, K. Liu, S. Lai, G. Zhou, and J. Zhao, “Relation Classification via Convolutional Deep Neural Network.” [Online]. Available: http://en.wikipedia.org/wiki/Bag-of-words
* [56] R. Cai, X. Zhang, and H. Wang, “Bidirectional Recurrent Convolutional Neural Network for Relation Classification.”
* [57] P. Zhou et al., “Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification.”
* [58] S. Wu and Y. He, “Enriching Pre-trained Language Model with Entity Information for Relation Classification,” May 2019, [Online]. Available: http://arxiv.org/abs/1905.08284.
* [59] Y. Zhao, H. Wan, J. Gao, and Y. Lin, “Improving Relation Classification by Entity Pair Graph,” 2019.
bucket brigade, errors can get stuck. That is, if an error occurs on some node
deep in the tree, this does not necessarily affect any other nodes.
The only way for errors to affect the final result is if they propagate to the
root node somehow. At each node, the routing circuit only swaps from one of
its two child nodes, so the error will only propagate upward if it is on the
correct side. As there is only one path which will get routed all the way to
the top of the tree – the path representing the actual address in the query –
an error propagates to the root only if it occurs on this path. But only
$O(\log N)$ qubits are on this path.
Errors could propagate in other ways; for example, if the control qubit has an
error _and_ the routing qubits have errors, then this will erroneously create
a path that propagates further up. But for the error to keep propagating
upward, a very precise sequence of errors must occur to create a path for it,
and this is unlikely for even modestly low probability errors. Overall, if each
qubit undergoes an error channel with probability $p$, the error of the final
readout is only $O(p\cdot\log^{2}(N))$ [58].
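As a rough numerical illustration (our own back-of-the-envelope sketch, not a calculation from [58]), the snippet below compares this $O(p\log^{2}N)$ scaling against the naive $O(pN)$ scaling one would get if every component’s error reached the output; all constants are set to 1:

```python
# Hedged sketch: bucket-brigade vs. naive error scaling (constants set to 1 for illustration).
import math

p = 1e-4  # assumed per-component error probability
for n_bits in (10, 20, 30, 40):        # address size; the memory holds N = 2**n_bits cells
    N = 2 ** n_bits
    bucket_brigade = p * (math.log2(N) ** 2)   # ~ p * log^2(N)
    naive = p * N                              # ~ p * N
    print(f"n={n_bits:2d}  bucket-brigade ~{bucket_brigade:.2e}  naive ~{min(naive, 1.0):.2e}")
```

The point of the comparison is that the bucket-brigade query error stays small even for very large memories, whereas a scheme in which every component’s error reaches the output becomes useless once $N$ approaches $1/p$.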
The bucket-brigade requires all nodes to be in the “wait” state at the start
of an access. However, errors in a previous access that change this state
could persist to later accesses, depending on the device’s Hamiltonian. If the
probability of a persistent error is $p$ for each node in each access, then
after $Q$ accesses the proportion of error-ridden nodes grows proportional to
$Q^{2}p$.
Generally, the per-component error rate $p$ must still be exponentially small
for applications like database QAA, but for other applications, like quantum
linear algebra (see Section IV.4), higher error rates could be manageable.
Moreover, we propose that a bucket-brigade QRAM system could be partially
passive. Following the observation that errors in the root nodes of the tree are more
destructive, we can run full error-correction on the first $k$ layers of the
bucket-brigade tree. If $k$ is selected to be a constant, or at least $o(\log
N)$, then this circumvents the issues with active QRAM that we have
extensively highlighted. The remaining $\log(N)-k$ layers would be a
collection of smaller, passive QRAMs joined to the error-corrected root nodes.
This encounters the same problem of interfacing noisy components with quantum
data encoded into a logical state that we discuss in Section VII. This
proposal requires $2^{k}$ error-corrected nodes, and we expect that the
overall probability of error would scale something like $O(p(\log(N)-k)^{2})$.
This wouldn’t change the asymptotic error rate of bucket-brigade, but with the
same physical error rate, this method could allow larger memories to become
practical than a fully passive bucket-brigade architecture would allow.
Example. To show why the small bucket-brigade error rates are somewhat unique,
we describe here an alternative readout circuit that _does_ require $O(1/N)$
gate error, as shown in Figure 9. To explain this circuit, we add an extra
layer $L$ of qubits at the leaves of the routing tree (shown as boxes in
Figure 9). We then add a single extra qubit prepared in the
$\left|1\right\rangle$ state, and propagate it through the tree by applying
the routing circuits at each node. This routes the $\left|1\right\rangle$ to
the location indexed by the address regiter. From there, for every $i$ such
that $T[i]=1$, we apply a CNOT from the leaf at the $i$th address to a qubit
in $L$. This means this final layer of qubits is all zeros if $T[i]=0$, and
has precisely one qubit in the $\left|1\right\rangle$ state otherwise. Thus,
we can read out the result by computing the parity of this final layer.
Parity is an appealing operation in the surface code, since it can be done by
conjugating a multi-target CNOT (itself depth-1 in a surface code) by Hadamard
gates.
The drawback of this method is that if an error occurs anywhere on this final
layer of ancilla qubits, it will propagate immediately to the output. Thus,
the qubits and gates in $L$ would need to have error rates of $O(1/N)$.
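The fragility is easy to see classically: the readout is just the parity (XOR) of the $N$ ancilla bits, so flipping any single bit flips the answer. The short simulation below is our own illustration, not taken from the source:

```python
# Hedged sketch: classical simulation of the parity readout in Figure 9.
# A single bit-flip anywhere in the final layer L flips the output, so every one of the
# N components must have error rate O(1/N) for the readout to be reliable.
from functools import reduce
from operator import xor

T = [0, 1, 1, 0, 0, 1, 0, 0]   # example memory contents
address = 2                     # the routed address, so the correct answer is T[2]

# Layer L holds a 1 only at the routed leaf, and only if T[address] = 1.
L = [1 if (i == address and T[i] == 1) else 0 for i in range(len(T))]
print("correct readout:", reduce(xor, L))                # equals T[address]

L[5] ^= 1                                                 # a single error on an unrelated leaf
print("readout after one leaf error:", reduce(xor, L))    # the answer is now flipped
```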
(Figure 9 diagram: panel (a) shows the memory contents $T=\mathsf{01100100}$ with CNOTs from the leaves holding $\mathsf{1}$ onto the ancilla layer; panel (b) shows the parity circuit over that layer producing the output.)
Figure 9: Alternative, suboptimal bucket-brigade readout forcing the readout
gates to have error rates of $O(1/N)$. The blue nodes are those with
$\left|1\right\rangle$ during the memory access for a specific address (in
this case, $i=2$). Notice that the parity in the second step only needs to use
leaves for $i$ with $T[i]=1$, but we include all of them for clarity.
## X Other Proposals
Here we summarize some other proposals which do not follow the bucket-brigade
approach, and detail the ways in which they fail to provide passive gate QRAM,
as they are currently described.
(Figure 10 diagram: table $T$, trapped atoms, lenses, polarizing beam splitters, mirrors, and lasers.)
Figure 10: Schematic of quantum optical fanout QRAM, almost exactly as shown in [48].
### X.1 Time-bins
Many applications and methods to store quantum states with photon echoes exist
(e.g. [102]), but [78] propose an additional addressing component to turn this
storage into QRAM. To summarize their method, there is a memory cavity with
$M\gg N$ atoms and a control cavity. To store data, the device would send
quantum states encoded as photons into the memory cavity, where they would
excite the memory atoms. The memory atoms can replay this input via a two-
pulse echo technique; sending a second pulse will prompt the atoms to re-emit
the photons that were sent in. Importantly, if photons are sent in one-at-a-
time, they will be re-emitted in a time-reversed order.
Thus, to read the memory, the device sends a pulse which prompts the memory
cavity to re-emit its stored state. With just the memory cavity, this would
re-emit all input states at classically determined times, giving us CRAQM but
not QRAQM. For quantum addressing they use a second _control_ cavity. The
control cavity contains an atom such that, when this atom is in its ground
state, the memory cavity cannot emit any photons. However, when the control
cavity’s atom is excited, it allows the transfer of photons out of the memory
cavity.
An address state $\sum_{i}\alpha_{i}\left|i\right\rangle$ must then be
translated so that each address $\left|i\right\rangle$ is entangled with a
photonic state $\left|\psi_{i}(t)\right\rangle$ which reaches the control
cavity at precisely the right time to allow the memory cavity’s photons to
pass through. In this way, other addresses, which will not be entangled with a
photon pulse at that time, will not allow any emission from the memory cavity.
The fundamental nature of the time-bin addressing means that this requires
$\Omega(N)$ time for each memory access. The purported feature of this method
is the minimal hardware requirements (though it still requires $\Omega(N)$
atoms in the memory cavity), but unary circuit QRAM already provides a near-
optimal circuit on the $\Omega(N)$ side of the time-hardware tradeoff of QRAM
circuits.
In [84], they use a second type of pulse to decohere a memory atom so that it
will only re-emit once an identical pulse is sent to it. Using different
frequencies of pulses to store different memory bits, this could reduce the
total time requirement, at the expense of needing higher precision in pulse
frequencies.
There is still an issue of how the time-bin photonic address state is created.
As an example of how this could be created, consider the unary circuit QRAM.
It classically iterates through addresses $j$, and for each address $j$, flips
the state of a control qubit if the address in superposition equals $j$.
Suppose that flipping the control qubit puts it into an excited state which
can be stimulated to emit a photon. If we do this sequentially, then at time
$t_{j}$, only the address state $\left|j\right\rangle$ will cause a photon to
be emitted, thus the presence of photons at that time is in superposition and
entangled with the state of the address. In this way, applying this
stimulation instead of a CNOT turns the unary circuit QRAM into a circuit that
translates an address superposition into a superposition of time-bin photons.
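The classical skeleton of this unary sweep is easy to write down; the sketch below is our own illustration of the address-to-time-bin mapping for a single (classical) address, where for a quantum address the branches would instead be in superposition:

```python
# Hedged sketch: classical picture of the unary-iteration address-to-time-bin translation.
def time_bin_schedule(address, n_addresses, bin_duration=1.0):
    """Return (time, photon_emitted) for each time bin of the classical sweep."""
    schedule = []
    for j in range(n_addresses):          # Omega(N) sweep over all addresses
        emit = (j == address)             # the control flips only when the address matches j
        schedule.append((j * bin_duration, emit))
    return schedule

for t, emit in time_bin_schedule(address=5, n_addresses=8):
    print(f"t={t:.1f}  photon emitted: {emit}")
```

Exactly one time bin carries a photon, and locating it requires stepping through all $N$ bins, which is why the construction inherits the $\Omega(N)$ cost of the unary circuit QRAM.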
Unfortunately, this has solved the addressing problem by using another QRAM
circuit to create the address! While superpositions of photons at different
points in time _may_ be a native quantum state in some architecture, for other
architectures, creating such a state is essentially a quantum addressing
problem, which is the difficult problem of QRAM anyway. The decohering pulses
of [84] face a similar issue, where they must be addressed and sent in a
coherent superposition to create a QRAM device.
As a final note, many aspects of the scheme in [78] require $\Omega(N)$ active
interventions. For example, with the photon echo technique, stimulating the
memory cavity to emit its states requires a phase-flipping pulse on all $M$
atoms for each time bin, meaning $\Omega(N)$ pulses that each require
$\Omega(N)$ energy. As in other proposals with photon-atom interactions, the
control cavity requires a stimulating pulse for each interaction (thus,
another $\Omega(N)$ stimulating pulses).
Figure 11: Phase gate fanout QRACM, from [48]. The address qubits route the
address modes (red), which excites the phase-control qubits (blue transmons).
The excited phase-control qubits are highlighted yellow. This will cause
exactly one cavity’s transfer mode to resonate (dark green), exciting the 2
memory qubits (bottom, blue) for that address. The memory qubits change the
phase of the readout modes (yellow) which can be detected. The arrangement of
the two types of memory qubits (shown as white and grey) represents the data:
in this case, the table is $\mathsf{01010001}$.
### X.2 Quantum Optical Fanout
[48] describe an architecture using only $\log N$ trapped atoms, and instead
encode the QRAM access in $N$ spatial modes of light. The basic principle is
this: each bit of the address becomes one trapped atom, and then each beam
interacts with the atom, and if the atom is in $\left|1\right\rangle$, the
photon is polarized differently than if the atom is in $\left|0\right\rangle$.
Then the photon beam is sent through a half-wave plate which spatially
separates different polarizations, thus doubling the number of photonic modes
and ensuring that an input beam is directed according to the state of the
atom.
Figure 10 duplicates Figure 4 from [48], with a crucial difference: we show
the schematic for an 8-bit address, rather than only 3. This immediately
highlights the problem: the number of spatial modes grows exponentially with
the number of address bits, but they must be focused back onto a single point.
If the photons are not aimed precisely, then after interacting with the atom
they will drift into another spatial mode. This causes an error. The precision
required for this aiming is $O(1/N)$ for $N$ bits of memory. This seems to be
the only physical mechanism for errors to propagate from one sequence of
spatial modes to another, suggesting that the overall error scaling has
similar robustness to bucket-brigade QRAM, but the aiming argument above means
that each component requires a precision scaling as $O(1/N)$ or it will have
an error. Intuitively, it doesn’t matter whether errors propagate across
“paths” of access, if all paths have an error.
Trying to guide the photons with a waveguide does not fix the problem,
regardless of the miniaturization we achieve. Taking the perspective of the
atom, the sources and receptors of the photons must occupy non-overlapping
sections of all radial paths out of the atom. If either source or receptor has
a smaller cross-sectional area, then we can move it closer, but it occupies
the same “arc-area” from the atom’s perspective. This means the precision of
the beam must be the same – $O(1/N)$ – no matter the absolute area of the
photonic components.
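A rough way to see the scaling (the numbers and the small-angle approximation are ours): dividing the sphere around the atom among $N$ non-overlapping modes leaves each beam an $O(1/N)$ share of the solid angle, whatever the absolute size of the optics.

```python
import math

# Rough bookkeeping (ours) for the aiming argument above: N spatial modes must
# occupy disjoint "arc-areas" on a sphere around the atom, so each beam gets a
# 1/N share of the total solid angle no matter how far away the optics are.
N = 2 ** 20
solid_angle_share = 4 * math.pi / N                  # steradians per mode, O(1/N)
half_angle = math.sqrt(solid_angle_share / math.pi)  # cone half-angle, small-angle approx.
print(f"solid angle per mode ~ {solid_angle_share:.1e} sr, "
      f"cone half-angle ~ {half_angle:.1e} rad")
```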
Thus, in the exact formulation given, errors in this QRAM must scale
infeasibly. We can wonder whether a different technology, inspired by this
approach, could succeed. A core function of this approach is that the $i$th
address bit interacts with $2^{i}$ spatial modes in one “operation”. This is
the crux: finding a mechanism to enact such a large number of interactions
passively, with error scaling much less than $2^{i}$, seems difficult if not
impossible.
### X.3 Phase Gate Fanout
[48] describe another quantum fanout system, shown in Figure 11. This system
translates each address qubit into a photon in one of two spatial modes (based
on $\left|0\right\rangle$ or $\left|1\right\rangle$ in the address qubit,
shown as red horizontal beams in Figure 11). Each of these $2\lg N$ spatial
address modes couples to $N$ phase-control qubits (the upper blue transmons in
Figure 11), so that the phase-control qubits change state when there is a
photon in this spatial address mode.
Each phase-control qubit also couples to another optical mode, which we’ll
call a transfer mode (the vertical green beams in Figure 11), such that there
are $N$ transfer modes, each coupled to $\lg N$ phase-control qubits. The
coupling is such that if the phase-control qubit is in its
$\left|1\right\rangle$ state, it adds a phase to the transfer mode. This two-
mode coupling is similar to the recently experimentally-realized photon
transistor in [107].
The net effect of all this is that if the address is $\left|j\right\rangle$,
then the $i$th transfer mode gains a phase of $e^{ij\varphi_{0}}$ for some
constant $\varphi_{0}$.
For readout, we have another collection of $N$ memory qubits (the bottom two
rows of transmons in Figure 11), each coupled to a different transfer mode, so
that the $j$th memory qubit interacts with the transfer mode only when the
transfer mode has a phase of $e^{ij\varphi_{0}}$. Thus, the memory qubit
changes state depending on whether the transfer mode is in resonance. For the
final readout, we use another optical readout mode (the yellow beams in Figure
11), which interacts with all memory qubits, and picks up a phase for each one
that is active. This means we will actually need $2N$ memory qubits and 2
readout modes, so that _which_ readout mode gains a phase will encode the
binary result of the memory lookup.
There are a number of practical issues with this scheme.
The foremost issue is the narrowness of the resonance of the memory qubits.
[48] note that there will need to be $N$ different resonant phases, and
analyze the width of a Fabry-Perot cavity’s resonance and conclude that if the
transmissivity of the beamsplitters in the cavity is $O(1/N)$, the width of
resonant phases is also $O(1/N)$. However, this does not address the need to
fabricate each cavity, where the resonant phase must be constructed with
$O(1/N)$ precision as well. For a Fabry-Perot cavity, this would mean $O(1/N)$
precision in the physical distance between the mirrors.
Second, each address mode must interact with $N$ qubits. Modelled with a
Jaynes-Cummings Hamiltonian, every excited qubit means a reduced photon number
in the address mode. If not all qubits are excited, we will end up with some
superposition where only a small portion of the total amplitude corresponds to
states where the _right_ qubits are excited to properly perform the memory
access. Thus, we need all qubits to be excited, meaning the photon number of
each address mode must be $\Omega(N)$.
In addition to forcing us to use $\Omega(N\log N)$ energy per QRAM access, we
must also provide this energy to the address modes in such a way that, for
each address qubit, we do not learn _which_ of the two address modes took this
energy. If we learned this, it effectively measures the address qubit. The
quantum router of [27] might suffice.
This QRAM may not even be passive, in fact. To excite the memory qubits, we do
not want to activate a new pulse – which requires $\Omega(N)$ interventions
and $\Omega(N)$ energy – but rather we want the memory qubits to reside in an
optical cavity that maintains some electric field. Consider a back-of-the-
envelope analysis of the losses of the optical cavities. If each cavity has
quality factor $Q$ and resonant frequency $f$, the fraction of energy lost per
unit time is proportional to $Nf/Q$, since there are $N$ cavities. Modern
“ultra-high” quality factors are on the order of $10^{9}$ [73], about the same
scale as the frequency (in hertz) of microwaves. Thus, to maintain the QRAM
for more than a few seconds requires either enormous optical cavities (to
decrease the resonant frequency) or spectacular improvements in optical cavity
quality factors. In either case, if these improvements do not keep pace with
the increase in the number of memory bits, the _total_ energy loss quickly
becomes enormous.
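Putting illustrative numbers on this estimate (our own choices of $N$, $f$, and $Q$, with $f$ and $Q$ at the scales quoted above):

```python
import math

# Illustrative loss estimate (ours) for N cavities with quality factor Q and
# resonant frequency f, using a per-cavity fractional loss rate ~ 2*pi*f/Q.
N = 2 ** 30        # one cavity per memory bit
f = 1e9            # ~1 GHz microwave resonance
Q = 1e9            # "ultra-high" quality factor [73]
per_cavity = 2 * math.pi * f / Q   # fraction of a cavity's energy lost per second
total = N * per_cavity             # total loss, in units of one cavity's stored energy
print(f"each cavity loses ~{per_cavity:.1f}x its stored energy per second;")
print(f"all {N} cavities together dissipate ~{total:.1e} cavity-energies per second")
```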
Finally, each readout mode interacts with $N$ qubits. If our error model
permits some probability $\epsilon$ of adding the wrong phase from each memory
qubit (e.g., the probability from measuring after interacting with that memory
qubit), then we need $\epsilon$ to be at most $O(1/N)$, or the memory readout
will fail.
Unlike [48], the arrangement of phase-control qubits in Figure 11 means that
for each address, only one cavity has all $n$ phase-control qubits excited.
Potentially one could engineer each phase-control qubit to contribute $1/n$ of
the necessary phase to bring the transfer mode into resonance. If so, the
precision of the resonator might only need to be $O(1/\log N)$. However, given
the readout method, any leakage from the resonators will contribute directly
to readout error (if leakage excites the incorrect memory qubit, the readout
detects this). Hence, the transfer modes may still need $O(1/N)$ precision.
### X.4 Derangement Codes
[54] proposes error correction based on code switching, which we summarize and
slightly tweak here.
We start by imagining a no-op state $\left|\text{nop}\right\rangle$, such that
not only is $\left|\text{nop}\right\rangle$ invariant under $U_{QRAM}$, the
internal quantum state of the QRAM nodes is also invariant when we input
$\left|\text{nop}\right\rangle$. As an example for bucket-brigade QRAM, we
could use an ancilla flag qubit which controls the production of the input
photons from the address qubits. If this flag is set to $0$, no input photons
are generated and nothing enters the QRAM device.
Then we consider the circuit shown in Figure 12. This has $m$ devices to
access the same table $T$. It generates a superposition of $m$ different
states, indexing which QRAM device to shuffle our real query into (the query
is
$\left|\phi\right\rangle=\sum_{i}\alpha_{i}\left|i\right\rangle\left|0\right\rangle$),
while sending $\left|\text{nop}\right\rangle$ states to all the other devices.
We then apply all the QRACM devices, un-shuffle the state, and measure the
shuffle controls in the Hadamard basis.
Figure 12: QRACM with heralded error mitigation, adapted from [54]. Shaded
regions and thicker wires indicate error corrected qubits and operations, and
red rectangles on the border indicate decoding and re-encoding. The controlled
swaps indicate that, based on the $\lg m$ control qubits, the last register is
swapped to one of the intermediate registers. The states shown at the top are
the states if no errors occur, in which case the measurement result is all
$0$.
Measuring all zeros in the shuffle controls means the states that went out
came back to the registers they started in, enforcing a “sameness” among the
queries between different QRAM devices. If the errors are independent, then
this suppresses errors, since it’s unlikely for the same error to occur in all
QRAM devices at the same time.
[54] proposes compressing this to use only one QRAM device by sequentially
controlling swaps, so that only one of $m$ applications of the QRAM applies to
the real query state. Since this error correction method only works for
uncorrelated errors, it fails for bucket-brigade QRAM. Specifically, if an
error occurs at some point in the routing nodes, that error will persist
between queries (unless the routing nodes are reset, which creates an
$\Omega(N)$ cost as discussed previously). Thus, on average half of all errors
will be identical between queries, rendering the error suppression
ineffective.
Expanding their idea to a wide, parallel repetition increases hardware
overhead by a factor of $m$, but we imagine $m\ll N$, otherwise the error
suppression scheme – which requires $\Omega(m\log N)$ gates for the shuffles
alone – will start to approach the cost of a circuit QRAM access.
The overall effect of this is a “heralded” QRAM access, where if we measure
all zeros in the control qubits, we know that the infidelity has been
suppressed to $1/m$ of the physical error rate (see [54, Chapter 4] for
proof), _but_ the probability $p$ of measuring all-zero is only approximately
equal to the original physical fidelity: each access fails to herald with
roughly the physical error probability. This means that for an algorithm to have
a reasonable probability of success, it must be limited to $O(\frac{1}{1-p})$
QRAM queries. Recalling the applications from Section III, this forbids
database QAA.
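For a sense of scale (illustrative numbers, ours): with physical infidelity $10^{-3}$ per access and $m=100$, the heralded infidelity is about $10^{-5}$, but each access fails to herald with probability about $10^{-3}$, which caps the useful number of queries.

```python
# Illustrative budget (ours) for heralded QRAM accesses.
eps, m = 1e-3, 100                 # physical infidelity per access, repetition count
heralded_infidelity = eps / m      # suppression claimed in [54, Chapter 4]
queries = int(1 / eps)             # roughly the O(1/(1-p)) query budget
p_all_herald = (1 - eps) ** queries
print(f"heralded infidelity ~ {heralded_infidelity:.0e}; after {queries} queries, "
      f"probability that every access heralds ~ {p_all_herald:.2f}")
```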
Finally, the code switching process raises some issues. Imagine Figure 12 in
the surface code. We can attempt to limit the time when a qubit is outside of
the surface code to a minimum, but even a brief period without
error correction could spell disaster for a resource-intensive algorithm.
## XI Discussion
Large scale, high quality QRAM which is cheap to use is unlikely. It would
require passively-corrected quantum memory and highly complex, high-fidelity
ballistic computation. We argued in Section VI and Section VII that these are
much stronger assumptions about quantum computing hardware than general large-
scale, fault-tolerant quantum computing. Even if those assumptions come true,
QRAM may still be difficult: building a ballistic computation is complicated
(Theorem VI.1) and it may not interact well with the rest of a fault-tolerant
quantum computation (Theorem VII.1). Indeed, we could find no proposed gate-QRAM
technology that was not either implicitly active or reliant on unreasonably low
error rates (Section IX, Section X).
All of that said, we cannot _prove_ that cheap QRAM will not be possible.
However, many powerful computing devices (say, a polynomial time SAT-solver)
are not provably impossible, but we do not assume they will be built. Our
claim is that cheap QRAM should be considered infeasible until shown
otherwise, instead of the reverse. While classical RAM appears to set a
precedent for such devices, we argued in Section VIII that this is an
inappropriate comparison: QRAM is more difficult relative to other quantum
computations than classical RAM is relative to classical computation.
Despite these critiques, QRAM will likely be a crucial tool for quantum
algorithms. In a circuit model, QRAM has a linear gate cost (regardless of
depth), but optimized circuits allow it to help with quantum chemistry and
some cryptographic attacks (Section V). We encourage readers to consider
circuit QRAM for their algorithms, and use one of the optimized circuits we
referenced.
We leave here a summary of some open questions we raised in this work:
1. With the favourable error scaling of bucket-brigade QRACM, can it be more effective than other methods in a surface code, especially for cryptographic attacks?
2. If we correct only some of the nodes in a bucket-brigade QRAM, can this suppress enough noise for practical purposes?
3. Can Theorem V.2 and Theorem VII.1 extend to account for measurement feedback, and can the latter extend to data-specific distillation processes?
4. How should we model the errors and the costs of construction and use of large-scale Hamiltonians for ballistic computations?
5. Are there codes for which QRAM is transversal, or can otherwise be applied directly to logical states?
6. Can we fix the scaling issues of the transmon-based bucket-brigade memory of [27]?
7. How much noise can fault-tolerant quantum linear algebra tolerate in the QRAM access, and could bucket-brigade QRAM reach that level?
ACKNOWLEDGEMENTS
We thank John Schanck for many fruitful discussions and providing many of
these arguments; Simon C. Benjamin, Bálint Koczor, Sam McArdle, Shouvanik
Chakrabarti, Dylan Herman, Yue Sun, Patrick Rebentrost and an anonymous
reviewer for helpful comments; and Romy Minko, Ryan Mann, Oliver Brown, and
Christian Majenz for valiant attempts to generalize Theorem D.2. S.J. was
funded by the University of Oxford Clarendon fund and A.G.R. was funded by a
JPMorgan Chase Global Technology Applied Research PhD Fellowship.
DISCLAIMER
This research was funded in part by JPMorgan Chase & Co. Any views or opinions
expressed herein are solely those of the authors listed, and may differ from
the views and opinions expressed by JPMorgan Chase & Co. or its affiliates.
This material is not a product of the Research Department of J.P. Morgan
Securities LLC. This material should not be construed as an individual
recommendation for any particular client and is not intended as a
recommendation of particular securities, financial instruments or strategies
for a particular client. This material does not constitute a solicitation or
offer in any jurisdiction.
## References
* Aar [15] Scott Aaronson. Read the fine print. Nature Physics, 11(4):291–293, April 2015.
* AGJO+ [15] Srinivasan Arunachalam, Vlad Gheorghiu, Tomas Jochym-O’Connor, Michele Mosca, and Priyaa Varshinee Srinivasan. On the robustness of bucket brigade quantum RAM. New Journal of Physics, 17(12):123010, December 2015.
* AGPS [20] Martin R. Albrecht, Vlad Gheorghiu, Eamonn W. Postlethwaite, and John M. Schanck. Estimating quantum speedups for lattice sieves. In Shiho Moriai and Huaxiong Wang, editors, Advances in Cryptology – ASIACRYPT 2020, pages 583–613, Cham, 2020. Springer International Publishing.
* AMD [22] AMD. AMD instinct™ MI250X accelerator, 2022.
* AS [22] Martin R. Albrecht and Yixin Shen. Quantum augmented dual attack. Cryptology ePrint Archive, Paper 2022/656, 2022. https://eprint.iacr.org/2022/656.
* BBG+ [13] Robert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, and Mark Stather. Efficient distributed quantum computing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 469(2153):20120686, May 2013.
* BBHL [20] Gustavo Banegas, Daniel J. Bernstein, Iggy Van Hoof, and Tanja Lange. Concrete quantum cryptanalysis of binary elliptic curves. IACR Transactions on Cryptographic Hardware and Embedded Systems, pages 451–472, December 2020.
* BCSS [23] Xavier Bonnetain, André Chailloux, André Schrottenloher, and Yixin Shen. Finding many collisions via reusable quantum walks: Application to lattice sieving. In Advances in Cryptology – EUROCRYPT 2023: 42nd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Lyon, France, April 23-27, 2023, Proceedings, Part V, page 221–251, Berlin, Heidelberg, 2023. Springer-Verlag.
* Ber [01] D. J. Bernstein. Circuits for integer factorization: a proposal, 2001. https://cr.yp.to/papers.html#nfscircuit.
* Ber [09] D. J. Bernstein. Cost analysis of hash collisions: Will quantum computers make sharcs obsolete? Workshop Record of SHARCS’09: Special-purpose Hardware for Attacking Cryptographic Systems, 2009.
* BGB+ [18] Ryan Babbush, Craig Gidney, Dominic W. Berry, Nathan Wiebe, Jarrod McClean, Alexandru Paler, Austin Fowler, and Hartmut Neven. Encoding electronic spectra in quantum circuits with linear t complexity. Phys. Rev. X, 8:041015, Oct 2018.
* BGM+ [19] Dominic W. Berry, Craig Gidney, Mario Motta, Jarrod R. McClean, and Ryan Babbush. Qubitization of Arbitrary Basis Quantum Chemistry Leveraging Sparsity and Low Rank Factorization. Quantum, 3:208, December 2019.
* BHMT [02] Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and estimation. Contemporary Mathematics, 305:53–74, 2002.
* BHPP [15] Mihir K Bhaskar, Stuart Hadfield, Anargyros Papageorgiou, and Iasonas Petras. Quantum algorithms and circuits for scientific computing. arXiv preprint arXiv:1511.08253, 2015.
* BHT [97] Gilles Brassard, Peter Høyer, and Alain Tapp. Quantum cryptanalysis of hash and claw-free functions. ACM SIGACT News, 28(2):14–19, June 1997.
* BJLM [13] Daniel J. Bernstein, Stacey Jeffery, Tanja Lange, and Alexander Meurer. Quantum algorithms for the subset-sum problem. In Philippe Gaborit, editor, Post-Quantum Cryptography, pages 16–33, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
* BJWS [19] Akrem Benatia, Weixing Ji, Yizhuo Wang, and Feng Shi. Sparse matrix partitioning for optimizing SpMV on CPU-GPU heterogeneous platforms. The International Journal of High Performance Computing Applications, 34(1):66–80, November 2019.
* BL [13] Daniel J. Bernstein and Tanja Lange. Non-uniform cracks in the concrete: The power of free precomputation. In Kazue Sako and Palash Sarkar, editors, Advances in Cryptology – ASIACRYPT 2013, pages 321–340, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
* BLP+ [16] Benjamin J. Brown, Daniel Loss, Jiannis K. Pachos, Chris N. Self, and James R. Wootton. Quantum memories at finite temperature. Reviews of Modern Physics, 88(4), November 2016.
* [20] Ryan Babbush, Jarrod R McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven. Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2(1):010103, 2021.
* [21] Ryan Babbush, Jarrod R. McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven. Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2:010103, Mar 2021.
* BMR+ [20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.
* BP [95] G. Bilardi and F.P. Preparata. Horizons of parallel computation. Journal of Parallel and Distributed Computing, 27(2):172–182, 1995.
* BS [20] Xavier Bonnetain and André Schrottenloher. Quantum security analysis of CSIDH. In Advances in Cryptology – EUROCRYPT 2020, pages 493–522. Springer International Publishing, 2020.
* BT [23] Ainesh Bakshi and Ewin Tang. An improved classical singular value transformation for quantum machine learning. arXiv preprint arXiv:2303.01492, 2023.
* BWP+ [17] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. Nature, 549(7671):195–202, 2017.
* Cad [15] Arnau Sala Cadellans. A transmon based quantum switch for a quantum random access memory. Master’s thesis, Leiden University Faculty of Science, Leiden, Netherlands, 2015.
* CCH+ [22] Nadiia Chepurko, Kenneth Clarkson, Lior Horesh, Honghao Lin, and David Woodruff. Quantum-inspired algorithms from randomized numerical linear algebra. In International Conference on Machine Learning, pages 3879–3900. PMLR, 2022.
* CDEH+ [21] Kevin C. Chen, Wenhan Dai, Carlos Errando-Herranz, Seth Lloyd, and Dirk Englund. Heralded quantum random access memory in a scalable photonic integrated circuit platform. In Conference on Lasers and Electro-Optics. Optica Publishing Group, 2021.
* CDS+ [22] B. David Clader, Alexander M. Dalzell, Nikitas Stamatopoulos, Grant Salton, Mario Berta, and William J. Zeng. Quantum resources required to block-encode a matrix of classical data, 2022.
* CGH+ [17] Yu Cai, Saugata Ghose, Erich F. Haratsch, Yixin Luo, and Onur Mutlu. Error characterization, mitigation, and recovery in flash-memory-based solid-state drives. Proceedings of the IEEE, 105(9):1666–1704, 2017.
* CGL+ [22] Nai-Hui Chia, András Pal Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. Journal of the ACM, 69(5):1–72, 2022.
* CH [10] Aaron Carroll and Gernot Heiser. An analysis of power consumption in a smartphone. In Proceedings of the 2010 USENIX Conference on USENIX Annual Technical Conference, USENIXATC’10, page 21, USA, 2010. USENIX Association.
* CHI+ [18] Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, and Leonard Wossnig. Quantum machine learning: a classical perspective. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2209):20170551, January 2018.
* Cho [22] Charles Q. Choi. The beating heart of the world’s first exascale supercomputer, 2022.
* CL [21] André Chailloux and Johanna Loyer. Lattice sieving via quantum random walks. In Mehdi Tibouchi and Huaxiong Wang, editors, Advances in Cryptology – ASIACRYPT 2021, pages 63–91, Cham, 2021. Springer International Publishing.
* CSCDJRH [21] Jorge Chávez-Saab, Jesús-Javier Chi-Domínguez, Samuel Jaques, and Francisco Rodríguez-Henríquez. The SQALE of CSIDH: sublinear vélu quantum-resistant isogeny action with low exponents. Journal of Cryptographic Engineering, August 2021.
* CT [18] Tore Vincent Carstens and Dirk Oliver Theis. Note on (active-)qram-style data access as a quantum circuit, 2018.
* DLBZ [22] Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022.
* EG [02] Laurent El Ghaoui. Inversion error, condition number, and approximate inverses of uncertain matrices. Linear algebra and its applications, 343:171–193, 2002.
* FAHA [22] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
* Fey [85] Richard P. Feynman. Quantum mechanical computers. Optics News, 11(2):11–20, Feb 1985.
* GE [21] Craig Gidney and Martin Ekerå. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. Quantum, 5:433, April 2021.
* GGKK [03] Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar. Introduction to Parallel Computing. Addison Wesley, 2003.
* Gid [19] Craig Gidney. Windowed quantum arithmetic. arXiv:1905.07682, 2019.
* Gid [22] Craig Gidney. Quantum dictionaries without qram. arXiv:2204.13835, 2022.
* Gio [17] R. Gioiosa. Resilience for extreme scale computing. In Rugged Embedded Systems, pages 123–148. Elsevier, 2017.
* [48] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Architectures for a quantum random access memory. Phys. Rev. A, 78:052310, Nov 2008.
* [49] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, Apr 2008.
* GLT [18] András Gilyén, Seth Lloyd, and Ewin Tang. Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension. arXiv preprint arXiv:1811.04909, 2018.
* God [68] John T. Godfrey. Binary digital computer, U.S. Patent 3390471A, Jul. 1968.
* Goo [22] Google. Cloud TPU documentation: System architecture, 2022. https://cloud.google.com/tpu/docs/system-architecture-tpu-vm.
* GSLW [19] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 193–204, 2019.
* Han [21] Connor T. Hann. Practicality of Quantum Random Access Memory. PhD thesis, Yale Graduate School of Arts and Sciences Dissertations, 2021.
* Hei [21] Max Heiser. Improved quantum hypercone locality sensitive filtering in lattice sieving. Cryptology ePrint Archive, Paper 2021/1295, 2021. https://eprint.iacr.org/2021/1295.
* HHL [09] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 103:150502, Oct 2009.
* HJN+ [20] Thomas Häner, Samuel Jaques, Michael Naehrig, Martin Roetteler, and Mathias Soeken. Improved quantum circuits for elliptic curve discrete logarithms. In Post-Quantum Cryptography, pages 425–444. Springer International Publishing, 2020.
* HLGJ [21] Connor T. Hann, Gideon Lee, S.M. Girvin, and Liang Jiang. Resilience of quantum random access memory to generic noise. PRX Quantum, 2(2), April 2021.
* HSH+ [09] J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten. Lest we remember: Cold-boot attacks on encryption keys. Commun. ACM, 52(5):91–98, may 2009.
* HXZ+ [12] Fang-Yu Hong, Yang Xiang, Zhi-Yan Zhu, Li-zhen Jiang, and Liang-neng Wu. Robust quantum random access memory. Phys. Rev. A, 86:010306, Jul 2012.
* JPC+ [19] N. Jiang, Y.-F. Pu, W. Chang, C. Li, S. Zhang, and L.-M. Duan. Experimental realization of 105-qubit random access quantum memory. npj Quantum Information, 5(1), April 2019.
* JS [19] Samuel Jaques and John M. Schanck. Quantum cryptanalysis in the RAM model: Claw-finding attacks on SIKE. In Alexandra Boldyreva and Daniele Micciancio, editors, Advances in Cryptology - CRYPTO 2019 - 39th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 18-22, 2019, Proceedings, Part I, volume 11692 of Lecture Notes in Computer Science, pages 32–61. Springer, 2019.
* Kir [18] Elena Kirshanova. Improved quantum information set decoding. In Tanja Lange and Rainer Steinwandt, editors, Post-Quantum Cryptography, pages 507–527, Cham, 2018. Springer International Publishing.
* KMM+ [19] Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322, 2019.
* KP [16] Iordanis Kerenidis and Anupam Prakash. Quantum recommendation systems. arXiv preprint arXiv:1603.08675, 2016.
* KSV [02] A. Yu. Kitaev, A. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical Society, USA, 2002.
* KT [17] Ghazal Kachigar and Jean-Pierre Tillich. Quantum information set decoding algorithms, 2017.
* Kup [13] G. Kuperberg. Another subexponential-time quantum algorithm for the dihedral hidden subgroup problem. In TQC 2013, LIPIcs 22, pages 20–34, 2013.
* KV [17] Ravindran Kannan and Santosh Vempala. Randomized algorithms in numerical linear algebra. Acta Numerica, 26:95–135, 2017.
* Laa [17] Thijs Laarhoven. Sieving for closest lattice vectors (with preprocessing). In Roberto Avanzi and Howard Heys, editors, Selected Areas in Cryptography – SAC 2016, pages 523–542, Cham, 2017. Springer International Publishing.
* LC [19] Guang Hao Low and Isaac L Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019.
* Lin [22] Lin Lin. Lecture notes on quantum algorithms for scientific computation. arXiv preprint arXiv:2201.08309, 2022.
* LJGGK [16] C. Lecaplain, C. Javerzac-Galy, M. L. Gorodetsky, and T. J. Kippenberg. Mid-infrared ultra-high-q resonators based on fluoride crystalline materials. Nature Communications, 7(1), November 2016.
* LKS [18] Guang Hao Low, Vadym Kliuchnikov, and Luke Schaeffer. Trading t-gates for dirty qubits in state preparation and unitary synthesis, 2018.
* LLL+ [23] Junyu Liu, Minzhao Liu, Jin-Peng Liu, Ziyu Ye, Yuri Alexeev, Jens Eisert, and Liang Jiang. Towards provably efficient quantum algorithms for large-scale machine-learning models. arXiv preprint arXiv:2303.03428, 2023.
* LMvdP [15] Thijs Laarhoven, Michele Mosca, and Joop van de Pol. Finding shortest lattice vectors faster using quantum search. Designs, Codes and Cryptography, 77(2-3):375–400, April 2015.
* MGM [20] Olivia Di Matteo, Vlad Gheorghiu, and Michele Mosca. Fault-tolerant resource estimation of quantum random-access memories. IEEE Transactions on Quantum Engineering, 1:1–13, 2020.
* MM [16] E. S. Moiseev and S. A. Moiseev. Time-bin quantum RAM. Journal of Modern Optics, 63(20):2081–2092, May 2016.
* Moh [10] Vidyabhushan Mohan. Modelling the physical characteristics of NAND flash memory. PhD thesis, School of Engineering and Applied Science, University of Virginia, 2010.
* Moo [22] Dustin Moody. Status report on the third round of the NIST post-quantum cryptography standardization process. Technical report, National Institute of Standards and Technology, 2022.
* MP [21] Michele Mosca and Marco Piani. Quantum threat timeline report 2020. Technical report, Global Risk Institute, 2021.
* MRTC [21] John M Martyn, Zane M Rossi, Andrew K Tan, and Isaac L Chuang. Grand unification of quantum algorithms. PRX Quantum, 2(4):040203, 2021.
* NZB+ [22] Murphy Yuezhen Niu, Alexander Zlokapa, Michael Broughton, Sergio Boixo, Masoud Mohseni, Vadim Smelyanskyi, and Hartmut Neven. Entangling quantum generative adversarial networks. Phys. Rev. Lett., 128:220505, Jun 2022.
* OKD+ [22] James O’Sullivan, Oscar W. Kennedy, Kamanasish Debnath, Joseph Alexander, Christoph W. Zollitsch, Mantas Šimėnas, Akel Hashim, Christopher N. Thomas, Stafford Withington, Irfan Siddiqi, Klaus Mølmer, and John J. L. Morton. Random-access quantum memory using chirped pulse phase encoding. Phys. Rev. X, 12:041014, Nov 2022.
* PAA+ [21] Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Herman, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, et al. Quantum machine learning for finance iccad special session paper. In 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pages 1–9. IEEE, 2021.
* PBK+ [22] Jung Jun Park, Kyunghyun Baek, M. S. Kim, Hyunchul Nha, Jaewan Kim, and Jeongho Bang. $t$-depth-optimized quantum search with quantum data-access machine, 2022.
* PCG [23] Koustubh Phalak, Avimita Chatterjee, and Swaroop Ghosh. Quantum random access memory for dummies, 2023.
* Pei [20] Chris Peikert. He gives c-sieves on the CSIDH. In Advances in Cryptology – EUROCRYPT 2020, pages 463–492. Springer International Publishing, 2020.
* PLG [22] Koustubh Phalak, Junde Li, and Swaroop Ghosh. Approximate quantum random access memory architectures, 2022.
* POB [20] Alexandru Paler, Oumarou Oumarou, and Robert Basmadjian. Parallelizing the queries in a bucket-brigade quantum random access memory. Phys. Rev. A, 102:032608, Sep 2020.
* PPR [19] Daniel K. Park, Francesco Petruccione, and June-Koo Kevin Rhee. Circuit-based quantum random access memory for classical data. Scientific Reports, 9(1), March 2019.
* PY [15] Fernando Pastawski and Beni Yoshida. Fault-tolerant logical gates in quantum error-correcting codes. Phys. Rev. A, 91:012305, Jan 2015.
* RK [22] Arthur G Rattew and Bálint Koczor. Preparing arbitrary continuous functions in quantum registers with logarithmic complexity. arXiv preprint arXiv:2205.00519, 2022.
* RML [14] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Physical review letters, 113(13):130503, 2014.
* RS [08] Oded Regev and Liron Schiff. Impossibility of a quantum speed-up with a faulty oracle. In Luca Aceto, Ivan Damgård, Leslie Ann Goldberg, Magnús M. Halldórsson, Anna Ingólfsdóttir, and Igor Walukiewicz, editors, Automata, Languages and Programming, pages 773–781, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg.
* SB [18] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
* Sch [21] John M. Schanck, Jun 2021.
* SCS+ [22] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022.
* SPW [11] Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. Dram errors in the wild: A large-scale field study. Commun. ACM, 54(2):100–107, feb 2011.
* Ste [16] Damien Steiger. Racing in parallel: Quantum versus classical, 2016.
* STY+ [23] Xiaoming Sun, Guojing Tian, Shuai Yang, Pei Yuan, and Shengyu Zhang. Asymptotically optimal circuit depth for quantum state preparation and general unitary synthesis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, pages 1–1, 2023.
* TAC+ [10] W. Tittel, M. Afzelius, T. Chaneliére, R.L. Cone, S. Kröll, S.A. Moiseev, and M. Sellars. Photon-echo quantum memory in solid state systems. Laser & Photonics Reviews, 4(2):244–267, 2010.
* Tan [07] Seiichiro Tani. An improved claw finding algorithm using quantum walk. In Mathematical Foundations of Computer Science 2007, pages 536–547. Springer Berlin Heidelberg, 2007.
* Tan [19] Ewin Tang. A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st annual ACM SIGACT symposium on theory of computing, pages 217–228, 2019.
* Tez [04] Tezzaron Semiconductor, January 2004.
* VBE [96] Vlatko Vedral, Adriano Barenco, and Artur Ekert. Quantum networks for elementary arithmetic operations. Physical Review A, 54(1):147, 1996.
* WBL+ [22] Zhiling Wang, Zenghui Bao, Yan Li, Yukai Wu, Weizhou Cai, Weiting Wang, Xiyue Han, Jiahui Wang, Yipu Song, Luyan Sun, Hongyi Zhang, and Luming Duan. An ultra-high gain single-photon transistor in the microwave regime. Nature Communications, 13(1), October 2022.
* YMH+ [15] X. X. Yuan, J.-J. Ma, P.-Y. Hou, X.-Y. Chang, C. Zu, and L.-M. Duan. Experimental demonstration of a quantum router. Scientific Reports, 5(1), July 2015.
* YZ [22] Pei Yuan and Shengyu Zhang. Optimal qram and improved unitary synthesis by quantum circuits with any number of ancillary qubits, 2022.
* ZDM+ [19] Darko Zivanovic, Pouya Esmaili Dokht, Sergi Moré, Javier Bartolome, Paul M. Carpenter, Petar Radojković, and Eduard Ayguadé. Dram errors in the field: A statistical approach. In Proceedings of the International Symposium on Memory Systems, MEMSYS ’19, page 69–84, New York, NY, USA, 2019. Association for Computing Machinery.
* [111] Xiao-Ming Zhang, Tongyang Li, and Xiao Yuan. Quantum state preparation with optimal circuit depth: Implementations and applications. Phys. Rev. Lett., 129:230504, Nov 2022.
* [112] Xiao-Ming Zhang, Tongyang Li, and Xiao Yuan. Quantum state preparation with optimal circuit depth: Implementations and applications. Physical Review Letters, 129(23):230504, 2022.
## Appendix A Linear Algebra Proofs
###### Lemma A.1 (Optimality of Quantum Eigenvalue Transform).
Any quantum algorithm implementing a degree-$k$ polynomial of a matrix $H$
(with the polynomial defined on the eigenvalues) requires $\Omega(k)$ queries
to $O_{H}$, in general.
###### Proof.
We prove this lower-bound by a reduction to unstructured search, in the QSVT
framework (as per [53]), from which the bound immediately follows. We note
that the result of [53] almost immediately implies this bound, but we present
the following derivation in a format to enhance the clarity of our discussion.
First we formally define a block-encoding. As per [53], an $n+a$-qubit unitary
matrix $U_{A}$ is called an $(\alpha,a,\epsilon)$-block encoding of the
$n$-qubit matrix $A$ if $\left\lVert A-\alpha(\left\langle 0\right|^{\otimes
a}\otimes I_{n})U_{A}(\left|0\right\rangle^{\otimes a}\otimes
I_{n})\right\rVert_{2}\leq\epsilon$.
Given an unstructured search function with $f(x)=0$ for all $x\neq m$ and $f(m)=1$
(with $m$ unique), specified by an oracle $O_{f}$ defined as
$O_{f}\left|x\right\rangle=(-1)^{f(x)\oplus 1}\left|x\right\rangle$ (where we
negate the sign for ease of analysis), we can construct a $(1,9,0)$-block-
encoding $U_{A}$ of $A:=\frac{1}{\sqrt{N}}|m\rangle\langle+^{n}|$, where
$\left|+^{n}\right\rangle=H^{\otimes n}\left|0\right\rangle$. We give a very
inefficient block encoding (in terms of the number of ancillas) for ease of
exposition. First, we construct a $(1,4,0)$-block-encoding of $H^{\otimes
n}|0\rangle\langle 0|H^{\otimes n}$. This can be done by first constructing a
$(1,2,0)$-block-encoding of $|0\rangle\langle
0|=\frac{1}{2}(I_{n}+(2|0\rangle\langle 0|-I_{n}))$, obtained by applying the
standard sum-of-block-encodings lemma [53] to the $n$-qubit identity matrix and
the standard $2|0\rangle\langle 0|-I_{n}$ unitary from
Grover search. We can then use the product of block encodings lemma with
$H^{\otimes n}$, $|0\rangle\langle 0|$, and $H^{\otimes n}$ again to get a
$(1,4,0)$-block encoding of $|+^{n}\rangle\langle+^{n}|$. We can then use the
product of block encoding with $I_{1}\otimes O_{f}$ and
$|+^{n}\rangle\langle+^{n}|$ to get a $(1,5,0)$-block encoding of
$O_{f}|+^{n}\rangle\langle+^{n}|$. We can then use the sum of block encodings
with $|+^{n}\rangle\langle+^{n}|$ and $O_{f}|+^{n}\rangle\langle+^{n}|$ to get
a $(1,9,0)$-block-encoding of
$\frac{1}{2}((O_{f}+I)|+^{n}\rangle\langle+^{n}|)=\frac{1}{\sqrt{N}}|m\rangle\langle+^{n}|$,
as desired.
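The operator identity used above can be verified numerically for a small instance; the following sanity check is our own addition and is not part of [53].

```python
import numpy as np

# Sanity check (ours) of 1/2 (O_f + I)|+^n><+^n| = (1/sqrt(N)) |m><+^n| for n = 3.
n, m = 3, 5
N = 2 ** n
plus = np.full(N, 1 / np.sqrt(N))                            # |+^n>
O_f = np.diag([1.0 if x == m else -1.0 for x in range(N)])   # O_f|x> = (-1)^{f(x)+1}|x>
lhs = 0.5 * (O_f + np.eye(N)) @ np.outer(plus, plus)
e_m = np.zeros(N); e_m[m] = 1.0
rhs = np.outer(e_m, plus) / np.sqrt(N)                       # (1/sqrt(N)) |m><+^n|
assert np.allclose(lhs, rhs)
print("1/2 (O_f + I)|+><+| equals (1/sqrt(N)) |m><+^n|")
```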
We can apply the polynomial approximation of the sign function as per [53] to
our block encoding unitary $U_{A}$, giving a constant probability of success
of measuring the marked state. They give the degree of this (odd degree)
polynomial as $k=O(\log(1/\epsilon)/\delta)$, where $\epsilon$ is an upper
bound on the maximum deviation of the polynomial approximation from the sign
function on the range $x\in[-2,2]\backslash(-\delta,\delta)$. If we set
$\delta=\frac{1}{\sqrt{N}}$, then the sign function maps our non-zero singular
value to $1$, while leaving the zero-valued singular values unchanged. As a
result, we can measure the ancilla register in the $\left|0\right\rangle$
state with probability $\propto 1-\epsilon$, learning the result of the
unstructured search problem. If it were possible to implement this
degree-$\tilde{O}(1/\delta)$ polynomial with fewer than $\Omega(1/\delta)$
queries to $U_{A}$, we could solve unstructured search in general with fewer
than $\Omega(\sqrt{N})$ queries to the unstructured search oracle, violating
well-known lower-bounds for unstructured search. As a consequence, any general
algorithm implementing a SVT of a degree-$k$ polynomial of some matrix $H$
must use at least $\Omega(k)$ queries to that matrix.
A polynomial implementing the sign function applied to the eigenvalues of a
block encoding of $\tilde{A}:=\begin{pmatrix}0&A\\ A^{\dagger}&0\end{pmatrix}$
would also solve unstructured search in the same
way (noting that the non-zero eigenvalues of $\tilde{A}$ correspond to
$\pm\frac{1}{\sqrt{N}}$ and have associated eigenvectors
$\frac{1}{\sqrt{2}}(\left|0\right\rangle_{1}\left|m\right\rangle\pm\left|1\right\rangle_{1}\left|+^{n}\right\rangle)$).
Thus, applying $\text{Sign}(\tilde{A})$ leaves the $0$ eigenvalues unchanged,
and maps the $\pm\frac{1}{\sqrt{N}}$ eigenvalues to $\pm 1$ (within $\epsilon$
distance). Consequently,
$\text{Sign}(\tilde{A})\approx\begin{pmatrix}0&|m\rangle\langle+^{n}|\\ |+^{n}\rangle\langle m|&0\end{pmatrix}$. Applying $\text{Sign}(\tilde{A})$ to
an initial state $\left|1\right\rangle\left|+^{n}\right\rangle$ then gives the
solution to the unstructured search problem, and so the same bound holds for
Problem 1. ∎
###### Lemma A.2.
Given $P$ classical processors sharing a common memory of access time $O(1)$,
a $d$-sparse matrix $A\in\mathbb{C}^{N\times N}$ and a vector
$\bm{v}\in\mathbb{C}^{N}$, we can compute $A\bm{v}$ with $\tilde{O}(Nd/P)$
time complexity.
###### Proof.
This is a standard result; see [44]. Assume we represent $A$ as $N$ lists of
elements $(j,A_{ij})$, where $A_{ij}$ are the non-zero components in row $i$,
and $\bm{v}$ as an $N$-element array. With fewer than $N$ processors, each
processor will iterate through its assigned rows of $A$, one row at a time. For an entry $(j,A_{ij})$, it
will look up the $j$th element of $\bm{v}$, multiply it by $A_{ij}$ and add
that to a running total. This produces one element of $A\bm{v}$. Each element
can be computed independently, so this parallelizes perfectly. With more than
$N$ processors, rows will have multiple processors. The processors for row $i$
can divide the entries of $A$ in that row and compute sub-totals. To compute
the full sum, they add their sub-totals together in a tree structure; this
tree has depth $O(\log(P/N))$. ∎
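A serial emulation of this schedule (our own sketch; the chunking and variable names are illustrative, not from [44]) makes the $\tilde{O}(Nd/P)$ accounting concrete:

```python
import numpy as np

# Serial emulation (ours) of the row-partitioned schedule in Lemma A.2.
def sparse_mv_rowwise(rows, v, P):
    """rows[i] is a list of (j, A_ij) pairs for row i; returns (A v, parallel time)."""
    N = len(rows)
    out = np.zeros(N)
    chunk = -(-N // P)                       # ceil(N / P) rows per processor
    parallel_time = 0
    for p in range(P):                       # each loop body runs on its own processor
        work = 0
        for i in range(p * chunk, min((p + 1) * chunk, N)):
            for j, a in rows[i]:             # scan the <= d nonzeros of row i
                out[i] += a * v[j]           # one lookup of v[j] and one multiply-add
                work += 1
        parallel_time = max(parallel_time, work)
    return out, parallel_time                # parallel_time ~ N*d/P

# Example: a random 3-sparse 8x8 matrix.
rng = np.random.default_rng(0)
N, d = 8, 3
rows = [[(int(j), rng.standard_normal()) for j in rng.choice(N, d, replace=False)]
        for _ in range(N)]
v = rng.standard_normal(N)
Av, T = sparse_mv_rowwise(rows, v, P=4)
dense = np.zeros((N, N))
for i, r in enumerate(rows):
    for j, a in r:
        dense[i, j] = a
assert np.allclose(Av, dense @ v)
print("parallel time ~", T)                  # ~ N*d/P = 6
```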
Following the techniques of [9], we use sorting to build matrix
multiplication.
###### Lemma A.3.
Given $P$ parallel processors forming a sorting network that can sort in time
$\mathsf{S}$, multiplying an $N$-dimensional vector $\bm{v}$ by a $d$-sparse
$N\times N$ matrix $A$ takes time $O(\mathsf{S}+Nd/P+\lg(d))$.
###### Proof.
First the processors create $d$ copies of $\bm{v}$. Each processor will store
a block of $A$ or $\bm{v}$ in local memory, so by treating $A$ as a block
matrix, we suppose each processor has one entry $A_{ij}$ of $A$ or
$\bm{v}_{j}$ of $\bm{v}$. We assume there is an efficiently-computable
function $f_{j}$ for each column $j$, which injectively maps indices $k$ from
$1$ to $d$ to the non-zero elements of that column. Each processor creates a
tuple $(k,j,0,A_{f_{j}(k)j})$ for components of $A$, and $(k,j,\bm{v}_{j})$
for components of $\bm{v}$ (each copy of $\bm{v}$ has a distinct value of
$k$). They will then sort this data by the second component, then the first.
Because each column of $A$ contains at most $d$ non-zero entries, this ensures
each processor with a tuple $(k,j,0,A_{f_{j}(k)j})$ is adjacent to a tuple
$(k,j,\bm{v}_{j})$, so these two processors can compute
$A_{f_{j}(k)j}\bm{v}_{j}$ and store the result in the tuple for $A$. The
processors can then discard the tuples for $\bm{v}$, and change the first
component from $k$ to $f_{j}(k)$. Then they re-sort so that the rows of $A$
are physically close (e.g., in a square of length $\sqrt{d}$ in 2 dimensions).
They then add the entries of $A_{ij}\bm{v}_{j}$ for each row. Cascading these
together will take at least $\lg(d)$ sequential additions, but it will be
asymptotically less than the sorts of the full matrix and vector. This
produces the $i^{th}$ element of $A\bm{v}$. Another sort can send these to
whatever memory location represents the output. ∎
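The following serial emulation (our own sketch; the tuple layout and sort key are ours, with Python's built-in sort standing in for the sorting network) mirrors the sort, match, multiply, and re-sort pattern of the proof:

```python
import numpy as np

# Serial emulation (ours) of the sort-based schedule of Lemma A.3.
def sort_based_mv(cols, v, N):
    """cols[j] is a list of (i, A_ij) pairs for column j; returns A v."""
    records = []
    for j, col in enumerate(cols):
        for k, (i, a) in enumerate(col):
            records.append(("A", k, j, i, a))     # one record per nonzero A_{ij}
        for k in range(len(col)):
            records.append(("v", k, j, v[j]))     # one copy of v_j per nonzero in column j
    # Sort by (column, copy index, tag): each A record lands next to its copy of v_j.
    records.sort(key=lambda r: (r[2], r[1], r[0]))
    out = np.zeros(N)
    for a_rec, v_rec in zip(records[0::2], records[1::2]):
        _, _, _, i, a = a_rec
        out[i] += a * v_rec[3]                    # adjacent pair computes A_{ij} * v_j
    return out

cols = [[(0, 2.0), (2, 1.0)], [(1, 3.0)], []]     # a 3x3, 2-sparse example
v = np.array([1.0, 1.0, 1.0])
dense = np.array([[2.0, 0, 0], [0, 3.0, 0], [1.0, 0, 0]])
assert np.allclose(sort_based_mv(cols, v, 3), dense @ v)
```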
###### Lemma A.4 (Dense Matrix-Vector Multiplication with Local Memory).
A set of $O(N^{2}\log(N))$ processors arranged in a $3D$ grid and with nearly
local connectivity, leading to a total wire-length of $\tilde{O}(N^{2})$, can
implement a matrix-vector multiplication with an $N\times N$ matrix $A$ and an
$N$-dimensional vector $\bm{v}$ with $O(\log(N))$ complexity.
###### Proof.
Consider a grid of $N\times N$ processors, each with local memory, and no
connections to their nearest neighbors. Assume that each element in $A$ is
assigned to a corresponding processor in the grid, i.e. the processor in the
$i^{th}$ row and $j^{th}$ column stores element $A_{ij}$. If our initial
vector $\bm{v}$ is stored in a set of $N$ data cells, using a stack of
$\log(N)$ local processors, of dimension $2\times N,4\times N,...,N\times N$
we can recursively spread the elements in the vector to construct an $N\times
N$ matrix $V$ such that $V_{ij}=\bm{v}_{j}$. The first layer in this stack has
2 wires per layer, each of length $N/2$, for a total wire-length of
$O(N^{2})$. The final layer in this stack, the one that connects element
$V_{ij}$ to element $A_{ij}$ in the main $N\times N$ grid, has $O(N^{2})$
wires, each of length $O(1)$ for a total wire length of $O(N^{2})$. The middle
layers in the stack similarly have a total wire length of $O(N^{2})$ (summing
progressively more, but shorter, wires). Thus, the main $N\times N$ processor grid
can store $A_{ij}\bm{v}_{j}$ with a total of $O(N^{2}\log N)$ ancillary local
processors with a total wire length of $O(N^{2}\log N)$. We can then repeat
this process of spreading out the elements of $\bm{v}$ in reverse, adding
another stack of $\log N$ processors of size $N\times N,N\times
N/2,...,N\times 2,N\times 1$. We then add elements in adjacent cells, sending
them to the cell in the layer above, building up the sum of products of the
appropriate matrix and vector element. This produces the output vector
$A\bm{v}$, where the $i^{th}$ row stores $\sum_{j=1}^{N}A_{ij}\bm{v}_{j}$,
with a total wire-length of $O(N^{2}\log N)$, and with a total of $O(N^{2}\log
N)$ processors. ∎
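A short emulation of the spread-and-reduce schedule (our own sketch; the wire-length accounting from the proof is not modelled) shows the $O(\log N)$ round structure:

```python
import numpy as np

# Serial emulation (ours) of the O(log N) spread/multiply/reduce schedule of Lemma A.4.
def grid_matvec(A, v):
    N = len(v)
    # Spread phase: log N doubling rounds replicate v across all N rows.
    V = v[np.newaxis, :]
    while V.shape[0] < N:
        V = np.vstack([V, V])[:N]
    prod = A * V                       # each grid cell computes A_ij * v_j locally
    # Reduce phase: log N halving rounds sum adjacent columns pairwise.
    while prod.shape[1] > 1:
        if prod.shape[1] % 2:
            prod = np.hstack([prod, np.zeros((N, 1))])
        prod = prod[:, 0::2] + prod[:, 1::2]
    return prod[:, 0]

A = np.arange(16, dtype=float).reshape(4, 4)
v = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(grid_matvec(A, v), A @ v)
```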
## Appendix B Proof of Lemma V.1
We start by defining the set of circuits $\mathcal{C}(W,D,G,g,k)$ to be all
circuits on $W$ qubits, of circuit depth $D$, using $G$ gates from a set of
size $g$ with fanin at most $k$. Here we assume all gates have depth 1 and we
count circuits based on the arrangement of gates, not by function. Define a
“circuit” as a function that takes as input a pair of (qubit,time step) and
returns as output the particular gate applied to that qubit at that time step.
If two circuits have a different effect on quantum states, they must use
different gates, and thus will be distinct circuits by this definition. This
definition overcounts circuits, since equivalent circuits are counted twice,
but we want to upper bound the number of circuits so this is fine.
###### Proposition B.1.
Under this notation,
$|\mathcal{C}(W,D,G,g,k)|\leq|\mathcal{C}(W,D,\min\\{kG,DW\\},Wg,1)|$.
###### Proof.
Let $U$ be an $\ell$-qubit gate in the gate set of $\mathcal{C}(W,D,g,k)$. We
can define $\ell$ distinct single-qubit gates $U_{1},\dots,U_{\ell}$ from $U$,
expanding the total number of gates from $g$ to $gk$, and and then further
define $W/\ell$ multiples of each of these single qubit gates, i.e.,
$U_{1}^{(1)},\dots,U_{\ell}^{(1)},\dots,U_{1}^{(m)},\dots,U_{\ell}^{(m)}$
where $m=\left\lfloor W/\ell\right\rfloor$. This gives at most $W$ single-
qubit gates from $U$, so the total number of single-qubit gates defined in
this way is at most $Wg$.
We now define an injective function from $\mathcal{C}(W,D,G,g,k)$ to
$\mathcal{C}(W,D,\min\\{kG,DW\\},Wg,1)$. For each circuit, at each timestep we
take all $\ell$-qubit gates $U^{(1)},\dots,U^{(m)}$ (we know there are at most
$m=\left\lfloor W/\ell\right\rfloor$ gates of fanin $\ell$ in one
timestep, because there aren’t enough qubits for more!) and decompose each
into $U_{1}^{(i)},\dots,U_{\ell}^{(i)}$, such that the qubit mapped to the
$j$th input of the $i$th gate is given gate $U_{j}^{(i)}$. This increases the
total number of gates by at most a factor of $k$, and cannot increase the
number of gates above $DW$.
For example, a CNOT would split into two gates, a “target” gate and a
“control” gate, and if these are reversed, the circuit is different (by our
counting).
To show injectivity, suppose two circuits $C_{1}$ and $C_{2}$ in
$\mathcal{C}(W,D,G,g,k)$ map to the same circuit in
$\mathcal{C}(W,D,kG,gW,1)$. Let $U$ be any gate in $C_{1}$; suppose it applies
to $\ell$ qubits $q_{1},\dots,q_{\ell}$ in time step $t$. This means there is
some $i$ such that the image circuit has $U_{1}^{(i)},\dots,U_{\ell}^{(i)}$
applied to $q_{1},\dots,q_{\ell}$ in time step $t$. By construction of our
function, this means there must be some gate in $C_{2}$ of the same type as
$U$ which is also applied to those same qubits, in the same order, in time
step $t$. Repeating for all gates shows that $C_{1}=C_{2}$. ∎
###### Proposition B.2.
In the same notation, $|\mathcal{C}(W,D,G,g,1)|\leq\binom{DW}{G}g^{G}$.
###### Proof.
We can straightforwardly count: There are $DW$ possible “slots” for each gate,
depending on which qubit and which time step. We choose $G$ slots, and for
each one, we select one of the $g$ gates. ∎
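As a toy instance of this count (numbers ours, purely illustrative):

```python
from math import comb

# Toy instance (ours) of the bound in Proposition B.2.
W, D, G, g = 4, 3, 5, 2                 # qubits, depth, gates used, gate-set size
bound = comb(D * W, G) * g ** G         # choose G of the D*W slots, then pick a gate for each
print(bound)                            # C(12, 5) * 2^5 = 792 * 32 = 25344
```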
###### Lemma B.3.
$|\mathcal{C}(W,D,G,g,k)|\leq\binom{DW}{\min\\{kG,DW\\}}(Wg)^{G}$.
###### Proof.
We combine Proposition B.1 and Proposition B.2 to obtain the result. ∎
## Appendix C Proof of Hamiltonian Complexity
Our model for a ballistic computation is a Hamiltonian on $W$ qubits, where we
choose $n$ $k$-local terms $H_{1},\dots,H_{n}$ out of a set of $N$ possible
terms, scaling each to an an energy $a_{i}$ between $-E$ and $E$, then
evolving for a time $t$. The Hamiltonian will be
$H=\sum_{i=1}^{n}a_{i}H_{i}.$ (15)
Since $e^{itH/\hbar}=e^{i(tE)(H/E)/\hbar}$, for convenience we scale the
energy to $1$ and scale up the time $t$. Note this also captures the case of
evolving the same Hamiltonian for less time, e.g., evolving for time $t_{0}<t$
is the same as scaling all energy coefficients by $\frac{t_{0}}{t}$ and
evolving to time $t$.
Before the main theorem, we need a lemma about matrix exponentials:
###### Lemma C.1.
Suppose $H_{1}$ and $H_{2}$ are two Hamiltonians of norm at most $1$ such that
$\|H_{1}-H_{2}\|_{2}\leq\epsilon$. If
$\|e^{itH_{1}/\hbar}-e^{itH_{2}/\hbar}\|_{2}\geq\delta$ (16)
then
$\epsilon\geq\frac{\hbar\ln(\delta
e^{-t/\hbar}+1)}{t}\geq\frac{\hbar}{t}\delta e^{-t/\hbar}(1-\delta
e^{-t/\hbar}).$ (17)
We will introduce an unusual but helpful notation: for any natural numbers
$n,j$ with $j\leq n$, and two matrices $A$ and $B$,
$\binom{n:A}{j:B}:=\sum_{\vec{x}\in\\{0,1\\}^{n}:\|\vec{x}\|_{1}=j}B^{x_{1}}A^{1-x_{1}}\cdots
B^{x_{n}}A^{1-x_{n}}$ (18)
In other words, we add up all arrangements of $A$ and $B$ that contain $j$
terms of $B$ and $n-j$ terms of $A$. This is useful because
$(A+B)^{n}=\sum_{j=0}^{n}\binom{n:A}{j:B}.$ (19)
###### Proof.
Suppose that $H_{1}-H_{2}=\epsilon H_{12}$, where $H_{12}$ has singular values
at most $1$. We can then argue that
$\displaystyle e^{itH_{1}/\hbar}=$ $\displaystyle e^{itH_{2}/\hbar+it\epsilon
H_{12}/\hbar}$ (20) $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\frac{i^{n}t^{n}}{\hbar^{n}n!}(H_{2}+\epsilon
H_{12})^{n}$ (21) $\displaystyle=$
$\displaystyle\underbrace{\sum_{n=0}^{\infty}\frac{i^{n}t^{n}}{\hbar^{n}n!}H_{2}^{n}}_{=e^{itH_{2}/\hbar}}+\sum_{n=0}^{\infty}\frac{i^{n}t^{n}}{\hbar^{n}n!}\sum_{j=1}^{n}\epsilon^{j}\binom{n:H_{2}}{j:H_{12}}$
(22)
Since $\|H_{2}\|\leq 1$ and $\|H_{12}\|\leq 1$, we can bound
$\left\|\binom{n:H_{2}}{j:H_{12}}\right\|\leq\sum_{\vec{a}\in\\{0,1\\}^{n},\|\vec{a}\|_{1}=j}1=\binom{n}{j}.$
(23)
so we can bound
$\displaystyle\delta\leq\|e^{itH_{1}/\hbar}-e^{itH_{2}/\hbar}\|\leq$
$\displaystyle\sum_{n=0}^{\infty}\frac{t^{n}}{\hbar^{n}n!}\sum_{j=1}^{n}\epsilon^{j}\binom{n}{j}$
(24) $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\frac{t^{n}}{\hbar^{n}n!}\left((1+\epsilon)^{n}-1\right)$
(25) $\displaystyle=$ $\displaystyle e^{t/\hbar}(e^{t\epsilon/\hbar}-1).$ (26)
Rearranging gives the result. ∎
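A quick numerical spot check (our own, with $\hbar=1$ and random Hamiltonians) of the derived inequality $\delta\leq e^{t/\hbar}(e^{t\epsilon/\hbar}-1)$:

```python
import numpy as np

# Numerical spot check (ours) of delta <= e^t (e^{t*eps} - 1) from the proof of
# Lemma C.1, with hbar = 1.
rng = np.random.default_rng(1)

def rand_herm(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    H = (M + M.conj().T) / 2
    return H / np.linalg.norm(H, 2)            # spectral norm 1

def U(H, t):
    w, V = np.linalg.eigh(H)                   # e^{itH} via the spectral decomposition
    return (V * np.exp(1j * t * w)) @ V.conj().T

d, t, eps = 8, 2.0, 1e-3
H1 = rand_herm(d)
H2 = H1 - eps * rand_herm(d)                   # ||H1 - H2||_2 <= eps
delta = np.linalg.norm(U(H1, t) - U(H2, t), 2)
bound = np.exp(t) * (np.exp(t * eps) - 1)
assert delta <= bound
print(f"delta = {delta:.2e} <= bound = {bound:.2e}")
```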
###### Theorem C.2.
Let $\mathcal{G}$ be a set of mutually orthogonal circuits (i.e., for any
$U_{1},U_{2}\in\mathcal{G}$, there is some input state such that
$\left\langle\psi\right|U_{2}^{\dagger}U_{1}\left|\psi\right\rangle=0$). Let
$\mathcal{H}$ be a set of Hamiltonians constructed in the manner specified
above, such that for all $U\in\mathcal{G}$, there is $H\in\mathcal{H}$ such
that $U=e^{itH/\hbar}$. Then
$\log(|\mathcal{G}|)\in O(ntE+n\log W).$ (28)
###### Proof.
Let $\vec{a}=(a_{1},\dots,a_{n})$ be the vector of the coefficients of each
term. First we note that the singular values of $a_{1}H_{1}+\dots+a_{n}H_{n}$
are bounded by $\epsilon$ if $\|\vec{a}\|_{1}<\epsilon$. If we choose the same
$n$ Hamiltonians and two coefficient vectors $\vec{a}_{A}$ and $\vec{a}_{B}$
such that $\|\vec{a}_{A}-\vec{a}_{B}\|_{1}\leq\epsilon$, then the induced
Hamiltonians $H_{A}$ and $H_{B}$ satisfy $\|H_{A}-H_{B}\|\leq\epsilon$.
We then consider how many possible Hamiltonians we might be able to produce
that are all mutually at distance at least $2\epsilon$. Our goal here is an
upper bound. Optimistically, assume that if we choose two different sets of
Hamiltonian terms $\\{H_{1},\dots,H_{n}\\}$, we automatically achieve the
required distance. Then each choice (there are $\binom{N}{n}$ choices)
partitions the space $[-1,1]^{n}\subseteq\mathbb{R}^{n}$ into $\ell_{1}$-balls
of radius $\epsilon$. The volume of the space is $2^{n}$ and the volume of
each ball is $\frac{2^{n}}{n!}\epsilon^{n}$, so this gives us at most
$\binom{N}{n}\frac{n!}{\epsilon^{n}}$ (29)
such matrices.
If we want to capture all the gates in the set $\mathcal{G}$ with a
Hamiltonian, we thus need
$\binom{N}{n}\frac{n!}{\epsilon^{n}}\geq|\mathcal{G}|$ (30)
for some $\epsilon$. Rearranging gives
$\epsilon\leq\left(\binom{N}{n}\frac{n!}{|\mathcal{G}|}\right)^{1/n}$ (31)
However, if $\epsilon$ is small enough that
$e^{t/\hbar}(e^{t\epsilon/\hbar}-1)\leq 1$, then by Lemma C.1 the unitaries
induced by the Hamiltonians are too close to represent the mutually orthogonal
gates in $\mathcal{G}$, which are at distance $1$. To put this another way, we
need
$\displaystyle\epsilon>$
$\displaystyle\frac{\hbar\ln(e^{-t/\hbar}+1)}{t}\geq\frac{\hbar
e^{-t/\hbar}}{2t}.$ (32)
Combining these inequalities gives
$\frac{\hbar e^{-t/\hbar}}{2t}\leq\left(\binom{N}{n}\frac{n!}{|\mathcal{G}|}\right)^{1/n}$
(33)
With some work, Stirling’s inequality gives
$\displaystyle\frac{N!}{(N-n)!}\leq$ $\displaystyle N^{n}e^{n}$ (34)
Combining that with Equation 33 gives
$\frac{\hbar e^{-t/\hbar}}{2t}<Ne|\mathcal{G}|^{-1/n}$ (35)
or
$\ln|\mathcal{G}|<n\ln N+\frac{nt}{\hbar}+n\ln(2te)-n\ln(\hbar)$ (36)
Here we bound $N$ in terms of the width $W$. If there are more than $4^{k}$
possible $k$-local Hamiltonian terms on the same $k$ qubits, then any such term
can be written as a linear combination of at most $4^{k}$ linearly independent
terms, so we can assume the “extra” terms are not used. Adding up over all
arities,
$N\leq
4\mathsf{W}+4^{2}\binom{\mathsf{W}}{2}+\dots+4^{k}\binom{\mathsf{W}}{k}\in
O(\mathsf{W}^{k}).$ (37)
Thus, $n\ln N=O(nk\log W)$, so the full bound is $O(nt+n\log W)$. ∎
Our QRACM result follows:
###### Corollary C.3.
A family of Hamiltonians with $n$ terms of energy $E$, applied for time $t$ to
create QRACM access to tables of size $N$, must satisfy
$ntE+n\log W\in\Omega(N).$ (38)
###### Proof.
We use the set of possible QRACM access gates as $\mathcal{G}$. There are
$2^{N}$ possible QRACM access gates, and they are mutually orthogonal, so the
result follows immediately. ∎
## Appendix D Proof of No Distillation
There are two components of the proof: First, we argue that with only $d$
inputs in the QRACM, we can always find some indices which are
“underrepresented”, i.e., very little of the amplitude of the input state is
concentrated on those indices. This is more general than necessary: we only
need $\ell=1$ to prove our main theorem.
###### Lemma D.1.
Let $\rho_{1},\dots,\rho_{d}$ be any collection of states, each in
$B(\mathcal{H}_{i}\otimes\mathcal{H}_{QRACM})$, i.e., part of the state is in
the input space for a QRACM gate. Then there exists some set of $\ell$ indices
such that for any two tables $T$ and $T^{\prime}$ differing only in those
$\ell$ indices, if we define
$\delta_{i}:=\left\|\mathcal{U}_{QRACM}(T)(\rho_{i})-\mathcal{U}_{QRACM}(T^{\prime})(\rho_{i})\right\|_{1}$
(39)
(with an implicit identity channel on $\mathcal{H}_{i}$) then
$\sum_{i=1}^{d}\delta_{i}\leq\frac{2\ell\sqrt{d}}{N}.$ (40)
###### Proof.
Because we care about the application of a perfect QRACM gate, we will purify
each of these states, so we consider some purification of $\rho_{i}$:
$\left|\psi_{i}\right\rangle=\sum_{j=0}^{N-1}\sum_{b\in\\{0,1\\}}\alpha_{ijb}\left|\psi_{ijb}\right\rangle\left|j\right\rangle\left|b\right\rangle$
(41)
where we have expressed the input space to the QRACM gate in the computational
basis. Here $\left|\psi_{ijb}\right\rangle$ is in
$\mathcal{H}_{i}\otimes\mathcal{H}_{p}$, where $\mathcal{H}_{p}$ is the space
necessary to purify $\rho_{i}$. We see that
$U_{QRACM}(T)=\sum_{j,b}|j\rangle\langle j|\otimes|b\oplus T[j]\rangle\langle b|$, so that
$\displaystyle U_{QRACM}(T^{\prime})^{\dagger}U_{QRACM}(T)=\sum_{j:T^{\prime}[j]=T[j]}|j\rangle\langle j|\otimes I_{2}+\sum_{j:T^{\prime}[j]\neq T[j]}|j\rangle\langle j|\otimes X$ (42)
where $X$ is the Pauli $X$. This implies that
$\displaystyle\left\langle\psi_{i}\right|$ $\displaystyle
U_{QRACM}(T^{\prime})^{\dagger}U_{QRACM}(T)\left|\psi_{i}\right\rangle$ (43)
$\displaystyle=$
$\displaystyle\sum_{j:T[j]=T^{\prime}[j],b\in\\{0,1\\}}|\alpha_{ijb}|^{2}+$
(44) $\displaystyle\sum_{j:T[j]\neq
T^{\prime}[j]}\overline{\alpha_{ij0}}\alpha_{ij1}\left\langle\left.\psi_{ij0}\right|\psi_{ij1}\right\rangle+\overline{\alpha_{ij1}}\alpha_{ij0}\left\langle\left.\psi_{ij1}\right|\psi_{ij0}\right\rangle$
(45) $\displaystyle\geq$ $\displaystyle 1-\left(\sum_{j:T[j]\neq
T^{\prime}[j]}|\alpha_{ij0}|^{2}+|\alpha_{ij1}|^{2}+\overline{\alpha_{ij0}}\alpha_{ij1}+\overline{\alpha_{ij1}}\alpha_{ij0}\right)$
(46) $\displaystyle\geq$ $\displaystyle 1-2\sum_{j:T[j]\neq
T^{\prime}[j]}|\alpha_{ij0}|^{2}+|\alpha_{ij1}|^{2}$ (47)
Since $U_{QRACM}(T^{\prime})$ commutes with tracing out the purifying space,
and partial trace can only decrease trace distance, we have that the trace
distance between the states after applying QRACM to the two different tables
is at most
$\delta_{i}\leq\sqrt{2\sum_{j:T[j]\neq
T^{\prime}[j]}|\alpha_{ij0}|^{2}+|\alpha_{ij1}|^{2}}$ (48)
We then let
$m_{j}:=\sum_{i=1}^{d}|\alpha_{ij0}|^{2}+|\alpha_{ij1}|^{2}$ (49)
We know that $\sum_{j=0}^{N-1}m_{j}=1$. Thus, for any $\ell$, there exists a set
$\mathcal{J}$ of $\ell$ indices such that
$\sum_{j\in\mathcal{J}}m_{j}\leq\frac{\ell}{N}.$ (50)
If we set $T$ and $T^{\prime}$ to differ only on $\mathcal{J}$, we see that
$\sum_{i=1}^{d}\delta_{i}^{2}\leq\frac{2\ell}{N}$, from which we see that
$\sum_{i=1}^{d}\delta_{i}\leq\frac{2\ell\sqrt{d}}{N}.$ (51)
∎
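The only constructive step in the argument is the pigeonhole choice of $\mathcal{J}$: since the masses $m_{j}$ over the $N$ indices have a fixed total, the $\ell$ indices with the smallest mass carry at most an $\ell/N$ fraction of it. A minimal illustration (random masses normalized to sum to $1$; values are arbitrary):

```python
# Pigeonhole selection of the ell least-represented indices (illustrative).
import numpy as np

rng = np.random.default_rng(1)
N, ell = 64, 3
m = rng.random(N)
m /= m.sum()                     # masses m_j with sum_j m_j = 1

J = np.argsort(m)[:ell]          # the ell indices with the smallest mass
print(m[J].sum(), "<=", ell / N, m[J].sum() <= ell / N)
```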
We now argue that since so little of the input state is concentrated
on certain indices, we can replace any table $T$ with a different table that
differs on those indices, and this will be indistinguishable to the QRACM
distillation-and-teleportation process. Since the process needs to handle all
logical inputs, it must fail on that index.
###### Theorem D.2.
Suppose there is a QRACM state distillation-and-teleportation process,
accessing tables of size $N=2^{n}$ and making at most $d$ calls to the
physical QRACM. Then the minimal fidelity for this process is at most
$\frac{3}{4}+2\frac{\sqrt{d}}{N}.$ (52)
###### Proof.
Let $T$ be any table, and label the states input into the QRACM device during
the distillation process as $\rho_{1}(T),\dots,\rho_{d}(T)$. Let $\mathcal{J}$
be the set of $\ell$ indices implied by Lemma D.1 for these states, and let
$T^{\prime}$ be a table that differs from $T$ in those $\ell$ indices. We will
show by induction that
$\|\rho_{i}(T^{\prime})-\rho_{i}(T)\|_{1}\leq\sum_{j=1}^{i}\delta_{j}$ (53)
with $\delta_{i}$ as in Lemma D.1. This holds for $i=0$ since both take
$|0\rangle\langle 0|$ as the
starting state.
For induction, recall from the definition of the distillation process that
$\rho_{i+1}(T)=\Phi_{i+1}\circ(I\otimes\mathcal{Q}(T))\rho_{i}(T)$ (54)
(similarly for $T^{\prime}$). Since the channel $\Phi_{i+1}$ can only reduce
the trace distance between them, by the data processing inequality we obtain:
$\displaystyle\|\rho_{i+1}(T^{\prime})-$ $\displaystyle\rho_{i+1}(T)\|_{1}$
(55) $\displaystyle\leq$
$\displaystyle\|(I\otimes\mathcal{Q}(T^{\prime}))\rho_{i}(T^{\prime})-(I\otimes\mathcal{Q}(T))\rho_{i}(T)\|_{1}$
(56) $\displaystyle\leq$
$\displaystyle\|((I\otimes\mathcal{Q}(T))-(I\otimes\mathcal{Q}(T^{\prime})))\rho_{i}(T)\|_{1}$
(57)
$\displaystyle+\|(I\otimes\mathcal{Q}(T^{\prime}))(\rho_{i}(T^{\prime})-\rho_{i}(T))\|_{1}$
(58) $\displaystyle\leq$ $\displaystyle\delta_{i+1}+\sum_{j=1}^{i}\delta_{j}$
(59)
using Lemma D.1.
We finally note that $\rho_{d}(T)=\rho_{distill}(T)$, the state input into the
final teleportation process. This means the distance between
$\rho_{distill}(T)$ and $\rho_{distill}(T^{\prime})$ is at most
$\frac{2\ell\sqrt{d}}{N}$ by Lemma D.1. Again by the data processing
inequality, the final teleportation process cannot increase the distance.
Denoting the full process as $\tilde{\mathcal{U}}_{QRACM}(T)$, we see that for
any logical input $\rho_{L}$, we obtain
$\|(\tilde{\mathcal{U}}_{QRACM}(T)-\tilde{\mathcal{U}}_{QRACM}(T^{\prime}))\rho_{L}\|_{1}\leq\frac{2\ell\sqrt{d}}{N}$
(60)
However, let $\rho_{L}=|j\rangle\langle j|\otimes|0\rangle\langle 0|$ for some $j$ with $T[j]\neq T^{\prime}[j]$. Then we have for
the _ideal_ QRACM access,
$\|(\mathcal{U}_{QRACM}(T)-\mathcal{U}_{QRACM}(T^{\prime}))(\rho_{L})\|_{1}=1$
(61)
But by the triangle inequality,
$\displaystyle 1=$
$\displaystyle\|(\mathcal{U}_{QRACM}(T)-\mathcal{U}_{QRACM}(T^{\prime}))(\rho_{L})\|_{1}$
(62) $\displaystyle\leq$
$\displaystyle\|(\mathcal{U}_{QRACM}(T)-\tilde{\mathcal{U}}_{QRACM}(T))\rho_{L}\|_{1}$
(63)
$\displaystyle+\|(\mathcal{U}_{QRACM}(T^{\prime})-\tilde{\mathcal{U}}_{QRACM}(T^{\prime}))\rho_{L}\|_{1}$
(64)
$\displaystyle+\|(\tilde{\mathcal{U}}_{QRACM}(T)-\tilde{\mathcal{U}}_{QRACM}(T^{\prime}))\rho_{L}\|_{1}$
(65) $\displaystyle 1-\frac{2\ell\sqrt{d}}{N}\leq$
$\displaystyle\|(\mathcal{U}_{QRACM}(T)-\tilde{\mathcal{U}}_{QRACM}(T))\rho_{L}\|_{1}$
(66)
$\displaystyle+\|(\mathcal{U}_{QRACM}(T^{\prime})-\tilde{\mathcal{U}}_{QRACM}(T^{\prime}))\rho_{L}\|_{1}$
(67)
meaning (WLOG) that the distance between the realized QRACM channel and the
ideal QRACM channel, for state $\rho_{L}$ on access to table $T$, is at least
$\frac{1}{2}-\frac{\ell\sqrt{d}}{N}$. Using known relations between trace
distance and fidelity and setting $\ell=1$ gives the result. ∎
# Nonlinear System Identification of Swarm of UAVs Using Deep Learning Methods
Saman Yazdannik MSc student
Department of Aerospace Engineering
K. N Toosi University of Technology
Email<EMAIL_ADDRESS>Morteza Tayefi Assistant Professor
Department of Aerospace Engineering
K. N Toosi University of Technology
Email<EMAIL_ADDRESS>Mojtaba Farrokh Associate Professor
Department of Aerospace Engineering
K. N Toosi University of Technology
Email<EMAIL_ADDRESS>
###### Abstract
This study designs and evaluates multiple nonlinear system identification
techniques for modeling the UAV swarm system in planar space. Learning methods
such as RNNs, CNNs, and Neural ODEs are explored and compared. The objective is
to forecast future swarm trajectories by accurately approximating the
nonlinear dynamics of the swarm model. The modeling process is performed using
both transient and steady-state data from swarm simulations. Results show that
the combination of Neural ODE with a well-trained model using transient data
is robust for varying initial conditions and outperforms other learning
methods in accurately predicting swarm stability.
## 1 Introduction
The investigation and modeling of the interactions and dynamics of animal and
robot swarms is a longstanding area of interest in academia. This area falls
under the discipline of nonlinear System Identification (SID). Swarm dynamics
are often nonlinear and governed by Ordinary Differential Equations (ODEs).
Several data-driven SID methods exist, some of which will be discussed in the
related work section. Multiple swarm models exist and there is no unified
model that governs all swarm systems. This study focuses on a specific type of
swarm model from a particular class, with the general form described as follows:
$\displaystyle\dot{r_{i}}=v_{i}(t)$ (1)
$\displaystyle\dot{v_{i}}=f(v_{i}(t))-\sum_{j=1,i\neq
j}^{N}g(r_{i}(t),r_{j}(t))+\eta_{i}(t)$ (2)
The swarm model under examination is governed by a second-order differential
equation, where swarm behavior arises from the acceleration ($\dot{v}_{i}$) as outlined
in Equation 2. It consists of three components: the intrinsic dynamics of an
agent, which depends on its velocity; the interaction dynamics, a sum of
functions of the agent’s position and the positions of the other agents; and model
noise, typically assumed to be Gaussian white noise.
Models of this form can exhibit various steady-state behaviors including
flocking, milling, and aggregation. Given time series position and velocity
data obtained from observing a swarm at a fixed sampling rate and an unknown
governing equation, the goal is to find an approximate function that
accurately predicts the trajectories for any arbitrary initial conditions.
SID methods can be grouped into two categories. Given the dynamical process
$\dot{x}=f(x(t))$ and the initial condition $x_{0}$, the first class of
methods aims to directly predict future trajectory $x$, or to approximate
$x_{0}+\int_{0}^{T}f(x(t))dt$, giving $x(T)$ for any T based on the initial
state at $t=0$. The second class of SID methods approximates $f(x(t))$ and
then uses an ODE solver to integrate numerically and derive future $x(t)$
trajectories based on the initial state at $t=0$. RNNs, CNNs, and MLPs belong to
the first class, while Neural ODE belongs to the second class, as evaluated in
this study.
Due to inherent model noise, perfect prediction of all agents’ future
trajectories in a swarm is not feasible. The objective of this study is to
identify models that accurately capture overall swarm behavior. A model is
considered successful if it demonstrates correct convergence to the actual
swarm’s steady state and is robust to varying initial conditions. Experiments
and metrics are presented to evaluate both criteria.
This study utilizes time-series data from simulations of a swarm model using a
basic ODE solver based on Euler’s method. The swarm’s trajectory is divided
into two phases: transient and steady state. With randomly initialized
positions and velocities, a swarm initially enters the transient state before
settling into the steady state. Results of training on data from both phases
are presented. Time-series data prompted the consideration of MLPs, RNNs, and
CNNs as initial models for this study.
## 2 Modeling Swarm Dynamics in Nonlinear Systems
We concentrate on a single swarm model from the previously mentioned swarm
category.
$\displaystyle\dot{r_{i}}=v_{i}$ (3)
$\displaystyle\dot{v_{i}}=(1-|v_{i}|^{2})v_{i}-\frac{a}{N}\sum_{j=1,i\neq
j}^{N}g(r_{i}(t),r_{j}(t-\tau))+\eta_{i}(t)$ (4)
In the present study, the intrinsic dynamics dictate that each agent
experiences an acceleration or deceleration, leading to a constant velocity of
1 as time approaches infinity. The interaction dynamics cause each agent to
accelerate towards the mean position of all other agents at a rate
proportional to the coupling strength (a). It is assumed that Gaussian white
noise is equivalent for all agents.
In this swarm model, three forms of stability are observed: ”milling”,
”rotation”, and ”flocking”. The first stability, ”milling”, is achieved under
conditions of zero time lag, low noise, and an optimal coupling strength. The
agents ultimately converge to a rough circular pattern around the mean field
with a constant radius, resulting in a ”milling” effect. This stability can
manifest as all agents moving in a single direction or a mixture of clockwise
and counter-clockwise movement. The second stability, ”rotation”, occurs when
a significant time delay $(\tau)$ is introduced. The agents will eventually
aggregate and rotate in a circle with a fixed radius. It is worth noting that
simulations using Euler’s method with large time steps are equivalent to
introducing time delay to the swarm system. Our experiments show that, under
specific model parameters, a swarm exhibiting ”milling” stability can
transition to ”rotating” stability as the simulation time step is increased.
The third stability, ”flocking”, emerges with the introduction of significant
noise. The agents will all flock in a single direction. This study focuses on
the learning of systems that exhibit the first two forms of stability.
The examination of model stability is a topic within the field of dynamical
systems. This paper does not delve into the mathematical proof of model
stability. Instead, we present empirical results obtained from our
simulations. The second order ODE described in equation 4 can be reformulated
as a first order ODE. As an example, consider a swarm consisting of 5 agents.
The dynamics of each agent are governed by the second order ODE in equation 4.
To simplify this ODE to a first order system, we introduce two variables for
the $i^{th}$ agent: $x_{1i}=r_{i}$ and $x_{2i}=v_{i}$. This allows us to
express the dynamics of the entire swarm as follows:
$\displaystyle\dot{x_{11}}=x_{21},$ (5)
$\displaystyle\dot{x_{21}}=(1-|x_{21}|^{2})x_{21}-\frac{a}{5}\sum_{j=1}^{5}(x_{11}(t)-x_{1j}(t))+\eta_{1}(t),$
(6) $\displaystyle\dot{x_{12}}=x_{22},$ (7)
$\displaystyle\dot{x_{22}}=(1-|x_{22}|^{2})x_{22}-\frac{a}{5}\sum_{j=1}^{5}(x_{12}(t)-x_{1j}(t))+\eta_{2}(t),$
(8) $\displaystyle\dot{x_{13}}=x_{23},$ (9)
$\displaystyle\dot{x_{23}}=(1-|x_{23}|^{2})x_{23}-\frac{a}{5}\sum_{j=1}^{5}(x_{13}(t)-x_{1j}(t))+\eta_{3}(t),$
(10) $\displaystyle\dot{x_{14}}=x_{24},$ (11)
$\displaystyle\dot{x_{24}}=(1-|x_{24}|^{2})x_{24}-\frac{a}{5}\sum_{j=1}^{5}(x_{14}(t)-x_{1j}(t))+\eta_{4}(t),$
(12) $\displaystyle\dot{x_{15}}=x_{25},$ (13)
$\displaystyle\dot{x_{25}}=(1-|x_{25}|^{2})x_{25}-\frac{a}{5}\sum_{j=1}^{5}(x_{15}(t)-x_{1j}(t))+\eta_{5}(t),$
(14)
or, equivalently, in vector-matrix form:
$\begin{bmatrix}\dot{x_{11}}\\\ \dot{x_{12}}\\\ \dot{x_{13}}\\\ \dot{x_{14}}\\\ \dot{x_{15}}\\\ \dot{x_{21}}\\\ \dot{x_{22}}\\\ \dot{x_{23}}\\\ \dot{x_{24}}\\\ \dot{x_{25}}\end{bmatrix}=\begin{bmatrix}zeros(5,5)&I(5,5)&zeros(5,5)\\\ \frac{a}{5}\cdot ones(5,5)-a\cdot I(5,5)&I(5,5)&-I(5,5)\end{bmatrix}\begin{bmatrix}{x_{11}}\\\ {x_{12}}\\\ {x_{13}}\\\ {x_{14}}\\\ {x_{15}}\\\ {x_{21}}\\\ {x_{22}}\\\ {x_{23}}\\\ {x_{24}}\\\ {x_{25}}\\\ |x_{21}|^{2}{x_{21}}\\\ |x_{22}|^{2}{x_{22}}\\\ |x_{23}|^{2}{x_{23}}\\\ |x_{24}|^{2}{x_{24}}\\\ |x_{25}|^{2}{x_{25}}\end{bmatrix}+\begin{bmatrix}0\\\ 0\\\ 0\\\ 0\\\ 0\\\ \eta_{1}\\\ \eta_{2}\\\ \eta_{3}\\\ \eta_{4}\\\ \eta_{5}\end{bmatrix}$ (15)
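A minimal NumPy sketch of a simulation of this model is given below, assuming zero time delay, $g(r_{i},r_{j})=r_{i}-r_{j}$ as in Equation 15, and explicit Euler integration; the particular parameter values (coupling strength, step size, noise level) are illustrative and not taken from the study.

```python
# Euler-method simulation of the swarm model of Eqs. (3)-(4) with tau = 0
# and linear attraction g(r_i, r_j) = r_i - r_j (illustrative parameters).
import numpy as np

def simulate_swarm(n_agents=32, n_steps=3000, dt=0.05, a=1.0, noise_std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.uniform(-1.0, 1.0, size=(n_agents, 2))      # planar positions
    v = rng.uniform(-1.0, 1.0, size=(n_agents, 2))      # planar velocities
    traj = np.empty((n_steps, n_agents, 4))
    for k in range(n_steps):
        speed_sq = np.sum(v ** 2, axis=1, keepdims=True)
        intrinsic = (1.0 - speed_sq) * v                 # self-propulsion toward unit speed
        attraction = -a * (r - r.mean(axis=0))           # pull toward the swarm's mean position
        eta = noise_std * rng.normal(size=v.shape)       # Gaussian white noise
        r = r + dt * v
        v = v + dt * (intrinsic + attraction) + np.sqrt(dt) * eta
        traj[k] = np.hstack([r, v])                      # store [x, y, vx, vy] per agent
    return traj

trajectory = simulate_swarm()                            # shape (3000, 32, 4)
```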
### 2.1 Literature Review on Collective Motion of a Swarm
The collective motion of a swarm can be understood as the interplay of three
key components: short-range repulsion to prevent collisions, local
interactions to align the velocity vectors of neighboring units, and a global
positional constraint to maintain the coherence of the swarm [1]. A universal
feature observed in collective swarm motion in biological systems such as
schools of fish, bird flocks, and herds of mammals is the tendency for the
velocity vectors of individual units to be parallel with one another [2]. A
comprehensive examination of various types of swarms is presented in [3]. The
field of System Identification (SID) has generated significant academic
interest, as it encompasses the study of systems to unveil and model the
interactions and dynamics in animal or robot swarms. A deeper understanding of
these dynamics can aid in the advancement of more advanced swarm systems.
Identification algorithms have been proposed that integrate structure
determination with parameter estimation using an orthogonal least squares
approach [4]. Determining the parameters through estimation of causal entropy
can provide insight into the impact of components in multivariate time series
data [5]. Another non-parametric method, entropic regression, has been
utilized to estimate the parameters of dynamic equations [6]. Recently,
various learning-based methods have also been developed for SID. Temporal
Convolutional Neural Networks (CNNs) utilize masked convolutions to implement
a fixed-range sequence model [7]. Additionally, multi-step neural network
approaches, such as Adams-Bashforth [8], Cluster-Networks [9], and Physics-
Informed Deep Learning models [10] and [11], have been proposed. Another study
evaluates the use of machine learning models to supplement knowledge-based
mathematical models in cases where analytical models may fail, for example,
due to unknown noise models or approximations [12].
A recent development in the field of nonlinear system identification has
introduced a new method for uncovering the structure of a nonlinear dynamic
system from data. This approach utilizes symbolic regression to determine the
system’s dynamics and strikes a balance between the model’s complexity (number
of terms) and its agreement with the data. However, this regression problem
can be computationally expensive and may not scale well to large-scale
systems, and may result in over-fitting. Other techniques for discovering
emergent behavior and governing equations from time-series data include
automated statistical inference of dynamics, equation-free modeling, and
empirical dynamic modeling. One method, Sparse Identification of Nonlinear
Dynamics (SINDy), produces a sparse, nonlinear regression that automatically
identifies the relevant terms in the system from a library of functions [13].
## 3 Methods and Approaches for System Identification
Simulations were performed to generate data, utilizing equation (15) in
Python. The resulting data consisted of the position and velocity of each
swarm particle for $N$ timesteps. A representative example of the generated
data, presented in CSV format, can be found in the appendix. The system’s
steady state is characterized by the time derivatives of its state variables
being equal to zero and remaining at zero. The eventual attainment of the steady
state may vary depending on the system’s initial conditions and the path or
duration required to reach it. To comprehend the system more thoroughly, the
methodologies applied should be categorized into four distinct training
approaches.
1. 1.
Training on Transient Dynamics with Identical Initial Conditions to Test Data
2. 2.
Training on Transient Dynamics with Dissimilar Initial Conditions to Test Data
3. 3.
Training on Steady State Dynamics with Identical Initial Conditions to Test
Data
4. 4.
Training on Steady State Dynamics with Dissimilar Initial Conditions to Test
Data
The baseline techniques were executed and evaluated as described above. The
models were trained through a folding approach, where a consecutive set of
observations were utilized to predict the subsequent step. For instance, the
first five steps were utilized to forecast the sixth step, the second to sixth
steps predicted the seventh step, and so forth. This resulted in a single
batch shape, assuming a swarm of 32 particles, as follows:
$\displaystyle 5\times 32\times 4$ (16) $\displaystyle 5\times 128$ (17)
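A sketch of this windowing step, assuming states stored as an array of shape (timesteps, particles, 4), is given below; names are illustrative.

```python
# Sliding-window ("folding") dataset construction matching Eqs. (16)-(17).
import numpy as np

def make_windows(states, window=5):
    # states: (n_steps, n_agents, 4); returns inputs (n_samples, window, n_agents*4)
    # and targets (n_samples, n_agents*4), where each window predicts the next step.
    flat = states.reshape(states.shape[0], -1)
    inputs = np.stack([flat[i:i + window] for i in range(len(flat) - window)])
    targets = flat[window:]
    return inputs, targets
```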
In order to assess and compare our models, we defined the Mean Field Error
(MFE) as a metric. The MFE quantifies the discrepancy between the mean
positions of the predicted swarm particles and the ground truth particles. It
is computed as the Euclidean distance between the mean positions of the swarm
particles, providing insight into the stability achieved by the model and its
ability to learn from the time-series data. The calculation of the MFE is
described as follows:
$\displaystyle
MFE=\sqrt{(x_{m,true}-x_{m,pred})^{2}+(y_{m,true}-y_{m,pred})^{2}}$ (18)
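A direct implementation of Equation 18, assuming position arrays of shape (timesteps, particles, 2):

```python
# Mean Field Error (MFE): distance between mean positions of predicted and true swarms.
import numpy as np

def mean_field_error(pos_true, pos_pred):
    mean_true = pos_true.mean(axis=1)                     # (n_steps, 2)
    mean_pred = pos_pred.mean(axis=1)
    return np.linalg.norm(mean_true - mean_pred, axis=1)  # one MFE value per time step
```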
## 4 Non-Deep-Learning Models
This section outlines one method: a regression-model-based forecasting approach
that predicts future swarm behavior directly rather than deriving the equation
form of the swarm’s dynamical system.
### 4.1 Least Square-Regression Model
The data utilized to construct models predicting the future behavior of swarms
is time-series data. For basic time-series forecasting, Linear Regression
Models are commonly used. Despite the swarm system being non-linear, linear
regression models can locally approximate the non-linear function and still be
employed for forecasting future samples, provided the local neighborhood of
the estimation is small. One of the initial models established is a Linear
Regression Model based on Least Squares. The Ordinary Least Squares (OLS)
method is employed to evaluate the various regression lines, where the
regression line that minimizes the sum of the squared differences between the
observed and predicted dependent variables is selected according to OLS.
Given a set of m samples, $x=[x_{1},x_{2},x_{3},...,x_{m}]$, the model will
learn a weight vector, w, to predict future samples,
$\tilde{x}=[x_{m+1},x_{m+2},x_{m+3},...,x_{m+n}]$, as $\tilde{x}=w^{T}x$,
where the predicted samples are n steps ahead of the current time. The
quantity m represents the number of in-samples, while n represents the number
of out-samples that the model predicts. The optimal feature set size (m) and
optimal prediction horizon (n) are generally determined through cross-
validation and serve as hyperparameters for the model.
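A minimal sketch of this forecaster and the sliding-window roll-out used at test time is given below; flattening the $m$ in-samples into a single feature vector is an assumption about how the regression was set up.

```python
# OLS one-step-ahead forecaster (m in-samples, 1 out-sample) with roll-out.
import numpy as np

def fit_ols(series, m=10):
    # series: (n_steps, state_dim); learn W mapping m stacked states to the next state.
    X = np.stack([series[i:i + m].ravel() for i in range(len(series) - m)])
    Y = series[m:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def roll_out(series_tail, W, n_pred, m=10):
    window = list(series_tail[-m:])                      # last m observed states
    preds = []
    for _ in range(n_pred):
        nxt = np.asarray(window[-m:]).ravel() @ W        # predict the next state
        preds.append(nxt)
        window.append(nxt)                               # fold the prediction back in
    return np.asarray(preds)
```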
1. 1.
Model for Steady State Dynamics: A swarm model was generated using the defined
model from Section 2, producing data over 3000 time steps. A test-train split
was created, using 2000 samples for training and 1000 samples for testing. The
data primarily represents the steady state behavior of the swarm system. A
regression model was built, taking 10 samples as input and predicting the 11th
sample. After training, a sliding window approach was used for testing the
model. The model was initially given the last 10 samples of the training data
to predict a new sample, then the window was shifted to include the predicted
sample to predict additional samples until 1000 samples were predicted.
2. 2.
A Regression Model for Transient Dynamics: The swarm model data consisting of
250 time steps was generated as per the procedure outlined in Section 2. A
test-train split was established, with 150 samples designated for training and
100 samples reserved for testing, to primarily capture the transient dynamics
of the swarm system. A regression model was built, which employed 10 samples
to predict the 11th sample. After training, a sliding window approach was
employed for evaluating the model’s performance. The last 10 samples of the
training data were initially fed to the model, which predicted the next
sample. This process was repeated by folding ahead and incorporating the
predicted sample until 100 samples were predicted.
Figure 1: MFE for Steady State Predictions
The study finds that a regression model with an optimal feature size and
prediction horizon of 10 and 1 respectively can accurately predict the steady
state evolution of a swarm system with minimal mean field error (MFE). As the
number of prediction samples increases, the MFE begins to rise. The model can
predict the steady state behavior of the swarm up to approximately 1000 steps
ahead with acceptable MFE. However, its performance on transient data is poor
due to its inability to capture the highly non-linear nature of the swarm
system. This linear regression model does not provide insight into the
underlying dynamics of the swarm system as it is linear in nature and the
swarm system is highly non-linear. The model’s success in predicting steady
state may be due to the linear stability of the multi-agent system in the
steady state.
Figure 2: MFE for Transient Predictions
Figure 3: Prediction for Steady State
Figure 4: Prediction for Transient
## 5 Deep Learning Models
The study evaluated three deep learning baselines, including a standard feed
forward network, recurrent neural networks (RNNs), and temporal convolutional
neural networks (CNNs). The RNNs and temporal CNNs demonstrated superior
performance in extrapolating steady swarm dynamics and accurately matching the
ground truth. However, when trained on transient data, these models failed to
extrapolate and reach stability, revealing their limitations in capturing the
inherent dynamics of the system. This highlights the strengths of these models
in inferring temporal dependencies but not in learning the underlying dynamics
of the system.
### 5.1 Multi-Layer Perceptron
The state dynamics of the system is characterized by a set of non-linear
equations. To capture this non-linearity, a baseline was implemented using a
fully connected neural network. This network, with appropriate activation
functions between layers, has the potential to capture the temporally
dependent dynamics of the swarm. The input to the network is a tensor
consisting of positional and velocity data of 32 particles over five time
steps, and the objective of the model is to predict the state and velocity of
the swarm for the next time step. The following table presents the final
hyperparameters that produced the most reliable outcomes:
Hidden Layer Size | 256
---|---
Optimizer | SGD
Learning Rate | 0.001
Training Window for Dataset | 5
Table 1: Hyperparameters for MLP
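A minimal PyTorch sketch consistent with Table 1 is shown below; the number of hidden layers and the activation function are assumptions, since only the hidden size, optimizer, learning rate, and window are specified.

```python
# MLP baseline sketch (hidden size 256, SGD, lr = 0.001, window of 5 steps).
import torch
import torch.nn as nn

window, n_agents, state_dim = 5, 32, 4
model = nn.Sequential(
    nn.Flatten(),                                        # (batch, 5, 128) -> (batch, 640)
    nn.Linear(window * n_agents * state_dim, 256),
    nn.ReLU(),
    nn.Linear(256, n_agents * state_dim),                # next state of all agents
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```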
### 5.2 Recurrent Neural Network
Recurrent Neural Networks (RNNs) are extensions of feedforward neural networks
with a built-in memory component. They are recurrent in nature as they perform
the same computation for every sequence of input data, with the output
dependent on the previous computation. The output is then copied and fed back
into the network, allowing it to consider both the current input and the
previous output in making a decision. This makes RNNs highly effective in
identifying patterns in time-series data or data where the samples at a
particular time are assumed to be dependent on the preceding sample.
Figure 5: RNN Unrolled Structure
A single-layer RNN was employed, followed by a linear activation function. The
loss function used was nn.MSELoss(). The hyperparameters optimized included
the hidden size, the number of RNN layers, the optimizer, and the learning
rate, as detailed below:
Hidden Layer Size | 256
---|---
Number of RNN layers | 1
Optimizer | SGD
Learning Rate | 0.005
Training Window for Dataset | 5
Table 2: Hyperparameters for RNN
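A sketch of this single-layer RNN with the hyperparameters of Table 2 follows; reading out only the final hidden state is an assumption consistent with the one-step-ahead prediction task described above.

```python
# Single-layer RNN baseline sketch (hidden size 256, SGD, lr = 0.005).
import torch
import torch.nn as nn

class SwarmRNN(nn.Module):
    def __init__(self, state_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.RNN(state_dim, hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, x):                 # x: (batch, window, state_dim)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # predict the next state from the last hidden state

model = SwarmRNN()
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3)
loss_fn = nn.MSELoss()
```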
### 5.3 Convolutional Neural Networks
While Recurrent Neural Networks (RNNs) and their derivatives such as Long
Short-Term Memories (LSTMs) and Gated Recurrent Units (GRUs) have
traditionally been the preferred models for time-series based data,
Convolutional Neural Networks (CNNs) have also been effectively applied to the
learning of time-series data as described in [14]. Given the presence of
multiple swarm particles with underlying interaction forces in the data,
incorporating a form of convolution may be beneficial in capturing both local
and temporal information. In this implementation, 1-dimensional convolutions
were used instead of 2-dimensional convolutions. The following presents the
final architecture of our implemented model:
Figure 6: Implemented CNN Architecture
Several architectural and filtering configurations were evaluated, with the
final configuration producing the most consistent results. The use of Max-
Pooling, Adaptive Average-Pooling, and Dropout resulted in suboptimal
performance, hence not included in the final architecture. The ReLU activation
function was applied to most layers. The chosen loss function was Mean Square
Error $(nn.MSE)$. Both Stochastic Gradient Descent $(SGD)$ and Adam
optimization algorithms were experimented with. Although Adam demonstrated
fast convergence at times, its performance was inconsistent across different
learning rates. On the other hand, SGD consistently converged quickly at a
learning rate of $5\times 10^{-3}$ and was chosen as the final optimizer and
learning rate.
Loss Function | Mean Square Error
---|---
Optimizer | SGD
Learning Rate | 0.0005
Training Window for Dataset | 5
Table 3: Hyperparameters for CNN
First Layer | Second Layer | Third Layer
---|---|---
Filter 1: 5 | Filter 2: 5 | Filter 3: 3
Padding 1: 2 | Padding 2: 2 | Padding 3: 1
Stride 1: 2 | Stride 2: 2 | Stride 3: 1
Table 4: Parameters of Convolution Layers
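A sketch of a 1-D CNN consistent with Tables 3 and 4 (kernel sizes 5/5/3, padding 2/2/1, stride 2/2/1, ReLU activations) is given below; the channel widths, the use of the five time steps as input channels, and the final linear head are assumptions not specified in the text.

```python
# Temporal 1-D CNN sketch matching the kernel/padding/stride values of Table 4.
import torch
import torch.nn as nn

window, state_dim = 5, 128                # 5 time steps, 32 agents x 4 values
model = nn.Sequential(
    nn.Conv1d(window, 32, kernel_size=5, padding=2, stride=2),
    nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2, stride=2),
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1, stride=1),
    nn.ReLU(),
    nn.Flatten(),                         # feature axis shrinks 128 -> 64 -> 32
    nn.Linear(64 * (state_dim // 4), state_dim),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=5e-4)
```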
The performance of the models was inadequate when extrapolating and predicting
transient swarm dynamics. These findings suggest the need for further research
in developing robust deep learning solutions capable of generalizing the
system’s dynamics and accurately forecasting the future trajectories of swarm
particles during transient states.
### 5.4 Swarm Dynamics Predictive Performance through Neural Network Models
The Multi-Layer Perceptron (MLP) model was initially evaluated as a baseline.
The MLP was capable of capturing the non-linear properties of the model.
However, its performance was highly contingent on the initial conditions of
the particles, resulting in a marked decrease in its ability to predict the
true test trajectories of the swarm when these conditions deviated from the
training data. The comparison between the performance of the model on steady-
state data and data consisting of both transient and steady-state dynamics
revealed a substantial reduction in performance during the latter case,
particularly during the transient phase, as indicated by the Mean Field
Errors. These findings suggest that the MLP is not capable of uncovering the
underlying behavior of the swarm and instead focuses solely on minimizing the
Mean Field Error. As a result, it is prone to failure when initial conditions
differ or during transient scenarios. In addition, the training time required
by the MLP was significantly longer (500 epochs) compared to the RNN and CNN
models (50 epochs), further hindering its utility. Given these limitations,
the MLP was not tested as extensively as the CNN and RNN models.
Figure 7: Training loss for model trained on Steady State for MLP
The RNN and CNN models exhibited efficient performance during the training
process, exhibiting rapid convergence across the four methodologies outlined
in the Methods section. Experiments with various epoch lengths revealed
consistent results for lengths exceeding 50 epochs. The loss plots for the two
models in one case are depicted in Figures 8 and 9.
Figure 8: Training loss for model trained on Steady State for RNN
Figure 9: Training loss for model trained on Steady State for CNN
Despite their rapid convergence to a minimal cost (within 50 epochs), the RNN
and CNN models only demonstrated exceptional performance under one
methodology, namely when they were trained on steady-state data with initial
conditions identical to the test data. The results were inconsistent for the
other training methodologies, as illustrated by the Mean Field Error plots. As
demonstrated in Figures 10 and 11, no clear better-performing model emerges,
owing to the presence of a transient state in the evaluation data. As
previously noted, the model trained on steady-state data was expected to
perform well, and it did once the transient state was removed, causing a
significant change in the Mean Field Error. This suggests that both the RNN
and CNN models can only learn the stable part of the swarm system. The plots
for all other truncated tests are included in the appendix. Although RNN and
CNN seem to have comparable performance, a closer examination of the predicted
data from each model via animation reveals that while the Mean Field Error is
similar for both models, the CNN model exhibits a noticeable phase shift.
Figure 10: MFE for RNN trained on Steady State
Figure 11: MFE for CNN trained on Steady State
The analysis of the results presented in this section indicates that the RNN
and MLP models have an advantage in learning inter-particle interactions and
individual particle behaviors compared to only learning the steady state of
the entire system, which is the case with the CNN model. The three deep-
learning-based models (MLP, RNN, and CNN) are capable of learning steady-state
models from data with the same initial conditions, but they are unable to
generalize well when dealing with the presence of transient states or states
with different initial conditions. The results suggest that the deep-learning-
based models, similar to the non-baseline models, are not able to identify the
hidden state dynamics of the swarm system and struggle with capturing the
highly non-linear properties of the transient system.
## 6 Advanced Deep Learning Model - Neural ODE
The proposed model boasts two distinctive features that enhance its efficacy.
Firstly, it adopts the concept of Neural ODE, a non-architectural but
innovative method for optimizing through the utilization of an ODE solver.
Backpropagation through an ODE solver is feasible, but it necessitates the
storage of variables at each time step, leading to substantial memory
overhead. This model circumvents this issue by defining adjoint states, the
time dynamics of which can be obtained through numerical integration with the
ODE solver. These adjoint states then facilitate the calculation of
derivatives of each parameter with respect to the loss function, thereby
enabling the ODE solver to be utilized as a ”black box”, regardless of the
specific ODE solver employed.
Our objective is to approximate the function $\dot{X}=f(X)$ as a neural
network $\hat{f}$ with parameters $\theta$ , using the form presented in
equation 15. The optimization target is to minimize the scalar-valued loss
function L, given by $L(ODESolve(X(t_{0}),\hat{f},t_{0},t_{1},\theta))$. The
definition of adjoint states and the calculation of their time dynamics are
detailed in [15]. In this model, the selection of the neural network
architecture utilized for approximating the function $\hat{f}$ is the second
aspect considered. Given that the dynamics of $\dot{X}$ is the direct target
for approximation, physical characteristics of the swarm system can be
leveraged to inform the choice of the neural network. The homogeneity of the
observed swarm implies that the dynamics of each agent must be uniform and
that the interactions between agents must also exhibit consistency. In light
of these considerations, the architecture was designed as depicted in Figure
12. The utilization of a physics-informed network significantly shrinks the
parameter space, thereby expediting the training process.
| Intrinsic | Interaction | Aggregation
---|---|---|---
Number of Layers | 3 | 3 | 3
Nonlinearity Input | Cubic | None | None
Table 5: Physics-informed model architecture
ODESolver Type | Euler
---|---
ODESolver Step | 0.05
Optimizer | Adam(lr=0.01)
Loss Function | MSE
Table 6: Physics-informed model hyperparameters
Tables 5 and 6 present the architecture of the final model and the
hyperparameters employed for the training process, respectively.
Figure 12: Physics-informed model for a 3-agent swarm team
Figure 13: Learning curve of NODE for a 3-agent swarm team
Figure 14: Predicted trajectory of agents compared to ground truth
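A minimal training sketch using the adjoint method is given below, assuming the torchdiffeq package; the small fully connected dynamics network stands in for the physics-informed architecture of Figure 12 and is an illustrative simplification.

```python
# Neural ODE training step via the adjoint method (torchdiffeq assumed).
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint

class SwarmDynamics(nn.Module):
    def __init__(self, state_dim=12):      # e.g. 3 agents x (2 positions + 2 velocities)
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))

    def forward(self, t, x):                # approximates f in x_dot = f(x)
        return self.net(x)

func = SwarmDynamics()
optimizer = torch.optim.Adam(func.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def train_step(x0, x_true, t_grid):
    # x0: (batch, state_dim); x_true: (len(t_grid), batch, state_dim)
    optimizer.zero_grad()
    x_pred = odeint(func, x0, t_grid, method="euler", options={"step_size": 0.05})
    loss = loss_fn(x_pred, x_true)
    loss.backward()                         # gradients computed through the adjoint ODE
    optimizer.step()
    return loss.item()
```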
## 7 Conclusions
In this study, we investigated the effectiveness of various neural network
models in capturing the dynamics of nonlinear Ordinary Differential Equations
(ODEs) systems. Our aim was to evaluate the performance of existing deep
learning and non-deep learning models in learning the stability of swarm
systems through time-series data. The results showed that traditional deep
learning models and non-deep learning baselines were not robust enough to
handle the nonlinearity and transient states of the swarm systems. The models
performed well when evaluating systems with the same initial conditions but
struggled with systems with different initial conditions and transient states.
This led us to explore a new deep learning method, Neural ODE, which provides
a framework for training ODEs within larger models. Although we found a working
model, it remains unclear how the network, in conjunction with Neural ODE,
captures the true dynamical process of the system. This is because the
presence of bifurcations and catastrophes in dynamical systems can cause
instability and hysteresis, hindering the learning of a good model. Further
research is needed to fully understand the limitations and capabilities of
Neural ODEs in capturing the nonlinear dynamics of swarm systems.
## References
1. 1.
C. W. Reynolds, ”Flocks, herds and schools: A distributed behavioral model,”
in _Proceedings of the 14th annual conference on Computer graphics and
interactive techniques_ , 1987, pp. 25-34.
2. 2.
Csaba Virágh, Gábor Vásárhelyi, Norbert Tarcai, Tamás Szőrényi, Gergő
Somorjai, Tamás Nepusz, and Tamás Vicsek, ”Flocking algorithm for autonomous
flying robots,” _Bioinspiration & biomimetics_, vol. 9, no. 2, p. 025012,
2014.
3. 3.
Carl Kolon and Ira B Schwartz, ”The dynamics of interacting swarms,” _arXiv
preprint arXiv:1803.08817_ , 2018.
4. 4.
Sheng Chen, Stephen A Billings, and Wan Luo, ”Orthogonal least squares methods
and their application to non-linear system identification,” _International
Journal of control_ , vol. 50, no. 5, pp. 1873-1896, 1989.
5. 5.
Pileun Kim, Jonathan Rogers, Jie Sun, and Erik Bollt, ”Causation entropy
identifies sparsity structure for parameter estimation of dynamic systems,”
_Journal of Computational and Nonlinear Dynamics_ , vol. 12, no. 1, 2017.
6. 6.
Abd AlRahman R AlMomani, Jie Sun, and Erik Bollt, ”How entropic regression
beats the outliers problem in nonlinear system identification,” _Chaos: An
Interdisciplinary Journal of Nonlinear Science_ , vol. 30, no. 1, p. 013107,
2020.
7. 7.
John M Maroli, Ümit Özgüner, and Keith Redmill, ”Nonlinear system
identification using temporal convolutional networks: a silverbox study,”
_IFAC-PapersOnLine_ , vol. 52, no. 29, pp. 186-191, 2019.
8. 8.
Maziar Raissi, Paris Perdikaris, and George Em Karniadakis, ”Multistep neural
networks for data-driven discovery of nonlinear dynamical systems,” _arXiv
preprint arXiv:1801.01236_ , 2018.
9. 9.
Rong-Jong Wai and Alex S Prasetia, ”Adaptive neural network control and
optimal path planning of UAV surveillance system with energy consumption
prediction,” _IEEE Access_ , vol. 7, pp. 126137-126153, 2019.
10. 10.
Maziar Raissi, Paris Perdikaris, and George Em Karniadakis, ”Physics informed
deep learning (part i): Data-driven solutions of nonlinear partial
differential equations,” _arXiv preprint arXiv:1711.10561_ , 2017.
11. 11.
Tayyab Manzoor, Hailong Pei, Zhongqi Sun, and Zihuan Cheng, ”Model Predictive
Control Technique for Ducted Fan Aerial Vehicles Using Physics-Informed
Machine Learning,” _Drones_ , vol. 7, no. 1, p. 4, 2022.
12. 12.
Jaideep Pathak, Alexander Wikner, Rebeckah Fussell, Sarthak Chandra, Brian R
Hunt, Michelle Girvan, and Edward Ott, ”Hybrid forecasting of chaotic
processes: Using machine learning in conjunction with a knowledge-based
model,” _Chaos: An Interdisciplinary Journal of Nonlinear Science_ , vol. 28,
no. 4, p. 041101, 2018.
13. 13.
Kathleen Champion, Bethany Lusch, J Nathan Kutz, and Steven L Brunton, ”Data-
driven discovery of coordinates and governing equations,” _Proceedings of the
National Academy of Sciences_ , vol. 116, no. 45, pp. 22445-22451, 2019.
14. 14.
Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar,
and Pierre-Alain Muller, ”Deep learning for time series classification: a
review,” _Data mining and knowledge discovery_ , vol. 33, no. 4, pp. 917-963,
2019.
15. 15.
Han Zhang, Xi Gao, Jacob Unterman, and Tom Arodz, ”Approximation capabilities
of neural ordinary differential equations,” _arXiv preprint arXiv:1907.12998_
, vol. 2, no. 4, pp. 3-1, 2019.
Finite-Sample Analysis of Off-Policy Natural Actor-Critic with Linear Function Approximation
Zaiwei Chen<EMAIL_ADDRESS>, Sajad Khodadadian<EMAIL_ADDRESS>, Siva Theja<EMAIL_ADDRESS>
Georgia Institute of Technology
Equal contribution between Zaiwei Chen and Sajad Khodadadian
In this paper, we develop a novel variant of off-policy natural actor-critic algorithm with linear function approximation and we establish a sample complexity of $\mathcal{O}(\epsilon^{-3})$, outperforming all the previously known convergence bounds of such algorithms. In order to overcome the divergence due to deadly triad in off-policy policy evaluation under function approximation, we develop a critic that employs $n$-step TD-learning algorithm with a properly chosen $n$. We present finite-sample convergence bounds on this critic under both constant and diminishing step sizes, which are of independent interest. Furthermore, we develop a variant of natural policy gradient under function approximation, with an improved convergence rate of $\mathcal{O}(1/T)$ after $T$ iterations. Combining the finite sample error bounds of actor and the critic, we obtain the $\mathcal{O}(\epsilon^{-3})$ sample complexity. We derive our sample complexity bounds solely based on the assumption that the behavior policy sufficiently explores all the states and actions, which is a much lighter assumption compared to the related literature.
§ INTRODUCTION
Reinforcement learning (RL) is a paradigm in which an agent aims at maximizing long term rewards via interacting with the environment. For solving the RL problem, there are value space methods such as $Q$-learning, and policy space methods such as actor-critic (AC) and its variants (e.g. natural actor critic (NAC)). In the AC framework, the actor aims at performing the policy update while the critic aims at estimating the value function of the current policy at hand. For AC type algorithms to perform well, the policy used to collect samples (called the behavior policy) must sufficiently explore the state-action space [65]. If the behavior policy coincides with the current policy iterate of AC, it is called on-policy sampling, otherwise it is called off-policy sampling.
In on-policy AC, the agent is restricted to use the current policy iterate to collect samples, which may not be exploratory. Moreover, on-policy sampling might be of high risk (e.g. self driving cars [84]), high cost (e.g. robotics [28, 42]), or might be unethical (e.g. in clinical trials [26, 45, 27]). Off-policy AC, on the other hand, is more practical than on-policy sampling [42]. Specifically, off-policy sampling enables the agent to learn using the historical data, hence decouples the sampling process and the learning process. This allows the agent to learn in an off-line manner, and makes RL applicable in high-stake problems mentioned earlier. In addition, it is empirically observed that by using a suitable behavior policy, one can rectify the exploration issue in on-policy AC. As a result, off-policy learning successfully solved many practical problems in different areas, such as board game [63], city navigation [53], education [50], and healthcare [20].
In practice, AC algorithms are usually used along with function approximation to overcome the curse of dimensionality in RL [6]. However, it has been observed that the combination of function approximation, off-policy sampling, and bootstrapping (also known as the deadly triad [65]) can result in instability or even divergence [65, 3]. In this work, we develop a variant of off-policy NAC with function approximation, and we establish its finite-sample convergence guarantee in the presence of the deadly triad.
§.§ Main Contributions
The main contributions of this paper are fourfold.
Finite-Sample Bounds of Off-Policy NAC. We develop a variant of NAC with off-policy sampling, where both the actor and the critic use linear function approximation, and the critic uses off-policy sampling. We establish finite-sample mean square bound of our proposed algorithm. Our result implies an $\tilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity, which is the best known convergence bound in the literature for AC algorithms with function approximation.
Novelty in the Critic.
Off-policy TD with function approximation is famously known to diverge due to deadly triad [65]. To overcome this difficulty, we employ $n$-step TD-learning, and show that a proper choice of $n$ naturally achieves convergence, and we present finite-sample bounds under both constant and diminishing stepsizes. To the best of our knowledge, we are the first to design a single time-scale off-policy TD with function approximation with provable finite-sample bounds.
Novelty in the Actor.
NAC under function approximation was developed in [1] by projecting the $Q$-values (gradients) to the lower dimensional space, and this involves the use of the discounted state visitation distribution, which is hard to estimate. We develop a new NAC algorithm for the function approximation setting that is instead based on the solution of a projected Bellman equation [73], which our critic is designed to solve.
Exploration through Off-Policy Sampling. We establish the convergence bounds under the minimum set of assumptions, viz., ergodicity under the behavior policy, which ensures sufficient exploration, and thus resolving challenges faced in on-policy sampling. As a result, learning can be done using a single trajectory of samples generated by the behavior policy, and we do not require constant reset of the system that was introduced in on-policy AC algorithms [1, 76] to ensure exploration. A similar observation about employing off-policy sampling to ensure exploration has been made in the tabular setting in [34].
§.§ Related Literature
The two main approaches for learning an optimal policy in an RL problem are value space methods, such as $Q$-learning, and policy space methods, such as AC. The $Q$-learning algorithm proposed in [77] is perhaps the most well-known value space method. The asymptotic convergence of $Q$-learning was established in [72, 32, 15, 52]. As for finite-sample bounds, see [74, 60, 18, 43, 17] and the references therein. We next focus on related literature on AC-type of algorithms.
AC algorithms comprise two stages: actor and critic. The actor is responsible for the policy improvement, which is usually performed with the policy gradient (PG). The critic estimates the value function of the current policy (which provides the gradient), and uses TD-learning methods.
PG Methods. The first PG algorithm with function approximation was proposed in [68], where the asymptotic convergence was established. A refined asymptotic analysis of PG methods has been further proposed in [5, 57, 29]. Natural policy gradient (NPG), which is a PG method with preconditioning, was proposed in [33]. Recently, there has been a line of work to establish finite-sample convergence bound of NPG. In particular, sublinear convergence of NPG was established in [2, 25, 1, 61, 86], and geometric convergence of NPG was established in [51, 16, 11, 40, 35].
TD-Learning. The policy evaluation problem within the critic is usually solved with TD-learning. In the on-policy setting, the asymptotic convergence of TD-learning was established in [73, 70, 13], and the finite-sample bounds were studied in [19, 39, 10, 64, 30, 18]. When TD-learning is used with off-policy sampling and function approximation, all the three elements of the deadly triad are present [65]. As a result, the algorithm can diverge. In order to overcome the divergence issue, numerous variants of TD-learning algorithms, such as GTD [67], TDC [69], and emphatic TD-learning [66], are proposed in the literature. However, all these algorithms require to maintain two iterates and hence are two time-scale algorithms, while our proposed algorithm is a single time-scale algorithm.
Sample complexity bounds of the AC-type algorithms using function approximation
Algorithm | Sampling Procedure | References | Sample Complexity $^{1,2}$
---|---|---|---
Actor Critic | On-Policy | [37] | Asymptotic
Actor Critic | On-Policy | [76] | $\tilde{\mathcal{O}}(\epsilon^{-6})$
Actor Critic | On-Policy | [59, 38] | $\tilde{\mathcal{O}}(\epsilon^{-4})$
Actor Critic | Off-Policy | [49, 87] | Asymptotic
Natural Actor Critic | On-Policy | [12] | Asymptotic
Natural Actor Critic | On-Policy | [76] | $\tilde{\mathcal{O}}(\epsilon^{-14})$
Natural Actor Critic | On-Policy | [1] | $\tilde{\mathcal{O}}(\epsilon^{-6})$
Natural Actor Critic | Off-Policy | [82] | $\tilde{\mathcal{O}}(\epsilon^{-4})$
Natural Actor Critic | Off-Policy | This work | $\tilde{\mathcal{O}}(\epsilon^{-3})$
$^1$ In this table, for the AC (respectively NAC) algorithms, sample complexity is the number of samples needed to find a policy $\pi$ such that $\mathbb{E}[\|\nabla V^{\pi}(\mu)\|^2] \leq \epsilon + \mathcal{E}_{\text{bias}}$ (respectively $\mathbb{E}[V^*(\mu)-V^{\pi}(\mu)]\leq \epsilon +\mathcal{E}_{bias}$), where $\mathcal{E}_{bias}$ is a non-vanishing error due to the function approximation. In the presence of a bias, one should be careful about interpreting the sample complexity. For a detailed illustration, see Appendix <ref> of this work and also Appendix C of [34].
$^2$ Here $\Tilde{O}(\cdot)$ ignores all the logarithmic terms.
On-Policy AC. Several variants of AC were proposed in [4, 14, 54, 56, 71]. In the tabular setting, [78, 13, 14] studied the asymptotic convergence of AC algorithm. Furthermore, [37, 12] characterize the asymptotic convergence of on-policy AC under function approximation. Recently, there has been a flurry of work studying the finite-sample convergence of AC and NAC [23]. [61, 40, 36] perform the finite sample analysis of NAC under tabular setting, and [85, 59, 38, 46, 76, 80, 81, 47, 79] establish the finite-sample bounds of AC in function approximation setting. To the best of our knowledge, the best sample complexity bound of AC algorithms is provided in [40], where the authors characterize an $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity. However, [40] only studies tabular RL in the on-policy setting.
Off-policy AC. Off-policy AC, was first proposed in [21]. After that, there has been numerous extensions to that work such as DPG [62], DDPG [44], ACER [75], TD3 [24], IMPALA [22], ACE [31], etc. The asymptotic convergence of off-policy AC was established for Gradient-AC in [49], and for AC with emphasis in [87]. The first finite-sample bound of off-policy NAC was established in [34]. However, in [34] only tabular setting was studied. In the function approximation setting, [82] provided the finite sample analysis of a doubly robust off-policy AC. [48] also provided a convergence bound for off-policy AC, however their convergence bound does not involve a bound for the critic. A detailed comparison between our results and the related literature on off-policy AC-type algorithms with function approximation is presented in Table <ref>.
§ MAIN RESULTS
In this section, we present our main results. Specifically, in Section <ref> we briefly cover the background of RL and AC. In Section <ref>, we present our algorithm design for the critic, which uses off-policy sampling with linear function approximation. In section <ref>, we combine the critic with our actor update to form a variant of off-policy NAC with linear function approximation, and we present our finite-sample guarantees and sample complexity bounds.
§.§ Preliminaries
Consider modelling the RL problem as an infinite horizon MDP, which consists of a finite set of states $\mathcal{S}$, a finite set of actions $\mathcal{A}$, a set of unknown transition probability matrices $\mathcal{P}=\{P_a\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\mid a\in\mathcal{A}\}$, an unknown reward function $\mathcal{R}:\mathcal{S}\times\mathcal{A}\mapsto \mathbb{R}$, and a discount factor $\gamma\in (0,1)$. Without loss of generality we assume that $\max_{s,a}|\mathcal{R}(s,a)|\leq 1$. For a given policy $\pi$, its state value function is defined by $V^\pi(s) =\mathbb{E}_\pi[\sum_{k=0}^\infty \gamma^k\mathcal{R}(S_k,A_k)\mid S_0=s]$ for all $s\in\mathcal{S}$, and its state-action value function is defined by $Q^\pi(s,a)=\mathbb{E}_\pi[\sum_{k=0}^\infty\gamma^k\mathcal{R}(S_k,A_k)\mid S_0=s,A_0=a]$ for all $(s,a)\in \mathcal{S}\times\mathcal{A}$. The goal of RL is to find an optimal policy $\pi^*$ which maximizes $V^\pi(\mu)=\sum_s \mu(s)V^{\pi}(s)$, where $\mu$ is an arbitrary fixed initial distribution over the state space. It was shown in the literature that the optimal policy is in fact independent of the initial distribution. See [8, 58, 65] for more details for the MDP model of the RL problem.
To solve the RL problem, a popular approach is to use the AC framework [37]. In AC algorithm, we iteratively perform the policy evaluation and the policy improvement until an optimal policy is obtained. Specifically, in each iteration, we first estimate the $Q$-function (or the advantage function) of the current policy at hand, which is related to the policy gradient. Then we update the policy using gradient ascent over the space of the policies. NAC is a variant of AC where the gradient ascent step is performed with a properly chosen pre-conditioner. See [1] for more details about AC and NAC.
In AC framework, since we need to work with the $Q$-function and the policy, which are $|\mathcal{S}||\mathcal{A}|$ dimensional objects, the algorithm becomes intractable when the size of the state-action space is large [6]. To overcome this difficulty, in this work we consider using linear function approximation for both the policy and the $Q$-function. Specifically, let $\{\phi_i\}_{1\leq i\leq d}$ be a set of basis functions, where $\phi_i\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ for all $i$. Without loss of generality, we assume that $\phi_i$, $1\leq i\leq d$, are linearly independent and are normalized so that $\|\phi(s,a)\|_1\leq 1$ for all $(s,a)$, where $\phi(s,a)=[\phi_1(s,a),\cdots,\phi_d(s,a)]$ is the feature associated with state-action pair $(s,a)$. Let $\Phi=[\phi_1,\cdots,\phi_d]$ be the feature matrix. We parameterize the policy and the $Q$-function using compatible function approximation [68]. In particular, we use softmax parametrization for the policy, i.e., $\pi_\theta(a|s)=\frac{\exp(\phi(s,a)^\top \theta)}{\sum_{a'\in\mathcal{A}}\exp(\phi(s,a')^\top \theta)}$ for all $(s,a)$, where $\theta\in\mathbb{R}^{d}$ is the parameter. As for the $Q$-function, we approximate it from the linear sub-space given by $\mathcal{Q}=\{Q_w=\Phi w\mid w\in\mathbb{R}^d\}$, where $w\in\mathbb{R}^{d}$ is the corresponding parameter. Note that the compatible features in the case of our actor (which utilizes the $Q$-function) are indeed $\{\phi(s,a)\}$. The reason that our features are different than that of [12, 68] is because [12, 68] use the advantage function in the actor update. When using the advantage function, the corresponding parametric features are $\{\phi(s,a)-\mathbb{E}_{A\sim \pi}[\phi(s,A)]\}$.
By doing linear function approximation, we now only need to work with $d$-dimensional objects (i.e., $w$ for the $Q$-function and $\theta$ for the policy), where $d$ is usually chosen to be much smaller than $|\mathcal{S}||\mathcal{A}|$.
§.§ Off-Policy Multi-Step TD-learning with Linear Function Approximation
In this section, we present the $n$-step off-policy TD-learning algorithm under linear function approximation [65], which is used for solving the policy evaluation (critic) sub-problem in our AC framework. Let $\pi$ be the target policy we aim to evaluate, and let $\pi_b$ be the behavior policy we use to collect samples. For any state-action pair $(s,a)$, let $\rho(s,a)=\frac{\pi(a|s)}{\pi_b(a|s)}$, which is called the importance sampling ratio between $\pi$ and $\pi_b$ at $(s,a)$. For any positive integer $n$, Algorithm <ref> presents the off-policy $n$-step TD-learning algorithm for estimating $Q^\pi$.
Off-Policy $n$-Step TD-Learning with Linear Function Approximation
Input: $K$, $\alpha$, $w_0$, $\pi$, $\pi_b$, and $\{(S_k,A_k)\}_{0\leq k\leq (K+n)}$ (a single trajectory generated by the behavior policy $\pi_b$)
for $k=0,1,\ldots,K-1$ do
$\Delta_{k,n}=\sum_{i=k}^{k+n-1}\gamma^{i-k}\prod_{j=k+1}^{i}\rho(S_j,A_j)\,\delta_{k,i}$, where $\delta_{k,i}=\mathcal{R}(S_i,A_i)+\gamma \rho(S_{i+1},A_{i+1})\phi(S_{i+1},A_{i+1})^\top w_k-\phi(S_i,A_i)^\top w_k$ for $i=k,\ldots,k+n-1$
$w_{k+1}=w_k+\alpha \phi(S_k,A_k)\Delta_{k,n}$
end for
Output: $w_K$
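A minimal Python sketch of the critic update above is given next; the data layout (a list of $(s,a,r)$ tuples collected under the behavior policy, and a `feats[s, a]` array holding $\phi(s,a)$) and all names are illustrative assumptions.

import numpy as np

def off_policy_n_step_td(traj, feats, pi, pi_b, n, alpha, gamma, K, w0):
    # traj: list of (s, a, r) of length at least K + n + 1, generated by pi_b
    w = w0.copy()
    rho = lambda s, a: pi[s, a] / pi_b[s, a]         # importance sampling ratio
    for k in range(K):
        Delta, disc, corr = 0.0, 1.0, 1.0            # Delta_{k,n}, gamma^{i-k}, product of rho's
        for i in range(k, k + n):
            s, a, r = traj[i]
            s1, a1, _ = traj[i + 1]
            delta = r + gamma * rho(s1, a1) * feats[s1, a1] @ w - feats[s, a] @ w
            Delta += disc * corr * delta
            disc *= gamma
            corr *= rho(s1, a1)
        s_k, a_k, _ = traj[k]
        w = w + alpha * feats[s_k, a_k] * Delta      # w_{k+1} = w_k + alpha * phi(S_k, A_k) * Delta_{k,n}
    return w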
In Algorithm <ref>, we employ the importance sampling ratio to account for the discrepancy between the target policy $\pi$ and the behavior policy $\pi_b$.
Although all three elements of the deadly triad (bootstrapping, function approximation, and off-policy sampling) [65] are present, we show that by choosing $n$ appropriately, Algorithm <ref> has a provable finite-sample convergence guarantee. The detailed statement of the result is presented in Section <ref>; in this section we provide some intuition.
Suppose that the Markov chain $\{(S_k,A_k)\}_{k\geq 0}$ under the behavior policy $\pi_b$ has a unique stationary distribution $\kappa_b\in\Delta^{|\mathcal{S}||\mathcal{A}|}$. Let $\kappa_{b,\min}=\min_{s,a}\kappa_b(s,a)$ and let $\mathcal{K}=\text{diag}(\kappa_b)$. Algorithm <ref> can be interpreted as a stochastic approximation (SA) algorithm for solving the equation $\Phi^\top \mathcal{K}(\mathcal{T}_\pi^n(\Phi w)-\Phi w)=0$ as explained in Section <ref>, which is equivalent to the projected Bellman equation:
\begin{align}\label{eq:pbe}
Q_w=\Pi_{\kappa_b}\mathcal{T}_\pi^n(Q_w)=\Phi (\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(Q_w).
\end{align}
Here $\mathcal{T}_\pi^n(\cdot)$ denotes the $n$-step Bellman operator, and $\Pi_{\kappa_b}(\cdot)$ stands for the projection operator onto the linear sub-space $\mathcal{Q}$ with respect to the weighted $\ell_2$-norm with weights $\{\kappa_b(s,a)\}_{(s,a)\in\mathcal{S}\times\mathcal{A}}$ [73]. It is well-known that the operator $\mathcal{T}_\pi(\cdot)$ (i.e., the one-step Bellman operator) is a contraction mapping[It is also known that $\mathcal{T}_\pi(\cdot)$ is a contraction mapping with respect to the weighted $\ell_2-$ norm $\|\cdot\|_\kappa$, where $\kappa\in \Delta^{|\mathcal{S}||\mathcal{A}|}$ is the stationary distribution of the Markov chain $\{(S_k,A_k)\}$ under the target policy $\pi$ [73]. However, since we do not assume that the target policy induces an ergodic Markov chain, such $\kappa$ may not be unique and/or may not induce a norm. Hence we cannot use this contraction property here.] with respect to $\|\cdot\|_\infty$, with contraction factor $\gamma$. Moreover, the projection operator $\Pi_{\kappa_b}(\cdot)$ is a non-expansive operator with respect to the weighted $\ell_2$-norm $\|\cdot\|_{\kappa_b}$. However, due to the norm mismatch, the composed operator $\Pi_{\kappa_b}\mathcal{T}_\pi(\cdot)$ need not be a contraction mapping with respect to either $\|\cdot\|_\infty$ or $\|\cdot\|_{\kappa_b}$. Specifically, for any given $Q_1$ and $Q_2$, in general we only have
\begin{align}\label{eq:norm_mismatch}
\|\Pi_{\kappa_b}\mathcal{T}_\pi(Q_1)-\Pi_{\kappa_b}\mathcal{T}_\pi(Q_2)\|_{c}\leq
(\gamma/\sqrt{\kappa_{b,\min}}) \|Q_1-Q_2\|_{c},
\end{align}
where $c=\infty$ or $c=\kappa_b$.
In fact, it is not clear if $\Pi_{\kappa_b}\mathcal{T}_\pi(\cdot)$ can be contractive with respect to any norm. This is the fundamental mathematical reason for the divergence of off-policy one-step TD [65].
Now consider the composed operator $\Pi_{\kappa_b}\mathcal{T}_\pi^n(\cdot)$. Observe that the $n$-step TD operator $\mathcal{T}_\pi^n(\cdot)$ is a contraction mapping with respect to $\|\cdot\|_\infty$, with contraction factor $\gamma^n$. Since the contraction factor of $\mathcal{T}_\pi^n(\cdot)$ decreases geometrically fast as $n$ increases, by choosing $n$ large enough, one can ensure that $\Pi_{\kappa_b}\mathcal{T}_\pi^n(\cdot)$ is a contraction with respect to any chosen norm. This important observation enables us to establish the convergence of Algorithm <ref> in Section <ref>. A similar idea was exploited in off-policy TD$(\lambda)$ algorithm in [9, 83], where it was shown that for $\lambda$ close to unity, the off-policy TD$(\lambda)$ algorithm converges. However, [9, 83] require an additional projection step in the algorithm to establish the convergence, and no finite-sample guarantees were shown.
In the existing literature, to achieve stability in the presence of the deadly triad, algorithms such as GTD [67], TDC [69], and Emphatic TD [66] all require maintaining two iterates. Such two time-scale algorithms are in general harder to implement. In addition, the limit point of GTD-type algorithms can only be characterized when the projected Bellman equation (<ref>) has a unique solution, which is naturally satisfied in the on-policy setting but stated as an assumption in the off-policy setting; see for example <cit.>. By exploiting the multi-step return, Algorithm <ref> naturally achieves convergence, requires maintaining only one iterate, and has a limit point that can be characterized as the solution (which is guaranteed to exist and is unique) of the $n$-step projected Bellman equation.
§.§ Off-Policy Variant of NAC with Linear Function Approximation
In this section, we combine the off-policy TD-learning algorithm with linear function approximation from the previous section with our variant of the NPG update to form the off-policy variant of the NAC algorithm. For simplicity of notation, we denote $Q^{\pi_{\theta_t}}$ by $Q^{\pi_t}$. Also, with input $K$, $\alpha$, $w_0$, $\pi$, $\pi_b$, and samples $\{(S_k,A_k)\}_{0\leq k\leq K+n}$, we denote the output of Algorithm <ref> as
\begin{align*}
\textsc{critic}(K,\alpha,w_0,\pi,\pi_b,\{S_k,A_k\}_{0\leq k\leq K+n}).
\end{align*}
For any integer $T\geq 1$, let $\hat{T}$ be a uniform sample from $\{0,1,...,T-1\}$.
Off-Policy Natural Actor-Critic with Linear Function Approximation
Input: $\hat{T}$, $K$, $\alpha$, $\beta$, $\theta_0$, $\pi_b$, and a single trajectory $\{(S_k,A_k)\}_{0\leq k\leq \hat{T}(K+n)}$ generated by $\pi_b$
for $t=0,1,\ldots,\hat{T}-1$ do
$w_{t}=\textsc{critic}(K,\alpha,\bm{0},\pi_{\theta_t},\pi_b,\{(S_k,A_k)\}_{t(K+n)\leq k\leq (t+1)(K+n)})$
$\theta_{t+1} =\theta_t + \beta w_{t}$
end for
Output: $\theta_{\hat{T}}$
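The outer loop can then be sketched as follows, reusing the hypothetical `off_policy_n_step_td` sketch above; the tabular `feats` array, the assumption that each trajectory entry also carries the observed reward, and all other names are illustrative assumptions.

import numpy as np

def softmax_table(theta, feats):
    # tabulate pi_theta(a|s) = exp(phi(s,a)^T theta) / sum_a' exp(phi(s,a')^T theta)
    logits = feats @ theta
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def off_policy_nac(trajectory, feats, pi_b, T_hat, K, n, alpha, beta, gamma, d):
    theta = np.zeros(d)
    for t in range(T_hat):
        pi_t = softmax_table(theta, feats)                      # current target policy pi_{theta_t}
        chunk = trajectory[t * (K + n): (t + 1) * (K + n) + 1]  # per-iteration slice of the single trajectory
        w_t = off_policy_n_step_td(chunk, feats, pi_t, pi_b, n, alpha, gamma, K, np.zeros(d))
        theta = theta + beta * w_t                              # theta_{t+1} = theta_t + beta * w_t
    return theta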
In each iteration of the off-policy NAC Algorithm <ref>, the critic first estimates the $Q$-function $Q^{\pi_{t}}$ using $\Phi w_t$. Then, the actor updates the parameter $\theta_t$ of the current policy. Note that, unlike on-policy NAC, where the algorithm usually needs to be constantly reset to a specific state of the environment (which is impractical), off-policy sampling enables us to use a single sample trajectory collected under the behavior policy.
In existing literature of NAC algorithm with linear function approximation, the critic aims at finding the projection (onto $\mathcal{Q}$) of the target $Q$-function $Q^{\pi_t}$ with respect to a suitable norm involving the state visitation distribution $d^{\pi_t}$ [1]. More specifically,
$w_t$ is an estimate of the minimizer of the optimization problem
\begin{align}\label{optimization}
\mathbb{E}_{s\sim d^{\pi_t},a\sim \pi_t}[(Q^{\pi_t}(s,a)-\phi(s,a)^\top w)^2].
\end{align}
However, the distribution $d^{\pi_t}$ is unknown and also requires special sampling <cit.>. Moreover, such sampling requires constantly resetting the system, which is necessary in the variants of AC algorithms proposed in many related works; see <cit.> for a more detailed discussion.
In the tabular setting, the solution of the optimization problem (<ref>) is simply the $Q$-function $Q^{\pi_t}$. In the function approximation setting, the solution can be interpreted as an approximation of the $Q$-function $Q^{\pi_t}$ from the chosen linear sub-space. We propose obtaining such approximation by solving the projected Bellman equation, which avoids the use of $d^{\pi_t}$, and enables using a single trajectory of Markovian samples. The projected Bellman equation was introduced in [73] for analyzing on-policy TD with linear function approximation. Here we generalize the result of [73] to the off-policy setting and we use it in the critic of NAC.
As an aside, the NPG algorithm can alternatively be viewed as a gradient ascent algorithm with the Fisher information matrix as the pre-conditioner. See <cit.> for more details.
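For completeness, the sketch below illustrates this alternative view on sampled data. It is a generic construction with hypothetical names, and not the actor update used in this paper, which avoids forming the Fisher matrix altogether.

import numpy as np

def natural_gradient_direction(score, policy_grad, reg=1e-8):
    # score: N x d matrix whose rows are grad_theta log pi_theta(a_i|s_i) at sampled pairs;
    # the Fisher information matrix is estimated by the empirical second moment of the scores.
    F = score.T @ score / score.shape[0]
    return np.linalg.solve(F + reg * np.eye(F.shape[0]), policy_grad)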
§.§ Finite-Sample Convergence Guarantees
In this section, we present the finite-sample convergence bounds of Algorithms <ref> and <ref>. We begin by stating our one and only assumption.
The behavior policy $\pi_b$ satisfies $\pi_b(a|s)>0$ for all $(s,a)$ and the Markov chain $\{S_k\}$ induced by the behavior policy is irreducible and aperiodic.
Assumption <ref> is standard in studying off-policy TD-learning algorithms [49, 87]. Since we work with finite state and action spaces, under Assumption <ref>, the Markov chain $\{S_k\}$ admits a unique stationary distribution, denoted by $\mu_b\in\Delta^{|\mathcal{S}|}$ [41]. In addition, we have $\|P^k(s,\cdot)-\mu_b(\cdot)\|_{\text{TV}}\leq C\sigma^k$ for any $k\geq 0$, where $C>0$, $\sigma\in (0,1)$ are constants, and $\|\cdot\|_{\text{TV}}$ stands for the total variation distance between probability distributions [41]. Note that in this case the random process $\{(S_k,A_k)\}$ is also a Markov chain with a unique stationary distribution, which we have denoted by $\kappa_b\in\Delta^{|\mathcal{S}||\mathcal{A}|}$, and $\kappa_b(s,a)=\mu_b(s)\pi_b(a|s)$ for all $(s,a)$.
In the existing literature, where on-policy NAC was studied, it is typically required that all the policies achieved in the iterations of the NAC induce ergodic Markov chains over the state-action space [59, 79]. Such a requirement is strong and not possible to satisfy in an MDP where the optimal policy is a unique deterministic policy. Off-policy sampling enables us to relax such an unrealistic requirement while also ensuring exploration.
We next present the finite-sample convergence bound of the off-policy TD-learning algorithm <ref> with constant stepsize. The result for using diminishing stepsizes is presented in Appendix <ref>. We begin by introducing some notation. For a given stepsize $\alpha$, let $t_\alpha=\min\{k\geq 0:\|P^k(s,\cdot)-\mu_b(\cdot)\|_{\text{TV}}\leq \alpha\}$, which represents the mixing time of the Markov chain $\{S_k\}$, and can be bounded by an affine function of $\log(1/\alpha)$ under Assumption <ref>. Let $f(x)=n+1$ when $x=1$ and $f(x)=\frac{1-x^{n+1}}{1-x}$ when $x\neq 1$. Denote $w_\pi$ as the solution of the projected Bellman equation (<ref>). Let $\zeta_\pi=\max_{s,a}\frac{\pi(a|s)}{\pi_b(a|s)}$, which measures the mismatch between $\pi$ and $\pi_b$. Let $\lambda_{\min}$ be the smallest eigenvalue of the positive definite matrix $\Phi^\top \mathcal{K}\Phi$.
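As a small aside, both quantities can be computed directly from the constants above; the helper below is only an illustration of the definitions (the mixing-time bound uses $C\sigma^k\leq\alpha$, so $t_\alpha$ grows at most affinely in $\log(1/\alpha)$).

import numpy as np

def f(x, n):
    # f(x) = n + 1 if x = 1, and (1 - x^{n+1}) / (1 - x) otherwise
    return n + 1 if x == 1 else (1 - x ** (n + 1)) / (1 - x)

def mixing_time_upper_bound(alpha, C, sigma):
    # smallest k with C * sigma^k <= alpha, an upper bound on t_alpha
    return int(np.ceil(np.log(C / alpha) / np.log(1.0 / sigma)))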
Consider $\{w_k\}$ of Algorithm <ref>. Suppose that Assumption <ref> is satisfied, the parameter $n$ is chosen such that $n\geq \frac{2\log(\gamma_c)+\log(\kappa_{b,\min})}{2\log(\gamma)}$ (where $\gamma_c\in (0,1)$ is some tunable constant), and $\alpha$ is chosen such that $\alpha (t_\alpha+n+1)\leq \frac{1-\gamma_c}{456 f(\gamma\zeta_\pi)^2}$. Then for all $k\geq t_\alpha+n+1$ we have:
\begin{align}\label{eq:bound1}
\mathbb{E}[\|w_k-w_\pi\|_2^2]\leq \underbrace{c_1(1-(1-\gamma_c)\lambda_{\min}\alpha)^{k-(t_\alpha+n+1)}}_{\mathcal{E}_1: \text{ convergence bias}}+\underbrace{c_2\frac{\alpha (t_\alpha+n+1)}{(1-\gamma_c)\lambda_{\min}}}_{\mathcal{E}_2: \text{variance}},
\end{align}
where $c_1=(\|w_0\|_2+\|w_0-w_\pi\|_2+1)^2$ and $c_2=114f(\gamma\zeta_\pi)^2(\|w_\pi\|_2+1)^2$. Moreover, when the stepsizes satisfy $\sum_{k=0}^\infty\alpha_k=\infty$ and $\sum_{k=0}^\infty\alpha_k^2<\infty$, we have $\lim_{k\rightarrow\infty}w_k=w_\pi$ almost surely.
Note that the choice of $n$ here depends on the unknown parameter $\kappa_{b,\min}$, which is a limitation of Theorem <ref>. In implementation, we can first "pretend" that $\kappa_{b}$ is uniform (which implies $\kappa_{b,\min}=1/|\mathcal{S}||\mathcal{A}|$), and initialize $n$ at the value $\frac{2\log(\gamma_c)-\log(|\mathcal{S}||\mathcal{A}|)}{2\log(\gamma)}$. As the algorithm progresses, we keep track of the iterates and see if we detect divergence. If that happens we increase the value of $n$, otherwise we leave $n$ unchanged.
As we see from Theorem <ref>, when using constant stepsize in Algorithm <ref>, the convergence bias has geometric rate while the variance is a constant with size $\mathcal{O}(\alpha\log(1/\alpha))$. This phenomenon is well observed in SA literature [64].
Regarding the choice of the parameter $n$, recall from Section <ref> that, to ensure the convergence of Algorithm <ref>, we need to choose the parameter $n$ large enough so that $\gamma^n$ is small enough to kill the norm mismatch constant $1/\sqrt{\kappa_{b,\min}}$ (cf. Eq. (<ref>)). Such a requirement on $n$ is explicitly given in Theorem <ref>. Under that condition, the operator $\Pi_{\kappa_b}\mathcal{T}_\pi^n(\cdot)$ is a contraction mapping with respect to both $\|\cdot\|_\infty$ and $\|\cdot\|_{\kappa_b}$, with a common contraction factor $\gamma_c\in (0,1)$. We make the parameter $\gamma_c$ a tunable constant which can be properly chosen to improve the algorithm performance.
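As a worked instance of this requirement on $n$ (with hypothetical values of $\gamma$, $\gamma_c$, and $\kappa_{b,\min}$), the smallest admissible $n$ can be computed directly from the condition stated above:

import numpy as np

gamma, gamma_c, kappa_b_min = 0.95, 0.5, 1e-4             # illustrative values only
n_min = int(np.ceil((2 * np.log(gamma_c) + np.log(kappa_b_min)) / (2 * np.log(gamma))))
print(n_min)                                               # 104 for these values
print(gamma ** n_min / np.sqrt(kappa_b_min) <= gamma_c)    # effective contraction factor is at most gamma_c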
We next present the finite-sample convergence bound of the off-policy NAC with linear function approximation. Let $\xi=\max_{\theta}\|Q^{\pi_\theta}-\Phi w_{\pi_\theta}\|_\infty$, where $Q^{\pi_\theta}$ is the $Q$-function associated with the policy $\pi_\theta$, and $w_{\pi_{\theta}}$ is the solution to the projected Bellman equation $\Phi w=\Pi_{\kappa_b}\mathcal{T}_{\pi_\theta}^n(\Phi w)$. Note that the quantity $\xi$ measures how powerful the function approximation architecture is. Let $\zeta_{\max}=\max_{s,a}\frac{1}{\pi_b(a|s)}$, which is a uniform upper bound on $\zeta_\pi$ for any target policy $\pi$.
Consider the output $\theta_{\hat{T}}$ of Algorithm <ref>. Under the same assumptions of Theorem <ref>, for any starting distribution $\mu$, we have for any $K\geq t_\alpha+n+1$ and $T\geq 1$:
\begin{align*}
&V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\\
\leq\; &\underbrace{\frac{2}{(1-\gamma)^2T}}_{A_1:\text{ convergence bias in the actor}}+\underbrace{\frac{4\xi}{(1-\gamma)^2}}_{A_2:\text{ bias due to function approximation}}\\
&+\underbrace{\frac{4}{(1-\gamma)^2}c_3(1-(1-\gamma_c)\lambda_{\min}\alpha)^{\frac{K-(t_\alpha+n+1)}{2}}}_{A_3:\text{ convergence bias in the critic}}+\underbrace{\frac{44 c_3 f(\gamma\zeta_{\max})[\alpha(t_\alpha+n+1)]^{1/2}}{(1-\gamma)^2(1-\gamma_c)^{1/2}\lambda^{1/2}_{\min}}}_{A_4:\text{ variance in the critic}}.
\end{align*}
Here $c_3=1+\max_{\pi}\|w_\pi\|_2$, where $\max_{\pi}\|w_\pi\|_2\leq \frac{2}{(1-\gamma_c)^{1/2}(1-\gamma)\sqrt{\lambda_{\min}}}$.
The term $A_1$ represents the convergence bias of the actor, and goes to zero at a rate of $\mathcal{O}(1/T)$ as the outer loop iteration number $T$ goes to infinity. The term $A_3$ measures the convergence bias in the critic, and goes to zero geometrically fast as the inner loop iteration number $K$ goes to infinity. The term $A_4$ represents the impact of the variance in the critic, and is of the size $\mathcal{O}(\sqrt{\alpha\log(1/\alpha)})$, which goes to zero as the inner loop stepsize $\alpha$ goes to zero.
The term $A_2$ captures the error introduced to the system due to function approximation, and cannot be eliminated asymptotically. Moreover, known results in the approximate policy iteration (API) literature suggest that the $1/(1-\gamma)^2$ coefficient inside the term $A_2$ is inevitable. Specifically, it was shown in [7, 8] that when $\max_{\pi}\|V^\pi-\Phi w_\pi\|_\infty\leq \xi$, under the API algorithm $\limsup_{k\rightarrow \infty}\|V^{\pi_k}-V^{\pi^*}\|_\infty\leq \frac{2\gamma\xi}{(1-\gamma)^2}$, and an example where the inequality is tight is presented in <cit.>. Since the NAC algorithm can be viewed as an API algorithm with a softmax policy update (which is also weighted by the current policy), it is natural to expect a similar function approximation bias.
Therefore, to improve the function approximation bias term $A_2$, one has to develop instance-dependent bounds, which is one of our future directions.
Note that when $A_2=0$ (i.e., when the $Q$-functions corresponding to all the policies in the parametric space are linearly parametrizable), Theorem <ref> implies convergence to the true optimal policy, which indicates that the optimal policy must also be linearly parametrizable. In fact, suppose we had complete information about the underlying MDP model and were able to implement the general QNPG update (see Appendix <ref> for the general QNPG update). Then we would have convergence to the globally optimal policy. Although this result is a direct implication of Theorem <ref>, we provide a simpler and more intuitive proof in Appendix <ref>.
To further understand the parameter $\xi$, consider tabular RL, which can be thought of as a special case of RL under linear function approximation with $|\mathcal{S}||\mathcal{A}|$ feature vectors that correspond to the canonical basis vectors, i.e., $\Phi$ is an identity matrix. In this special case, Algorithm <ref> and Theorem <ref> give the finite-sample bounds of $n$-step off-policy tabular TD in <cit.>. The actor in Algorithm <ref> reduces to the NPG update <cit.>. Furthermore, the function approximation bias $\xi$ in this case is zero, and the finite-sample bounds in Theorem <ref> reduce to the ones presented in <cit.>. Compared to [34], we have an improved dependence on the effective horizon $1/(1-\gamma)$ and the size of the state-action space. The additional factors of $\log^{1/2}|\mathcal{S}||\mathcal{A}|$ and $1/(1-\gamma)$ in [34] are due to the fact that they were exploiting the $\ell_\infty$-norm contraction of the corresponding variant of the Bellman operator. Here, due to the flexibility in choosing $n$, we are able to exploit the $\ell_2$-norm contraction property, which is "nicer" than $\ell_\infty$-norm contraction. This eventually enables us to remove the additional factors of $\log^{1/2}|\mathcal{S}||\mathcal{A}|$ and $1/(1-\gamma)$ in [34]. See <cit.> and the paragraph below for more details about the difference between stochastic approximation algorithms under $\ell_2$-norm contraction and $\ell_\infty$-norm contraction.
§.§ Sample Complexity Analysis
In this section, we derive the sample complexity of the off-policy NAC algorithm based on Theorem <ref>; the proof is presented in Appendix <ref>.
In order to achieve $V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\leq \epsilon+\frac{4\xi}{(1-\gamma)^2}$, the number of samples required is of the size
\begin{align*}
\mathcal{O}\left(\epsilon^{-3}\log^2(1/\epsilon)\right)\Tilde{\mathcal{O}}\left(f(\gamma\zeta_{\max})^2 n(1-\gamma)^{-8}(1-\gamma_c)^{-3}\lambda_{\min}^{-3}\right).
\end{align*}
It was argued in <cit.> that sample complexity is not well-defined when the convergence error does not go to zero. Therefore, one should not use sample complexity when we do not have global convergence due to the function approximation bias. However, we present Corollary <ref> in terms of “sample complexity” in the same sense as used in prior literature to enable a fair comparison.
See Appendix <ref> for a more detailed discussion.
In view of the sample complexity bound, the dependency on the required accuracy level $\epsilon$ is $\Tilde{\mathcal{O}}(\epsilon^{-3})$. This improves the state-of-the-art sample complexity of off-policy NAC with function approximation result in the literature by a factor of $\epsilon^{-1}$ (cf. Table <ref>). Observe that the tunable constant $\gamma_c$ appears as $(1-\gamma_c)^{-3}$ in the bound. This makes intuitive sense in that $\gamma_c$ is the effective contraction ratio of the composed operator $\Pi_{\kappa_b}\mathcal{T}_\pi^n(\cdot)$ in the critic. Hence we expect better sample complexity for smaller $\gamma_c$. As stated in Theorem <ref>, in order to use smaller $\gamma_c$ in our analysis, we need to choose larger $n$ in executing Algorithm <ref>. An advantage of using large $n$ is that it leads to a lower function approximation bias $\xi$. To see this, consider the projected Bellman equation (<ref>). When $n$ tends to infinity, since $\lim_{n\rightarrow\infty}\mathcal{T}_\pi^n(\Phi w)=Q^\pi$ due to value iteration (Banach fixed-point theorem for the operator $\mathcal{T}_\pi(\cdot)$), the solution of the projected Bellman equation coincides with the projection of $Q^\pi$ to the linear sub-space $\mathcal{Q}$, which has the best function approximation bias. However, note that the parameter $n$ also appears in the numerator of the sample complexity bound (which is due to the variance term in the critic), hence there is a trade-off in the choice of $n$. To summarize, increasing (decreasing) the parameter $n$ leads to better (worse) critic convergence bias and function approximation bias, but has worse (better) critic variance.
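The following tiny sketch (hypothetical numbers) makes the trade-off concrete: increasing $n$ drives the effective contraction factor $\gamma^n/\sqrt{\kappa_{b,\min}}$ of the critic toward zero, while the variance-related constant $f(\gamma\zeta_{\max})$ blows up.

import numpy as np

gamma, kappa_b_min, zeta_max = 0.95, 1e-4, 2.0             # illustrative values only

def f(x, n):
    return n + 1 if x == 1 else (1 - x ** (n + 1)) / (1 - x)

for n in (50, 100, 200):
    contraction = gamma ** n / np.sqrt(kappa_b_min)        # smaller is better for the critic bias
    variance_const = f(gamma * zeta_max, n)                # larger means higher critic variance
    print(n, contraction, variance_const)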
In general, the issue of high variance due to the importance sampling ratio (cf. $\zeta_{\max}$) is a fundamental problem in multi-step off-policy TD-learning [65]. In order to reduce such high variance, several variants of off-policy RL such as Retrace$(\lambda)$ [55], $V$-trace [22], and $Q$-trace [34] have been proposed. These algorithms use truncated importance sampling ratios to reduce $\zeta_{\max}$, thus reducing the variance. However, none of them has been shown to converge in the function approximation setting. Designing efficient algorithms to control the high variance in multi-step off-policy TD-learning with function approximation is one of our future directions.
§ PROOF SKETCH OF THEOREMS <REF> AND <REF>
In this section, we present the proof sketch of Theorems <ref> and <ref>. The detailed proof is presented in Appendices <ref> and <ref>, respectively.
§.§ Proof Sketch of Theorem <ref>
We begin by remodeling the update equation of Algorithm <ref> (line 4) as a Markovian SA algorithm. For any $k\geq 0$, let $X_k=(S_k,A_k,...,S_{k+n},A_{k+n})$, which is a Markov chain. Denote the state space of $\{X_k\}$ by $\mathcal{X}$. Note that $\mathcal{X}$ is finite. Define an operator $F:\mathbb{R}^{d}\times\mathcal{X}\mapsto\mathbb{R}^{d}$ by
\begin{align*}
&F(w,s_0,a_0,\ldots,s_n,a_n)\\
=\;&\phi(s_0,a_0)\sum_{i=0}^{n-1}\gamma^i\prod_{j=1}^{i}\rho(s_j,a_j)\left(\mathcal{R}(s_i,a_i)+\gamma \rho(s_{i+1},a_{i+1})\phi(s_{i+1},a_{i+1})^\top w-\phi(s_i,a_i)^\top w\right).
\end{align*}
Then the update equation of Algorithm <ref> can be equivalently written as
\begin{align}\label{algo:sa:remodel}
w_{k+1}=w_k+\alpha F(w_k,S_k,A_k,...,S_{k+n},A_{k+n}).
\end{align}
Define the expected operator $\Bar{F}:\mathbb{R}^d\mapsto\mathbb{R}^d$ of $F(\cdot)$ by $\Bar{F}(w)=\mathbb{E}_{S_0\sim \mu_b}[F(w,S_0,A_0,...,S_n,A_n)]$. Then Algorithm (<ref>) can be viewed as a Markovian SA algorithm for solving the equation $\Bar{F}(w)=0$.
To proceed and establish the finite-sample bound of Algorithm (<ref>), we will apply Markovian SA results from the literature. In particular, we will apply Theorem 2.1 of [17], which is presented in Appendix <ref> to keep the paper self-contained. To achieve that, we establish properties of the operators $F(\cdot)$, $\bar{F}(\cdot)$, and the Markov chain $\{X_k\}$ in the following proposition, which guarantee that all the assumptions for applying <cit.> are satisfied. The proof is presented in Appendix <ref>.
Suppose Assumption <ref> is satisfied and $n\geq \frac{2\log(\gamma_c)+\log(\kappa_{b,\min})}{2\log(\gamma)}$.
* The operator $F(w,x)$ satisfies $\|F(w_1,x)-F(w_2,x)\|_2\leq 2f(\gamma\zeta_\pi) \|w_1-w_2\|_2$ and $\|F(\bm{0},x)\|_2\leq f(\gamma\zeta_\pi)$ for any $w_1,w_2\in\mathbb{R}^d$ and $x\in\mathcal{X}$.
* The Markov chain $\{X_k\}$ has a unique stationary distribution, denoted by $\nu_b$. Moreover, it holds for any $k\geq 0$ that $\max_{x\in\mathcal{X}}\left\|P^{k+n+1}(x,\cdot)-\nu_b(\cdot)\right\|_{\text{TV}}\leq C\sigma^k$,
where the constants $C$ and $\sigma$ are given right after Assumption <ref>.
* $\bar{F}(w)$ is explicitly given by $\bar{F}(w)=\Phi^\top \mathcal{K} (\mathcal{T}_\pi^n(\Phi w)-\Phi w)$.
* $\bar{F}(w)=0$ has a unique solution, which we have denoted by $w_\pi\in\mathbb{R}^d$.
* Let $M(w)=\frac{1}{2}\|w\|_2^2$. Then we have $\langle\nabla M(w-w_\pi),\bar{F}(w)\rangle\leq -2(1-\gamma_c)\lambda_{\min}M(w-w_\pi)$ for any $w\in\mathbb{R}^d$.
Proposition <ref> (1) states that the operator $F(w,x)$ is Lipschitz in terms of $w$, which further implies an affine growth rate of $F(w,x)$ in the sense that $\|F(w,x)\|_2\leq f(\gamma\zeta_\pi)(\|w\|_2+1)$ for any $w\in\mathbb{R}^d$ and $x\in\mathcal{X}$. Proposition <ref> (2) states that the auxiliary Markov chain $\{X_k\}$ also preserves the geometric mixing property, which is particularly useful for us to control the Markovian noise in the update equation (<ref>). Proposition <ref> (5) implies that using $M(w-w_\pi)$ as the Lyapunov function, both the SA algorithm (<ref>) and its associated ODE have a negative drift. This is the key property used in [17] to establish the finite-sample convergence bounds. Now we are ready to apply <cit.> to establish finite-sample bounds of Algorithm (<ref>) (and hence Algorithm <ref>). The details are presented in Appendix <ref>.
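When the model quantities appearing in the explicit form of $\bar{F}$ were available (they are not in the RL setting, so the sketch below is purely illustrative), the limit point $w_\pi$ could be obtained by solving the corresponding linear system directly:

import numpy as np

def solve_projected_bellman(Phi, kappa_b, P_pi, R, gamma, n):
    # bar-F(w) = Phi^T K (T_pi^n(Phi w) - Phi w) = 0, with
    # T_pi^n(Q) = sum_{i=0}^{n-1} (gamma P_pi)^i R + (gamma P_pi)^n Q.
    K = np.diag(kappa_b)
    Gn = np.linalg.matrix_power(gamma * P_pi, n)
    acc, term = np.zeros_like(R), R.copy()
    for _ in range(n):                                   # accumulate sum_{i=0}^{n-1} (gamma P_pi)^i R
        acc += term
        term = gamma * P_pi @ term
    A = Phi.T @ K @ (Gn - np.eye(Phi.shape[0])) @ Phi
    b = -Phi.T @ K @ acc
    return np.linalg.solve(A, b)                         # w_pi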
§.§ Proof Sketch of Theorem <ref>
First, we show an equivalent form of the update equation of the actor parameter $\theta_t$ (line 4 of Algorithm <ref>) in the following lemma. The proof is provided in Appendix <ref>.
For any $w,\theta\in\mathbb{R}^d$, let $\theta'=\theta+\beta w$. Then the following relation holds:
\begin{align}\label{eq:equivalent_update}
\pi_{\theta'}(a|s)=\pi_\theta(a|s)\frac{\exp(\beta w^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\pi_{\theta}(a'|s)\exp(\beta w^\top \phi(s,a'))}.
\end{align}
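A quick numerical check of this identity on hypothetical features (all values below are illustrative) is as follows.

import numpy as np

rng = np.random.default_rng(2)
num_actions, d, beta = 5, 3, 0.7
phi_s = rng.normal(size=(num_actions, d))     # phi(s, .) for one fixed state s
theta, w = rng.normal(size=d), rng.normal(size=d)

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

lhs = softmax(phi_s @ (theta + beta * w))     # pi_{theta'}(.|s) with theta' = theta + beta * w
pi = softmax(phi_s @ theta)                   # pi_theta(.|s)
rhs = pi * np.exp(beta * (phi_s @ w))
rhs /= rhs.sum()                              # RHS of the lemma
assert np.allclose(lhs, rhs)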
Such an equivalent update rule was established in [1], but only under the condition that $w$ is the solution of an appropriate optimization problem, which prevents [1] from using the equivalent update equation (<ref>) in the analysis of the function approximation setting. Here we establish this equivalence for arbitrary $w$. On the one hand, this seemingly simple but important extension enables us to use the lower-dimensional update equation $\theta_{t+1}=\theta_t+\beta w_t$ in the algorithm. On the other hand, we can use the equivalent update equation (<ref>) in the analysis to obtain a better convergence rate than [1]. Using Lemma <ref>, we have the following performance bound for the actor. See Appendix <ref> for the proof.
Consider $\pi_{\hat{T}}$ generated by Algorithm <ref>. Let $\beta=\log(|\mathcal{A}|)$. Then we have
\begin{align}\label{eq:100}
&V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\nonumber\\
\leq\; &\frac{2}{(1-\gamma)^2T}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty].
\end{align}
The first term on the RHS of Eq. (<ref>) represents the convergence rate of the actor, while the second term is a combination of the error in the critic estimate and the function approximation bias. This already improves the result in [1], where the actor has an $\mathcal{O}(1/\sqrt{T})$ convergence rate in the function approximation setting, while we have $\mathcal{O}(1/T)$. Note that the $\mathcal{O}(1/T)$ convergence rate matches the convergence rate of the actor in the tabular setting [1, 36, 34].
The last step is to control $\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty]$. We first use triangle inequality to obtain
\begin{align}
\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty]
&\leq \mathbb{E}[\|Q^{\pi_t}-\Phi w_{\pi_t}\|_\infty]+\mathbb{E}[\|\Phi w_{\pi_t}-\Phi w_t\|_\infty]\nonumber\\
&\leq \mathbb{E}[\|Q^{\pi_t}-\Phi w_{\pi_t}\|_\infty]+\mathbb{E}[\|w_{\pi_t}- w_t\|_\infty],\label{eq:triangle}
\end{align}
where we used $\|\Phi\|_\infty= \max_{s,a}\|\phi(s,a)\|_1\leq 1$. Observe that the first term on the RHS of Eq. (<ref>) can be bounded by $\xi$, and the second term can be bounded by applying Theorem <ref> in conjunction with Jensen's inequality. The result then follows from substituting the upper bound of the term $\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty]$ into Eq. (<ref>) of Proposition <ref>.
§ CONCLUSION
In this paper, we establish finite-sample convergence guarantees of off-policy NAC with linear function approximation. To overcome the deadly triad in the critic, we use $n$-step TD-learning, which is a single time-scale algorithm for policy evaluation using off-policy sampling and linear function approximation, and has provable convergence bounds. As for the analysis of the actor, we identify an equivalent update equation and use it to conduct a refined analysis compared to [1]. As a result, our finite-sample bounds imply a sample complexity of $\Tilde{\mathcal{O}}(\epsilon^{-3})$, which advances the state-of-the-art result in the literature.
[1] Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research.
[2] Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J. Kappen. Dynamic policy programming. The Journal of Machine Learning Research.
[3] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning Proceedings 1995.
[4] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics.
[5] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research.
[6] R. Bellman. Dynamic Programming. Princeton University Press, Princeton, New Jersey.
[7] Dimitri P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications.
[8] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-dynamic programming. Athena Scientific.
[9] Dimitri P. Bertsekas and Huizhen Yu. Projected equation methods for approximate solution of large linear systems. Journal of Computational and Applied Mathematics.
[10] Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite time analysis of temporal difference learning with linear function approximation. In Conference on Learning Theory.
[11] Jalaj Bhandari and Daniel Russo. A note on the linear convergence of policy gradient methods. Preprint arXiv:2007.11120.
[12] Shalabh Bhatnagar, Richard S. Sutton, Mohammad Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms.
[13] Vivek S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint.
[14] Vivek S. Borkar and Vijaymohan R. Konda. The actor-critic algorithm as multi-time-scale stochastic approximation.
[15] Vivek S. Borkar and Sean P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization.
[16] Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi. Fast global convergence of natural policy gradient methods with entropy regularization. Operations Research.
[17] Zaiwei Chen, Sheng Zhang, Thinh T. Doan, John-Paul Clarke, and Siva Theja Maguluri. Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning. Preprint arXiv:1905.11425.
[18] Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. A Lyapunov theory for finite-sample guarantees of asynchronous $Q$-learning and TD-learning variants. Preprint arXiv:2102.01567.
[19] Gal Dalal, Balázs Szörényi, Gugan Thoppe, and Shie Mannor. Finite sample analyses for TD$(0)$ with function approximation. In Proceedings of the AAAI Conference on Artificial Intelligence.
[20] Christoph Dann, Lihong Li, Wei Wei, and Emma Brunskill. Policy certificates: Towards accountable reinforcement learning. In International Conference on Machine Learning.
[21] Thomas Degris, Martha White, and Richard Sutton. Off-policy actor-critic. In International Conference on Machine Learning.
[22] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In International Conference on Machine Learning.
[23] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research.
[24] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning.
[25] Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized Markov decision processes. In International Conference on Machine Learning.
[26] Omer Gottesman, Fredrik Johansson, Matthieu Komorowski, Aldo Faisal, David Sontag, Finale Doshi-Velez, and Leo Anthony Celi. Guidelines for reinforcement learning in healthcare. Nature Medicine.
[27] Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Celi, Emma Brunskill, and Finale Doshi-Velez. Interpretable off-policy evaluation in reinforcement learning by highlighting influential transitions. In International Conference on Machine Learning.
[28] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE International Conference on Robotics and Automation.
[29] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In International Conference on Machine Learning.
[30] Bin Hu and Usman Syed. Characterizing the exact behaviors of temporal difference learning algorithms using Markov jump linear system theory. Advances in Neural Information Processing Systems.
[31] Ehsan Imani, Eric Graves, and Martha White. An off-policy policy gradient theorem using emphatic weightings. Advances in Neural Information Processing Systems.
[32] Tommi Jaakkola, Michael I. Jordan, and Satinder P. Singh. Convergence of stochastic iterative dynamic programming algorithms. In Advances in Neural Information Processing Systems.
[33] Sham M. Kakade. A natural policy gradient. Advances in Neural Information Processing Systems.
[34] Sajad Khodadadian, Zaiwei Chen, and Siva Theja Maguluri. Finite-sample analysis of off-policy natural actor-critic algorithm. In International Conference on Machine Learning.
[35] Sajad Khodadadian, Prakirt Raj Jhunjhunwala, Sushil Mahavir Varma, and Siva Theja Maguluri. On the linear convergence of natural policy gradient algorithm. Preprint arXiv:2105.01424.
[36] Sajad Khodadadian, Thinh T. Doan, Siva Theja Maguluri, and Justin Romberg. Finite sample analysis of two-time-scale natural actor-critic algorithm. Preprint arXiv:2101.10506.
[37] Vijay R. Konda and John N. Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems.
[38] Harshat Kumar, Alec Koppel, and Alejandro Ribeiro. On the sample complexity of actor-critic method for reinforcement learning with function approximation. Preprint arXiv:1910.08412.
[39] Chandrashekar Lakshminarayanan and Csaba Szepesvari. Linear stochastic approximation: How far does constant step-size and iterate averaging go? In International Conference on Artificial Intelligence and Statistics.
[40] G. Lan. Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. Mathematical Programming.
[41] David A. Levin and Yuval Peres. Markov Chains and Mixing Times. American Mathematical Society.
[42] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. Preprint arXiv:2005.01643.
[43] Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, and Yuejie Chi. Tightening the dependence on horizon in the sample complexity of Q-learning. In International Conference on Machine Learning.
[44] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In ICLR (Poster).
[45] Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo A. Faisal, Finale Doshi-Velez, and Emma Brunskill. Representation balancing MDPs for off-policy policy evaluation. Advances in Neural Information Processing Systems.
[46] Boyi Liu, Qi Cai, Zhuoran Yang, and Zhaoran Wang. Neural proximal/trust region policy optimization attains globally optimal policy. Advances in Neural Information Processing Systems.
[47] Yanli Liu, Kaiqing Zhang, Tamer Basar, and Wotao Yin. An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. Advances in Neural Information Processing Systems.
[48] Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with stationary distribution correction. In Uncertainty in Artificial Intelligence.
[49] Hamid Reza Maei. Convergent actor-critic algorithms under off-policy training and function approximation. Preprint arXiv:1802.07842.
[50] Travis Mandel, Yun-En Liu, Sergey Levine, Emma Brunskill, and Zoran Popovic. Offline policy evaluation across representations with applications to educational games. In AAMAS.
[51] Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans. On the global convergence rates of softmax policy gradient methods. In International Conference on Machine Learning.
[52] Francisco S. Melo, Sean P. Meyn, and M. Isabel Ribeiro. An analysis of reinforcement learning with function approximation. In Proceedings of the 25th International Conference on Machine Learning.
[53] Piotr Mirowski, Matt Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith Anderson, Denis Teplyashin, Karen Simonyan, Andrew Zisserman, Raia Hadsell, et al. Learning to navigate in cities without a map. In Advances in Neural Information Processing Systems.
[54] Tetsuro Morimura, Eiji Uchibe, Junichiro Yoshimoto, and Kenji Doya. A generalized natural actor-critic algorithm. In Advances in Neural Information Processing Systems.
[55] Rémi Munos, Thomas Stepleton, Anna Harutyunyan, and Marc G. Bellemare. Safe and efficient off-policy reinforcement learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems.
[56] Jan Peters and Stefan Schaal. Natural actor-critic.
[57] Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Policy gradient in Lipschitz Markov decision processes. Machine Learning.
[58] Martin L. Puterman. Markov decision processes: Discrete stochastic dynamic programming. Journal of the Operational Research Society.
[59] Shuang Qiu, Zhuoran Yang, Jieping Ye, and Zhaoran Wang. On the finite-time convergence of actor-critic algorithm. In Optimization Foundations for Reinforcement Learning Workshop at Advances in Neural Information Processing Systems (NeurIPS).
[60] Guannan Qu and Adam Wierman. Finite-time analysis of asynchronous stochastic approximation and Q-learning. In Conference on Learning Theory.
[61] Lior Shani, Yonathan Efroni, and Shie Mannor. Adaptive trust region policy optimization: Global convergence and faster rates for regularized MDPs. In Proceedings of the AAAI Conference on Artificial Intelligence.
[62] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning.
[63] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge.
[64] R. Srikant and Lei Ying. Finite-time error bounds for linear stochastic approximation and TD learning. In Conference on Learning Theory.
[65] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press.
[66] Richard S. Sutton, A. Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research.
[67] Richard S. Sutton, Csaba Szepesvári, and Hamid Reza Maei. A convergent $\mathcal{O}(n)$ algorithm for off-policy temporal-difference learning with linear function approximation. Advances in Neural Information Processing Systems.
[68] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems.
[69] Richard S. Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning.
[70] Vladislav Tadić. On the convergence of temporal-difference learning with linear function approximation. Machine Learning.
[71] Philip S. Thomas, William Dabney, Sridhar Mahadevan, and Stephen Giguere. Projected natural actor-critic. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2.
[72] John N. Tsitsiklis. Asynchronous stochastic approximation and $Q$-learning. Machine Learning.
[73] John N. Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control.
[74] Martin J. Wainwright. Stochastic approximation with cone-contractive operators: Sharp $\ell_\infty$-bounds for ${Q}$-learning. Preprint arXiv:1905.06265.
[75] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. Preprint arXiv:1611.01224.
[76] Lingxiao Wang, Qi Cai, Zhuoran Yang, and Zhaoran Wang. Neural policy gradient methods: Global optimality and rates of convergence. In International Conference on Learning Representations.
[77] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning.
[78] Ronald J. Williams and Leemon C. Baird. A mathematical analysis of actor-critic architectures for learning optimal controls through incremental dynamic programming. In Proceedings of the Sixth Yale Workshop on Adaptive and Learning Systems.
[79] Yue Frank Wu, Weitong Zhang, Pan Xu, and Quanquan Gu. A finite-time analysis of two time-scale actor-critic methods. Advances in Neural Information Processing Systems.
[80] Tengyu Xu, Zhe Wang, and Yingbin Liang. Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms. Preprint arXiv:2005.03557.
[81] Tengyu Xu, Zhe Wang, and Yingbin Liang. Improving sample complexity bounds for (natural) actor-critic algorithms. Advances in Neural Information Processing Systems.
[82] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, and Yingbin Liang. Doubly robust off-policy actor-critic: Convergence and optimality. In International Conference on Machine Learning.
[83] Huizhen Yu. Least squares temporal difference methods: An analysis under general conditions. SIAM Journal on Control and Optimization.
[84] Ekim Yurtsever, Jacob Lambert, Alexander Carballo, and Kazuya Takeda. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access.
[85] Kaiqing Zhang, Alec Koppel, Hao Zhu, and Tamer Başar. Convergence and iteration complexity of policy gradient method for infinite-horizon reinforcement learning. In 2019 IEEE 58th Conference on Decision and Control (CDC).
[86] Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, and Mengdi Wang. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems.
[87] Shangtong Zhang, Bo Liu, Hengshuai Yao, and Shimon Whiteson. Provably convergent two-timescale off-policy actor-critic with function approximation. In International Conference on Machine Learning.
§ ANALYSIS OF THE CRITIC
§.§ Proof of Proposition <ref>
* Let $w_1,w_2\in\mathbb{R}^d$ and $x=(s_0,a_0,...,s_n,a_n)\in\mathcal{X}$ be arbitrary. For simplicity of notation, we denote $\rho_{i,j}=\prod_{k=i}^j\rho(s_k,a_k)$. Then we have
\begin{align*}
&\left\|F(w_1,x)-F(w_2,x)\right\|_2\\
=\;&\left\|\phi(s_0,a_0)\sum_{i=0}^{n-1}\gamma^{i}(\gamma \rho_{1,i+1} \phi(s_{i+1},a_{i+1})^\top -\rho_{1,i}\phi(s_i,a_i)^\top)(w_1-w_2)\right\|_2\\
=\;&\left\|\phi(s_0,a_0)(\gamma^n\rho_{1,n}\phi(s_n,a_n)^\top -\phi(s_0,a_0)^\top)(w_1-w_2)\right\|_2\\
\leq \;&\|\phi(s_0,a_0)\|_2((\gamma\zeta_\pi)^n\|\phi(s_n,a_n)\|_2 +\|\phi(s_0,a_0)\|_2)\|w_1-w_2\|_2\\
\leq \; &((\gamma\zeta_\pi)^n+1)\|w_1-w_2\|_2\tag{$\|\phi(s,a)\|_2\leq \|\phi(s,a)\|_1\leq 1$ for all $(s,a)$}\\
\leq \;& \sum_{i=0}^{n}(\gamma\zeta_\pi)^i\|w_1-w_2\|_2\\
= \;& f(\gamma\zeta_\pi)\|w_1-w_2\|_2.
\end{align*}
Similarly, we have
\begin{align*}
\|F(\bm{0},x)\|_2&=\left\|\phi(s_0,a_0)\sum_{i=0}^{n-1}\gamma^{i}\rho_{1,i}\mathcal{R}(s_i,a_i)\right\|_2\\
&\leq \|\phi(s_0,a_0)\|_2\sum_{i=0}^{n-1}(\gamma\zeta_\pi)^{i}|\mathcal{R}(s_i,a_i)|\\
&\leq \sum_{i=0}^{n-1}(\gamma\zeta_\pi)^{i}\\
&\leq f(\gamma\zeta_\pi).
\end{align*}
* The claim that $\{X_k\}$ has a stationary distribution $\nu_b$ follows directly from its definition and Assumption <ref>. Now for any $x=(s_0,a_0,...,s_n,a_n)\in\mathcal{X}$, using the definition of total variation distance, we have for any $k\geq 0$:
\begin{align*}
&\left\|P^{k+n+1}_{\pi_b}(x,\cdot)-\nu_b(\cdot)\right\|_{\text{TV}}\\
=\;&\frac{1}{2}\sum_{s_0',a_0',\cdots,s_n',a_n'}\left|\sum_{s}P_{a_n}(s_n,s)P^k_{\pi_b}(s,s_0')\!-\!\mu_b(s_0')\right|\!\left[\prod_{i=0}^{n-1}\pi_b(a_i'\mid s_i')P_{a_i'}(s_i',s_{i+1}')\right]\!\pi_b(a_n'\mid s_n')\\
\leq \;&\frac{1}{2}\sum_{s}P_{a_n}(s_n,s)\sum_{s_0'}\left|P^k_{\pi_b}(s,s_0')-\mu_b(s_0')\right|\\
\leq \;&\max_{s\in\mathcal{S}}\|P^k_{\pi_b}(s,\cdot)-\mu_b(\cdot)\|_{\text{TV}}\\
\leq \;&C\sigma^k.
\end{align*}
It follows that $\max_{x\in\mathcal{X}}\left\|P^{k+n+1}_{\pi_b}(x,\cdot)-\nu_b(\cdot)\right\|_{\text{TV}}\leq C\sigma^k$ for all $k\geq 0$.
* We first compute $\bar{F}(w)$. By definition, we have
\begin{align*}
\bar{F}(w)
\!=\!\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\left(\sum_{i=0}^{n-1}\gamma^{i}\rho_{1,i}\mathcal{R}(S_i,A_i)\!+\!\gamma^n \rho_{1,n}\phi(S_{n},A_{n})^\top w\!-\!\phi(S_0,A_0)^\top w\right)\right].
\end{align*}
Using conditional expectation and the Markov property, we have for any $i=0,...,n-1$:
\begin{align*}
&\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\gamma^{i}\rho_{1,i}\mathcal{R}(S_i,A_i)\right]\\
=\;&\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\gamma^{i}\rho_{1,i-1}\mathbb{E}\left[\rho(S_i,A_i)\mathcal{R}(S_i,A_i)\mid S_0,A_0,\dots,S_{i-1},A_{i-1}\right]\right]\\
=\;&\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\gamma^{i}\rho_{1,i-1}[P_\pi R](S_{i-1},A_{i-1})\right]\\
=\;&\Phi^\top \mathcal{K} (\gamma P_\pi)^{i}R,
\end{align*}
where $P_\pi$ is the transition probability matrix of the Markov chain $\{(S_k,A_k)\}$ under policy $\pi$, and $R$ is the reward vector. Similarly, we have
\begin{align*}
\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\gamma^n \rho_{1,n}\phi(S_{n},A_{n})^\top w\right]=\Phi^\top \mathcal{K}(\gamma P_\pi)^n\Phi w,
\end{align*}
\begin{align*}
\mathbb{E}_{S_0\sim\mu_b}\left[\phi(S_0,A_0)\phi(S_0,A_0)^\top w\right]=\Phi^\top \mathcal{K}\Phi w.
\end{align*}
Therefore, we obtain
\begin{align*}
\bar{F}(w)&=\Phi^\top \mathcal{K} \sum_{i=0}^{n-1}(\gamma P_\pi)^{i}R+\Phi^\top \mathcal{K}(\gamma P_\pi)^n\Phi w-\Phi^\top \mathcal{K}\Phi w\\
&=\Phi^\top \mathcal{K} \left[\sum_{i=0}^{n-1}(\gamma P_\pi)^{i}R+(\gamma P_\pi)^n\Phi w-\Phi w\right]\\
&=\Phi^\top \mathcal{K}(\mathcal{T}_\pi^n(\Phi w)-\Phi w).
\end{align*}
* Note that the equation $\bar{F}(w)=0$ is equivalent to
\begin{align*}
\Phi w=\Phi(\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi w)=\Pi_{\kappa_b}\mathcal{T}_\pi^n(\Phi w),
\end{align*}
which is the projected $n$-step Bellman equation (<ref>). Observe that
\begin{align*}
\|\Pi_{\kappa_b}\mathcal{T}_\pi^n(Q_1)-\Pi_{\kappa_b}\mathcal{T}_\pi^n(Q_2)\|_{\kappa_b}&\leq \|\mathcal{T}_\pi^n(Q_1)-\mathcal{T}_\pi^n(Q_2)\|_{\kappa_b}\tag{$\Pi_{\kappa_b}(\cdot)$ is non expansive}\\
&\leq \|\mathcal{T}_\pi^n(Q_1)-\mathcal{T}_\pi^n(Q_2)\|_{\infty}\tag{norm inequality}\\
&\leq\gamma^n \|Q_1-Q_2\|_{\infty}\tag{$\mathcal{T}_\pi^n$ is $\gamma^n$-contraction}\\
&\leq\frac{\gamma^n}{\sqrt{\kappa_{b,\min}}} \|Q_1-Q_2\|_{\kappa_b}\tag{norm inequality}\\
&\leq \frac{\gamma_c \sqrt{\kappa_{b,\min}}}{\sqrt{\kappa_{b,\min}}} \|Q_1-Q_2\|_{\kappa_b}\tag{requirement on $n$}\\
&=\gamma_c\|Q_1-Q_2\|_{\kappa_b}.
\end{align*}
It follows that the composed operator $\Pi_{\kappa_b}\mathcal{T}_\pi^n(\cdot)$ is a contraction mapping with respect to $\|\cdot\|_{\kappa_b}$. Therefore, Banach fixed-point theorem implies that the projected Bellman equation (<ref>) has a unique solution. Since the matrix $\Phi$ is full-column rank, there is a unique solution (which we have denoted by $w_\pi$) to the equation $\bar{F}(w)=0$.
* Consider the Lyapunov function $M(w)=\frac{1}{2}\|w\|_2^2$. Since the $n$-step Bellman operator $\mathcal{T}_\pi^n(\cdot)$ is linear, we have
\begin{align*}
&\langle \nabla M(w-w_\pi),\bar{F}(w)\rangle\\
=\;&\langle w-w_\pi,\Phi^\top \mathcal{K} (\mathcal{T}_\pi^n(\Phi w)-\Phi w)\rangle\\
=\;&\langle (w-w_\pi),\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi(w-w_\pi))-\Phi^\top \mathcal{K}\Phi(w-w_\pi)\rangle\tag{$\bar{F}(w_\pi)=0$}\\
=\;&\langle (\Phi^\top \mathcal{K}\Phi)(w-w_\pi),(\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi(w-w_\pi))-(w-w_\pi)\rangle\\
=\;&\langle (\Phi^\top \mathcal{K}\Phi)(w-w_\pi),(\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi(w-w_\pi))\rangle-\|\Phi(w-w_\pi)\|_{\kappa_b}^2\\
=\;&\langle \mathcal{K}^{1/2}\Phi(w-w_\pi),\mathcal{K}^{1/2}\Phi(\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi(w-w_\pi))\rangle-\|\Phi(w-w_\pi)\|_{\kappa_b}^2\\
\leq\;& \|\Phi(w-w_\pi)\|_{\kappa_b}\|\Phi(\Phi^\top \mathcal{K}\Phi)^{-1}\Phi^\top \mathcal{K}\mathcal{T}_\pi^n(\Phi(w-w_\pi))\|_{\kappa_b}-\|\Phi(w-w_\pi)\|_{\kappa_b}^2\tag{Cauchy Schwarz Inequality}\\
\leq\;& \gamma_c\|\Phi(w-w_\pi)\|_{\kappa_b}\|\Phi(w-w_\pi)\|_{\kappa_b}-\|\Phi(w-w_\pi)\|_{\kappa_b}^2\\
\leq \;&-2(1-\gamma_c)\lambda_{\min}M(w-w_\pi),
\end{align*}
where in the last line we used $\sqrt{\lambda_{\min}}\|w\|_2\leq \|\Phi w\|_{\kappa_b}$ for any $w\in\mathbb{R}^d$.
§.§ Proof of Theorem <ref>
Since Algorithm <ref> is a linear stochastic approximation algorithm under Markovian noise, Proposition <ref> ensures the applicability of <cit.>, which gives us the almost sure convergence result under non-summable but square-summable stepsizes. We next focus on the finite-sample guarantees.
We begin by restating <cit.> in the following, where we adopt our notation for consistency.
Consider the stochastic approximation algorithm
\begin{align*}
w_{k+1}=w_k+\alpha G(X_k,w_k).
\end{align*}
Suppose that
* The random process $\{X_k\}$ has a unique stationary distribution $\nu$, and it holds for any $k\geq 0$ that $\max_{x\in\mathcal{X}}\|P^k(x,\cdot)-\nu(\cdot)\|_{\text{TV}}\leq C_1\sigma_1^k$ for some constant $C_1>0$ and $\sigma_1\in (0,1)$.
* The operator $G(\cdot,\cdot)$ satisfies $\|G(x,w_1)-G(x,w_2)\|_2\leq L\|w_1-w_2\|_2$ and $\|G(x,\bm{0})\|_2\leq L$ for any $w_1,w_2\in\mathbb{R}^d$ and $x\in\mathcal{X}$.
* The equation $\bar{G}(w)=\mathbb{E}_{X\sim \nu}[G(X,w)]=0$ has a unique solution $w^*$, and the following inequality holds for all $w\in\mathbb{R}^d$: $(w-w^*)^\top \bar{G}(w)\leq -\ell \|w-w^*\|_2^2$, where $\ell>0$ is some positive constant.
* The stepsize $\alpha$ is chosen such that $\alpha \tau_\alpha\leq \frac{\ell}{114L^2}$, where
\begin{align*}
\tau_\alpha:=\min\{k\geq 0\;:\;\max_{x\in\mathcal{X}}\|P^k(x,\cdot)-\nu(\cdot)\|_{\text{TV}}\leq\alpha\}.
\end{align*}
Then we have for any $k\geq \tau_\alpha$ that
\begin{align*}
\mathbb{E}[\|w_k-w^*\|_2^2]\leq (\|w_0\|_2+\|w_0-w^*\|_2+1)^2(1-\ell \alpha)^{k-\tau_\alpha}+114L^2(\|w^*\|_2+1)^2\frac{\alpha \tau_\alpha}{\ell}.
\end{align*}
Now we proceed to prove Theorem <ref>. To apply Theorem <ref>, we begin by identifying the corresponding constants using Proposition <ref>. We have
\begin{align*}
L=f(\gamma\zeta_\pi),\;\ell=(1-\gamma_c)\lambda_{\min},\;\text{ and }\;\tau_\alpha=t_\alpha+n+1.
\end{align*}
It follows that when the constant stepsize $\alpha$ within Algorithm <ref> is chosen such that $\alpha (t_\alpha+n+1)\leq \frac{(1-\gamma_c)\lambda_{\min}}{114f(\gamma\zeta_\pi)^2}$, we have for all $k\geq t_\alpha+n+1$:
\begin{align*}
\mathbb{E}[\|w_k-w_\pi\|_2^2]\leq c_1(1-(1-\gamma_c)\lambda_{\min} \alpha)^{k-(t_\alpha+n+1)}+\frac{c_2\alpha (t_\alpha+n+1)}{(1-\gamma_c)\lambda_{\min}},
\end{align*}
where $c_1=(\|w_0\|_2+\|w_0-w_\pi\|_2+1)^2$ and $c_2=114f(\gamma\zeta_\pi)^2(\|w_\pi\|_2+1)^2$. This proves Theorem <ref>.
§.§ Finite-Sample Bound for Using Diminishing Stepsizes
We here state the finite-sample bounds of Algorithm <ref> for using diminishing stepsizes of the form $\alpha_k=\frac{\alpha}{(k+h)^\eta}$, where $\alpha,h>0$ and $\eta\in (0,1]$. For simplicity of notation, let $t_k=t_{\alpha_k}$, $L_1=\frac{1+\log(C/\sigma)}{\log(1/\sigma)}$, and $\ell=(1-\gamma_c)\lambda_{\min}$.
Consider $\{w_k\}$ of Algorithm <ref>. Suppose that Assumption <ref> is satisfied, the parameter $n$ is chosen such that $n\geq \frac{2\log(\gamma_c)+\log(\kappa_{b,\min})}{2\log(\gamma)}$ (where $\gamma_c\in (0,1)$ is some tunable constant), and $\alpha_k=\frac{\alpha}{(k+h)^\eta}$, where $\alpha>0$, $\eta\in (0,1]$, and $h$ is chosen such that $\sum_{i=k-t_k}^{k-1}(t_i+n+1)\leq \frac{1-\gamma_c}{114 f(\gamma\zeta_\pi)^2}$. Let $\hat{k}:=\min\{k:k\geq t_k+n+1\}$. Then we have the following results.
* When $\eta=1$, we have for all $k\geq \hat{k}$:
\begin{align*}
\mathbb{E}[\|w_k-w^*\|_2^2]\leq
\begin{dcases}
c_1\left(\frac{\hat{k}+h}{k+h}\right)^{\ell\alpha}+\frac{8c_2\alpha^2L_1}{1-\ell\alpha}\frac{[\log\left(\frac{k+h}{\alpha}\right)+1]}{(k+h)^{\ell\alpha}},& \ell\alpha\in (0,1),\\
c_1\left(\frac{\hat{k}+h}{k+h}\right)+8c_2\alpha^2L_1\frac{\log(\frac{k+h}{\hat{k}+h})[\log\left(\frac{k+h}{\alpha}\right)+1]}{k+h},& \ell\alpha=1,\\
c_1\left(\frac{\hat{k}+h}{k+h}\right)^{\ell\alpha}+\frac{8ec_2\alpha^2L_1}{\ell\alpha-1} \frac{\left[\log\left(\frac{k+h}{\alpha}\right)+1\right]}{k+h},& \ell\alpha\in (1,\infty).
\end{dcases}
\end{align*}
* When $\eta\in (0,1)$ and $\alpha>0$, suppose in addition that $\hat{k}+h\geq [2\eta/(\ell\alpha)]^{1/(1-\eta)}$, then we have for all $k\geq \hat{k}$:
\begin{align*}
\mathbb{E}[\|w_k-w^*\|_2^2]
\leq c_1\exp\left[-\frac{\ell\alpha}{1-\eta}\left((k+h)^{1-\eta}-(\hat{k}+h)^{1-\eta}\right)\right]+\frac{4c_2\alpha^2L_1}{\ell\alpha}\frac{[\log\left(\frac{k+h}{\alpha}\right)+1]}{(k+h)^\eta}.
\end{align*}
Similar to Theorem <ref> following from <cit.>, Theorem <ref> follows from <cit.>. Hence we omit the proof.
§ ANALYSIS OF THE ACTOR
§.§ Proof of Lemma <ref>
Let $\pi$ and $\pi'$ be two policies parametrized by $\theta$ and $\theta'$, respectively. Then we have
\begin{align*}
\pi'(a|s)&=\frac{\exp(\theta'^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\exp(\theta'^\top \phi(s,a'))}\\
&=\frac{\exp((\theta+\beta w)^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\exp((\theta+\beta w)^\top \phi(s,a'))}\\
&=\frac{\exp(\theta^\top \phi(s,a))\exp(\beta w^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\exp((\theta+\beta w)^\top \phi(s,a'))}\\
&=\frac{\exp(\theta^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\exp(\theta^\top \phi(s,a'))}\frac{\exp(\beta w^\top \phi(s,a))\sum_{a'\in\mathcal{A}}\exp(\theta^\top \phi(s,a'))}{\sum_{a'\in\mathcal{A}}\exp((\theta+\beta w)^\top \phi(s,a'))}\\
&=\pi(a|s)\frac{\exp(\beta w^\top \phi(s,a))\sum_{a'\in\mathcal{A}}\exp(\theta^\top \phi(s,a'))}{\sum_{a'\in\mathcal{A}}\exp((\theta+\beta w)^\top \phi(s,a'))}\\
&=\pi(a|s)\frac{\exp(\beta w^\top \phi(s,a))}{\left[\frac{\sum_{a'\in\mathcal{A}}\exp(\theta^\top \phi(s,a'))\exp(\beta w^\top \phi(s,a'))}{\sum_{a'\in\mathcal{A}}\exp(\theta^\top \phi(s,a'))} \right]}\\
&=\pi(a|s)\frac{\exp(\beta w^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\pi(a'|s)\exp(\beta w^\top \phi(s,a'))}.
\end{align*}
This establishes the equivalence between the two update equations.
§.§ Proof of Proposition <ref>
Using Lemma <ref>, we see that the update equation of the actor (line 4 of Algorithm <ref>) can be equivalently written as
\begin{align}
\pi_{t+1}(a|s)&=\pi_t(a|s)\frac{\exp(\beta w_t^\top \phi(s,a))}{\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta w_t^\top \phi(s,a'))}\nonumber\\
&=\pi_t(a|s)\frac{\exp(\beta (w_t^\top \phi(s,a)-V^{\pi_t}(s)))}{\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta (w_t^\top \phi(s,a')-V^{\pi_t}(s)))}\nonumber\\
&=\pi_t(a|s)\frac{\exp(\beta( w_t^\top \phi(s,a)-V^{\pi_t}(s)))}{Z_t(s)},\label{eq:2}
\end{align}
where $Z_t(s)=\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta (w_t^\top \phi(s,a')-V^{\pi_t}(s)))$. We will use Eq. (<ref>) for our analysis. To prove Proposition <ref>, we need the following sequence of lemmas.
For any $t\geq 0$ and $s\in\mathcal{S}$, we have the following lower bound for $\log (Z_t(s))$
\begin{align*}
\log(Z_{t}(s))\geq\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top \phi(s,a)-Q^{\pi_t}(s,a)).
\end{align*}
Using the equivalent update rule (<ref>) of $\pi_{t}$, we have for any $t\geq 0$ and $s\in\mathcal{S}$:
\begin{align*}
\log(Z_{t}(s))&=\log\left[\sum_{a\in\mathcal{A}}\pi_t(a|s)\exp(\beta (w_t^\top\phi(s,a)-V^{\pi_t}(s)))\right]\\
&\geq \beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-V^{\pi_t}(s))\tag{Jensen's inequality}\\
&= \beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a)+Q^{\pi_t}(s,a)-V^{\pi_t}(s))\\
&= \beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a)),
\end{align*}
where in the last line we used $\sum_{a\in\mathcal{A}}\pi_t(a|s)Q^{\pi_t}(s,a)=V^{\pi_t}(s)$.
For any starting distribution $\mu$ and policy $\pi$, we define the discounted visitation distribution as
\[
d^\pi_\mu(s) = (1-\gamma)\mathbb{E}_{s_0\sim\mu}\left[\sum_{t=0}^\infty \gamma^tPr^\pi(S_t=s|S_0 = s_0)\right].
\]
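For intuition, the visitation distribution can be estimated by Monte Carlo. The Python sketch below does this for a small, randomly generated tabular MDP; the transition kernel, policy, and horizon truncation are illustrative assumptions only.
```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
pi = rng.dirichlet(np.ones(A), size=S)       # pi[s] is a distribution over actions
mu = np.full(S, 1.0 / S)                     # starting distribution

def estimate_visitation(num_rollouts=5000, horizon=100):
    """Monte Carlo estimate of d^pi_mu(s) = (1 - gamma) * sum_t gamma^t Pr(S_t = s)."""
    d = np.zeros(S)
    for _ in range(num_rollouts):
        s = rng.choice(S, p=mu)
        discount = 1.0
        for _ in range(horizon):              # truncate the infinite sum
            d[s] += discount
            a = rng.choice(A, p=pi[s])
            s = rng.choice(S, p=P[s, a])
            discount *= gamma
    return (1 - gamma) * d / num_rollouts

print(estimate_visitation())   # nonnegative entries summing to roughly 1
```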
For any starting distribution $\mu$, the following inequality holds:
\begin{align*}
V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\geq\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim \mu}\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim \mu}\log Z_{t}(s),
\end{align*}
where for the ease of notation we denote $d^{\pi_t}_\mu\equiv d^t$.
For any starting distribution $\mu$, we have
\begin{align}
&V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)A^{\pi_t}(s,a)\tag{Performance Difference Lemma, \citep[Lemma 3.2]{agarwal2021theory}}\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(Q^{\pi_t}(s,a)-w_t^\top\phi(s,a)+w_t^\top\phi(s,a)-V^{\pi_t}(s))\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(Q^{\pi_t}(s,a)-w_t^\top\phi(s,a))\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right).\label{eq:3}
\end{align}
Consider the second term on the RHS of the previous equation. Using the definition of Kullback–Leibler (KL) divergence, we have
\begin{align*}
&\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right)\\
=\;&\mathbb{E}_{s\sim d^{t+1}}D_{\text{KL}}(\pi_{t+1}(\cdot|s)\mid \pi_t(\cdot|s))+\mathbb{E}_{s\sim d^{t+1}}\log Z_{t}(s)\\
\geq \;&\mathbb{E}_{s\sim d^{t+1}}\left[\log Z_{t}(s)-\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\right]\\
&+\beta\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\tag{KL divergence is non-negative}\\
\geq \;&(1-\gamma)\mathbb{E}_{s\sim \mu}\left[\log Z_{t}(s)-\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\right]\\
&+\beta\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\tag{$d^{t+1}\geq (1-\gamma)\mu$ and Lemma \ref{le:logzt}}.
\end{align*}
By substituting the previous inequality into Eq. (<ref>) we obtain
\begin{align*}
V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\geq \;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim \mu}\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim \mu}\log Z_{t}(s).
\end{align*}
The following equality holds for any starting distribution $\mu$ and $t\geq 0$:
\begin{align*}
&V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-w_t^\top\phi(s,a))+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right],
\end{align*}
where $d^*\equiv d^{\pi^*}_\mu$ is the discounted visitation distribution corresponding to the optimal policy.
Using the equivalent update rule of $\pi_{t}$ in (<ref>), for any $t\geq 0$ and $s\in\mathcal{S}$ we have
\begin{align}
&V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)A^{\pi_t}(s,a)\tag{Performance Difference Lemma}\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-w_t^\top \phi(s,a)+w_t^\top \phi(s,a)-V^{\pi_t}(s))\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-w_t^\top \phi(s,a))\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-w_t^\top \phi(s,a))+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right],\label{eq:Delta_V}
\end{align}
where the last line follows from the definition of KL divergence.
We now proceed to prove Proposition <ref>. Since Lemma <ref> holds for any distribution $\mu$, we apply it with $\mu=d^*$ to obtain
\begin{align*}
V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*)\geq\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}_{d^*}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi_t(a|s)(w_t^\top\phi(s,a)-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim d^*}\log Z_{t}(s),
\end{align*}
which implies
\begin{align}\label{eq:10}
\frac{1}{\beta}\mathbb{E}_{s\sim d^*}\log Z_{t}(s)\leq V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*)+\frac{3}{1-\gamma}\|\Phi w_t-Q^{\pi_t}\|_\infty.
\end{align}
Using (<ref>), for any $T\geq 1$ we have
\begin{align*}
&\sum_{t=0}^{T-1}\left[V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\right]\\
= \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-w_t^\top\phi(s,a))+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\\
&+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\\
\leq \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w_t\|_\infty+\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\left[V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*) + \frac{3}{1-\gamma}\|Q^{\pi_t}-\Phi w_t\|_\infty \right]\tag{Eq. (\ref{eq:10})}\\
&+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\\
\leq \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w_t\|_\infty+\frac{1}{1-\gamma}(V^{\pi_T}(d^*)-V^{\pi_0}(d^*))+\frac{3}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w_t\|_\infty\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_0(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{T}(\cdot|s))\right]\\
\leq \;&\frac{4}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w_t\|_\infty+\frac{1}{(1-\gamma)^2}+\frac{\log(|\mathcal{A}|)}{(1-\gamma)\beta}\\
\leq \;&\frac{4}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w_t\|_\infty+\frac{2}{(1-\gamma)^2},
\end{align*}
where the last line follows from $\beta=\log(|\mathcal{A}|)$.
Therefore, using the previous inequality and the definition of $\hat{T}$, we have:
\begin{align*}
V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]=\;& V^{\pi^*}(\mu)-\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[V^{\pi_t}(\mu)\right]\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty],
\end{align*}
which proves Proposition <ref>.
§.§ Proof of Theorem <ref>
Using the result of Proposition <ref>, for any starting distribution $\mu$, we have:
\begin{align}
&V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\nonumber\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\mathbb{E}[\|Q^{\pi_t}-\Phi w_t\|_\infty]\nonumber\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\left(\mathbb{E}[\|Q^{\pi_t}-\Phi w_{\pi_t}\|_\infty]+\mathbb{E}[\|\Phi w_{\pi_t}-\Phi w_t\|_\infty]\right)\tag{triangle inequality}\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4\xi}{(1-\gamma)^2}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\mathbb{E}[\|w_t-w_{\pi_t}\|_\infty],
\end{align}
where we recall that $\xi=\max_{\theta}\|Q^{\pi_\theta}-\Phi w_{\pi_\theta}\|_\infty$ and the last step also uses the boundedness of the feature vectors.
To control $\mathbb{E}[\|w_t-w_{\pi_t}\|_\infty]$, we apply Theorem <ref>. Since we choose the initial iterate $w_0=0$ in the critic, we can upper bound the constants $c_1$ and $c_2$ in Theorem <ref> by
\begin{align*}
c_1\leq c_3^2\quad \text{and}\quad c_2\leq 114 f(\gamma\zeta_{\max})^2c_3^2,
\end{align*}
where $c_3=1+\max_\pi\|w_\pi\|_2$. The following lemma provides a uniform bound on $\|w_\pi\|_2$ for any target policy $\pi$. The proof is presented in Appendix <ref>.
For any policy $\pi$, we have $\|w_\pi\|_2\leq \frac{2}{(1-\gamma)\sqrt{1-\gamma_c}\sqrt{\lambda_{\min}}}$.
Since $\pi_t$ is determined by $\{(S_i,A_i)\}_{0\leq i\leq t(K+n)}$ while $w_t$ is determined by $\{(S_i,A_i)\}_{t(K+n)\leq i\leq (t+1)(K+n)}$, using the Markov property and conditional expectation, by Theorem 2.1 we have
\begin{align*}
\mathbb{E}[\|w_t-w_{\pi_t}\|_\infty]\leq\;& \mathbb{E}[\|w_t-w_{\pi_t}\|_2]\\
\leq\;& \sqrt{\mathbb{E}[\|w_t-w_{\pi_t}\|_2^2]}\tag{Jensen's inequality}\\
\leq\;& c_3(1-(1-\gamma_c)\lambda_{\min} \alpha)^{\frac{K-(t_\alpha+n+1)}{2}}+\frac{11c_3 f(\gamma\zeta_{\max})[\alpha (t_\alpha+n+1)]^{1/2}}{(1-\gamma_c)^{1/2}\lambda_{\min}^{1/2}}.
\end{align*}
Finally, by substituting the previous inequality into Eq. (<ref>), we get
\begin{align*}
V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4\xi}{(1-\gamma)^2}\\
&+\frac{4c_3}{(1-\gamma)^2}(1-(1-\gamma_c)\lambda_{\min} \alpha)^{\frac{K-(t_\alpha+n+1)}{2}}\\
&+\frac{44 c_3 f(\gamma\zeta_{\max})[\alpha (t_\alpha+n+1)]^{1/2}}{(1-\gamma_c)^{1/2}(1-\gamma)^2\lambda_{\min}^{1/2}}.
\end{align*}
This proves Theorem <ref>.
§.§ Proof of Lemma <ref>
For any policy $\pi$, using the projected Bellman equation $\Phi w_\pi=\Pi_{\kappa_b}\mathcal{T}_\pi^n(\Phi w_\pi)$ we have
\begin{align*}
\|Q^\pi-\Phi w_\pi\|_{\kappa_b}^2&=\|Q^\pi-\Pi_{\kappa_b}Q^\pi+\Pi_{\kappa_b}Q^\pi-\Phi w_\pi\|_{\kappa_b}^2\\
&=\|Q^\pi-\Pi_{\kappa_b}Q^\pi\|_{\kappa_b}^2+\|\Phi w_\pi-\Pi_{\kappa_b}Q^\pi\|_{\kappa_b}^2\tag{Babylonian–Pythagorean theorem}\\
&=\|Q^\pi-\Pi_{\kappa_b}Q^\pi\|_{\kappa_b}^2+\|\Pi_{\kappa_b}\mathcal{T}_\pi^n(\Phi w_\pi)-\Pi_{\kappa_b}\mathcal{T}_\pi^n(Q^\pi)\|_{\kappa_b}^2\\
&\leq \|Q^\pi-\Pi_{\kappa_b}Q^\pi\|_{\kappa_b}^2+\gamma_c^2\|\Phi w_\pi-Q^\pi\|_{\kappa_b}^2.
\end{align*}
It follows that
\begin{align*}
\|Q^\pi-\Phi w_\pi\|_{\kappa_b}&\leq \frac{1}{\sqrt{1-\gamma_c^2}}\|\Pi_{\kappa_b}Q^\pi-Q^\pi\|_{\kappa_b}\\
&\leq \frac{1}{\sqrt{1-\gamma_c^2}}\|Q^\pi\|_{\kappa_b}\tag{Babylonian–Pythagorean theorem}\\
&\leq \frac{1}{(1-\gamma)\sqrt{1-\gamma_c^2}}.
\end{align*}
Using the triangle inequality we get
\begin{align*}
\|w_\pi\|_2&\leq \frac{1}{\sqrt{\lambda_{\min}}}\|\Phi w_\pi\|_{\kappa_b}\\
&\leq \frac{1}{\sqrt{\lambda_{\min}}}\left(\|Q^\pi\|_{\kappa_b}+\frac{1}{(1-\gamma)\sqrt{1-\gamma_c^2}}\right)\\
&\leq \frac{2}{(1-\gamma)\sqrt{1-\gamma_c^2}\sqrt{\lambda_{\min}}}\\
&\leq \frac{2}{(1-\gamma)\sqrt{1-\gamma_c}\sqrt{\lambda_{\min}}}.
\end{align*}
§.§ Proof of Corollary <ref>
For a given accuracy $\epsilon>0$, in order to achieve
\begin{align*}
V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]\leq \epsilon+\frac{3\xi}{(1-\gamma)^2},
\end{align*}
in light of Theorem <ref> and Lemma <ref>, we must have
\begin{align*}
T&\sim \mathcal{O}\left(\frac{1}{\epsilon(1-\gamma)^2}\right)\\
\alpha &\sim \mathcal{O}\left(\frac{\epsilon^2}{\log(1/\epsilon)}\right)\Tilde{\mathcal{O}}\left(\frac{(1-\gamma_c)^2(1-\gamma)^6\lambda_{\min}^2}{nf(\gamma\zeta_{\max})^2}\right)\\
K&\sim \mathcal{O}\left(\frac{\log^2(1/\epsilon)}{\epsilon^2}\right)\Tilde{\mathcal{O}}\left(\frac{nf(\gamma\zeta_{\max})^2}{(1-\gamma_c)^3(1-\gamma)^6\lambda_{\min}^3}\right).
\end{align*}
Therefore, the total sample complexity is
\begin{align*}
\end{align*}
§ DISCUSSION ABOUT SAMPLE COMPLEXITY
For completeness, we restate here the argument from <cit.> that explains the issues with the definition of sample complexity when the error does not go to zero. Consider a convergence bound of the form
\begin{align*}
\text{Error}\leq \frac{1}{T}+\mathcal{E}_0,
\end{align*}
where $\mathcal{E}_0$ is a constant bias term, and $T$ is the number of iterations. For example, in our case, $\mathcal{E}_0$ represents the function approximation bias. By using the AM-GM inequality, we have
\begin{align}
\text{Error}&\leq \left(\frac{1}{\mathcal{E}_0^{N-1}T^N}\mathcal{E}_0^{N-1}\right)^{1/N}+\mathcal{E}_0\nonumber\\
&\leq \frac{1}{N\mathcal{E}_0^{N-1}}\frac{1}{T^N}+\left(2-\frac{1}{N}\right)\mathcal{E}_0,\label{eq:error}
\end{align}
which leads to the misleading interpretation of obtaining $\mathcal{O}(\epsilon^{-1/N})$ sample complexity for any $N\geq 1$. See Appendix C of [34] for a more detailed discussion.
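As a concrete illustration (our own, with $N=2$), the same manipulation gives
\begin{align*}
\text{Error}\;\leq\;\frac{1}{T}+\mathcal{E}_0\;=\;\left(\frac{1}{\mathcal{E}_0T^{2}}\cdot\mathcal{E}_0\right)^{1/2}+\mathcal{E}_0\;\leq\;\frac{1}{2\mathcal{E}_0T^{2}}+\frac{3}{2}\mathcal{E}_0,
\end{align*}
which seems to show that an error of $\epsilon+\frac{3}{2}\mathcal{E}_0$ is reachable with only $T=\mathcal{O}(\epsilon^{-1/2})$ iterations. The catch is the hidden constant $\frac{1}{2\mathcal{E}_0}$, which blows up as $\mathcal{E}_0\to 0$, together with the inflated asymptotic error $\frac{3}{2}\mathcal{E}_0$ instead of $\mathcal{E}_0$.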
A simple way to identify the problem in the previous derivation is to consider the special case where $\mathcal{E}_0=0$, which corresponds to using $\Phi=I_{|\mathcal{S}||\mathcal{A}|}$ in our NAC algorithm (i.e., the tabular setting). In this case, since the RHS of Eq. (<ref>) is infinite, the convergence bounds in Eq. (<ref>) and Eq. (<ref>) are meaningless. In contrast, when $\Phi=I_{|\mathcal{S}||\mathcal{A}|}$ and hence $\xi=0$, our Theorem <ref> still provides a meaningful finite-sample bound. In fact, it coincides with the finite-sample bounds of tabular NAC provided in [34] when the two truncation levels within the $Q$-trace algorithm are large enough. Therefore, the issue of trading off asymptotic error and convergence rate using the AM-GM inequality is not present in our results.
§ CONVERGENCE OF QNPG
In this section we establish $\mathcal{O}(1/T)$ convergence of QNPG, improving upon the $\mathcal{O}(1/\sqrt{T})$ result in [1].
Consider an arbitrary (possibly dependent on policy $\pi$) distribution $\nu^\pi$ over the states of the MDP. For an arbitrary policy $\pi$, define
\[
w^{\pi}\in\argmin_w \mathbb{E}_{s\sim\nu^{\pi},a\sim\pi(\cdot|s)}[(Q^{\pi}(s,a)-w^\top \phi(s,a))^2].
\]
Note that the solution to the projected Bellman equation <ref> is denoted by $w_\pi$, which can in general be different from $w^\pi$.
The general QNPG algorithm is presented in Algorithm <ref>.
General QNPG
Input: $T$, $\beta$, $\theta_0$, features $\phi(s,a)\in\mathbb{R}^d$ for all $s,a$, distribution function $\pi\rightarrow\nu^\pi$
For $t=0,1,\dots,T-1$:
  Evaluate $w^{\pi_{\theta_t}}$
  $\theta_{t+1} =\theta_t + \beta w^{\pi_{\theta_t}}$
Output: $\theta_{\hat{T}}$, where $\hat{T}$ is sampled uniformly from $\{0,\dots,T-1\}$.
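A minimal Python sketch of this loop is shown below. The policy-evaluation step `evaluate_w` is a hypothetical placeholder for whatever routine returns $w^{\pi_\theta}$ (e.g. least-squares regression of $Q^{\pi_\theta}$ onto the features under $\nu^{\pi_\theta}$); it is not prescribed by the algorithm itself.
```python
import numpy as np

def qnpg(evaluate_w, d, T=100, beta=1.0, seed=0):
    """Sketch of general QNPG: theta_{t+1} = theta_t + beta * w^{pi_theta_t}.

    evaluate_w: callable mapping theta to an estimate of w^{pi_theta}
                (assumed policy-evaluation oracle).
    d:          dimension of the feature/parameter vector.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    iterates = [theta.copy()]
    for _ in range(T):
        w = evaluate_w(theta)        # critic: fit Q^{pi_theta} with Phi w
        theta = theta + beta * w     # actor: soft policy-iteration style update
        iterates.append(theta.copy())
    # Return theta_{T_hat} with T_hat drawn uniformly from {0, ..., T-1}.
    return iterates[rng.integers(T)]
```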
Define the worst-case function-approximation error
\[
\xi_{error}=\max_\pi \|Q^\pi-\Phi w^\pi\|_\infty.
\]
We have the following theorem:
The general QNPG Algorithm <ref> with step size $\beta\geq\log(|\mathcal{A}|)$ satisfies the following
\[
V^*-\mathbb{E}[V^{\pi_{\theta_{\hat{T}}}}] \leq \frac{2}{(1-\gamma)^2T}+\frac{4}{(1-\gamma)^2}\xi_{error},
\]
where the expectation is only with respect to the randomness in $\hat{T}$.
§.§ Proof of Theorem <ref>
Throughout this section, we denote $\pi_t\equiv\pi_{\theta_t}$. Using Lemma <ref>, we have
\begin{align}
\pi_{t+1}(a|s)&=\pi_t(a|s)\frac{\exp(\beta\phi(s,a)^\top w^{\pi_t} )}{\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta \phi(s,a')^\top w^{\pi_t})}\nonumber\\
&=\pi_t(a|s)\frac{\exp(\beta (\phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s)))}{\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta (\phi(s,a')^\top w^{\pi_t}-V^{\pi_t}(s)))}\nonumber\\
&=\pi_t(a|s)\frac{\exp(\beta( \phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s)))}{Z_t(s)},\label{eq:22}
\end{align}
where $Z_t(s)=\sum_{a'\in\mathcal{A}}\pi_t(a'|s)\exp(\beta (\phi(s,a')^\top w^{\pi_t}-V^{\pi_t}(s)))$. First, we state three supporting lemmas for the proof of Theorem <ref>.
For any $t\geq 0$ and $s\in\mathcal{S}$, we have the following lower bound for $\log (Z_t(s))$
\begin{align*}
\log(Z_{t}(s))\geq\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a)).
\end{align*}
For any starting distribution $\mu$ and $t\geq 0$, the following inequality holds:
\begin{align*}
V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\geq\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim \mu}\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim \mu}\log Z_{t}(s),
\end{align*}
where for the ease of notation we denote $d^{\pi_t}_\mu\equiv d^t$.
The following equality holds for any starting distribution $\mu$ and $t\geq 0$:
\begin{align*}
&V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\\
=&\;\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t})+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right],
\end{align*}
where $d^*\equiv d^{\pi^*}_\mu$ is the discounted visitation distribution corresponding to the optimal policy.
We now proceed to prove Theorem <ref>. Since Lemma <ref> holds for any distribution $\mu$, we apply it with $\mu=d^*$ to obtain
\begin{align*}
V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*)\geq\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}_{d^*}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim d^*}\log Z_{t}(s),
\end{align*}
which implies
\begin{align}\label{eq:102}
\frac{1}{\beta}\mathbb{E}_{s\sim d^*}\log Z_{t}(s)\leq V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*)+\frac{3}{1-\gamma}\|\Phi w^{\pi_t}-Q^{\pi_t}\|_\infty.
\end{align}
Using Lemma <ref>, for any $T\geq 1$ we have
\begin{align*}
&\sum_{t=0}^{T-1}\left[V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\right]\\
= \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t})+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\\
&+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\\
\leq \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty+\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\left[V^{\pi_{t+1}}(d^*)-V^{\pi_t}(d^*) + \frac{3}{1-\gamma}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty \right]\tag{Eq. (\ref{eq:102})}\\
&+\frac{1}{(1-\gamma)\beta}\sum_{t=0}^{T-1}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\\
\leq \;&\frac{1}{1-\gamma}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty+\frac{1}{1-\gamma}(V^{\pi_T}(d^*)-V^{\pi_0}(d^*))+\frac{3}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_0(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{T}(\cdot|s))\right]\\
\leq \;&\frac{4}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty+\frac{1}{(1-\gamma)^2}+\frac{\log(|\mathcal{A}|)}{(1-\gamma)\beta}\\
\leq \;&\frac{4}{(1-\gamma)^2}\sum_{t=0}^{T-1}\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty+\frac{2}{(1-\gamma)^2},
\end{align*}
where the last line follows from $\beta=\log(|\mathcal{A}|)$.
Therefore, using the previous inequality and the definition of $\hat{T}$, we have:
\begin{align*}
V^{\pi^*}(\mu)-\mathbb{E}\left[V^{\pi_{\hat{T}}}(\mu)\right]=\;& V^{\pi^*}(\mu)-\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[V^{\pi_t}(\mu)\right]\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4}{(1-\gamma)^2T}\sum_{t=0}^{T-1}\mathbb{E}[\|Q^{\pi_t}-\Phi w^{\pi_t}\|_\infty]\\
\leq\;& \frac{2}{(1-\gamma)^2 T}+\frac{4}{(1-\gamma)^2}\xi_{error},
\end{align*}
which completes the proof of Theorem <ref>.
Since $w^{\pi}$ is defined as $w^{\pi}\in\argmin_w \mathbb{E}_{s\sim\nu^{\pi},a\sim\pi(\cdot|s)}[(Q^{\pi}(s,a)-\phi(s,a)^\top w)^2]$, one might be interested in an upper bound based on the error
\[
\epsilon_{bias}=\max_\pi \mathbb{E}_{s\sim\nu^{\pi},a\sim\pi(\cdot|s)}[(Q^{\pi}(s,a)-\phi(s,a)^\top w^{\pi})^2].
\]
The following corollary provides a bound in terms of this error.
The general QNPG Algorithm <ref> satisfies the following
\[
V^*-\mathbb{E}[V^{\pi_{\theta_{\hat{T}}}}] \leq \frac{2}{(1-\gamma)^2T}+\frac{4}{(1-\gamma)^2}\sqrt{\frac{\epsilon_{bias}}{\lambda}},
\]
where $\lambda=\min_{\pi,s,a}\nu^\pi(s)\pi(a|s)$ and the expectation is only with respect to the randomness in $\hat{T}$.
The proof follows immediately from Theorem <ref> and the norm inequality $\|Q^{\pi}-\Phi w^{\pi}\|_\infty\leq \frac{1}{\sqrt{\lambda}} \sqrt{\mathbb{E}_{s\sim\nu^{\pi},a\sim\pi(\cdot|s)}[(Q^{\pi}(s,a)-\phi(s,a)^\top w^{\pi})^2]}$.
§.§ Proof of Auxiliary lemmas
Using the equivalent update rule (<ref>) of $\pi_{t}$, we have for any $t\geq 0$ and $s\in\mathcal{S}$:
\begin{align*}
\log(Z_{t}(s))&=\log\left[\sum_{a\in\mathcal{A}}\pi_t(a|s)\exp(\beta (\phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s)))\right]\\
&\geq \beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s))\tag{Jensen's inequality}\\
&=\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a)),
\end{align*}
where in the last line we used $\sum_{a\in\mathcal{A}}\pi_t(a|s)Q^{\pi_t}(s,a)=V^{\pi_t}(s)$.
For any starting distribution $\mu$, we have
\begin{align}
&V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)A^{\pi_t}(s,a)\tag{Performance Difference Lemma, \citep[Lemma 3.2]{agarwal2021theory}}\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t}+\phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s))\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t})\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right).\label{eq:32}
\end{align}
Consider the second term on the RHS of the previous equation. Using the definition of Kullback–Leibler (KL) divergence, we have
\begin{align*}
&\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right)\\
=\;&\mathbb{E}_{s\sim d^{t+1}}D_{\text{KL}}(\pi_{t+1}(\cdot|s)\mid \pi_t(\cdot|s))+\mathbb{E}_{s\sim d^{t+1}}\log Z_{t}(s)\\
\geq \;&\mathbb{E}_{s\sim d^{t+1}}\left[\log Z_{t}(s)-\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\right]\\
&+\beta\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\tag{KL divergence is non-negative}\\
\geq \;&(1-\gamma)\mathbb{E}_{s\sim \mu}\left[\log Z_{t}(s)-\beta\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\right]\\
&+\beta\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\tag{$d^{t+1}\geq (1-\gamma)\mu$ and Lemma \ref{le:logzt2}}.
\end{align*}
By substituting the previous inequality into Eq. (<ref>) we obtain
\begin{align*}
V^{\pi_{t+1}}(\mu)-V^{\pi_t}(\mu)\geq \;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}(\pi_t(a|s)-\pi_{t+1}(a|s))(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))\\
&-\mathbb{E}_{s\sim \mu}\sum_{a\in\mathcal{A}}\pi_t(a|s)(\phi(s,a)^\top w^{\pi_t}-Q^{\pi_t}(s,a))+\frac{1}{\beta}\mathbb{E}_{s\sim \mu}\log Z_{t}(s).
\end{align*}
Using the equivalent update rule of $\pi_{t}$ in Eq. (<ref>), for any $t\geq 0$ and $s\in\mathcal{S}$ we have
\begin{align}
&V^{\pi^*}(\mu)-V^{\pi_t}(\mu)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)A^{\pi_t}(s,a)\tag{Performance Difference Lemma}\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t}+\phi(s,a)^\top w^{\pi_t}-V^{\pi_t}(s))\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t})\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-\phi(s,a)^\top w^{\pi_t})+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\log(Z_{t}(s))\nonumber\\
&+\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right].\label{eq:Delta_V2}
\end{align}
§ GLOBAL CONVERGENCE WITH LINEAR Q-FUNCTION
Throughout this section we denote $\pi_t\equiv\pi_{\theta_t}$. Consider NPG Algorithm <ref>.
Natural Policy Gradient Algorithm
Input: $T$, $\beta$, $\theta_0=0$, features $\phi(s,a)\in\mathbb{R}^d$ for all $s,a$
For $t=0,1,\dots,T-1$:
  Evaluate the unique solution of $\Phi w=\Pi_{\kappa_b}\mathcal{T}_{\pi_t}^n(\Phi w)$ as $w_{\pi_t}$
  $\theta_{t+1} =\theta_t + \beta w_{\pi_t}$
Output: $\theta_{T}$
In this section we prove the following fact.
Suppose the $Q$-function corresponding to all the policies in the parametrized space is linearly realizable. In other words, suppose $Q^{\pi_\theta}(s,a)= w_{\pi_\theta}^\top\phi(s,a)$ for all $s,a$ and $\theta\in\mathbb{R}^d$. Then, for an arbitrary distribution $\rho$, NPG Algorithm <ref> converges to the global optimal policy as $V^{\pi^*}(\rho)-V^{\pi_{T}}(\rho)\leq \frac{\log(|\mathcal{A}|)}{(1-\gamma)\beta (T+1)} + \frac{1}{(1-\gamma)^2(T+1)}$.
Two remarks regarding Fact <ref> are in order. First, we emphasize that this fact is evident from our convergence bound in Theorem <ref>. In particular, due to the assumption on the feature vectors, it is easy to see that $\xi=0$. Furthermore, due to the deterministic update of Eq. (<ref>), we can substitute $A_3=A_4=0$. Hence we obtain a $1/T$ rate for global convergence of the update in Eq. (<ref>); the purpose of this section is to provide a different viewpoint on this result. Furthermore, note that all the policies obtained through the NPG update lie within the space of parameterized policies. In particular, the parameter $\theta_t$ of the policy $\pi_t$ is equal to $\theta_t=\sum_{l=0}^{t-1}\beta w_{\pi_l}$.
By Lemma <ref> it is easy to see that the update of Algorithm <ref> is equivalent to the update of the policy as follows
\begin{align}\label{eq:NPG_update}
\pi_{t+1}(a|s)=\frac{\pi_t(a|s)\exp(\beta w_{\pi_t}^\top\phi(s,a))}{\sum_{a'}\pi_t(a'|s)\exp(\beta w_{\pi_t}^\top\phi(s,a'))},
\end{align}
where $w_{\pi}$ is the solution of the projected Bellman equation <ref>.
Denote $Z_t(s)=\sum_{a'}\pi_t(a'|s)\exp(\beta w_{\pi_t}^\top\phi(s,a'))$. We have
\begin{align*}
\log Z_t(s) =& \log \sum_{a'}\pi_t(a'|s)\exp(\beta w_{\pi_t}^\top\phi(s,a'))\\
\geq& \sum_{a'} \pi_t(a'|s) \beta w_{\pi_t}^\top\phi(s,a')\tag{Jensen's inequality}\\
=&\sum_{a'} \pi_t(a'|s)\beta Q^{\pi_t}(s,a')\tag{linear realizability}\\
=&\beta V^{\pi_t}(s).
\end{align*}
For any distribution $\mu$, denote $d^{t}=d^{\pi_t}_\mu$. We have
\begin{align*}
V^{\pi_{t+1}}(\mu)-V^{\pi_{t}}(\mu) =& \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)A^{\pi_t}(s,a)\\
=&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(Q^{\pi_t}(s,a)-V^{\pi_t}(s))\\
=&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)(w_{\pi_t}^\top\phi(s,a)-V^{\pi_t}(s))\\
=&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s) w_{\pi_t}^\top\phi(s,a) - \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)V^{\pi_t}(s)\\
=&\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^{t+1}}\sum_{a\in\mathcal{A}}\pi_{t+1}(a|s)\log\frac{\pi_{t+1}(a|s)Z_t(s)}{\pi_t(a|s)} - \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}V^{\pi_t}(s)\\
=&\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^{t+1}}\left[D_{KL}(\pi_{t+1}(\cdot|s)||\pi_t(\cdot|s))+\log Z_t(s)\right] - \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}V^{\pi_t}(s)\\
\geq & \frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^{t+1}}\log Z_t(s) - \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}V^{\pi_t}(s)\tag{positivity of KL-divergence}\\
= & \frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{t+1}}\left[\frac{1}{\beta}\log Z_t(s) - V^{\pi_t}(s)\right]\\
\geq & \mathbb{E}_{s\sim \mu}\left[\frac{1}{\beta}\log Z_t(s) - V^{\pi_t}(s)\right]\geq0.\tag{by definition of $d^{t+1}$}
\end{align*}
Note that the above inequality shows monotonic improvement of the update in NPG.
For an arbitrary distribution $\rho$, denote $d^*\equiv d^{\pi^*}_\rho$. We have
\begin{align}
&V^{\pi^*}(\rho)-V^{\pi_t}(\rho)\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)A^{\pi_t}(s,a)\tag{Performance Difference Lemma}\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(Q^{\pi_t}(s,a)-V^{\pi_t}(s))\nonumber\\
=\;&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)(w_{\pi_t}^\top\phi(s,a)-V^{\pi_t}(s))\nonumber\\
=&\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\sum_{a\in\mathcal{A}}\pi^*(a|s)\log\left(\frac{\pi_{t+1}(a|s)}{\pi_t(a|s)}Z_{t}(s)\right)-\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}V^{\pi_t}(s)\nonumber\\
=&\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\nonumber\\
&+\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^*}\left[\frac{1}{\beta}\log\left(Z_{t}(s)\right)-V^{\pi_t}(s)\right]\nonumber\\
\leq&\frac{1}{(1-\gamma)\beta}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_t(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{t+1}(\cdot|s))\right]\nonumber\\
&+\frac{1}{1-\gamma}\left[V^{\pi_{t+1}}(d^*)-V^{\pi_{t}}(d^*)\right].\label{eq:NPG_V_gap}
\end{align}
Summing up both sides of the above inequality, we get
\begin{align*}
V^{\pi^*}(\rho)-V^{\pi_{T-1}}(\rho)\leq& \frac{1}{T}\sum_{t=0}^{T-1}\left(V^{\pi^*}(\rho)-V^{\pi_{t}}(\rho)\right)\tag{monotonic improvement of NPG}\\
\leq& \frac{1}{(1-\gamma)\beta T}\mathbb{E}_{s\sim d^*}\left[D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_0(\cdot|s))-D_{\text{KL}}(\pi^*(\cdot|s)\mid \pi_{T}(\cdot|s))\right]\\
&+\frac{1}{(1-\gamma)T}\left[V^{\pi_{T}}(d^*)-V^{\pi_{0}}(d^*)\right]\tag{by Eq. \eqref{eq:NPG_V_gap}}\\
\leq & \frac{\log(|\mathcal{A}|)}{(1-\gamma)\beta T} + \frac{1}{(1-\gamma)^2T}.
\end{align*}
# Towards 6DoF Bilateral Teleoperation of an Omnidirectional Aerial Vehicle
for Aerial Physical Interaction
Mike Allenspach, Nicholas Lawrance, Marco Tognon, and Roland Siegwart
The authors are with the Autonomous Systems Lab, ETH Zürich, Zürich 8092, Switzerland. This research was in part supported by the National Center of Competence in Research (NCCR) on Digital Fabrication, in part by NCCR Robotics, and in part by Armasuisse Science and Technology.
###### Abstract
Bilateral teleoperation offers an intriguing solution towards shared autonomy
with aerial vehicles in contact-based inspection and manipulation tasks.
Omnidirectional aerial robots allow for full pose operations, making them
particularly attractive in such tasks. Naturally, the question arises whether
standard bilateral teleoperation methodologies are suitable for use with these
vehicles. In this work, a fully decoupled 6DoF bilateral teleoperation
framework for aerial physical interaction is designed and tested for the first
time. The method is based on the well established rate control, recentering
and interaction force feedback policy. However, practical experiments evince
the difficulty of performing decoupled motions in a single axis only. As such,
this work shows that the trivial extension of standard methods is insufficient
for omnidirectional teleoperation, due to the operator’s physical inability to
properly decouple all input DoFs. This suggests that further studies on
enhanced haptic feedback are necessary.
## I INTRODUCTION
The physical interaction between flying robots and the environment has gained
increasing interest in the robotics community in recent years. Aerial robots
with manipulation capabilities have already been successfully deployed in a
variety of interaction tasks [1, 2, 3]. The low-cost, high maneuverability and
nearly unlimited workspace of these aerial manipulators allow deployment in
hard-to-reach or remote places, as well as when contact-based inspection and
maintenance are too dangerous for human operators [4, 5]. In this regard, OMAV
offer a particularly compelling solution. Their capability to generate thrust
in any direction allows hovering at arbitrary orientations, as well as
independently controlling position and attitude. Thus, these platforms are
capable of precise motion and interaction force control, while simultaneously
rejecting disturbances [6, 7, 8].
Despite recent advances in autonomous control of these vehicles (e.g. [9, 10,
11]), existing regulations and safety requirements often still require a human
operator in the loop. Real-time inclusion of the human expert’s knowledge is
especially important when complex tasks must be performed in uncertain or
a-priori unknown environments, considering the limited decisional autonomy of
modern robots. Transferring the operator supervision and decision making
skills to the remote site requires careful design of teleoperation systems. On
the one hand, taking full control of the robot and exploiting its capabilities
is only possible if every DoF is individually controllable by the operator. In
view of the recent trend towards omnidirectional aerial manipulators,
teleoperation frameworks must therefore simultaneously provide fully decoupled
commands for all three translational and rotational axes. On the other hand,
meaningful system information must be reflected back to the operator through
haptic and/or visual feedback. This concept of bilateral teleoperation
improves the user situational awareness, which in turn supports their decision
making process [12].
State-of-the-art bilateral teleoperation approaches are almost exclusively
focused on underactuated platforms for which the operator can control the
position and yaw angle only. Naturally, the question arises if an extension to
omnidirectional vehicles is straightforward or if additional considerations
and/or problems must be addressed. Specifically, the goal of this work is to
evaluate whether it is possible to teleoperate an OMAV using standard
methodologies, in both contact-less and contact-based conditions.
### I-A Related Work
Early works on teleoperation methods for aerial robots primarily considered
contact-free flight rather than interaction [13, 14], mostly focusing on
direct control of the vehicles and the obstacle avoidance problem. An
alternative teleoperation strategy was suggested in [15], where the operator
indirectly steers the micro aerial vehicle (MAV) by modifying the parameters of a dynamic path.
Hereby, user feedback includes information about tracking performance or the
presence of obstacles.
Only recently, bilateral teleoperation of MAV has been extended to aerial
physical interaction as well. One of the earliest works applying bilateral teleoperation to aerial physical interaction was presented in [16], but it was restricted to simulation. The authors proposed the use of a haptic device
with three actuated translational DoFs to command both the motion of the
vehicle and the interaction force when in contact, mapping the position of the
input device into the desired acceleration of the MAV. Simultaneously, the
device renders a feedback force aimed to recenter the input device, as well as
to provide an indication of the measured interaction force. A similar
reference and feedback generation scheme is used in [17], although
environmental forces are estimated using a risk field interaction model.
Relying on standard rate control and interaction force feedback, the framework
in [18, 19] makes use of passivity theory to ensure stability even under
communication delays.
The methods presented thus far focus exclusively on underactuated platforms
and are naturally limited to the position control of the vehicle. In fact, [7]
is the only work where the topic of omnidirectional bilateral teleoperation
for aerial robots is addressed. However, since it is not the main contribution
of the article, specifics regarding the implementation of reference and
feedback generation schemes, as well as detailed discussions are missing.
Furthermore, although the framework seems to support decoupled 6DoF bilateral
teleoperation in theory, the experimental verification is limited to
translational motion only.
Even though the extension of well established methods may appear trivial, a
detailed evaluation is still missing. It is important to note that standard
solutions developed for ground manipulators (e.g. Leader-Follower-
Configuration) are unsuitable for aerial robotics, since the limited input
workspace must be mapped to a virtually unlimited robot workspace. Additional
issues arise due to the complexity of the $\mathsf{SE}(3)$ control space,
containing an open vector space for translation and a closed isometry for
orientation. In summary, effective bilateral teleoperation for OMAV remains an
unsolved problem, let alone its application in physical aerial interaction
tasks.
### I-B Contributions
In summary, the main contributions of this work are:
* •
Design of a fully decoupled 6DoF teleoperation framework by extending the
established rate control, recentering and interaction force feedback policy to
$\mathsf{SE}(3)$.
* •
Evaluation of the proposed policy in real-world flight experiments, including
free-flight omnidirectional reference generation, as well as push-and-slide
operation during physical interaction.
* •
Discussion about limitations of standard policies, namely the operator’s
physical inability to properly decouple all input DoFs.
As such, this study serves as a first step towards remote controlled
omnidirectional aerial physical interaction, by identifying working features
and potential issues when adapting standard methods used for underactuated
MAV.
(a) Local environment: human and haptic device.
(b) Remote work environment: aerial robot and task objects.
Figure 1: Representation of the 6DoF bilateral teleoperation setup.
Figure 2: Detailed interactions between the different components of the proposed teleoperation framework (local environment, reference generation, feedback generation, and remote environment).
## II Modeling
The system considered in a bilateral teleoperation framework consists of a
human operator, a haptic device, and the robot (in our case OMAV). Human and
haptic device constitute the local environment and are connected through a
virtual communication link to the robot in the remote work environment (see
Fig. 1). The human operator is in constant contact with the handle of the
haptic device which allows them to perform small-scale translational and
rotational perturbations. Hereby, the mechanical construction and actuation of
the input device must support decoupled 6DoF motion and force/torque feedback
rendering.
To describe the local configuration, we define the inertial frame
$\mathcal{F}_{M}=\\{O_{M},\bm{x}_{M},\bm{y}_{M},\bm{z}_{M}\\}$ with origin
$O_{M}$ and unit axes $\\{\bm{x}_{M},\bm{y}_{M},\bm{z}_{M}\\}$ corresponding
to the idle pose of the haptic device’s handle. Its current position and
orientation are captured by the frame
$\mathcal{F}_{H}=\\{O_{H},\bm{x}_{H},\bm{y}_{H},\bm{z}_{H}\\}$ with origin
$O_{H}$ rigidly attached to the handle. In particular, it will become clear
that when $\mathcal{F}_{H}$ and $\mathcal{F}_{M}$ coincide, the desired rate
commanded to the aerial robot is zero. In a similar fashion, two additional
frames are defined for the remote environment. The inertial world frame,
$\mathcal{F}_{W}=\\{O_{W},\bm{x}_{W},\bm{y}_{W},\bm{z}_{W}\\}$ is located at
an arbitrary origin point $O_{W}$, such that $\bm{z}_{W}$ is opposite to
gravity. Finally, the state of the aerial vehicle is described by the body
frame $\mathcal{F}_{S}=\\{O_{S},\bm{x}_{S},\bm{y}_{S},\bm{z}_{S}\\}$ whose
origin $O_{S}$ coincides with the OMAV’s center-of-mass and $\bm{x}_{S}$
points along the end-effector used during physical contact. A right-hand
superscript, e.g. $\bm{\star}^{S}$, is used if a vector is not represented in
its original frame.
### II-A Human Operator
Let $\bm{p}_{H}\in\mathbb{R}^{3}$ and $\bm{v}_{H}\in\mathbb{R}^{3}$, both
expressed in $\mathcal{F}_{M}$, denote the position and velocity of $O_{H}$
with respect to $O_{M}$, i.e. the haptic device end-effector position and
velocity with respect to the idle configuration. Similarly, we define the
attitude and angular rate of $\mathcal{F}_{H}$ with respect to
$\mathcal{F}_{M}$ as ${\bm{R}^{M}_{H}}\in\mathsf{SO}(3)$ and
$\bm{\omega}_{H}\in\mathbb{R}^{3}$, the latter expressed in $\mathcal{F}_{H}$.
Since the human is in constant contact with the end-effector, its pose and
twist values are similar to the ones of the end-effector. The dynamic relation
between these values is modeled in $\mathcal{F}_{H}$ as:
$\displaystyle\bm{M}_{H}\begin{bmatrix}{\dot{\bm{v}}_{H}}^{H}\\\
{\dot{\bm{\omega}}_{H}}\end{bmatrix}+\bm{D}_{H}\begin{bmatrix}\bm{v}_{H}^{H}\\\
\bm{\omega}_{H}\end{bmatrix}=-\bm{\tau}_{H}+\bm{\tau}_{H,act},$ (1)
where $\bm{\tau}_{H,act}\in\mathbb{R}^{6}$ are wrenches (stacked forces and
torques) from the muscles and $\bm{\tau}_{H}\in\mathbb{R}^{6}$ are the
interaction wrenches with the haptic device. The human inherent inertia and
damping are denoted as $\bm{M}_{H}\in\mathbb{R}^{6\times 6}$ and
$\bm{D}_{H}\in\mathbb{R}^{6\times 6}$.
Following standard practice in both remote control (RC) and manned rotary-wing vehicle
piloting, it is assumed that the visual frame of the human is identical to
$\mathcal{F}_{S}$. In fact, the robot is commonly equipped with an onboard
camera that streams images back to the operator during remote operations.
However, the human’s point of view can be changed as desired by modifying the
coordinate frame conventions used in Section III accordingly.
### II-B Haptic Device
Since the inertia and damping of the human dynamics in (1) are generally
unknown, the haptic device manipulator is interacting with an unknown
environment with low impedance. As explained in detail in [20], this is
generally undesired for torque-controlled systems, since the contact
constraints can not be accurately described. To still ensure compliant
interaction and haptic transparency, an admittance filter is introduced and
combined with a low-level joint position controller. Assuming perfect
tracking, the closed-loop robot arm dynamics can then be approximated as:
$\displaystyle{\bm{M}_{adm}}\begin{bmatrix}{\dot{\bm{v}}_{H}}^{H}\\\
{\dot{\bm{\omega}}_{H}}\end{bmatrix}+{\bm{D}_{adm}}\begin{bmatrix}\bm{v}_{H}^{H}\\\
\bm{\omega}_{H}\end{bmatrix}=\bm{\tau}_{H}+\bm{\tau}_{fb,total},$ (2)
with inertia
${\bm{M}_{adm}}=diag({\bm{M}_{adm,t}},{\bm{M}_{adm,r}})\in\mathbb{R}^{6\times
6}$ and damping
${\bm{D}_{adm}}=diag({\bm{D}_{adm,t}},{\bm{D}_{adm,r}})\in\mathbb{R}^{6\times
6}$ tuning matrices and where $\bm{\tau}_{fb,total}\in\mathbb{R}^{6}$ are the
desired feedback wrenches to be applied to the user (see Section III-B).
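To illustrate how the admittance dynamics (2) are typically realized in software, the following Python sketch integrates them with a forward-Euler step; the time step, gains, and input wrenches are placeholder values, not the ones used in our experiments.
```python
import numpy as np

dt = 0.001                                    # 1 kHz filter update (illustrative)
M_adm = np.diag([10.0] * 3 + [1.0] * 3)       # admittance inertia (translation, rotation)
D_adm = np.diag([5.0] * 3 + [1.0] * 3)        # admittance damping

def admittance_step(twist, tau_H, tau_fb_total):
    """One Euler step of M_adm * d(twist)/dt + D_adm * twist = tau_H + tau_fb_total."""
    acc = np.linalg.solve(M_adm, tau_H + tau_fb_total - D_adm @ twist)
    return twist + dt * acc

# Example: constant 2 N human push along x, no feedback wrench.
twist = np.zeros(6)                           # [v_H^H; omega_H] of the handle
for _ in range(5000):
    twist = admittance_step(twist, np.array([2.0, 0, 0, 0, 0, 0]), np.zeros(6))
print(twist)   # approaches the steady-state twist D_adm^{-1} * tau
```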
### II-C OMAV
We denote the position and velocity of the OMAV’s center-of-mass $O_{S}$ with
respect to $\mathcal{F}_{W}$ with $\bm{p}_{S}\in\mathbb{R}^{3}$ and
$\bm{v}_{S}\in\mathbb{R}^{3}$. The orientation and angular rate of
$\mathcal{F}_{S}$ with respect to $\mathcal{F}_{W}$ is given as
${\bm{R}^{W}_{S}}\in\mathsf{SO}(3)$ and $\bm{\omega}_{S}\in\mathbb{R}^{3}$,
the latter expressed in $\mathcal{F}_{S}$. To allow for compliant interaction,
we assume that the robot is controlled by an impedance controller similar to
the one presented in [11]. Thus, the rendered closed-loop dynamics in
$\mathcal{F}_{S}$ are
$\displaystyle\bm{M}_{v}\begin{bmatrix}\dot{\bm{v}}_{S}^{S}\\\
\dot{{\bm{\omega}}}_{S}\end{bmatrix}+\bm{D}_{v}\begin{bmatrix}\bm{e}_{v}\\\
\bm{e}_{\omega}\end{bmatrix}+\bm{K}_{v}\begin{bmatrix}\bm{e}_{p}\\\
\bm{e}_{R}\end{bmatrix}=\hat{\bm{\tau}}_{ext}.$ (3)
The virtual inertia $\bm{M}_{v}\in\mathbb{R}^{6\times 6}$, damping
$\bm{D}_{v}\in\mathbb{R}^{6\times 6}$ and stiffness
$\bm{K}_{v}\in\mathbb{R}^{6\times 6}$ are tuning parameters of the on-board
controller and $\hat{\bm{\tau}}_{ext}\in\mathbb{R}^{6}$ describes external
disturbances acting on the platform. In the context of this work, we assume
that such external disturbances originate solely from physical interaction of
the robot with the environment.
Given a desired position $\bm{p}_{S,ref}\in\mathbb{R}^{3}$ and velocity
$\bm{v}_{S,ref}\in\mathbb{R}^{3}$ in $\mathcal{F}_{W}$, attitude
${\bm{R}^{W}_{S,ref}}\in\mathsf{SO}(3)$ and angular rate
$\bm{\omega}_{{S,ref}}\in\mathbb{R}^{3}$ in $\mathcal{F}_{S,ref}$ of the OMAV,
the tracking errors are defined in $\mathcal{F}_{S}$ as
$\displaystyle\bm{e}_{p}$
$\displaystyle={\bm{R}^{W}_{S}}^{\top}\left({\bm{p}_{S}}-{\bm{p}_{S,ref}}\right)$
(4) $\displaystyle\bm{e}_{R}$
$\displaystyle=\frac{1}{2}({\bm{R}^{W}_{S,ref}}^{\top}{\bm{R}^{W}_{S}}-{\bm{R}^{W}_{S}}^{\top}{\bm{R}^{W}_{S,ref}})^{\vee}$
(5) $\displaystyle\bm{e}_{v}$
$\displaystyle={\bm{R}^{W}_{S}}^{\top}\left(\bm{v}_{S}-\bm{v}_{S,ref}\right)$
(6) $\displaystyle\bm{e}_{\omega}$
$\displaystyle=\bm{\omega}_{S}-{\bm{R}^{W}_{S}}^{\top}{\bm{R}^{W}_{S,ref}}\bm{\omega}_{{S,ref}},$
(7)
where the vee-map $(\cdot)^{\vee}:\mathfrak{so}{(3)}\rightarrow\mathbb{R}^{3}$
is the inverse of the skew-symmetric operator
$[\cdot]_{\times}:\mathbb{R}^{3}\rightarrow\mathfrak{so}{(3)}$.
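For reference, a small Python sketch of how the errors (4)-(7) and the vee-map can be evaluated from rotation matrices is given below; the numeric inputs are made up for illustration.
```python
import numpy as np

def vee(M):
    """Inverse of the skew-symmetric operator: so(3) -> R^3."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def tracking_errors(p, v, R, omega, p_ref, v_ref, R_ref, omega_ref):
    """Impedance tracking errors (4)-(7), all expressed in the body frame F_S."""
    e_p = R.T @ (p - p_ref)
    e_R = 0.5 * vee(R_ref.T @ R - R.T @ R_ref)
    e_v = R.T @ (v - v_ref)
    e_w = omega - R.T @ R_ref @ omega_ref
    return e_p, e_R, e_v, e_w

# Example: 5 degree yaw offset between the current and reference attitude.
yaw = np.deg2rad(5.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
_, e_R, _, _ = tracking_errors(np.zeros(3), np.zeros(3), R, np.zeros(3),
                               np.zeros(3), np.zeros(3), np.eye(3), np.zeros(3))
print(e_R)   # approximately [0, 0, sin(5 deg)]
```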
## III TELEOPERATION
An overview of the omnidirectional bilateral teleoperation framework proposed
in this work is presented in Fig. 2. The employed reference and feedback
generation policies are explained in detail in the following sections.
### III-A Rate Control Reference Generation
Rate control is a well established method to teleoperate aerial vehicles,
since it provides an intuitive mapping between the restricted input workspace
and the potentially infinite robot workspace [21]. However, in the state-of-
the-art literature, it is often limited to translation only due to the
underactuated nature of standard fixed-rotor MAV. This restriction does not
apply to OMAV, which is why the concept is extended in this work to include
rotational rate control as well. Essentially, any translational or rotational
deviation between $\mathcal{F}_{H}$ and $\mathcal{F}_{M}$ is translated into a
corresponding translational velocity or angular rate reference for the robot.
Based on this, the translational references are computed as
$\displaystyle\bm{v}_{S,ref}^{S}$ $\displaystyle=v_{max}\bm{p}_{H}$ (8a)
$\displaystyle\bm{p}_{S,ref}^{S}$
$\displaystyle=\int_{0}^{t}\bm{v}_{S,ref}^{S}(s)ds,$ (8b)
and similarly for rotation
$\displaystyle\bm{\omega}_{{S,ref}}^{S}$
$\displaystyle=\frac{\omega_{max}}{2}\left({\bm{R}^{M}_{H}}-{\bm{R}^{M}_{H}}^{\top}\right)^{\vee}$
(9a) $\displaystyle{\bm{R}^{W}_{S,ref}}$
$\displaystyle=\int_{0}^{t}{\bm{R}^{W}_{S,ref}}(s)[\bm{\omega}_{{S,ref}}(s)]_{\times}ds,$
(9b)
where $v_{max}$ and $\omega_{max}$ are used to tune how fast the vehicle
moves. In accordance with the assumption of the human’s point of view from an
onboard camera, all references are provided in the body frame
$\mathcal{F}_{S}$.
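A compact Python sketch of the rate-control mapping (8)-(9) is shown below; the integration is a simple first-order scheme and the numerical values (e.g. the handle displacement) are illustrative only.
```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(M):
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

v_max, w_max, dt = 1.0, 1.0, 0.01

def rate_control_step(p_H, R_MH, p_ref, R_ref):
    """Map the handle pose (p_H, R^M_H) to OMAV references, Eqs. (8)-(9)."""
    v_ref = v_max * p_H                                 # (8a)
    w_ref = 0.5 * w_max * vee(R_MH - R_MH.T)            # (9a)
    p_ref = p_ref + dt * v_ref                          # (8b), Euler integration
    R_ref = R_ref @ (np.eye(3) + dt * skew(w_ref))      # (9b), first-order integration
    return v_ref, w_ref, p_ref, R_ref

# Handle displaced 0.1 m along x with no rotation: commands a 0.1 m/s forward rate.
v_ref, w_ref, p_ref, R_ref = rate_control_step(
    np.array([0.1, 0.0, 0.0]), np.eye(3), np.zeros(3), np.eye(3))
print(v_ref, w_ref)
```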
### III-B Feedback Generation
As stated in Section I, the design of adequate feedback wrenches is crucial to
ensure ease of operation and situational awareness for the human operator.
Hereby, the overall feedback wrench $\bm{\tau}_{fb,total}\in\mathbb{R}^{6}$
expressed in $\mathcal{F}_{M}$ is often a combination of multiple
contributions, each one representing a different aspect of the current task
(e.g. object avoidance, guiding towards a waypoint). In the context of this
work, we restrict our analysis to the well established recentering
$\bm{\tau}_{fb,rec}\in\mathbb{R}^{6}$ and interaction wrench
$\bm{\tau}_{fb,ext}\in\mathbb{R}^{6}$ feedback. Eventually, the total feedback
wrench takes the form
$\displaystyle\bm{\tau}_{fb,total}=\bm{\tau}_{fb,rec}+\bm{\tau}_{fb,ext},$
(10)
with $\bm{\tau}_{fb,rec}$ and $\bm{\tau}_{fb,ext}$ computed as explained
below.
#### III-B1 Recentering Wrench $\bm{\tau}_{fb,rec}$
When using rate control reference generation, the most essential type of
feedback is the recentering. The recentering wrench $\bm{\tau}_{fb,rec}$ in
$\mathcal{F}_{M}$ aims to move the haptic device’s end-effector back to its
idle pose, in other words make $\mathcal{F}_{H}$ identical to
$\mathcal{F}_{M}$:
$\displaystyle\bm{\tau}_{fb,rec}=-\bm{K}_{rec}\begin{bmatrix}\bm{p}_{H}\\\
\frac{1}{2}\left({\bm{R}^{M}_{H}}-{{\bm{R}^{M}_{H}}}^{\top}\right)^{\vee}\end{bmatrix},$
(11)
where the stiffness
$\bm{K}_{rec}=diag(\bm{K}_{rec,t},\bm{K}_{rec,r})\in\mathbb{R}^{6\times 6}$ is
a tuning parameter. Under rate control, recentering the end-effector will
cause the robot to slow down and eventually hold position and attitude. In
that sense, adding a virtual spring on the human side translates to the
addition of a virtual damper on the robot side. Without recentering feedback,
it would be almost impossible to manually zero the haptic device in all six
directions and achieve static hover. Additionally, it allows the human
operator to let go of the handle at any time and the robot will automatically
stabilize at its current pose.
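The recentering feedback (11) amounts to a 6D spring on the handle pose; a minimal Python sketch with illustrative gains:
```python
import numpy as np

K_rec = np.diag([50.0] * 3 + [2.0] * 3)   # translational / rotational stiffness (illustrative)

def vee(M):
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def recentering_wrench(p_H, R_MH):
    """Eq. (11): spring wrench pulling the handle back to its idle pose F_M."""
    pose_error = np.concatenate([p_H, 0.5 * vee(R_MH - R_MH.T)])
    return -K_rec @ pose_error

print(recentering_wrench(np.array([0.1, 0.0, 0.0]), np.eye(3)))   # pulls back along -x
```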
When targeting applications involving physical contact with the environment
however, the recentering wrench is no longer sufficient to ensure situational
awareness. Especially in cases where the camera view might not be conclusive
about whether the robot is in contact or not, additional interaction-specific
information must be provided.
#### III-B2 Interaction Wrench $\bm{\tau}_{fb,ext}$
In state-of-the-art literature, interaction specific feedback involves
reflecting the measured or estimated forces at the contact point
$\bm{\tau}_{c,1:3}\in\mathbb{R}^{3}$ back to the operator, i.e. the forces
being applied by the environment to the aerial robot expressed in
$\mathcal{F}_{S}$. This work proposes an extension for omnidirectional
vehicles, whereby the torques $\bm{\tau}_{c,4:6}\in\mathbb{R}^{3}$ in
$\mathcal{F}_{S}$ acting on the vehicle at the contact point are also
included. Considering the offset between the vehicle’s center-of-mass and the
part of it that is in contact (i.e. the tool)
$\bm{r}_{O_{S}T}\in\mathbb{R}^{3}$ in $\mathcal{F}_{S}$, the interaction
wrench feedback is then given as:
$\displaystyle\bm{\tau}_{fb,ext}=\begin{bmatrix}\bm{\tau}_{c,1:3}\\\
\bm{\tau}_{c,4:6}\end{bmatrix}+\begin{bmatrix}\bm{0}\\\
\bm{r}_{O_{S}T}\times\bm{\tau}_{c,1:3}\end{bmatrix},$ (12)
where $\bm{\tau}_{fb,ext}$ is expressed in $\mathcal{F}_{M}$, again using the
assumption that $\mathcal{F}_{M}$ is aligned with $\mathcal{F}_{S}$. Notice
that this corresponds to the external wrench $\hat{\bm{\tau}}_{ext}$ acting on
the vehicle’s center-of-mass during interaction, effectively representing the
same wrench a human being would feel when holding the tool.
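Similarly, a short sketch of the interaction feedback (12); the tool offset and the measured contact wrench are placeholder values.
```python
import numpy as np

r_OST = np.array([0.6, 0.0, 0.0])    # tool offset from the center of mass in F_S (illustrative)

def interaction_wrench(tau_c):
    """Eq. (12): shift the measured contact wrench from the tool to the center of mass."""
    force, torque = tau_c[:3], tau_c[3:]
    return np.concatenate([force, torque + np.cross(r_OST, force)])

# Example: a 3 N lateral reaction force at the tool also produces a yaw torque at the
# center of mass. The total rendered wrench (10) adds the recentering contribution.
tau_c = np.array([0.0, -3.0, 0.0, 0.0, 0.0, 0.0])
print(interaction_wrench(tau_c))
```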
### III-C Stability Considerations
In this paper, no formal proof of the stability of the teleoperation system is
provided. While this will be the focus of future work, some stability-related
aspects are briefly discussed here. The structure of our proposed framework,
namely rate control in combination with recentering and environment force
feedback, has some similarity with the scheme presented in [14]. In that
paper, stability of the teleoperation loop was proven when subject to bounded
operator and external forces. Although an underactuated system is considered,
that proof suggests the existence of similar formal guarantees for the case of
omnidirectional vehicles. Furthermore, the flight experiments presented in the
following section already verify the practical stability of the developed
teleoperation policy.
## IV RESULTS
Figure 3: Decoupled translational (top) and rotational (bottom) reference
generation. Solid lines and corresponding shaded areas indicate the mean and
standard deviation of the operator inputs, respectively, and dashed lines
indicate the mean total feedback wrench over five repeated trials. Coupling
effects related to unintended input agitation despite recentering wrench can
be observed, especially $\bm{y}_{S},\bm{z}_{S}$ coupling during translation
inputs and $\bm{x}_{S},\bm{y}_{S}$ translation during rotation inputs.
### IV-A Experimental Setup
Indoor flight experiments are conducted to evaluate the capabilities and
performance of the proposed bilateral teleoperation setup. The employed system
is shown in Fig. 1, consisting of a 7DoF Franka Emika Panda arm with a handle
attached to the end-effector for a haptic device and the OMAV based on the
tiltrotor aerial platform introduced in [22]. A measure of the interaction
force between the human and the robot arm is obtained at
$800\text{\,}\mathrm{Hz}$ with a 6-axis Rokubi force-torque sensor mounted
between the end-effector and the handle. The Panda arm is running the default
cartesian velocity controller in combination with an admittance filter,
effectively rendering the closed-loop dynamics in (2). The admittance gains
are set as low as possible to improve haptic transparency, while still
ensuring a minimum dissipation to maintain system stability. Reference
generation and recentering parameters are tuned to render robot velocities and
feedback wrenches comfortable for the user. The specific values used during
experiments are listed in Table I. It should be noted that the framework can
easily be combined with other types of haptic devices, given they support 6DoF
motion input and wrench feedback rendering.
Parameter | Value | Parameter | Value
---|---|---|---
${\bm{M}_{adm,t}}$ | $10\,\bm{I}_{3\times 3}$ [kg] | ${\bm{M}_{adm,r}}$ | $\bm{I}_{3\times 3}$ [kg m${}^2$]
${\bm{D}_{adm,t}}$ | $5\,\bm{I}_{3\times 3}$ [kg/s] | ${\bm{D}_{adm,r}}$ | $\bm{I}_{3\times 3}$ [kg m${}^2$/s]
$v_{max}$ | $1$ [1/s] | $\omega_{max}$ | $1$ [1/s]
$\bm{K}_{rec,t}$ | $50\,\bm{I}_{3\times 3}$ [N/m] | $\bm{K}_{rec,r}$ | $2\,\bm{I}_{3\times 3}$ [N m]
TABLE I: System parameters used during the experiments.
The remote environment is equipped with a motion capture (MOCAP) system, providing pose measurements
for the OMAV at $100\text{\,}\mathrm{Hz}$. An EKF-based state estimator
provides the full state estimate
$\bm{p}_{S},\bm{v}_{S},{\bm{R}^{W}_{S}},\bm{\omega}_{S}$ required by the
impedance controller, fusing MOCAP with onboard inertial measurement unit (IMU) data (accelerometer and
gyroscope). The controller is tuned according to [11]. A safety tether is
connected to the OMAV but kept slack to limit external disturbances.
Additionally, a rigid rod of approximately $0.6\text{\,}\mathrm{m}$ length
with a soft ball at the end is attached to the robot and acts as the
interaction tool for contact-based tasks. The rod provides sufficient
clearance for the propellers to allow successful interaction without the risk
of collision. The ball acts as a mechanical damper to soften hard impacts.
Interaction forces are measured by a 6-axis Rokubi force-torque sensor mounted
between the OMAV and the rod. Hereby, the sensor data is transformed
accordingly to represent forces and torques acting on the vehicle center-of-
mass (see Section III-B).
Note that the human is directly looking at the robot, instead of viewing from
the robot perspective as mentioned in earlier sections. However, their visual
frame is still well aligned with $\mathcal{F}_{S}$, since the haptic device is
placed directly behind the robot and the experiments only involve small
attitude changes ($<10^{\circ}$).
### IV-B Translational and Rotational Reference Generation
Recall that the aim of this paper is to analyze the suitability of standard
bilateral teleoperation methodologies when extended to 6DoF for
omnidirectional vehicles. As a first criterion, the operator must be able to
generate decoupled motion commands in all translational and rotational DoF, in
order to fully exploit the omnidirectional capabilities of the OMAV. This
requires the human to accurately render decoupled forces and torques at a
single interaction point, namely the handle of the haptic device. Thus,
evaluating whether they are physically and cognitively capable of performing
such manipulation is crucial for the feasibility of the proposed framework.
This is tested experimentally by repeatedly tasking an operator with
sequentially actuating each individual input axis without introducing motion
in other directions. The resulting translational and rotational reference
velocity statistics from five trials with a single operator over the
experiment duration $T$, as well as the total feedback wrench acting on the
handle are shown in Fig. 3. Note that this test only involves free flight
operation, meaning that the displayed feedback wrench only consists of
recentering actions, i.e. $\bm{\tau}_{fb,total}=\bm{\tau}_{fb,rec}$. The
effect of this recentering term is clearly visible, shown by the constant
wrench opposing the twist commands, aiming to restore the handle’s idle pose
in the local environment.
The results clearly show an unintended coupling between the different axes on
the input device. During the translational reference generation along
$\bm{y}_{S}$ ($\tilde{t}\in[0.15,0.3]$) for example, non-zero velocity
references in $\bm{z}_{S}$ can be observed. Similarly, when trying to move
along $\bm{z}_{S}$ only ($\tilde{t}\in[0.3,0.5]$), additional rotation along
$\bm{y}_{S}$ is accidentally introduced. A similar phenomenon is observed when
the user is tasked with performing rotations only. These coupling effects
worsen the performance when precise maneuvering is required, such as for high-
accuracy tasks or when operating in confined spaces.
In summary, producing decoupled reference inputs, especially a pure rotation at
the handle, appears to be physically challenging for the operator,
despite the supporting recentering wrench. Making the recentering gain
adaptive could help to constrain the user input to a single axis at a time, by
making the remaining axes more stiff. Hereby, the adaptive solution must still
allow full exploitation of the omnidirectional capabilities. A detailed study
of such methodologies is left for future work.
### IV-C Push-and-Slide Interaction
Apart from omnidirectional reference generation in contact-free conditions, a
bilateral teleoperation framework for an OMAV must allow the operator to
perform physical interaction tasks as well. We evaluate this requirement by
performing a push-and-slide operation with a whiteboard, as shown in Fig. 4.
This is a common task in contact-based inspection applications. During the
first phase of the experiment, the operator is asked to approach in a
direction normal to the whiteboard surface and push against it. In a second
phase, once contact with the board is established, the user is tasked with
sliding along $\bm{z}_{W}$. The resulting interaction wrench being fed back to
the user is shown in Fig. 5. Notice that the recentering wrench is not
included here, since it is not the primary focus of the experiment.
Additionally, the position of the OMAV in $\mathcal{F}_{W}$ is visualized to
highlight the motion with respect to the whiteboard surface.
When pushing against the board (highlighted in blue), a clear spike in
$-\bm{x}_{S}$ feedback force can be observed, indicating the presence and
intensity of the contact to the operator. The non-zero torque around
$-\bm{y}_{S}$ originates from the second term in (12) and is caused by a
misalignment between the surface normal $\bm{n}$ and $\bm{x}_{S}$ (see Fig. 4
with $\alpha<0$), resulting in an external pitching torque. During vertical
sliding, friction effects acting at the tool tip cause the same behavior.
While moving upwards (highlighted in orange), the tool lags behind due to the
high friction force. Since the tool and the connecting rod are rigidly
attached to the OMAV, this causes the vehicle to pitch down slightly,
producing the observed positive feedback torque around $\bm{y}_{S}$. The
opposite behavior occurs when sliding downwards (highlighted in yellow).
Compared to the omnidirectional reference generation, no immediate limitation
was detected when using the proposed extension of the standard interaction
force feedback. That being said, different experiments showed a degradation in
the performance of the flight controller, such as oscillations or tool
skipping, in the presence of large magnitude interaction and friction forces
(see also complementary video). In this regard, the effectiveness of the
provided feedback could be improved further by including information about the
robot state and its limitations.
Figure 4: Experimental setup for vertical sliding experiment along
$\bm{z}_{W}$. Uncompensated friction opposing the motion results in an angular
offset $\alpha$ between the surface normal $\bm{n}$ and the body x-axis
$\bm{x}_{S}$. Figure 5: Push-and-slide contact experiment. Contact with the
wall (the dashed line) and sliding along $\pm\bm{z}_{W}$ are shown. External
torques due to friction effects (see Fig. 4) can be observed.
## V CONCLUSION
This work investigates the suitability of standard bilateral teleoperation
methods in the context of OMAV. Based on a straightforward extension of the
well established rate control, recentering and interaction force feedback
policy, a bilateral teleoperation framework for an omnidirectional aerial
robot has been designed and evaluated. The human is in contact with the handle
of a haptic device and performs small-scale deviations from the idle pose,
thereby generating twist commands for the robot. Wrench feedback is provided
to the operator, on the one hand recentering the handle to restore its idle
pose and on the other hand reflecting external forces acting on the vehicle
when in contact with the environment.
Practical experiments including contact-free flight, as well as push-and-slide
operation during physical interaction are conducted to evaluate the potential
of the proposed approach. Although the operator is able to control all six
axes of the OMAV, performing decoupled motion in a single DoF only is
practically challenging. This is a fundamental issue of the straightforward
extension of standard methodologies to this new type of platform, which
shows the need for additional measures to suppress unintended inputs. Being
able to prevent coupling effects is absolutely mandatory, especially when
precise maneuvering is required for high-accuracy tasks and in confined
spaces. Future work will focus on addressing this problem through the use of
adaptive axes stiffness and human intention detection.
In summary, effective bilateral teleoperation for OMAV cannot be achieved by
simple extension of standard teleoperation methodologies but rather requires
more sophisticated policies to fully exploit the capabilities of these
vehicles.
## References
* [1] M. Tognon, H. A. Tello Chávez, E. Gasparin, Q. Sablé, D. Bicego, A. Mallet, M. Lany, G. Santi, B. Revaz, J. Cortés, and A. Franchi, “A truly redundant aerial manipulator system with application to push-and-slide inspection in industrial plants,” _IEEE Robotics and Automation Letters_ , vol. 4, no. 2, pp. 1846–1851, 2019.
* [2] K. Bodie, M. Brunner, M. Pantic, S. Walser, P. Pfändler, U. Angst, R. Siegwart, and J. Nieto, “An omnidirectional aerial manipulation platform for contact-based inspection,” in _Proceedings of Robotics: Science and Systems_ , Freiburg im Breisgau, Germany, June 2019.
* [3] A. Ollero, M. Tognon, A. Suarez, D. Lee, and A. Franchi, “Past, Present, and Future of Aerial Robotic Manipulators,” _IEEE Transactions on Robotics_ , vol. 38, no. 1, pp. 626–645, 2022.
* [4] F. Ruggiero, V. Lippiello, and A. Ollero, “Aerial Manipulation: A Literature Review,” _IEEE Robotics and Automation Letters_ , vol. 3, no. 3, pp. 1957–1964, 2018.
* [5] X. Meng, Y. He, and J. Han, “Survey on Aerial Manipulator: System, Modeling, and Control,” _Robotica_ , vol. 38, no. 7, p. 1288–1317, 2020.
* [6] K. Bodie, M. Tognon, and R. Siegwart, “Dynamic end effector tracking with an omnidirectional parallel aerial manipulator,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 4, pp. 8165–8172, 2021.
* [7] S. Park, J. Lee, J. Ahn, M. Kim, J. Her, G. Yang, and D. Lee, “Odar: Aerial manipulation platform enabling omnidirectional wrench generation,” _IEEE/ASME Transactions on Mechatronics_ , vol. 23, no. 4, pp. 1907–1918, 2018.
* [8] D. Brescianini and R. D’Andrea, “An omni-directional multirotor vehicle,” _Mechatronics_ , vol. 55, pp. 76–93, 2018.
* [9] M. Ryll, G. Muscio, F. Pierri, E. Cataldi, G. Antonelli, F. Caccavale, D. Bicego, and A. Franchi, “6D interaction control with aerial robots: The flying end-effector paradigm,” _The International Journal of Robotics Research_ , vol. 38, no. 9, pp. 1045–1062, 2019.
* [10] M. A. Trujillo, J. R. Martínez-de Dios, C. Martín, A. Viguria, and A. Ollero, “Novel Aerial Manipulator for Accurate and Robust Industrial NDT Contact Inspection: A New Tool for the Oil and Gas Inspection Industry,” _Sensors_ , vol. 19, no. 6, 2019.
* [11] K. Bodie, M. Brunner, M. Pantic, S. Walser, P. Pfandler, U. Angst, R. Siegwart, and J. Nieto, “Active Interaction Force Control for Contact-Based Inspection With a Fully Actuated Aerial Vehicle,” _IEEE Transactions on Robotics_ , pp. 1–14, 2020.
* [12] A. Y. Mersha, S. Stramigioli, and R. Carloni, “On bilateral teleoperation of aerial robots,” _IEEE Transactions on Robotics_ , vol. 30, no. 1, pp. 258–274, 2014.
* [13] F. Schill, X. Hou, and R. Mahony, “Admittance mode framework for haptic teleoperation of hovering vehicles with unlimited workspace,” in _2010 Australasian Conf. on Robotics & Automation, (Brisbane, Australia)_, 12 2010.
* [14] H. Rifaï, M.-D. Hua, T. Hamel, and P. Morin, “Haptic-based bilateral teleoperation of underactuated Unmanned Aerial Vehicles,” _IFAC Proceedings Volumes_ , vol. 44, no. 1, pp. 13 782–13 788, 2011, 18th IFAC World Congress.
* [15] C. Masone, M. Mohammadi, P. R. Giordano, and A. Franchi, “Shared planning and control for mobile robots with integral haptic feedback,” _The International Journal of Robotics Research_ , vol. 37, no. 11, pp. 1395–1420, 2018.
* [16] G. Gioioso, M. Mohammadi, A. Franchi, and D. Prattichizzo, “A force-based bilateral teleoperation framework for aerial robots in contact with the environment,” in _2015 IEEE International Conference on Robotics and Automation (ICRA)_ , 2015, pp. 318–324.
* [17] S. Islam, R. Ashour, and A. Sunda-Meya, “Haptic and Virtual Reality Based Shared Control for MAV,” _IEEE Transactions on Aerospace and Electronic Systems_ , vol. 55, no. 5, pp. 2337–2346, 2019.
* [18] J. Lee, R. Balachandran, Y. S. Sarkisov, M. D. Stefano, A. Coelho, K. Shinde, M. J. Kim, R. Triebel, and K. Kondak, “Visual-Inertial Telepresence for Aerial Manipulation,” 2020.
* [19] A. Coelho, H. Singh, K. Kondak, and C. Ott, “Whole-body bilateral teleoperation of a redundant aerial manipulator,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_ , 2020, pp. 9150–9156.
* [20] Y. Zimmermann, E. B. Küçüktabak, F. Farshidian, R. Riener, and M. Hutter, “Towards Dynamic Transparency: Robust Interaction Force Tracking Using Multi-Sensory Control on an Arm Exoskeleton,” in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2020, pp. 7417–7424.
* [21] F. Conti and O. Khatib, “Spanning large workspaces using small haptic devices,” in _First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Haptics Conference_ , 2005, pp. 183–188.
* [22] M. Allenspach, K. Bodie, M. Brunner, L. Rinsoz, Z. Taylor, M. Kamel, R. Siegwart, and J. Nieto, “Design and optimal control of a tiltrotor micro-aerial vehicle for efficient omnidirectional flight,” _The International Journal of Robotics Research_ , vol. 39, no. 10-11, pp. 1305–1325, 2020.
# Independent Control and Path Planning of Microswimmers with a Uniform
Magnetic Field
###### Abstract
Artificial bacteria flagella (ABFs) are magnetic helical micro-swimmers that
can be remotely controlled via a uniform, rotating magnetic field. Previous
studies have used the heterogeneous response of microswimmers to external
magnetic fields for achieving independent control. Here we introduce
analytical and reinforcement learning control strategies for path planning to
a target by multiple swimmers using a uniform magnetic field. The comparison
of the two algorithms shows the superiority of reinforcement learning in
achieving minimal travel time to a target. The results demonstrate, for the
first time, the effective independent navigation of realistic micro-swimmers
with a uniform magnetic field in a viscous flow field.
###### keywords:
micro-swimmers, reinforcement learning, magnetically driven
Lucas Amoudruz Petros Koumoutsakos*
L. Amoudruz, Prof. P. Koumoutsakos
Computational Science and Engineering Laboratory, ETH Zürich, CH-8092,
Switzerland.
L. Amoudruz, Prof. P. Koumoutsakos
John A. Paulson School of Engineering and Applied Sciences, Harvard
University, Cambridge, MA, USA.
## 1 Introduction
The magnetic control of micro-swimming devices [1, 2, 3, 4, 5] for micro-
manipulation [6, 7], targeted drug delivery [8, 9] or convection-enhanced
transport [10] has created new frontiers for bio-medicine. A particularly
promising technology involves corkscrew-shaped magnetic micro-swimmers
(artificial bacterial flagella) that propel themselves when subjected to a
rotating magnetic field [11]. Rotating magnetic fields can form propulsive
gradients and they are arguably preferable to alternatives, such as electric
fields, for in-vivo conditions [12, 13]. However, the independent, yet
coordinated, control of individual ABFs is challenging as it requires
balancing between the magnetic forces and the hydrodynamic interactions
between the swimmers while the employed magnetic fields are practically
uniform over lengths of few micrometers. We note that independent navigation
of mm-sized micro-swimmers has been shown in [14] through experiments and
simulations, while in [15] a reinforcement learning (RL) algorithm was applied
to adjust the velocity of an idealized swimmer in simulations with one way
coupling with a complex flow field. Control of swimmers using two-way coupling
and RL have been demonstrated with linked-spheres at low Reynolds numbers [16]
and for artificial fish in macro-scales [17]. Similarly, genetic algorithms
have been used to navigate micro-swimmers towards high concentrations of
chemicals [18].
The problem of heterogeneous micro-robots navigation via a uniform input has
been studied in two dimensions on surfaces [19] and in a fluid at rest [20].
The steering of two micro-propellers along two distinct paths in 3 dimensions
has been accomplished with the help of magnetic fields gradients [21]. These
advances exploited the heterogeneous response of micro-swimmers to a uniform
input to achieve independent trajectories along a prescribed path. These
control methods are based on short horizon objectives (stay on the prescribed
path) and do not provide the trajectory that minimizes the travel time to a
target position, particularly in the presence of a background flow. In
addition, strong background flows restrict the set of feasible paths for given
micro-swimmers. To the best of our knowledge the steering of multiple micron-
sized swimmers towards a target in a minimal time under a background flow and
a uniform magnetic field has not been reported before.
In this work, we present two methods to independently guide two micro-ABFs
towards a single target in the presence of a uniform magnetic field. The two
methods rely on simulations of swimming ABFs using an ordinary differential
equation (ODE) model. The model is calibrated with the method of dissipative
particle dynamics (DPD) [22, 23], taking into account the particular geometry
of the swimmers and their interactions with the viscous fluid. We first
present a semi-analytical solution for the simple yet instructive setup of
multiple, geometrically distinct ABFs in free space, with zero background
flow. This result enables understanding of the design constraints for the ABFs
necessary for independent control and how their geometric characteristics
relate to their travel time. We then employ RL to control multiple ABFs
trajectories in a broad range of flow conditions including a non-zero
background flow.
## 2 Artificial bacterial flagella
The ABFs are modeled as microscopic rigid bodies of length $l$ with position
$\mathbf{x}$ and orientation $q$ (represented by a quaternion), immersed in a
viscous fluid and subjected to a rotating, uniform, magnetic field. We
estimate that the magnetic and hydrodynamic interactions between ABFs are
orders of magnitude smaller than those due to the magnetic field for dilute
systems (see supplementary material) and we ignore inertial effects due to
their low Reynolds number ($\mathrm{Re}\approx 10^{-3}$). Following this
approximation, the system is fully described by the position and orientation
of the ABFs.
Additionally, the linear and angular velocities of the ABF, $\mathbf{V}^{b}$
and $\bm{\Omega}^{b}$, are directly linked to the external force and torque,
$\mathbf{F}^{b}$ and $\mathbf{T}^{b}$, via the mobility matrix [24],
$\begin{bmatrix}\mathbf{V}^{b}\\ \bm{\Omega}^{b}\end{bmatrix}=\begin{bmatrix}\Delta&Z\\ Z^{\text{T}}&\Gamma\end{bmatrix}\begin{bmatrix}\mathbf{F}^{b}\\ \mathbf{T}^{b}\end{bmatrix},$ (1)
where the superscript b indicates that the quantity is expressed in the ABF
frame of reference, for which $\Delta$, $Z$ and $\Gamma$ are diagonal. The
matrices $\Gamma$ and $Z$ map the applied torque to the angular and linear
velocities, respectively. The ABFs are propelled by the torque exerted by a
magnetic field, and we assume that each ABF can swim only along its main axis,
so that $Z$ has a single non-zero entry ($Z_{11}$).
The coefficients in the mobility matrix are often estimated by empirical
formulas for low Reynolds number flows [25]. Here we estimate the components
of the mobility matrix for the specific ABF by conducting flow simulations
using DPD [22, 23], which we validate against experimental data of [8] (see
supplementary material). We remark that the shape (pitch, diameter, length,
thickness) of the ABF influences the elements of these matrices, and the present
approach accounts for these geometric effects.
The ABF with a magnetic moment $\mathbf{m}$ is subjected to a uniform magnetic
field $\mathbf{B}$ and hence experiences a torque
$\mathbf{T}=\mathbf{m}\times\mathbf{B}.$ (2)
No other external force is applied to the ABF, hence $\mathbf{F}=\mathbf{0}$.
Combining eq. 1 with the kinematic equations for a rigid body gives the
following system of ODEs:
$\dot{\mathbf{x}}=\mathbf{V},$ (3a)
$\dot{q}=\frac{1}{2}q\otimes\hat{\bm{\Omega}},$ (3b)
$\mathbf{V}^{b}=Z\mathbf{T}^{b},$ (3c)
$\bm{\Omega}^{b}=\Gamma\mathbf{T}^{b},$ (3d)
where $\otimes$ denotes the quaternion product, and $\hat{\bm{\Omega}}$ the
pure quaternion formed by the vector $\bm{\Omega}$. The transformations
between the laboratory frame of reference and that of the ABF are given by:
$\mathbf{T}^{b}=R(q)\mathbf{T},$ (4a)
$\mathbf{m}=R(q^{\star})\mathbf{m}^{b},$ (4b)
$\mathbf{V}=R(q^{\star})\mathbf{V}^{b},$ (4c)
$\bm{\Omega}=R(q^{\star})\bm{\Omega}^{b},$ (4d)
where $q^{\star}$ is the conjugate of $q$ and $R(q)$ is the rotation matrix
that corresponds to the rotation by a quaternion $q$ [26]. The system of
differential equations (2) and (3) is advanced in time with a fourth order
Runge-Kutta integrator.
We note that when simulating multiple non-interacting ABFs in free space, we
use the above ODE system for each swimmer with the common magnetic field but
different mobility coefficients and magnetic moments.
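As a concrete illustration, the following is a minimal sketch of integrating the single-ABF model of eqs. (1)-(4) with a fourth-order Runge-Kutta scheme, following the frame conventions of eqs. (3) and (4); the mobility entries, magnetic moment, and field amplitude are illustrative placeholders rather than the DPD-calibrated values used in the paper.

```python
import numpy as np

# Sketch of the single-ABF model of eqs. (1)-(4) under a rotating uniform field,
# integrated with RK4. Mobility entries, magnetic moment and field amplitude are
# illustrative placeholders, not the DPD-calibrated values used in the paper.
def quat_mult(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rot_matrix(q):
    # Rotation matrix associated with q; used as R(q) of eq. (4a), with R(q*) = R(q)^T.
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

Z11 = 1e-3                                  # single non-zero entry of Z (body frame)
Gamma = np.diag([2.0, 1.0, 1.0])            # rotational mobility (body frame)
m_b = np.array([0.0, 1e-14, 0.0])           # magnetic moment, perpendicular to the helix axis

def rhs(state, t, B_func):
    x, q = state[:3], state[3:] / np.linalg.norm(state[3:])
    R = rot_matrix(q)
    m = R.T @ m_b                              # eq. (4b): moment in the laboratory frame
    T = np.cross(m, B_func(t))                 # eq. (2)
    T_b = R @ T                                # eq. (4a)
    V_b = np.array([Z11 * T_b[0], 0.0, 0.0])   # eq. (3c): swims only along its main axis
    Omega_b = Gamma @ T_b                      # eq. (3d)
    V, Omega = R.T @ V_b, R.T @ Omega_b        # eqs. (4c), (4d)
    q_dot = 0.5 * quat_mult(q, np.concatenate([[0.0], Omega]))   # eq. (3b)
    return np.concatenate([V, q_dot])

def rk4_step(state, t, dt, B_func):
    k1 = rhs(state, t, B_func)
    k2 = rhs(state + 0.5*dt*k1, t + 0.5*dt, B_func)
    k3 = rhs(state + 0.5*dt*k2, t + 0.5*dt, B_func)
    k4 = rhs(state + dt*k3, t + dt, B_func)
    return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

# Example: field rotating in the y-z plane, as in Section 3 (illustrative values).
B_field = lambda t: 1e-3 * np.array([0.0, np.cos(10.0 * t), np.sin(10.0 * t)])
state = np.concatenate([np.zeros(3), [1.0, 0.0, 0.0, 0.0]])
for n in range(1000):
    state = rk4_step(state, n * 1e-3, 1e-3, B_field)
```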
## 3 Forward velocity
ABFs were designed to swim under a rotating, uniform magnetic field [27, 11].
We first study this scenario by applying the field
$\mathbf{B}(t)=B\left(0,\cos{\omega t},\sin{\omega t}\right)$ to ABFs
initially aligned with the $x$ axis of the laboratory frame. Note that in the
later sections, the magnetic field is able to rotate in any direction so that
the swimmers can navigate in three dimensions. We consider two ABFs with the
same length but different pitch and magnetic moments, as shown in fig. 1. In
both cases, the magnetic moment is perpendicular to the helical axis of the
ABF. Under these conditions, by symmetry of the problem, the swimmers swim
along the $x$ axis. The difference in pitch yields different coefficients of the
mobility matrix, which, together with the different magnetic moments, result
in distinct propulsion velocities for the two ABFs. For each ABF velocity we
distinguish a linear and a non-linear variation with respect to the frequency
of the magnetic field. First, the ABF rotates at the same frequency as the
magnetic field and its forward velocity increases linearly with the frequency
of the magnetic field, consistent with the low Reynolds approximation [28, 8,
29, 3]. In the non-linear regime, the magnetic torque is no longer able to
sustain the same frequency of rotation as the magnetic field. The onset of
non-linearity depends on the geometry and magnetic moment of the ABF as well
as the imposed magnetic field. Indeed, the magnitude of the magnetic torque is
bounded while that of the hydrodynamic torque increases linearly with the ABF
angular velocity $\Omega$. The torque imbalance at high rotation frequencies
causes the ABF to slip, resulting in an alternating forward and backward
motion (see supplementary material). Increasing the frequency further
increases the effective slip and accordingly decreases the forward velocity.
The two regimes are distinguished by the step-out frequency $\omega_{c}$
corresponding to the maximum forward velocity of the ABF.
Figure 1: Left: Dimensionless time averaged forward velocity of two ABFs,
differing in shape and magnetic moment, against the field rotation frequency
(in units of the step-out frequency of the first swimmer, $\omega_{c,1}$).
Right: The ABFs geometries. The arrows represent the magnetic moment of the
ABFs.
The differences in propulsion velocities for the ABFs can be exploited to
control independently their trajectories. The slope $V/\omega$ in the linear
regime depends only on the shape of the ABF. The step-out frequency depends on
both the shape and the magnetic moment (it can also be changed by varying the
surface wettability of the ABF [30]). These two properties can be chosen such
that the forward velocities of two ABFs react differently to the magnetic
rotation frequency (fig. 1). By changing $\omega$, it is then possible to
control the relative velocities of the two ABFs: one is faster than the other
in one regime, while the opposite holds in another. This simple
observation constitutes the key idea for independent control of several ABFs
even with a uniform magnetic field. We remark that, while this potential has
been previously identified [28, 30, 31, 32, 33], the control of similar
systems has been performed only in the simple case of free space, non-interacting
propellers and no background flow [21, 20]. To the best of our knowledge, this
is the first time that such independently controlled navigation of multiple
micro-swimmers is materialised in three dimensions with a complex background
flow. In the following sections we propose two methods to tackle the problem
of steering ABFs towards a target in a minimal amount of time.
## 4 Independent control I: semi-analytical solution
In the absence of an external flow field, we derive a semi-analytical strategy
for the navigation of $N$ ABFs towards a particular target. Each ABF has a
distinct magnetic moment and without any loss of generality, we set the target
position of all swimmers to the origin and define the initial position of the
$i^{\text{th}}$ ABF as $\mathbf{x}^{(i)}$. We assume that the time required by
one ABF to align with the rotation direction of the field is much smaller than
$|\mathbf{x}^{(i)}|/v$, where $v$ is the typical forward velocity of the ABF.
The proposed strategy consists in gathering all ABFs along one direction
$\mathbf{n}_{k}$ at a time, such that $\mathbf{x}^{(i)}\cdot\mathbf{n}_{k}=0$,
$i=1,2,\dots,N$ after phase $k$. We choose a sequence of orthogonal
directions, $\mathbf{n}_{k}\cdot\mathbf{n}_{k^{\prime}}=\delta_{kk^{\prime}}$.
The choice of the orientations of $\mathbf{n}_{k}$ is not restricted to the
basis vectors of the laboratory frame and is described at the end of this
section. In three dimensions, the strategy consists of three phases,
$k=1,2,3$, until all ABFs have reached their target: they first gather on a
plane, then on a line and finally to the target.
All ABFs are gathered along a given direction $\mathbf{n}_{k}$ by exploiting
the different forward responses of the ABFs when we alternate the frequency of
rotation of the magnetic field. More specifically, for $N$ ABFs, the field
rotates in the direction $\mathbf{n}_{k}$ for $t_{j}$ time units at frequency
$\omega_{c,j}$, $j=1,2,\dots,N$, where $\omega_{c,j}$ is the step-out
frequency of the $j^{\text{th}}$ swimmer. We define the velocity matrix with
elements $U_{ij}=V_{i}\left(\omega_{c,j}\right)$, denoting the velocity of
swimmer $i$ when the field rotates with the step out frequency of swimmer $j$.
We can relate the above quantities to the (signed) distances $d_{i}$ covered
by the ABFs as
$d_{i}=\sum\limits_{j=1}^{N}s_{j}t_{j}U_{ij},$
where $s_{j}\in\\{-1,1\\}$ determines if the field rotates
clockwise/counterclockwise. Equivalently, the vector form of the above is
$\mathbf{d}=U\mathbf{\beta}$, where $\beta_{j}=t_{j}s_{j}$. Setting
$d_{i}=\mathbf{x}^{(i)}\cdot\mathbf{n}_{k}$, we can invert this linear system
of equations for each phase $k$ and obtain the times spent at each step-out
frequency $\mathbf{\beta}=U^{-1}X\mathbf{n}_{k}$, where we have set
$X_{ij}=x^{(i)}_{j}$. We emphasize that this result holds only if the velocity
matrix is invertible, restricting the design of the ABFs to achieve
independent control. The total time spent at phase $k$ is then given by
$T(\mathbf{n}_{k})=\sum\limits_{i=1}^{N}t_{i}=\sum\limits_{i=1}^{N}|\beta_{i}|=\left\lVert
U^{-1}X\mathbf{n}_{k}\right\rVert_{1}.$
The yet unknown directions $\mathbf{n}_{k}$, $k=1,2,3$, are chosen to minimize
the total travel time. The directions are parameterized as
$\mathbf{n}_{k}=R(\phi,\theta,\psi)\mathbf{e}_{k}$, $k=1,2,3$, where
$R(\phi,\theta,\psi)$ is the rotation matrix given by the three Euler angles
$\phi$, $\theta$ and $\psi$. Note that this choice of handedness of the three
directions does not influence the final result. The optimal angles satisfy
$\phi^{\star},\theta^{\star},\psi^{\star}=\operatorname*{arg\,min}\limits_{\phi,\theta,\psi}{\sum\limits_{k=1}^{3}T\left(R(\phi,\theta,\psi)\mathbf{e}_{k}\right)}.$
We solve the above minimization problem numerically with derandomised
evolution strategy with covariance matrix adaptation (CMA-ES) [34] (see
supplementary material for the configuration of the optimizer).
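A compact numerical sketch of this strategy is given below: it computes the per-phase times from $\bm{\beta}=U^{-1}X\mathbf{n}_{k}$ and minimizes the total travel time over the Euler angles. The velocity matrix and initial positions are illustrative, and a generic optimizer stands in for the CMA-ES used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Sketch of the semi-analytical strategy of this section. U[i, j] = V_i(omega_c,j) is the
# (assumed invertible) velocity matrix and X[i, :] the initial position of swimmer i;
# the numbers are illustrative. A generic optimizer stands in for CMA-ES.
U = np.array([[2.0, 1.0],
              [1.5, 2.5]])
X = np.array([[30.0, 10.0, -5.0],
              [-20.0, 15.0, 8.0]])

def phase_time(n_k):
    """Time spent gathering the ABFs along direction n_k: ||U^{-1} X n_k||_1."""
    beta = np.linalg.solve(U, X @ n_k)      # beta_j = s_j * t_j
    return np.abs(beta).sum()

def total_time(euler_angles):
    R = Rotation.from_euler("zyx", euler_angles).as_matrix()
    return sum(phase_time(R[:, k]) for k in range(3))

res = minimize(total_time, x0=np.zeros(3), method="Nelder-Mead")
print("optimal Euler angles:", res.x, " total travel time:", res.fun)
```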
## 5 Independent control II: Reinforcement Learning
We now employ a RL approach to solve the problem introduced in section 4. Each
of the $N$ ABFs is initially placed at a random position
$\mathbf{x}_{i}\sim\mathcal{N}\left(\mathbf{x}_{i}^{0},\sigma\right)$,
$i=1,2,\dots,N$. The RL agent controls the magnetic field frequency of
rotation and direction, and has the goal of bringing all ABFs within a small
distance (here two body lengths, $d=2l$) from the target origin. This small
distance is justified by the assumption of non-interacting ABFs. The agent
sets the direction and magnitude of the magnetic field frequency every fixed
time interval. An episode is terminated if either of the two conditions occur:
(a) all ABFs reached the target within a small distance $d$, or (b) the
simulation time exceeds a maximum time $T_{\text{max}}$. The positions
$\mathbf{x}_{i}$ and orientations $q_{i}$ of the ABFs describe the state $s$
of the environment in the RL framework. The action performed by the agent
every $\Delta t$ time encodes the magnetic field rotation frequency and
orientation for the next time interval. The reward of the system is designed
so that all ABFs reach the target and the travel time is minimized.
Additionally, a shaping reward term [35] is added to improve the learning
process. The training is performed using VRACER, the off-policy actor critic
RL method described in [36]. More details on the method can be found in the
supplementary material.
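The episode structure described above can be summarized by the following gym-style skeleton, in which the state, termination conditions, and reward shaping follow the text; the class interface and reward weights are assumptions, the integration routine is a placeholder, and the actual training with VRACER (smarties) is not reproduced here.

```python
import numpy as np

# Gym-style skeleton of the episode structure described above; the class interface and
# the reward weights are assumptions, and advance_ode() is a placeholder for the RK4
# integration of the ABF model. Training with VRACER (smarties) is not reproduced here.
def advance_ode(x, q, omega, axis, dt):
    # Placeholder: integrate eqs. (3)-(4) for each swimmer over one action interval.
    return x, q

class ABFEnv:
    def __init__(self, n_abf=2, body_length=1.0, dt_action=1.0, t_max=500.0):
        self.n_abf, self.l = n_abf, body_length
        self.dt_action, self.t_max = dt_action, t_max
        self.target_radius = 2.0 * body_length        # success distance d = 2l
        self.reset()

    def reset(self):
        # Random initial positions around nominal starts (spread sigma is illustrative).
        self.x = np.random.normal([[30.0, 0.0, 0.0], [-30.0, 0.0, 0.0]], 2.0)
        self.q = np.tile([1.0, 0.0, 0.0, 0.0], (self.n_abf, 1))
        self.t = 0.0
        return self._state()

    def _state(self):
        return np.concatenate([self.x.ravel(), self.q.ravel()])

    def step(self, action):
        # The action encodes the field rotation frequency and axis for the next interval.
        omega, axis = action[0], action[1:4]
        self.x, self.q = advance_ode(self.x, self.q, omega, axis, self.dt_action)
        self.t += self.dt_action
        dists = np.linalg.norm(self.x, axis=1)
        done = bool(np.all(dists < self.target_radius) or self.t > self.t_max)
        # Time penalty plus a shaping term on the remaining distance to the target.
        reward = -self.dt_action - 0.01 * dists.sum()
        return self._state(), reward, done
```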
## 6 Reaching the targets
In this section, we demonstrate the effectiveness of the two methods
introduced in sections 4 and 5. We first consider 2 ABFs in free space with
zero background flow. Figure 2 shows the distance of the ABFs to their target
over time, and the corresponding magnetic field rotation frequency for both
methods. In both cases, the ABFs successfully reach their target.
Interestingly, the rotation frequencies chosen by the RL agent correspond to
the step-out frequencies of the ABFs. Indeed, these frequencies produce the
largest absolute velocity difference between the ABFs, so it is consistent
that they are part of the fastest solution found by the RL method.
Furthermore, the RL-trained policy reached the targets about $25\%$ faster than the semi-
analytical strategy. We remark that the RL solution amounts to first blocking
the forward motion of one swimmer while the other continues swimming (see fig.
3). The blocked swimmer is first reoriented such that its magnetic moment is
orthogonal to the plane of the magnetic field rotation, thus the resulting
magnetic torque applied to this swimmer is zero. On the other hand, the method
presented in section 4 makes both ABFs swim at all times, even if one of them
must go further from its target. In such situations, the “blocking” method
found by RL is advantageous over the other method.
Figure 2: Distance to target of the two controlled ABFs (in units of body
length $l$) against dimensionless time, in free space with zero background
flow, together with the corresponding magnetic field rotation frequency, where
$\omega_{c,1}$ is the step-out frequency of the first swimmer. Figure 3:
Trajectories of the ABFs from their initial positions (ABF representations) to
the target area (sphere) obtained with the two methods in three dimensions:
semi-analytical (dotted lines) and RL (solid lines). The arrows show the
successive axes of rotation of the magnetic field. The size of the ABFs has
been scaled up by a factor of $7$, for visualization purpose.
We now employ the RL method in the case of 2 ABFs swimming in a background
flow with non zero velocity. The assumptions required for deriving the semi-
analytical approach are violated and therefore we do not use this approach in
this case.
In the presence of a background flow $\mathbf{u}_{\infty}$, eqs. 4c and 4d
become
$\displaystyle\mathbf{V}=R(q^{\star})\mathbf{V}^{b}+\mathbf{u}_{\infty}(\mathbf{x}),\qquad
\bm{\Omega}=R(q^{\star})\bm{\Omega}^{b}+\frac{1}{2}\nabla\times\mathbf{u}_{\infty}(\mathbf{x})+\frac{\lambda^{2}-1}{\lambda^{2}+1}\mathbf{p}\times\left(E(\mathbf{x})\mathbf{p}\right),$
where we approximated the rotation component by the effect of the flow on an
axisymmetric ellipsoid of aspect ratio $\lambda$ (Jeffery orbits). Here
$E(\mathbf{x})=\left(\nabla\mathbf{u}_{\infty}(\mathbf{x})+\nabla\mathbf{u}_{\infty}^{T}(\mathbf{x})\right)/2$
is the deformation rate tensor of the background flow evaluated at the
swimmer’s position and $\mathbf{p}=R(q^{\star})\mathbf{e}_{x}$ is the
orientation of the ellipsoid. We used $\lambda=2$ in the subsequent
simulations. The background flow is set to the initial conditions of the
Taylor-Green vortex,
$\mathbf{u}_{\infty}(\mathbf{r})=\begin{bmatrix}A\cos{ax}\sin{by}\sin{cz}\\ B\sin{ax}\cos{by}\sin{cz}\\ C\sin{ax}\sin{by}\cos{cz}\end{bmatrix},$ (5)
with $A=B=C/2=V_{1}(\omega_{c,1})$ and $a=b=-c=2\pi/50l$. With these
parameters, the maximum velocity of the background flow is larger than the
maximum swimming speed of the ABFs.
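For reference, a minimal sketch of the background-flow contributions is given below: the Taylor–Green field of eq. (5) and the Jeffery-orbit correction to the angular velocity, with the velocity gradient evaluated by finite differences. The amplitude value is illustrative; the wave numbers and aspect ratio follow the values quoted in the text.

```python
import numpy as np

# Sketch of the background-flow terms: the Taylor-Green field of eq. (5) and the
# Jeffery-orbit correction to the angular velocity. The amplitude is illustrative;
# wave numbers and aspect ratio follow the values quoted in the text (l = body length).
l, A = 1.0, 1.0
B, C = A, 2.0 * A                  # A = B = C/2 = V_1(omega_c,1) in the paper
a = b = 2.0 * np.pi / (50.0 * l)
c = -a
lam = 2.0                          # ellipsoid aspect ratio for the Jeffery term

def u_inf(r):
    x, y, z = r
    return np.array([A * np.cos(a*x) * np.sin(b*y) * np.sin(c*z),
                     B * np.sin(a*x) * np.cos(b*y) * np.sin(c*z),
                     C * np.sin(a*x) * np.sin(b*y) * np.cos(c*z)])

def grad_u(r, eps=1e-6):
    # Finite-difference velocity gradient, g[i, j] = d u_i / d x_j.
    g = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = eps
        g[:, j] = (u_inf(r + dr) - u_inf(r - dr)) / (2.0 * eps)
    return g

def flow_contributions(r, p):
    """Additive linear and angular velocities for a swimmer at r with orientation p."""
    g = grad_u(r)
    vorticity = np.array([g[2, 1] - g[1, 2], g[0, 2] - g[2, 0], g[1, 0] - g[0, 1]])
    E = 0.5 * (g + g.T)
    omega = 0.5 * vorticity + (lam**2 - 1.0) / (lam**2 + 1.0) * np.cross(p, E @ p)
    return u_inf(r), omega
```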
Figure 4: Left: Distance to target of the two controlled ABFs (in units of
body length $l$) against dimensionless time, with the background flow
described by eq. 5, together with the corresponding magnetic field rotation
frequency, where $\omega_{c,1}$ is the step-out frequency of the first
swimmer. Right: Trajectories of the ABFs with non-zero flow obtained with the
RL method. The arrows represent the velocity field and the colors represent
the magnitude of the vorticity field. The flow field is only shown for a
distance less than $4l$ from the trajectories, where $l$ is the length of the
swimmers.
The distances between the swimmers and the target over time are shown for the
RL method on fig. 4. Despite the background flow perturbation, the RL method
successfully navigates the ABFs to their target. The magnetic action space
exhibits a similar behavior as in the free space case: the rotation frequency
of the magnetic field oscillates between the step out frequencies of both
swimmers and never exceeds the highest of these frequencies, where the
swimming performance would degrade considerably. The trajectories of the ABFs
seem to make use of the velocity field to achieve a lower travel time: fig. 4
shows that the trajectories tend to be parallel to the velocity field. The RL
method not only found a solution, but also made use of its environment to
reduce the travel time.
## 7 Robustness of the RL policy
The robustness of the RL method is tested against two external perturbations,
unseen during the training phase. In both cases, the robustness of the method
is measured in terms of success rate (expected ratio between the number of
successful trajectories and the number of attempts).
First, a flow perturbation $\mathbf{\delta
u}(\mathbf{r})=\varepsilon\mathbf{u}_{\infty}(\mathbf{r}/p)$ is added to the
background flow described in the previous section, where $\varepsilon$
controls the strength of the perturbation and $p$ controls the wave length of
the perturbation with respect to the original one. Figure 5 shows the success
rate of the RL approach against the perturbation strength $\varepsilon$ for
different $p$. For large wave lengths ($p=2$), the RL agent is able to
successfully steer the ABFs to their target in more than $90\%$ of the cases
when the perturbation strength is less than $20\%$ of the original flow. In
contrast, the success rate degrades more sharply for smaller wave lengths
($p=1/2$), suggesting that the method is less robust for short wave length
perturbations. The RL policy seems more robust to perturbations with the same
wavelengths as the original flow ($p=1$) for large perturbation strengths: the
success rate is above $30\%$ even for large perturbations.
Figure 5: Left: Success rate of the RL method in guiding the swimmers to their
target against the flow perturbation strength $\varepsilon$, for different
wave numbers $p=1/2$, $p=1$, and $p=2$. Right: Success rate of the RL
method in steering the swimmers to their target against the thermal fluctuation
strength $k_{B}T/k_{B}T_{0}$.
At small length scales, micro-swimmers are subjected to thermal fluctuations.
We investigate the robustness of the RL policy (trained with the background
flow, eq. 5) on swimmers subjected to thermal noise and background flow (eq.
5). The thermal fluctuations are modeled as an additive stochastic term to the
linear and angular velocities of each swimmer, following the Einstein relation
with the mobility tensor given by eq. 1. Defining the generalized undisturbed
velocity $\bar{\mathcal{V}}=(\mathbf{V},\bm{\Omega})$, the resulting
stochastic generalized velocities satisfy
$\displaystyle\mathcal{V}=\bar{\mathcal{V}}+\delta\mathcal{V},\qquad
\langle\delta\mathcal{V}_{i}\rangle=0,\qquad
\langle\delta\mathcal{V}_{i}\,\delta\mathcal{V}_{j}\rangle=k_{B}T\,M_{ij},\qquad i,j=1,2,\dots,6,$
where $M$ is the mobility tensor and $k_{B}T$ is the temperature of the
system, in energy units. The above property is achieved by adding a scaled
Gaussian random noise with zero mean to the velocities at every time step of
the simulation.
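A minimal sketch of this noise term, sampling a zero-mean Gaussian velocity increment with covariance $k_{B}TM$ through a Cholesky factor of the mobility tensor, could read as follows (assuming $M$ is symmetric positive definite).

```python
import numpy as np

# Sketch of the thermal-fluctuation term stated above: a zero-mean Gaussian velocity
# increment with covariance k_B * T * M, sampled through a Cholesky factor of the 6x6
# mobility tensor M (assumed symmetric positive definite).
def thermal_velocity(M, kBT, rng=np.random.default_rng()):
    L = np.linalg.cholesky(kBT * M)
    return L @ rng.standard_normal(6)

# Usage at every time step:  V_total = V_deterministic + thermal_velocity(M, kBT)
```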
The success rate of the policy is shown in fig. 5 for various temperatures
$k_{B}T$, in units of the room temperature $k_{B}T_{0}$. As expected, a large
thermal noise causes the policy to fail at its task. Nevertheless, this
failure only occurs at relatively high temperatures: the success rate falls
below $50\%$ for $k_{B}T>25k_{B}T_{0}$, which is well above the normal
operating conditions of ABFs. With temperatures below $2k_{B}T_{0}$, the RL
method sustains a success rate above $99\%$. We remark that this robustness is
achieved successfully even with a policy trained with $k_{B}T=0$.
## 8 Conclusion
We have presented two methods to guide multiple ABFs individually towards
targets with a uniform magnetic field. The semi-analytical method allows to
understand the basic mechanisms that allow independent control and we derive
the necessary condition for the independent control of multiple ABFs: their
velocity matrix must be invertible, a condition that can be accommodated by
suitably choosing the geometries of the swimmers. This result may help to
optimize the shapes of ABFs to further reduce the travel time.
The RL approach can control multiple ABFs in quiescent flow as well as in the
presence of a complex background flow. Additionally, this approach is
resilient to small flow perturbations and to thermal noise. When the
background flow vanishes, the RL method recovers a behavior very similar to that of
the first method: the rotation frequency is alternating between the step-out
frequencies of the swimmers. Furthermore, the RL method achieved a lower
travel time than the first method by blocking one swimmer while the other is
swimming. Steering an increased number of swimmers requires longer travel
times according to the first method. We thus expect that applying the RL
approach to more than two swimmers requires longer training times and might
become prohibitively expensive as the number of swimmers increases. Possible
solutions to this problem might include pre-training the RL agent with the
policy found by the semi-analytical method.
The current work focused on the simplified case of non-interacting swimmers.
In practice, the ABFs may interact hydrodynamically and magnetically with each
other, encounter obstacles, evolve in confined geometries or experience time
varying flows. Nevertheless, we expect the RL method to be a good candidate to
overcome these variants, in the same way as it naturally handled the addition
of a background flow.
Acknowledgments
We acknowledge insightful discussions with Guido Novati (ETHZ) and his
technical support for the usage of smarties. We acknowledge support by the
European Research Council (ERC Advanced Grant 341117).
Conflicts of Interest
The authors declare no financial or commercial conflicts of interest.
## References
* [1] Q. Cao, X. Han, L. Li, _Lab Chip_ 2014, _14_ 2762\.
* [2] L. Yang, L. Zhang, _Annual Review of Control, Robotics, and Autonomous Systems_ 2021, _4_ , 1 null.
* [3] P. Tierno, R. Golestanian, I. Pagonabarraga, F. Sagués, _The Journal of Physical Chemistry B_ 2008, _112_ , 51 16525.
* [4] Y. Liu, D. Ge, J. Cong, H.-G. Piao, X. Huang, Y. Xu, G. Lu, L. Pan, M. Liu, _Small_ 2018, _14_ , 17 1704546.
* [5] P. Liao, L. Xing, S. Zhang, D. Sun, _Small_ 2019, _15_ , 36 1901197.
* [6] L. Zhang, K. E. Peyer, B. J. Nelson, _Lab on a Chip_ 2010, _10_ , 17 2203.
* [7] Y. Yu, J. Guo, Y. Wang, C. Shao, Y. Wang, Y. Zhao, _ACS applied materials & interfaces_ 2020, _12_ , 14 16097.
* [8] R. Mhanna, F. Qiu, L. Zhang, Y. Ding, K. Sugihara, M. Zenobi-Wong, B. J. Nelson, _Small_ 2014, _10_ , 10 1953.
* [9] P. Sharan, A. Nsamela, S. C. Lesher-Pérez, J. Simmchen, _Small_ 2021, 2007403.
* [10] S. Schuerle, A. P. Soleimany, T. Yeh, G. M. Anand, M. Häberli, H. E. Fleming, N. Mirkhani, F. Qiu, S. Hauert, X. Wang, B. J. Nelson, S. N. Bhatia, _Science Advances_ 2019, _5_ , 4 1.
* [11] L. Zhang, J. J. Abbott, L. Dong, B. E. Kratochvil, D. Bell, B. J. Nelson, _Applied Physics Letters_ 2009, _94_ , 6 064107.
* [12] H. Gu, Q. Boehler, D. Ahmed, B. J. Nelson, _Science Robotics_ 2019, _4_ , 35.
* [13] K. Bente, A. Codutti, F. Bachmann, D. Faivre, _Small_ 2018, _14_ , 29 1704374.
* [14] D. Wong, E. B. Steager, V. Kumar, _IEEE Robotics and Automation Letters_ 2016, _1_ , 1 554.
* [15] S. Colabrese, K. Gustavsson, A. Celani, L. Biferale, _Physical Review Letters_ 2017, _118_ , 15 1.
* [16] A. C. H. Tsang, P. W. Tong, S. Nallan, O. S. Pak, _Phys. Rev. Fluids_ 2020, _5_ 074101\.
* [17] S. Verma, G. Novati, P. Koumoutsakos, _Proceedings of the National Academy of Sciences of the United States of America_ 2018, _115_ , 23 5849.
* [18] B. Hartl, M. Hübl, G. Kahl, A. Zöttl, _Proceedings of the National Academy of Sciences_ 2021, _118_ , 19.
* [19] S. Floyd, E. Diller, C. Pawashe, M. Sitti, _The International Journal of Robotics Research_ 2011, _30_ , 13 1553.
* [20] P. J. Vach, S. Klumpp, D. Faivre, _Journal of Physics D: Applied Physics_ 2015, _49_ , 6 065003.
* [21] E. Diller, J. Giltinan, M. Sitti, _The International Journal of Robotics Research_ 2013, _32_ , 5 614.
* [22] P. Espanol, P. Warren, _EPL (Europhysics Letters)_ 1995, _30_ , 4 191.
* [23] D. Alexeev, L. Amoudruz, S. Litvinov, P. Koumoutsakos, _Comput. Phys. Commun._ 2020, 107298.
* [24] J. Happel, H. Brenner, _Low Reynolds number hydrodynamics: with special applications to particulate media_ , volume 1, Springer Science & Business Media, 1981.
* [25] S. Kim, S. J. Karrila, _Microhydrodynamics: principles and selected applications_ , Courier Corporation, 2013.
* [26] B. Graf, _arXiv preprint arXiv:0811.2889_ 2008.
* [27] A. Ghosh, P. Fischer, _Nano letters_ 2009, _9_ , 6 2243.
* [28] K. E. Peyer, L. Zhang, B. J. Nelson, _Nanoscale_ 2013, _5_ , 4 1259.
* [29] D. Li, M. Jeong, E. Oren, T. Yu, T. Qiu, _Robotics_ 2019, _8_ , 4 87.
* [30] X. Wang, C. Hu, L. Schurz, C. De Marco, X. Chen, S. Pané, B. J. Nelson, _ACS Nano_ 2018, _12_ , 6 6210.
* [31] F. Bachmann, K. Bente, A. Codutti, D. Faivre, _Physical Review Applied_ 2019, _11_ , 3 034039.
* [32] I. S. Khalil, A. F. Tabak, Y. Hamed, M. E. Mitwally, M. Tawakol, A. Klingner, M. Sitti, _Advanced Science_ 2018, _5_ , 2 1700461.
* [33] I. S. Khalil, A. F. Tabak, Y. Hamed, M. Tawakol, A. Klingner, N. El Gohary, B. Mizaikoff, M. Sitti, _IEEE Robotics and Automation Letters_ 2018, _3_ , 3 1703.
* [34] N. Hansen, S. D. Müller, P. Koumoutsakos, _Evolutionary computation_ 2003, _11_ , 1 1.
* [35] A. Y. Ng, D. Harada, S. Russell, In _ICML_ , volume 99. 1999 278–287.
* [36] G. Novati, P. Koumoutsakos, In _Proceedings of the 36 th International Conference on Machine Learning_. 2019 .
# Rotation of optically bound particle assembly due to scattering induced
spin-orbit coupling of light
Yukihiro Tao Department of Material Engineering Science, Graduate School of
Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka
560-8531, Japan Tomohiro Yokoyama, Department of Material Engineering Science, Graduate School of
Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
Hajime Ishihara Department of Material Engineering Science, Graduate School
of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka
560-8531, Japan Department of Physics and Electronics, Graduate school of
Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai,
Osaka 599-8531, Japan Center for Quantum Information and Quantum Biology,
Institute for Open and Transdisciplinary Research Initiatives, Osaka
University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
###### Abstract
The optical binding of many particles has great potential to achieve the wide-
area formation of a “crystal” of small materials. Unlike conventional optical
binding, where the whole assembly of targeted particles is irradiated with
light, if one can indirectly manipulate remote particles using a single
trapped particle through optical binding, the degrees of freedom to create
ordered structures will be greatly enhanced. In this Letter, we theoretically
investigate the dynamics of the assembly of gold nanoparticles that is
manipulated using a single particle trapped by a focused laser. As a result,
we demonstrate that the spin–orbit coupling and angular momentum generation of
light via scattering induce the assembly and rotational motion of particles
through indirect optical force. This result opens the possibility of creating
ordered structures with a wide area and manipulating them, controlling local
properties using scanning laser beams.
## I Introduction
Optical manipulation is a non-contact method to capture various micro-scale
objects Ashkin et al. (1986), such as metals, semiconductors, dielectrics,
organic materials, and living cells Zhang and Liu (2008). Due to this variety
of trappable objects, laser trapping has been developed for a wide range of
research fields Li et al. (2010); Fazal and Block (2011). One significant
development in optical manipulation is the trapping of multiple particles. For
instance, a holographical technique can form various chains of microparticles
Curtis et al. (2002); Grier and Roichman (2006). On a glass substrate, total
reflection can provide a trapping force over a wide area Mellor and Bain
(2006); Mellor et al. (2006); Taylor et al. (2008). In addition to such design
of incident light, micro-scale fabrication has also achieved the trapping of
many particles, e.g., plasmonic structures Righini et al. (2007); Pang and
Gordon (2012) and photonic crystals Rahmani and Chaumet (2006); Yang et al.
(2009); Jaquay et al. (2013). The formation of an ordered monolayer of
particles at a liquid–liquid interface has also been reported Aveyard et al.
(2002); Park and Furst (2008).
Optical binding is a key concept for the optical manipulation of many
particles Depasse and Vigoureux (1994); Forbes et al. (2020). Optically
induced polarizations cause attractive or repulsive forces between the
particles, which results in an ordering of the particles Demergis and Florin
(2012). Yan et al. investigated the formation of a nanoparticle array under
wide-area laser irradiation with circular Han et al. (2018) and linear
polarizations Yan et al. (2013, 2014), where the ordering of particles
depended on the type of polarization. The spin angular momentum (SAM) of
circular polarization gives a torque to the array, although the mechanism of
this torque transfer is still unclear.
Recent reports by Kudo et al. Kudo et al. (2016); Wang et al. (2016); Kudo et
al. (2018) demonstrated new possibilities of optical manipulation and optical
binding. They examined the trapping of many nanoparticles by a tightly focused
single laser, where trapped particles showed tetragonal or hexagonal ordered
arrays depending on the polarization. These arrays were beyond the irradiation
area, like a crystal growing Sugiyama et al. (2007). Moreover, outside of the
focal area, polystyrene particles form additional horns Kudo et al. (2016) and
gold particles show revolution and swarming dynamics Kudo et al. (2018). This
implies that focal irradiation and scattering fields from the particles cause
self-organized and indirect optical binding, which is in significant contrast
with conventional optical binding with a wide area of irradiation. The
mechanism of such indirect binding of multiple particles is unknown at
present. Its elucidation will lead to an unconventional scheme for creating
and manipulating wide-area ordered structures, implementing finely controlled
local properties with scanning beams and rich extension of optical binding by
a combination of multiple focused lasers.
In this Letter, we theoretically investigate the mechanism of the optical
binding and dynamics of nanoparticles due to indirect optical force under the
irradiation of a single, tightly focused, and circularly polarized laser.
Considering gold nanoparticles, we numerically demonstrate the revolution of
surrounding particles. The binding position is determined by the field
intensity, whereas the revolution is explained by a “spin–orbit (SO) coupling”
of light. An analysis based on the SAM and orbital angular momentum (OAM) of
light elucidates three factors: large scattering and suppressed absorption
cross-sections, conversion of SAM to OAM by a single particle, and imbalance
of positive and negative OAM generation. The first and second factors are
determined by the individual particle’s properties, which accelerates the
revolution. The revolution should relate to a vortical flow of the momentum of
light. The optical current discussed by Berry Berry (2009) describes such
momentum flow of light. The interference between incident and scattered field
clearly exhibits a vortical structure of the optical current, as shown in
Appendix B. The last factor is affected by the particle configurations as well
as properties. Our results imply that the imbalance of OAM will reveal the
revolution direction.
## II Model
Considering the experiment reported in Ref. Kudo et al. (2018), we assume the
following model and conditions. We consider nanoscale spherical gold particles
(refractive index: $n_{\rm Au}\simeq 0.258+6.97i$) in a water solvent ($n_{\rm
w}\simeq 1.33$) on a glass substrate. The diameter of particles is
$d=200\,\mathrm{nm}$. The wavelength of the laser is
$\lambda=2\pi/k=1064\,\mathrm{nm}$ in a vacuum. A single focal incident laser
is modeled by a Gaussian beam Richards and Wolf (1959); Zhao et al. (2007);
Novotny and Hecht (2006) with an approximate spot size of $2\omega_{0}\simeq
800\,\mathrm{nm}$. This is related to the maximal half-angle, $\theta_{\rm
max}$, by $\omega_{0}\sim 2/(k\tan\theta_{\rm max})$. The numerical aperture
is $NA=n_{\rm w}\sin(2/(k\omega_{0}))\simeq 0.996$. The focal point is set on
the substrate surface. The incident light propagates from water to glass (see
Figs. 1(a) and (b)). The particles move and are optically manipulated on the
substrate. An effect of charge on the particles is not essential and is not
considered. The details of the methodology are summarized at the end of this
Letter, after the concluding remarks.
Figure 1: (a) Schematic view of optical trapping and binding by a single
focused laser, accompanied by the generation of OAM, and (b) model of the
numerical simulation for $d=200\,\mathrm{nm}$ diameter Au particles in water.
(c)-(g) Intensity profiles of total electric field with $N_{\rm p}=2$ (c), $5$
(d), $7$ (e), and $8$ (f,g) on the $z=0$ plane. The incident laser is tightly
focused with circular polarization. The focal point is at the origin. Black
lines in the panels indicate trajectories of particles in a finite period.
The optical force on particle $i$ is evaluated based on the electromagnetic
field on the particle surface. The field is calculated self-consistently by
the generalized Mie theory and T-matrix method Mackowski (1994); Mackowski and
Mishchenko (1996, 2011):
$\bm{E}_{\rm tot}(\bm{r})=\bm{E}_{\rm inc}(\bm{r})+\sum_{i=1}^{N_{\rm
p}}\bm{E}_{{\rm sca},i}(\bm{r}),$ (1)
where $\bm{E}_{\rm inc}$ and $\bm{E}_{{\rm sca},i}$ represent the incident
light and scattered light from particle $i$, respectively. $N_{\rm p}$ is the
number of particles. The optical force $\bm{F}_{i}$ is calculated from the
total electromagnetic field $\bm{E}_{\rm tot}$ via Maxwell’s stress tensor as
$\bm{F}_{i}=\oint_{S_{i}}d\Omega\cdot\left(\bar{T}_{\rm E}+\bar{T}_{\rm
B}\right).$ (2)
The integral is over the surface of particle $i$ Datsyuk and Pavlyniuk (2015).
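As an illustration of eq. (2), the following is a minimal numerical sketch of the surface integral of the time-averaged Maxwell stress tensor for complex time-harmonic fields in SI units, assuming the fields have already been evaluated on a discretized sphere enclosing the particle; it is not the T-matrix implementation used in this work.

```python
import numpy as np

# Minimal numerical sketch of eq. (2): the surface integral of the time-averaged Maxwell
# stress tensor for complex time-harmonic fields in SI units. The fields are assumed to
# be already evaluated on a discretized surface enclosing the particle; this is not the
# T-matrix implementation used in this work.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

def maxwell_force(E, B, normals, areas):
    """E, B: complex (n, 3) fields on the surface; normals: outward unit normals (n, 3);
    areas: patch areas (n,). Returns the time-averaged force on the enclosed particle."""
    F = np.zeros(3)
    for Ei, Bi, ni, dA in zip(E, B, normals, areas):
        TE = eps0 * (np.outer(Ei, np.conj(Ei)) - 0.5 * np.vdot(Ei, Ei) * np.eye(3))
        TB = (np.outer(Bi, np.conj(Bi)) - 0.5 * np.vdot(Bi, Bi) * np.eye(3)) / mu0
        F += 0.5 * np.real((TE + TB) @ ni) * dA
    return F
```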
The simulation of particle dynamics follows the Langevin equation,
$m\frac{d^{2}\bm{r}_{i}}{dt^{2}}=-\zeta\frac{d\bm{r}_{i}}{dt}+\bm{F}_{i}+\bm{\xi}_{i},$
(3)
with $\zeta=3\pi\eta_{\rm w}d$ being the friction coefficient. $\bm{\xi}_{i}$
represents the Gaussian random force due to the Brownian motion of water
molecules. All particles are identical and their mass is $m$. The viscosity of
water is assumed to be $\eta_{\rm w}=0.890\,\mathrm{mPa\cdot s}$ at room
temperature. The optical force on one particle accelerates the other particles
due to the hydrodynamic interaction. This is accounted for by the
Rotne–Prager–Yamakawa mobility tensor Rotne and Prager (1969); Yamakawa (1970);
Happel and Brenner (1983) (see method), which gives additional velocities to
particle $i$ from the external force acting on the other $j\neq i$:
$\Delta\bm{v}_{i}=\sum_{j\neq i}\tilde{\bm{\mu}}_{ij}\bm{F}_{j}.$ (4)
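A minimal sketch of one position update combining these ingredients, written in the overdamped limit of eq. (3) for brevity, is given below; `rpy_mobility()` is a placeholder for the pairwise Rotne–Prager–Yamakawa tensor $\tilde{\bm{\mu}}_{ij}$ of eq. (4), and the forces are assumed to come from the stress-tensor integral of eq. (2).

```python
import numpy as np

# Sketch of one position update combining the ingredients above, written in the
# overdamped limit of eq. (3) for brevity. rpy_mobility() is a placeholder for the
# pairwise Rotne-Prager-Yamakawa tensor of eq. (4); F[i] is the optical force of eq. (2).
def step_positions(r, F, zeta, kBT, dt, rpy_mobility, rng=np.random.default_rng()):
    n = len(r)
    r_new = np.array(r, dtype=float)
    for i in range(n):
        v = F[i] / zeta                                  # direct optical force over drag
        for j in range(n):
            if j != i:
                v = v + rpy_mobility(r[i], r[j]) @ F[j]  # hydrodynamic coupling, eq. (4)
        noise = np.sqrt(2.0 * kBT * dt / zeta) * rng.standard_normal(3)
        r_new[i] = r[i] + v * dt + noise
    return r_new
```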
## III Results and discussion
### III.1 Dynamics of gold particles
We use the simulated results to reveal the indirect mechanism. The multiple
scattering and interference with the incident field are essential. The binding
distance approximately corresponds to the wavelength of light. The dynamics is
faster with an increase in the number of bound particles. The observed motion
is elucidated based on the “SO coupling” of light and the optical current
Berry (2009), as explained below. The correlation between SAM and OAM has
previously been discussed for the rotational optical manipulation by the
optical vortex with OAM Tamura et al. (2019). However, it should be noted that
SO coupling plays an essential role even when light with only SAM is injected
if the targeted matter system has a geometrical structure.
Figures 1(c)–(g) show snapshots of the intensity of the total electric field
with $N_{\rm p}=2\sim 8$ particles and their trajectories when the circular
polarization of the incident laser is applied. At the center of the focal
area, one particle is optically trapped directly by the incident laser. The
trapped particle causes a scattering of incident laser. Then, due to
interference, the light intensity shows a ring-shaped oscillation (Fig. 1(c)
and Fig. 4 in Appendix A). This oscillation results in the binding of
surrounding particles by an indirect mechanism in the vicinity of the local
maximum of the intensity at $r\simeq\lambda/n_{\rm w}$ from the center
particle. The map of the optical force on an additional small particle shown
in Fig. 5(b) in Appendix B clearly indicates the position of binding due to
the indirect mechanism. How the added particles are bounded one-by-one is
shown in Figs. 1(d)–(f). For six surrounding particles, the bound position
from the center is $\Delta\approx 853.4\,\mathrm{nm}$. The intervals between
the particles are approximately equivalent with $\lambda/n_{\rm w}$. The
particles show a hexagonal distribution, which indicates that our simulation
well reproduces the experimental observation in Ref. Kudo et al. (2018). When
$N_{\rm p}\geq 8$, we find two semi-stable distributions, as shown in Figs.
1(f) and (g). Figure 1(g) shows the optical binding at the second neighboring
position (see also the binding position in Fig. 5(d) in Appendix B). This is
not at the local maximum of the intensity and suggests the indirect mechanism
of binding. As a side note, the case of a linear polarized laser is discussed
in Appendix C, where the particles are aligned in a direction perpendicular to
the polarization. This result is a good demonstration that our simulation
reasonably explains experimental observations Kudo et al. (2018).
The center particle at the focal point is strongly trapped and hardly moves,
whereas the other surrounding particles revolve by the optical force. The
lines in Figs. 1(c)–(g) trace part of the revolution trajectories. The
direction of revolution accords with the optical current shown in Figs. 5(a)
and (c), which is in contrast with the map of force on an additional small
particle in Figs. 5(b) and (d). The speed of revolving particles increases
with the number of surrounding particles. This acceleration is attributed to
an enhancement of the multiple scattering of light. When $N_{\rm p}=7$ in Fig.
1(e), the hexagonal distribution is approximately maintained during the
revolution, although the particles are affected by the random force. The
motions are shown in Movie of Appendix. In Fig. 1(g), the second neighboring
particle is much slower than the first neighboring particles because of the
decreased light intensity.
The optical force is evaluated by the square of the field, $|\bm{E}_{\rm
inc}+\sum_{i}\bm{E}_{{\rm sca},i}|^{2}$. The interference term is significant
for the binding and revolution, whereas $|\bm{E}_{\rm inc}|^{2}$ and
$\sum_{i}|\bm{E}_{{\rm sca},i}|^{2}$ might result in force in the radial and
out-of-plane directions. Because the scattered field is larger than or
comparable to the incident field at the binding position, the self-assembly
process is critical for indirect optical manipulation. This suggests the
possibility of realizing a wider variety of particle configurations by exploiting
the internal degrees of freedom of light.
### III.2 Angular momentum conversion
For the dynamics, we analyze the SAM and OAM components of scattered light on
the upper and lower celestial hemispheres at $r=R_{\rm c}\gg d$:
$\displaystyle C_{\sigma,l}=C_{\sigma,l}^{\rm(u)}+C_{\sigma,l}^{\rm(l)},$ (5)

$\displaystyle C_{\sigma,l}^{\rm(u)}=\frac{1}{C^{(0)}}\int_{0}^{\pi/2}\sin\theta\,d\theta\left|\int_{0}^{2\pi}d\phi\left\{\bm{e}_{\sigma,l}(\theta,\phi)e^{iR_{\rm c}}\right\}^{\ast}\cdot\bm{E}_{\rm tot}(R_{\rm c},\theta,\phi)\right|,$ (6)

$\displaystyle C_{\sigma,l}^{\rm(l)}=\frac{1}{C^{(0)}}\int_{\pi/2}^{\pi}\sin\theta\,d\theta\left|\int_{0}^{2\pi}d\phi\left\{\bm{e}_{\sigma,l}(\theta,\phi)e^{iR_{\rm c}}\right\}^{\ast}\cdot\left\{\bm{E}_{\rm tot}(R_{\rm c},\theta,\phi)-\bm{E}_{\rm inc}(R_{\rm c},\theta,\phi)\right\}\right|.$ (7)
Here, $\bm{e}_{\sigma,l}(\theta,\phi)$ corresponds to the mode of the electric
field with spin $\sigma$ and vortex $l$ with respect to the $z$-axis (see
definition in Appendix D). $C^{(0)}$ is a normalization factor to satisfy
$\sum_{\sigma,l}{C_{\sigma,l}}^{2}=1$. Note that the subtraction in Eq. (7) is
to consider only the emission.
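As a hedged illustration of how Eqs. (6) and (7) can be evaluated numerically, the sketch below integrates a field sampled on a far-field $(\theta,\phi)$ grid. The grid, the mode construction (see Appendix D), and the reading of the phase factor $e^{iR_{\rm c}}$ as $e^{ikR_{\rm c}}$ are assumptions of this example; for the lower hemisphere of Eq. (7), the caller passes $\bm{E}_{\rm tot}-\bm{E}_{\rm inc}$.

```python
import numpy as np

def sam_oam_coefficient(E_field, e_mode, theta, phi, k, R_c, hemisphere="upper"):
    """Numerical evaluation of Eq. (6) (upper) or Eq. (7) (lower hemisphere,
    with E_field = E_tot - E_inc supplied by the caller).

    E_field : complex array (n_theta, n_phi, 3), field sampled at r = R_c
    e_mode  : complex array of the same shape, mode e_{sigma,l}(theta, phi)
    theta, phi : 1D angular grids in radians
    Returns the unnormalized coefficient; divide by C^(0) afterwards."""
    mask = theta <= np.pi / 2 if hemisphere == "upper" else theta > np.pi / 2
    th = theta[mask]
    # phi integral of conj(e_{sigma,l} e^{i k R_c}) . E at each theta
    integrand = np.einsum("tpc,tpc->tp",
                          np.conj(e_mode[mask] * np.exp(1j * k * R_c)),
                          E_field[mask])
    inner = np.trapz(integrand, phi, axis=1)
    # theta integral of sin(theta) * |phi integral|
    return np.trapz(np.abs(inner) * np.sin(th), th)
```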
In the case of a plane wave, the light has no OAM ($l=0$) Zhao et al. (2007).
On the contrary, a focal laser with right circular polarization has a slight
$(\sigma,l)=(-1,+2)$ component in addition to the majority $(+1,0)$ component.
This is due to “scattering” by the lens. The incident laser consists of
$C_{+1,0}\approx 0.9750$ and $C_{-1,+2}\approx 0.2222$ when
$2\omega_{0}=800\,\mathrm{nm}$. Note that the sum of their squares is unity.
Figure 2: (a) Coefficients of SAM and OAM components of scattered field with
$N_{\rm p}=1,7$ when $\sigma=+1$ circular polarized light is tightly focused
and applied. $N_{\rm p}=7$ and the other parameters correspond to Fig. 1(e).
(b) Log scale plot of (a). (c,d) Particle distance dependence of the
coefficients when $N_{\rm p}=7$. (e) Summary of the coefficients for the SAM
and OAM defined in Eqs. (8) and (9) and the sentence that follows them.
Scattering by a particle or particle assembly produces a further $C_{-1,+2}$
component. Figures 2(a) and (b) exhibit the coefficient $C_{\sigma,l}$ of
light scattered when $N_{\rm p}=1$ and $7$. For a single particle, the
scattered light shows $C_{-1,+2}>0.3$, and the other $\sigma=-1$ components are zero.
This represents a conversion of the SAM to the OAM under the conservation of
total angular momentum (TAM), $j=l+\sigma$. This is regarded as SO coupling
for the light. However, it is difficult to observe the orbital motion because of
the strong trapping.
In the situation of Figs. 1(c)–(g), the rotational symmetry is broken and the
TAM of light is not conserved. In Fig. 1(e), the system has a six-fold
rotational symmetry and the scattered light consists of $l=\pm 6m$ with
$\sigma=+1$ (where $m$ is an integer). A spin-flip also occurs, and
$C_{-1,2\pm 6m}$ is generated, where the TAM is distributed from $j=1$ to only
$1\pm 6m$. Here, note that the distribution of TAM (mainly OAM) is imbalanced
and the coefficient of $j=1-6$ is larger than that of $j=1+6$. This is
consistent with the rotation of assembly if the particles and photons follow
Newton’s third law of motion.
Here, we claim that the key elements of the particle dynamics are the spin-
flip by the “SO coupling” and the imbalance of OAM generation. Let us discuss
the SAM and OAM components in the parameter space of the particle distance and
the complex refractive index. We introduce
$\displaystyle C_{\pm}=\left[\sum_{l}{C_{\sigma=\pm 1,l}}^{2}\right]^{1/2},$ (8)

$\displaystyle C_{>(<)}=\left[{C_{+1,>(<)}}^{2}+{C_{-1,>(<)}}^{2}\right]^{1/2}$ (9)
with ${C_{+1,>(<)}}=[\sum_{l>0(<0)}{C_{+1,l}}^{2}]^{1/2}$ and
${C_{-1,>(<)}}=[\sum_{l>2(<2)}{C_{-1,l}}^{2}]^{1/2}$. Note that $C_{\pm}$ and
$C_{>(<)}$ are indicators of the extent of the spin-flip and imbalance of OAM
generation, respectively, which are summarized in Fig. 2(e).
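A minimal sketch of these indicators, assuming the coefficients $C_{\sigma,l}$ have already been obtained; the dictionary keys and the thresholds simply encode the definitions of Eqs. (8) and (9) quoted above.

```python
import numpy as np

def spin_flip_and_oam_imbalance(C):
    """C: dict mapping (sigma, l) -> coefficient C_{sigma,l} (real, >= 0).
    Returns the indicators of Eqs. (8) and (9): C_-/C_+ measures the spin
    flip, C_>/C_< the imbalance of generated OAM."""
    def quad_sum(pairs):
        return np.sqrt(sum(C[p] ** 2 for p in pairs))

    C_plus  = quad_sum([p for p in C if p[0] == +1])
    C_minus = quad_sum([p for p in C if p[0] == -1])
    # OAM counted relative to the incident components: l > 0 (< 0) for
    # sigma = +1 and l > 2 (< 2) for sigma = -1, as defined below Eq. (9)
    C_gt = np.sqrt(quad_sum([p for p in C if p[0] == +1 and p[1] > 0]) ** 2
                   + quad_sum([p for p in C if p[0] == -1 and p[1] > 2]) ** 2)
    C_lt = np.sqrt(quad_sum([p for p in C if p[0] == +1 and p[1] < 0]) ** 2
                   + quad_sum([p for p in C if p[0] == -1 and p[1] < 2]) ** 2)
    return {"C-/C+": C_minus / C_plus, "C>/C<": C_gt / C_lt}
```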
First, we examine the case in which the particle distance $\Delta$ is varied
virtually, as shown in Figs. 2(c) and (d). The oscillations of the
coefficients are seen in these figures. Here, we can see why the rotation is
strongly driven in the considered system. Figure 2(c) shows that the spin-flip
coefficients generating negative $l$, namely $C_{\pm 1,<}$, are enhanced
around $\Delta$ corresponding to the light wavelength in water and its half,
whereas the maximum positions of the coefficients generating positive $l$,
namely $C_{\pm 1,>}$ are shifted, i.e., oscillate in a different phase, and
$C_{\pm 1,>}$ becomes minimum around $\Delta\approx$ wavelength. This is why
the imbalance of OAM becomes maximum (namely, $C_{>}/C_{<}$ becomes minimum)
there. At the binding position of our calculation, the OAM is generated with
sufficient imbalance to drive the revolution of surrounding particles in a
particular direction. The oscillation period suggests that the interference of
multiple scattering of light by the particle’s configuration determines the
generation of OAM.
Figure 2(d) shows that $C_{-}/C_{+}$ is insensitive to $\Delta$, except in the
region of small $\Delta$, and that $C_{>}<C_{<}$ is maintained. The latter
observation agrees with the absence of negative torque for a tightly focused
laser, which is in contrast with the case of wide-area irradiation Han et al.
(2018). In the case of the wide-area irradiation, the interference between the
incident and first-order scattering fields from the respective particles is
significant rather than the multiple scattering in the present case; the maps
of optical current and force are shown in Fig. 6. The rotation behavior
arising from the interference of scattered light can also be discussed in
terms of the optical current. The relevant figures and discussion are provided
in Appendix B.
Figure 3: (a,b) Profile of in-plane optical force $(F_{x},F_{y})$ on one of
the surrounding particles in the plane of complex refractive index
$\tilde{n}=n+i\kappa$ when $N_{\rm p}=7$ and tightly focused (a) and not
focused (b) lasers are applied. The beam waists are
$2\omega_{0}=0.8\,\mathrm{\mu m}$ and $12.5\,\mathrm{\mu m}$, respectively.
The latter situation is a conventional setup for optical binding. The diameter
of particles is $d=200\,\mathrm{nm}$. One particle is trapped at $r=0$, and
the others are at $r=\Delta=853.4\,\mathrm{nm}$ (see Fig. 1(e)). (c,d) Spin-
flip ratio due to the “SO coupling” (c) and imbalance of OAM generation (d)
evaluated from the coefficients of SAM and OAM. (e,f) Single particle cross-
section of the scattering $C_{\rm sca}$ (e) and absorption $C_{\rm abs}$ (f).
The plot is normalized by $\pi R^{2}$, where $R$ is the radius.
As shown in Fig. 2 (d), $C_{-}/C_{+}$ is insensitive to the particle distance.
However, it remarkably depends on the particle material, specifically its
complex refractive index, as discussed below. Here, we discuss the spin-flip
and OAM imbalance ratios by examining the indirect optical force, scattering,
and absorption cross-sections in a plane of the complex refractive index.
Figure 3(a) exhibits the indirect optical force driving the surrounding
particles in the situation of Fig. 1(e). Positive force corresponds to the
counterclockwise direction in Fig. 1(e). The optical force is enlarged when
the refractive index is almost purely imaginary (perfect conductor), and the
imaginary part is $\kappa\approx 2.67$. The force is also enlarged around
$n\approx 6.5,\kappa\ll 1$.
For a tightly focused laser, the indirect optical force is only positive in
the $n$-$\kappa$ plane, whereas both positive and negative forces are found when
the light is applied widely [Fig. 3(b)]. This is a clear difference between
indirect and direct optical manipulation. The negative force is enlarged at
$n\approx 5.1,\kappa\ll 1$. The presence of negative torque in direct optical
binding with wide-area irradiation is consistent with Ref. Han et al. (2018).
The “SO coupling” is also examined in the $n$-$\kappa$ plane for the seven
particles in Fig. 3(c). The profile of spin-flip ratio $C_{-}/C_{+}$ shows
similar behavior to the indirect optical force, where a region of spin-flip
enhancement correlates with a large indirect force region. Meanwhile, the
difference of generated OAM, $C_{>}-C_{<}$, in Fig. 3(d) shows that the sign
changes and there are positive and negative peaks at $n\ll 1,\kappa\approx
6.5$ and $n\approx 8.6,\kappa\ll 1$, respectively. This result indicates that
negative torque can appear if the material is different, even in the case of
focused irradiation.
Figures 3(e) and (f) exhibit the scattering and absorption cross-sections of a
single particle in water, respectively, evaluated by the Mie coefficients
$a_{n}$ and $b_{n}$ Bohren and Huffman (1998) as follows:
$\displaystyle C_{\rm sca}=\frac{2\pi}{k^{2}}\sum_{n=1}^{\infty}(2n+1)\left(|a_{n}|^{2}+|b_{n}|^{2}\right),$ (10)

$\displaystyle C_{\rm ext}=\frac{2\pi}{k^{2}}\sum_{n=1}^{\infty}(2n+1)\,{\rm Re}(a_{n}+b_{n}),$ (11)

$\displaystyle C_{\rm abs}=C_{\rm ext}-C_{\rm sca}.$ (12)
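For concreteness, a sketch of Eqs. (10)-(12) in Python using the standard Bohren–Huffman Mie coefficients; the series truncation (Wiscombe cutoff) and the treatment of the host medium through the relative index $m=n_{\rm particle}/n_{\rm medium}$ are conventional choices of this example rather than details taken from the text.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_cross_sections(wavelength, d, n_particle, n_medium=1.33, n_max=None):
    """Scattering/extinction/absorption cross-sections of a single sphere from
    the Mie coefficients a_n, b_n (Eqs. (10)-(12), Bohren & Huffman (1998))."""
    k = 2.0 * np.pi * n_medium / wavelength        # wavenumber in the medium
    x = k * d / 2.0                                # size parameter
    m = n_particle / n_medium                      # relative refractive index
    if n_max is None:
        n_max = int(np.ceil(x + 4.0 * x ** (1 / 3) + 2))   # Wiscombe cutoff
    n = np.arange(1, n_max + 1)

    # Riccati-Bessel functions: psi_n(z) = z j_n(z), xi_n(z) = z h_n^(1)(z)
    def psi(z):  return z * spherical_jn(n, z)
    def dpsi(z): return spherical_jn(n, z) + z * spherical_jn(n, z, derivative=True)
    def xi(z):   return z * (spherical_jn(n, z) + 1j * spherical_yn(n, z))
    def dxi(z):  return (spherical_jn(n, z) + 1j * spherical_yn(n, z)) \
                 + z * (spherical_jn(n, z, derivative=True)
                        + 1j * spherical_yn(n, z, derivative=True))

    a = (m * psi(m * x) * dpsi(x) - psi(x) * dpsi(m * x)) / \
        (m * psi(m * x) * dxi(x) - xi(x) * dpsi(m * x))
    b = (psi(m * x) * dpsi(x) - m * psi(x) * dpsi(m * x)) / \
        (psi(m * x) * dxi(x) - m * xi(x) * dpsi(m * x))

    C_sca = 2.0 * np.pi / k**2 * np.sum((2 * n + 1) * (np.abs(a)**2 + np.abs(b)**2))
    C_ext = 2.0 * np.pi / k**2 * np.sum((2 * n + 1) * np.real(a + b))
    return C_sca, C_ext, C_ext - C_sca
```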
The profile of $C_{\rm sca}$, especially in the enhancement parameter regions,
shows good agreement with the indirect optical force in Fig. 3(a). Meanwhile,
$C_{\rm abs}$ is not enlarged at $n\ll 1$. Therefore, by comparing Fig. 3(a)
with (c) and (e), we can see that the enhancements of scattering and “SO
coupling” are dominant factors in the indirect optical manipulation.
The above discussion gives a guideline for indirect and wide-area optical
manipulation by multiple scattering, such that one should prepare particles
and conditions with large scattering cross-section rather than absorption. To
transfer the momentum or angular momentum of photons to the particles
directly, both strong scattering and absorption are suitable. However, if
particles show strong absorption, the light is extinguished quickly and multiple
scattering is suppressed. For the case of $d=200\,\mathrm{nm}$, $C_{\rm
sca}\gg C_{\rm abs}\approx 0$ is a better condition for indirect optical
manipulation. The particle size is also an important parameter. In Appendix E,
we investigate the indirect force and cross-sections for various diameters
(see Fig. 8). We find that the criterion to obtain large indirect optical
force applies for $d\gtrsim 150\,\mathrm{nm}$.
## IV Conclusions
In conclusion, we conducted numerical simulations of nanoparticles optically
trapped and bound by a single focused laser. The simulation revealed that the
scattered light from the strongly trapped center particle causes binding and
indirect optical force on the surrounding particles. Due to the multiple
scattering between all particles, the ordering and optical manipulation of
particles can be achieved beyond the focal irradiation area. Under circularly
polarized laser irradiation, a hexagonal ordering of particles with wavelength
distance is formed, and the surrounding particles revolve. The simulated
results qualitatively agree well with recent experimental observations Kudo et
al. (2018).
Based on the analysis of SAM and OAM, we revealed the mechanisms to enlarge
the revolution in the indirect optical manipulation: $C_{\rm sca}\gg C_{\rm
abs}$ and large “SO coupling” of light, which are closely related to each other.
How a strong SO coupling contributes to the revolution is also elucidated
schematically in Fig. 9 in Appendix F. This is determined by the particle
properties, i.e., complex refractive index, diameter, shape, etc. Their
engineering may be possible by using core-shell structures or dye-doped
polymers. The imbalance of OAM generation is another important factor for both
indirect and direct optical manipulation, which is largely affected by the
particle placement.
As we discuss in Appendix B, the optical binding and dynamics of particles are
described well by the optical force and current, respectively. The structure
of optical current is induced by interference between the scattering and
incident fields. Thus, strong scattering enlarges the dynamics.
The present results indicate the possibility that a scanning single beam can
create an ordered pattern with local structures over a wide area by
controlling the polarization. If we use several beams simultaneously with
different phases, the degrees of freedom for designing the patterned structures
will be greatly enhanced. We suggest that such large degrees of freedom would
open the possibility of new optical manipulation, such as a phase transition
of the order of optical binding, as will be elaborated on in our next
publication.
## V Method
The remainder of this Letter details our methodology. We simulated the
dynamics of nanoscale spherical gold particles in a water solvent on a glass
substrate. A single focal incident laser with counterclockwise circular
polarization was considered. The electric field was modeled as a Gaussian beam
Richards and Wolf (1959); Zhao et al. (2007); Novotny and Hecht (2006):
$\bm{E}_{\rm inc}(r,\varphi,z)=-\frac{ikf}{2}\sqrt{\frac{n_{1}}{n_{2}}}E_{0}e^{-ikf}\left[\begin{matrix}I_{00}+I_{02}e^{i2\varphi}\\ i(I_{00}-I_{02}e^{i2\varphi})\\ -2iI_{01}e^{i\varphi}\end{matrix}\right]$ (13)
with
$\displaystyle I_{00}=\int_{0}^{\theta_{\rm max}}d\theta\,f_{w}(\theta)\sqrt{\cos\theta}\,\sin\theta\,(1+\cos\theta)\,J_{0}(kr\sin\theta)\,e^{ikz\cos\theta},$ (14)

$\displaystyle I_{01}=\int_{0}^{\theta_{\rm max}}d\theta\,f_{w}(\theta)\sqrt{\cos\theta}\,\sin^{2}\theta\,J_{1}(kr\sin\theta)\,e^{ikz\cos\theta},$ (15)

$\displaystyle I_{02}=\int_{0}^{\theta_{\rm max}}d\theta\,f_{w}(\theta)\sqrt{\cos\theta}\,\sin\theta\,(1-\cos\theta)\,J_{2}(kr\sin\theta)\,e^{ikz\cos\theta},$ (16)
where $f_{w}(\theta)=\exp(-(\sin\theta/\sin\theta_{\rm max})^{2})$. $J_{n}(x)$
is the $n$-th order Bessel function. The parameters $f$, $\theta_{\rm max}$,
and $k$ represent the focal distance, maximal half-angle of light cone, and
wavenumber in a vacuum, respectively.
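The focal-field integrals can be evaluated by straightforward quadrature. The sketch below, with an arbitrarily chosen number of quadrature points, is only an illustration of Eqs. (13)-(16) and not the authors' implementation (their incident beam is expanded in vector spherical harmonics via the localized approximation, as described next).

```python
import numpy as np
from scipy.special import jv

def focal_integrals(r, z, k, theta_max, n_theta=400):
    """Numerical evaluation of the diffraction integrals I_00, I_01, I_02
    (Eqs. (14)-(16)) of the tightly focused Gaussian beam."""
    theta = np.linspace(0.0, theta_max, n_theta)
    fw = np.exp(-(np.sin(theta) / np.sin(theta_max)) ** 2)   # apodization f_w
    common = fw * np.sqrt(np.cos(theta)) * np.exp(1j * k * z * np.cos(theta))
    krs = k * r * np.sin(theta)
    I00 = np.trapz(common * np.sin(theta) * (1 + np.cos(theta)) * jv(0, krs), theta)
    I01 = np.trapz(common * np.sin(theta) ** 2 * jv(1, krs), theta)
    I02 = np.trapz(common * np.sin(theta) * (1 - np.cos(theta)) * jv(2, krs), theta)
    return I00, I01, I02

def focal_field(r, varphi, z, k, f, E0, theta_max, n1_over_n2=1.0):
    """Assembles Eq. (13) from the three integrals (prefactor as written)."""
    I00, I01, I02 = focal_integrals(r, z, k, theta_max)
    pref = -1j * k * f / 2.0 * np.sqrt(n1_over_n2) * E0 * np.exp(-1j * k * f)
    return pref * np.array([I00 + I02 * np.exp(2j * varphi),
                            1j * (I00 - I02 * np.exp(2j * varphi)),
                            -2j * I01 * np.exp(1j * varphi)])
```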
The incident field is expanded by the vector spherical harmonics (VSH)
functions, $\bm{M}_{nmp,i}^{(1)}$ and $\bm{N}_{nmp,i}^{(1)}$, as
$\bm{E}_{\rm inc}(\bm{r})=\sum_{n=1}^{n_{\rm max}}\sum_{m=-n}^{+n}\sum_{p={\rm e,m}}\left[u_{nmp,i}\bm{M}_{nmp,i}^{(1)}(\bm{r})+v_{nmp,i}\bm{N}_{nmp,i}^{(1)}(\bm{r})\right].$ (17)
Here, the superscript $(1)$ in $\bm{M}_{nmp,i}^{(1)}$ and
$\bm{N}_{nmp,i}^{(1)}$ represents the spherical Bessel function in the radial
part to describe the incident field. The subscript $i$ denotes that these VSH
functions are located at $\bm{r}_{i}$, i.e., the position of particle $i$. The
index $p={\rm e,m}$ corresponds to the TE and TM modes. $u_{nmp,i}$ and
$v_{nmp,i}$ are the expansion coefficients, as determined by localized
approximation Mackowski (1994). The expansion of $\bm{E}_{\rm inc}$ by
$\bm{M}_{nmp,i}^{(1)}$ and $\bm{N}_{nmp,i}^{(1)}$ is applied to its scattering
by particle $i$.
When the particles are isolated from each other, the scattered field is
approximately given by
$\bm{E}_{{\rm sca},i}^{(0)}(\bm{r})=\sum_{n=1}^{n_{\rm max}}\sum_{m=-n}^{+n}\left[a_{n}u_{nmp,i}\bm{M}_{nmp,i}^{(3)}(\bm{r})+b_{n}v_{nmp,i}\bm{N}_{nmp,i}^{(3)}(\bm{r})\right]$ (18)
with the Mie coefficients $a_{n}$ and $b_{n}$ being independent of $i$. Note
that the radial part of $\bm{M}_{nmp,i}^{(3)}$ and $\bm{N}_{nmp,i}^{(3)}$ is
given by the spherical Hankel function of the first kind to describe the
outward spherical wave. By introducing the vectors $\vec{c}_{{\rm inc},i}=(\{u_{nmp,i},v_{nmp,i}\})^{\rm t}$ and $\vec{c}_{{\rm sca},i}^{\ (0)}=(\{a_{n}u_{nmp,i},b_{n}v_{nmp,i}\})^{\rm t}$ for the incident and scattered fields, respectively, the Mie coefficients give the T-matrix for a single particle, $\vec{c}_{{\rm sca},i}^{\ (0)}=\hat{t}_{i}\vec{c}_{{\rm inc},i}$.
The multiple scatterings between the particles combine the T-matrices
$\hat{t}_{i}$ and result in generalized T-matrix elements $\hat{T}_{ij}$
Mackowski (1994); Mackowski and Mishchenko (1996, 2011). An extension of the
vector, $\vec{C}_{\rm inc}=(\vec{c}_{{\rm inc},1},\vec{c}_{{\rm
inc},2},\cdots)^{\rm t}$, shows a simple formulation of the multiple
scattering:
$\vec{C}_{\rm sca}=\left(\begin{matrix}\vec{c}_{{\rm sca},1}\\ \vec{c}_{{\rm sca},2}\\ \vdots\end{matrix}\right)=\left(\begin{matrix}\hat{T}_{11}&\hat{T}_{12}&\cdots\\ \hat{T}_{21}&\hat{T}_{22}&\cdots\\ \vdots&\vdots&\ddots\end{matrix}\right)\vec{C}_{\rm inc}.$ (19)
The evaluated coefficients $\vec{c}_{{\rm sca},i}=(\{A_{nmp,i},B_{nmp,i}\})^{\rm t}$ give the full scattered field

$\bm{E}_{{\rm sca},i}(\bm{r})=\sum_{n=1}^{n_{\rm max}}\sum_{m=-n}^{+n}\left[A_{nmp,i}\bm{M}_{nmp,i}^{(3)}(\bm{r})+B_{nmp,i}\bm{N}_{nmp,i}^{(3)}(\bm{r})\right].$ (20)
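One common way to obtain the generalized T-matrix of Eq. (19) is to solve the coupled equations $\vec{c}_{{\rm sca},i}=\hat{t}_{i}(\vec{c}_{{\rm inc},i}+\sum_{j\neq i}\hat{H}_{ij}\vec{c}_{{\rm sca},j})$, where $\hat{H}_{ij}$ are translation-addition matrices converting outgoing waves centred on particle $j$ into regular waves centred on particle $i$. The sketch below assumes this standard formulation (in the spirit of Mackowski (1994)) and treats the translation matrices and single-particle T-matrices as precomputed inputs; it is not the authors' code.

```python
import numpy as np

def solve_multiple_scattering(t_blocks, H_blocks, c_inc_blocks):
    """Solve c_sca,i = t_i (c_inc,i + sum_{j != i} H_ij c_sca,j).

    t_blocks     : list of per-particle single-sphere T-matrices (n_c x n_c),
                   diagonal in the Mie coefficients a_n, b_n
    H_blocks     : H_blocks[i][j], translation-addition matrices (assumed given)
    c_inc_blocks : incident expansion coefficients per particle (length n_c)
    Returns the stacked scattered coefficients (A_nmp,i, B_nmp,i) per particle."""
    n_p = len(t_blocks)
    n_c = t_blocks[0].shape[0]
    A = np.eye(n_p * n_c, dtype=complex)
    rhs = np.zeros(n_p * n_c, dtype=complex)
    for i in range(n_p):
        si = slice(i * n_c, (i + 1) * n_c)
        rhs[si] = t_blocks[i] @ c_inc_blocks[i]
        for j in range(n_p):
            if j != i:
                sj = slice(j * n_c, (j + 1) * n_c)
                A[si, sj] -= t_blocks[i] @ H_blocks[i][j]
    c_sca = np.linalg.solve(A, rhs)
    return [c_sca[i * n_c:(i + 1) * n_c] for i in range(n_p)]
```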
## ACKNOWLEDGMENT
The authors thank Prof. H. Masuhara, Dr. T. Kudo, and Z.-H. Huang for their
fruitful discussions on their experimental results. The authors acknowledge all
members of the Collective Optofluidic Dynamics of Nanoparticles meeting
organized by Prof. Masuhara. This work was supported by JSPS KAKENHI Grant
Number 16K21732 in Scientific Research on Innovative Areas “Nano-Material
Optical-Manipulation”. T.Y. was supported by JSPS KAKENHI Grant Number
18K13484 and H.I. was supported by JSPS KAKENHI Grant Number 16H06504 and
18H01151.
## Appendix A Profile of incident and total fields
In our discussion on indirect optical manipulation under tightly focused laser
irradiation, the scattered field by a central trapped particle is larger than
or comparable with the incident field at the binding position of surrounding
particles. Therefore, the multiple scattering of light between particles is
significant. To show the field profile explicitly in our setup with $NA\simeq
0.996$, we plot the intensities of total electric fields in Fig. 4 when
$N_{\rm p}=0$ (incident field), $1$, and $7$. The incident field is a Gaussian
beam modeled with the localized approximation, with the Gaussian beam constant
set to $C_{\rm GB}=0.51$. Although this value lies beyond the range where the
approximation $C_{\rm GB}=1/(k\omega_{0})$ reliably estimates the beam waist
$\omega_{0}$, it is chosen to produce $\omega_{0}\approx 800\,\mathrm{nm}$;
see the plot of $N_{\rm p}=0$ in Fig. 4.
When the light is applied to a single particle, the intensity oscillates as a
function of the distance $r$ from the focal center due to interference between
the incident and scattered fields. At $r\simeq 850\,\mathrm{nm}$, the
intensity indicates the first maximum, where the gradient force is zero and
the particle could be bound optically. At the local maximum, the total field
intensity is larger than the incident one. In the case of $N_{\rm p}=7$, the
profile plot of field intensity follows a bisector between two of the
surrounding particles (for the case of Fig. 1(e), along the $y$ axis). The
oscillations of intensity along the radial direction for $N_{\rm p}=1$ and $7$
are almost equivalent.
We also plot the intensity of the magnetic field. The magnetic field intensity
is also larger than or comparable with that of incident light. Therefore, a
scattering-induced component of the Poynting vector describing the optical
current is comparable with the component due to the incident field.
Figure 4: Profile of the intensities of total electric and magnetic fields,
$|\bm{E}_{\rm tot}|^{2}$ and $|\bm{H}_{\rm tot}|^{2}$, for a circular
polarization when $N_{\rm p}=0,1$, and $7$. The focal point of the incident
laser is located at the origin. The intensities are normalized by the incident
field at the focal point, $|\bm{E}_{\rm inc}(r=0)|^{2}={E_{0}}^{2}$ and
$|\bm{H}_{\rm inc}(r=0)|^{2}={H_{0}}^{2}$. The particles are gold with
$d=200\,\mathrm{nm}$ diameter.
## Appendix B Optical current vs. optical force
In the main text and Fig. 2, we analyze the scattered light in terms of the SAM
and OAM to discuss the mechanism of revolution by the
indirect optical force. In our simulation with gold particles and a tightly
focused laser, the imbalance of generated OAM is consistent with the
revolution direction. To consider a mechanism to determine the revolution
direction, as shown in Fig. 5, we examine maps of the optical current
described by the Poynting vector $\bm{S}=\bm{E}^{*}\times\bm{H}/2$ and the
optical force acting on an additional small particle. Note that the optical
force acts on all the particles present. To consider the “optical force at an
arbitrary position”, we introduce a small probe particle with $d=10\,\mathrm{nm}$ to
avoid additional light scattering. The optical current describes the
contribution of the scattering force, whereas the optical force consists of
both the scattering and gradient forces.
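A one-line helper for the time-averaged optical current used in this appendix; the factor of $1/2$ and the real part follow the definition $\bm{S}=\bm{E}^{*}\times\bm{H}/2$ quoted above (unit prefactors are omitted, since only the vector structure matters for the maps).

```python
import numpy as np

def optical_current(E, H):
    """Time-averaged Poynting vector S = Re(E* x H) / 2 on a grid of complex
    field samples; E, H have shape (..., 3). Its in-plane components give the
    optical-current maps of Fig. 5."""
    return 0.5 * np.real(np.cross(np.conj(E), H))
```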
Figure 5: Poynting vectors indicating the optical current (left) and optical
force on a small additional particle (right panels) in the $z=0$ plane when
$N_{\rm p}=1$ (a,b) and $7$ (c,d) under tightly focused laser irradiation. The
incident laser has $\sigma=+1$ circular polarization. The beam waist is
$0.8\,\mathrm{\mu m}$. The other parameters of the laser and particles are the
same as those in Fig. 1(e) in the main text. In the right panels, the
additional particle is $d=10\,\mathrm{nm}$ to avoid multiple scattering by the
additional particle. The color scale displays the absolute value of vectors,
$\sqrt{F_{x}^{2}+F_{y}^{2}}$. The black circle indicates Au particles with
$d=200\,\mathrm{nm}$.
The optical current of the incident laser is almost along the incident
direction, and the in-plane components are negligible even for a tightly
focused laser (not shown). With the scattered field, the optical current
exhibits the spatial structures. When one particle is trapped at the focal
point, the optical current of the total field shows a vortical structure
according to the incident $\sigma=+1$ polarization in Fig. 5(a), whereas the
optical force is in the radial direction in Fig. 5(b). This is because the
gradient force is much stronger than the scattering force. The optical force indicates
the binding position clearly, while the optical current does not. Therefore,
for the binding, one should discuss the optical force, whereas for the
dynamics or indirect manipulation, the optical current gives reasonable
information. The optical current with seven particles in Fig. 5(c) differs only
slightly from that with one particle in (a) with respect to the vortical flow
that explains the revolution. The qualitative difference is an
enhancement of the optical currents by multiple scattering in the vicinity of
surrounding particles. This result qualitatively explains the direction of
revolution in Fig. 1(e) in the main text, whereas the force in Fig. 5(d) is
not useful to discuss the dynamics.
Figure 6: Poynting vectors indicating the optical current (left) and optical
force on a small additional particle (right panels) in the $z=0$ plane when
$N_{\rm p}=1$ (a,b) and $7$ (c,d) under wide-area irradiation. The incident
laser has $\sigma=+1$ circular polarization. The beam waist is
$12.5\,\mathrm{\mu m}$. The other parameters of the laser and particles are
the same as those in Fig. 1(e) in the main text. The upper four panels show
the case of tightly focused laser irradiation. The lower four panels are the
non-focused case. In the right panels, the additional particle is
$d=10\,\mathrm{nm}$ to avoid multiple scattering by the additional particle.
The color scale displays the absolute value of vectors,
$\sqrt{F_{x}^{2}+F_{y}^{2}}$. The black circle indicates Au particles with
$d=200\,\mathrm{nm}$.
Figures 6(a)–(d) demonstrate the case of wide-area irradiation. When $N_{\rm p}=1$ in
Fig. 6(a), the interference between the incident and scattered fields
indicates the changes of vortex direction of the optical current with the
radial distance $r$ from the focal point. This is also found in the optical
force in Fig. 6(b). Such behaviors qualitatively agree with the observation of
negative torque reported in Ref. Han et al. (2018). When $N_{\rm p}=7$, the
optical current implies a rotation of individual particles in Fig. 6(c),
whereas the dynamics of whole assembly is not readable. The optical force in
Fig. 6(d) shows the next binding positions, which are slightly different from
the second stable position in Fig. 6(b). Neither Figs. 6(c) nor (d) exhibit
binding or revolution, unlike the tightly focused case. For the direct optical
manipulation with a wide irradiation, one should first examine the optical
current and force by the scattering from a single particle rather than
multiple scattering.
The optical current suggests the absence and presence of negative torque by
tight and wide focusing, respectively. Thus, negative torque could be obtained
by properly tuning the particle distance, for example by means of a given particle
charge or other techniques.
## Appendix C Linear polarization
In contrast to circular polarization, a linearly polarized laser does not carry
SAM. Hence, indirect optical manipulation by linear polarization must show a
qualitative difference from the circularly polarized laser discussed in the main
text. Here, we consider indirect optical binding by a focused linearly
polarized laser and demonstrate a stable alignment of multiple particles,
which agrees with the experiment by Kudo et al. Kudo et al. (2018).
The parameters of the laser are the same as those for circular polarization.
Figure 7 shows the stable position of bound particles when $N_{\rm p}$
increases one by one. The polarization direction is the $x$-direction. In the
case of linear polarization, the bound particles stay fixed for a finite period
and then change their positions due to the random force. Thus, Fig. 7 shows an example
of possible configurations. When $N_{\rm p}=2$, the “second” particle is
trapped at a slightly shifted position from the perpendicular direction (on
the $y$-axis), which may be attributed to the shape of incident laser shown in
the inset of Fig. 7(a). We can also find other stable trapping positions. The
“third” particle is trapped at another position. When the number of particles
increases, their shifts from the $y$-axis are reduced and they tend to be
aligned on a line. Such configurations were found in an experiment Kudo et al.
(2018).
For circular polarization, we analyzed the scattered fields in terms of the SAM
and OAM and revealed that the “SO coupling” of light and the imbalance of the
generated OAM are significant for the revolution of the surrounding particles.
Meanwhile, for linear polarization, the indirect binding is static. Although the
scattering of linearly polarized light by the bound particles also generates
OAM, it is balanced, which is consistent with the stable optical binding shown
in Fig. 7.
Figure 7: Intensity of total electric field with $N_{\rm p}=2$ (a), $3$ (b),
and $5$ (c) on the $z=0$ plane when an $x$-linearly polarized and focused
single laser is applied. The focal point of the incident laser is located at
the origin. The particles are stably trapped. The inset of panel (a) shows the
incident field.
## Appendix D Definition of $\bm{e}_{\sigma,l}(\theta,\phi)$ for scattered
field
We analyze the scattered light from all particles in terms of the SAM and OAM.
The scattered light is a spherical wave at a sufficiently far position.
Therefore, to define the axis of angular momenta, the scattered field must be
converted to a plane wave. We consider a fictitious lens for this conversion.
Through the lens, the field at ($r=R_{\rm c},\theta,\phi$) is rewritten as
$\bm{E}_{\rm tot}\to\hat{R}_{y}^{-1}\hat{R}_{z}^{-1}\bm{E}_{\rm tot}$ (21)
with

$\displaystyle\hat{R}_{z}(\theta,\phi)=\left(\begin{matrix}\cos\phi&-\sin\phi&0\\ \sin\phi&\cos\phi&0\\ 0&0&1\end{matrix}\right),$ (22)

$\displaystyle\hat{R}_{y}(\theta,\phi)=\left(\begin{matrix}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{matrix}\right)$ (23)

being the rotation matrices with respect to the $z$- and $y$-axes. The converted
field is projected onto $\bm{e}_{\sigma=\pm 1}=(1,\pm i,0)^{\rm t}/\sqrt{2}$ for
SAM $\sigma$ with OAM $l$, i.e., onto $\bm{e}_{\sigma}e^{il\phi}$. Thus, the unit
vector $\bm{e}_{\sigma,l}(\theta,\phi)$ in Eqs. (6) and (7) is given as

$\bm{e}_{\sigma,l}(\theta,\phi)=\hat{R}_{z}\hat{R}_{y}\bm{e}_{\sigma}e^{il\phi}.$ (24)
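A sketch of the projection mode of Eq. (24), assuming (as is standard when rotating the polarization basis from the $z$-axis to the observation direction) that the $y$-rotation is by the polar angle $\theta$ and the $z$-rotation by the azimuth $\phi$:

```python
import numpy as np

def e_sigma_l(sigma, l, theta, phi):
    """Unit mode of Eq. (24): circular polarization vector e_sigma rotated from
    the z-axis to the (theta, phi) direction, multiplied by the exp(i l phi)
    vortex phase. Rotation matrices as in Eqs. (22)-(23)."""
    e_sigma = np.array([1.0, 1j * sigma, 0.0]) / np.sqrt(2.0)
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0,          0.0,         1.0]])
    Ry = np.array([[np.cos(theta),  0.0, np.sin(theta)],
                   [0.0,            1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    return Rz @ Ry @ e_sigma * np.exp(1j * l * phi)
```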
## Appendix E Particle size dependence of indirect optical force and cross-
sections
In the main text, we discussed the correlation of the indirect optical force
and cross-sections in the $n$-$\kappa$ plane when the particle diameter is
fixed at $d=200\,\mathrm{nm}$ in Fig. 3. Then, we noted that for the indirect
mechanism, the scattering rather than the absorption is essential. Here, we
examine this criterion when the particle size is changed.
Figure 8: Indirect optical force and cross-sections in the plane of complex
refractive index, $\tilde{n}=n+i\kappa$, for $d=50\,\mathrm{nm}$ (a) to
$350\,\mathrm{nm}$ (f), excluding $d=200\,\mathrm{nm}$, when $N_{\rm p}=7$; the
case of $d=200\,\mathrm{nm}$ is shown in Fig. 3 in the main text. The optical
force is projected in the angular direction and evaluated on one surrounding
particle at $r=\Delta=853.4\,\mathrm{nm}$. Both cross-sections are plotted in
a unit of $\pi R^{2}$ with $R=100\,\mathrm{nm}$.
Figure 8 demonstrates the indirect optical force when $N_{\rm p}=7$ and the
scattering and absorption cross-sections of a single particle for
$d=50-350\,\mathrm{nm}$, excluding $d=200\,\mathrm{nm}$; the case of
$d=200\,\mathrm{nm}$ is shown in the main text. Note that the cross-sections
in Fig. 8 are normalized by $\pi(100\,\mathrm{nm})^{2}$ for any $d$. The
optical force is evaluated when the surrounding particles are at
$r=853.4\,\mathrm{nm}$. The force tends to increase with the particle size.
When $d<200\,\mathrm{nm}$, the optical force is much smaller than that of
$d=200\,\mathrm{nm}$, which corresponds to the depression of $C_{\rm sca}$. At
$d>200\,\mathrm{nm}$, the indirect optical force clearly increases with $d$,
whereas the scattering cross-section is increased only slightly. Then, the
scattered field intensity is not affected significantly by the diameter.
Meanwhile, the optical force increases according to the particle volume. The
stable binding distance $\Delta$ from the focal point is slightly affected by
the particle size, and one must, in principle, evaluate it self-consistently. However, we
fixed the position of surrounding particles at $r=\Delta=853.4\,\mathrm{nm}$
because the structures of force in the $n$-$\kappa$ plane shown in Fig. 3 in
the main text and Figs. 8(a1)–(f1) exhibit no significant change.
The absorption cross-section also depends on the diameter. For large
particles, the absorption cross-section tends to be relatively smaller than
the scattering one. In the $d<200\,\mathrm{nm}$ region, however, we can see a
$C_{\rm sca}<C_{\rm abs}$ region in the plane. From the figures of $d=50$ and
$100\,\mathrm{nm}$, the behavior of indirect optical force in the $n$-$\kappa$
plane is similar to the absorption cross-section, whereas the structures of
optical force correspond to those of $C_{\rm sca}$ for $d\geq
150\,\mathrm{nm}$. Then, we can say that the scattering cross-section is an
essential factor for a large indirect optical force when the scatterers have
sufficient size. However, for tiny particles, $C_{\rm sca}$ is too small and
the absorption dominates the optical force.
The SAM and OAM components of the “emitted” light can also be analyzed for
these diameters. However, the discussion of the SAM and OAM with small or large
particles is essentially equivalent to that in the main text.
## Appendix F Scattering of light with spin–orbit coupling
The flip of SAM also explains the acceleration of surrounding particles.
Figure 9 shows the mechanism schematically. When $l=0$ OAM is scattered from
the center particle, the scattered fields at six surrounding particles have
the same phase [see Fig. 9(a)]. Then, by the Huygens–Fresnel principle, the
surrounding particles emit light in the radial direction, as shown in Fig. 9(c).
Meanwhile, for $l=2$ shown in Fig. 9(b), the scattered fields at two neighboring
surrounding particles have a phase difference of $4\pi/6$. This phase difference
tilts the wavefront of the emitted light, as in Fig. 9(d). This tilt of the
“light emission” causes a scattering
force and accelerates the surrounding particles.
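As a short check of the quoted phase difference, purely illustrative: for the $l=2$ component sampled at six equally spaced azimuthal positions, neighboring particles differ in phase by $2\pi l/6=4\pi/6$, i.e., $120^{\circ}$.

```python
import numpy as np

l, N = 2, 6                               # OAM index and number of surrounding particles
phi = 2 * np.pi * np.arange(N) / N        # azimuthal positions of the particles
phase = np.angle(np.exp(1j * l * phi))    # phase of the l = 2 component at each particle
print(np.round(np.degrees(phase)))        # [0. 120. -120. 0. 120. -120.]
print(np.degrees(2 * np.pi * l / N))      # 120.0 deg = 4*pi/6 rad between neighbours
```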
If the “SO coupling” of light at a single particle is enlarged, the ratio of
$l=2$ scattering from the center particle against $l=0$ scattering increases.
Therefore, within this schematic picture, stronger SO coupling results in
stronger indirect optical force, which is reasonable from the similarity
between the profiles of Figs. 3(a) and (c) in the main text.
Figure 9: Phase profile of $l=0$ (a) and $l=2$ OAM (b) incident components.
Black circles show the particles with $d=200\,\mathrm{nm}$ when
$\Delta=853.4\,\mathrm{nm}$. The curve indicating a phase jump at $\pm\pi$ in (b)
is an artifact of the phase representation. (c,d) Schematic explanation of force by the
light scattering when $l=0$ (c) and $l=2$ (d) based on the phase of light and
the Huygens–Fresnel principle.
## Appendix G Description of movie
The accompanying movie shows a numerical simulation of the dynamics of gold
particles in a water solvent. The parameters of the simulation are the same as
those of Fig. 1(e) in the main text. In this movie, we assume a slight charge
on the particles, which might exist experimentally. However, the presence of
charge on the particles only slightly changes the particle distance and is not
essential for the dynamics.
## References
* Ashkin et al. (1986) Ashkin, A.; Dziedzic, J. M.; Bjorkholm, J. E.; Chu, S. Observation of a single-beam gradient force optical trap for dielectric particles. _Optics Letters_ 1986, _11_ , 288–290.
* Zhang and Liu (2008) Zhang, H.; Liu, K.-K. Optical tweezers for single cells. _Journal of the Royal Society Interface_ 2008, _5_ , 671–690.
* Li et al. (2010) Li, T.; Kheifets, S.; Medellin, D.; Raizen, M. G. Measurement of the instantaneous velocity of a brownian particle. _Science_ 2010, _328_ , 1673–1675.
* Fazal and Block (2011) Fazal, F. M.; Block, S. M. Optical tweezers study life under tension. _Nature Photonics_ 2011, _5_ , 318–321.
* Curtis et al. (2002) Curtis, J. E.; Koss, B. A.; Grier, D. G. Dynamic holographic optical tweezers. _Optics Communications_ 2002, _207_ , 169–175.
* Grier and Roichman (2006) Grier, D. G.; Roichman, Y. Holographic optical trapping. _Applied Optics_ 2006, _45_ , 880–887.
* Mellor and Bain (2006) Mellor, C. D.; Bain, C. D. Array Formation in Evanescent Waves. _ChemPhysChem_ 2006, _7_ , 329–332.
* Mellor et al. (2006) Mellor, C. D.; Fennerty, T. A.; Bain, C. D. Polarization effects in optically bound particle arrays. _Optics Express_ 2006, _14_ , 10079–10088.
* Taylor et al. (2008) Taylor, J. M.; Wong, L. Y.; Bain, C. D.; Love, G. D. Emergent properties in optically bound matter. _Optics Express_ 2008, _16_ , 6921–6929.
* Righini et al. (2007) Righini, M.; Zelenina, A. S.; Girard, C.; Quidant, R. Parallel and selective trapping in a patterned plasmonic landscape. _Nature Physics_ 2007, _3_ , 477–480.
* Pang and Gordon (2012) Pang, Y.; Gordon, R. Optical Trapping of a Single Protein. _Nano Letters_ 2012, _12_ , 402–406.
* Rahmani and Chaumet (2006) Rahmani, A.; Chaumet, P. C. Optical trapping near a photonic crystal. _Optics Express_ 2006, _14_ , 6353–6358.
* Yang et al. (2009) Yang, A. H. J.; Moore, S. D.; Schmidt, B. S.; Klug, M.; Lipson, M.; Erickson, D. Optical manipulation of nanoparticles and biomolecules in sub-wavelength slot waveguides. _Nature_ 2009, _457_ , 71–75.
* Jaquay et al. (2013) Jaquay, E.; Martinez, L. J.; Mejia, C. A.; Povinelli, M. L. Light-Assisted, Templated Self-Assembly Using a Photonic-Crystal Slab. _Nano Letters_ 2013, _13_ , 2290–2294.
* Aveyard et al. (2002) Aveyard, R.; Binks, B. P.; Clint, J. H.; Fletcher, P. D.; Neumann, B.; Paunov, V. N.; Annesley, J.; Botchway, S. W.; Parker, A. W.; Ward, A. D.; Burgess, A. N. Drag forces on a stationary particle in flowing two-dimensional ordered particle monolayers: Simulation and measurement using optical tweezers. _Langmuir_ 2002, _18_ , 9587–9593.
* Park and Furst (2008) Park, B. J.; Furst, E. M. Optical Trapping Forces for Colloids at the Oil–Water Interface. _Langmuir_ 2008, _24_ , 13383–13392.
* Depasse and Vigoureux (1994) Depasse, F.; Vigoureux, J. M. Optical binding force between two Rayleigh particles. _Journal of Physics D: Applied Physics_ 1994, _27_ , 914–919.
* Forbes et al. (2020) Forbes, K. A.; Bradshaw, D. S.; Andrews, D. L. Optical binding of nanoparticles. _Nanophotonics_ 2020, _9_ , 1–17.
* Demergis and Florin (2012) Demergis, V.; Florin, E.-L. Ultrastrong Optical Binding of Metallic Nanoparticles. _Nano Letters_ 2012, _12_ , 5756–5760.
* Han et al. (2018) Han, F.; Parker, J. A.; Yifat, Y.; Peterson, C.; Gray, S. K.; Scherer, N. F.; Yan, Z. Crossover from positive to negative optical torque in mesoscale optical matter. _Nature Communications_ 2018, _9_ , 4897.
* Yan et al. (2013) Yan, Z.; Shah, R. A.; Chado, G.; Gray, S. K.; Pelton, M.; Scherer, N. F. Guiding Spatial Arrangements of Silver Nanoparticles by Optical Binding Interactions in Shaped Light Fields. _ACS Nano_ 2013, _7_ , 1790–1802.
* Yan et al. (2014) Yan, Z.; Gray, S. K.; Scherer, N. F. Potential energy surfaces and reaction pathways for light-mediated self-organization of metal nanoparticle clusters. _Nature Communications_ 2014, _5_ , 3751.
* Kudo et al. (2016) Kudo, T.; Wang, S. F.; Yuyama, K.-I.; Masuhara, H. Optical Trapping-Formed Colloidal Assembly with Horns Extended to the Outside of a Focus through Light Propagation. _Nano Letters_ 2016, _16_ , 3058–3062.
* Wang et al. (2016) Wang, S. F.; Kudo, T.; Yuyama, K.-I.; Sugiyama, T.; Masuhara, H. Optically Evolved Assembly Formation in Laser Trapping of Polystyrene Nanoparticles at Solution Surface. _Langmuir_ 2016, _32_ , 12488–12496.
* Kudo et al. (2018) Kudo, T.; Yang, S. J.; Masuhara, H. A Single Large Assembly with Dynamically Fluctuating Swarms of Gold Nanoparticles Formed by Trapping Laser. _Nano Letters_ 2018, _18_ , 5846–5853.
* Sugiyama et al. (2007) Sugiyama, T.; Adachi, T.; Masuhara, H. Crystallization of glycine by photon pressure of a focused CW laser beam. _Chemistry Letters_ 2007, _36_ , 1480–1481.
* Berry (2009) Berry, M. V. Optical currents. _Journal of Optics A: Pure and Applied Optics_ 2009, _11_ , 094001–1–094001–12.
* Richards and Wolf (1959) Richards, B.; Wolf, E. Electromagnetic diffraction in optical systems, II. Structure of the image field in an aplanatic system. _Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences_ 1959, _253_ , 358–379.
* Zhao et al. (2007) Zhao, Y.; Edgar, J. S.; Jeffries, G. D.; McGloin, D.; Chiu, D. T. Spin-to-Orbital Angular Momentum Conversion in a Strongly Focused Optical Beam. _Physical Review Letters_ 2007, _99_ , 073901.
* Novotny and Hecht (2006) Novotny, L.; Hecht, B. _Principles of Nano-Optics_ ; Cambridge University Press, 2006.
* Mackowski (1994) Mackowski, D. W. Calculation of total cross sections of multiple-sphere clusters. _Journal of the Optical Society of America A_ 1994, _11_ , 2851–2861.
* Mackowski and Mishchenko (1996) Mackowski, D. W.; Mishchenko, M. I. Calculation of the T matrix and the scattering matrix for ensembles of spheres. _Journal of the Optical Society of America A_ 1996, _13_ , 2266–2278.
* Mackowski and Mishchenko (2011) Mackowski, D. W.; Mishchenko, M. I. A multiple sphere $T$-matrix Fortran code for use on parallel computer clusters. _Journal of Quantitative Spectroscopy and Radiative Transfer_ 2011, _112_ , 2182–2192.
* Datsyuk and Pavlyniuk (2015) Datsyuk, V. V.; Pavlyniuk, O. R. Maxwell stress on a small dielectric sphere in a dielectric. _Physical Review A_ 2015, _91_ , 023826.
* Rotne and Prager (1969) Rotne, J.; Prager, S. Variational Treatment of Hydrodynamic Interaction in Polymers. _The Journal of Chemical Physics_ 1969, _50_ , 4831–4837.
* Yamakawa (1970) Yamakawa, H. Transport Properties of Polymer Chains in Dilute Solution: Hydrodynamic Interaction. _The Journal of Chemical Physics_ 1970, _53_ , 436–443.
* Happel and Brenner (1983) Happel, J.; Brenner, H. _Low Reynolds number hydrodynamics: with special applications to particulate media_ ; Springer Netherlands, 1983.
* Tamura et al. (2019) Tamura, M.; Omatsu, T.; Tokonami, S.; Iida, T. Interparticle-Interaction-Mediated Anomalous Acceleration of Nanoparticles under Light-Field with Coupled Orbital and Spin Angular Momentum. _Nano Letters_ 2019, _19_ , 4873–4878.
* Bohren and Huffman (1998) Bohren, C. F.; Huffman, D. R. _Absorption and Scattering of Light by Small Particles_ ; Wiley-VCH, 1998.
# Artificial Intelligence-Based Methods for Fusion of Electronic Health Records and Imaging Data

(This is a pre-print of a paper accepted for publication in Scientific Reports. Cite the final version from Nature Scientific Reports.)
Farida Mohsen (College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110 Doha, Qatar)
Hazrat Ali (College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110 Doha, Qatar)
Nady El Hajj (College of Science and Engineering and College of Health and Life Sciences, Hamad Bin Khalifa University, Qatar Foundation, 34110 Doha, Qatar)
Zubair Shah (College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110 Doha, Qatar)
Correspondence to: Zubair Shah
###### Abstract
Healthcare data are inherently multimodal, including electronic health records
(EHR), medical images, and multi-omics data. Combining these multimodal data
sources contributes to a better understanding of human health and provides
optimal personalized healthcare. The most important question when using
multimodal data is how to fuse them - a field of growing interest among
researchers. Advances in artificial intelligence (AI) technologies,
particularly machine learning (ML), enable the fusion of these different data
modalities to provide multimodal insights. To this end, in this scoping
review, we focus on synthesizing and analyzing the literature that uses AI
techniques to fuse multimodal medical data for different clinical
applications. More specifically, we focus on studies that only fused EHR with
medical imaging data to develop various AI methods for clinical applications.
We present a comprehensive analysis of the various fusion strategies, the
diseases and clinical outcomes for which multimodal fusion was used, the ML
algorithms used to perform multimodal fusion for each clinical application,
and the available multimodal medical datasets. We followed the PRISMA-ScR
(Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension
for Scoping Reviews) guidelines. We searched Embase, PubMed, Scopus, and
Google Scholar to retrieve relevant studies. After pre-processing and
screening, we extracted data from 34 studies that fulfilled the inclusion
criteria. We found that studies fusing imaging data with EHR are increasing
and doubling from 2020 to 2021. In our analysis, a typical workflow was
observed: feeding raw data, fusing different data modalities by applying
conventional machine learning (ML) or deep learning (DL) algorithms, and
finally, evaluating the multimodal fusion through clinical outcome
predictions. Specifically, early fusion was the most used technique in most
applications for multimodal learning (22 out of 34 studies). We found that
multimodality fusion models outperformed traditional single-modality models
for the same task. Disease diagnosis and prediction were the most common
clinical outcomes (reported in 20 and 10 studies, respectively) from a
clinical outcome perspective. Neurological disorders were the dominant
category (16 studies). From an AI perspective, conventional ML models were the
most used (19 studies), followed by DL models (16 studies). Multimodal data
used in the included studies were mostly from private repositories (21
studies). Through this scoping review, we offer new insights for researchers
interested in knowing the current state of knowledge within this research
field.
## Introduction
Over the past decade, digitization of health data has grown tremendously, with
increasing data repositories spanning the healthcare sectors [1]. Healthcare
data are inherently multimodal, including electronic health records (EHR),
medical imaging, multi-omics, and environmental data. In many applications of
medicine, the integration (fusion) of different data sources has become
necessary for effective prediction, diagnosis, treatment, and planning
decisions by combining the complementary power of different modalities,
thereby bringing us closer to the goal of precision medicine[2, 3].
Data fusion is the process of combining several data modalities, each
providing different viewpoints on a common phenomenon to solve an inference
problem. The purpose of fusion techniques is to effectively take advantage of
cooperative and complementary features of different modalities [4, 5]. For
example, in interpreting medical images, clinical data is often necessary for
making effective diagnostic decisions. Many studies found that missing
pertinent clinical and laboratory data during image interpretation decreases
the radiologists’ ability to accurately make diagnostic decisions[6]. The
significance of clinical data to support the accurate interpretation of
imaging data is well established in radiology as well as in a wide variety of
imaging-based medical specialties such as dermatology, ophthalmology, and
pathology that depend on clinical context to interpret imaging data
correctly[7, 8, 9].
Thanks to the advances of AI and ML models, one can achieve a useful fusion of
multimodal data with high-dimensionality [10], various statistical properties,
and different missing value patterns[11]. Multimodal ML is the domain that can
integrate different data modalities. In recent years, multimodal data fusion
has gained much attention for automating clinical outcome prediction and
diagnosis. This can be seen in Alzheimer’s disease diagnosis and prediction
[12, 13, 14, 15] when imaging data were combined with specific lab test
results and demographic data as inputs to ML models, and better performance
was achieved than with single-source models. Similarly, fusing pathological
images with patient demographic data yielded an increase in performance in
comparison with single-modality models for breast cancer diagnosis[16].
Several studies found similar advantages in various medical imaging
applications, including diabetic retinopathy prediction, COVID-19 detection,
and glaucoma diagnosis[17, 18, 19].
This scoping review focuses on studies that use AI models to fuse medical
images with EHR data for different clinical applications. Modality fusion
strategies play a significant role in these studies. In the literature, some
other reviews have been published on the use of AI for multimodal medical data
fusion [20, 21, 22, 23, 24, 25, 26]; however, they differ from our review in
terms of their scope and coverage. Some previous studies focused on the fusion
of different medical imaging modalities [20, 21]; they did not consider the
EHR in conjunction with imaging modalities. Other reviews focused on the
fusion of omics data with other data modalities using DL models [22, 23].
Another study [24] focused on the fusion of various internet of medical things
(IoMTs) data for smart healthcare applications. Liu et al. [27] focused
exclusively on integrating multimodal EHR data, where multimodality refers to
structured data and unstructured free texts in EHR, using conventional ML and
DL techniques. Huang et al. [26] discussed fusion strategies of structured EHR
data and medical imaging using DL models emphasizing fusion techniques and
feature extraction methods. Furthermore, their review covered the research
till 2019 and retrieved only 17 studies. In contrast, our review focuses on
studies using conventional ML or DL techniques with EHR and medical imaging
data, covering 34 recent studies. Table 1 provides a detailed comparison of
our review with existing reviews.
Previous Reviews | Year | Scope and Coverage | Comparative contribution of our review
---|---|---|---
A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics [20] | 2022 | Their review focused on the fusion of different medical imaging modalities. | Our review focused on the fusion of medical imaging with multimodal EHR data and considered different imaging modalities as a single modality. The two reviews did not share any common studies.
Advances in multimodality data fusion in neuroimaging [21] | 2021 | Their review focused on the fusion of different imaging modalities, considering neuroimaging applications for brain diseases and neurological disorders. | Our review focused on the fusion of medical imaging with EHR data, considering various diseases, such as neurological disorders, cancer, cardiovascular diseases, psychiatric disorders, eye diseases, and Covid-19. The two reviews did not share any common studies.
An overview of deep learning methods for multimodal medical data mining [22] | 2022 | Their review focused on the fusion of different types of multi-omics data with EHR and different imaging modalities, only considering DL models for specific diseases (COVID-19, cancer, and Alzheimer’s). | Our review focused on the fusion of medical imaging with EHR data, considering all AI models for various diseases, such as neurological disorders, cancer, cardiovascular diseases, psychiatric disorders, eye diseases, and Covid-19. The two reviews did not share any common studies.
Multimodal deep learning for biomedical data fusion: a review [23] | 2022 | Their review focused on the fusion of different types of multi-omics data with EHR and imaging modalities, considering only DL models. Moreover, they did not provide a summary of the freely accessible multimodal datasets or of the evaluation measures used to evaluate the multimodal models. | Our review focused on the fusion of medical imaging with EHR data, considering all AI models. Moreover, our study provided a summary of the accessible multimodal datasets and of the evaluation measures used to evaluate the multimodal models. The two reviews only shared two common studies.
A comprehensive survey on multimodal medical signals fusion for smart healthcare systems [24] | 2021 | Their survey did not focus on fusing medical imaging with EHR but rather covered the fusion of IoMTs data for smart healthcare applications and covered studies published until 2020. Moreover, in their review, multimodality referred to fusing either different 1D medical signals (such as electrocardiogram (ECG) and biosignals), different medical imaging modalities, or 1D medical signals with imaging. | Our review focused on the fusion of medical imaging with EHR (structured and unstructured) for different clinical applications. It included 34 studies, most of them published in 2021 and 2022, with no study common between the two reviews.
Machine learning for multimodal electronic health records-based research: Challenges and perspectives [27] | 2021 | Their review focused on the fusion of structured and unstructured EHR data and did not consider medical imaging modalities. Moreover, they did not provide a summary of the freely accessible multimodal datasets or of the evaluation measures used to evaluate the multimodal models. | Our review focused on the fusion of medical imaging with EHR and considered structured and unstructured data in EHR as a single modality. The two reviews did not share any common studies.
Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines [26] | 2020 | Their review focused on the fusion of structured EHR data and medical imaging, considering only DL models, and included only 17 studies published until 2019. | Our review focused on the fusion of medical imaging with EHR data, considering all AI models, and included 34 studies, more than half of them published in 2020 and 2021.
Table 1: Comparison with previous reviews.
The primary purpose of our scoping review is to explore and analyze published
scientific literature that fuses EHR and medical imaging using AI models.
Therefore, our study aims to answer the following questions:
1. 1.
Fusion Strategies: what fusion strategies have been used by researchers to
combine medical imaging data with EHR? What is the most used method?
2. 2.
Diseases: For what type of diseases are fusion methods implemented?
3. 3.
Clinical outcomes and ML methods: What types of clinical outcomes are
addressed using the different fusion strategies? What kind of ML algorithms
are used for each clinical outcome?
4. 4.
Resource: What are the publicly accessible medical multimodal datasets?
We believe that this review will provide a comprehensive overview to the
readers on the advancements made in multimodal ML for EHRs and medical imaging
data. Furthermore, the reader will develop an understanding of how ML models
could be designed to align data from different modalities for various clinical
tasks. Besides, we believe that our review will help identify the lack of
multimodal data resources for medical imaging and EHR, thus motivating the
research community to develop more multimodal medical data.
## Preliminaries
We first identify the EHR and medical imaging modalities that are the focus of
this review. Then, we present the data fusion strategies that we use to
investigate the studies from the perspective of multimodal fusion.
### Data modalities
In this review, we focus on studies that use two primary data modalities:
* •
Medical imaging modality: This includes N-dimensional imaging information
acquired in clinical practice, such as X-ray, Magnetic Resonance Imaging
(MRI), functional MRI (fMRI), structural MRI (sMRI), Positron Emission
Tomography (PET), Computed Tomography (CT), and Ultrasound.
* •
EHR data: This includes both structured and unstructured free-text data.
Structured data include coded data such as diagnosis codes, procedure codes,
numerical data such as laboratory test results, and categorical data such as
demographic information, family history, vital signs, and medications.
Unstructured data include medical reports and clinical notes.
In our review, we consider studies combining the two modalities of EHR and
imaging. However, there exist cases where the data could contain only multiple
EHR modalities (structured and unstructured) or multiple imaging modalities
(e.g., PET and MRI). We consider such data as a single modality, i.e., the EHR
modality or imaging modality.
### Fusion strategies
As outlined in [26], fusion approaches can be categorized into early, late,
and joint fusion. These strategies are classified depending on the stage in
which the features are fused in the ML model. Our scoping review follows the
definitions in [26] and attempts to match each study to this taxonomy. In this
section, we briefly describe each fusion strategy (a minimal code sketch contrasting the three strategies follows the list):
* •
Early fusion: It joins features of multiple input modalities at the input
level before being fed into a single ML algorithm for training[26]. The
modality features are extracted either manually or by using different methods
such as neural networks (NN), software, statistical methods, and word
embedding models. When NNs are used to extract features, early fusion requires
training multiple models: the feature extraction models and the single fusion
model. There are two types of early fusion: type I and type II. Type I fuses
the original features without any feature extraction, while type II fuses
extracted features from the modalities.
* •
Late fusion: It trains separate ML models on data of each modality, and the
final decision leverages the predictions of each model[26]. Aggregation
methods such as weighted average voting, majority voting, or a meta-classifier
are used to make the final decision. This type of fusion is often known as
decision-level fusion.
* •
Joint fusion: It combines the learned features from intermediate layers of NN
with features from other modalities as inputs to a final model during
training[26]. In contrast to early fusion, the loss from the final model is
propagated back to the feature extraction model during training so that the
learned feature representations are improved through iterative updating of the
feature weights. NNs are used for joint fusion since they can propagate loss
from the final model to the feature extractor(s). There are two types of joint
fusion: type I and type II. The former is when NNs are used to extract
features from all modalities. The latter is when not all the input modalities’
features are extracted using NNs[26].
## Methods
In this scoping review, we followed the guidelines recommended by the PRISMA-
ScR [28].
### Search strategy
In a structured search, we searched four databases (Scopus, PubMed, Embase, and
Google Scholar) to retrieve the relevant studies. We note that MEDLINE is
covered by PubMed. For Google Scholar, we selected the first 110 relevant
studies, as beyond 110 entries the search results rapidly lost relevance and no
longer matched our review’s topic. Furthermore, we limited our search to
English-language articles published in the last seven years, between January 1,
2015, and January 6, 2022. The search was based on abstracts and titles and was
conducted between January 3 and January 6, 2022.
In this scoping review, we focused on applying AI models to multimodal medical
data-based applications. The term multimodal refers to combining medical
imaging and EHR, as described in Preliminaries section. Therefore, our search
string incorporated three major terms connected by AND: ((“Artificial
Intelligence” OR “machine learning” OR “deep learning”) AND “multimodality
fusion” AND (“medical imaging” OR “electronic health records”)). We used
different forms of each term. We provide the complete search string for all
databases in Appendix 1 of the supplementary material.
### Inclusion and exclusion criteria
We included all studies that fused EHR with medical imaging modalities using
an AI model for any clinical application. As AI models, we considered classical
ML models, DL models, transfer learning, ensemble learning, etc., as mentioned
in the search terms in Appendix 1 of the supplementary material. We did not
consider studies that used classical statistical models such as regression. Our
definition of imaging modalities covers any type of medical imaging used in
clinical practice, such as MRI, PET, CT scans, and ultrasound. We considered
both structured and unstructured free-text patient data for the EHR modality,
as described in the Preliminaries section. Only peer-reviewed studies and
conference proceedings were included, and all included studies were limited to
the English language. We did not enforce restrictions on types of disorders,
diseases, or clinical tasks.
We excluded studies that used a single data modality. We also excluded studies
that used different types of data from the same modality, such as studies that
only combined two or more imaging types (e.g., PET and MRI), as we considered
this a single modality. Similarly, studies that integrated original imaging
modalities with extracted imaging features were excluded, as this was still
considered a single modality. Studies that combined multi-omics data modalities
were also excluded. In addition, studies that were unrelated to the medical
field or did not use AI-based models were excluded. We excluded reviews,
conference abstracts, proposals, editorials, commentaries, letters to editors,
preprints, and short letters. Non-English publications were also excluded.
### Study selection
We used the Rayyan web-based review management tool [29] for the first
screening and study selection. After removing duplicates, we screened the
studies based on title and abstract. Subsequently, the full texts of the
studies selected in the title and abstract screening were assessed for
eligibility using our inclusion and exclusion criteria. Two authors (F.M. and
H.A.) conducted the study selection and resolved any conflict through
discussion. A third author (Z.S.) was consulted when an agreement could not be
reached.
### Data extraction
For the final included studies, a data extraction form was designed and piloted
on four studies to develop a systematic and accurate data extraction process.
The data extracted from each study were the first author’s name, year, the
country of the first author’s institution, disease name, clinical outcome,
imaging modality, EHR modality, fusion strategy, feature extraction methods,
data source, AI models, evaluation metrics, and comparison with single
modality. In Appendix 2 of the supplementary material, we provide a detailed
description of the extracted information. One author (F.M.) performed the
data extraction, and two other authors (Z.S. and H.A.) reviewed and verified
the extracted data. Any disagreement was resolved through discussion and
consensus between the three authors.
### Data synthesis
Following the data extraction, we used a narrative approach to synthesize the
data. We analyzed the studies from five perspectives: fusion strategies,
diseases, clinical outcomes with ML algorithms, data sources/type, and
evaluation mechanism. For fusion strategies, we focused on how the multimodal
data was fused. In addition, we recorded implementation details of the model,
such as feature extraction and single modality evaluation. We also extracted
information on the diseases for which fusion methods were implemented.
Furthermore, we analyzed which clinical outcomes the data fusion models
addressed and what ML models were used for each task. Moreover, we
focused on the type of imaging and EHR data used by the studies, the source of
data, and its availability. Finally, for evaluation, we focused on the
evaluation metrics used by each study.
### Study quality assessment
In accordance with the guidelines for scoping reviews [30, 31], we did not
perform quality assessments of the included studies.
## Results
### Search results
A total of 1158 studies were retrieved from the initial search. After
duplicate elimination, 971 studies were retained. Based on our study
selection criteria (see Methods), 44 studies remained for full-text review
after excluding articles based on their abstract and title. Moreover, 10
studies were removed after the full-text screening. Finally, 34 studies met
our inclusion criteria and were selected for data extraction and synthesis.
Figure 1 shows a flowchart of the study screening and selection process.
Figure 1: PRISMA flow chart for study identification, screening, and
selection.
### Demographics of the studies
As presented in Table 2, approximately two-thirds of the studies were journal
articles ($n$ = $23$, $\sim 68\%$) [12, 13, 14, 15, 17, 19, 32, 33, 34, 25, 35,
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46], whereas $11$ studies were
conference proceedings ($\sim 32\%$) [16, 47, 48, 49, 50, 51, 52, 53, 54, 55,
56]. Most of the studies were published between 2020 and 2022 ($n$ = $22$,
$\sim 65\%$). Figure 2 shows a visualization of the
publication type-wise and year-wise distribution of the studies. The included
studies were published in $13$ countries; however, most of these studies were
from the USA ($n$ = $10$, $\sim 30\%$) and China ($n$ = $8$, $\sim 24\%$).
Characteristics | Number of studies
---|---
Year |
2022 | 1
2021 | 14
2020 | 7
2019 | 2
2018 | 5
2016 | 4
2015 | 1
Country |
United States of America (USA) | 10
China | 8
United Kingdom (UK) | 4
Germany | 2
India | 2
Australia | 1
Denmark | 1
Iran | 1
Korea | 1
Pakistan | 1
Kingdom of Saudi Arabia | 1
Singapore | 1
Publication type |
Journal | 23
Conference | 11
Table 2: Demographics of the included studies.
Figure 2: The distribution of studies by the type of publication and the year.
### Data fusion strategies
We mapped the included studies to the taxonomy of fusion strategies outlined
in the Preliminaries Section. A primary interest of our review is to identify
the fusion strategies that the included studies used to improve the
performance of ML models for different clinical outcomes.
#### Early fusion
The majority of the included studies ($n$ = $22$, $\sim 65\%$) used early
fusion to combine medical imaging and non-imaging data. When the input
modalities have different dimensions, such as when combining one-dimensional
(1D) EHR data with 2D or 3D imaging data, it is essential to extract high-
level imaging features in 1D before fusing with 1D EHR data. To accomplish
this, various methods were used in the studies, including neural network-based
features extraction, data generation through software, or manual extraction of
features. Out of the $22$ early fusion studies, $19$ studies [12, 13, 15, 33,
34, 25, 35, 36, 41, 42, 43, 44, 45, 39, 50, 51, 52, 53] used manual or
software-based imaging features, and $3$ studies used neural network-based
architectures to extract imaging features before combining with other clinical
data modality [16, 18, 54]. Six out of the $19$ studies that used manual or
software-based features reduced the feature dimension before concatenating the
two modalities’ features using different methods [25, 36, 45, 50, 51, 52].
Such methods include recursive feature elimination [52], a filter-based method
using Pearson correlation coefficient[51], Random Forest feature selection
based on Gini importance[50], Relief-based feature selection method[25], a
wrapper-based method using backward feature elimination[36], and a rank-based
method using Gini coefficients[45]. Moreover, $3$ studies [13, 15, 44]
utilized the principal component analysis (PCA) dimensionality reduction
technique to reduce the feature dimension.
In the studies that used neural network-based architectures to extract imaging
features, CNN architectures were used in three studies [16, 18, 54]. These
studies concatenated the multimodal features (CNN-extracted and EHR features)
for their fusion strategy.
Fourteen early fusion studies evaluated their fusion models’ performance
against that of single modality models [12, 13, 15, 16, 18, 32, 33, 34, 25,
36, 41, 42, 43, 44, 51]. As a result, $13$ of these studies exhibited a
better performance for fusion when compared with their imaging-only and
clinical-only counterparts [12, 13, 15, 16, 18, 32, 33, 34, 25, 41, 42, 43,
44, 51].
#### Joint fusion
Joint fusion was the second most common fusion strategy used in $10$ out of
the $34$ studies. In these studies, different neural network-based methods
were used for processing the imaging and EHR data modalities. Chen et al.[39]
used the Visual Geometry Group (VGG-16) architecture to extract features from
MRI images, while they used a bidirectional long-short term memory (LSTM )
network with an attention layer to learn feature representation from MRI
reports. Then, they concatenated the learned features of the two modalities
before feeding them into a stacked K-nearest neighbor (KNN) attention pooling
layer. Grant et al.[55] used a Residual Network (ResNet50) architecture to
extract relevant features from the imaging modality and fully connected NN to
process the non-imaging data. They directly concatenated the learned feature
representation of the imaging and non-imaging data and fed them into two fully
connected networks. Chai et al.[19] used a Bayesian CNN encoder-decoder to
extract imaging features and a Bayesian multilayer perceptron (MLP) encoder-
decoder to process the medical indicators data. The study directly
concatenated the two feature vectors and fed the resulting vector into another
Bayesian MLP. Samak et al.[47] utilized CNN with a self-attention mechanism to
extract the imaging features and fully connected NNs to process the metadata
information. He et al. [39] used VGG-19 architecture to extract the
multimodal MRI features and fully connected networks for clinical data. The
study concatenated the two feature vectors and fed them into fully connected
NN. Another study[46] applied CNN layers for imaging features extraction and
word embeddings (Word2vec) with self-attention for textual medical data. In
another research [38], Fang et al. applied a ResNet architecture and MLP for
imaging and clinical data feature extraction. Then, the authors fused the
feature vectors by concatenation and fed them into an LSTM network followed by
a fully connected network. Hsu et al.[17] concatenated the imaging features
extracted using Inception-V3 model with the clinical data features before
feeding them to fully connected NN. In[56], Sharma et al. used CNN to extract
image features and then concatenated them directly with the clinical data to
feed into a SoftMax classifier. Xu et al.[53] used AlexNet architecture to
convert the imaging data into a feature vector fusible with other non-image
modalities. Then, they jointly learned the non-linear correlations among all
modalities using fully connected NN. Out of $10$ joint fusion studies, seven
studies evaluated their fusion models’ performance against that of a single
modality and reported a performance improvement when fusion was used [17, 39,
46, 47, 49, 53, 55].
#### Late fusion
Late fusion was the least common fusion approach used in the included studies,
as only two studies used it. Qiu et al.[37] trained three independent imaging
models that took a single MRI slice as input, then aggregated the predictions
of these models using maximum, mean, and majority voting. After combining the
results of these aggregations by majority vote, the study performed late
fusion with the clinical data models. In another study [40], Huang et al.
trained four different late fusion models. Three models took the average of
the predicted probabilities from the imaging and EHR modality models as the
final prediction. The fourth model used an NN classifier as an aggregator,
which took as input the single modality models’ prediction. The study also
created early, joint fusion models and two single modality models to compare
with late fusion performance. As a result, the late fusion outperformed both
the early and joint fusion models and the single modality models.
### Diseases
We categorized the diseases and disorders in the included studies into seven
types: neurological disorders, cancer, cardiovascular diseases, Covid-19,
psychiatric disorders, eye diseases, and other diseases. The majority of the
included studies focused on neurological disorders ($n$ = $16$). Table 3 shows
the distribution of the included studies in terms of the diseases and
disorders they covered.
Disease Category | Number of studies | Study reference
---|---|---
Neurological disorders | 16 |
Alzheimer’s disease (AD) | 7 | [12, 13, 14, 15, 44, 48, 49]
Mild cognitive impairment (MCI) | 4 | [37, 42, 50, 51]
Ischemic Stroke | 2 | [35, 47]
Demyelinating diseases | 1 | [32]
Neurodevelopmental Deficits | 1 | [39]
Epilepsy | 1 | [34]
Cancer | 5 |
Breast Cancer | 2 | [16, 41]
Glioblastoma | 1 | [43]
Lung Cancer | 1 | [55]
Upper Gastrointestinal (UGI) Cancer | 1 | [46]
Cardiovascular diseases | 3 |
Aortic stenosis | 1 | [54]
Cardiomegaly | 1 | [55]
Myocardial Infarction | 1 | [56]
Psychiatric disorder | 2 |
Bipolar disorder | 1 | [33]
Schizophrenia | 1 | [36]
Eye diseases | 2 |
Diabetic Retinopathy (DR) | 1 | [17]
Glaucoma | 1 | [19]
COVID-19 | 3 | [18, 25, 38]
Other diseases | 3 |
Cervical dysplasia | 1 | [53]
Pulmonary Embolism (PE) | 1 | [40]
Hepatitis B | 1 | [52]
Table 3: Disease distribution covered by the 34 studies.
### Clinical outcomes and machine learning models
Multimodal ML enables a wide range of clinical applications such as diagnosis,
early prediction, patient stratification, phenotyping, biomarkers
identification, etc. In this review, we labeled each study according to its
clinical outcome. We categorized the retrieved clinical tasks into two main
categories: diagnosis and prediction. Though some of the studies mentioned
detection, classification, diagnosis, and prediction, we categorized them
under the diagnosis category. Under the early prediction group, we considered
only the studies that predict diseases before onset, identify significant risk
factors, predict mortality and overall survival, and predict a treatment
outcome. These clinical outcomes were implemented using multimodal ML models.
This section summarizes the different clinical tasks of the retrieved studies,
the fusion strategy used, and the ML models that were developed for each task.
Figure 3 shows the distribution of fusion strategies associated with different
diseases and clinical outcomes.
#### Diagnosis
The most commonly addressed clinical outcome in the included studies was
diagnosis, reported in $20$ ($\sim 59\%$) studies. In these studies, EHRs were
combined with medical imaging to diagnose a spectrum of diseases including
neurological disorders ($n$ = $9$) [13, 14, 15, 32, 37, 42, 49, 50, 51],
psychiatric disorders ($n$ = $2$) [33, 36], cardiovascular disease (CVD)
($n$ = $3$) [54, 55, 56], cancer ($n$ = $2$) [16, 55], and four studies for
other diseases [18, 19, 40, 53]. Specifically, most of the studies that focused
on detecting neurological diseases were for AD ($n$ = $4$) [13, 14, 15, 49] and
MCI ($n$ = $4$) [37, 42, 50, 51].
Early fusion was the most utilized technique for diagnosis, used in
$13$ studies. These studies employed different ML models on the fused imaging
and EHR data for diagnosing different diseases. Most of these studies were for
diagnosing neurological and psychiatric disorders such as AD [13, 14, 15],
MCI [42, 50, 51], demyelinating diseases [32], bipolar disorder [33], and
schizophrenia [36]. Pillai et al. [13] reported diagnosing AD by fusing
sMRI and PET imaging features with mini-mental state examination (MMSE) score,
clinical dementia rating (CDR), and age of the subjects. They fed the fused
features vector to different ML models, including support vector machine
(SVM), random forest (RF), and Gaussian process (GP) for classification. Niyas
et al. [14] classified AD by fusing MRI, PET, demographic data, and lab tests,
including cognitive tests and Cerebro-Spinal Fluid (CSF) test. They applied
dynamic ensemble of classifiers selection algorithms using a different pool of
classifiers on the fused features for classification. Akramifard et al.[15]
combined MRI and PET imaging features with personal information and
neurological data such as MMSE and CDR features for early AD diagnosis. In
their study, they fed the fused features into an SVM for classification. For MCI
diagnosis, De Marco et al. [42] proposed combining MRI with cognitive
assessments. They concatenated the features of both
modalities and fed them into a linear and quadratic discriminant analysis
algorithm for diagnosis. Forouzannezhad et al. [50, 51] integrated features extracted
from MRI and PET images with neuropsychological tests and demographic data
(gender, age, and education) to diagnose MCI early. They trained SVM and deep
NNs using the fused features for classification in [50] and [51],
respectively. In another study [32], Xin et al. combined MRI imaging with
structured data extracted from EHRs to diagnose demyelinating diseases using
SVM. For bipolar disorder, Achalia et al. [33] combined multimodal imaging
features with neuropsychological tests and personal information features. They
fed them into SVM to differentiate bipolar patients from healthy patients.
Ebdrup et al. [36] proposed integrating MRI and diffusion tensor imaging (DTI)
tractography with neurocognitive tests and clinical data for
schizophrenia classification. Then, they fused the features of the two
modalities and fed them to different types of ML classifiers, including SVM,
RF, linear regression (LR), decision tree (DT), and Naïve Bayes (NB) for
classification.
Moreover, two studies implemented multimodal early fusion to diagnose
different cancers [16, 55]. Yan et al. [16] fused pathological images
and structured data extracted from EHRs to classify malignant and benign
breast cancer. Then, they fused the features of the two modalities and fed
them to two fully connected NN followed by a SoftMax layer for classification.
Seung et al. [55] combined PET imaging with clinical and demographic data for
differentiating lung adenocarcinoma (ADC) from squamous cell carcinoma. They
fed the integrated features into different algorithms such as SVM, RF, LR, NB,
and artificial neural network (ANN) for classification. For COVID-19
diagnosis, Xu et al. [18] combined CT images with clinical features and fed
them into different ML models, including SVM, RF, and KNN. Finally, Tanveer et
al.[54] combined features from echocardiogram reports and images with diagnosis
information for the detection of patients with aortic stenosis. Their study fed
the combined features to an RF learning framework to detect patients likely to
have the disease.
Joint fusion was used for diagnostic purposes in $5$ studies [19, 49, 53, 55,
56]. These studies employed different types of DL architectures to learn and
fuse the imaging and EHR data for diagnosis purposes. In [19], the authors
proposed a Bayesian deep multisource learning (BDMSL) model that integrated retinal
images with medical indicators data to diagnose glaucoma. For this model, they
used Bayesian CNN encoder-decoder to extract imaging features and a Bayesian
MLP encoder-decoder to process the medical indicators data. The two feature
vectors were directly concatenated and fed into Bayesian MLP for
classification. Chen et al.[49] used DL for multimodal feature extraction and
classification to detect AD; the authors used the VGG-16 model to extract
features from MRI images and a bidirectional LSTM network with an attention
layer to learn features from MRI reports. Then, they fed the fused features
into a stacked KNN pooling layer to classify the patient’s diagnosis data. In
[53], Xu et al. proposed an end-to-end deep multimodal framework that can
learn better complementary features from the image and non-image modalities
for cervical dysplasia diagnosis. They used CNN, specifically AlexNet
architecture, to convert the cervigram image data into a feature vector
fusible with other non-image modalities. After that, they jointly learned the
non-linear correlations among all modalities using fully connected NN for
cervical dysplasia classification. Another two studies [55, 56] also employed
DL models to jointly learn multimodal feature representation for diagnosing
CVDs. The former [55] proposed a multimodal network for cardiomegaly
classification, which simultaneously integrates the non-imaging intensive care
unit (ICU) data (laboratory values, vital sign values, and static patient
metadata, including demographics) and the imaging data (chest X-ray). They
used a ResNet50 architecture to extract features from the X-ray images and
fully connected NN to process the ICU data. To join the learned imaging and
non-imaging features, they concatenated the learned feature representation and
fed them into two fully connected layers to generate a label for cardiomegaly
diagnosis. The latter study [56] proposed a stacked multimodal architecture
called SM2N2, which integrated clinical information and MRI images. In their
research, they used CNN to extract imaging features, and then they
concatenated these features with clinical data to feed into a SoftMax
classifier for myocardial infarction detection.
Late fusion was implemented in $2$ studies [37, 40] for disease diagnosis
purposes. Qiu et al.[37] proposed the fusion of MRI scans, logical memory
(LM) tests, and MMSE for MCI classification. Their study utilized VGG-11
architecture for MRI feature extraction and developed two MLP models for MMSE
and LM test results. Then, they combined both MRI and MLP models using
majority voting. As a result, the fusion model outperformed the individual
models. Huang et al. [40] utilized a non-open dataset comprising CT scans and
EHR data to train two unimodal and four late fusion models for PE diagnosis.
They used their previously implemented architecture (PENet) [57] to encode the
CT images and a feedforward network to encode the tabular data. The late
fusion approach performed best among the fusion models and outperformed the
models trained on the image-only and the tabular-only data.
#### Early Prediction
Prediction tasks were reported in $14$ ($\sim 41.2\%$) studies. In these
studies, EHRs were fused with medical imaging to predict different outcomes,
including disease prediction, mortality prediction, survival prediction, and
treatment outcome prediction. Ten studies of the prediction tasks were disease
prediction [12, 17, 34, 38, 39, 41, 44, 46, 48, 52], which involved
determining whether an individual might develop a given disease in the future.
The second most common prediction task was treatment outcome prediction,
reported in $2$ studies [35, 47], followed by one study each for mortality
prediction [25] and overall survival prediction [43].
The early fusion technique was used in $6$ studies [12, 34, 41, 44, 48, 52]
for disease prediction. Minhas et al.[12] proposed an early fusion model to
predict which subjects will progress from MCI to AD in the future. The study
concatenated MRI extracted features with demographic and neuropsychological
biomarkers before feeding them to an SVM model for prediction. Alim-Marvasti et
al. [34] proposed a model to predict the epileptogenic zone in the temporal
lobe by feeding MRI-extracted features integrated with a set of semiology
features into various ML models such as LR, SVM, and gradient boosting. Ma et
al.[41] fused MRI and
clinicopathological features for predicting metachronous distant metastasis
(DM) in breast cancer. They fed the concatenated features to an LR model.
Another study [44] combined MRI-derived features and high-throughput brain
phenotyping to diagnose and predict the onset of AD. They fed the fused
features into different ML classifiers, including RF, SVM, and LR. Morar et
al.[48] trained a deep, fully connected network as a regressor in a 5-year
longitudinal study on AD to predict cognitive test scores at multiple future
time points. Their model produced MMSE scores for ten unique future time
points at six-month intervals by combining biomarkers from cognitive test
scores, PET, and MRI. They early fused imaging features with the cognitive
test scores through concatenation before feeding them into the fully connected
network. Finally, Bai et al.[52] compared different multimodal biomarkers
(clinical data, biochemical and hematologic parameters, and ultrasound
elastography parameters) for assessing fibrosis in chronic hepatitis B using
SVM.
For disease prediction, joint fusion was used in $4$ studies [17, 38, 39, 46].
Hsu et al. [17] proposed a deep multimodal fusion model that trained
heterogeneous data from fundus images and non-image data for DR screening.
They concatenated the imaging extracted features from Inception-V3 with the
clinical data features before feeding them to fully connected NN followed by
SoftMax layer for classification. Fang et al.[38] developed a prediction
system by jointly fusing CT scans and clinical data to predict the progression
of COVID-19 malignancy. In their study, the feature extraction part applied a
ResNet architecture and MLP for CT and clinical data, respectively. Then, they
concatenated the different features and fed them into an LSTM network followed
by a fully connected NN for prediction. In [39], the authors proposed a deep
multimodal model for predicting neurodevelopmental deficits at 2 years of age.
Their model consisted of a feature extractor and fusion classifier. In the
feature extractor, they used VGG-19 architecture to extract MRI features and
fully connected NN for clinical data. Then, the study combined the extracted
features of the two modalities and fed their combination to another fully
connected network in the fusion classifier for prediction. To evaluate the
performance of the modality fusion, they tested their model using a single
modality of MRI and clinical features. The results showed that multimodal
fusion outperformed the single modality performance. Another study [46] also
used multimodal joint fusion for UGI cancer screening. Their model integrated
features extracted from UGI endoscopic images with corresponding textual
medical data. They applied CNN for image feature extraction and word
embeddings (Word2vec) with self-attention for textual medical data feature
extraction. After that, they concatenated the extracted features of the two
modalities and fed them into fully connected NN for prediction. Their results
showed that multimodal fusion outperformed the single modality performance.
For treatment outcome prediction [35, 47], the former [35] implemented early
fusion while the latter [47] used joint fusion. For acute ischemic stroke,
Brugnara et al. [35] evaluated the predictive power of imaging, clinical, and
angiographic features to predict the outcome of acute ischemic stroke using
ML. The study early fused all features into gradient boosting classifiers for
prediction. In [47], the authors proposed a DL model to directly exploit
multimodal data (clinical metadata and non-contrast CT (NCCT) imaging data) to
predict the success of endovascular treatment for ischemic stroke. They
utilized CNN with a self-attention mechanism to extract the features of
images, and then they concatenated them with the metadata information. Then,
the classification stage of the proposed model processed the fused features
through a fully connected NN, followed by the Softmax function applied to the
outputs. Their results showed that multimodal fusion outperformed the single
modality performance.
Both the mortality and overall survival prediction studies [25, 43]
implemented early fusion. In [25], the authors developed a model to predict
COVID-19 ventilatory support and mortality early on to prioritize patients and
manage the allocation of hospital resources. They fused patients’ chest X-ray
images and EHR data features by concatenation before feeding them to different
ML models,
including SVM, RF, LR, and eXtreme gradient boosting. They evaluated the
performance against single modality models and observed that the results for
multimodal fusion were better. The other study [43] aimed to develop ML models
to predict glioblastoma patients’ overall survival (OS) and progression-free
survival (PFS) based on combining treatment features, pathological, clinical,
PET/CT-derived information, and semantic MRI-based features. They concatenated
the features of all modalities and fed them to an RF model. The study showed
that the model based on multimodal fusion data outperformed the single
modality models.
Figure 3: Fusion strategies associated with clinical outcomes for different
diseases.
### Datasets
#### Patient Data Types
The included studies used medical imaging and EHR (structured and unstructured)
patient data types. In terms of imaging modality, CT, MRI, fMRI, structural MRI
(sMRI), PET, diffusion MRI, DTI, ultrasound, X-ray, and fundus images were used
in the studies. MRI and PET images were the most utilized modalities. Out of
the included $34$ studies, $13$ used MRI images, and $8$ used PET images,
mostly for AD diagnosis and prediction. In terms of EHRs, structured data was
the most commonly used modality ($n$ = $32$). Table 4 summarizes the types of
imaging and EHR data used in the studies.
Data Type | Number of studies | Study reference
---|---|---
Imaging Data | |
MRI imaging | |
MRI | $13$ | [12, 14, 15, 32, 33, 37, 41, 42, 43, 48, 49, 50, 51]
DTI | $3$ | [33, 36, 39]
fMRI | $2$ | [33, 39]
sMRI and Diffusion MRI | $1$ | [44]
PET | 8 | [13, 14, 15, 43, 45, 48, 50, 51]
CT | $7$ | [18, 35, 38, 40, 43, 45, 47]
X-ray | $2$ | [25, 55]
fundus images | $2$ | [17, 19]
Ultrasound | $1$ | [52]
Echocardiography | $1$ | [54]
Pathological images | $1$ | [16]
Cervigram images | $1$ | [53]
Endoscopy images | $1$ | [46]
EHR Data | |
Structured | $32$ | [12, 14, 13, 15, 16, 17, 18, 19, 32, 33, 34, 25, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 47, 48, 50, 51, 52, 53, 54, 55, 56]
Unstructured | $2$ | [46, 49]
Table 4: Patient data types used in the included studies.
#### Patient Data Resources
Almost two-thirds of the studies included in this scoping review used private
data sources (clinical data that are not publicly available) ($n$ = $21$,
$\sim 62\%$). In contrast, publicly accessible datasets were used in only $13$
studies. We observed that the most used public dataset was the “Alzheimer’s
Disease Neuroimaging Initiative” (ADNI) dataset [58], where $7$ out of $13$
studies used it. Other publicly available datasets used among the included
studies were the “National Alzheimer’s Coordinating Center” (NACC) dataset
[59], the “Medical Information Mart for Intensive Care” (MIMIC-IV) dataset
[60], the “National Cancer Institute” (NCI) dataset, the ADNI TADPOLE dataset
[61], and the MR CLEAN Trial dataset [62]. In Table 5, we summarize the public
multimodal medical datasets and their clinical applications.
Considering these datasets for each clinical task, the most popular is ADNI
for AD and MCI disease diagnosis and prediction.
Public Dataset | Description | URL | Clinical outcomes | Study reference
---|---|---|---|---
ADNI | ADNI represents a series of studies, including ADNI 1, 2, and 3, designed to study MCI and its progression into AD. It has MRI and PET images along with clinical and genetic information [58]. | https://adni.loni.usc.edu/data-samples/data-types/ | Disease diagnosis (AD)
Disease diagnosis (MCI)
Disease Prediction (AD)
| [13, 15]
[50, 51]
[12, 44, 48]
ADNI TADPOLE | ADNI has a simplified counterpart, TADPOLE, which has a subset of ADNI-3 samples and features. TADPOLE does not include raw images, but it has processed structural information about the images such as ROI averages, cortical thicknesses, volumes of brain sub-regions, etc. [61]. | https://tadpole.grand-challenge.org/Data/
| Disease diagnosis (AD) | [14]
NACC | The NACC dataset was established to facilitate collaborative AD research. The dataset comprises MRI data, demographic data, neuropsychological testing scores, and clinical diagnosis of patients [59].
| https://naccdata.org/requesting-data/nacc-data | Disease Diagnosis (MCI) | [37]
MIMIC-CXR, MIMIC-IV | MIMIC-CXR is a dataset of patient chest radiographs. It contains X-ray studies for 64,588 patients [63].
MIMIC-IV is a database for patients admitted to critical care units comprising patient stay information, patient’s ICU data, and lookup tables to allow linking to MIMIC-CXR [60]. | MIMIC-CXR:https://www.nature.com/articles/s41597-019-0322-0
MIMIC-IV:https://physionet.org/content/mimiciv/0.4/
| Disease diagnosis (Cardiomegaly) | [55]
NCI | Data collections produced by major NCI initiatives are listed in the NCI Data Catalog, including Clinical data, Genomics, imaging, and Proteomics.
| https://datascience.cancer.gov/resources/nci-data-catalog | Disease diagnosis (Cervical dysplasia) | [53]
MR CLEAN Trial | A longitudinal study of 500 patients treated with endovascular therapy in The Netherlands for acute ischemic stroke comprising NCCT images, CT Angiography (CTA) images, and clinical metadata information on its patients [62]. | https://www.mrclean-trial.org/home.html | Treatment outcome prediction (ischemic stroke) | [47]
Table 5: Multimodal medical datasets and clinical outcome applications.
### Evaluation metrics
Evaluation metrics depend mainly on the clinical task. Typically, accuracy, the
area under the curve (AUC), sensitivity, specificity, F1-measure, and precision
are used for the evaluation of diagnosis and prediction tasks. Table 6 shows
the distribution of the evaluation measures used in the included studies.
Evaluation metrics | Number of studies | Study Reference
---|---|---
Accuracy | $31$ | [12, 15, 16, 17, 18, 19, 32, 33, 34, 25, 35, 36, 37, 38, 39, 40, 41, 42, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56]
Sensitivity (recall) | $20$ | [12, 13, 14, 15, 17, 19, 32, 33, 34, 37, 38, 39, 40, 41, 45, 46, 50, 51, 53, 54]
AUC | $17$ | [12, 15, 16, 17, 19, 32, 25, 35, 37, 38, 39, 41, 45, 47, 52, 53, 55]
Specificity | $15$ | [12, 14, 15, 17, 32, 33, 34, 38, 39, 40, 41, 46, 50, 51, 53]
Precision | $7$ | [13, 19, 34, 37, 45, 54]
Positive predictive value (PPV) and Negative predictive value (NPV) | $3$ | [15, 34, 40]
Matthews correlation coefficient (MCC) | $2$ | [41, 34]
C-index | $1$ | [43]
Root-Mean Squared Error (RMSE) | $1$ | [48]
The numbers in the second column do not sum up to 34 as many studies used more
than a single metric.
Table 6: The distribution of evaluation metrics in the included studies.
## Discussion
This section summarizes our findings and provides future directions for
research on the multimodal fusion of medical imaging and EHR.
### Principal findings
We found that multimodal models that combined EHR and medical imaging data
generally outperformed single modality models for the same task in disease
diagnosis or prediction. Since our review shows that the fusion of medical
imaging and clinical context data can improve the performance of AI models, we
recommend attempting fusion approaches when multimodal data is obtainable.
Moreover, through this review, we observed certain trends in the field of
multimodality fusion in the medical area, which can be categorized as:
* •
Resources: We observed that multimodal data resources of medical imaging and
EHR are limited owing to privacy considerations. The most prominent dataset
was the ADNI, containing MRI and PET images collected from about 1700
individuals in addition to clinical and genetic information. Considering
ADNI’s contributions to advancing research, similar multimodal datasets
should be developed for other medical domains as well.
* •
Fusion implementation: Early fusion was the most commonly used technique in
most applications for multimodal learning. Before fusing 1D EHR data with 2D or
3D image data, the image data were converted to a 1D vector by extracting
high-level representations using manual or software-generated features [12,
13, 15, 33, 34, 25, 35, 36, 41, 42, 43, 44, 45, 39, 50, 51, 52, 53] or CNN-
extracted features [16, 18, 54]. The imaging features learned by CNNs often
resulted in better task-specific performance than manually or software-derived
features [64]. Based on the reviewed studies, early fusion models performed
better than conventional single-modality models on the same task. Researchers
can use the early fusion method as a first attempt to learn multimodal
representations since it can learn to exploit the interactions and
correlations between features of each modality. Furthermore, it only requires
one model to be trained, making the pipeline for training easier than that of
joint and late fusion. However, if imaging features are extracted with CNN,
early fusion requires multiple models to be trained.
Joint fusion was the second most commonly used fusion approach. From a
modality perspective, CNNs appeared to be the best option for image feature
extraction. Tabular data were mainly processed using dense layers when fed
into a model, while text data were mostly processed using LSTM layers followed
by the attention layer. Most of the current research directly concatenated the
feature vectors of the different modalities to combine multimodal data. Using
NNs to implement joint fusion can be a limitation when dealing with small
datasets, which means that joint fusion is preferred with large datasets. For
small datasets, it is preferable to use early or late fusion methods as they
can be implemented using classical ML techniques. Nevertheless, we expect and
agree with [26] that joint fusion models can provide better results than other
fusion strategies because they update their feature representations
iteratively by propagating the loss to all the feature extraction models,
aiming to learn correlations across modalities.
Based on the performance reported in the included studies, it is preferable to
try early or joint fusion when the two data modalities are complementary. In
this review, AD diagnosis is an example in which imaging and EHR data are
interdependent: relevant and accurate knowledge of the patient’s current
symptomatology, personal information, and imaging reports can help doctors
interpret imaging results in a suitable clinical context, resulting in a more
precise diagnosis. Accordingly, all AD diagnosis studies in this review
implemented either early fusion [13, 14, 15] or joint fusion [49] for
multimodal learning.
On the other hand, it is preferable to try late fusion when input modalities do
not complement each other. For example, the brain MRI pixel data and the
quantitative result of an MMSE (e.g., Qiu et al. [37]) for diagnosing MCI are
independent, making them appropriate candidates for the late fusion strategy.
Also, late fusion does not require a huge amount of training data, so it could
be used when the data sizes of the modalities are small. Moreover, the late
fusion strategy could be attempted when the
concatenation of feature vectors from multiple modalities results in high-
dimensional vectors that are difficult for ML algorithms to learn without
overfitting unless many input samples are available. In late fusion, multiple
models are employed, each specialized in a single modality, thereby limiting
the size of the input feature vector for each model. Furthermore, late fusion
could be used when data is incomplete or missing, i.e., some patients have
only imaging data but no clinical data or vice versa. This is because late
fusion uses independent models for different modalities, and aggregation
methods like averaging and majority voting can be used even when predictions
from a modality are not present. Moreover, predictions could be
disproportionately influenced by the most feature-rich input modality when the
number of features is very different between the input data modalities [65];
in this scenario, late fusion is preferable because it allows training each
model using each modality separately.
* •
Applications: In this review, we found that AD diagnosis and prediction [12,
13, 14, 15, 44, 48, 49] were the most common applications addressed in a
multimodal setting among the studies. ML fusion techniques consistently
improved AD diagnosis, whereas clinicians experience difficulty with accurate
and reliable diagnosis even when multimodal data are available
[26]. This emphasizes the utility and significance of multimodal fusion
approaches in clinical applications.
* •
Prospects: In this review, we noted that multimodal medical data fusion is
growing due to its potential in achieving state-of-the-art performance for
healthcare applications. Nonetheless, this growth is hampered by the absence
of adequate data for benchmarking methods. This is not surprising, given the
privacy concerns surrounding revealing healthcare data. Moreover, we observed
a lack of complexity in the non-imaging data used, particularly in the context
of the heavily feature-rich data included in the EHR. For example, the majority
of studies focused on basic demographic data like gender and age [12, 15,
44, 51]; a limited number of studies also included medical histories such as
smoking status and hypertension [18, 55] or specific clinical characteristics
that are known to be associated with a certain disease, such as the MMSE for
diagnosing AD. In addition to selecting the disease-associated features,
future research may benefit from using vast amounts of feature-rich data, as
demonstrated in domains outside of medicine, such as autonomous driving [66].
### Future directions
Although we focus on EHR and medical imaging as multimodal data, other
modalities such as multi-omics and environmental data could also be integrated
using the aforementioned fusion approaches. As the causes of many diseases are
complex, many factors, including inherited genetics, lifestyle, and living
environments, contribute to the development of diseases. Therefore, combining
multisource data, e.g. EHR, imaging, and multi-omics data, may lead to a
holistic view that can improve patient outcomes through personalized medicine.
Moreover, the unavailability of multimodal public data is a limitation that
hinders the development of corresponding research. Many factors (e.g., gender,
ethnicity, environmental factors) could influence research directions or even
clinical decisions; relying on a few publicly available datasets might not be
enough to make conclusive clinical claims about the global population [27].
Consequently, it is imperative to encourage flexible data sharing among
institutions and hospitals in order to facilitate the exploration of a wider
range of population data for clinical research. In ML, federated learning (FL)
[67, 68] provides the ability to learn from data held at multiple centers in a
safe and secure way. It may be used to train a large-scale model on multimodal
data from various centers without collecting the data centrally.
### Limitations
Our search was limited to studies published within the previous seven years
(2015–2022). We only considered studies published in English, which may have
led to the exclusion of studies published in other languages. We solely
included studies fusing EHR with medical imaging. We did not include studies
that used other data modalities such as multi-omics data, as they are out of
the scope of this work. Because positive results are typically reported
disproportionately, publication bias might be another limitation of this
review. This bias may result in an overestimation of the benefits associated
with multimodal data analysis. The studies included in this review employed
various input modalities, investigated various clinical tasks for different
diseases, and reported different performance metrics; hence, a direct
comparison of the results presented in the studies is not always feasible.
Furthermore, not all articles provided confidence bounds, making it difficult
to compare their results statistically.
## Conclusion
Multimodal ML is an area of research that is gaining attention within the
medical field. This review surveyed multimodal medical ML literature that
combines EHR with medical imaging data. It discussed fusion strategies, the
clinical tasks and ML models that implemented data fusion, the type of
diseases, and the publicly accessible multimodal data for medical imaging and
EHRs. Furthermore, it highlighted some directions to pave the way for future
research. Our findings suggest that there is growing interest in multimodal
medical data. Still, most studies combine the modalities with relatively
simple strategies, which, despite being shown to be effective, might not fully
exploit the rich information embedded in these modalities. As this is a fast-
growing field and new AI models with multimodal data are constantly being
developed, there might exist studies that fall outside our definition of
fusion strategies or use a combination of these strategies. We believe that
the development of this field will give rise to more comprehensive multimodal
medical data analysis and will be of great support to the clinical decision-
making process.
## Data availability
The data generated during this scoping review is provided as supplementary
materials.
## References
* [1] Murdoch, T. B. & Detsky, A. S. The inevitable application of big data to health care. _Jama_ 309, 1351–1352 (2013).
* [2] Obermeyer, Z. & Emanuel, E. J. Predicting the future—big data, machine learning, and clinical medicine. _The New England journal of medicine_ 375, 1216 (2016).
* [3] Roski, J., Bo-Linn, G. W. & Andrews, T. A. Creating value in health care through big data: opportunities and policy implications. _Health affairs_ 33, 1115–1122 (2014).
* [4] Lozano-Perez, T. _Autonomous robot vehicles_ (Springer Science & Business Media, 2012).
* [5] Castanedo, F. A review of data fusion techniques. _The scientific world journal_ 2013 (2013).
* [6] Cohen, M. D. Accuracy of information on imaging requisitions: does it matter? _Journal of the American College of Radiology_ 4, 617–621 (2007).
* [7] Comfere, N. I. _et al._ Provider-to-provider communication in dermatology and implications of missing clinical information in skin biopsy requisition forms: a systematic review. _International journal of dermatology_ 53, 549–557 (2014).
* [8] Jonas, J. B. _et al._ Glaucoma. _The Lancet_ 390, 2183–2193, DOI: https://doi.org/10.1016/S0140-6736(17)31469-1 (2017).
* [9] Comfere, N. I. _et al._ Dermatopathologists’ concerns and challenges with clinical information in the skin biopsy requisition form: a mixed-methods study. _Journal of cutaneous pathology_ 42, 333–345 (2015).
* [10] Li, Y., Wu, F.-X. & Ngom, A. A review on machine learning principles for multi-view biological data integration. _Briefings in bioinformatics_ 19, 325–340 (2018).
* [11] Ramachandram, D. & Taylor, G. W. Deep multimodal learning: A survey on recent advances and trends. _IEEE signal processing magazine_ 34, 96–108 (2017).
* [12] Minhas, S. _et al._ Early mci-to-ad conversion prediction using future value forecasting of multimodal features. _Computational intelligence and neuroscience_ 2021 (2021).
* [13] Pillai, P. S., Leong, T.-Y., Initiative, A. D. N. _et al._ Fusing heterogeneous data for alzheimer’s disease classification. In _MEDINFO 2015: eHealth-enabled Health_ , 731–735 (IOS Press, 2015).
* [14] KP, M. N. & Thiyagarajan, P. Alzheimer’s classification using dynamic ensemble of classifiers selection algorithms: A performance analysis. _Biomedical Signal Processing and Control_ 68, 102729 (2021).
* [15] Akramifard, H., Balafar, M. A., Razavi, S. N. & Ramli, A. R. Early detection of alzheimer’s disease based on clinical trials, three-dimensional imaging data, and personal information using autoencoders. _Journal of medical signals and sensors_ 11, 120 (2021).
* [16] Yan, R. _et al._ Richer fusion network for breast cancer classification based on multimodal data. _BMC Medical Informatics and Decision Making_ 21, 1–15 (2021).
* [17] Hsu, M.-Y. _et al._ Deep learning for automated diabetic retinopathy screening fused with heterogeneous data from ehrs can lead to earlier referral decisions. _Translational Vision Science & Technology_ 10, 18–18 (2021).
* [18] Xu, M. _et al._ Accurately differentiating between patients with covid-19, patients with other viral infections, and healthy individuals: multimodal late fusion learning approach. _Journal of Medical Internet Research_ 23, e25535 (2021).
## Author contributions statement
F.M., H.A., Z.S. contributed to conceptualization. F.M. and H.A. administered
the project. F.M. curated the data, performed data synthesis, and contributed
to writing—original draft. H.A and N.E performed writing—review and editing.
Z.S. and H.A. supervised the study. All authors read and approved the final
manuscript.
## Additional information
Supplementary Methods
Competing interests
The authors declare that they have no competing interests.
|
# Congruence Properties of Indices of Triangular Numbers Multiple of Other
Triangular Numbers
Vladimir PLETSER, European Space Agency (ret.), <EMAIL_ADDRESS>
###### Abstract.
It is known that, for any positive non-square integer multiplier $k$, there is
an infinity of multiples of triangular numbers which are triangular numbers.
We analyze the congruence properties of the indices $\xi$ of triangular
numbers that are multiples of other triangular numbers. We show that the
remainders in the congruence relations of $\xi$ modulo $k$ always come in
pairs whose sum always equals $\left(k-1\right)$, always include 0 and
$\left(k-1\right)$, and include only 0 and $\left(k-1\right)$ if $k$ is prime, or an
odd power of a prime, or an even square plus one, or an odd square minus one or
minus two. If the multiplier $k$ is twice the triangular number of $n$, the
set of remainders also includes $n$ and $\left(n^{2}-1\right)$, and if $k$ has
integer factors, the set of remainders includes multiples of a factor following
certain rules. Finally, algebraic expressions are found for the remainders as
functions of $k$ and its factors. Several exceptions are noticed, and
superseding rules exist between various rules and expressions of remainders.
This approach allows one to eliminate, in numerical searches, those
$\left(k-\upsilon\right)$ values of $\xi_{i}$ that are known not to provide
solutions, where $\upsilon$ is the even number of remainders. The gain is
typically on the order of $k/\upsilon$, with $\upsilon\ll k$ for large values
of $k$.
###### Key words and phrases:
Triangular Numbers, Multiple of Triangular Numbers, Recurrent Relations,
Congruence Properties
AMS 2010 Mathematics Subject Classification: Primary 11A25; Secondary 11D09
## 1\. Introduction
Triangular numbers $T_{t}=\frac{t\left(t+1\right)}{2}$ are one of the figurate
numbers enjoying many properties; see, e.g., [1, 2] for relations and
formulas. Triangular numbers $T_{\xi}$ that are multiples of other triangular
number $T_{t}$
(1.1) $T_{\xi}=kT_{t}$
are investigated. Only solutions for $k>1$ are considered as the cases $k=0$
and $k=1$ yield respectively $\xi=0$ and $\xi=t,\forall t$. Accounts of
previous attempts to characterize these triangular numbers multiple of other
triangular numbers can be found in [3, 4, 5, 6, 7, 8, 9]. Recently, Pletser
showed [9] that, for non-square integer values of $k$, there are infinitely
many solutions that can be represented simply by recurrent relations of the
four variables $t,\xi,T_{t}$ and $T_{\xi}$, involving a rank $r$ and parameters
$\kappa$ and $\gamma$, which are respectively the sum and the product of the
$\left(r-1\right)^{\text{th}}$ and the $r^{\text{th}}$ values of $t$. The rank
$r$ is defined as the number of successive values of $t$ that solve
(1.1) such that their successive ratios are slowly decreasing without jumps.
In this paper, we present a method based on the congruence properties of
$\xi\left(\text{mod\,}k\right)$, searching for expressions of the remainders
as functions of $k$ or of its factors. This approach accelerates the numerical
search for the values of $t_{n}$ and $\xi_{n}$ that solve (1.1), as it
eliminates values of $\xi$ that are known not to provide solutions to (1.1).
The gain is typically on the order of $k/\upsilon$, where $\upsilon$ is the
number of remainders, which is usually such that $\upsilon\ll k$.
## 2\. Rank and Recurrent Equations
Sequences of solutions of (1.1) are known for $k=2,3,5,6,7,8$ and are listed
in the Online Encyclopedia of Integer Sequences (OEIS) [10], with references
given in Table 1.
Table 1. OEIS [10] references of sequences of integer solutions of (1.1) for $k=2,3,5,6,7,8$ $k$ | 2 | 3 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---
$t$ | A053141 | A061278 | A077259 | A077288 | A077398 | A336623
$\xi$ | A001652 | A001571 | A077262 | A077291 | A077401 | A336625
$T_{t}$ | A075528 | A076139 | A077260 | A077289 | A077399 | A336624
$T_{\xi}$ | A029549 | A076140 | A077261 | A077290 | A077400 | A336626
Among all solutions, $t=0$ is always a first solution of (1.1) for all non-
square integer values of $k$, yielding $\xi=0$.
Let’s consider the two cases of $k=2$ and $k=7$ yielding the successive
solution pairs as shown in Table 2. We indicate also the ratios
$t_{n}/t_{n-1}$ for both cases and $t_{n}/t_{n-2}$ for $k=7$. It is seen that
for $k=2$, the ratio $t_{n}/t_{n-1}$ varies between close values, from 7 down
to 5.829, while for $k=7$, the ratio $t_{n}/t_{n-1}$ alternates between values
2.5 … 2.216 and 7.8 … 7.23, while the ratio $t_{n}/t_{n-2}$ decreases
regularly from 19.5 to 16.023 (corresponding approximately to the product of
the alternating values of the ratio $t_{n}/t_{n-1}$). We call rank $r$ the
integer value such that $t_{n}/t_{n-r}$ is approximately constant or, better,
decreases regularly without jumps (a more precise definition is given
further on). So, here, the case $k=2$ has rank $r=1$ and the case $k=7$ has rank
$r=2$.
Table 2. Solutions of (1.1) for $k=2,7$ $n$ | $k=2$ | $k=7$
---|---|---
| $t_{n}$ | $\xi_{n}$ | $\frac{t_{n}}{t_{n-1}}$ | $t_{n}$ | $\xi_{n}$ | $\frac{t_{n}}{t_{n-1}}$ | $\frac{t_{n}}{t_{n-2}}$
0 | 0 | 0 | | 0 | 0 | |
1 | 2 | 3 | – | 2 | 6 | – | –
2 | 14 | 20 | 7 | 5 | 14 | 2.5 | –
3 | 84 | 119 | 6 | 39 | 104 | 7.8 | 19.5
4 | 492 | 696 | 5.857 | 87 | 231 | 2.231 | 17.4
5 | 2870 | 4059 | 5.833 | 629 | 1665 | 7.230 | 16.128
6 | 16730 | 23660 | 5.829 | 1394 | 3689 | 2.216 | 16.023
In [9], we showed that the rank $r$ is the index of the solutions $t_{r}$ and $\xi_{r}$
of (1.1) such that
(2.1) $\kappa=t_{r}+t_{r-1}=\xi_{r}-\xi_{r-1}-1$
and that the ratio $t_{2r}/t_{r}$, corrected by the ratio $t_{r-1}/t_{r}$, is
equal to a constant $2\kappa+3$
(2.2) $\frac{t_{2r}-t_{r-1}}{t_{r}}=2\kappa+3$
For example, for $k=7$ and $r=2$, (2.1) and (2.2) yield respectively
$\kappa=7$ and $2\kappa+3=17$.
Four recurrent equations for $t_{n},\xi_{n},T_{t_{n}}$ and $T_{\xi_{n}}$ are
given in [9] for each non-square integer value of $k$
(2.3) $\displaystyle t_{n}$
$\displaystyle=2\left(\kappa+1\right)t_{n-r}-t_{n-2r}+\kappa$ (2.4)
$\displaystyle\xi_{n}$
$\displaystyle=2\left(\kappa+1\right)\xi_{n-r}-\xi_{n-2r}+\kappa$ (2.5)
$\displaystyle T_{t_{n}}$
$\displaystyle=\left(4\left(\kappa+1\right)^{2}-2\right)T_{t_{n-r}}-T_{t_{n-2r}}+\left(T_{\kappa}-\gamma\right)$
(2.6) $\displaystyle T_{\xi_{n}}$
$\displaystyle=\left(4\left(\kappa+1\right)^{2}-2\right)T_{\xi_{n-r}}-T_{\xi_{n-2r}}+k\left(T_{\kappa}-\gamma\right)$
where the coefficients are functions of two constants $\kappa$ and $\gamma$,
respectively the sum $\kappa=t_{r-1}+t_{r}$ and the product $\gamma=t_{r-1}t_{r}$ of the
first two sequential values $t_{r-1}$ and $t_{r}$. Note that the first
three relations (2.3) to (2.5) are independent of the value of $k$.
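As a concrete check of the recurrences (2.3) and (2.4), the following short Python sketch (ours, not part of [9]) regenerates the $k=7$ sequences of Table 2 from the seed values with $r=2$ and $\kappa=7$, and verifies that every generated pair satisfies (1.1).

```python
# Minimal sketch: regenerate the k = 7 solutions of Table 2 from the
# recurrences (2.3)-(2.4); seeds t_0..t_3 and xi_0..xi_3 are taken from Table 2.
def tri(x):
    return x * (x + 1) // 2

k, r, kappa = 7, 2, 7              # rank r = 2 and kappa = t_1 + t_2 = 2 + 5
t = [0, 2, 5, 39]                  # seed values t_0 .. t_{2r-1}
xi = [0, 6, 14, 104]               # seed values xi_0 .. xi_{2r-1}

for n in range(2 * r, 10):
    t.append(2 * (kappa + 1) * t[n - r] - t[n - 2 * r] + kappa)      # (2.3)
    xi.append(2 * (kappa + 1) * xi[n - r] - xi[n - 2 * r] + kappa)   # (2.4)

for tn, xin in zip(t, xi):
    assert tri(xin) == k * tri(tn)  # every generated pair solves (1.1)

print(t[:7])   # [0, 2, 5, 39, 87, 629, 1394]      (matches Table 2)
print(xi[:7])  # [0, 6, 14, 104, 231, 1665, 3689]
```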
## 3\. Congruence of $\xi$ modulo $k$
We use the following notations: for $A,B,C\in\mathbb{Z},B<C,C>1$, $A\equiv
B\left(\text{mod\,}C\right)$ means that $\exists D\in\mathbb{Z}$ such that
$A=DC+B$, where $B$ and $C$ are called respectively the remainder and the
modulus. To search numerically for the values of $t_{n}$ and $\xi_{n}$ that
solve (1.1), one can use the congruence properties of
$\xi\left(\text{mod\,}k\right)$ given in the following propositions. In other
words, we search in the following propositions for expressions of the
remainders as functions of $k$ or of its factors.
###### Proposition 1.
For $\forall s,k\in\mathbb{Z}^{+}$, $k$ non-square,
$\exists\xi,\mu,\upsilon,i,j\in\mathbb{Z}^{+}$, such that if $\xi_{i}$ are
solutions of (1.1), then for $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$
with $1\leq j\leq\upsilon$, the number $\upsilon$ of remainders is always
even, $\upsilon\equiv 0\left(\text{mod\,}2\right)$, the remainders come in
pairs whose sum is always equal to $\left(k-1\right)$, and the sum of all
remainders is always equal to the product of $\left(k-1\right)$ and the number
of remainder pairs, $\sum_{j=1}^{\upsilon}\mu_{j}=\left(k-1\right)\upsilon/2$.
###### Proof.
Let $s,i,j,k,\xi,\mu,\upsilon,\alpha,\beta\in\mathbb{Z}^{+}$, $k$ non-square,
and $\xi_{i}$ solutions of (1.1). Rewriting (1.1) as
$T_{t_{i}}=T_{\xi_{i}}/k$, for $T_{t_{i}}$ to be integer, $k$ must divide
exactly $T_{\xi_{i}}=\xi_{i}\left(\xi_{i}+1\right)/2$, i.e., among all
possibilities, $k$ divides either $\xi_{i}$ or $\left(\xi_{i}+1\right)$,
yielding two possible solutions $\xi_{i}\equiv 0\left(\text{mod\,}k\right)$ or
$\xi_{i}\equiv-1\left(\text{mod}\,k\right)$, i.e. $\upsilon=2$ and the set of
$\mu_{j}$ includes $\left\\{0,\left(k-1\right)\right\\}$. This means that
$\xi_{i}$ are always congruent to either $0$ or $\left(k-1\right)$ modulo $k$
for all non-square values of $k$.
Furthermore, if some $\xi_{i}$ are congruent to $\alpha$ modulo $k$, then
other $\xi_{i}$ are also congruent to $\beta$ modulo $k$ with
$\beta=\left(k-\alpha-1\right)$. As
$\xi_{i}\equiv\alpha\left(\text{mod}\,k\right)$, then
$\xi_{i}\left(\xi_{i}+1\right)/2\equiv\left(\alpha\left(\alpha+1\right)/2\right)\left(\text{mod\,}k\right)$
and replacing $\alpha$ by $\alpha=\left(k-\beta-1\right)$ yields
$\left(\alpha\left(\alpha+1\right)/2\right)=\left(\left(k-\beta-1\right)\left(k-\beta\right)/2\right)$,
giving
$\xi_{i}\left(\xi_{i}+1\right)/2\equiv\left(\left(k-\beta-1\right)\left(k-\beta\right)/2\right)\left(\text{mod\,}k\right)\equiv$
$\left(\beta\left(\beta+1\right)/2\right)\left(\text{mod\,}k\right)$. In this
case, $\upsilon=4$ and the set of $\mu_{j}$ includes, but is not necessarily
limited to,
$\left\\{0,\alpha,\left(k-\alpha-1\right),\left(k-1\right)\right\\}$. ∎
Note that in some cases $\upsilon>4$; for example, for $k=66,70,78,105,...$, $\upsilon=8$.
However, in some other cases, $\upsilon=2$ only and the set of $\mu_{j}$
contains only $\left\\{0,\left(k-1\right)\right\\}$, as shown in the next
proposition. In this proposition, several rules (R) are given constraining the
congruence characteristics of $\xi_{i}$.
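The statements of Proposition 1 and the examples just quoted can be observed empirically. The brute-force helper below (an illustration added here, not code from the paper) collects the residues $\xi\ (\text{mod\,}k)$ of the first solutions of (1.1); in each printed set, 0 and $k-1$ appear and the remainders pair up to a sum of $k-1$.

```python
# Brute-force illustration: residues xi mod k over the solutions of (1.1)
# found for small t (t_max is an arbitrary search bound).
from math import isqrt

def remainders(k, t_max=100_000):
    mus = set()
    for t in range(t_max):
        T = k * t * (t + 1) // 2            # k * T_t
        s = isqrt(8 * T + 1)
        if s * s == 8 * T + 1:              # k * T_t is itself triangular
            xi = (s - 1) // 2
            mus.add(xi % k)
    return sorted(mus)

print(remainders(7))   # [0, 6]        -> only 0 and k - 1 (k prime, rule R1)
print(remainders(6))   # [0, 2, 3, 5]  -> pairs (0, 5) and (2, 3), each summing to k - 1
```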
###### Proposition 2.
For $\forall s,k,\alpha,n\in\mathbb{Z}^{+}$, $k$ non-square, $\alpha>1$,
$\exists\xi,\mu,\upsilon,i\in\mathbb{Z}^{+}$, such that if $\xi_{i}$ are
solutions of (1.1), then $\xi_{i}$ are always only congruent to $0$ and
$\left(k-1\right)$ modulo $k$ , and $\upsilon=2$ if either (R1) $k$ is prime,
or (R2) $k=\alpha^{n}$ with $\alpha$ prime and $n$ odd, or (R3) $k=s^{2}+1$
with $s$ even, or (R4) $k=s^{\prime 2}-1$ or (R5) $k=s^{\prime 2}-2$ with
$s^{\prime}$ odd.
###### Proof.
Let $s,s^{\prime},k,\alpha>1,n,i,\xi,\mu,\upsilon\in\mathbb{Z}^{+}$, $k$ non-
square, and $\xi_{i}$ are solutions of (1.1).
(R1)+(R2): If $k$ is prime or if $k=\alpha^{n}$ (with $\alpha$ prime and $n$
odd as $k$ is non-square), then, in both cases, $k$ can only divide either
$\xi_{i}$ or $\left(\xi_{i}+1\right)$, yielding the two congruences
$\xi_{i}\equiv 0\left(\text{mod\,}k\right)$ and
$\xi_{i}\equiv-1\left(\text{mod\,}k\right)$.
(R3): If $k=s^{2}+1$ with $s$ even, the rank $r$ is always $r=2$ [11], and the
only two sets of solutions are
(3.1) $\displaystyle\left(t_{1},\xi_{1}\right)$
$\displaystyle=\left(s\left(s-1\right),\left(s^{2}+1\right)\left(s-1\right)\right)$
(3.2) $\displaystyle\left(t_{2},\xi_{2}\right)$
$\displaystyle=\left(s\left(s+1\right),\left(s^{2}+1\right)\left(s+1\right)-1\right)$
as can be easily shown. For $t_{1}$, we form
$\displaystyle kT_{t_{1}}$
$\displaystyle=\frac{1}{2}\left(s^{2}+1\right)\left(s\left(s-1\right)\right)\left(s\left(s-1\right)+1\right)$
$\displaystyle=\frac{1}{2}\left[\left(s^{2}+1\right)\left(s-1\right)\right]\left[\left(s^{2}+1\right)\left(s-1\right)+1\right]=T_{\xi_{1}}$
which is the triangular number of $\xi_{1}$. One obtains similarly $\xi_{2}$
from $t_{2}$. These two relations (3.1) and (3.2) show respectively that
$\xi_{1}$ is congruent to $0$ modulo $k$ and $\xi_{2}$ is congruent to
$\left(k-1\right)$ modulo $k$.
(R4) For $k=s^{\prime 2}-1$ with $s^{\prime}$ odd, the rank $r=2$ [11], and
the only two sets of solutions are
(3.3) $\displaystyle\left(t_{1},\xi_{1}\right)$
$\displaystyle=\left(\left(s^{\prime}-1\right)s^{\prime}-1,\left(s^{\prime
2}-1\right)\left(s^{\prime}-1\right)-1\right)$ (3.4)
$\displaystyle\left(t_{2},\xi_{2}\right)$
$\displaystyle=\left(\left(s^{\prime}-1\right)\left(s^{\prime}+2\right)+1,\left(s^{\prime
2}-1\right)\left(s^{\prime}+1\right)\right)$
as can be easily demonstrated as above. These two relations (3.3) and (3.4)
show that $\xi_{1}$ and $\xi_{2}$ are congruent respectively to
$\left(k-1\right)$ and $0$ modulo $k$.
(R5) For $k=s^{\prime 2}-2$ with $s^{\prime}$ odd, the rank $r=2$ [11], and
the only two sets of solutions are
(3.5) $\displaystyle\left(t_{1},\xi_{1}\right)$
$\displaystyle=\left(\frac{1}{2}\left(s^{\prime}-2\right)\left(s^{\prime}+1\right),\frac{1}{2}\left(s^{\prime
2}-2\right)\left(s^{\prime}-1\right)-1\right)$ (3.6)
$\displaystyle\left(t_{2},\xi_{2}\right)$
$\displaystyle=\left(\frac{s^{\prime}}{2}\left(s^{\prime}+1\right)-1,\frac{1}{2}\left(s^{\prime
2}-2\right)\left(s^{\prime}+1\right)\right)$
as can easily be shown as above. These two relations (3.5) and (3.6) show that
$\xi_{1}$ and $\xi_{2}$ are congruent respectively to $\left(k-1\right)$ and
$0$ modulo $k$. ∎
There are other cases of interest, as shown in the next two propositions.
###### Proposition 3.
For $\forall n\in\mathbb{Z}^{+}$, $\exists k,\xi,\mu<k,i,j\in\mathbb{Z}^{+}$,
$k$ non-square, such that if $\xi_{i}$ are solutions of (1.1) with
$\xi_{i}\equiv\mu_{j}\left(\text{mod}\,k\right)$, and (R6) if $k$ is twice a
triangular number $k=n\left(n+1\right)=2T_{n}$, then the set of $\mu_{j}$
includes $\left\\{0,n,\left(n^{2}-1\right),\left(k-1\right)\right\\}$, with
$1\leq j\leq\upsilon$.
###### Proof.
Let $n,k,\xi,\mu<k,i,j\in\mathbb{Z}^{+}$, $k$ non-square, and $\xi_{i}$
solutions of (1.1). Let $\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with
$1\leq j\leq\upsilon$. As the ratio $\xi_{i}\left(\xi_{i}+1\right)/k$ must be
integer, $\xi_{i}\left(\xi_{i}+1\right)\equiv 0\left(\text{mod\,}k\right)$ or
$\mu_{j}\left(\mu_{j}+1\right)\equiv
0\left(\text{mod}\,n\left(n+1\right)\right)$ which is obviously satisfied if
$\mu_{j}=n$ or $\mu_{j}=\left(n^{2}-1\right)$. ∎
Finally, this last proposition gives a general expression of the congruence
$\xi_{i}\left(\text{mod\,}k\right)$ for most cases to find the remainders
$\mu_{j}$ other than $0$ and $\left(k-1\right)$.
###### Proposition 4.
For $\forall n>1\in\mathbb{Z}^{+}$, $\exists
k,f,\xi,\nu<n<k,\mu<k,m<n,i,j\in\mathbb{Z}^{+}$, $k$ non-square, let $\xi_{i}$
be solutions of (1.1) with $\xi_{i}\equiv\mu_{j}\left(\text{mod}\,k\right)$,
let $f$ be a factor of $k$ such that $f=k/n$ with
$f\equiv\nu\left(\text{mod\,}n\right)$ and $k\equiv\nu
n\left(\text{mod}\,n^{2}\right)$, then the set of $\mu_{j}$ includes either
$\left\\{0,mf,\left(\left(n-m\right)f-1\right),\left(k-1\right)\right\\}$ or
$\left\\{0,\left(mf-1\right),\left(n-m\right)f,\left(k-1\right)\right\\}$,
where $m$ is an integer multiplier of $f$ in the congruence relation and such
that $m<n/2$ or $m<\left(n+1\right)/2$ for $n$ being even or odd respectively,
and $1\leq j\leq\upsilon$.
###### Proof.
Let $n>1,k,f,\xi,\mu<k,m<n,i,j<n<k\in\mathbb{Z}^{+}$, $k$ non-square, and
$\xi_{i}$ a solution of (1.1). Let
$\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with $1\leq j\leq\upsilon$.
As the ratio $\xi_{i}\left(\xi_{i}+1\right)/k$ must be integer,
$\xi_{i}\left(\xi_{i}+1\right)\equiv 0\left(\text{mod\,}k\right)$ or
$\mu_{j}\left(\mu_{j}+1\right)\equiv 0\left(\text{mod\,}fn\right)$. For a
proper choice of the factor $f$ of $k$, let $\mu_{j}$ be a multiple of $f$,
$\mu_{j}=mf$, then $m\left(mf+1\right)\equiv 0\left(\text{mod\,}n\right)$. As
$f\equiv\nu\left(\text{mod\,}n\right)$, one has
(3.7) $m\left(m\nu+1\right)\equiv 0\left(\text{mod}\,n\right)$
Now let $\left(\mu_{j}+1\right)$ be a multiple of $f$, $\mu_{j}+1=mf$; then
$m\left(mf-1\right)\equiv 0\left(\text{mod\,}n\right)$ or
(3.8) $m\left(m\nu-1\right)\equiv 0\left(\text{mod\,}n\right)$
An appropriate combination of integer parameters $m$ and $\nu$ guarantees that
(3.7) and (3.8) are satisfied. Proposition 1 yields the other remainder value
as $mf+\left(n-m\right)f-1=k-1$ and $\left(mf-1\right)+\left(n-m\right)f=k-1$.
∎
The appropriate combinations of integer parameters $m$ and $\nu$ are given in
Table 3 for $2\leq n\leq 12$. The sign $-$ in subscript corresponds to the
remainder $\left(mf-1\right)$; the sign $/$ indicates an absence of
combination.
Table 3. Combination of parameters $m$ and $\nu$ for $2\leq n\leq 12$ (table entries give $m$) $n$ $\searrow$ $\nu$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
---|---|---|---|---|---|---|---|---|---|---|---
2 | 1_ | | | | | | | | | |
3 | 1_ | 1 | | | | | | | | |
4 | 1_ | / | 1 | | | | | | | |
5 | 1_ | 2 | 2_ | 1 | | | | | | |
6 | 1_ | / | / | / | 1 | | | | | |
7 | 1_ | 3 | 2 | 2_ | 3_ | 1 | | | | |
8 | 1_ | / | 3_ | / | 3 | / | 1 | | | |
9 | 1_ | 4 | / | 2 | 2_ | / | 4_ | 1 | | |
10 | 1_ | / | 3 | / | 5_ | / | 3_ | / | 1 | |
11 | 1_ | 5 | 4_ | 3_ | 2 | 2_ | 3 | 4 | 5_ | 1 |
12 | 1_ | / | / | / | 3 | / | 4_ | / | / | / | 1
One deduces from Table 3 the following simple rules:
1) $\forall n\in\mathbb{Z}^{+}$, only those values of $\nu$ that are co-prime
with $n$ must be kept, all other combinations (indicated by $/$ in Table 3)
must be discarded as they correspond to combinations with smaller values of
$n$ and $\nu$; for $n$ even, this means that all even values of $\nu$ must be
discarded. For example, $\nu=2$ and $n=4$ are not co-prime and their
combination obviously corresponds to $\nu=1$ and $n=2$.
2) For $\nu=1$ and $\nu=n-1$, all values of $m$ are $m=1$ with respectively
the remainders $\left(mf-1\right)$ and $mf$.
3) For $\forall n,i\in\mathbb{Z}^{+}$, $n$ odd, $2\leq
i\leq\left(n-1\right)/2$, and for $\nu=\left(n-\left(2i-3\right)\right)/2$ and
$\nu=\left(n+\left(2i-3\right)\right)/2$, all the values of $m$ are $m=i$.
4) For $\forall n\in\mathbb{Z}^{+}$, $n$ odd, and for $\nu=2$ and $\nu=n-2$,
the remainders are respectively $mf$ and $\left(mf-1\right)$.
5) For $\forall n,i\in\mathbb{Z}^{+}$, $n$ even, $2\leq i\leq n/2$, and for
$\nu=\left(n-\left(2i-3\right)\right)/2$ and
$\nu=\left(n+\left(2i-3\right)\right)/2$, all the values of $m$ are $m=i$.
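The combinations collected in Table 3 can be recomputed directly from the congruences (3.7) and (3.8). The sketch below (our illustration; function and variable names are ours) searches, for each $\nu$ coprime with $n$, the smallest admissible multiplier $m$; it reproduces, for instance, the rows $n=7$ and $n=11$ of Table 3 (the case $n=2$ is degenerate, as both forms of the remainder coincide there).

```python
# Recompute the (m, nu) combinations of Table 3 for a given n.
from math import gcd

def combinations(n):
    rows = {}
    for nu in range(1, n):
        if gcd(nu, n) != 1:
            rows[nu] = "/"                      # discarded by rule 1
            continue
        for m in range(1, (n + 1) // 2):        # m < n/2 (n even) or m < (n+1)/2 (n odd)
            if m * (m * nu + 1) % n == 0:       # remainder mf, cf. (3.7)
                rows[nu] = str(m)
                break
            if m * (m * nu - 1) % n == 0:       # remainder mf - 1, cf. (3.8)
                rows[nu] = str(m) + "_"
                break
    return rows

print(combinations(7))   # {1: '1_', 2: '3', 3: '2', 4: '2_', 5: '3_', 6: '1'}
print(combinations(11))  # {1: '1_', 2: '5', 3: '4_', 4: '3_', 5: '2', 6: '2_',
                         #  7: '3', 8: '4', 9: '5_', 10: '1'}
```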
Expressions of $\mu_{j}$ are given in Table 4 for $2\leq n\leq 12$ (with codes
E$n\nu$). For example, for $k\equiv 12\nu\left(\text{mod}\,12^{2}\right)$ and
$\nu=5$ (code E125), i.e. $k=60,204,348,...$,
$\xi_{i}\equiv\mu_{j}\left(\text{mod\,}k\right)$ with the set of remainders
$\mu_{j}$ including
$\left\\{0,mf,\left(\left(n-m\right)f-1\right),\left(k-1\right)\right\\}$ with
$m=3$ (see Table 3) and $f=k/12=5,17,29,...$, respectively.
Table 4. Expressions of $\mu_{j}$ for $2\leq n\leq 12$ $n$ | $\nu$ | $m$ | $k\equiv$ | $f$ | $\mu_{j}$ | Code
---|---|---|---|---|---|---
2 | 1 | 1 | $2\left(\text{mod\,}4\right)$ | $k/2$ | $0,(k/2)-1,k/2,k-1$ | E21
3 | 1 | 1 | $3\left(\text{mod\,}9\right)$ | $k/3$ | $0,\left(k/3\right)-1,2k/3,k-1$ | E31
| 2 | 1 | $6\left(\text{mod}\,9\right)$ | | $0,k/3,\left(2k/3\right)-1,k-1$ | E32
4 | 1 | 1 | $4\left(\text{mod\,}16\right)$ | $k/4$ | $0,\left(k/4\right)-1,3k/4,k-1$ | E41
| 3 | 1 | $12\left(\text{mod\,}16\right)$ | | $0,k/4,\left(3k/4\right)-1,k-1$ | E43
5 | 1 | 1 | $5\left(\text{mod\,}25\right)$ | $k/5$ | $0,\left(k/5\right)-1,4k/5,k-1$ | E51
| 2 | 2 | $10\left(\text{mod\,}25\right)$ | | $0,2k/5,\left(3k/5\right)-1,k-1$ | E52
| 3 | 2 | $15\left(\text{mod\,}25\right)$ | | $0,\left(2k/5\right)-1,3k/5,k-1$ | E53
| 4 | 1 | $20\left(\text{mod\,}25\right)$ | | $0,k/5,\left(4k/5\right)-1,k-1$ | E54
6 | 1 | 1 | $6\left(\text{mod\,}36\right)$ | $k/6$ | $0,\left(k/6\right)-1,5k/6,k-1$ | E61
| 5 | 1 | $30\left(\text{mod\,}36\right)$ | | $0,k/6,\left(5k/6\right)-1,k-1$ | E65
7 | 1 | 1 | $7\left(\text{mod\,}49\right)$ | $k/7$ | $0,\left(k/7\right)-1,6k/7,k-1$ | E71
| 2 | 2 | $14\left(\text{mod\,}49\right)$ | | $0,3k/7,\left(4k/7\right)-1,k-1$ | E72
| 3 | 3 | $21\left(\text{mod\,}49\right)$ | | $0,2k/7,\left(5k/7\right)-1,k-1$ | E73
| 4 | 3 | $28\left(\text{mod\,}49\right)$ | | $0,\left(2k/7\right)-1,5k/7,k-1$ | E74
| 5 | 2 | $35\left(\text{mod\,}49\right)$ | | $0,\left(3k/7\right)-1,4k/7,k-1$ | E75
| 6 | 1 | $42\left(\text{mod\,}49\right)$ | | $0,k/7,\left(6k/7\right)-1,k-1$ | E76
8 | 1 | 1 | $8\left(\text{mod\,}64\right)$ | $k/8$ | $0,\left(k/8\right)-1,7k/8,k-1$ | E81
| 3 | 3 | $24\left(\text{mod\,}64\right)$ | | $0,\left(3k/8\right)-1,5k/8,k-1$ | E83
| 5 | 3 | $40\left(\text{mod\,}64\right)$ | | $0,3k/8,\left(5k/8\right)-1,k-1$ | E85
| 7 | 1 | $56\left(\text{mod\,}64\right)$ | | $0,k/8,\left(7k/8\right)-1,k-1$ | E87
9 | 1 | 1 | $9\left(\text{mod\,}81\right)$ | $k/9$ | $0,(k/9)-1,8k/9,k-1$ | E91
| 2 | 4 | $18\left(\text{mod\,}81\right)$ | | $0,4k/9,(5k/9)-1,k-1$ | E92
| 4 | 2 | $36\left(\text{mod\,}81\right)$ | | $0,2k/9,(7k/9)-1,k-1$ | E94
| 5 | 2 | $45\left(\text{mod\,}81\right)$ | | $0,(2k/9)-1,7k/9,k-1$ | E95
| 7 | 4 | $63\left(\text{mod\,}81\right)$ | | $0,(4k/9)-1,5k/9,k-1$ | E97
| 8 | 1 | $72\left(\text{mod\,}81\right)$ | | $0,k/9,(8k/9)-1,k-1$ | E98
10 | 1 | 1 | $10\left(\text{mod\,}100\right)$ | $k/10$ | $0,(k/10)-1,9k/10,k-1$ | E101
| 3 | 3 | $30\left(\text{mod\,}100\right)$ | | $0,3k/10,(7k/10)-1,k-1$ | E103
| 7 | 3 | $70\left(\text{mod\,}100\right)$ | | $0,(3k/10)-1,7k/10,k-1$ | E107
| 9 | 1 | $90\left(\text{mod}\,100\right)$ | | $0,k/10,(9k/10)-1,k-1$ | E109
11 | 1 | 1 | $11\left(\text{mod\,}121\right)$ | $k/11$ | $0,(k/11)-1,10k/11,k-1$ | E111
| 2 | 5 | $22\left(\text{mod\,}121\right)$ | | $0,5k/11,(6k/11)-1,k-1$ | E112
| 3 | 4 | $33\left(\text{mod\,}121\right)$ | | $0,(4k/11)-1,7k/11,k-1$ | E113
| 4 | 3 | $44\left(\text{mod\,}121\right)$ | | $0,(3k/11)-1,8k/11,k-1$ | E114
| 5 | 2 | $55\left(\text{mod\,}121\right)$ | | $0,2k/11,(9k/11)-1,k-1$ | E115
| 6 | 2 | $66\left(\text{mod\,}121\right)$ | | $0,(2k/11)-1,9k/11,k-1$ | E116
| 7 | 3 | $77\left(\text{mod\,}121\right)$ | | $0,3k/11,(8k/11)-1,k-1$ | E117
| 8 | 4 | $88\left(\text{mod\,}121\right)$ | | $0,4k/11,(7k/11)-1,k-1$ | E118
| 9 | 5 | $99\left(\text{mod\,}121\right)$ | | $0,(5k/11)-1,6k/11,k-1$ | E119
| 10 | 1 | $110\left(\text{mod\,}121\right)$ | | $0,k/11,(10k/11)-1,k-1$ | E1110
12 | 1 | 1 | $12\left(\text{mod\,}144\right)$ | $k/12$ | $0,(k/12)-1,11k/12,k-1$ | E121
| 5 | 3 | $60\left(\text{mod\,}144\right)$ | | $0,3k/12,(9k/12)-1,k-1$ | E125
| 7 | 4 | $84\left(\text{mod\,}144\right)$ | | $0,(4k/12)-1,8k/12,k-1$ | E127
| 11 | 1 | $132\left(\text{mod\,}144\right)$ | | $0,k/12,(11k/12)-1,k-1$ | E1211
Values of the remainders $\mu_{j}$ are given in Table 5 for $2\leq k\leq 120$,
with rule (R) and expression (E) codes as references. R and E codes separated
by commas imply that all references apply simultaneously to the case; E codes
separated by + mean that all expressions are applicable to the case; some
expression references are sometimes missing. One observes that in two cases
(for $k=74$ and 104), expressions could not be found (indicated by question
marks).
Table 5. Values of $\mu_{j}$ for $2\leq k\leq 120$ $k$ | $\mu_{j}$ | References | $k$ | $\mu_{j}$ | References
---|---|---|---|---|---
2 | 0,1 | R1,R6,E21 | 63 | 0,27,35,62 | E72,E97
3 | 0,2 | R1,E31 | 65 | 0,64 | R3
5 | 0,4 | R1,R3,E51 | 66 | 0,11,21,32,33,44,54,65 | E21+E31+E65+E116
6 | 0,2,3,5 | R6,E21,E32,E61 | 67 | 0,66 | R1
7 | 0,6 | R1,R5,E71 | 68 | 0,16,51,67 | E41
8 | 0,7 | R2,R4,E81 | 69 | 0,23,45,68 | E32
10 | 0,4,5,9 | E21,E52,E101 | 70 | 0,14,20,34,35,49,55,69 | E21+E54+E73+E107
11 | 0,10 | R1,E111 | 71 | 0,70 | R1
12 | 0,3,8,11 | R6,E31,E43,E121 | 72 | 0,8,63,71 | R6,E81,E98
13 | 0,12 | R1 | 73 | 0,72 | R1
14 | 0,6,7,13 | E21,E72 | 74 | 0,73 | ?
15 | 0,5,9,14 | E32,E53 | 75 | 0,24,50,74 | E31
17 | 0,16 | R1,R3 | 76 | 0,19,56,75 | E43
18 | 0,8,9,17 | E21,E92 | 77 | 0,21,55,76 | E74,E117
19 | 0,18 | R1 | 78 | 0,12,26,38,39,51,65,77 | E21+E32+E61
20 | 0,4,15,19 | R6,E41,E54 | 79 | 0,78 | R1,R5
21 | 0,6,14,20 | E31,E73 | 80 | 0,79 | R4
22 | 0,10,11,21 | E21,E112 | 82 | 0,40,41,81 | E21
23 | 0,22 | R1,R5 | 83 | 0,82 | R1
24 | 0,23 | R4 | 84 | 0,27,56,83 | E31,E127
26 | 0,12,13,25 | E21 | 85 | 0,34,50,84 | E52
27 | 0,26 | R2 | 86 | 0,42,43,85 | E21
28 | 0,7,20,27 | E43,E74 | 87 | 0,29,57,86 | E32
29 | 0,28 | R1 | 88 | 0,32,55,87 | E83,E118
30 | 0,5,24,29 | R6,E51,E65 | 89 | 0,88 | R1
31 | 0,30 | R1 | 90 | 0,9,80,89 | R6,E91,E109
32 | 0,31 | R2 | 91 | 0,13,77,90 | E75
33 | 0,11,21,32 | E32,E113 | 92 | 0,23,68,91 | E43
34 | 0,16,17,33 | E21 | 93 | 0,30,62,92 | E31
35 | 0,14,20,34 | E52,E75 | 94 | 0,46,47,93 | E21
37 | 0,36 | R1,R3 | 95 | 0,19,75,94 | E54
38 | 0,18,19,37 | E21 | 96 | 0,32,63,95 | E32
39 | 0,12,26,38 | E31 | 97 | 0,96 | R1
40 | 0,15,24,39 | E53,E85 | 98 | 0,48,49,97 | E21
41 | 0,40 | R1 | 99 | 0,44,54,98 | E92,E119
42 | 0,6,35,41 | R6,E61,E76 | 101 | 0,100 | R1,R3
43 | 0,42 | R1 | 102 | 0,50,51,101 | E21
44 | 0,11,32,43 | E43,E114 | 103 | 0,102 | R1
45 | 0,9,35,44 | E54,E95 | 104 | 0,103 | ?
46 | 0,22,23,45 | E21 | 105 | 0,14,20,35,69,84,90,104 | E32+E51+E71
47 | 0,46 | R1,R5 | 106 | 0,52,53,105 | E21
48 | 0,47 | R4 | 107 | 0,106 | R1
50 | 0,24,25,49 | E21 | 108 | 0,27,80,107 | E43
51 | 0,17,33,50 | E32 | 109 | 0,108 | R1
52 | 0,12,39,51 | E41 | 110 | 0,10,99,109 | R6,E101,E1110
53 | 0,52 | R1 | 111 | 0,36,74,110 | E31
54 | 0,26,27,53 | E21 | 112 | 0,48,63,111 | E72
55 | 0,10,44,54 | E51,E115 | 113 | 0,112 | R1
56 | 0,7,48,55 | R6,E71,E87 | 114 | 0,56,57,113 | E21
57 | 0,18,38,56 | E31 | 115 | 0,45,69,114 | E53
58 | 0,28,29,57 | E21 | 116 | 0,28,87,115 | E41
59 | 0,58 | R1 | 117 | 0,26,90,116 | E94
60 | 0,15,44,59 | E43,E125 | 118 | 0,58,59,117 | E21
61 | 0,60 | R1 | 119 | 0,118 | R1,R5
62 | 0,30,31,61 | E21 | 120 | 0,15,104,119 | E87
Table 5 correctly gives the values of the remainder pairs in most of the
cases. There are, however, some exceptions and some missing values.
Among the exceptions to the values given in Table 5, for $n=2$, the remainder
values for $k=30,42,74,90,110,\ldots$ are different from the theoretical ones
in Table 4. Furthermore, for $k=66,70,78,105,...$, additional remainders
exist. Expressions are missing for $k=74$ (E21) and 104 (E85). Finally, one
observes also that for 16 cases, some Rules or Expressions supersede some
other Expressions (indicated by Ra > Exy or Exy > Ezt), as reported in Table
6. For example, Rule 6 supersedes Expression 21 (R6 > E21) for
$k=30,42,90,110$, i.e., $k=2T_{5},2T_{6},2T_{9},2T_{10},...$ and more
generally for all $k=2T_{i}$ for $i\equiv 1,2\left(\text{mod}4\right)$.
Table 6. Rules and Expressions superseding other Rules and Expressions $k$ | Superseding relations
---|---
24 | R4 > E32; R4 > E83
30 | R6 > E21; R6 > E31; R6 > E103; E51 > E103; E65 > E103
42 | R6 > E21; R6 > E32
48 | R4 > E31
56 | R6 > E43
60 | E43 > E32; E43 > E52
65 | R3 > E53
72 | R6 > E43
80 | R4 > E51
84 | E31 > E41; E31 > E75
90 | R6 > E21; R6 > E53
102 | E21 > E31; E21 > E65
110 | R6 > E21; R6 > E52
114 | E21 > E32; E21 > E61
119 | R1 > E73; R5 > E73
120 | E87 > R4; E87 > E31; E87 > E54
Note that 11 of these 16 values of $k$ are multiples of 6; the others are congruent to 2
and 5 modulo 6, in three and two cases respectively. One also notices that,
generally, Ra and Exy supersede Ezt with $x<z$ and $t<y$, except for $k=60$
and $120$.
## 4\. Conclusions
We have shown that, for indices $\xi$ of triangular numbers that are multiples of other
triangular numbers, the remainders in the congruence relations of $\xi$ modulo
$k$ always come in pairs whose sum always equals $\left(k-1\right)$, always
include 0 and $\left(k-1\right)$, and include only 0 and $\left(k-1\right)$ if $k$ is
prime, or an odd power of a prime, or an even square plus one, or an odd square
minus one or minus two. If the multiplier $k$ is twice the triangular number of
$n$, the set of remainders also includes $n$ and $\left(n^{2}-1\right)$, and if
$k$ has integer factors, the set of remainders includes multiples of a factor
following certain rules. Finally, algebraic expressions are found for the
remainders as functions of $k$ and its factors. Several exceptions are noticed
as well, and it appears that there are superseding rules between the various
rules and expressions.
This approach allows one to eliminate, in numerical searches, those
$\left(k-\upsilon\right)$ values of $\xi_{i}$ that are known not to provide
solutions of (1.1), where $\upsilon$ is the even number of remainders. The
gain is typically on the order of $k/\upsilon$, with $\upsilon\ll k$ for large
values of $k$.
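To illustrate the search strategy, the sketch below (an assumed helper, not the author's code) scans only the allowed residue classes of $\xi$ modulo $k$ and tests whether $T_{\xi}/k$ is triangular; for $k=7$, with residues $\left\\{0,6\right\\}$ from Table 5, it recovers the solutions of Table 2 while examining only 2 of every 7 candidate values of $\xi$.

```python
# Accelerated search: test only xi values in the allowed residue classes mod k.
from math import isqrt

def is_triangular(m):
    s = isqrt(8 * m + 1)
    return s * s == 8 * m + 1

def search(k, residues, xi_max):
    hits = []
    for mu in residues:
        for xi in range(mu, xi_max, k):          # xi congruent to mu (mod k) only
            T_xi = xi * (xi + 1) // 2
            if T_xi % k == 0 and is_triangular(T_xi // k):
                t = (isqrt(8 * (T_xi // k) + 1) - 1) // 2
                hits.append((t, xi))
    return sorted(hits)

print(search(7, [0, 6], 5000))
# [(0, 0), (2, 6), (5, 14), (39, 104), (87, 231), (629, 1665), (1394, 3689)]
```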
## References
* [1] Andrews, G.E. Number Theory, Dover, New York, 1971.
* [2] Weisstein, E, W. "Triangular Number." From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/TriangularNumber.html. Last accessed 14 February 2021.
* [3] Cunningham A., Mathematical Questions and Solutions in Continuation of the Mathematical Columns of "the Educational Times"., Volume 75, F. Hodgson, 1901, 87-88.
* [4] de Joncourt E., The nature and notable use of the most simple trigonal numbers. The Hague: Husson, 1762.
* [5] Roegel D., A reconstruction of Joncourt’s table of triangular numbers (1762). LOCOMAT project, https://locomat.loria.fr/joncourt1762/joncourt1762doc.pdf, 2013. Last accessed 14 February 2021.
* [6] Dickson L.E., History of the Theory of Numbers, Vol. II: Diophantine Analysis, Dover Publ., New York, 2005, p. 587.
* [7] Chahal J.S. and D’Souza H., Some remarks on Triangular Numbers, in Number Theory with an Emphasis on the Markoff Spectrum, A.D. Pollington and W. Moran, eds., Marcel Dekker Inc., New York, 1993.
* [8] Breiteig T., Quotients of triangular numbers. The Mathematical Gazette, Vol. 99, 545, 243-255, July 2015. DOI: 10.1017/mag.2015.33
* [9] V. Pletser, Recurrent Relations for Multiple of Triangular Numbers being Triangular Numbers, ArXiv 2101.00998, 2021, http://arxiv.org/abs/2101.00998, last accessed 14 February 2021.
* [10] N. J. A. Sloane, editor, The On-Line Encyclopedia of Integer Sequences, published electronically at https://oeis.org, Last accessed 14 February 2021.
* [11] V. Pletser, Searching for Multiple of Triangular Numbers being Triangular Numbers, Preprint, to be submitted, February 2021.
|
# WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for
Visualization Retrieval
Shishi Xiao1 (ORCID 0009-0008-0262-5289), Yihan Hou1 (ORCID 0000-0002-1459-8766), Cheng
Jin2 (ORCID 0000-0002-3522-3592), and Wei Zeng1,2 (ORCID 0000-0002-5600-8824)
1The Hong Kong University of Science and Technology (Guangzhou), Guangzhou,
China 2The Hong Kong University of Science and Technology, Hong Kong SAR,
China. Wei Zeng is the corresponding author. Email: <EMAIL_ADDRESS>
###### Abstract
Retrieving charts from a large corpus is a fundamental task that can benefit
numerous applications such as visualization recommendations. The retrieved
results are expected to conform to both explicit visual attributes (_e.g._ ,
chart type, colormap) and implicit user intents (_e.g._ , design style,
context information) that vary upon application scenarios. However, existing
example-based chart retrieval methods are built upon non-decoupled and low-
level visual features that are hard to interpret, while definition-based ones
are constrained to pre-defined attributes that are hard to extend. In this
work, we propose a new framework, namely _WYTIWYR (What-You-Think-Is-What-You-
Retrieve)_ , that integrates user intents into the chart retrieval process.
The framework consists of two stages: first, the _Annotation_ stage
disentangles the visual attributes within the query chart; and second, the
_Retrieval_ stage embeds the user’s intent with a customized text prompt as well
as the bitmap query chart, to recall the targeted retrieval results. We develop a
prototype _WYTIWYR_ system leveraging a contrastive language-image pre-
training (CLIP) model to achieve zero-shot classification as well as multi-
modal input encoding, and test the prototype on a large corpus with charts
crawled from the Internet. Quantitative experiments, case studies, and
qualitative interviews are conducted. The results demonstrate the usability
and effectiveness of our proposed framework.
[500]Human-centered computing Visualization [500]Information systems Query
intent [500]Computing methodologies Artificial intelligence
Volume: 42, Issue: 3
## 1 Introduction
Data visualization can empower users’ understanding of data and facilitate
communication. As a result, enormous numbers of charts are flourishing on the Internet.
Designing an appropriate chart is a time-consuming and labor-intensive process
that needs to consider various visual attributes (_e.g._ , chart type, color)
[Shn96], as well as the design style (_e.g._ , context information, 2D/3D
effect) [MTW∗12]. Therefore, designing based on existing examples, rather than
starting from sketches, is a preferred approach [BFW21, PSP21]. Chart
retrieval, as an approach to return ranked examples with respect to the
corresponding query, has gained much interest in both industry and academia
[LWW∗22, HA19, SHL∗16].
Figure 1: Comparison of pipelines of existing and our proposed chart retrieval
frameworks: example-based retrieval (left-top), definition-based retrieval
(left-bottom), and our proposed intent-aware retrieval (right).
However, existing works mainly focus on improving the similarity between query
charts and retrieval results while neglecting implicit user intent. As
illustrated in Figure 1, existing chart retrieval frameworks can be divided
into two categories: example-based (Figure 1 (left-top)) and definition-based
(Figure 1 (left-bottom)) approaches. Example-based methods (_e.g._ , [SSK06,
DW14, BBK∗18]) characterize the relevance criteria by the visual similarity of
charts, whilst definition-based methods (_e.g._ , [CCA15, HA19]) measure
similarity based on attributes extracted from the input charts. However, these
methods may suffer from both input and target shift problems. First, many
existing works take only specific formats (_e.g._ , scalable vector graphics
(SVG) [HA19, LWW∗22]) of visualization as inputs, which are often synthesized
with monotonous data distributions and limited chart types, incurring an
overfitting problem for the models embedded in the framework. In contrast, most
online charts are generally in bitmap formats, and they are highly diverse in
terms of chart types and visual styles [BDM∗18]. In addition, online charts
contain noisy and redundant information, putting higher demands on
generalizability and robustness of the retrieval framework. Second, example-
based methods would indiscriminately recall pixel-wise visual similarity in
the input chart, while definition-based methods consider only a small number
of pre-defined and fixed chart attributes. Such rigid similarity measurements
would return unwanted target charts. Furthermore, they may tend to limit the
retrieval to a specific chart attribute, a specific combination of chart
attributes, or even chart attributes that are not necessarily contained in the
query chart.
To mitigate such shifts, we propose a novel chart retrieval framework named
WYTIWYR. It explicitly extracts visual attributes for the query chart, and
offers a set of target charts through a customized retrieval procedure that
considers flexible combinations of the visual attributes and users’ intents.
As Figure 1 (right) illustrates, our method flexibly decomposes the query chart
into customizable attributes and actively injects user intent as
auxiliary input to obtain better chart retrieval results.
Our intent-aware framework consists of two essential stages. The first stage
is Annotation, which explicitly disentangles the visual attributes
of query charts. The disentangled attributes give users the flexibility
to combine the attributes representing their intent. Because these
attributes differ amongst chart visualizations, we performed preliminary
research with 18 visualization categories to establish the branching of each
attribute and what users would anticipate when retrieving charts. Based on the
study, we utilize deep neural networks to train several independent attribute
classifiers tailored to four primary visual attributes, namely {Type, Trend,
Color, Layout}. To remain flexible to user intent, we leverage the
state-of-the-art contrastive language-image pretraining (CLIP) [RKH∗21] model,
known for zero-shot classification. The model allows users to create their own
attribute classifiers to identify extended attributes.
These attributes are excluded from our preliminary set of classifiers but are
still within the query process.
The second stage is Retrieval, which highlights the role of the human in the loop
through intent attributes as prompts. The intent attributes condition the
intent-aware filter, narrow the search scope from a huge collection of charts,
and finally generate multiple candidate charts. The multi-modal encoder fuses
context information from both the query chart and the user intent prompt into a
joint representation. After that, similarity modeling is conducted between
this joint representation and the candidate charts. Ultimately, we rank the
similarity scores in decreasing order and output the top-$K$ results as the
target charts. Our main contributions are summarized as follows:
* •
Intent-aware Retrieval. We propose a novel framework integrating user intent
into chart retrieval process. The Annotation stage disentangles attributes and
enables a flexible combination of attributes. The Retrieval stage digests both
query chart and user intent as multi-modal inputs to get target retrieval
results.
* •
Prototype Construction. We implement a prototype system combining CLIP model
with visual interface to support intent-aware retrieval. Dataset, code,
pretrained model are released at https://github.com/SerendipitysX/WYTIWYR.
* •
Extensive Evaluation. We conduct extensive quantitative experiments, case
studies as well as expert interviews to validate the effectiveness of our
approach.
## 2 Related Works
Chart Attributes. Visualization is the process of mapping data to images. The
data can be encoded in different ways, yielding various types of chart
attributes that can be categorized in several levels. Low-level visual stimuli
of charts include color, position, and shape [RTOT06], while high-level
taxonomy includes consideration of the object of study, data, design model,
and user model [TM04]. The importance of choosing appropriate chart attributes
has been highlighted. Recent studies (_e.g._ , [MMA18, KRS∗21, LCF∗15,
SLC∗23]) stress the effects of chart attributes on users’ comprehension and
cognitive loads. Specifically, Li et al.[LCF∗15] point out that two charts of
the same data but with different layouts can cause perceptional imbalance,
even though they are informatively equivalent. Despite the importance, the
design space of chart attributes is too large for designers to choose from,
resulting in various types of charts in designers’ own style[SLC∗23]. Most of
these works follow a coarse-to-fine strategy to first identify chart types and
then extract visual marks and channels [SKC∗11, JKS∗17, PMH17, YZF∗22].
Existing methods often mix visual attributes as global style for similarity
estimation [MTW∗18, SHL∗16, JKS∗17], in which not all attributes are of
interest to users [LWW∗22]. To fill the gap, the Annotation stage of the
proposed WYTIWYR framework allows users to disentangle attributes in a given
chart.
Chart Retrieval. The core for effective retrieval is to delineate the
similarity between the query input and retrieval candidates in the database.
Based on the object of similarity measurement in the retrieval process,
existing chart retrieval strategies can be mainly categorized into two
classes: definition- and example-based approaches. Definition-based approaches
(_e.g._ , [CCA15, SHL∗16, HA19, ZDCC21, ZFF22, LWW∗22]) characterize the
criteria of similarity by explicit chart attributes, making it preferable for
well-configured chart formats, including SVG-style and Vega-Lite grammars. For
instance, Hoque and Agrawala[HA19] developed a search engine for D3
visualizations by using a JSON-like configuration that dictates query visual
encoding. Moreover, some studies (_e.g._ , [HMSA08, SGP∗18]) further use user
interaction records and visualization states represented in a provenance graph
to match the previous exploration states. Recent example-based methods regard
visualization charts as images and leverage deep learning models [MTW∗18,
ZFC∗23, SHL∗16, JKS∗17, YHZ23] to automatically extract visual features via an
end-to-end manner. A set of implicit low-level features are leveraged to
estimate the global similarity. However, definition-based approaches only
consider partially predetermined visual attributes, while example-based
approaches tend to match the query with all indiscriminate attributes.
Our proposed WYTIWYR framework combines the advantages of example-based
approaches in terms of their capability of capturing implicit attributes and
also definition-based approaches that leverage explicit attributes with high
interpretability. Moreover, the Retrieval stage enables a flexible combination
of these attributes with respect to user intent.
Vision-Language Pretraining. Large-scale pretrained models have shown
promising performance in both natural language processing and vision tasks
these years, such as BERT [DCLT18], RoBerTa [LOG∗19], and GPT series [BMR∗20,
RNS∗18, RWC∗19]. Following the paradigm of pretrain and finetune, many
downstream tasks [GKSS∗19, XBK∗15, LGR∗20] transfer the knowledge from the
pretrained model without training a new model from scratch by utilizing it.
Prompt, a text sequence in the form of natural language, links the pretrained
model and downstream tasks as a bridge. Strobelt _et al._ [SWS∗22] develop a
visual interface to provide promising prompt evaluation. Combined with a
high-capacity text encoder and visual encoder, contrastive language-image
pretraining (CLIP) [RKH∗21] learns heterogeneous multi-modality
representations from 400 million image-text pairs by resorting to semantic
supervision from the embedding space of CLIP. Several studies [PWS∗21, GPM∗21,
GLKC22] extend its applicability in a zero-shot manner, which means the model
can predict samples whose class is not observed during training.
In this work, we take advantage of vision-language pretraining models, to
encode user intents as a prompt to collaborate with the decisive process of
retrieval. The CLIP-driven model is leveraged to align user intent with
corresponding visual attributes in both Annotation and Retrieval stages. In
this way, we can offer highly customizable attributes for chart annotation and
multi-modal representation extraction for chart retrieval.
## 3 WYTIWYR Framework
To build the WYTIWYR framework, we first formulate chart attributes based on a
preliminary study (Sec. 3.1), then present an overview of the WYTIWYR
framework (Sec. 3.2).
Figure 2: Results of our preliminary study. _Type, color, trend,_ and _layout_
are voted by most participants and identified as _primary_ attributes, whilst
the others are categorized as _extended_ attributes.
### 3.1 Attribute Formulation
There are two categories of chart attributes for chart retrieval: 1) _primary
attributes_ that are generally considered for chart retrieval, and 2)
_extended attributes_ that meet the specific needs of different users. To
better understand user intents, we conducted a comprehensive study to
categorize chart attributes.
Study design. Our preliminary study was conducted online and involved 40
participants whose ages ranged from 20 to 52 (mean = 24.3). Before the study,
we first introduced the background and purpose of the experiment to the
participants. Then we tested their knowledge on the chart through a set of
verification questions. Only the participants who could identify typical
charts, such as bar and pie charts, were allowed to participate in the study.
Fortunately, all participants were familiar with charts, given their popularity
on social media. Next, the participants were shown multiple real-world charts
and asked to indicate what attributes were of interest. To eliminate any
potential subjective bias during the chart selection process for testing, a
random sample of five images (three from the Beagle dataset [BDM∗18], two from
other real-world examples) was taken for each of the 18 chart types from the collected
data. The testing charts were carefully chosen according to the chart type
taxonomy in the study by Borkin et al. [BVB∗13]. Details of the sampled
testing charts can be found in the supplementary material. Their answers were
recorded in text and summarized in different perspectives after the
experiments. The perspectives were cross-validated by two of our co-authors to
ensure correctness.
Figure 3: Illustration of attribute options of primary attributes for several
chart types. Some options may not be available for certain chart types, such
as _Layout_ for the point chart. Figure 4: Overall pipeline of attributes-
aware retrieval. The pipeline consists of two stages: Annotation for
extracting the attributes and Retrieval for modeling the similarity between
query charts and charts in the database.
Result. As shown in Figure 2, we identified a total of eight perspectives of
attributes of interest from the feedback, which are also consistent with
previous studies [BVB∗13, LWW∗22]. Among them, _Type, Color, Trend_ , and
_Layout_ are the four attributes most frequently mentioned by the
participants and can be applied to most types of charts. Although the _Form_
attribute was also frequently mentioned by participants, it is primarily
associated with line charts and less frequently mentioned for the other chart
types. Therefore, to ensure generalization and expressivity, we decided not to
combine _Form_ with the other four attributes. We dub the primary attributes
${\mathcal{T}_{p}}:=\left\\{Type,\text{ }Color,\text{ }Trend,\text{ }Layout\right\\}$. In addition, the participants
also brought up some other chart attributes, including _form, background,
context_ , and _2D/3D_ , which we dub the extended attributes
$\mathcal{T}_{e}$. For these chart
attributes, the variation between different users is large, making it
difficult to enumerate them comprehensively.
Moreover, the study also revealed a set of design options for each chart
attribute. Specifically, we choose 10 primary categories and 18 subcategories
of chart types from the visualization taxonomy[BVB∗13]. Figure 3 illustrates
the options in the attribute type of {_color_ , _trend_ , _layout_} for
several chart types including bar, line, and point charts. Notice that some
chart types may not have all options mentioned above. For example, _layout_
for point charts is not available, and maps only have _color_ options.
### 3.2 Overall Pipeline
The workflow of our proposed WYTIWYR framework is depicted in Figure 4. For
the Annotation stage, we employ several classifiers based on a robust neural
network architecture to disentangle the attributes
$\left\\{\mathcal{T}_{p},\mathcal{T}_{e}\right\\}\in\mathcal{T}$
embedded in the query chart $\mathcal{Q}$. After that, users can select the
disentangled attributes $\mathcal{I}_{A}$ as their intents based on their
needs. Also, $\mathcal{I}_{A}$ conditions the components of the filter in
the Retrieval stage. For the Retrieval stage, we perform dual-path similarity
modeling between the query feature and the candidate information, guided by the
intent attributes, to yield the returned charts. For the query-feature path,
the bitmap query chart $\mathcal{Q}$ together with the user intent text prompt
$\mathcal{I}_{P}$ is fed into a multi-modal encoder to obtain a joint
representation. For the candidate-information path, the features are produced from charts
in the database filtered by the intent-aware filter built from
$\mathcal{I}_{A}$. Throughout the overall pipeline, user intent can be
injected to guide the chart retrieval process in the following three ways (a
minimal code sketch of this retrieval flow is given after the list below):
* •
Classifier Customization. Users can tailor the usage of classifiers depending
on their needs in order to determine the presence of an attribute in the query
chart.
* •
Disentangled Attributes Selection. The attributes adopted in the Retrieval
stage can be selected and combined by the users.
* •
User Prompt Tuning. Users can add specific implicit intent as the text prompt
to guide the retrieval process, which is independent of the query chart.
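The sketch below illustrates this retrieval flow with the open-source CLIP package (https://github.com/openai/CLIP): the bitmap query chart and the intent text prompt are encoded and fused into a joint representation, which is then matched against the embeddings of the candidate charts kept by the intent-aware filter. The simple embedding-averaging fusion and the helper names are placeholders of ours, not necessarily the multi-modal encoder used in the prototype system.

```python
# Illustrative sketch of the dual-path retrieval flow (assumes the openai/CLIP package).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_query(chart_path, intent_prompt):
    image = preprocess(Image.open(chart_path)).unsqueeze(0).to(device)
    text = clip.tokenize([intent_prompt]).to(device)
    with torch.no_grad():
        img_f = model.encode_image(image)
        txt_f = model.encode_text(text)
    img_f = img_f / img_f.norm(dim=-1, keepdim=True)
    txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    return (img_f + txt_f) / 2          # joint representation (placeholder fusion)

def top_k(query_feat, candidate_feats, k=5):
    # candidate_feats: (N, d) normalized embeddings of the filtered candidate charts
    scores = (query_feat @ candidate_feats.T).squeeze(0)
    return scores.topk(k).indices.tolist()
```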
## 4 WYTIWYR Prototype System
### 4.1 Stage 1: Annotation
As shown in Figure 5, the task of the Annotation stage is to disentangle and
generate the visual attributes $\mathcal{T}$ attached to the given query chart
$\mathcal{Q}$. For the primary attributes
$\mathcal{T}_{p}$,
we adopt four classifiers for separate extraction in a supervised learning
manner. For the extended attributes
$\mathcal{T}_{e}$,
as they are absent from our training, we conduct the extraction process in a
zero-shot learning fashion.
#### 4.1.1 Annotation for Primary Attributes
Redundant Information Removal. Several raw charts from practical scenarios are
shown in Figure 6 (a); they contain fancy decorations, text information,
backgrounds, and legends. This work focuses on the intrinsic attributes of
$\mathcal{Q}$ rather than this redundant information. Hence, we remove the
redundant information using ISNet [QDH∗22], a segmentation network with
pretrained weights.
Nevertheless, some context lies beyond the segmentation capability of
ISNet, such as the emojis in the left-most example in Figure 6 (a) & (b). The
redundant emojis in red and yellow would degrade the performance of colormap
classification: instead of being categorized as a categorical colormap, the
chart would be mistakenly recognized as a diverging colormap. To mitigate
this issue, we ignore colors that occupy less than 10% of the pixels in the
non-transparent regions of the image after segmentation.
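The 10% threshold rule amounts to a few lines of array processing. The sketch below is a minimal illustration of this filtering step, assuming an RGBA image array produced after segmentation; the helper name and the coarse quantization are our own choices and not part of the released code.

```python
import numpy as np

def filter_minor_colors(rgba: np.ndarray, threshold: float = 0.10):
    """Drop colors covering less than `threshold` of the non-transparent pixels.

    rgba: (H, W, 4) uint8 array after segmentation, where the alpha channel
          marks the removed (transparent) regions.
    Returns a list of (color, fraction) pairs for the remaining colors.
    """
    mask = rgba[..., 3] > 0                       # keep non-transparent pixels only
    pixels = rgba[mask][:, :3]
    # Coarsely quantize to merge near-identical colors before counting.
    quantized = (pixels // 32) * 32
    colors, counts = np.unique(quantized.reshape(-1, 3), axis=0, return_counts=True)
    fractions = counts / counts.sum()
    keep = fractions >= threshold
    return [(tuple(c), float(f)) for c, f in zip(colors[keep], fractions[keep])]
```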
Figure 5: The Annotation stage of our pipeline. To disentangle various
attributes in the query chart, four classifiers are built for primary
attributes. In addition, one CLIP-based classifier is employed to optionally
annotate the extended attribute via user customization. Figure 6: Redundant
information removal and main color extraction. (a) Raw images. (b) The chart
after removing redundant information. (c) The extracted color after filtering
redundant colors.
Unified Classifiers. For annotation of the primary attributes, we employ the
ResNet50 [HZRS16] architecture as the backbone for all four attribute
classifiers. Thanks to the residual connections among the convolution layers,
the network is lightweight while maintaining strong performance. During
training, the network learns complex features under the supervision of the
given attribute labels. At inference time, the trained classifier identifies
the corresponding attribute in the query chart, as in Figure 5.
The only difference among the four classifiers is the number of output channels
in the last fully connected layer. For instance, this number is set to $18$
for the $Type$ classifier since there are $18$ types of charts under
consideration. Similarly, based on the specific task, we set this number to
$3$ for the $Layout$, $Trend$, and $Color$ classifiers with respect to
their attribute options; see Figure 3 for details. Note that $\mathcal{Q}$ may
not cover all attributes included in the classifiers. For instance,
$\mathcal{T}_{p}$ of a heatmap chart is $\left\{Type,\text{ }Color\right\}$, while the
$\left\{Trend,\text{ }Layout\right\}$ attributes are absent.
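As a concrete illustration, the four classifiers can be instantiated from the same torchvision ResNet50 backbone by swapping only the final fully connected layer. The sketch below is our own minimal rendering of this setup; the head widths follow the counts stated above, and weight loading is omitted.

```python
import torch.nn as nn
from torchvision.models import resnet50

def make_attribute_classifier(num_classes: int) -> nn.Module:
    """ResNet50 backbone with the last fully connected layer resized."""
    model = resnet50()  # pretrained weights can be loaded here if desired
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# One classifier per primary attribute; only the head width differs.
classifiers = {
    "Type":   make_attribute_classifier(18),
    "Layout": make_attribute_classifier(3),
    "Trend":  make_attribute_classifier(3),
    "Color":  make_attribute_classifier(3),
}
```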
Loss Function. There are two challenges to overcome when formulating the loss
function: 1) imbalanced samples in each class of the dataset (see Table 1),
and 2) the existence of noisy samples that are hard to classify. To improve
the accuracy of $\mathcal{T}_{p}$ annotation, we introduce the Focal Loss
[LGG∗17], which modifies the standard cross-entropy loss to overcome the
above-mentioned challenges. The Focal Loss is defined as:
$\mathcal{L_{\mathrm{FL}}}(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\operatorname{log}(p_{t}),$
(1)
where $p_{t}\in[0,1]$ represents the model's estimated probability for the
ground-truth class, $\alpha_{t}$ is a scaling factor, and $\gamma$ is a
modulating factor. Among them, $\alpha_{t}$ is set by the inverse class
frequency, so that classes with fewer samples contribute more to the learned
parameters, and $\gamma$ up-weights the loss assigned to poorly classified
examples, preventing the training process from being dominated by the large
number of well-classified samples.
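A minimal PyTorch-style rendering of Eq. (1) is sketched below. The per-class weights are built from inverse class frequencies as described in the text, though the exact normalization and the default $\gamma$ are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, class_counts, gamma: float = 2.0):
    """Focal loss of Eq. (1): -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits: (batch, num_classes) raw classifier outputs
    targets: (batch,) ground-truth class indices
    class_counts: (num_classes,) number of training samples per class
    """
    # alpha_t from inverse class frequency (normalized to sum to 1).
    alpha = 1.0 / class_counts.float()
    alpha = alpha / alpha.sum()

    ce = F.cross_entropy(logits, targets, reduction="none")  # -log(p_t)
    p_t = torch.exp(-ce)                                     # p_t
    alpha_t = alpha.to(logits.device)[targets]
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```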
#### 4.1.2 Annotation for Extended Attributes
Since the primary attributes only cover general needs in chart retrieval, we
offer an optional classifier, controlled by the users, to identify
$\mathcal{T}_{e}$ in the query chart, as shown in Figure 5. Besides
$\mathcal{Q}$, the $\mathcal{T}_{e}$ classifier requires the user to provide
several labels of the attribute (_e.g._, style). We denote these labels as
$T:=\{{t}_{1},{t}_{2},\ldots,{t}_{m}\}$. As these labels lie outside our
dataset, a zero-shot classification task is naturally formed. In the
following, we briefly introduce the mechanism of the CLIP model at a general
level with a toy example as in Figure 5, where the user sets the text labels
as [“3D style”, “Flat style”, “Sketch style”].
The CLIP model aligns $T$ embeddings with $\mathcal{Q}$ embeddings in a multi-
modality embedding space in a contrastive manner. Specifically, in the
embedding space of the example in Figure 5, the distance of $\mathcal{Q}$ and
“3D style” would be less than the other two irrelevant labels. As such, the
query results tend to be “3D style” bar charts. The CLIP model strikes a good
balance between performance and computational cost in the prototype system.
Nevertheless, the model in our framework can be feasibly replaced with other
advanced language-image models, _e.g._ ,[LLXH22, LSG∗21, ZZF∗22].
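For concreteness, the zero-shot extended-attribute classification can be sketched with the openai `clip` package roughly as follows. The prompt handling and the `zero_shot_attribute` helper are illustrative assumptions, not the exact code of our prototype.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def zero_shot_attribute(image_path: str, labels: list) -> str:
    """Return the user-provided label closest to the query chart in CLIP space."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize(labels).to(device)
    with torch.no_grad():
        image_feat = model.encode_image(image)
        text_feat = model.encode_text(text)
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        sims = (image_feat @ text_feat.T).squeeze(0)   # cosine similarities
    return labels[int(sims.argmax())]

# Example with the toy labels from Figure 5:
# zero_shot_attribute("query_chart.png", ["3D style", "Flat style", "Sketch style"])
```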
### 4.2 Stage2: Retrieval
As the Annotation stage completes the disentanglement of the attributes
$\mathcal{T}$ in the query chart, users in the Retrieval stage can select the
intent attributes $\mathcal{I}_{A}$ based on their intents. Depicted in Figure
7, there are two branches with different modules for our Retrieval stage,
namely multi-modal encoder and intent-aware filter. The intent-aware filter
rules out charts whose attributes are beyond the range of $\mathcal{I}_{A}$.
The multi-modal encoder integrates $\mathcal{Q}$ and $\mathcal{I}_{P}$ as a
joint representation. Then, similarity modeling is conducted between results
of these two branches, yielding the final retrieval results based on the
modeling score.
Figure 7: The Retrieval stage consists of two branches. The intent-aware
filter branch (bottom) filters out charts based on intent attributes selected
by the user, generating multiple chart candidates. The multi-modal encoder
branch (top) produces the query feature based on the multi-modal inputs.
Similarity modeling is conducted between results of these two branches.
#### 4.2.1 Multi-Modal Encoder
This branch forms a CLIP-generated query feature from the inputs
$\mathcal{Q}$ and $\mathcal{I}_{P}$. As introduced in Sec. 4.1.2, the CLIP
embedding space aligns heterogeneous text and visual features from the
multi-modal input. Hence, $\mathcal{Q}$ and $\mathcal{I}_{P}$ are encoded into
features by their respective CLIP encoders and fused into a joint multi-modal
feature $f_{\mathcal{M}}$, denoted as:
$f_{\mathcal{M}}=\frac{F_{\theta}(\mathcal{Q})+G_{\phi}(\mathcal{I}_{P})}{2},$
(2)
where $F_{\theta}$ and $G_{\phi}$ are denoted as the image encoder and text
encoder of the CLIP model, respectively.
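Under the same CLIP setup, Eq. (2) amounts to averaging the image and text embeddings. The sketch below is our own minimal rendering, again assuming the openai `clip` package; the L2 normalization before averaging is our addition and not stated in Eq. (2).

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def multimodal_query_feature(chart_path: str, intent_prompt: str) -> torch.Tensor:
    """f_M = (F_theta(Q) + G_phi(I_P)) / 2 over CLIP features, as in Eq. (2)."""
    image = preprocess(Image.open(chart_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([intent_prompt]).to(device)
    with torch.no_grad():
        f_img = model.encode_image(image)   # F_theta(Q)
        f_txt = model.encode_text(tokens)   # G_phi(I_P)
    f_img = f_img / f_img.norm(dim=-1, keepdim=True)
    f_txt = f_txt / f_txt.norm(dim=-1, keepdim=True)
    return (f_img + f_txt) / 2              # joint multi-modal feature f_M
```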
#### 4.2.2 Intent-Aware Filter
The Intent-Aware Filter branch rules out irrelevant charts in the database
and forms the candidate feature. Previous works (_e.g._, [HA19, MTW∗18]) on
chart filtering primarily rely on fixed model settings, which are coarse and
neglect the user’s intents. To address this shortcoming, we propose an
intent-aware filter constructed from $\mathcal{I}_{A}$, a set of disentangled
attributes selected by the user. For example, with the $\mathcal{Q}$ illustrated
in Figure 7, a user may select [Type, Layout, Trend] as the target
attributes for the retrieval; the filter then retains the $n$ charts in the
chart database containing these target attributes. The retained charts
serve as candidates $C:=\{c_{1},c_{2},\cdots,c_{n}\}$ that are encoded as the
candidate feature $f_{C}:=\{f_{c_{1}},f_{c_{2}},\cdots,f_{c_{n}}\}$ by the image
encoder of the CLIP model.
#### 4.2.3 Similarity Modeling
As the core of retrieval, similarity modeling is comprehensively performed
with triplets $\\{\mathcal{Q}$, $\mathcal{I}_{A}$, $f_{\mathcal{M}}\\}$. Among
them, $\mathcal{Q}$ provides the global perception composed of implicit
features. The explicit intent attributes $\mathcal{I}_{A}$ and implicit query
feature $f_{\mathcal{M}}$ that include user intent prompt and query chart,
work together to recall user-intended examples throughout the retrieval
process. The overall similarity score $\mathcal{S}$ can be denoted as:
$\mathcal{S}=S_{\mathcal{Q}}\cdot\text{e}^{\nu S_{\mathcal{I}_{A}}+\mu
S_{\mathcal{M}}},$ (3)
where $S_{\mathcal{Q}}$ denotes the global perception at the pixel level, and
$S_{\mathcal{I}_{A}}$ and $S_{\mathcal{M}}$ denote the similarity scores of the
user-selected attributes and the multi-modal feature, respectively. The
scaling factors $\nu$ and $\mu$ are empirically set to $1$ and $5$,
respectively. $S_{\mathcal{Q}}$, $S_{\mathcal{I}_{A}}$, and
$S_{\mathcal{M}}$ are all normalized to the range $[0,1]$. In the following, we
introduce these similarity scores in detail.
Global Perception Score. Although salient attributes with respect to user
intent are disentangled and selected, a multitude of implicit features are
intertwined within the chart image, which forms the concept of global
perception. We adopt the classic cosine similarity, computed between the
encoded feature of $\mathcal{Q}$ and the feature of every candidate
$f_{C_{i}},i=1,2,\ldots,n$, as follows:
$S_{\mathcal{Q}}=\frac{{F_{\theta}({\mathcal{Q}}})\cdot{f_{C_{i}}}}{\|{F_{\theta}({\mathcal{Q}}})\|\|{f_{C_{i}}}\|}.$
(4)
Intent Attributes Score. The extended attribute $\mathcal{T}_{e}$
captures a user’s expected attributes of the retrieved charts beyond the
four primary attributes. Among the attributes, Type and Layout define core
properties of a chart, which are easy to distinguish from $\mathcal{Q}$;
therefore, we neglect them in the similarity crafting. The other attributes
serve as miscellaneous variants, and changes in them can be subtle. This
motivates us to build another score, $S_{\mathcal{I}_{A}}$, to further
enhance the retrieval process. Therefore, we mainly consider the following
three attributes: Trend $\mathcal{N}$, Color $\mathcal{C}$, and the extended
attribute $\mathcal{T}_{e}$.
$S_{\mathcal{I}_{A}}$ is formulated as:
$S_{\mathcal{I}_{A}}=S_{\mathcal{N}}+S_{\mathcal{C}}+S_{\mathcal{T}_{e}}.$ (5)
For $S_{\mathcal{N}}$, we build an extractor to extract the trend feature from
the query chart $\mathcal{Q}$ and every candidate chart
$C_{i},i=1,2,\ldots,n$. The extractor shares the parameters with the trend
classifier in the Annotation stage except the last fully connected layer.
Then, $S_{\mathcal{N}}$ can be estimated by cosine similarity between the
extracted features.
For $S_{\mathcal{C}}$, we follow the scheme in Figure 6 to transform
$\mathcal{Q}$ into a proportional color palette after the steps of background
removal and color extraction. The proportional color palette is then
transformed into a 128-bin color histogram for each of the RGB channels,
stored separately in three vectors. We concatenate these vectors to form the
ultimate color vector $V$. Denoting $V_{\mathcal{Q}}$ and
$V_{C_{i}},i=1,2,\ldots,n$ as the color vectors of $\mathcal{Q}$ and
$C_{i}$, respectively, we estimate the cosine similarity between them.
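A minimal sketch of the color-vector construction and its cosine similarity is given below. The 128 bins per RGB channel follow the text, while feeding raw palette pixels into the histogram is an assumed simplification of the proportional-palette step.

```python
import numpy as np

def color_vector(pixels: np.ndarray, bins: int = 128) -> np.ndarray:
    """Concatenate per-channel histograms of an (N, 3) RGB pixel array."""
    hists = [np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0] for c in range(3)]
    v = np.concatenate(hists).astype(float)
    return v / (np.linalg.norm(v) + 1e-12)

def color_similarity(pixels_q: np.ndarray, pixels_c: np.ndarray) -> float:
    """S_C: cosine similarity between the color vectors of Q and a candidate."""
    return float(color_vector(pixels_q) @ color_vector(pixels_c))
```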
For $S_{\mathcal{T}_{e}}$, since it is hard to quantify the similarity of
intents, we leverage the power of the CLIP model again to give an accurate
response. We feed the CLIP model with each candidate chart $C_{i},i=1,2,\ldots,n$
and the text labels in the user intent $t_{j},j=1,2,\ldots,m$. For each chart, we
obtain $m$ outputs $\{y_{1},y_{2},\ldots,y_{m}\}$. Prior to the
computation, the user selects one text label, indexed $s$, as the attribute
that best represents their intent. We then denote $S_{\mathcal{T}_{e}}$ for
a candidate chart as:
$S_{\mathcal{T}_{e}}=\mathrm{e}^{y_{s}}/\sum_{j=1}^{m}\mathrm{e}^{y_{j}}.$
Feature Matching Score. Similarly, we match the closeness between the multi-
modal feature and the candidate feature by the following equation:
$S_{\mathcal{M}}=\frac{f_{\mathcal{M}}\cdot{f_{C_{i}}}}{{\|{f_{\mathcal{M}}\|\|{f_{C_{i}}}\|}}}.$
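Putting the pieces together, the overall score of Eq. (3) can be computed per candidate as in the sketch below. The component scores are assumed to be precomputed and already normalized to $[0,1]$; the helper names are ours.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def overall_score(s_q: float, s_ia: float, s_m: float,
                  nu: float = 1.0, mu: float = 5.0) -> float:
    """S = S_Q * exp(nu * S_IA + mu * S_M), Eq. (3)."""
    return s_q * np.exp(nu * s_ia + mu * s_m)

# For each candidate i:
#   s_q  = cosine(F_theta_Q, f_C[i])        # global perception, Eq. (4)
#   s_ia = s_trend + s_color + s_te         # intent attributes, Eq. (5)
#   s_m  = cosine(f_M, f_C[i])              # feature matching score
# Candidates are then ranked by overall_score(s_q, s_ia, s_m).
```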
### 4.3 System Interface
We design an interface for the prototype system that allows users to retrieve
a chart according to their intents.
Figure 8: Interface of our prototype system. A) the _Annotation_ view
decouples the properties of the query chart; B) the _Retrieval_ view allows
users to select the primary and extended attributes, and C) the _Result_ view
shows the top 5 retrieved charts.
Annotation View. In this view, users can upload a query chart and get
disentangled primary attributes, including Type, Color, Trend, and Layout. As
shown in Figure 8 (A), a bar chart is uploaded and the automatically annotated
attributes, “ _Barchart, Categorical Colormap, Increasing Trend, Horizontal
Layout_ ” are presented.
Retrieval View. In this view (Figure 8 (B)), users can view and choose both
primary and extended attributes. The first tag in green corresponds to the
annotated attributes from the query chart, while other options are also shown,
allowing users to select the desired attributes. If users feel that the
default primary attributes are insufficient or have other attributes of
interest, they can add a new classifier by clicking on the chart in the upper
right corner, which reflects the philosophy of the proposed intent-aware
design. In the input box below, users can enter their intentions as intent
prompts for more customized queries. As shown in Figure 8 (B1), the user
selects “Bar chart” for the type attribute and enters “Fancy style with icon”
in the Retrieval view.
Result View. This view (Figure 8 (C)) shows the top five query results,
arranged in decreasing order of their similarity scores.
## 5 Evaluation
Figure 9: Visual comparison of chart retrieval by different methods. First row: Retrieval results by the conventional HOG method. Second row: Retrieval results by the deep-learning-based CNN method. Third to fourth rows: Retrieval results of our WYTIWYR framework.
Table 1: Our integrated dataset consists of 18 types of charts from Beagle [BDM∗18] and manual collection.
Chart Type | #Count | Chart Type | #Count
---|---|---|---
Bar chart | 7269 | Heatmap | 352
Stacked Bar Chart | 1159 | Line Graph | 11605
Circular Bar Chart | 608 | Star Plot | 491
Donut Chart | 2459 | Choropleth Map | 640
Pie Chart | 2587 | Scatter Plot | 3000
Sankey Diagram | 266 | Word Cloud | 406
Timeline | 324 | Dendrogram | 298
Box Plot | 571 | Network | 395
Histogram | 695 | Circular packing chart | 139
### 5.1 Quantitative Experiment
Dataset. Previous works concentrate on the retrieval of limited chart types,
e.g., retrieval of a single type [MTW∗18] or a few types [CCA15, HBL∗19] of
charts. Moreover, some of the literature considers chart categorization only
coarsely, broadly grouping several chart categories into one [LWW∗22]. Many
datasets are synthesized with simple composition and monotonous variation,
making it difficult to adapt to real-world scenarios that are much more
complex. In this work, we utilize the Beagle dataset [BDM∗18] with charts in
bitmap format,
which offers visualization collections designed by real-world users through
multiple tools, including D3, Plotly, Chartblocks, Fusion Charts, and Graphiq.
To make our framework more robust, we further add more difficult instances by
manually collecting 4k images from Pinterest, which supplements a large number
of stylized and irregular charts. We filter out charts whose type is beyond
our scope (_e.g._ , scientific visualizations) and manually label them with
$\mathcal{T}_{p}$. Finally, our dataset consists of 33,260 images in total.
The detailed distribution is shown in Table 1.
Table 2: Accuracy of Annotation results.
Method | Type | Trend | Layout | Color
---|---|---|---|---
ResNet50+MSE Loss | 0.9518 | 0.8290 | 0.9537 | 0.7963
ResNet50+Focal Loss | 0.9601 | 0.8424 | 0.9653 | 0.8019
Annotation Accuracy. We evaluate the performance of $\mathcal{T}_{p}$
annotation using ResNet50 architectures with different losses. As Table 2
lists, the best result is achieved with the designated Focal Loss.
Table 3: F1-scores of Retrieval results.
Top-K | Method | Type | Trend | Layout | Color
---|---|---|---|---|---
3 | HOG | 0.6199 | 0.6140 | 0.5449 | 0.6800
 | CNN | 0.9154 | 0.8043 | 0.7095 | 0.7717
 | Ours | 0.9549 | 0.8360 | 0.9280 | 0.8260
5 | HOG | 0.5067 | 0.4853 | 0.3857 | 0.6045
 | CNN | 0.8872 | 0.7550 | 0.6324 | 0.7076
 | Ours | 0.9364 | 0.7944 | 0.9118 | 0.7730
10 | HOG | 0.4124 | 0.4261 | 0.2741 | 0.5388
 | CNN | 0.8607 | 0.7085 | 0.5527 | 0.6520
 | Ours | 0.9091 | 0.7546 | 0.8812 | 0.7223
Retrieval Results. To evaluate the retrieval performance comprehensively, we
examine both precision and recall via the F1-score. Denoting the numbers of
true positives, true negatives, false positives, and false negatives as
$TP,TN,FP,FN$, the F1-score can be formulated as:
$F1\text{-}score=\frac{2TP}{2TP+FP+FN}.$ (6)
The retrieval performance with respect to each attribute in $\mathcal{T}_{p}$
is computed individually using the F1-score in the top-$K$ fashion, where
$K\in\{3,5,10\}$. We devise two settings, with and without user intent, to
fully evaluate our approach. In the setting without user intent, similarity
estimation depends only on the global perception score $S_{\mathcal{Q}}$.
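As a sketch of how the top-$K$ scores in Table 3 can be computed for one attribute, the counting below treats a retrieved chart as relevant if it shares the query's attribute value; this counting protocol is our own reading of Eq. (6), not the exact evaluation script.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Eq. (6): F1 = 2TP / (2TP + FP + FN)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def topk_counts(retrieved: list, relevant: set, k: int):
    """Count TP/FP/FN for one query: TP = relevant charts among the top-k,
    FP = non-relevant charts among the top-k, FN = relevant charts missed."""
    top = retrieved[:k]
    tp = sum(1 for item in top if item in relevant)
    fp = k - tp
    fn = len(relevant) - tp
    return tp, fp, fn

# Example (hypothetical ids): f1_score(*topk_counts(ranked_ids, relevant_ids, k=5))
```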
To demonstrate the effectiveness of our approach, we take two other approaches
as the counterpart: a conventional method (Histogram of Oriented Gradients
[DT05], denoted as HOG from hereon), and a deep-learning-based method
(ResNet50, denoted as CNN from hereon). The quantitative results are displayed
in Table 3. The results reveal that our method significantly outperforms the
others. Furthermore, as $K$ increases, our method maintains its performance
superiority while the other methods degrade dramatically. For the evaluation
of user-intent retrieval, as there are no ground-truth labels to compute
quantitative results, we visually compare the results generated by the
different methods for several typical queries. As shown in Figure 9, CNN
outputs visually more similar results than HOG. Our method also produces
similar results without user intent. Nevertheless, our method allows users to
add their intent by selecting the disentangled primary attributes
$\mathcal{T}_{p}$ or adding extra intent, which contributes to retrieving
results of interest, as shown in the last two rows.
### 5.2 Case Study
During the process of visualization design, many inspirations may arise, but
bringing those inspirations to fruition can be time-consuming. Retrieval is a
fast way to validate those inspirations. Inspired by [BLBL22], two usage
scenarios of chart retrieval are illustrated as a proof-of-concept of the
WYTIWYR framework: 1) extending the design space by explicit goals and 2)
fuzzy retrieval based on implicit user intent. For each scenario, we compare
the results generated by our intent-aware chart retrieval technique with those
generated by an intent-free chart retrieval technique.
#### 5.2.1 Design Space Extension by Explicit Goals
With the prototype system, users can customize retrieval inputs based on
disentangled attributes of the query chart and the user intent prompt.
Moreover, beyond the four primary attribute classifiers, users can further add
customized classifiers to extract extended attributes. In this scenario, we
divide the disentangled attribute operations into three categories: original
attribute change, new attribute addition, and existing attribute deletion. In
the end, we show an interesting case of attribute transfer that combines these
operations. Figure 10 shows four cases of our study. In general, retrieval
without user intent provides results similar to the input but gives minimal
design insight. Instead, with the guidance of user intent, our framework
retrieves more diverse and well-received results. The following lists
more details for each case.
Figure 10: Four cases for extending the design space by explicit attributes.
In each case, the first line presents results without user intent, while the
second line presents results with user intent.
Original Attribute Change. The disentangled attributes are independent of each
other; thus, users can replace one of the attributes while keeping the others
unchanged. In Figure 10 (a), the color of the choropleth map is changed to
the user prompt “pink”.
New Attribute Addition. Users can add new attributes, together with the
disentangled attributes, to better describe their specific needs. In Figure 10
(b), the user adds “with a curve to indicate the distribution”, and the results
change to combinations of histograms and line charts.
Existing Attribute Deletion. As the attributes can be disentangled according to
users’ needs, users can remove attributes of no interest. In Figure 10 (c), the
user dislikes the dark background and can discard this attribute by adding a new
classifier with the labels [“dark background”, “white background”].
Attribute Transfer. As in Figure 10 (d), given a circular packing chart as a
query chart, the user gets the attribute annotation as $\\{$_Type: “Circular
Packing Chart”, Color: “Sequential Colormap”_ $\\}$. The user would like to
get heatmaps with the same form as the query chart but with higher-contrast
colors. S/he can get the desired results by changing the Type and Color
attributes to $\\{$_Type: “Heatmap”, Color: “Diverging Colormap”_ $\\}$ in the
Retrieval stage.
#### 5.2.2 Fuzzy Retrieval by User Intent
Designing an expressive visualization has a steep learning curve for some
novices, who may not have a specific design prior and prefer to perform trial
and error on a search engine to seek a desirable design. Below, we list
three scenarios to illustrate how our method supports such retrieval.
Similarly, we will compare the results of the intent-free and intent-aware
retrieval.
Figure 11: Three cases for showing fuzzy retrieval. Negative examples are
highlighted with a red border.
Text Information Seeking. Keyword-based searching is popular, but existing
works approach this goal by storing text information in advance [HA19, CCA15]
or relying on OCR [SKC∗11], hindering the generalizability and robustness of
these works. Instead, our method recognizes text information based on
knowledge transferred from the pretrained CLIP model. As shown in Figure 11 (a),
to find a chart visualizing “life expectancy”, the user keeps the original
chart type (scatter plot) and data trend (increasing trend), and adds the
user intent as the text prompt “life expectancy”. The intent-aware retrieval
results imitate the famous Gapminder design.
Relevant Topic Finding. Our method also has the capacity to find chart
examples related to a specific topic. Previous work used word2vec [Chu17] to
model the distances between words [HA19], limiting the search scope to texts.
We enhance generalizability by matching the text prompt against both text
information and visual attributes. In Figure 11 (b), given a word cloud with
the theme of nature, the user can retrieve some examples with text prompt as
“about technology topic”. Our system returns examples with content about
technology. Moreover, with the text prompt of “all words form a shape of a
tree”, our system aligns the semantic information of text with visual
attributes, returning examples with tree shapes instead of only containing the
word “tree”.
Abstract Description Searching. In the process of retrieval, users not having
a clear search target tend to use ambiguous and abstract descriptions. In
Figure 11 (c), the user inputs a basic timeline diagram with “fancy style”,
and gets several well-designed examples. To constrain the results to a vertical
layout, the user adds this attribute and performs a targeted retrieval.
### 5.3 Qualitative Evaluation
#### 5.3.1 Study Design
Our study recruited seven participants (three women, four men) aged 22 to
26. To demonstrate that our framework is useful for users of different levels,
we enrolled four experts, i.e., visualization designers (E1 - E4), and three
novices whose only experience with visualizations was using them in reports or
courses (N1 - N3).
Before the experiment, we collected basic information from the participants
through a questionnaire and introduced them to the use scenarios of chart
retrieval. Then we instructed them to familiarize themselves with the system
workflow. We also showed some examples of using prompts to help participants
understand prompts better and choose them more effectively. After this
introduction, participants were allowed to freely explore and use our system.
They could upload a query chart and try to get a satisfactory design over
multiple iterations. If there was no satisfactory design, they could also try
combinations of attributes and prompts to see whether the retrieved charts
yielded inspiring results. Throughout the process, participants were encouraged
to think aloud and give feedback whenever they wanted. Once they felt they had
explored enough, we conducted interviews about the usability of our system. Three
questions were included: 1) Whether our approach is _effective_ in helping
them find the chart that meets their intents; 2) Whether our approach is
_efficient_ enough to avoid long waits; 3) Whether they are _satisfied_ with
our system. The experiment lasted about 45 minutes. The participants were not
compensated with money but received beverages worth about $5 after the study.
#### 5.3.2 Feedback
Effectiveness. In general, all users agreed that our system effectively
supported retrieving the desired charts based on their intents. Most
participants (6 out of 7) thought that our setting of the primary attributes
was appropriate and comprehensive. E2 suggested extracting the main hue used
in the chart in the Annotation stage. Regarding the comprehensiveness and
accuracy of the prompts, all participants felt that their search needs were
largely met and the results were generally consistent with their intents. N1
spent a long time exploring the map shown in Figure 10 (a). He found it
exciting that when he typed “India”, a map of India was retrieved. Our system
also effectively recognized his intents when he searched for maps of various
colors.
The participants pointed out some limitations as well. When the prompt given
by the user had multiple meanings, sometimes only one or two of the search
results were exactly what the user expected. E4 tried to add a trend arrow to
the bar chart. However, when he typed only “with arrow”, some bar charts using
arrow icons were also retrieved. He said that it sometimes took several
prompts to get the desired result. E3 encountered a similar situation, which
might be because our dataset contains only a limited number of charts matching
her specifications. Besides, participants sometimes wanted to adjust attributes
in the query chart, such as changing the white background to dark (N2) or
changing the colorful design to black and white (E2). However,
one or two of the returned results still had the features that they would like
to change. “There are times when I want the query to focus more on the prompt
I give and less on the original chart,” said N2.
Efficiency and Satisfaction. All participants agreed that our system was
responsive and user friendly. They were satisfied with our retrieval
framework and pointed out some possible improvements. They found it
helpful to decouple a chart into attributes and have a selective query. N2
stated, “It saved me the time of checking a lot of examples and found the
desired design directly”. N3 appreciated the function of adding a new
classifier since it can be used to explore new types of visualizations. “It is
convenient for novices who do not understand the type and content of the chart
at first,” he gave the reason. E1, a participant with a design background,
said our results were inspiring. She was willing to see more results, even if
they did not exactly match the search intents. “This could inspire me to come
up with new ideas,” she added.
## 6 Discussion
WYTIWYR is an intent-aware chart query framework that can address various
issues related to traditional chart retrieval and expedite the design process.
In conventional retrieval, not all chart attributes may be relevant to users’
intents, whilst some preferred attributes may even be absent from the chart.
Instead, our system disentangles and combines attributes to ensure that the
retrieved results encompass all attributes of the user’s intent. There are also
several limitations in our current methods and avenues for future work.
### 6.1 Limitations
User Intent or Query Chart. The input of our method consists of a query chart
and user intent, and the interplay of the two factors affects the retrieval
results. As shown in Figure 10 (b), when the user has a strong intent for
charts that contain a curve, our system tends to return results meeting that
intent. However, the perception of the global distribution may be lost. The
priority between the user intent expressed as text and the global distribution
of the input chart is hard to set because it depends on the usage scenario.
Furthermore, within the user intent, the weights of the selected attributes and
text prompts are hard to set in the similarity modeling. A dynamic adjustment
of the priority between the user intent and the query chart is needed in this
scenario.
Prompt Sensitivity. The text prompt is integrated with the query chart as a
joint input for retrieval in our work. However, designing effective prompts is
challenging due to the significant impact of even slight wording changes on
performance [ZYLL22]. In our framework, prompt design is limited by the
pretrained CLIP model, which is trained mainly on natural images and thus has
difficulty accurately interpreting expert expressions from the visualization
domain. For instance, Figure 12 shows that using a more professional prompt
(third row) yields negative results compared to the second row with identical
intent. The CLIP model also struggles to identify rare or novel objects.
Approaches for fine-tuning the CLIP model [LLXH22, LSG∗21, ZZF∗22] or using
advanced language-image models are promising. Obtaining enough
chart-description pairs for training is also necessary, but the process is
time-consuming and resource-intensive.
Dataset. Despite collecting a large number of real-world charts, our database
remains limited in meeting vast user needs. In Figure 11 (b), only the first
three charts are tree-shaped since we only have these charts that meet the
user’s intent in our database. The unbalanced dataset may adversely affect
retrieval performance, as the attribute classifier training could become
biased towards classes with larger volumes. We alleviate this by using the
focal loss, which assigns higher weights to minority classes and misclassified
examples. Due to the lack of more detailed categories in our dataset, some
inaccurate results occur in the retrieval. In Figure 12, two types of
heatmaps are visible in the retrieval results.
Figure 12: A case for revealing the limitations. Negative examples are
highlighted with a red border.
Chart Attributes. In the preliminary study, we employed several chart types to
determine the chart attributes that users tend to concentrate on. We randomly
chose samples from both synthesized datasets and real-world collections to
diversify the chart attributes. Nevertheless, the attributes may not fully
cover user intents. A more comprehensive survey will be conducted to address
the issue.
### 6.2 Future Work
Trade-off Control. To balance between user intent and query chart, we plan to
add a slider equipped with dynamic weight control to let the user manage this
trade-off by themselves in the near future.
Text Prompt Auto-completion. Previous works [SHKC20, WHS∗22] utilize
auto-completion as a hint to assist users during use, which would also help
our system alleviate the interpretation gap between the user side and the
model side. Specifically, we aim to build a mapping table that stores the
relationship between appropriate prompts and common user text expressions.
Then, when a user completes the input, keywords will be extracted from it by a
named entity recognition technique [NS07]. Finally, the corresponding optimal
prompt will be automatically completed by searching the mapping table.
Text-only retrieval. Our system currently supports chart-only retrieval and
chart-text retrieval. To offer a more generalized retrieval framework, we plan
to add a text-only query, allowing users to retrieve charts by providing only
text input. Text-only retrieval benefits users who do not have any reference
chart. To this end, we aim to prepare in advance several basic charts with
primary attributes to serve as temporary query charts, which can later be
replaced by a more reliable chart from the returned results. The process can
be regarded as an iterative refinement toward desirable results, with a
progressively clearer search goal and a narrowed search scope.
## 7 Conclusion
In this paper, we propose a user intent-aware chart retrieval framework, which
leverages multi-modal input to fuse explicit visual attributes and implicit
user intent into the retrieval process. This pipeline consists of two core
stages, namely Annotation, and Retrieval. The Annotation stage disentangles
visual attributes in the query chart to ensure a flexible combination of user
intent attributes used in retrieval. The Retrieval stage allows users to
integrate the text prompt with the query chart to achieve more customized
retrieval. Quantitative experiments demonstrate the superior performance of our
method compared with previous methods. Furthermore, we conduct two case
studies covering two common retrieval strategies, along with interviews, to
demonstrate the effectiveness and usability of the system. As an initial step
toward fusing user prompts into chart retrieval, we hope to enhance the prompt
capacity to better meet users’ growing needs and various usage scenarios. The
dataset, code, and pretrained models have been released to promote future
research in this direction.
## 8 Acknowledgments
The authors wish to thank anonymous reviewers for their constructive comments.
The work was supported in part by National Natural Science Foundation of China
(62172398), and the Red Bird Program at the Hong Kong University of Science
and Technology (Guangzhou).
## References
* [BBK∗18] Behrisch M., Blumenschein M., Kim N. W., Shao L., El-Assady M., Fuchs J., Seebacher D., Diehl A., Brandes U., Pfister H., Schreck T., Weiskopf D., Keim D. A.: Quality metrics for information visualization. _Comput. Graph. Forum 37_ , 3 (2018), 625–662.
* [BDM∗18] Battle L., Duan P., Miranda Z., Mukusheva D., Chang R., Stonebraker M.: Beagle: Automated extraction and interpretation of visualizations from the web. In _Proc. ACM CHI_ (2018), pp. 594:1–8.
* [BFW21] Battle L., Feng D., Webber K.: Exploring visualization implementation challenges faced by D3 users online. _arXiv preprint arXiv:2108.02299_ (2021).
* [BLBL22] Bako H. K., Liu X., Battle L., Liu Z.: Understanding how designers find and use data visualization examples. _IEEE Trans. Vis. Comput. Graph._ (2022).
* [BMR∗20] Brown T., Mann B., Ryder N., Subbiah M., Kaplan J. D., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., et al.: Language models are few-shot learners. In _Proc. NIPS_ (2020), pp. 1877–1901.
* [BVB∗13] Borkin M. A., Vo A. A., Bylinskii Z., Isola P., Sunkavalli S., Oliva A., Pfister H.: What makes a visualization memorable? _IEEE Trans. Vis. Comput. Graph. 19_ , 12 (2013), 2306–2315.
* [CCA15] Chen Z., Cafarella M., Adar E.: DiagramFlyer: A search engine for data-driven diagrams. In _Proc. WWW_ (2015), pp. 183–186.
* [Chu17] Church K. W.: Word2vec. _Natural Language Engineering 23_ , 1 (2017), 155–162.
* [DCLT18] Devlin J., Chang M.-W., Lee K., Toutanova K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proc. NAACL_ (2018), pp. 4171–4186.
* [DT05] Dalal N., Triggs B.: Histograms of oriented gradients for human detection. In _Proc. CVPR_ (2005), pp. 886–893.
* [DW14] Dang T. N., Wilkinson L.: Scagexplorer: Exploring scatterplots by their scagnostics. In _Proc. IEEE PacificVis_ (2014), IEEE, pp. 73–80.
* [GKSS∗19] Goyal Y., Khot T., Summers-Stay D., Batra D., Parikh D.: Making the V in VQA matter: Elevating the role of image understanding in visual question answering. _Int. J. Comput. Vis._ (2019), 398–414.
* [GLKC22] Gu X., Lin T.-Y., Kuo W., Cui Y.: Open-vocabulary object detection via vision and language knowledge distillation. In _Proc. ICLR_ (2022).
* [GPM∗21] Gal R., Patashnik O., Maron H., Chechik G., Cohen-Or D.: StyleGAN-NADA: CLIP-guided domain adaptation of image generators. _ACM Trans. Graph. 41_ (2021), 141:1–13.
* [HA19] Hoque E., Agrawala M.: Searching the visual style and structure of D3 visualizations. _IEEE Trans. Vis. Comput. Graph. 26_ , 1 (2019), 1236–1245.
* [HBL∗19] Hu K., Bakker M. A., Li S., Kraska T., Hidalgo C.: VizML: A machine learning approach to visualization recommendation. In _Proc. ACM CHI_ (2019), pp. 128: 1–12.
* [HMSA08] Heer J., Mackinlay J., Stolte C., Agrawala M.: Graphical histories for visualization: Supporting analysis, communication, and evaluation. _IEEE Trans. Vis. Comput. Graph. 14_ , 6 (2008), 1189–1196.
* [HZRS16] He K., Zhang X., Ren S., Sun J.: Deep residual learning for image recognition. In _Proc. CVPR_ (2016), pp. 770–778.
* [JKS∗17] Jung D., Kim W., Song H., Hwang J.-i., Lee B., Kim B., Seo J.: ChartSense: Interactive data extraction from chart images. In _Proc. ACM CHI_ (2017), pp. 6706–6717.
* [KRS∗21] Kim H., Rossi R., Sarma A., Moritz D., Hullman J.: An automated approach to reasoning about task-oriented insights in responsive visualization. _IEEE Trans. Vis. Comput. Graph. 28_ , 1 (2021), 129–139.
* [LCF∗15] Li Z., Carberry S., Fang H., McCoy K. F., Peterson K., Stagitis M.: A novel methodology for retrieving infographics utilizing structure and message content. _DKE 100_ (2015), 191–210.
* [LGG∗17] Lin T.-Y., Goyal P., Girshick R., He K., Dollár P.: Focal loss for dense object detection. In _Proc. ICCV_ (2017), pp. 2980–2988.
* [LGR∗20] Lu J., Goswami V., Rohrbach M., Parikh D., Lee S.: 12-in-1: Multi-task vision and language representation learning. In _Proc. ICML_ (2020), pp. 10437–10446.
* [LLXH22] Li J., Li D., Xiong C., Hoi S.: Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _Proc. ICML_ (2022), pp. 12888–12900.
* [LOG∗19] Liu Y., Ott M., Goyal N., Du J., Joshi M., Chen D., Levy O., Lewis M., Zettlemoyer L., Stoyanov V.: RoBERTa: A robustly optimized BERT pretraining approach. In _Proc. ICLR_ (2019).
* [LSG∗21] Li J., Selvaraju R., Gotmare A., Joty S., Xiong C., Hoi S. C. H.: Align before fuse: Vision and language representation learning with momentum distillation. In _Proc. NIPS_ (2021), pp. 9694–9705.
* [LWW∗22] Li H., Wang Y., Wu A., Wei H., Qu H.: Structure-aware visualization retrieval. In _Proc. ACM CHI_ (2022), pp. 409: 1–14.
* [MMA18] Majooni A., Masood M., Akhavan A.: An eye-tracking study on the effect of infographic structures on viewer’s comprehension and cognitive load. _Inf. Vis. 17_ , 3 (2018), 257–266.
* [MTW∗12] Moere A. V., Tomitsch M., Wimmer C., Christoph B., Grechenig T.: Evaluating the effect of style in information visualization. _IEEE Trans. Vis. Comput. Graph. 18_ , 12 (2012), 2739–2748.
* [MTW∗18] Ma Y., Tung A. K., Wang W., Gao X., Pan Z., Chen W.: ScatterNet: A deep subjective similarity model for visual analysis of scatterplots. _IEEE Trans. Vis. Comput. Graph. 26_ , 3 (2018), 1562–1576.
* [NS07] Nadeau D., Sekine S.: A survey of named entity recognition and classification. _Lingvisticae Investigationes 30_ , 1 (2007), 3–26.
* [PMH17] Poco J., Mayhua A., Heer J.: Extracting and retargeting color mappings from bitmap images of visualizations. _IEEE Trans. Vis. Comput. Graph. 24_ , 1 (2017), 637–646.
* [PSP21] Parsons P., Shukla P., Park C.: Fixation and creativity in data visualization design: Experiences and perspectives of practitioners. In _Proc. IEEE VIS_ (2021), pp. 76–80.
* [PWS∗21] Patashnik O., Wu Z., Shechtman E., Cohen-Or D., Lischinski D.: StyleCLIP: Text-driven manipulation of styleGAN imagery. In _Proc. ICCV_ (2021), pp. 2085–2094.
* [QDH∗22] Qin X., Dai H., Hu X., Fan D.-P., Shao L., Van Gool L.: Highly accurate dichotomous image segmentation. In _Proc. ECCV_ (2022), pp. 38–56.
* [RKH∗21] Radford A., Kim J. W., Hallacy C., Ramesh A., Goh G., Agarwal S., Sastry G., Askell A., Mishkin P., Clark J., et al.: Learning transferable visual models from natural language supervision. In _Proc. ICML_ (2021), pp. 8748–8763.
* [RNS∗18] Radford A., Narasimhan K., Salimans T., Sutskever I., et al.: _Improving language understanding by generative pre-training_. Tech. rep., 2018.
* [RTOT06] Rodrigues J. F., Traina A. J. M., Oliveira M. C. F. d., Traina C.: Reviewing data visualization: an analytical taxonomical study. In _Proc. ICCV_ (2006), pp. 713–720.
* [RWC∗19] Radford A., Wu J., Child R., Luan D., Amodei D., Sutskever I., et al.: Language models are unsupervised multitask learners. _OpenAI blog 1_ , 8 (2019), 9.
* [SGP∗18] Stitz H., Gratzl S., Piringer H., Zichner T., Streit M.: Knowledgepearls: Provenance-based visualization retrieval. _IEEE Trans. Vis. Comput. Graph. 25_ , 1 (2018), 120–130.
* [SHKC20] Setlur V., Hoque E., Kim D. H., Chang A. X.: Sneak pique: Exploring autocompletion as a data discovery scaffold for supporting visual analysis. In _Proc. ACM UIST_ (2020), pp. 966–978.
* [SHL∗16] Siegel N., Horvitz Z., Levin R., Divvala S., Farhadi A.: Figureseer: Parsing result-figures in research papers. In _Proc. ECCV_ (2016), Springer, pp. 664–680.
* [Shn96] Shneiderman B.: The eyes have it: a task by data type taxonomy for information visualizations. In _Proc. IEEE Symp. Vis. Lang._ (1996), pp. 336 – 343.
* [SKC∗11] Savva M., Kong N., Chhajta A., Fei-Fei L., Agrawala M., Heer J.: ReVision: Automated classification, analysis and redesign of chart images. In _Proc. ACM UIST_ (2011), pp. 393–402.
* [SLC∗23] Shi Y., Liu P., Chen S., Sun M., Cao N.: Supporting expressive and faithful pictorial visualization design with visual style transfer. _IEEE Trans. Vis. Comput. Graph. 29_ (2023), 236 – 246.
* [SSK06] Schneidewind J., Sips M., Keim D. A.: Pixnostics: Towards measuring the value of visualization. In _Proc. IEEE VAST_ (2006), pp. 199–206.
* [SWS∗22] Strobelt H., Webson A., Sanh V., Hoover B., Beyer J., Pfister H., Rush A. M.: Interactive and visual prompt engineering for ad-hoc task adaptation with large language models. _IEEE Trans. Vis. Comput. Graph._ (2022).
* [TM04] Tory M., Möller T.: Rethinking visualization: A high-level taxonomy. In _Proc. IEEE InfoVis_ (2004), pp. 151–158.
* [WHS∗22] Wang Y., Hou Z., Shen L., Wu T., Wang J., Huang H., Zhang H., Zhang D.: Towards natural language-based visualization authoring. _IEEE Trans. Vis. Comput. Graph._ (2022).
* [XBK∗15] Xu K., Ba J., Kiros R., Cho K., Courville A., Salakhudinov R., Zemel R., Bengio Y.: Show, attend and tell: Neural image caption generation with visual attention. In _Proc. ICML_ (2015), pp. 2048–2057.
* [YHZ23] Ye Y., Huang R., Zeng W.: VISAtlas: An image-based exploration and query system for large visualization collections via neural image embedding. _IEEE Trans. Vis. Comput. Graph._ (2023), 1–15.
* [YZF∗22] Yuan L., Zeng W., Fu S., Zeng Z., Li H., Fu C. W., Qu H.: Deep colormap extraction from visualizations. _IEEE Trans. Vis. Comput. Graph. 28_ , 12 (2022), 4048 – 4060.
* [ZDCC21] Zeng W., Dong A., Chen X., Cheng Z.-l.: VIStory: interactive storyboard for exploring visual information in scientific publications. _J. Vis. 24_ , 1 (2021), 69–84.
* [ZFC∗23] Zhang T., Feng H., Chen W., Chen Z., Zheng W., Luo X.-N., Huang W., Tung A. K.: ChartNavigator: an interactive pattern identification and annotation framework for charts. _IEEE Trans. Knowl. Data Eng. 35_ , 2 (2023), 1258 – 1269.
* [ZFF22] Zhao J., Fan M., Feng M.: ChartSeer: Interactive steering exploratory visual analysis with machine intelligence. _IEEE Trans. Vis. Comput. Graph. 28_ , 3 (2022), 1500–1513.
* [ZYLL22] Zhou K., Yang J., Loy C. C., Liu Z.: Learning to prompt for vision-language models. _Int. J. Comput. Vis. 130_ , 9 (2022), 2337–2348.
* [ZZF∗22] Zhang R., Zhang W., Fang R., Gao P., Li K., Dai J., Qiao Y., Li H.: Tip-adapter: Training-free adaption of CLIP for few-shot classification. In _Proc. ECCV_ (2022), pp. 493–510.
# A Batch Sequential Halving Algorithm without
Performance Degradation
Sotetsu Koyamada (ATR, Kyoto University) <EMAIL_ADDRESS>
Soichiro Nishimori (The University of Tokyo) <EMAIL_ADDRESS>
Shin Ishii (Kyoto University, ATR) <EMAIL_ADDRESS>
###### Abstract
In this paper, we investigate the problem of pure exploration in the context
of multi-armed bandits, with a specific focus on scenarios where arms are
pulled in fixed-size batches. Batching has been shown to enhance computational
efficiency, but it can potentially lead to a degradation in performance
compared to the original sequential algorithm due to delayed feedback and
reduced adaptability. We introduce a simple batch version of the Sequential
Halving (SH) algorithm (Karnin et al., 2013) and provide theoretical evidence
that batching does not degrade the performance of the original algorithm under
practical conditions. Furthermore, we empirically validate our claim through
experiments, demonstrating the robust nature of the SH algorithm in fixed-size
batch settings.
## 1 Introduction
In this study, we consider the pure exploration problem in the field of
stochastic multi-armed bandits, which aims to identify the best arm within a
given budget (Audibert et al., 2010). Specifically, we concentrate on the
_fixed-size batch pulls_ setting, where we pull a fixed number of arms
simultaneously. Batch computation plays a crucial role in improving
computational efficiency, especially in large-scale bandit applications where
reward computation can be expensive. For instance, consider applying this to
tree search algorithms like Monte Carlo tree search (Tolpin & Shimony, 2012).
The reward computation here typically involves the value network evaluation
(Silver et al., 2016; 2017), which can be computationally expensive. By
leveraging batch computation and hardware accelerators (e.g., GPUs), we can
significantly reduce the computational cost of the reward computation.
However, while batch computation enhances computational efficiency, its
performance (e.g., simple regret) may not match that of sequential computation
with the same total budget, due to delayed feedback reducing adaptability.
Therefore, the objective of this study is to develop a pure exploration
algorithm that maintains its performance regardless of the batch size.
We focus on the _Sequential Halving_ (SH) algorithm (Karnin et al., 2013), a
popular and well-analyzed pure exploration algorithm. Due to its simplicity,
efficiency, and lack of task-dependent hyperparameters, SH finds practical
applications in, but not limited to, hyperparameter tuning (Jamieson &
Talwalkar, 2016), recommendation systems (Aziz et al., 2022), and state-of-
the-art AlphaZero (Silver et al., 2018) and MuZero (Schrittwieser et al.,
2020) family (Danihelka et al., 2022). In this study, we aim to extend SH to a
batched version that matches the original SH algorithm’s performance, even
with large batch sizes. To date, Jun et al. (2016) introduced a simple batched
extension of SH and reported that it performed well in their experiments.
However, the theoretical properties of batched SH have not yet been well-
studied in the setting of fixed-size batch pulls.
We consider two simple and natural batched variants of SH (Sec. 3): _Breadth-
first Sequential Halving_ (BSH) and _Advance-first Sequential Halving_ (ASH).
We introduce BSH as an intermediate step to understanding ASH, which is our
main focus. Our main contribution is providing a theoretical guarantee for ASH
(Sec. 4), showing that _it is algorithmically equivalent to SH as long as the
batch budget is not extremely small_ — For example, in a 32-armed stochastic
bandit problem, ASH can match SH’s choice with 100K sequential pulls using
just 20 batch pulls, each of size 5K. This means that ASH can achieve the same
performance as SH with significantly fewer pulls when the batch size is
reasonably large. Moreover, one can understand the theoretical properties of
ASH using the theoretical properties of SH, which have been well-studied
(Karnin et al., 2013; Zhao et al., 2023). In our experiments, we validate our
claim by comparing the behavior of ASH and SH (Sec. 5.1) and analyze the
behavior of ASH with the extremely small batch budget as well (Sec. 5.2).
## 2 Preliminary
##### Pure Exploration Problem.
Consider a pure exploration problem involving $n$ arms and a budget $T$. We
define a reward matrix $\mathcal{R}\in[0,1]^{n\times T}$, where each element
$\mathcal{R}_{i,j}\in[0,1]$ represents the reward of the $j$-th pull of arm
$i\in[n]\coloneqq\\{1,\ldots,n\\}$, with $j$ being counted independently for
each arm. Each element in the $i$-th row is an i.i.d. sample from an unknown
reward distribution of the $i$-th arm with mean $\mu_{i}$. Without loss of
generality, we assume that $1\geq\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n}\geq
0$. In the standard sequential setting, a pure exploration algorithm
sequentially observes $T$ elements from $\mathcal{R}$ by pulling arms one by
one for $T$ times. The algorithm then selects one arm as the best arm
candidate. Note that we only consider deterministic pure exploration
algorithms in this study. Such an algorithm can be characterized by a mapping
$\pi:[0,1]^{n\times T}\to[n]$ that takes $\mathcal{R}$ as input and outputs
the selected arm $a_{T}$. The natural performance measure in pure exploration
is the _simple regret_ , defined as
$\mathbb{E}_{\mathcal{R}}[\mu_{1}-\mu_{a_{T}}]$ (Bubeck et al., 2009), which
compares the performance of the selected arm $a_{T}$ with the best arm $1$.
Sequential Halving (SH; Karnin et al. (2013)) is a sequential elimination
algorithm designed for the pure exploration problem. It begins by initializing
the set of best arm candidates as $\mathcal{S}_{0}\coloneqq[n]$. In each of
the $\lceil\log_{2}n\rceil$ rounds, the algorithm halves the set of candidates
(i.e., $|\mathcal{S}_{r+1}|=\left\lceil|\mathcal{S}_{r}|/2\right\rceil$) until
it narrows down the candidates to a single arm in
$\mathcal{S}_{\lceil\log_{2}n\rceil}$. During each round
$r\in\\{0,\ldots,\lceil\log_{2}n\rceil-1\\}$, the arms in the active arm set
$\mathcal{S}_{r}$ are pulled equally
$J_{r}\coloneqq\bigl{\lfloor}\frac{T}{|\mathcal{S}_{r}|\lceil\log_{2}n\rceil}\bigr{\rfloor}$
times, and the total budget consumed for round $r$ is $T_{r}\coloneqq
J_{r}\times|\mathcal{S}_{r}|$. The SH algorithm is described in Algorithm 1.
We denote the mapping induced by the SH algorithm as $\pi_{\text{SH}}$. It has
been shown that the simple regret of SH satisfies
$\mathbb{E}_{\mathcal{R}}[\mu_{1}-\mu_{a_{T}}]\leq\tilde{\mathcal{O}}(\sqrt{n/T})$,
where $\tilde{\mathcal{O}}(\cdot)$ ignores the logarithmic factors of $n$
(Zhao et al., 2023). Note that the consumed budget
$\sum_{r<\lceil\log_{2}n\rceil}T_{r}$ might be less than $T$. In this study,
we assume that the remaining budget is consumed equally by the last two arms
in the final round.
Algorithm 1 SH: Sequential Halving (Karnin et al., 2013)
1:input number of arms: $n$, budget: $T$
2:initialize best arm candidates $\mathcal{S}_{0}\coloneqq[n]$
3:for round $r=0,\ldots,\lceil\log_{2}n\rceil-1$ do
4: pull each arm $a\in\mathcal{S}_{r}$ for
$J_{r}=\left\lfloor\frac{T}{|\mathcal{S}_{r}|\lceil\log_{2}n\rceil}\right\rfloor$
times
5: $\mathcal{S}_{r+1}\leftarrow\textrm{top-}\lceil|\mathcal{S}_{r}|/2\rceil$
arms in $\mathcal{S}_{r}$ w.r.t. the empirical rewards
6:return the only arm in $\mathcal{S}_{\lceil\log_{2}n\rceil}$
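For reference, a minimal Python rendering of Algorithm 1 is sketched below. The `pull(arm)` callback returning a stochastic reward is a placeholder for the environment, and the leftover-budget handling for the last two arms mentioned in the text is omitted.

```python
import math
import numpy as np

def sequential_halving(n: int, T: int, pull) -> int:
    """Algorithm 1: return the index of the single surviving arm.

    pull(a) must return one stochastic reward in [0, 1] for arm a.
    """
    active = list(range(n))
    total = np.zeros(n)   # cumulative reward per arm
    pulls = np.zeros(n)   # number of pulls per arm
    rounds = math.ceil(math.log2(n))
    for _ in range(rounds):
        J = T // (len(active) * rounds)   # pulls per active arm this round
        for a in active:
            for _ in range(J):
                total[a] += pull(a)
                pulls[a] += 1
        # Keep the top half of the active arms w.r.t. empirical means.
        means = {a: total[a] / max(pulls[a], 1) for a in active}
        active.sort(key=lambda a: means[a], reverse=True)
        active = active[: math.ceil(len(active) / 2)]
    return active[0]

# Example with Bernoulli arms:
# mus = np.linspace(0.9, 0.1, 8)
# best = sequential_halving(8, 8000, lambda a: float(np.random.rand() < mus[a]))
```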
## 3 Batch Sequential Halving Algorithms
In this study, we consider the fixed-size batch pulls setting, where we
simultaneously pull $b$ arms for $B$ times, with $b$ being the fixed batch
size and $B$ being the batch budget (Jun et al., 2016). The standard
sequential case corresponds to $b=1$ and $B=T$. Our interest is to compare the
performance of the batch SH algorithms with a large batch size $b$ and a small
batch budget $B$ to that of the standard SH algorithm when pulling
sequentially $T$ times. Therefore, we compare the performance of the batch SH
algorithms under the assumption that $T=b\times B$ holds, so that the total
budget is the same in both the sequential and batch settings. In this section,
we first reconstruct the SH algorithm so that it can be easily extended to the
batched setting (Sec. 3.1). Then, we consider _Breadth-first Sequential
Halving_ (BSH), one of the simplest batched extensions of SH, as an
intermediate step (Sec. 3.2). Finally, we introduce _Advance-first Sequential
Halving_ (ASH) as a further extension (Sec. 3.3).
### 3.1 SH implementation with target pulls
Algorithm 2 SH (Karnin et al., 2013) implementation with target pulls $L^{\mathrm{B}}$/$L^{\mathrm{A}}$
1:input number of arms: $n$, budget: $T$
2:initialize empirical mean $\bar{\mu}_{a}\coloneqq 0$ and arm pulls $N_{a}\coloneqq 0$ for all $a\in[n]$
3:for $t=0,\ldots,T-1$ do
4: let $\mathcal{A}_{t}$ be $\{a\in[n]\mid N_{a}=L_{t}\}$ $\triangleright$ $L_{t}$ is either $L_{t}^{\mathrm{B}}$ (2) or $L_{t}^{\mathrm{A}}$ (3)
5: pull arm $a_{t}\coloneqq\text{argmax}_{a\in\mathcal{A}_{t}}\bar{\mu}_{a}$
6: update $\bar{\mu}_{a_{t}}$ and $N_{a_{t}}\leftarrow N_{a_{t}}+1$
7:return $\text{argmax}_{a\in[n]}(N_{a},\bar{\mu}_{a})$
Algorithm 3 _Breadth-first_ target pulls $L^{\mathrm{B}}$
1:input number of arms: $n$, budget: $T$
2:initialize empty $L^{\mathrm{B}}$, $K\coloneqq n$, $J\coloneqq 0$
3:for $r=0,\ldots\lceil\log_{2}n\rceil-1$ do
4: for $\vartriangleright$ $j=0,\ldots,J_{r}-1$ do
5: for $\blacktriangleright$ $k=0,\ldots,K-1$ do
6: append $J+j$ to $L^{\mathrm{B}}$
7: $K\leftarrow\lceil K/2\rceil$ and $J\leftarrow J+J_{r}$
8:return $L^{\mathrm{B}}$ $\triangleright$ (0,0,0,...)
Algorithm 4 _Advance-first_ target pulls $L^{\mathrm{A}}$
1:input number of arms: $n$, budget: $T$
2:initialize empty $L^{\mathrm{A}}$, $K\coloneqq n$, $J\coloneqq 0$
3:for $r=0,\ldots\lceil\log_{2}n\rceil-1$ do
4: for $\blacktriangleright$ $k=0,\ldots,K-1$ do
5: for $\vartriangleright$ $j=0,\ldots,J_{r}-1$ do
6: append $J+j$ to $L^{\mathrm{A}}$
7: $K\leftarrow\lceil K/2\rceil$ and $J\leftarrow J+J_{r}$
8:return $L^{\mathrm{A}}$ $\triangleright$ (0,1,2,...)
Since BSH/ASH is a natural batched extension of SH, we first reconstruct the
implementation of the SH algorithm as Algorithm 2 so that it can be easily
extended to BSH/ASH. Note that, in this study, the operation
$\text{argmax}_{x\in\mathcal{X}}(\ell_{x},m_{x})$ selects the element
$x\in\mathcal{X}$ that maximizes $\ell_{x}$ first. If multiple elements
achieve this maximum, it then selects among these the one that maximizes
$m_{x}$. At the $t$-th arm pull, SH selects the arm $a_{t}$ that has the
highest empirical reward $\bar{\mu}_{a}$ among the candidates
$\mathcal{A}_{t}$:
$\displaystyle
a_{t}\coloneqq\text{argmax}_{a\in\mathcal{A}_{t}}\bar{\mu}_{a},$ (1)
where $\mathcal{A}_{t}\coloneqq\\{a\in[n]\mid N_{a}=L_{t}\\}$ are the
candidates at the $t$-th arm pull, $N_{a}$ is the total number of pulls of arm
$a$, and $L_{t}$ is the number of _target pulls_ at $t$, defined as either
breadth-first manner
$L_{t}^{\text{\color[rgb]{0,0,0.80}{{B}}}}\coloneqq\underbrace{\sum_{r^{\prime}<r(t)}J_{r^{\prime}}}_{\text{\scriptsize
pulls before
}r(t)}+\underbrace{\left\lfloor\frac{t-\sum_{r^{\prime}<r(t)}T_{r^{\prime}}}{|\mathcal{S}_{r(t)}|}\right\rfloor}_{\text{\scriptsize
pulls in }r(t)},$ (2)
or advance-first manner
$L_{t}^{\text{\color[rgb]{0.80,0,0}{{A}}}}\coloneqq\underbrace{\sum_{r^{\prime}<r(t)}J_{r^{\prime}}}_{\text{\scriptsize
pulls before
}r(t)}+\underbrace{\left(\left(t-\sum_{r^{\prime}<r(t)}T_{r^{\prime}}\right)\bmod
J_{r(t)}\right)}_{\text{\scriptsize pulls in }r(t)},$ (3)
where $r(t)$ is the round of the $t$-th arm pull. This
$L_{t}^{\text{\color[rgb]{0,0,0.80}{{B}}}}$/$L_{t}^{\text{\color[rgb]{0.80,0,0}{{A}}}}$
represents the cumulative number of pulls of the arm selected at the $t$-th
pull before the $t$-th arm pull. We omitted the dependency on $n$ and $T$ for
simplicity. The definition of
$L_{t}^{\text{\color[rgb]{0,0,0.80}{{B}}}}$/$L_{t}^{\text{\color[rgb]{0.80,0,0}{{A}}}}$
is somewhat complicated, and it may be more straightforward to write down the
algorithm that constructs
$L^{\text{\color[rgb]{0,0,0.80}{{B}}}}\coloneqq(L_{0}^{\text{\color[rgb]{0,0,0.80}{{B}}}},\ldots,L_{T}^{\text{\color[rgb]{0,0,0.80}{{B}}}})$
and
$L^{\text{\color[rgb]{0.80,0,0}{{A}}}}\coloneqq(L_{0}^{\text{\color[rgb]{0.80,0,0}{{A}}}},\ldots,L_{T}^{\text{\color[rgb]{0.80,0,0}{{A}}}})$
as shown in Algorithm 3 and Algorithm 4, respectively. Note that the choice
between $L^{\text{\color[rgb]{0,0,0.80}{{B}}}}$ and
$L^{\text{\color[rgb]{0.80,0,0}{{A}}}}$ is arbitrary and does not affect the
behavior of SH — as long as the arm pull is sequential (not batched). Python
code for this SH implementation is available in App. A. Note that using target
pulls to implement SH is natural and not new. For example,
Mctx111https://github.com/google-deepmind/mctx (Babuschkin et al., 2020) has a
similar implementation.
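As a complement to Algorithms 3 and 4, the following minimal Python sketch (ours, not the reference code in App. A) constructs $L^{B}$ and $L^{A}$ using the per-round budget $J_{r}=\bigl\lfloor T/(|\mathcal{S}_{r}|\lceil\log_{2}n\rceil)\bigr\rfloor$ that appears in the proof of Theorem 1; any budget lost to the floor operation is simply ignored here.
```python
import math

def target_pulls(n, T, advance_first=False):
    """Build the target-pull list L^B (breadth-first) or L^A (advance-first)."""
    rounds = math.ceil(math.log2(n))
    L, K, J = [], n, 0           # K: number of active arms |S_r|, J: pulls completed before round r
    for _ in range(rounds):
        J_r = T // (K * rounds)  # per-arm budget in this round
        if advance_first:        # Algorithm 4: exhaust one arm's round budget before moving on
            for _ in range(K):
                L.extend(range(J, J + J_r))
        else:                    # Algorithm 3: keep the active arms' pull counts as equal as possible
            for j in range(J, J + J_r):
                L.extend([j] * K)
        K = math.ceil(K / 2)
        J += J_r
    return L

# Example from Fig. 1 (n = 8, T = 24 * 8):
# target_pulls(8, 192)                     -> 0,0,0,0,0,0,0,0,1,1,1,...
# target_pulls(8, 192, advance_first=True) -> 0,1,2,3,4,5,6,7,0,1,2,...
```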
### 3.2 BSH: Breadth-first Sequential Halving
Now, we extend SH to BSH, in which we select arms so that the number of pulls
of each arm becomes as equal as possible using
$L^{\text{\color[rgb]{0,0,0.80}{{B}}}}$. Note that
$L^{\text{\color[rgb]{0,0,0.80}{{B}}}}$ uses $T=b\times B$ as the scheduled
total budget. When pulling arms in a batch, we need to consider not only the
number of pulls of the arms but also the number of scheduled pulls in the
current batch. Therefore, we introduce _virtual arm pulls_ $M_{a}$, the number
of scheduled pulls of arm $a$ in the current batch. For each batch pull, we
sequentially select $b$ arms with the highest empirical rewards from the
candidates $\\{a\in[n]\mid
N_{a}+M_{a}=L_{t}^{\text{\color[rgb]{0,0,0.80}{{B}}}}\\}$ and pull them as a
batch. The BSH algorithm is described in App. B. BSH is similar to a batched
extension of SH introduced in Jun et al. (2016) in the sense that it selects
arms so that the number of pulls of each arm becomes as equal as possible.
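As a rough sketch of this selection rule (ours; the reference pseudocode is Algorithm 6 in App. B), one BSH batch can be formed as follows, where `N` and `mu` hold the current pull counts and empirical means, `L` is the breadth-first target-pull list, and `t` is the global pull counter:
```python
def select_batch_bsh(n, b, t, L, N, mu):
    """Select one batch of b arms in breadth-first order using virtual pulls M."""
    M = [0] * n                  # pulls already scheduled within this batch
    batch = []
    for _ in range(b):
        candidates = [a for a in range(n) if N[a] + M[a] == L[t]]
        a_t = max(candidates, key=lambda a: mu[a])   # highest empirical mean among candidates
        batch.append(a_t)
        M[a_t] += 1
        t += 1
    return batch, t              # the arms in `batch` are then pulled simultaneously
```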
Algorithm 5 ASH: Advance-first Sequential Halving
1:input number of arms: $n$, batch size: $b$, batch budget: $B$
2:initialize counter $t\coloneqq 0$, empirical mean $\bar{\mu}_{a}\coloneqq
0$, and arm pulls $N_{a}\coloneqq 0$ for all $a\in[n]$
3:for $B$ times do
4: initialize empty batch $\mathcal{B}$ and virtual arm pulls $M_{a}=0$ for
all $a\in[n]$
5: for $b$ times do
6: let $\mathcal{A}_{t}$ be $\\{a\in[n]\mid
N_{a}+M_{a}=L_{t}^{\text{\color[rgb]{0.80,0,0}{{A}}}}\\}$ $\triangleright$ BSH
uses $L^{\text{\color[rgb]{0,0,0.80}{{B}}}}_{t}$ instead
7: push $a_{t}\coloneqq\text{argmax}_{a\in\mathcal{A}_{t}}(N_{a},\bar{\mu}_{a})$ to
$\mathcal{B}$ $\triangleright$ BSH uses
$\text{argmax}_{a\in\mathcal{A}_{t}}\bar{\mu}_{a}$ instead
8: update $t\leftarrow t+1$ and $M_{a_{t}}\leftarrow M_{a_{t}}+1$
9: batch pull arms in $\mathcal{B}$
10: update $\bar{\mu}_{a}$ and $N_{a}\leftarrow N_{a}+M_{a}$ for all
$a\in\mathcal{B}$
11:return $\text{argmax}_{a\in[n]}(N_{a},\bar{\mu}_{a})$
Figure 1: Pictorial representation of _breadth-first_ SH (BSH; Sec. 3.2) and
_advance-first_ SH (ASH; Sec. 3.3) for an 8-armed bandit problem. Batch size
$b$ is $24$ and batch budget $B$ is $8$. The same color indicates the same
batch pull. For example, in the first batch pull (blue), BSH pulls each of
the 8 arms 3 times, while ASH pulls 3 arms 8 times each. BSH selects arms so
that the number of pulls of each active arm becomes as equal as possible,
while ASH selects arms so that once an arm is selected, it is pulled until the
budget for the arm in the round is exhausted. These pull sequences are
characterized by the target pulls $L^{\text{\color[rgb]{0,0,0.80}{{B}}}}$ and
$L^{\text{\color[rgb]{0.80,0,0}{{A}}}}$:
$L^{\text{\color[rgb]{0,0,0.80}{{B}}}}=$
(0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,...)
$L^{\text{\color[rgb]{0.80,0,0}{{A}}}}=$
(0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,2,3,4,5,...)
### 3.3 ASH: Advance-first Sequential Halving
We further extend SH to ASH in a manner similar to BSH. The ASH algorithm is
described in Algorithm 5. Fig. 1 shows the pictorial representation of BSH and
ASH. Python code for this ASH implementation is available in App. A. The
differences between BSH and ASH are that:
1. 1.
ASH selects arms in _advance-first_ manner using
$L^{\text{\color[rgb]{0.80,0,0}{{A}}}}$ instead of
$L^{\text{\color[rgb]{0,0,0.80}{{B}}}}$ (line 6), and
2. 2.
ASH considers not only the empirical rewards $\bar{\mu}_{a}$ but also the
number of actual pulls $N_{a}$ when selecting arms in a batch (line 7).
The second difference ensures that, when the batch spans two rounds, the arm
to be promoted is selected from the arms that have completed pulling (e.g.,
see the 3rd batch pull in Fig. 1). Note that this second modification is not
useful for BSH. Let $\pi_{\text{ASH}}:[0,1]^{n\times T}\to[n]$ be the mapping
induced by the ASH algorithm. In Sec. 4, we will show that ASH is
algorithmically equivalent to SH with the same total budget $T=b\times B$ —
$\pi_{\text{ASH}}$ is identical to $\pi_{\text{SH}}$.
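In code, relative to the BSH sketch in Sec. 3.2, only the candidate set and the selection key change. A minimal sketch (ours, not the App. A implementation) of one ASH batch is:
```python
def select_batch_ash(n, b, t, L_A, N, mu):
    """Select one batch of b arms in advance-first order; the two changes vs. BSH are marked."""
    M = [0] * n
    batch = []
    for _ in range(b):
        candidates = [a for a in range(n) if N[a] + M[a] == L_A[t]]   # change 1: advance-first targets L^A
        a_t = max(candidates, key=lambda a: (N[a], mu[a]))            # change 2: completed pulls first, then mean
        batch.append(a_t)
        M[a_t] += 1
        t += 1
    return batch, t
```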
## 4 Algorithmic Equivalence of SH and ASH
This section presents a theoretical guarantee for the ASH algorithm.
###### Theorem 1
Given a stochastic bandit problem with $n\geq 2$ arms, let $b\geq 2$ be the
batch size and $B$ be the batch budget satisfying
$B\geq\max\\{4,\frac{n}{b}\\}\lceil\log_{2}n\rceil$. Then, the ASH algorithm
(Algorithm 5) is algorithmically equivalent to the SH algorithm (Algorithm 2)
with the same total budget $T=b\times B$ — the mapping
$\pi_{\textnormal{ASH}}$ is identical to $\pi_{\textnormal{SH}}$.
##### Proof sketch
Figure 2: Inequality (4).
A key observation is that ASH and SH differ only when a batch pull spans two
rounds, like the 3rd batch pull in Fig. 1. In this case, ASH may promote an
incorrect arm to the next round that would not have been promoted in SH. We
can prove that such incorrect promotion does not occur under the condition
$B\geq\max\\{4,\frac{n}{b}\\}\lceil\log_{2}n\rceil$. This is done by
demonstrating that the inequality (4) holds for any $z<b$, the number of pulls
for the current round $r$ in the batch. Fig. 2 illustrates (4).
Proof. The condition $B\geq\max\\{4,\frac{n}{b}\\}\lceil\log_{2}n\rceil$ is
divided into two separate conditions:
$\displaystyle B\geq\frac{n}{b}\lceil\log_{2}n\rceil,$ (C1)
and
$\displaystyle B\geq 4\lceil\log_{2}n\rceil.$ (C2)
We focus on the scenario where a batch pull spans two rounds. In this case,
let $z<b$ be the number of pulls that consume the budget for round $r$, and
$b-z$ be the number of pulls that consume the budget for round $r+1$. The
following proposition is demonstrated: $\forall n\geq 2,\forall b\geq 2$,
$\forall r<\lceil\log_{2}n\rceil-1$, $\forall z<b$, if (C1) and (C2) hold,
then
$\displaystyle|\mathcal{S}_{r+1}|-\left\lceil\frac{b-z}{J_{r+1}}\right\rceil\geq\left\lceil\frac{z}{J_{r}}\right\rceil.$
(4)
The left-hand side (LHS) of (4) is the number of arms promoted to the next
round after the batch pull, whereas the right-hand side (RHS) is the number of
arms that have not yet finished their pulls when the batch crosses the round
boundary. If this inequality holds, then even when a batch spans two rounds,
the arms that are supposed to advance to the next round in SH are not left
behind in ASH, i.e., no incorrect promotion occurs. It suffices to consider the
scenario $z=b-1$, since it is the worst case. Let
$x\coloneqq|S_{r}|\geq 3$ for the given $r<\lceil\log_{2}n\rceil-1$. Two cases
are considered. Case 1: when $n\leq 4b$. Given that
$J_{r}=\bigl{\lfloor}\frac{b\times
B}{x\lceil\log_{2}n\rceil}\bigr{\rfloor}\geq\left\lfloor 4b/x\right\rfloor$ as
derived from (C2), it is sufficient to show
$\displaystyle\left\lceil\frac{x}{2}\right\rceil-1\geq\left\lceil\frac{b-1}{\left\lfloor
4b/x\right\rfloor}\right\rceil$ (5)
in $x\in[3,4b]$. This assertion is directly supported by Lemma 1. Case 2: when
$4b<n$. Given that $J_{r}=\bigl{\lfloor}\frac{b\times
B}{x\lceil\log_{2}n\rceil}\bigr{\rfloor}\geq\left\lfloor n/x\right\rfloor$ as
derived from (C1), it is sufficient to show
$\left\lceil\frac{x}{2}\right\rceil-1\geq\bigl{\lceil}\frac{n/4-1}{\left\lfloor
n/x\right\rfloor}\bigr{\rceil}$ in $x\in[3,n]$. This conclusion follows by the
same reasoning applied in Case 1. $\square$
###### Lemma 1
For any integer $b\geq 2$, the inequality
$\left\lceil\frac{x}{2}\right\rceil-1\geq\left\lceil\frac{b-1}{\left\lfloor
4b/x\right\rfloor}\right\rceil$ holds for all integers $x\in[3,4b]$.
Figure 3: Lemma 1.
The proof of Lemma 1 is in App. C. Here, we provide the visualization of (5)
in Fig. 3 to intuitively show that Lemma 1 holds. Each colored line represents
the RHS for different $b\leq 32$. One can see that the LHS is always greater
than the RHS for any $x\in[3,4b]$.
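Lemma 1 can also be checked numerically; the short Python loop below (ours, for illustration only) verifies inequality (5) exhaustively for batch sizes up to 200:
```python
import math

# Exhaustive check of inequality (5) for b in [2, 200] and all integers x in [3, 4b].
for b in range(2, 201):
    for x in range(3, 4 * b + 1):
        lhs = math.ceil(x / 2) - 1
        rhs = math.ceil((b - 1) / (4 * b // x))
        assert lhs >= rhs, (b, x)
print("inequality (5) holds for all checked (b, x)")
```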
##### Remark 1
The condition (C1) is common to both SH and ASH — SH implicitly assumes $T\geq
n\lceil\log_{2}n\rceil$ as the minimum condition to execute. This is because
we need to pull each arm at least once in the first round (i.e., $J_{1}\geq
1$). With the same argument, the batch budget $B$ must satisfy (C1). On the
other hand, (C2) is specific to ASH and is required to ensure the equivalence.
As we discuss in Sec. 4.1, we argue that this additional condition (C2) is not
practically problematic.
##### Remark 2
Note that the condition (C2) is tight; Theorem 1 does not hold even if
$B\geq\alpha\lceil\log_{2}n\rceil$ for any positive value $\alpha<4$.
Proof. We aim to demonstrate the existence of a value $x$ such that
$\left\lceil\frac{x}{2}\right\rceil-1-\left\lceil\frac{b-1}{\left\lfloor\alpha
b/x\right\rfloor}\right\rceil<0$ when $n\leq\alpha b$. Consider the case when
$x=4$. In this scenario, the LHS of the inequality can be rewritten as
$1-\left\lceil\frac{b-1}{\left\lfloor\alpha b/4\right\rfloor}\right\rceil\leq
1-\frac{b-1}{\left\lfloor\alpha b/4\right\rfloor}\leq
1-\frac{4}{\alpha}\frac{b-1}{b}\to 1-\frac{4}{\alpha}$ as $b\to\infty$. As
$\alpha<4$, it follows that $\text{LHS}<0$ for sufficiently large values of
$b$. $\square$
##### Remark 3
When $b$ is sufficiently large, the minimum $B$ that satisfies both (C1) and
(C2) is $4\lceil\log_{2}n\rceil$. Theorem 1 implies that for arbitrarily large
target budget $T$, ASH can achieve the same performance as SH by increasing
the batch size $b$ without increasing the batch budget $B$ from
$4\lceil\log_{2}n\rceil$ — ASH guarantees its scalability in batch
computation.
##### Remark 4
Theorem 1 allows us to understand the properties of ASH based on existing
theoretical research on SH, such as the simple regret bound (Zhao et al.,
2023).
### 4.1 Discussion on the conditions
To show that SH and ASH are algorithmically equivalent, we used an additional
condition (C2) of $\mathcal{O}(\log n)$. However, we argue that this condition
is not practically problematic because the condition (C1), the minimum
condition required to execute (unbatched) SH, is of order $\mathcal{O}(n\log n)$
and dominates (C2), as shown in Fig. 4. We can see
that the condition (C2) only affects the algorithm when the batch size is
sufficiently larger than the number of arms ($b\gg n$). This is a reasonable
result, meaning that we cannot guarantee the equivalent behavior to SH with an
extremely small batch budget, such as $B=1$. On the other hand, if the user
secures the minimum budget $B=4\lceil\log_{2}n\rceil$ that depends only on the
number of arms $n$ and increases only logarithmically, regardless of the batch
size $b$, they can increase the batch size arbitrarily and achieve the same
result as when SH is executed sequentially with the same total budget, with
high computational efficiency.
Figure 4: Visualization of conditions (C1) and (C2) for $n\leq 1024$, $B\leq
1024$, and $b\in\\{4,64,1024\\}$.
## 5 Empirical Validation
Figure 5: Polynomial$(\alpha)$
We conducted experiments to empirically demonstrate that ASH maintains its
performance for large batch size $b$, in comparison to its sequential
counterpart SH. To evaluate this, we utilized a polynomial family
parameterized by $\alpha$ as a representative batch problem instance, where
the reward gap $\Delta_{a}\coloneqq\mu_{1}-\mu_{a}$ follows a polynomial
distribution with parameter $\alpha$: $\Delta_{a}\propto(a/n)^{\alpha}$
(Jamieson et al., 2013; Zhao et al., 2023). This choice is motivated by the
observation that real-world applications exhibit polynomially distributed
reward gaps, as mentioned in Zhao et al. (2023). In our study, we considered
three different values of $\alpha$ ($0.5$, $1.0$, and $2.0$) to capture
various reward distributions (see Fig. 5). Additionally, we characterized each
bandit problem instance by specifying the minimum and maximum rewards, denoted
as $\mu_{\text{min}}$ and $\mu_{\text{max}}$ respectively. Hence, we denote a
bandit problem instance as
$\mathcal{T}(n,\alpha,\mu_{\text{min}},\mu_{\text{max}})$.
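For illustration, such an instance can be generated roughly as follows; this is our sketch, and both the exact normalization of the gaps and the Bernoulli reward model are assumptions rather than details taken from the experiments:
```python
import numpy as np

def polynomial_instance(n, alpha, mu_min, mu_max, seed=0):
    """Arm means with polynomially growing gaps; arm 1 is best (mu_max), arm n is worst (mu_min)."""
    a = np.arange(n)
    gaps = (a / (n - 1)) ** alpha            # Delta_a grows polynomially with the arm index
    mu = mu_max - (mu_max - mu_min) * gaps   # normalized so the means span [mu_min, mu_max]
    rng = np.random.default_rng(seed)

    def pull(arm):
        return rng.binomial(1, mu[arm])      # one Bernoulli reward (an assumption)

    return mu, pull

mu, pull = polynomial_instance(n=16, alpha=1.0, mu_min=0.1, mu_max=0.9)
```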
We also implemented a simple batched extension of SH introduced by Jun et al.
(2016) as a baseline for comparison. We refer to this algorithm as Jun+16. The
implementation of Jun+16 is described in App. D. Jun et al. (2016) did not
provide a theoretical guarantee for Jun+16, but it has shown performance
comparable to or better than their proposed algorithm in their experiments.
### 5.1 Large batch budget scenario: $B\geq 4\lceil\log_{2}n\rceil$
First, we empirically confirm that, as we claimed in Sec. 4, ASH is indeed
equivalent to SH under the condition (C2). We generated 10K instances of
bandit problems and applied ASH and SH to each instance with 100 different
seeds. We randomly sampled $n$ from $\\{2,\ldots,1024\\}$, $\alpha$ from
$\\{0.5,1.0,2.0\\}$, and $\mu_{\text{min}}$ and $\mu_{\text{max}}$ from
$\\{0.1,0.2,\ldots,0.9\\}$. For each instance
$\mathcal{T}(n,\alpha,\mu_{\text{min}},\mu_{\text{max}})$, we randomly sampled
the batch budget $B\leq 10\lceil\log_{2}n\rceil$ and the batch size $b\leq 5n$
so that the condition (C1) and (C2) are satisfied. _As a result, we confirmed
that the selected arms of ASH and SH are identical in all 10K instances and
100 seeds for each instance._ We also conducted the same experiment for BSH
and Jun+16. We plotted the simple regret of BSH, ASH, and Jun+16 against SH in
Fig. 6. There are 10K instances, and each point represents the average simple
regret of 100 seeds for each instance. To compare the performance, we fitted a
linear regression model to the simple regret of BSH, ASH, and Jun+16 against
SH as $y=\beta x$, where $y$ is the simple regret of BSH, ASH, or Jun+16, $x$
is the simple regret of SH. The slope $\beta$ is estimated by the least
squares method. The estimated slope $\beta$ is 1.008 for BSH, 1.000 for ASH,
and 0.971 for Jun+16, which indicates that the simple regret of ASH, BSH, and
Jun+16 is comparable to SH on average.
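For reference, the no-intercept least-squares fit $y=\beta x$ has the closed form $\beta=\sum_{i}x_{i}y_{i}/\sum_{i}x_{i}^{2}$; a minimal sketch of this estimator is:
```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope beta of y = beta * x (no intercept term)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(x @ y / (x @ x))
```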
Figure 6: Simple regret comparison of BSH, ASH, and Jun+16 against SH when
$B\geq 4\lceil\log_{2}n\rceil$.
### 5.2 Small batch budget scenario: $B<4\lceil\log_{2}n\rceil$
Figure 7: Simple regret comparison of BSH, ASH, and Jun+16 against SH when
$B<4\lceil\log_{2}n\rceil$.
Next, we examined the performances of BSH, ASH, and Jun+16 against SH when the
additional condition (C2) is not satisfied, i.e., when the batch budget is
extremely small $B<4\lceil\log_{2}n\rceil$ and thus Theorem 1 does not hold.
We conducted the same experiment as in Sec. 5.1 except the batch budget
$B<4\lceil\log_{2}n\rceil$. We sampled $B$ so that $B$ is larger than the
number of rounds. The results are shown in Fig. 7. The slope $\beta$ is
estimated as 1.059 for BSH, 1.011 for ASH, and 1.017 for Jun+16. All the
estimated slopes are worse than when $B\geq 4\lceil\log_{2}n\rceil$. However,
the estimated slopes are still close to 1, which indicates that while we do
not have a theoretical guarantee, the performance of BSH, ASH, and Jun+16 is
comparable to SH on average.
## 6 Related Work
##### Sequential Halving.
Among the algorithms for the pure exploration problem in multi-armed bandits
(Audibert et al., 2010), Sequential Halving (SH; Karnin et al. (2013)) is one
of the most popular algorithms. The theoretical properties of SH have been
well studied (Karnin et al., 2013; Zhao et al., 2023). Due to its simplicity,
SH has been widely used in applications including (but not limited to) the following: In the
context of _tree-search_ algorithms, as the root node selection of Monte Carlo
tree search can be regarded as a pure exploration problem (Tolpin & Shimony,
2012), Danihelka et al. (2022) incorporated SH into the root node selection
and significantly reduced the number of simulations to improve the performance
during AlphaZero/MuZero training. From the min-max search perspective, some
studies recursively applied SH to the internal nodes of the search tree
(Cazenave, 2014; Pepels et al., 2014). SH is also used for _hyperparameter
optimization_ ; Jamieson & Talwalkar (2016) formalized the hyperparameter
optimization problem in machine learning as a _non-stochastic_ multi-armed
bandit problem, where the reward signal does not come from stationary stochastic
distributions but from a deterministic function that changes over training steps. Li
et al. (2018; 2020) applied SH to hyperparameter optimization in asynchronous
parallel settings, which is similar to our batch setting. Their asynchronous
approach may have _incorrect promotions_ to the next rounds but is more
efficient than the synchronous approach. Aziz et al. (2022) applied SH to
_recommendation systems_ , which identify appealing podcasts for users.
##### Batched bandit algorithms.
Batched bandit algorithms have been studied in various contexts (Perchet et
al., 2016; Gao et al., 2019; Esfandiari et al., 2021; Jin et al., 2021a; b;
Kalkanli & Ozgur, 2021; Karbasi et al., 2021; Provodin et al., 2022). Among
the batched bandit studies for the pure exploration problem (Agarwal et al.,
2017; Grover et al., 2018; Jun et al., 2016), Jun et al. (2016) is the most
relevant to our work as they also consider the _fixed-size batch pulls_
setting. To the best of our knowledge, the first batched SH variant with a
fixed batch size $b$ was introduced by Jun et al. (2016) as a baseline
algorithm in their study (Jun+16). It is similar to BSH and it pulls arms so
that the number of pulls of the arms is as equal as possible (breadth-first
manner). They reported that Jun+16 experimentally performs comparably to or
better than their proposed method but did not provide a theoretical guarantee
for Jun+16. Our ASH is different from their batch variant in that ASH pulls
arms in an advance-first manner instead of a breadth-first manner.
## 7 Limitation and Future Work
Our batched variants of SH assume that the rewards of each arm are drawn
i.i.d. from a fixed distribution. This property is essential to allow batch
pulls. One limitation is that it may be difficult to apply our algorithms to
bandit problems where the reward distribution is non-stationary. For example,
Jamieson & Talwalkar (2016) applied SH to hyperparameter tuning, where rewards
are time-series losses during model training. We cannot apply our batched
variants to this problem because we cannot observe “future losses” in a batch.
Our batched variants of SH are suitable for tasks where arms can be evaluated
efficiently in batches rather than sequentially. For instance, when the
evaluation of arms depends on the output of neural networks, the process can
be efficiently conducted in batches using accelerators like GPUs. An example
of this scenario is provided by Danihelka et al. (2022), where value networks
are used in Monte Carlo tree search. Applying our batched variants to such
algorithms is a possible future direction. Additionally, combining them with
reinforcement learning environments that run on GPU/TPU accelerators (Freeman
et al., 2021; Lange, 2022; Koyamada et al., 2023; Gulino et al., 2023; Nikulin
et al., 2023; Bonnet et al., 2024; Rutherford et al., 2024; Matthews et al.,
2024) for efficient batch evaluation is also promising.
## 8 Conclusion
In this paper, we proposed ASH as a simple and natural extension of the SH
algorithm. We theoretically showed that ASH is algorithmically equivalent to
SH as long as the batch budget is not excessively small. This allows ASH to
inherit the well-studied theoretical properties of SH, including the simple
regret bound. Our experimental results confirmed this claim and demonstrated
that ASH and other batched variants of SH, like Jun+16, perform comparably to
SH in terms of simple regret. These findings suggest that we can utilize
simple batched variants of SH for efficient evaluation of arms with large
batch sizes while avoiding performance degradation compared to the sequential
execution of SH. By providing a practical solution for efficient arm
evaluation, our study opens up new possibilities for applications that require
large budgets. Overall, our work highlights the batch robust nature of SH and
its potential for large-scale bandit problems.
#### Broader Impact Statement
The findings in this work on the bandit problem are focused on theoretical
results and do not involve direct human or ethical implications. Therefore,
concerns related to broader ethical, humanitarian, and societal issues are not
applicable to this research. However, if our approach is applied to large-
scale bandit problems, especially when batch evaluation involves large neural
networks, there could be an indirect impact on energy consumption due to the
computational resources required.
#### Acknowledgments
This paper is based on results obtained from a project, JPNP20006, subsidized
by the New Energy and Industrial Technology Development Organization (NEDO),
and partly supported by KAKENHI (No. 22H04998 and 23H04676) from Japan Society
for the Promotion of Science (JSPS). We sincerely thank the reviewers for
their invaluable feedback and constructive comments, which have significantly
enhanced the quality of this research. We would also like to express our
gratitude to the developers of the software libraries utilized in this
research, including NumPy (Harris et al., 2020), SciPy (Virtanen et al.,
2020), Matplotlib (Hunter, 2007), JAX (Bradbury et al., 2018), and Mctx
(Babuschkin et al., 2020).
## References
* Agarwal et al. (2017) Arpit Agarwal, Shivani Agarwal, Sepehr Assadi, and Sanjeev Khanna. Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons. In _COLT_ , 2017.
* Audibert et al. (2010) Jean-Yves Audibert, Sébastien Bubeck, and Rémi Munos. Best Arm Identification in Multi-armed Bandits. In _COLT_ , 2010.
* Aziz et al. (2022) Maryam Aziz, Jesse Anderton, Kevin Jamieson, Alice Wang, Hugues Bouchard, and Javed Aslam. Identifying new podcasts with high general appeal using a pure exploration infinitely-armed bandit strategy. In _RecSys_ , 2022.
* Babuschkin et al. (2020) Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem. https://github.com/google-deepmind, 2020.
* Bonnet et al. (2024) Clément Bonnet, Daniel Luo, Donal Byrne, Shikha Surana, Vincent Coyette, Paul Duckworth, Laurence I. Midgley, Tristan Kalloniatis, Sasha Abramowitz, Cemlyn N. Waters, Andries P. Smit, Nathan Grinsztajn, Ulrich A. Mbou Sob, Omayma Mahjoub, Elshadai Tegegn, Mohamed A. Mimouni, Raphael Boige, Ruan de Kock, Daniel Furelos-Blanco, Victor Le, Arnu Pretorius, and Alexandre Laterre. Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX. _ICLR_ , 2024.
* Bradbury et al. (2018) James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs. https://github.com/google/jax, 2018.
* Bubeck et al. (2009) Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure Exploration in Multi-armed Bandits Problems. In _ALT_ , 2009.
* Cazenave (2014) Tristan Cazenave. Sequential Halving Applied to Trees. _IEEE T-CIAIG_ , 7(1):102–105, 2014.
* Danihelka et al. (2022) Ivo Danihelka, Arthur Guez, Julian Schrittwieser, and David Silver. Policy improvement by planning with Gumbel. In _ICLR_ , 2022.
* Esfandiari et al. (2021) Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, and Vahab Mirrokni. Regret Bounds for Batched Bandits. In _AAAI_ , 2021.
* Freeman et al. (2021) Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax \- A Differentiable Physics Engine for Large Scale Rigid Body Simulation. In _NeurIPS Track on Datasets and Benchmarks_ , 2021.
* Gao et al. (2019) Zijun Gao, Yanjun Han, Zhimei Ren, and Zhengqing Zhou. Batched Multi-armed Bandits Problem. In _NeurIPS_ , 2019.
* Grover et al. (2018) Aditya Grover, Todor Markov, Peter Attia, Norman Jin, Nicolas Perkins, Bryan Cheong, Michael Chen, Zi Yang, Stephen Harris, William Chueh, and Stefano Ermon. Best arm identification in multi-armed bandits with delayed feedback. In _AISTATS_ , 2018.
* Gulino et al. (2023) Cole Gulino, Justin Fu, Wenjie Luo, George Tucker, Eli Bronstein, Yiren Lu, Jean Harb, Xinlei Pan, Yan Wang, Xiangyu Chen, John Co-Reyes, Rishabh Agarwal, Rebecca Roelofs, Yao Lu, Nico Montali, Paul Mougin, Zoey Yang, Brandyn White, Aleksandra Faust, Rowan McAllister, Dragomir Anguelov, and Benjamin Sapp. Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research. In _NeurIPS_ , 2023.
* Harris et al. (2020) Charles R Harris, K Jarrod Millman, Stéfan J Van Der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. Array programming with NumPy. _Nature_ , 585(7825):357–362, 2020.
* Hunter (2007) John D. Hunter. Matplotlib: A 2D graphics environment. _Computing in Science & Engineering_, 9(3):90–95, 2007.
* Jamieson & Talwalkar (2016) Kevin Jamieson and Ameet Talwalkar. Non-stochastic Best Arm Identification and Hyperparameter Optimization. In _AISTATS_ , 2016.
* Jamieson et al. (2013) Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sebastien Bubeck. On Finding the Largest Mean Among Many. _arXiv:1306.3917_ , 2013.
* Jin et al. (2021a) Tianyuan Jin, Jing Tang, Pan Xu, Keke Huang, Xiaokui Xiao, and Quanquan Gu. Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits. In _ICML_ , 2021a.
* Jin et al. (2021b) Tianyuan Jin, Pan Xu, Xiaokui Xiao, and Quanquan Gu. Double Explore-then-Commit: Asymptotic Optimality and Beyond. In _COLT_ , 2021b.
* Jun et al. (2016) Kwang-Sung Jun, Kevin Jamieson, Robert Nowak, and Xiaojin Zhu. Top Arm Identification in Multi-Armed Bandits with Batch Arm Pulls. In _AISTATS_ , 2016.
* Kalkanli & Ozgur (2021) Cem Kalkanli and Ayfer Ozgur. Batched Thompson Sampling. In _NeurIPS_ , 2021.
* Karbasi et al. (2021) Amin Karbasi, Vahab Mirrokni, and Mohammad Shadravan. Parallelizing Thompson Sampling. In _NeurIPS_ , 2021.
* Karnin et al. (2013) Zohar Karnin, Tomer Koren, and Oren Somekh. Almost Optimal Exploration in Multi-Armed Bandits. In _ICML_ , 2013.
* Koyamada et al. (2023) Sotetsu Koyamada, Shinri Okano, Soichiro Nishimori, Yu Murata, Keigo Habara, Haruka Kita, and Shin Ishii. Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning. In _NeurIPS_ , 2023.
* Lange (2022) Robert Tjarko Lange. gymnax: A JAX-based Reinforcement Learning Environment Library. http://github.com/RobertTLange/gymnax, 2022.
* Li et al. (2020) Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Jonathan Ben-tzur, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. A System for Massively Parallel Hyperparameter Tuning. In _MLSys_ , 2020.
* Li et al. (2018) Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. _JMLR_ , 18(185):1–52, 2018.
* Matthews et al. (2024) Michael Matthews, Michael Beukman, Benjamin Ellis, Mikayel Samvelyan, Matthew Jackson, Samuel Coward, and Jakob Foerster. Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning. _arXiv:2402.16801_ , 2024.
* Nikulin et al. (2023) Alexander Nikulin, Vladislav Kurenkov, Ilya Zisman, Viacheslav Sinii, Artem Agarkov, and Sergey Kolesnikov. XLand-MiniGrid: Scalable Meta-Reinforcement Learning Environments in JAX. In _NeurIPS 2023 Workshop_ , 2023.
* Pepels et al. (2014) Tom Pepels, Tristan Cazenave, Mark HM Winands, and Marc Lanctot. Minimizing Simple and Cumulative Regret in Monte-Carlo Tree Search. In _CGW_ , 2014.
* Perchet et al. (2016) Vianney Perchet, Philippe Rigollet, Sylvain Chassang, and Erik Snowberg. Batched bandit problems. _Ann. Stat._ , 44(2):660 – 681, 2016.
* Provodin et al. (2022) D. Provodin, P. Gajane, M. Pechenizkiy, and M. Kaptein. The Impact of Batch Learning in Stochastic Linear Bandits. In _ICDM_ , 2022.
* Rutherford et al. (2024) Alexander Rutherford, Benjamin Ellis, Matteo Gallici, Jonathan Cook, Andrei Lupu, Garðar Ingvarsson, Timon Willi, Akbir Khan, Christian Schroeder de Witt, Alexandra Souly, et al. JaxMARL: Multi-Agent RL Environments and Algorithms in JAX. In _AAMAS_ , 2024.
* Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. _Nature_ , 588(7839):604–609, 2020.
* Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. _Nature_ , 529(7587):484–489, 2016.
* Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. _Nature_ , 550(7676):354–359, 2017.
* Silver et al. (2018) David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. _Science_ , 362(6419):1140–1144, 2018.
* Tolpin & Shimony (2012) David Tolpin and Solomon Shimony. MCTS Based on Simple Regret. In _AAAI_ , 2012.
* Virtanen et al. (2020) Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. _Nature methods_ , 17(3):261–272, 2020.
* Zhao et al. (2023) Yao Zhao, Connor Stephens, Csaba Szepesvari, and Kwang-Sung Jun. Revisiting Simple Regret: Fast Rates for Returning a Good Arm. In _ICML_ , 2023.
## Appendix A Python code
For the sake of reproducibility and a better understanding, we provide Python
code for the Sequential Halving (SH) algorithm using advance-first target
pulls and the Advance-first Sequential Halving (ASH) algorithm in Fig. 8.
Figure 8: Python implementation of the SH algorithm using advance-first
target pulls (Algorithm 2) and the ASH algorithm (Algorithm 5).
## Appendix B BSH algorithm
Algorithm 6 shows the detailed BSH algorithm (see Sec. 3.2).
Algorithm 6 BSH: Breadth-first Sequential Halving
1:input number of arms: $n$, batch size: $b$, batch budget: $B$
2:initialize counter $t\coloneqq 0$, empirical mean $\bar{\mu}_{a}\coloneqq
0$, and arm pulls $N_{a}\coloneqq 0$ for all $a\in[n]$
3:for $B$ times do
4: initialize empty batch $\mathcal{B}$ and virtual arm pulls $M_{a}=0$ for
all $a\in[n]$
5: for $b$ times do
6: let $\mathcal{A}_{t}$ be $\\{a\in[n]\mid
N_{a}+M_{a}=L_{t}^{\text{\color[rgb]{0,0,0.80}{{B}}}}\\}$
7: push $a_{t}\coloneqq\text{argmax}_{a\in\mathcal{A}_{t}}\bar{\mu}_{a}$ to
$\mathcal{B}$
8: update $t\leftarrow t+1$ and $M_{a_{t}}\leftarrow M_{a_{t}}+1$
9: batch pull arms in $\mathcal{B}$
10: update $\bar{\mu}_{a}$ and $N_{a}\leftarrow N_{a}+M_{a}$ for all
$a\in\mathcal{B}$
11:return $\text{argmax}_{a\in[n]}(N_{a},\bar{\mu}_{a})$
## Appendix C Proof of Lemma 1
##### Lemma 1
For any integer $b\geq 2$, the inequality
$\displaystyle\left\lceil\frac{x}{2}\right\rceil-1\geq\left\lceil\frac{b-1}{\left\lfloor
4b/x\right\rfloor}\right\rceil$ (6)
holds for all integers $x\in[3,4b]$.
Proof. This proof demonstrates that for any integer $b\geq 2$ and
$x\in[3,4b]$, the inequality (6) is satisfied. Given $z\geq c\implies
z\geq\lceil c\rceil$ for any integer $z$ and real number $c$, it suffices to
demonstrate that
$\displaystyle\left\lceil\frac{x}{2}\right\rceil-1\geq\frac{b-1}{\left\lfloor
4b/x\right\rfloor}\iff\left\lceil\frac{x}{2}\right\rceil-1-\frac{b-1}{\left\lfloor
4b/x\right\rfloor}\geq 0.$ (7)
Given that $\left\lfloor\frac{4b}{x}\right\rfloor>0$, it follows that
$\displaystyle\left(\left\lceil\frac{x}{2}\right\rceil-1\right)\left\lfloor\frac{4b}{x}\right\rfloor-(b-1)\geq
0,$ (8)
for any integer $b\geq 2$ and $x\in[3,4b]$. Two cases are considered:
Case 1: $x$ is even. Suppose $x=2y$, with $y\in[2,2b]$. We aim to show that
$\displaystyle\left(y-1\right)\left\lfloor\frac{2b}{y}\right\rfloor-(b-1)\geq
0.$ (9)
Two sub-cases are considered:
1. 1.
For $y\in[b+1,2b]$, as $\left\lfloor\frac{2b}{y}\right\rfloor=1$,
$\text{LHS}=(y-1)-(b-1)\geq 0$.
2. 2.
For $y\in[2,b]$, as $\lfloor c\rfloor>c-1$ for any real number $c$, we have
$\text{LHS}>\left(y-1\right)\left(\frac{2b}{y}-1\right)-(b-1)=-\frac{(y-2)(y-b)}{y}$.
As $y>0$ and $-(y-2)(y-b)\geq 0$ in $y\in[2,b]$, we have $\text{LHS}\geq 0$.
Consequently, it has been established that for even values of $x$, the
inequality (9) is upheld.
Case 2: $x$ is odd. Suppose $x=2y+1$, with $y\in[1,2b-1]$. We aim to show that
$\displaystyle y\left\lfloor\frac{4b}{2y+1}\right\rfloor-(b-1)\geq 0.$ (10)
Two sub-cases are considered:
1. 1.
For $y\in[b,2b-1]$, as $\left\lfloor\frac{4b}{2y+1}\right\rfloor=1$,
$\text{LHS}=y-(b-1)\geq 0$.
2. 2.
For $y\in[1,b-1]$, as $\lfloor c\rfloor>c-1$ for any real number $c$, we have
$\text{LHS}>y\left(\frac{4b}{2y+1}-1\right)-(b-1)=\frac{2by-b-2y^{2}+y+1}{2y+1}=\frac{-2y(y-(b+\frac{1}{2}))-(b-1)}{2y+1}\geq
0$. As $2y+1>0$ and $-2y(y-(b+\frac{1}{2}))-(b-1)\geq 0$ in $y\in[1,b-1]$, we
have $\text{LHS}\geq 0$.
Similarly, it has been demonstrated that for odd values of $x$, the inequality
(10) is upheld.
Therefore, through the analysis of these two cases, it is proven that for any
integer $b\geq 2$ and $x\in[3,4b]$, the inequality (8) is satisfied, thereby
confirming the validity of (6). $\square$
## Appendix D Batch Sequential Halving introduced in Jun et al. (2016)
Algorithm 7 shows the detailed batched version of the Sequential Halving
algorithm introduced in Jun et al. (2016).
Algorithm 7 Batched Sequential Halving introduced in Jun et al. (2016)
1:input number of arms: $n$, batch budget: $B$, batch size: $b$
2:initialize best arm candidates $\mathcal{S}_{0}\coloneqq[n]$
3:for round $r=0,\ldots,\lceil\log_{2}n\rceil-1$ do
4: for $\bigl{\lfloor}B/\lceil\log_{2}n\rceil\bigr{\rfloor}$ times do
5: select batch actions $\mathcal{B}$ so that the number of pulls of each arm
in $\mathcal{S}_{r}$ is as equal as possible
6: pull arms $\mathcal{B}$ in the batch
7: $\mathcal{S}_{r+1}\leftarrow\textrm{top-}\lceil|\mathcal{S}_{r}|/2\rceil$
arms in $\mathcal{S}_{r}$ w.r.t. the empirical rewards
8:return the only arm in $S_{\lceil\log_{2}n\rceil}$
# Vanishing of Tors of absolute integral closures in equicharacteristic zero
Shravan Patankar Department of Mathematics, Statistics and Computer Science
University of Illinois at Chicago
Chicago, IL 60607-7045
USA<EMAIL_ADDRESS>
###### Abstract.
We show that a ring $R$ is regular if $Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq
1$ assuming further that $R$ is a $\mathbb{N}$-graded ring of dimension $2$
finitely generated over an equi-characteristic zero field $k$. This answers a
question of Bhatt, Iyengar, and Ma. We use _almost mathematics_ over $R^{+}$
to deduce properties of the _noetherian_ ring $R$ and rational surface
singularities. Moreover we show that $R^{+}$ in equi-characteristic zero is
$m$-adically ideal(wise) separated, a condition which appears in the proof of
local criterion for flatness. In dimension $2$ it is Ohm-Rush and intersection
flat. As an application we show that the hypothesis can be astonishingly
vacuous for $i\ll dim(R)$. We show that a positive answer to an old question
of Aberbach and Hochster also answers this question. We use our techniques to
make some remarks on a question of André and Fiorot regarding ‘fpqc analogues’
of splinters.
###### Contents
1. 1 Introduction
2. 2 Preliminaries
3. 3 $m$-adic ideal separatedness and vacuous assumptions
4. 4 The main theorem
5. 5 Higher dimensions
6. 6 A question of André and Fiorot
7. 7 Acknowledgements
## 1\. Introduction
Throughout this article, all rings are assumed to be commutative and contain
an identity element. The _absolute integral closure_ of an integral domain
$R$, denoted by $R^{+}$, is the integral closure of $R$ inside an algebraic
closure of its fraction field. In spite of being large and non-noetherian it
is of great importance in commutative algebra and algebraic geometry. The
purpose of this document is to give the first answers to the following
question of Bhatt, Iyengar, and Ma:
###### Question 1.1 [BIM19, end of Section 4].
If $(R,m,k)$ is a noetherian local domain of equi-characteristic zero (i.e.
$\mathbb{Q}\subset R$) and $Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq 1$, then
is $R$ regular?
We show the following:
###### Theorem A.
Let $R$ be a $\mathbb{N}$-graded ring of dimension $2$ finitely generated over
an equi-characteristic zero field $k$. If $Tor_{i}^{R}(R^{+},k)=0$ for some
$i\geq 1$ then $R$ is regular.
Henceforth we will often refer to equi-characteristic zero (i.e.
$\mathbb{Q}\subset R$) simply as characteristic zero. The work of Bhatt,
Iyengar, and Ma has been the subject of several seminars and reading groups
around the world because of its connections to the direct summand theorem and
perfectoid rings, and Question 1.1 is the only question explicitly stated in
it. The motivation for Question 1.1 arises naturally out of the following
theorem of Bhatt, Iyengar, and Ma and the role absolute integral closures have
played in commutative algebra and algebraic geometry.
###### Theorem 1.2 [BIM19, Theorem 4.13], [Bha21, Remark 5.6].
Let $R$ be an excellent local domain of positive or mixed characteristic. If
$Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq 1$, then $R$ is regular.
We note that such vanishing of Tors type statements and questions are not new
and are anticipated as the ‘only if’ version of the above statement is true
and follows from the seminal results of Hochster-Huneke [HH94] and Bhatt
[Bha21] on the Cohen-Macaulayness of absolute integral closures. That is, when $R$
is an excellent regular domain of positive characteristic the Hochster-Huneke
result precisely says that $R^{+}$ is _flat_ over $R$ and Bhatt shows that
when $R$ is an excellent local domain of mixed characteristic a system of
parameters starting with $p$ is a _Koszul regular_ sequence on $R^{+}$, both
of which are clearly _stronger_ statements than $Tor_{i}^{R}(R^{+},k)=0$ for
some $i\geq 1$. In particular the positive characteristic part of the above
theorem is originally due to Aberbach and Li [AL08] (who use completely
different methods) and partial results can be found in the work of Aberbach
[Abe04] and Schoutens [Sch03] dating back to an explicit question of Huneke
[Hun96, Exercise 8.8]. The importance of Hochster-Huneke’s and Bhatt’s
theorems is hard to overstate. Bhatt’s mixed characteristic analogue of this
result has led to startling progress in the minimal model program in mixed
characteristic and to quote Bhatt, the Hochster-Huneke result forms the
bedrock of tight closure theory and a large amount of positive characteristic
commutative algebra in general. These ideas also have fairly strong
applications to complex geometry and algebraic geometry.
The proof of our main theorem is of independent interest and we give a brief
sketch in this paragraph. The hypothesis implies that every module finite normal
extension of $R$ is flat. It follows by taking the normalization that $R$ is
normal. Next, assume for the moment that $R$ has rational singularities. Since
$R$ has dimension $2$ classical theory of rational surface singularities
[Lip78] implies that $R$ is $\mathbb{Q}$-Gorenstein and hence its cyclic cover
is a module finite normal Gorenstein extension and by descent one can also
conclude that $R$ is Gorenstein. This implies that $R$ is a rational double
point and it is a classical fact that it has a module finite extension which
is regular, and hence we have that $R$ is regular by descent.
The direct summand conjecture was a famous conjecture in commutative algebra,
open for forty years until it was settled by Y. André in 2016 [And18].
André’s proof uses Scholze’s theory of perfectoid spaces [Sch12] and is based
on an observation of Bhatt [Bha16] which solves certain cases of the direct
summand conjecture using Faltings’ almost purity theorem. Bhatt’s idea, simply
put, was to use a standard trick involving the interplay between almost
mathematics and flat maps to transfer the direct summand conjecture to a
problem about perfectoid spaces and rings which in particular are very large
and non-noetherian. The pièce de résistance of our proof is to use the same
trick for the non-noetherian ring $R^{+}$ in characteristic zero to show that
the hypothesis of the question imply that $R$ has rational singularities.
There is evidence showing that this use of $R^{+}$ is necessary. Boutot’s
theorem implies that an equicharacteristic zero normal (equivalently a
splinter) non-pseudorational ring has _no_ module finite extensions which are
pseudorational. Hence one can’t obtain rationality using the finite extensions
techniques used to deduce other properties of $R$.
The reader may now guess why we described the proof. It is perhaps surprising
that in spite of being a question purely in commutative algebra our proof
makes use of the main theorems about rational surface singularities. This can
be compared with previous results in the literature exhibiting surprising
algebraic characterizations of rational singularities [Ma18], [Har98], [MS97].
Moreover, this suggests a Kunz-type theorem in characteristic zero, i.e. $R$
is regular if and only if $Tor^{R}_{i}(R^{+},k)=0$ for some $i$ large enough.
A similar theorem has been proved by Ma-Schwede [MS20] who show that $R$ is
regular if every alteration has ‘finite projective dimension’ and our result
implies ‘alterations’ could be relaxed to ‘finite covers’. In their study of
the behaviour of finite coverings of (affine) schemes in Grothendieck
topologies André and Fiorot [AF21] ask for rings which are “fpqc analgoues of
splinters”. It is shown using Kunz’s theorem that in positive characteristic
the only “fpqc analgoues of splinters” are regular rings. Using our techniques
we are able to make progress towards this question after imposing some
conditions on the corresponding coverings. These conditions are closely
related to Hochster and Huneke’s proof of existence of big Cohen-Macaulay
algebras in characteristic zero and “big equational tight closure”. In higher
dimensions we expect certain strengthenings of equational lemma (in the sense
of Huneke and Lyubeznik [HL07]) type statements to help make progress towards
Question 1.1 and this raises questions even in positive characteristic.
In a more homological direction, we show that $R^{+}$ is $m$-adically ideal
separated in characteristic zero, a condition which essentially appears in the
proof of miracle flatness or the local criterion of flatness. If $R$ is
regular of dimension $2$ this implies $R^{+}$ is _Ohm-Rush_ or _intersection
flat_ , notions recently studied by Hochster-Jefferies [HJ21] and Epstein-
Shapiro [ES21]. These raise the interesting questions of whether the same
holds in positive or mixed characteristic and imply that the hypothesis of
Question 1.1 can be vacuous for $i\ll dim(R)$.
This paper is organized as follows. In Section $2$ we state and prove
preliminaries. In Section $3$ we show that $R^{+}$ is $m$-adically ideal
separated and that the hypothesis of the question can be vacuous for $i\ll
dim(R)$. In Section $4$ we prove the main theorem and some corollaries. In
Section $5$ we state natural questions which arise when we try to apply our
techniques in higher dimensions. In Section $6$ we use our techniques to make
remarks on a question of André and Fiorot [AF21] and in particular answer it
under very specific assumptions.
## 2\. Preliminaries
In this section we gather several lemmas which we will need in the future
sections. They are standard and have been used in works surrounding the
homological conjectures and commutative algebra. We also briefly discuss
general background regarding rational singularities, $m$-adic ideal
separatedness and ‘fpqc analogues of splinters’ in the sense of André and
Fiorot.
###### Lemma 2.1.
Let $R$ be a normal characteristic zero domain. Then the map $R\rightarrow
R^{+}$ and consequently $R\rightarrow S$ (where $S$ is any module-finite
domain extension of $R$) splits.
###### Proof.
Let $\theta$ be an element of $R^{+}$ satisfying a minimal polynomial of
degree $d$. It is readily checked that the normalized trace map
$\frac{1}{d}Tr:R[\theta]\rightarrow R$ gives a splitting of $R\rightarrow
R[\theta]$ and hence we have a splitting of $R\rightarrow R^{+}$ by doing this
for every element of $R^{+}$. ∎
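For a concrete illustration (a standard example, not taken from the cited sources): let $R=\mathbb{C}[x,y]$ and $\theta=\sqrt{x}$, so that $d=2$. The normalized trace sends $a+b\sqrt{x}\mapsto a$ for $a,b\in R$; it is $R$-linear and restricts to the identity on $R$, hence it splits the inclusion $R\rightarrow R[\sqrt{x}]$.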
The following lemma is used several times in the proof of our main theorem.
###### Lemma 2.2.
Let $(R,m,k)$ be an excellent local characteristic zero domain. Let $S$ be a
module finite normal extension of $R$, which exists as $R$ is excellent. If
$R$ satisfies the assumption of our main theorem, that is if
$Tor^{R}_{i}(R^{+},k)=0$ for some $i$ then $S$ has projective dimension
$dim(R)-i$ over $R$. In particular if $dim(R)=2$ then $S$ is flat over $R$.
###### Proof.
The above lemma implies that $S\rightarrow S^{+}=R^{+}$ splits as an
$S$-module and hence an $R$-module map. Since $Tor$ is a functor we have
$Tor^{R}_{i}(S,k)$ is a direct summand of $Tor^{R}_{i}(R^{+},k)$ and is hence
equal to $0$. Since $S$ is a finitely generated module over a noetherian local
ring $R$, standard theory of noetherian local rings implies $S$ has finite
flat dimension and hence finite projective dimension. Since $R$ has dimension
$2$ and $S$ has depth $2$, the statement of the Auslander-Buchsbaum theorem
implies that the projective dimension of $S$ is $0$ and hence $S$ is flat over
$R$. ∎
The reader can guess that the above statement allows us to reduce the
statement of the main theorem about the large non-noetherian object $R^{+}$ to
a finitistic statement about finite normal covers of $R$. That is, it is not
hard to see that the above lemma implies that if we have a ring $R$ satisfying
the assumptions of the main theorem (that is if $Tor^{R}_{i}(R^{+},k)=0$ and
$dim(R)=2$) then every normal extension of $R$ is flat over $R$.
Let $P$ be a property of rings. $P$ is said to descend under faithfully flat
maps if for a faithfully flat map of noetherian local rings $R\rightarrow S$,
$S$ is $P$ implies that $R$ is $P$. Many singularities and properties of rings
descend under faithfully flat maps and we refer the reader to the work of
Datta and Murayama [DM20] for an excellent exposition surrounding this topic.
We will need that the properties of a ring being normal, Gorenstein, and
regular descend along faithfully flat maps.
###### Theorem 2.3.
Let $R\rightarrow S$ be a faithfully flat map of noetherian local rings. If
$S$ is normal so is $R$.
###### Proof.
[Mat89, Cor. to Thm. 23.9]. ∎
###### Theorem 2.4.
Let $R\rightarrow S$ be a faithfully flat map of noetherian local rings. If
$S$ is Gorenstein so is $R$.
###### Proof.
[Mat89, Cor. to Thm. 23.4]. ∎
###### Theorem 2.5.
Let $R\rightarrow S$ be a faithfully flat map of noetherian local rings. If
$S$ is regular so is $R$.
###### Proof.
[Mat89, Cor. to Thm. 23.7]. ∎
###### Theorem 2.6.
Let $R\rightarrow S$ be a cyclically pure homomorphism of locally quasi-
excellent $\mathbb{Q}$-algebras, in particular $R\rightarrow S$ can split or
be faithfully flat. If $S$ is pseudo-rational (i.e. has rational
singularities) then so is $R$.
###### Proof.
This is [Mur21, Theorem C] and the theorem under stronger assumptions is due
to Boutot (1987) and Schoutens (2008) and the reader can see Murayama’s work
for references. ∎
It is striking that our proof of a statement about the absolute integral
closure or finite covers involves rational singularities which are defined
using resolutions of singularities, which is a proper birational map and in
particular very far from being a finite map.
###### Definition 2.7.
Let $X$ be an excellent scheme which admits a dualizing complex and let
$f:Y\rightarrow X$ be a resolution of singularities, that is $Y$ is non-
singular and the map $f:Y\rightarrow X$ is proper birational. $X$ is said to
have (resolution) rational singularities if
$Rf_{\ast}\mathcal{O}_{Y}\simeq\mathcal{O}_{X}$; equivalently, if
$f_{\ast}\mathcal{O}_{Y}\simeq\mathcal{O}_{X}$ and $R^{i}f_{\ast}\mathcal{O}_{Y}=0$ for all $i>0$.
Rational singularities are one of the most widely studied singularities in
singularity theory. Indeed to quote Kovacs, “rational singularities are
arguably one of the most mild and useful classes of singularities one can
imagine”. We will need criteria for determining when certain graded rings or
cones over smooth projective varieties have rational singularities.
###### Theorem 2.8.
Let $Y$ be a smooth projective variety of dimension $n$ and let $L$ be an
ample line bundle. The affine cone over $Y$ with conormal $L$ is the affine
algebraic variety
$\displaystyle X=Spec\bigoplus_{m\geq 0}H^{0}(Y,L^{m})$
$X$ has rational singularities if and only if $H^{i}(Y,L^{m})=0$ for every
$i>0$ and for every $m\geq 0$.
###### Proof.
This is classical, see [Kol13, Prop. 3.13]. ∎
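For a concrete illustration (a standard computation, included only as an example): take $Y=\mathbb{P}^{1}$ and $L=\mathcal{O}(2)$. Then $\bigoplus_{m\geq 0}H^{0}(\mathbb{P}^{1},\mathcal{O}(2m))\simeq k[u^{2},uv,v^{2}]\simeq k[x,y,z]/(xz-y^{2})$, and since $H^{i}(\mathbb{P}^{1},\mathcal{O}(2m))=0$ for every $i>0$ and $m\geq 0$, this quadric cone (the $A_{1}$ rational double point) has rational singularities by Theorem 2.8.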
###### Theorem 2.9.
Let $R$ be a normal $\mathbb{N}$-graded ring which is finitely generated over
a field $R_{0}=K$ of characteristic zero. The $a$-invariant of $R$ is the
highest integer $a$ such that $[H^{d}_{m}(R)]_{a}$ is nonzero. Then $R$ has
rational singularities if and only if the open set $Spec(R)\setminus\{m_{R}\}$ has only
rational singularities and the ring $R$ is Cohen-Macaulay with $a(R)<0$.
###### Proof.
This is (by now) classical, see [Wat83, Theorem 2.2]. ∎
Our main results naturally tie in with the notion of dagger closure. Dagger
closure was defined as an attempt to provide a characteristic free ideal
closure operation with properties similar to tight closure. Hochster-Huneke
[HH91, Page 236] remark that a priori, each valuation may give a different
dagger closure, and show that this is not the case if $R$ has positive
characteristic by proving that dagger closure agrees with tight closure
([HH91, Theorem 3.1]) for complete local domains (see [BS12, Proposition 1.8]
where it is shown that dagger closure and tight closure agree for domains
essentially of finite type over an excellent local ring). We do not know if
dagger closure depends on the choice of the valuation if $R$ has
equicharacteristic zero or mixed characteristic. Although tight closure has
been studied extensively in positive characteristic, dagger closure continues
to be mysterious111Some progress towards understanding dagger closure has been
made by Stäbler in his thesis work [Sta10]. and we refer the reader to [RSS07,
Questions 4.1, 4.2] for some basic questions about dagger closure that remain
unanswered. The results of this note can be viewed as an application of the
theory of dagger closure.
###### Definition 2.10 [BS12].
Let $R$ be a domain and $I$ an ideal. Then an element $f$ belongs to the
dagger closure $I^{\dagger}$ of $I$ if for every valuation $v$ of rank at most
one on $R^{+}$ and every positive $\epsilon$ there exists $a\in R^{+}$ with
$v(a)<\epsilon$ and $af\in IR^{+}$.
This definition of dagger closure is slightly more general than the original
definition due to Hochster and Huneke [HH91]. In [HH91] dagger closure is
defined only for complete local domains and under this definition both
coincide since both coincide with tight closure in positive characteristic.
###### Definition 2.11 [BS12].
Let $R$ be a $\mathbb{Q}$-graded domain. The map $v\colon R\setminus\{0\}\rightarrow\mathbb{Q}$
sending $f\in R\setminus\{0\}$ to $\deg f_{i}$, where $f_{i}$ is the minimal homogeneous
component of $f$, induces a valuation on $R$ with values in $\mathbb{Q}$. This valuation
will be referred to as the valuation induced by the grading.
###### Definition 2.12 [BS12].
Let $R$ denote an $\mathbb{N}$-graded domain and let $I$ be an ideal of $R$.
Let $v$ be the valuation on $R^{+GR}$ induced by the grading on $R$. Then an
element $f$ belongs to the graded dagger closure $I^{\dagger GR}$ of an ideal
$I$ if for all positive $\epsilon$ there exists an element $a\in R^{+GR}$ with
$v(a)<\epsilon$ such that $af\in IR^{+GR}$. If $R$ is not a domain we say that
$f\in I^{\dagger GR}$ if $f\in(IR/P)^{\dagger GR}$ for all minimal primes $P$
of $R$ (this is well defined since minimal primes are homogeneous).
We review basics about the notions of $I$-adic separatedness.
###### Definition 2.13.
Let $R$ be a ring, $I$ be an ideal, and $M$ be an $R$-module. Then $M$ is said
to be $I$-adically separated if $\cap_{n\geq 1}I^{n}M=0$.
If $R$ is noetherian and $M$ is finitely generated and if $I$ is a proper
ideal then $M$ is $I$-adically separated. Hence this notion is particularly
interesting for non-noetherian modules.
This notion ($m$-adically separated) is significantly weaker than the notion
of $m$-adically ideal separatedness. For example it is proved by Hochster
[Ho02] (see also [Shi10, Lemma 4.2]) that $R^{+}$ is $m$-adically separated
but, as is discussed in Section 3, $R^{+}$ being $m$-adically ideal separated
(in positive characteristic) would give another proof of Theorem 1.2, whose known
proof a priori uses the deep Cohen-Macaulayness of $R^{+}$.
###### Definition 2.14.
Let $R$ be a noetherian ring, $J$ be an ideal, and $M$ be an $R$-module. Then
$M$ is said to be $J$-adically ideal(wise) separated if $M\otimes I$ is
$J$-adically separated for every ideal $I$ of $R$.
The significance of this definition is in the following theorem known as
‘Local criterion for flatness’. The local criterion of flatness helps in
characterizing étale morphisms.
###### Theorem 2.15.
Let $A$ be a noetherian ring, $I$ an ideal, and $M$ an $I$-adically ideal
separated module. If $Tor_{1}^{A}(A/I,M)=0$ then $M$ is flat as an $A$ module.
This is used in the proof of the following classical theorem due to
Grothendieck, called ‘miracle flatness’ by Brian Conrad.
###### Theorem 2.16.
Let $R\rightarrow S$ be a local ring homomorphism between local Noetherian
rings. If $S$ is flat over $R$, then $dim(S)=dim(R)+dim(S/mS)$ where $m$ is
the maximal ideal of $R$. Conversely, if this dimension equality holds, if $R$
is regular and if $S$ is Cohen–Macaulay (e.g., regular), then $S$ is flat over $R$.
Let us review some basics of ‘fpqc analogues of splinters’ in the sense of
André and Fiorot.
###### Definition 2.17.
Let $X$ be a noetherian affine scheme. $X$ is said to be an ‘fpqc analogue of splinter’ if every finite cover $S$ of $X$ is a covering for the fpqc topology. Equivalently, if $X=Spec(R)$, then for every module finite extension $S$ of $R$ there exists an $S$-algebra $T$ such that $T$ is flat over $R$ and the map $R\rightarrow T$ factors as $R\rightarrow S\rightarrow T$.
It is known that an $F$-finite noetherian ring of positive characteristic is an fpqc analogue of splinter only if it is regular. This follows from Kunz’s theorem. We give a brief sketch of the proof. We have a faithfully flat
algebra $T$ (by assumption) such that $R\rightarrow F_{*}(R)\rightarrow T$ and
by base change we have that $F_{*}(R)\rightarrow T\rightarrow F_{*}(T)$ is
flat and in particular $F_{*}(R)\rightarrow T$ is pure. This implies
$R\rightarrow F_{*}(R)$ is flat. The fact that excellent regular noetherian
rings (of any characteristic) are fpqc analogues of splinters is deep and
follows from the existence of ‘Big Cohen-Macaulay algebras’ in the sense of
Hochster [Ho75], Andre[And18].
## 3\. $m$-adic ideal separatedness and vacuous assumptions
The goal of this section is to prove that the absolute integral closure,
$R^{+}$, of an excellent equi-characteristic zero domain is $m$-adically ideal
separated and to show that the assumptions of the question of Bhatt, Iyengar,
Ma 1.1 can be _vacuous_ under certain mild assumptions. That is to say that
there are _no_ complete local domains $(R,m,k)$ of equi-characteristic zero
such that $Tor_{i}^{R}(R^{+},k)=0$ for $i\leq dim(R)-3$. Here we show this
only when $dim(R)=4$ and leave higher dimensions as an exercise to the
reader. It is easily seen that imposing $i\geq dim(R)$ in Question 1.1 fixes
this issue.
###### Theorem 3.1.
Let $R$ be an excellent (or, more generally, Nagata) local domain of
characteristic zero. Then $R^{+}$ is $m$-adically ideal separated as an
$R$-module.
###### Proof.
It suffices to show (by definition) that $R^{+}\otimes I$ is $m$-adically
separated for every ideal $I$ of $R$. Let $\alpha\in\cap_{n\geq 0}m^{n}(I\otimes R^{+})$. We need to show $\alpha=0$. Let
$\alpha=\sum_{i=1}^{k}r_{i}\otimes m_{i}$ where $r_{i}\in R^{+}$ and $m_{i}\in
I$. We may assume that all $r_{i}$ are contained in a finite normal extension
of $R$, $S$, by simply adjoining $r_{i}$ to $R$ and then taking the
normalization. We used here that $R$ is Nagata which implies that
normalizations are module-finite. Since $S$ has characteristic zero (and $S^{+}=R^{+}$), the maps $S\rightarrow R^{+}$ and $I\otimes S\rightarrow I\otimes R^{+}$ split. $\alpha$ is in $m^{n}(R^{+}\otimes I)$ and hence is in
$m^{n}(S\otimes I)$ for every $n$. This clearly implies $\alpha$ is $0$ as $S$
and $I$ are finitely generated $R$-modules, and hence so is $S\otimes I$. All
finitely generated modules over a noetherian local domain are separated by
Krull’s intersection theorem.
∎
###### Theorem 3.2.
Let $R$ be an equi-characteristic zero complete local domain such that
$dim(R)\geq 4$. Then $Tor_{1}^{R}(R^{+},k)\neq 0$. In particular the
hypothesis of the question of Bhatt, Iyengar, and Ma can be vacuous if one
does not ask for $i\geq dim(R)$ in Question 1.1.
###### Proof.
First assume $R$ is regular. Suppose $Tor_{1}^{R}(R^{+},k)=0$. The local criterion of flatness [Mat89, Theorem 22.3] implies $R\rightarrow R^{+}$ is _flat_. This
implies that $R^{+}$ is Cohen-Macaulay which is well known to be false given
that $R$ has characteristic zero.
When $R$ is not regular we simply notice that the problem reduces to when it
is. We get by the same argument above that $R^{+}$ is flat. Choose a prime
$\mathfrak{p}$ of height at least $3$ such that $R_{\mathfrak{p}}$ is regular.
Such a prime exists by the lemma below. Absolute integral closures localize
and localization of a flat module is flat. Hence we have that
$(R^{+})_{\mathfrak{p}}=(R_{\mathfrak{p}})^{+}$ is flat over
$R_{\mathfrak{p}}$. This implies that $(R_{\mathfrak{p}})^{+}$ is Cohen-Macaulay, which is well known to be false given that $R_{\mathfrak{p}}$ has characteristic zero and dimension at least $3$. See [Bha21,
Introduction, last line of paragraph after Theorem 1.1] and one can follow
[ST21, Proposition 2.4] for an explicit proof. ∎
We thank Kevin Tucker for conversations regarding the lemma below.
###### Lemma 3.3.
Let $(R,m)$ be a noetherian normal local ring of dimension $2$ or more. Then
there exists a prime $\mathfrak{p}$ of height $dim(R)-1$ such that
$R_{\mathfrak{p}}$ is regular.
###### Proof.
Let $I=\cap\{\mathfrak{q}\mid R_{\mathfrak{q}}\text{ is not regular}\}$. If $I=m$ then any prime of height $dim(R)-1$ works, and such a prime exists as complete local domains are catenary. If $I$ is not $m$ and the height of $I$ is $d$, then there are finitely many prime ideals of height $d$ containing $I$, namely the minimal primes of $I$, by primary decomposition. Since $R$ is normal, $d\geq 2$. Then $R$ has infinitely many primes of height $d$ (by [Kap74, Theorem 144]), so one can simply take $\mathfrak{p}$ to be one which is not minimal over $I$ and consequently does not contain $I$. ∎
###### Remark 3.4.
It is clear that the above argument is going to work with, for example, $Tor_{2}^{R}(R^{+},k)=0$ and $dim(R)=5$. While this does not imply $R^{+}$ is _flat_ over $R$, it does show that the flat dimension of $R^{+}$ is $2$ (since it is a direct limit of normal rings $S$ which have flat dimension $2$), which will force the depth of $R^{+}$ to be greater than or equal to $3$ (arguing similarly via the Auslander–Buchsbaum formula). One can localize to primes of height $3$ to see that the depth of $R^{+}$ cannot be greater than $2$. We omit this discussion for the sake of clarity and brevity.
Hence we have the following ‘corrected’ version of the question of Bhatt,
Iyengar, and Ma:
###### Question 3.5.
If $(R,m,k)$ is a noetherian local domain of equi-characteristic zero (i.e.
$\mathbb{Q}\subset R$) and $Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq dim(R)$,
then is $R$ regular?
This raises the following _fascinating_ question in other characteristics.
###### Question 3.6.
Let $R$ be a complete local domain of positive characteristic. Is $R^{+}$
$m$-adically ideal separated? If $R$ is of mixed characteristic is the
$p$-adic completion of $R^{+}$, $\widehat{R^{+}}$ $m$-adically ideal
separated?
Following the arguments of the proof of the theorem above we see that a positive answer to the above question would give ‘another’ proof of the theorem of Bhatt, Iyengar, and Ma, at least for $Tor_{1}$: $Tor_{1}^{R}(R^{+},k)=0$ implies $R^{+}$ is flat, which by ‘Kunz’s theorem’ implies $R$ is regular [BIM19, Theorem 4.7]. However one can check that the proof of the theorem of Bhatt, Iyengar, and Ma (for $Tor_{1}$) uses Cohen-Macaulayness of $R^{+}$, hence we expect the answer to the above question to use deep properties of $R^{+}$. It is easy to see why we ask for the $p$-adic completion of $R^{+}$ in mixed characteristic instead of $R^{+}$ itself: $R^{+}$ is not $m$-adically ideal separated in mixed characteristic, since we know that $Tor_{1}^{R}(R^{+},k)=0$, so if $R^{+}$ were $m$-adically ideal separated the local criterion for flatness would imply $R^{+}$ is flat and hence Cohen-Macaulay. It is well known that $R^{+}$ is not Cohen-Macaulay in mixed characteristic.
The perfection of a ring $R$ of positive characteristic, $R_{perf}$, is $\varinjlim(R\rightarrow R\rightarrow R\rightarrow\cdots)$ where each map is the Frobenius. It is sometimes denoted by $R^{\frac{1}{p^{\infty}}}$ in the literature and conceptually is the large ring containing all $p$-th power roots of elements of $R$. It is easy to see that our proof goes through for algebras
which are a limit of split noetherian ring maps. For example our proof goes
through to show that the perfection of a positive characteristic $F$-split
ring is $m$-adically ideal separated. This raises the question:
###### Question 3.7.
Let $R$ be a noetherian domain of positive characteristic. Is $R_{perf}$
$m$-adically ideal separated?
###### Remark 3.8.
The notion of $m$-adic ideal separatedness is closely related to the notion of
weakly intersection flatness [HJ21, Proposition 5.7 e)] for ideals and Ohm-Rush
modules [ES16], [ES19], [ES21]. While it does not make sense to ask for
intersection flatness of $R^{+}$ in characteristic zero as the results of this
section show that $R^{+}$ is (usually) never flat over $R$, it does make sense
to ask if $R^{+}$ is weakly intersection flat for ideals when $R$ has
characteristic zero and whether ($p$-adic completion of ) $R^{+}$ is Ohm-Rush
or intersection flat if $R$ is regular of positive or mixed characteristics.
These are under investigation.
We avoid stating the definitions of weakly intersection flatness, intersection
flatness, and Ohm-Rush modules as we feel they are technical for the purposes
of this section. We do mention the following proposition due to Hochster and
Jefferies which says, roughly speaking, that when $R$ is a complete local ring
and $S$ is an $R$-flat algebra, $m$-adic ideal separatedness with ideals
replaced by all finitely generated modules is equivalent to intersection
flatness. The forward implication is non-trivial.
###### Proposition 3.9.
(ref. [HJ21, Proposition 5.7 e)]) Let $R$ be a complete local ring and $S$ an $R$-flat algebra. If $S\otimes M$ is $m$-adically separated for all finitely generated modules $M$, then $S$ is intersection flat as an $R$-module.
We make the following observation as an application of our techniques:
###### Theorem 3.10.
Let $R$ be a regular complete local characteristic zero domain of dimension
$2$ (a power series ring in two variables by Cohen structure theorem). Then
$R^{+}$ is _intersection flat_ and Ohm-Rush as an $R$-module.
###### Proof.
This follows from our observation that in characteristic zero $R^{+}$ is a ‘limit of split (noetherian) maps’ and that in dimension $2$ the map $R\rightarrow R^{+}$ is flat when $R$ is regular. More precisely, since $R^{+}$ is Cohen-
Macaulay in dimension $2$ it is flat over $R$ if $R$ is regular, in any
characteristic.
It suffices to show (by [HJ21, Proposition 5.7 e)]) that $R^{+}\otimes M$ is $m$-adically separated for every finitely generated module $M$. Let $\alpha\in\cap_{n\geq 0}m^{n}(R^{+}\otimes M)$. We need to show $\alpha=0$. Let
$\alpha=\sum_{i=1}^{k}r_{i}\otimes m_{i}$ where $r_{i}\in R^{+}$ and $m_{i}\in
M$. We may assume that all $r_{i}$ are contained in a finite normal extension
of $R$, $S$, by simply adjoining $r_{i}$ to $R$ and then taking the
normalization. We used here that $R$ is Nagata which implies that
normalizations are module-finite. Since $R,S,R^{+}$ all have characteristic
zero the maps $S\rightarrow R^{+}$ and $M\otimes S\rightarrow M\otimes R^{+}$
split. $\alpha$ is in $m^{n}(R^{+}\otimes M)$ and hence is in $m^{n}(S\otimes
M)$ for every $n$. This clearly implies $\alpha$ is $0$ as $S$ and $M$ are finitely generated $R$-modules and hence so is $S\otimes M$. All finitely generated modules over a noetherian local ring are separated by Krull’s intersection theorem. ∎
Note that our main result is in a sense a converse to this statement. That is
if $R\rightarrow R^{+}$ is flat (under the stated assumptions on $R$) then $R$
is regular. One can similarly prove that if $R$ is regular and $F$-finite of
positive characteristic then $R_{perf}$ is intersection flat. This fact was
observed independently and earlier by Neil Epstein.
The proposition above raises the question:
###### Question 3.11.
Let $R$ be a regular complete local domain of positive characteristic. Is
$R^{+}$ intersection flat or Ohm-Rush? If $R$ is regular of mixed
characteristic is the $p$-adic completion of $R^{+}$, $\widehat{R^{+}}$
intersection flat or Ohm-Rush?
We have to assume $R$ is regular in the above question as intersection flat / Ohm-Rush modules are flat and $R^{+}$ will be flat over $R$ only if $R$ is regular by the main theorems of the work of Bhatt, Iyengar, and Ma [BIM19, Theorems 4.7, 4.12]. The proofs of our theorems above show that if $R^{+}$ is a ‘limit of split maps’ then the answer is yes; however, this is known to be false in both positive and mixed characteristics. If it were true, it would imply that every complete local domain has a module finite extension which is a splinter; however, splinters are Cohen-Macaulay (this follows from the theorems of Hochster-Huneke and Bhatt on Cohen-Macaulayness of $R^{+}$) and it is known that ‘small Cohen-Macaulay algebras’ do not exist in general [ST21]. Hence it seems that a proof of this statement will require almost mathematics and other non-trivial techniques.
We now show that a positive answer to an old question of Aberbach and Hochster
answers the question of Bhatt, Iyengar, and Ma.
###### Question 3.12 [AH97, Question 3.7].
If $R$ is a complete local domain containing $\mathbb{Q}$ then is the $Tor$
dimension of $R^{+}/m_{R^{+}}$ (as an $R^{+}$ module) equal to $dim(R)$?
We make some remarks on this question. It is fairly straightforward and was
observed in [Pat22, Remark 4.5] that the answer to this question is yes if
$dim(R)=1$. This follows from the fact that under this assumption $R^{+}$ is a
valuation domain. Hence the maximal ideal $m_{R^{+}}$ is flat as an $R^{+}$
module. The projective dimension of $R^{+}/m_{R^{+}}$ however is $2$. We
briefly state the remarks Aberbach and Hochster make regarding Question 3.12.
###### Lemma 3.13.
Let $(R,m)$ be a quasi local ring of dimension $d$ such that every $d$-element
$m$-primary ideal is a regular sequence. If every finitely generated
$m$-primary ideal is contained in a $d$-generated ideal then $Tordim(R/m)\leq
d$.
###### Proof.
This is [AH97, Lemma 3.8]. ∎
###### Corollary 3.14.
Let $(R,m)$ be a complete local domain of dimension $2$ such that every
finitely generated $m$-primary ideal of $R^{+}$ is contained in a
$2$-generated ideal then $Tordim(R^{+}/m_{R^{+}})\leq 2$.
###### Proof.
It is not known whether the conditions of this corollary are satisfied; by an easy induction argument it is enough to check them for ideals generated by $3$ elements. $R^{+}$ is a limit of two dimensional normal and hence Cohen-Macaulay rings. Thus every pair of elements generating a height $2$ ideal in $R^{+}$ is an $R^{+}$-sequence. Now we may apply the above lemma. ∎
###### Lemma 3.15.
If $z$ satisfies a polynomial of the form $X^{n}-f(u,v)$ then $(u,v,z)$ is
contained in a two-generated ideal.
###### Proof.
This is [AH97, Lemma 3.10]. ∎
We observe the following:
###### Proposition 3.16.
Let $R$ be a complete local domain of characteristic zero. Assume that the
answer to Question 3.12 is yes. Then the answer to Question 1.1 is yes.
###### Proof.
$Tor_{i}^{R}(R^{+},k)=0$ implies $Tor_{i}^{R}(S,k)=0$ where $S$ is any normal
module finite extension of $R$ (since $S$ is a splinter if it is normal in
characteristic zero). This implies $S$ has flat dimension at most $i$. Since $R^{+}$ is a direct limit of all such $S$, we have that $R^{+}$ has finite flat dimension (as an $R$-module). Hence $Tor_{i}^{R}(R^{+},k)=0$ implies
$Tor_{j}^{R}(R^{+},k)=0$ for any $j\geq i$. The corresponding statement in
non-zero characteristics follows from homological properties of perfect(oid)
rings and Cohen-Macaulayness of $R^{+}$ when $i\leq dim(R)$. This now follows
from [BIM19, Corollary 2.4] with $S=U=R^{+}$. ∎
###### Remark 3.17.
If one goes through the proof of Theorem 1.2 one can verify that for
$i>dim(R)$ the proof does _not_ use the Cohen-Macaulayness of $R^{+}$ and the
theorem holds for any perfectoid ring, not just $R^{+}$. The (almost) Cohen-
Macaulayness of $R^{+}$ is used for low $i$’s and in particular for $i=1$. At
the time of publication or appearance of the preprint containing [BIM19,
Theorem 4.13] Cohen-Macaulayness of $R^{+}$ was not known in mixed
characteristic in dimension $3$. In particular, at the time of publication, [BIM19, Theorem 4.13 3)] was a theorem with vacuous hypothesis: one needs Cohen-Macaulayness of $R^{+}$ to conclude that $Tor_{i}^{R}(R^{+},k)=0$ when $dim(R)=3$ and $R$ is an excellent regular local ring of mixed characteristic.
It follows from our techniques that $R^{+}$ has flat and projective dimension
$dim(R)-2$ when $R$ is regular. For instance $R^{+}$ is flat when $R$ is
regular of dimension $2$. It also follows that the converse is equivalent to the question of Bhatt, Iyengar and Ma. In other characteristics, $R^{+}$ or its $p$-adic completion is flat over $R$ and hence the flat dimension is $0$. However, astonishingly, the projective dimension of $R^{+}$ as an $R$-module seems to be unknown in other characteristics.
###### Question 3.18.
Let $R$ be a complete local domain of positive characteristic. What is the projective dimension of $R^{+}$ as an $R$-module?
This question has appeared in literature before. André and Fiorot ask whether
there is a countably generated $S$-algebra which is $R$-free given
$R\rightarrow S$ is a finite extension of complete local domains [AF21, Remark
6.3]. If the answer to Question 3.18 is zero then it answers this question
positively.
## 4\. The main theorem
Here is our main theorem, which as promised makes use of almost mathematics
in the sense of the theorem of Roberts, Singh, and Srinivas which states that
the image of $H^{2}_{m}(R)_{\geq 0}$ in $H^{2}_{m}(R^{+})$ is annihilated by
elements of arbitrarily small positive degree (is ‘almost zero’).
###### Theorem A.
Let $R$ be a $2$-dimensional $\mathbb{N}$-graded ring finitely generated over
a characteristic zero field. If $Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq 1$
then $R$ is regular.
###### Proof.
We will first use elementary commutative algebra to show $R$ is normal. Let
$R^{n}$ be the normalization of $R$. Since normal rings are splinters we have
that the map $R^{n}\rightarrow R^{+}$ splits (as $R^{n}$ and hence $R$ module
map). Since $Tor$ is functorial $Tor_{i}^{R}(R^{n},k)$ is a direct summand of
$Tor_{i}^{R}(R^{+},k)=0$ and is $0$. As $R$ is excellent $R^{n}$ is module
finite over $R$ and hence the projective dimension of $R^{n}$ is finite over
$R$. The Auslander–Buchsbaum formula then implies that $R^{n}$ must be free
and in particular flat over $R$. Normality descends along flat maps which
implies that $R=R^{n}$ and hence $R$ is normal.
Now we come to the key innovative idea in this work. Let $S$ be a finite
normal extension of $R$. Since normal rings are splinters we have that the map
$S\rightarrow R^{+}$ splits (as $S$ and hence $R$ module map). Since $Tor$ is
functorial $Tor_{i}^{R}(S,k)$ is a direct summand of $Tor_{i}^{R}(R^{+},k)=0$
and is $0$. Hence $S$ is flat as an $R$-module, and since $R^{+}$ is the direct limit of such $S$ it is flat as well. $R^{+GR}$ is normal and is flat, as it is a direct summand of the flat module $R^{+}$. Let $[\alpha]\in[H^{2}_{m}(R)]_{\geq 0}$ (degree $\geq 0$ part of the top local cohomology). Since $R\rightarrow R^{+GR}$ is flat, local cohomology and consequently its annihilators base change, so we have $ann_{R}([\alpha])\otimes R^{+GR}=ann_{R^{+GR}}([\alpha])$. However [RSS07, Corollary 3.5] states that there are elements of arbitrarily small positive degree in $ann_{R^{+GR}}([\alpha])$, while $ann_{R}([\alpha])\otimes R^{+GR}=ann_{R^{+GR}}([\alpha])$ is clearly finitely generated, so every element in it has degree at least the minimum of the degrees of its generators. Hence $ann_{R^{+GR}}([\alpha])$ contains $1$ and consequently $[\alpha]=0$. It
is well known that this implies $R$ has rational singularities. Commutative
algebraists can observe that this precisely says that the $a$-invariant of $R$
is negative and since $R$ is a normal ring of dimension $2$ the statement
follows from [Wat83, Theorem 2.2]. Algebraic geometers can check that criteria for cones over smooth projective varieties [Kol13] work in this case.
Next, we show that $R$ is Gorenstein. Since $R$ is a rational surface singularity we know that it is $\mathbb{Q}$-Gorenstein [Lip78, Theorem 17.4] and that its cyclic cover $S$ is normal and Gorenstein (standard theory of cyclic covers). Note that $S$ need not have rational singularities (at least we do not claim so). Since normal rings are splinters we have that the map $S\rightarrow R^{+}$ splits (as $S$ and hence $R$ module map). Since $Tor$ is functorial $Tor_{i}^{R}(S,k)$ is a direct summand of $Tor_{i}^{R}(R^{+},k)=0$ and is $0$. Hence $R\rightarrow S$ is flat. Since the property of a ring being Gorenstein descends along a flat local map [Mat89, Theorem 23.4], we have that $R$ is Gorenstein.
The desired conclusion follows from classical work on rational double points: [Pri67] and [Lip78] show that $R$ has a module finite extension $S$ such that $S$ is regular. Arguments in the paragraphs above imply that $S$ is flat over $R$, which implies $R$ is regular since regularity descends under flat maps. ∎
###### Remark 4.1.
Theorem A implies $R$ is in fact a polynomial ring (see [Ser00, Appendix III,
3, Theorem 1]).
We discuss the information our proof gives us and the sharpness of our ideas.
Let us examine it in more detail.
The following propositions follow readily from the proof above:
###### Proposition 4.2.
Let $R$ be an excellent characteristic zero noetherian local domain such that
$Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq 1$. Then $R$ is normal.
###### Proof.
$R$ has a module finite normal extension $S$ and since $S$ has characteristic
zero it is a direct summand of $R^{+}$. Hence we have that
$Tor_{i}^{R}(S,k)=0$. Hence $S$ has finite projective dimension. Let
$\mathfrak{p}$ be a height two prime ideal of $R$. We have that $S_{\mathfrak{p}}$ is normal (normality localizes) and has finite projective dimension over $R_{\mathfrak{p}}$. The Auslander–Buchsbaum formula implies the projective dimension of $S_{\mathfrak{p}}$ over $R_{\mathfrak{p}}$ must be zero, hence $S_{\mathfrak{p}}$ is faithfully flat over $R_{\mathfrak{p}}$, and hence $R_{\mathfrak{p}}$ is normal. This implies $R$ is normal as it is enough to check normality on localizations at height $2$ primes. ∎
###### Proposition 4.3.
Let $R$ be a noetherian characteristic zero complete local domain such that
$Tor_{i}^{R}(R^{+},k)=0$ for some $i\geq 1$. Suppose $R$ is a module finite
direct summand of a regular local ring $S$. Then $R$ is regular.
###### Proof.
Since $S$ is normal and has characteristic zero it is a direct summand of
$R^{+}$. Hence $Tor_{i}^{R}(S,k)=0$ for some $i$ and we have that $S$ has
finite projective dimension over $R$. The Auslander–Buchsbaum formula implies that the projective dimension must be $0$ and consequently $S$ is flat as an $R$-module. Faithfully flat descent of regularity implies $R$ must be regular itself. ∎
###### Theorem 4.4.
Let $(R,m,k)$ be a complete local domain with a characteristic zero
algebraically closed residue field (this implies $R$ has characteristic zero).
If $Tor_{i}^{R}(R^{+},k)=0$ then $R$ is Gorenstein.
###### Proof.
We have that $R$ has a module finite extension $S$ which is local, normal, and
Gorenstein by [ST21, Proposition 2.4]. Since normal rings are splinters we
have that the map $S\rightarrow R^{+}$ splits (as $S$ and hence $R$ module
map). Since $Tor$ is functorial $Tor_{i}^{R}(S,k)$ is a direct summand of
$Tor_{i}^{R}(R^{+},k)=0$ and is $0$. Hence $R\rightarrow S$ is flat. Since the property of a ring being Gorenstein descends along a flat local map [Mat89, Theorem 23.4], we have that $R$ is Gorenstein. ∎
On the way to prove that the ring is regular, three out of the four steps in
our proof use the fact that $R^{+}$ is ‘ind-$X$’ where $X$ is normal,
Gorenstein, and given that it is a rational double point, regular. The
starting point of our result was the preprint of Shimomoto and Tavanfar, which inspired the author to ask whether $R^{+}$ is ‘ind-pseudo-rational’. The answer is no, and for a good reason. Boutot’s theorem implies that a
normal non-pseudorational ring has _no_ module finite extensions which are
pseudo-rational since normal rings are splinters in characteristic zero. Hence
new ideas are required and the theorem of Roberts, Singh, and Srinivas comes
into play. In a sense, one has to use all of $R^{+}$ to show that a ring $R$
satisfying the hypothesis of the question of Bhatt, Iyengar, and Ma has
rational singularities. This suggests deep connections between the absolute
integral closure and rational singularities in all characteristics. In
particular this indicates that dagger closure might give yet another algebraic
characterization of rational singularities in characteristic zero. In
dimension $2$ our proof uses the main theorems about rational surface
singularities. It seems from our proof that certain higher dimensional
analogues of the theorem of Roberts, Singh, and Srinivas (see Section $5$
below) can be used to show $R$ has rational singularities in higher dimensions
if it satisfies the hypothesis of Question 1.1. To show $R$ is regular we
expect theorems such as [MS20, Prop. 3.5] and proof of [MS20, Theorem 3.6] to
be useful.
###### Remark 4.5.
In a similar spirit additional obstructions to having a regular ring as a
module finite extension are studied in Devlin Mallory’s work [Mal21], [Mal22].
For example it follows from an elementary analysis of divisor class groups or Picard groups that
$k[x_{1},x_{2},x_{3},x_{4}]/(x_{1}^{3}+x_{2}^{3}+x_{3}^{3}+x_{4}^{3})$ is not
a module finite direct summand of a polynomial ring whereas one needs to use
the more technical machinery of differential operators to show that it is not
a summand of an arbitrary polynomial ring.
While the answer to Question 1.1 in higher dimensions is currently out of
reach, to answer it for all excellent noetherian domains it seems practical to
make the following definition.
###### Definition 4.6.
A ring $R$ is said to be of BIM-type if every module-finite extension of $R$
has finite projective dimension over it. A ring $R$ is said to be of normal
BIM-type if every module-finite normal extension of $R$ has finite projective
dimension over it.
We expect noetherian domains to be of BIM-type if and only if they are
regular. Question 1.1 precisely asks whether $R$ is regular if $R$ is an
excellent characteristic zero noetherian local domain of normal BIM-type. It
follows from Kunz’s theorem (if $R$ is $F$-finite of positive characteristic) and from [BIM19, Theorem 4.13] (if $R$ is excellent local of mixed characteristic) that $R$ is of BIM-type if and only if it is regular. It
follows from the arguments in this section that excellent noetherian rings of
BIM-type are normal.
We expect the following question to be of importance while trying to extend
positive results for Question 1.1 from nice classes of rings such as complete
local or graded domains to all excellent noetherian local domains.
###### Question 4.7.
What permanence properties does the property of a ring being of BIM-type have?
For excellent noetherian local rings, does this property descend along
faithfully flat maps? Does it ascend along completions?
## 5\. Higher dimensions
We believe our proof of the main theorem provides sufficient evidence to suggest that higher dimensional analogues of [RSS07, Theorem 3.4] and of the theory of rational surface singularities, for example [MS20, Section 3], will make progress towards Question 3.5. In this section we attempt to make this precise, as some of our techniques are clearly restricted to dimension $2$. In particular we formulate several questions for future study. These naturally lead to questions open even in positive characteristic.
The proof of our main theorem uses the assumption that $R$ has dimension $2$
in an essential way. We crucially use that the assumptions of Question 1.1
imply that the map $R\rightarrow R^{+}$ is flat so that we can base change
local cohomology to $R^{+}$. This fails in higher dimensions. It was shown in Section 3 that there are _no_ rings $R$ (under mild assumptions) such that $R\rightarrow R^{+}$ is flat. This motivates us to ask for higher dimensional
analogues of the theorems of Shimomoto and Tavanfar and of Roberts, Singh, and Srinivas, with _depth constraints_. The depth constraints will force maps to be flat (because of the Auslander–Buchsbaum formula).
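To spell out the mechanism (a standard argument, recorded here for the reader's convenience): if $S$ is a module finite extension of a noetherian local ring $R$ with finite projective dimension over $R$ and $depth(S)=depth(R)$, then the Auslander–Buchsbaum formula $pd_{R}(S)=depth(R)-depth_{R}(S)$ forces $pd_{R}(S)=0$, so $S$ is free, and in particular flat, over $R$.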
###### Question 5.1.
Let $R$ be a $\mathbb{N}$-graded domain, finitely generated over a field
$R_{0}$ of characteristic zero. Is there a finite extension of $R$, $S$, such
that $depth(S)=depth(R)$ and $S$ is normal and quasi-Gorenstein?
###### Question 5.2.
Let $R$ be an $\mathbb{N}$-graded domain, finitely generated over a field
$R_{0}$ of characteristic zero. For $i<\dim R$ and $\epsilon\in\mathbb{R}_{\geq 0}$, is there a finite extension $S$ of $R$ such that $depth(S)=depth(R)$ and the image of
$H^{i}_{{\mathfrak{m}}}(R)\to H^{i}_{{\mathfrak{m}}}(S)$
is killed by elements of $R^{+GR}$ of positive degree $d\leq\epsilon$?
and here is the ‘Segre product’ version of the above question:
Let $R$ be an $\mathbb{N}$-graded domain of dimension $d$, finitely generated over a field $R_{0}$ of characteristic zero. For $\epsilon\in\mathbb{R}_{\geq 0}$, is there a finite normal extension $S$ of $R$ such that $depth(S)=depth(R)$ and the image of
$\left[H^{d}_{{\mathfrak{m}}}(R)\right]_{\geq 0}\to H^{d}_{{\mathfrak{m}}}(S)$
is killed by elements of $R^{+GR}$ of arbitrarily small positive degree?
It is possible that for a ring satisfying the assumptions of Question 1.1 a
positive answer to the original question of Roberts, Singh, and Srinivas
(which is precisely Question 5.2 without the depth constraint
$depth(S)=depth(R)$ and $R^{+}$ in place of $S$) gives a positive answer to
Question 5.2, we do not know. However this already yields the following
natural question in positive characteristic which we believe to be
interesting. If true it would be a refinement of a result of Huneke and
Lyubeznik which has had many applications to positive characteristic algebraic
geometry and singularities.
###### Question 5.3.
Let $(R,m)$ be an excellent local domain of positive characteristic. A famous
result of Huneke and Lyubeznik states that there exists a module finite extension $S$ of $R$ such that the map $H^{i}_{m}(R)\rightarrow H^{i}_{m}(S)$ is zero for all $i<dim(R)$. Does there exist such an $S$ such
that $depth(S)=depth(R)$ ?
The condition $depth(S)=depth(R)$ is sharp. It is likely not true that there
will exist an $S$ such that $depth(S)\geq depth(R)+1$. For example Bhatt
shows, using Witt vector cohomology, that cones over abelian surfaces (which
are normal rings of dimension $3$) do not have any module finite Cohen-
Macaulay ring extensions. It is not hard to see that the answer to the above
question is yes if $dim(R)=2$ (by simply taking normalization), if the
Frobenius kills all local cohomologies (i.e. if $R$ is ‘$F$-nilpotent’), or if
$R$ is the cone over an abelian variety (by using the multiplication by $p$
map). We leave these as an exercise to the reader.
###### Remark 5.4.
Even positive answers to the above questions are not enough to conclude properties of $R$ in higher dimensions. For example, to apply our main trick in dimension
$2$ involving the interplay between almost mathematics and flat maps, we need
the extensions in Question 5.2 to form a direct limit system or to dominate
each other. That is, it would be necessary, or at least useful, to have a ring
$P$ satisfying the conditions in Question 5.2 such that $S$ and $T$ map to $P$
for any $S$ and $T$ satisfying the conditions in Question 5.2. We do not make
this precise here for the sake of brevity and clarity.
## 6\. A question of André and Fiorot
The purpose of this section is to demonstrate that a question of André and
Fiorot is related to Question 1.1. We begin by stating the question.
###### Question 6.1 [AF21, Question 10.1].
Which affine Noetherian schemes have the property that every finite covering
is a covering for the fpqc topology? (Equivalently: which Noetherian rings $R$
have the property that for every finite extension $S$, there is an $S$-algebra faithfully flat over $R$?)
We begin with the following observation:
###### Proposition 6.2.
Let $R$ be an excellent noetherian domain which is an “fpqc analogue of splinter”. Then $R$ is normal.
###### Proof.
Let $S$ be the normalization of $R$. By assumption we have a faithfully flat
$R$ algebra $T$ containing $S$. This is easily seen to imply that
$R\rightarrow T$ and hence $R\rightarrow S$ is pure. Since $S$ is normal we
have $R$ is normal. ∎
###### Proposition 6.3.
(see [AF21, Theorem 10.4 2) and 1)]) Let $R$ be an excellent noetherian domain which is an ‘fpqc analogue of splinter’. If $R$ has a module finite extension $S$ which is a regular local ring, then $R$ is regular itself.
###### Proof.
By assumption we have a faithfully flat $R$ algebra $T$ containing $S$.
$R\rightarrow S$ splits since $R\rightarrow T$ is pure. This implies $R$ is
Cohen-Macaulay (and in fact has rational singularities by Boutot’s theorem).
Since $T$ is faithfully flat over $R$ it follows that $T$ is a big Cohen-
Macaulay $R$ algebra. Since $S$ is a module finite extension of $R$. $T$ is
also a big Cohen-Macaulay $S$ algebra. Since $S$ is regular $T$ is faithfully
flat over $S$. Hence $R\rightarrow S$ is faithfully flat and by descent it
follows that $R$ is regular. ∎
Note the parallels to propositions 4.2, 4.3 from Section 4.
###### Remark 6.4.
In fact, the above proposition is true even without the module finite
assumption and follows from a theorem of Bhatt, Iyengar, and Ma. See [AF21,
Theorem 10.4]. We are investigating whether [AF21, Theorem 10.4] can be used
to answer Question 1.1 under these assumptions (that $R$ has a ring extension
$S$ which is regular).
###### Theorem 6.5.
Assume the following:
* •
$R$ is a complete local domain which is the completion of an $\mathbb{N}$-graded ring of dimension $2$ (at the homogeneous maximal ideal).
* •
every faithfully flat $R$ algebra $T$ maps to a permissible Cohen-Macaulay
algebra in the sense of [HH95].
Assume $R$ is an fpqc analogue of a splinter. Then $R$ has rational singularities. Additionally, if $R$ is Gorenstein then it is regular.
###### Proof.
We may assume $R$ itself is the $\mathbb{N}$-graded ring while talking about
local cohomology since it is invariant under completion. Let $\alpha$ be in
$[H^{2}_{m}(R)]_{\geq 0}$ and $\frac{y}{x_{1}...x_{m}}$ a fraction
representing $\alpha$. Let $v$ be the valuation arising from grading.
Translating the cohomological statement from [RSS07, Theorem 3.6] into a statement about ideal containments implies that $R$ has a module finite extension $S$ inside $R^{+GR}$ such that there is an element $z\in R^{+GR}$ with $v(z)<\min_{i}v(y_{i})$, where $y_{i}$ generate the colon ideal $(y:_{R}(x_{1},...,x_{m}))$ and $z\in(y:_{R^{+GR}}(x_{1},...,x_{m}))$. Note
that $R$ is excellent. Since $R$ is an fpqc analogue of a splinter we have a flat
$R$ algebra $T$ which contains $S$ and which maps to a permissible big Cohen-
Macaulay algebra $U$. Since colon ideals base change under flat maps we have
the same containment in $U$. By work of Hochster [Ho94, Theorem 5.6 (a)] and Hochster-Huneke [HH95, Theorem 5.12], $z$ is then contained in the big equational
tight closure of $(y_{1},...,y_{n})$. Hence, it is a fortiori contained in the
integral closure of $(y_{1},...,y_{n})$. Now the result follows by the
characterisation of integral closure in terms of valuations ([HS18, Theorem
10.2.4 a)]). If one considers the valuation induced by the grading, we get that $v(z)\geq\min_{i}v(y_{i})$; however, using the theorem of Roberts, Singh, and Srinivas, we chose $z$ to have valuation lower than $v(y_{i})$ for all $i$.
Hence the colon ideal must be the entire ring and the cohomology class must be
$0$. Hence by [Wat83, Theorem 2.2] we have that the ring has rational
singularities. If in addition $R$ is Gorenstein then it is a rational double
point and the result follows from propositions above (rational double points
have module finite extensions which are regular). ∎
Techniques inspiring the theorem above can be found in [BS12, Lemma 2.2]. This
bolsters our philosophy that our main result and answers to Question 1.1 can be considered as analogues of Kunz’s theorem in characteristic zero, as the fact that ($F$-finite) fpqc analogues of splinters are regular rings in positive characteristic follows from Kunz’s theorem.
###### Remark 6.6.
We do not know whether one can conclude that $R$ is Gorenstein under the two assumptions of Theorem 6.5. Conceptually speaking this is because the
author is not aware of a way to detect Gorensteinness using closure operations
similar to rationality.
###### Remark 6.7.
It is possible that the above proof will go through if we replace permissible big Cohen-Macaulay algebras with Schoutens’s big Cohen-Macaulay algebras [Sch03] (big Cohen-Macaulay algebras which arise as ultraproducts of positive characteristic big Cohen-Macaulay algebras).
###### Remark 6.8.
The assumption ‘maps to a permissible big Cohen-Macaulay algebra’ is perhaps
not that surprising. Due to the absence of Frobenius in characteristic zero
and mixed characteristic authors have had to impose conditions on big Cohen-
Macaulay algebras (to make them more controllable) in the past. See [CLM+22,
footnote on p. 37] and the first arXiv versions of [MS21]. The recent work of T. Yamaguchi [Ya22] tries to relate arbitrary big Cohen-Macaulay algebras in characteristic zero with Schoutens’s big Cohen-Macaulay algebras [Sch03].
###### Remark 6.9.
Fpqc analogues of splinters behave well with ultraproducts and in particular
ascend under completion. However it is not known whether they are compatible
with Hochster’s $\Gamma$ construction and hence we do not know whether one can
omit $F$-finiteness from the positive characteristic statement that $F$-finite
fpqc analogues of splinters are regular rings. We are grateful to S. Lyu for
this remark.
## 7\. Acknowledgements
The author thanks Mohsen Asgharzadeh, Bhargav Bhatt, Neil Epstein, Arnab
Kundu, Shiji Lyu, Linquan Ma, Devlin Mallory, Alapan Mukhopadhyay, Swaraj
Pande, Vaibhav Pandey, Sambit Senapati, Kazuma Shimomoto, Austyn Simpson,
Sridhar Venkatesh and his advisor Kevin Tucker for conversations,
encouragement, friendship, mentorship, useful suggestions, or a reading of the
manuscript. The author especially thanks Wenliang Zhang for bringing his
attention to Theorem 1.2 in Fall 2019 when the mixed characteristic statement
was not known (in dimensions $4$ or more).
## References
* [Abe04] I. M. Aberbach. “The vanishing of $Tor_{1}^{R}(R^{+},k)$ implies that $R$ is regular.” Proc. Amer. Math. Soc. 133 (2004) 27–29. doi: https://doi.org/10.1090/S0002-9947-02-03162-8.
* [Art71] M. Artin. “On the joins of Hensel rings.” Adv. Math 7 (1971), pp. 282–296. doi: https://doi.org/10.1090/S0002-9947-02-03162-8.
* [AH97] Ian M. Aberbach and Melvin Hochster. “Finite Tor dimension and failure of coherence in absolute integral closures” Journal of Pure and Applied Algebra Volume 122, Issue 3, 1997, Pages 171-184, ISSN 0022-4049. doi: https://doi.org/10.1016/S0022-4049(97)00049-2.
* [AF21] Y. André and L Fiorot. “On the canonical, fpqc, and finite topologies on affine schemes. The state of the art” Journal of Pure and Applied Algebra Volume 122, Issue 3, 1997, Pages 171-184, ISSN 0022-4049. doi: https://doi.org/10.1016/S0022-4049(97)00049-2.
* [AL08] I. Aberbach and J. Li. “Asymptotic vanishing conditions which force regularity in local rings of prime characteristic” Math. Res. Lett. (15) 4 (2008), pp. 815–820. mr: 1147957 doi: https://doi.org/10.2307/2946563.
* [And18] Y. André. “La conjecture du facteur direct.”, Publ. Math. Inst. Hautes Etudes Sci. 127 (2018), pp. 71–93. doi: https://doi.org/10.1007/s10240-017-0097-9
* [Bha12] B. Bhatt. Derived splinters in positive characteristic, Compos. Math. 148 (2012), pp. 1757–1786.
* [Bha16] B. Bhatt. (2014). Almost direct summands. Nagoya Mathematical Journal, 214, 195-204. doi:10.1215/00277630-2648180
* [Bha21] B. Bhatt. “Cohen-Macaulayness of absolute integral closures.” Eprint, arxiv:2008.08070v2, October 2021.
* [BIM19] B. Bhatt, S. B. Iyengar, and L. Ma. “Regular rings and perfect(oid) algebras” Communications in Algebra, 47:6, Pages 2367-2383. doi: https://doi.org/10.1080/00927872.2018.1524009.
* [BS12] H. Brenner and A. Stäbler. “Dagger closure in regular rings containing a field.” J. of Algebra, Pages 176-185 from Volume 370 (2012). doi: https://doi.org/10.1016/j.jalgebra.2012.07.043.
* [CLM+22] H. Cai, S. Lee, L. Ma, K. Schwede, and K. Tucker. “Perfectoid signature, perfectoid Hilbert-Kunz multiplicity, and an application to local fundamental groups”. September 2022 Eprint, arxiv:2209.04046.
* [DM20] R. Datta and T. Murayama. “Permanence properties of F-injectivity”, January 2020, Eprint, arxiv.org:1906.11399.
* [ES16] N. Epstein and J. Shapiro. “The Ohm-Rush content function”, J. Algebra Appl. 15 (2016), 1650009.
* [ES19] N. Epstein and J. Shapiro. “The Ohm-Rush content function II”. Noetherian rings, valuation domains, and base change. J. Algebra Appl. 18 (2019), 1950100.
* [ES21] N. Epstein and J. Shapiro. “The Ohm-Rush content function III”. Completion, globalization, and power-content algebras. Eprint, arxiv.org:2008.07616.
* [Har98] N. Hara. “A characterization of rational singularities in terms of injectivity of Frobenius maps”, Amer. J. Math. 120 (1998), no. 5, 981–996. MR1646049 (99h:13005)
* [Ho75] M. Hochster. “Topics in the homological theory of modules over commutative rings”, In “CMBSregional conference series” 24, Amer. Math. Soc. (1975), vii+75.
* [Ho94] M. Hochster. “Tight closure in equal characteristic”, big Cohen-Macaulay algebras, and solid closure, Contemp. Math. 159 (1994), 173–196.
* [Ho02] M. Hochster. “Big Cohen–Macaulay algebras in dimension three via Heitmann’s theorem”, J. Algebra 254 (2002) 395–408.
* [HH91] M. Hochster and C. Huneke. “Tight closure and elements of small order in integral extensions” Journal of Pure and Applied Algebra, Volume 71, Issues 2–3, 1991, Pages 233-247, ISSN 0022-4049, doi: https://doi.org/10.1016/0022-4049(91)90149-V.
* [HH94] M. Hochster and C. Huneke. “Infinite integral extensions and big Cohen-Macaulay algebras” Ann. of Math. (2) 135 (1992), no. 1, pp. 53–89. mr: 1147957 doi: https://doi.org/10.2307/2946563.
* [HH95] M. Hochster and C. Huneke. “Applications of the existence of big Cohen-Macaulay algebras.” Adv. Math. 113.1 (1995), pp. 45–117. doi: https://doi.org/10.1006/aima.1995.1035. mr: https://mathscinet.ams.org/mathscinet-getitem?mr=1332808.
* [HJ21] M. Hochster and J. Jefferies. “Extensions of primes, flatness, and intersection flatness” Commutative Algebra: 150 years with Roger and Sylvia Wiegand, (2021) 63–81.
* [HL07] C. Huneke and G. Lyubeznik. “Absolute integral closure in positive characteristic”. Adv. Math. 210 (2007), 498–504.
* [HS18] C. Huneke and I. Swanson. “Integral Closure of Ideals, Rings, and Modules” LMS Lecture Note Series 336 2018 (online version)
* [Hun96] C. Huneke. “Tight Closure and its Applications”, volume 88 of CBMS Regional Conf. Ser. in Math. Amer. Math. Soc., 1996.
* [Kap74] I. Kaplansky. “Commutative Rings”. The University of Chicago Press.
* [Kol13] János Kollár. Singularities of the minimal model program, volume 200 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 2013. With a collaboration of Sándor Kovács. DOI:10.1017/CBO9781139547895.
* [Kov20] S. J. Kovács. “Rational singularities.” Jul. 23, 2020. https://arxiv.org/abs/1703.02269v8 https://arxiv.org/abs/1703.02269v8.
* [Lip78] J. Lipman. Rational singularities, with applications to algebraic surfaces and unique factorization. Inst. Hautes Etudes Sci. Publ. Math. ´ 36 (1969), 195–279. MR 43:1986
* [Ma18] L. Ma. “The vanishing conjecture for maps of Tor and derived splinters.”, J. Eur. Math. Soc. 20 (2018), no. 2, pp. 315–338
* [Mal21] D. Mallory. “Bigness of the tangent bundle of del Pezzo surfaces and D-simplicity”, Algebra Number Theory 15(8): 2019-2036 (2021). DOI: 10.2140/ant.2021.15.2019.
* [Mal22] D. Mallory. “Homogeneous coordinate rings as direct summands of regular rings”, June 2022. Eprint, arXiv:2206.03621.
* [Mat89] H. Matsumura. Commutative ring theory. Second ed. Translated from the Japanese by M. Reid. Cambridge Stud. Adv. Math., Vol. 8. Cambridge: Cambridge Univ. Press, 1989. doi: https://doi.org/10.1017/CBO9781139171762. mr: https://mathscinet.ams.org/mathscinet-getitem?mr=1011461.
* [Mur21] T. Murayama. Relative vanishing theorems for $\mathbb{Q}$-schemes.
* [MS97] V. B. Mehta and V. Srinivas. “A characterization of rational singularities”, Asian J. Math. 1 (1997), no. 2, 249–271. MR1491985 (99e:13009)
* [MS20] L. Ma and K. Schwede. “A Kunz-type characterization of regular rings via alterations” Journal of Pure and Applied Algebra, Volume 224, Issue 3, 2020, Pages 1124-1131, ISSN 0022-4049, https://doi.org/10.1016/j.jpaa.2019.07.008.
* [MS21] L. Ma and K. Schwede. “Singularities in mixed characteristic via perfectoid big Cohen-Macaulay algebras.” Duke Math. J. 170.13 (2021), pp. 2815–2890. doi: https://doi.org/10.1215/00127094-2020-0082. mr: https://mathscinet.ams.org/mathscinet-getitem?mr=4312190.
* [Pat22] S. Patankar. “Coherence of absolute integral closures.” Proc. Amer. Math. Soc. Ser. B 9 (2022), 75-89.
* [Pri67] D. Prill. “Local classification of quotients of complex manifolds by discontinuous groups.” Duke Math. J. 34 (1967), 375–386, 210944.
* [Wat83] K.i. Watanabe. Rational singularities with k*-action, in: Commutative algebra (Trento, 1981), Lecture Notes in Pure and Appl. Math. 84, Dekker, New York, (1983), 339–351.
* [RSS07] P. Roberts, A. K. Singh, and V. Srinivas. “Annihilators of local cohomology in characteristic zero.” Illinois J. Math. 51 (1) 237 - 254, Spring 2007. doi: https://doi.org/10.1215/ijm/1258735334.
* [Sch03] H. Schoutens. “On the vanishing of Tor of the absolute integral closure.” Journal of Algebra, Volume 275, Issue 2, 2004, Pages 567-574, ISSN 0021-8693, doi: https://doi.org/10.1016/S0021-8693(03)00504-0.
* [Sch12] P. Scholze. “Perfectoid spaces.”, Publ. Math. Inst. Hautes Etudes Sci, 116 (2012), 245–313. 3090258.
* [Shi10] K. Shimomoto. “F-coherent rings with applications to tight closure theory” Journal of Algebra, Volume 338, Issue 1, 2011, Pages 24-34, ISSN 0021-8693. doi: https://doi.org/10.1016/j.jalgebra.2011.05.006.
* [Ser00] J. P. Serre. “Local algebra” Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2000.
* [Smi94] K. Smith. “Tight closure of parameter ideals” Invent Math 115, 41–60 (1994). doi: https://doi.org/10.1007/BF01231753.
* [Sta10] A. Stäbler. Dagger closure, Ph.D. thesis, Universität Osnabrück, 2010
* [ST21] K. Shimomoto and E. Tavanfar. “On local ring without small Cohen-Macaulay algebras in mixed characteristic” October 2021. Eprint, arXiv:2109.12700
* [Ya22] T. Yamaguchi. “Big Cohen-Macaulay test ideals in equal characteristic zero via ultraproducts” July 2022. Eprint, arXiv:2207.04247
# Tree-based Node Aggregation in Sparse Graphical Models
Ines Wilms$^{a}$ and Jacob Bien$^{b}$
$^{a}$ Department of Quantitative Economics, Maastricht University, Maastricht, The Netherlands
$^{b}$ Data Sciences and Operations, University of Southern California, Los Angeles, CA, USA
#### Abstract.
High-dimensional graphical models are often estimated using regularization
that is aimed at reducing the number of edges in a network. In this work, we
show how even simpler networks can be produced by aggregating the nodes of the
graphical model. We develop a new convex regularized method, called the tree-
aggregated graphical lasso or tag-lasso, that estimates graphical models that
are both edge-sparse and node-aggregated. The aggregation is performed in a
data-driven fashion by leveraging side information in the form of a tree that
encodes node similarity and facilitates the interpretation of the resulting
aggregated nodes. We provide an efficient implementation of the tag-lasso by
using the locally adaptive alternating direction method of multipliers and
illustrate our proposal’s practical advantages in simulation and in
applications in finance and biology.
#### Keywords.
aggregation, graphical model, high-dimensionality, regularization, sparsity
## 1 Introduction
Graphical models are greatly useful for understanding the relationships among
large numbers of variables. Yet, estimating graphical models with many more
parameters than observations is challenging, which has led to an active area
of research on high-dimensional inverse covariance estimation. Numerous
methods attempt to curb the curse of dimensionality through regularized
estimation procedures (e.g., Meinshausen and Bühlmann, 2006; Yuan and Lin,
2007; Banerjee et al., 2008; Friedman et al., 2008; Rothman et al., 2008; Peng
et al., 2009; Yuan, 2010; Cai et al., 2011, 2016). Such methods aim for
sparsity in the inverse covariance matrix, which corresponds to graphical
models with only a small number of edges. A common method for estimating
sparse graphical models is the graphical lasso (glasso) (Yuan and Lin, 2007;
Banerjee et al., 2008; Rothman et al., 2008; Friedman et al., 2008), which
adds an $\ell_{1}$-penalty to the negative log-likelihood of a sample of
multivariate normal random variables. While this and many other methods focus
on the edges for dimension reduction, far fewer contributions (e.g., Tan et
al., 2015; Eisenach et al., 2020; Pircalabelu and Claeskens, 2020) focus on
the nodes as a guiding principle for dimension reduction.
Nonetheless, node dimension reduction is becoming increasingly relevant in
many areas where data are being measured at finer levels of granularity. For
instance, in biology, modern high-throughput sequencing technologies provide
low-cost microbiome data at high resolution; in neuroscience, brain activity
in hundreds of regions of interest can be measured; in finance, data at the
individual company level at short time scales are routinely analyzed; and in
marketing, joint purchasing data on every stock-keeping-unit (product) is
recorded. The fine-grained nature of this data brings new challenges. The
sheer number of fine-grained, often noisy, variables makes it difficult to
detect dependencies. Moreover, there can be a mismatch between the resolution
of the measurement and the resolution at which natural meaningful
interpretations can be made. The purpose of an analysis may be to draw
conclusions about entities at a coarser level of resolution than happened to
be measured. Because of this mismatch, practitioners are sometimes forced to
devise ad hoc post-processing steps involving, for example, coloring the nodes
based on some classification of them into groups in an attempt to make the
structure of an estimated graphical model more interpretable and the domain-
specific takeaways more apparent (e.g., Millington and Niranjan, 2019).
Figure 1: Top: True full graph and precision matrix $\boldsymbol{\Omega}$
with corresponding aggregated graph and precision matrix. Middle: Estimation
output of the tag-lasso. Bottom: Estimation output of the glasso.
Our solution to this problem is to incorporate the side information about the
relationship between nodes directly into the estimation procedure. In our
framework, this side information is encoded as a tree whose leaves correspond
to the measured variables. Such tree structures are readily available in many
domains (e.g., taxonomies in biology and hierarchical classifications of jobs,
companies, and products in business) and is well-suited to expressing multi-
resolution structure that is present in many problems. We propose a new convex
regularization procedure, called tag-lasso, which stands for tree-aggregated-
graphical-lasso. This procedure combines node (or variable) aggregation with
edge-sparsity. The tree-based aggregation serves to both amplify the signal of
similar, low-level variables and render a graphical model involving nodes at
an appropriate level of scale to be relevant and interpretable. The edge-sparsity encourages the graphical model involving the aggregated nodes to have a sparse network structure.
Our procedure is based on a tree-based parameterization strategy that
translates the node aggregation problem into a sparse modeling problem,
following an approach previously introduced in the regression setting (Yan and
Bien, 2020). In Figure 1 (to be discussed more thoroughly in Section 4), we
see that tag-lasso is able to recover the aggregated, sparse graph structure.
By doing so, it yields a more accurate estimate of the true graph, and its
output is easier to interpret than the full, noisy graph obtained by the
glasso.
The rest of the paper is organized as follows. Section 2 introduces the tree-
based parameterization structure for nodewise aggregation in graphical models.
Section 3 introduces the tag-lasso estimator, formulated as a solution to a
convex optimization problem, for which we derive an efficient algorithm.
Section 4 presents the results of a simulation study. Section 5 illustrates
the practical advantages of the tag-lasso on financial and microbiome data
sets. Section 6 concludes.
## 2 Node Aggregation in Penalized Graphical Models
Let $\bf S$ be the empirical covariance matrix based on $n$ multivariate
normal observations of dimension $p$, with mean vector $\boldsymbol{\mu}$ and
covariance matrix $\boldsymbol{\Sigma}$. The target of estimation is the
precision matrix $\boldsymbol{\Omega}=\boldsymbol{\Sigma}^{-1}$, whose
sparsity pattern provides the graph structure of the Gaussian graphical model,
since $\Omega_{jk}=0$ is equivalent to variables $j$ and $k$ being
conditionally independent given all other variables. To estimate the precision
matrix, it is common to use a convex penalization method of the form
$\widehat{\boldsymbol{\Omega}}=\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\left\{-\operatorname{logdet}(\boldsymbol{\Omega})+\operatorname{tr}({\bf S}\boldsymbol{\Omega})+\lambda\mathcal{P}(\boldsymbol{\Omega})\ \ \text{s.t.}\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\ \boldsymbol{\Omega}\succ 0\right\},\qquad (1)$
where $\operatorname{tr}(\cdot)$ denotes the trace, $\mathcal{P}(\cdot)$ is a convex penalty function, and $\lambda>0$ is a tuning parameter controlling the degree of penalization. Choosing the $\ell_{1}$-norm
$\mathcal{P}(\boldsymbol{\Omega})=\|\boldsymbol{\Omega}^{-\text{diag}}\|_{1},\qquad (2)$
where $\boldsymbol{\Omega}^{-\text{diag}}$ contains the unique off-diagonal
elements, yields the graphical lasso (glasso) (Friedman et al., 2008; Yuan and
Lin, 2007; Banerjee et al., 2008; Rothman et al., 2008). It encourages
$\widehat{\boldsymbol{\Omega}}$ to be sparse, corresponding to a graphical
model with few edges.
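For readers who want to experiment numerically, the following is a minimal Python sketch fitting the plain glasso of (1)–(2) with scikit-learn's GraphicalLasso. This is an off-the-shelf solver used purely for illustration, not the tag-lasso implementation of this paper, and the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))   # placeholder data; substitute real observations

# GraphicalLasso solves a problem of the form (1) with the l1 penalty (2);
# alpha plays the role of the tuning parameter lambda.
fit = GraphicalLasso(alpha=0.1).fit(X)
Omega_hat = fit.precision_                       # estimated (sparse) precision matrix

edges = np.abs(Omega_hat) > 1e-8                 # nonzero off-diagonal entries = edges
np.fill_diagonal(edges, False)
print(int(edges.sum()) // 2, "estimated edges")
```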
However, when $\boldsymbol{\Omega}$ is not sparse, demanding sparsity in
$\widehat{\boldsymbol{\Omega}}$ may not be helpful, as we will show in Section
2.1. Such settings can arise when data are measured and analyzed at ever
higher resolutions (a growing trend in many areas, see e.g. Callahan et al.
2017). A tree is a natural way to represent the different scales of data
resolution, and we introduce a new choice for $\mathcal{P}$ that uses this
tree to guide node aggregation, thereby allowing for a data adaptive choice of
data scale for capturing dependencies. Such tree-based structures are
available in many domains. For instance, companies can be aggregated according
to hierarchical industry classification codes; products can be aggregated from
brands towards product categories; brain voxels can be aggregated according to
brain regions; microbiome data can be aggregated according to taxonomy. The
resulting penalty function then encourages a more general and yet still highly
interpretable structure for $\widehat{\boldsymbol{\Omega}}$. In the following
subsection, we use a toy example to illustrate the power of such an approach.
### 2.1 Node Aggregation
Consider a toy example with $p$ variables
$X_{1}=\sum_{j=3}^{p}X_{j}+\varepsilon_{1},\qquad X_{2}=\sum_{j=3}^{p}X_{j}+\varepsilon_{2},\qquad X_{j}=\varepsilon_{j},\ \text{for}\ 3\leq j\leq p,$
where $\varepsilon_{1},\ldots,\varepsilon_{p}$ are independent standard normal
random variables. By construction, it is clear that there is a very simple
relationship between the variables: The first two variables both depend on the
sum of the other $p-2$ variables. However, a standard graphical model on the
$p$ variables does not naturally express this simplicity. The first row of
Table 1 shows the covariance and precision matrices for the full set of
variables $X_{1},\ldots,X_{p}$. The graphical model on the full set of variables is extremely dense, with $O(p^{2})$ edges. Imagine if instead we could
form a graphical model with only three variables: $X_{1},X_{2},\widetilde{X}$,
where the last variable $\widetilde{X}=\sum_{j=3}^{p}X_{j}$ aggregates all but
the first two variables. The bottom row of Table 1 shows the resulting graphical model, which matches the simplicity of the situation.
The lack of sparsity in the $p$-node graphical model means that the graphical
lasso will not do well. Nonetheless, a method that could perform node
aggregation would be able to yield a highly-interpretable aggregated sparse
graphical model since $X_{1}$ and $X_{2}$ are conditionally independent given
the aggregated variable $\widetilde{X}$.
Table 1: Toy example: Covariance and precision matrices with corresponding
graphical model (drawn for $p=50$) for the full (top) and aggregated (bottom)
set of nodes.
Nodes | Covariance Matrix $\boldsymbol{\Sigma}$ | Precision Matrix $\boldsymbol{\Omega}$ | Graphical Model
---|---|---|---
$X_{1},\ldots,X_{p}$ | ${\footnotesize\begin{pmatrix}p-1&p-2&{\bf 1}_{p-2}^{\top}\\\ p-2&p-1&{\bf 1}_{p-2}^{\top}\\\ {\bf 1}_{p-2}&{\bf 1}_{p-2}&{\bf I}_{p-2}\end{pmatrix}}$ | ${\footnotesize\begin{pmatrix}1&0&-{\bf 1}_{p-2}^{\top}\\\ 0&1&-{\bf 1}_{p-2}^{\top}\\\ -{\bf 1}_{p-2}&-{\bf 1}_{p-2}&{\bf L}\end{pmatrix}}$ with ${\bf L}={\bf I}_{p-2}+2\cdot{\bf 1}_{p-2}{\bf 1}_{p-2}^{\top}$ |
$X_{1},X_{2},\widetilde{X}$ | ${\footnotesize\begin{pmatrix}p-1&p-2&p-2\\\ p-2&p-1&p-2\\\ p-2&p-2&p-2\end{pmatrix}}$ | ${\footnotesize\begin{pmatrix}1&0&-1\\\ 0&1&-1\\\ -1&-1&2+1/(p-2)\end{pmatrix}}$ |
Note: ${\bf 1}_{d}$ denotes a $d$-dimensional column vector of ones, and ${\bf I}_{d}$ the $d\times d$ identity matrix.
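As a quick sanity check of Table 1, one can build both covariance matrices for a small $p$ and invert them. The following is a minimal base-R sketch (not part of any package accompanying the paper):

```r
# Numerical check of Table 1 for a small p; base R only.
p <- 6
Sigma <- rbind(c(p - 1, p - 2, rep(1, p - 2)),
               c(p - 2, p - 1, rep(1, p - 2)),
               cbind(rep(1, p - 2), rep(1, p - 2), diag(p - 2)))
round(solve(Sigma), 3)     # dense precision matrix: rows/columns 3..p are fully connected

Sigma_agg <- rbind(c(p - 1, p - 2, p - 2),
                   c(p - 2, p - 1, p - 2),
                   c(p - 2, p - 2, p - 2))
round(solve(Sigma_agg), 3) # sparse: X1 and X2 are conditionally independent given X~
```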
It is useful to map from the small aggregated graphical model to the original
$p$-node graphical model. One does so by writing the precision matrix in
“$G$-block” format (Bunea et al., 2020, although they introduce this
terminology in the context of the covariance matrix, not its inverse) for a
given partition $G=\\{G_{1},...,G_{K}\\}$ of the nodes $\\{1,\ldots,p\\}$ and
corresponding $p\times K$ membership matrix ${\bf M}$, with entries $M_{jk}=1$
if $j\in G_{k}$, and $M_{jk}=0$ otherwise. In particular, there exists a
$K\times K$ symmetric matrix ${\bf C}$ and a $p\times p$ diagonal matrix ${\bf
D}$ such that the precision matrix can be written as $\boldsymbol{\Omega}={\bf
M}{\bf C}{\bf M}^{\top}+{\bf D}$. The block structure of $\boldsymbol{\Omega}$ is captured by the first part of this decomposition; the aggregated $K\times K$ precision matrix on the set of aggregated nodes can then be written as
$\boldsymbol{\Omega}_{\text{agg}}={\bf C}+{\bf D}_{\text{agg}},$ where ${\bf
D}_{\text{agg}}=({\bf M}^{\top}{\bf D}^{-1}{\bf M})^{-1}$ is diagonal. In the
above example, $K=3$, $G_{1}=\\{1\\},\ G_{2}=\\{2\\},\ G_{3}=\\{3,\ldots,p\\}$
and ${\bf M}{\bf C}{\bf M}^{\top}$ has only three distinct rows/columns since
the aggregated variables $j=3,\ldots,p$ share all their entries. In the
presence of node aggregation and edge sparsity, the graphical model
corresponding to the aggregated precision matrix is far more parsimonious than
the graphical model on the full precision matrix (see Table 1).
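To make the mapping concrete, the following base-R sketch forms $\boldsymbol{\Omega}={\bf M}{\bf C}{\bf M}^{\top}+{\bf D}$ and its aggregated counterpart for the toy example; the specific ${\bf M}$, ${\bf C}$, and ${\bf D}$ below are chosen by hand for illustration only:

```r
# G-block construction for the toy example: G = {{1}, {2}, {3,...,p}}.
p <- 6; K <- 3
G <- c(1, 2, rep(3, p - 2))                         # group label of each variable
M <- outer(G, 1:K, "==") * 1                        # p x K membership matrix
C <- rbind(c(0, 0, -1), c(0, 0, -1), c(-1, -1, 2))  # K x K symmetric block component
D <- diag(p)                                        # diagonal component (here D = I_p)
Omega     <- M %*% C %*% t(M) + D                   # matches the full precision matrix in Table 1
Omega_agg <- C + solve(t(M) %*% solve(D) %*% M)     # matches the aggregated precision in Table 1
```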
As motivated by this example, our main goal is to estimate the precision
matrix in such a way that we can navigate from a $p$-dimensional problem to a
$K$-dimensional problem whose corresponding graphical model provides a simple
description of the conditional dependency structure among $K$ aggregates of
the original variables. In the following proposition, we show that this can be
accomplished by looking for a precision matrix that has a $G$-block structure.
The proof of the proposition is included in Appendix A.
###### Proposition 2.1.
Suppose ${\bf X}\sim N_{p}({\bf 0},\boldsymbol{\Omega}^{-1})$ with
$\boldsymbol{\Omega}={\bf M}{\bf C}{\bf M}^{\top}+{\bf D}$, where ${\bf
M}\in\\{0,1\\}^{p\times K}$ is the membership matrix, $\bf D\succ 0$, and let
${\bf\widetilde{X}}={\bf M}^{\top}{\bf X}\in\mathds{R}^{K}$ be the vector of
aggregated variables. Then ${\bf\widetilde{X}}$ has precision matrix ${\bf
C}+{\bf D}_{\text{agg}}$, where ${\bf D}_{\text{agg}}$ is a diagonal matrix,
and therefore $c_{ij}=0$ is equivalent to the aggregates $\widetilde{X}_{i}$
and $\widetilde{X}_{j}$ being conditionally independent given all other
aggregated variables.
While Proposition 2.1 gives us the desired interpretation in the graphical model with $K$ aggregated nodes, in practice the partition $G$, its size $K$, and the corresponding membership matrix ${\bf M}$ are unknown.
Rather than considering arbitrary partitions of the variables, we constrain
ourselves specifically to partitions guided by a known tree. In so doing, we
allow ourselves to exploit side information and help ensure that the
aggregated nodes will be easily interpretable. To this end, we introduce a
tree-based parameterization strategy that allows us to embed the node
dimension reduction into a convex optimization framework.
### 2.2 Tree-Based Parameterization
Our aggregation procedure assumes that we have, as side information, a tree
that represents the closeness (or similarity) of variables. We introduce here
a matrix-valued extension of the tree-based parameterization developed in Yan
and Bien (2020) for the regression setting. We consider a tree $\mathcal{T}$
with $p$ leaves $\boldsymbol{\Omega}_{1},\ldots,\boldsymbol{\Omega}_{p}$, where $\boldsymbol{\Omega}_{j}$ denotes the $j^{\text{th}}$ column ($1\leq j\leq p$) of $\boldsymbol{\Omega}$. We restrict ourselves to partitions that can be
expressed as a collection of branches of $\mathcal{T}$. Newly aggregated nodes
are then formed by summing variables within branches. To this end, we assign a
$p$-dimensional parameter vector ${\boldsymbol{\gamma}}_{u}$ to each node $u$
in the tree $\mathcal{T}$ (see Figure 2 for an example). Writing the set of
nodes in the path from the root to the $j^{\text{th}}$ leaf (variable) as
$\text{ancestor}(j)\cup\\{j\\}$, we express each column/row in the precision
matrix as
$\boldsymbol{\Omega}_{j}=\sum_{u\in\text{ancestor}(j)\cup\\{j\\}}\boldsymbol{\gamma}_{u}+d_{j}{\bf
e}_{j},$ (3)
where we sum over all the $\boldsymbol{\gamma}_{u}$’s along this path, and
${\bf e}_{j}$ denotes the $p$-dimensional vector with all zeros except for its
$j^{\text{th}}$ element that is equal to one. In the remainder, we will make
extensive use of the more compact notation $\boldsymbol{\Omega}={\bf
A}\boldsymbol{\Gamma}+{\bf D},$ where ${\bf
A}\in\\{0,1\\}^{p\times|\mathcal{T}|}$ is a binary matrix with
$A_{jk}=1{\\{u_{k}\in\text{ancestor}(j)\cup\\{j\\}\\}}=1{\\{j\in\text{descendant}(u_{k})\cup\\{u_{k}\\}\\}}$,
$\boldsymbol{\Gamma}$ is a $|\mathcal{T}|\times p$ parameter matrix collecting
the $\boldsymbol{\gamma}_{u}$’s in its rows and ${\bf D}$ is a diagonal
parameter matrix with elements $d_{1},\ldots,d_{p}$.
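The following base-R sketch illustrates this parameterization on the $p=5$ tree of Figures 2–3; the node ordering, the particular $\boldsymbol{\gamma}_{u}$ values, and the diagonal entries are arbitrary choices made only for illustration:

```r
# Tree-based parameterization Omega = A Gamma + D for the p = 5 example.
# Nodes of T: leaves u1,...,u5 and internal nodes u6 = {1,2,3}, u7 = {4,5}, u8 = root {1,...,5}.
p <- 5
leaf_sets <- list(1, 2, 3, 4, 5, 1:3, 4:5, 1:5)              # leaves descending from each node
A <- sapply(leaf_sets, function(s) as.numeric(1:p %in% s))   # p x |T| binary matrix
Gamma <- matrix(0, nrow = length(leaf_sets), ncol = p)       # |T| x p, row u holds gamma_u
Gamma[6, ] <- c(1, 1, 1, 0, 0)       # gamma_{1:3} non-zero
Gamma[8, ] <- rep(0.5, p)            # gamma_{1:5} (root) non-zero
D <- diag(rep(2, p))
Omega <- A %*% Gamma + D             # rows/columns 1-3 coincide off-diagonal, as do rows/columns 4-5
```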
Figure 2: An example of a tree $\mathcal{T}$ encoding similarity among $p=5$
variables. Figure 3: Left: An example of a $5\times 5$-dimensional
$\boldsymbol{\Omega}$ and a tree $\mathcal{T}$ that relates the corresponding
$p=5$ variables. We have
$\boldsymbol{\Omega}_{i}={\boldsymbol{\gamma}}_{i}+\boldsymbol{\gamma}_{1:3}+\boldsymbol{\gamma}_{1:5}$
for $i=1,2,3$ and
$\boldsymbol{\Omega}_{j}={\boldsymbol{\gamma}}_{j}+\boldsymbol{\gamma}_{4:5}+\boldsymbol{\gamma}_{1:5}$
for $j=4,5$, by equation (3), ignoring the diagonal elements. Middle: By
zeroing out the $\boldsymbol{\gamma}_{i}$’s in the gray nodes, we aggregate
the rows/columns of $\boldsymbol{\Omega}$ into two groups indicated by the two
colors:
$\boldsymbol{\Omega}_{1}=\boldsymbol{\Omega}_{2}=\boldsymbol{\Omega}_{3}=\boldsymbol{\gamma}_{1:3}+\boldsymbol{\gamma}_{1:5}$
(blue) and
$\boldsymbol{\Omega}_{4}=\boldsymbol{\Omega}_{5}=\boldsymbol{\gamma}_{1:5}$
(red). Right: The precision matrix $\boldsymbol{\Omega}$ thus has a block-
structure.
By zeroing out $\boldsymbol{\gamma}_{u}$’s, certain nodes will be aggregated,
as can be seen from the illustrative example in Figure 3. More precisely, let
${\mathcal{Z}}=\\{u:{{\boldsymbol{\gamma}}}_{u}\neq{\bf 0}\\}$ denote the set
of non-zero rows in ${\boldsymbol{\Gamma}}$ and let ${\bf A}_{{\mathcal{Z}}}$
be the sub-matrix of ${\bf A}$ where only the columns corresponding to the
non-zero rows in ${{\boldsymbol{\Gamma}}}$ are kept. The number of blocks $K$
in the aggregated network is then given by the number of unique rows in ${\bf
A}_{{\mathcal{Z}}}$. The membership matrix $\bf M$ (Section 2.1), and hence
the set of aggregated nodes, can then be derived from the variables (rows) in
the matrix ${\bf A}_{{\mathcal{Z}}}$ that share all their row-entries.
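Continuing the illustrative sketch above, the partition implied by a fitted $\boldsymbol{\Gamma}$ can be read off from the unique rows of ${\bf A}_{\mathcal{Z}}$ (base R, names ours):

```r
# Recover the aggregation structure from the non-zero rows of Gamma.
Z   <- which(rowSums(Gamma != 0) > 0)          # indices of non-zero rows of Gamma
A_Z <- A[, Z, drop = FALSE]                    # columns of A kept after zeroing out gamma_u's
row_key <- apply(A_Z, 1, paste, collapse = "")
groups  <- match(row_key, unique(row_key))     # variables sharing a row of A_Z are aggregated
K <- length(unique(groups))                    # here K = 2: {1,2,3} and {4,5}
M <- outer(groups, seq_len(K), "==") * 1       # p x K membership matrix of Section 2.1
```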
We are now ready to introduce the tag-lasso, which is based on this
parameterization.
## 3 Tree Aggregated Graphical lasso
To achieve dimension reduction via node aggregation and edge sparsity
simultaneously, we extend optimization problem (1) by incorporating the
parameterization introduced above. Our estimator, called the tag-lasso, is
defined as
$(\widehat{\boldsymbol{\Omega}},\widehat{\boldsymbol{\Gamma}},\widehat{\boldsymbol{D}})=\underset{\boldsymbol{\Omega},\boldsymbol{\Gamma},\boldsymbol{D}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})+\lambda_{1}\|\boldsymbol{\Gamma}_{-r}\|_{2,1}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}}\|_{1}\ \ \text{s.t.}\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\ \boldsymbol{\Omega}\succ{\bf 0},\ \boldsymbol{\gamma}_{r}=\gamma{\bf 1}_{p},\ \boldsymbol{\Omega}={\bf A}\boldsymbol{\Gamma}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p\\},$ (4)
with
$\|\boldsymbol{\Gamma}_{-r}\|_{2,1}=\sum_{u\in\mathcal{T}_{-r}}\|\boldsymbol{\gamma}_{u}\|_{2}$
and $\mathcal{T}_{-r}$ being the set of all nodes in $\mathcal{T}$ other than
the root. This norm induces row-wise sparsity on all non-root rows of
${\boldsymbol{\Gamma}}$. This row-wise sparsity, in turn, induces node
aggregation as explained in Section 2.2. The root is excluded from this
penalty term so that in the extreme of large $\lambda_{1}$ one gets complete
aggregation but not necessarily sparsity (in this extreme, all off-diagonal
elements of $\widehat{\boldsymbol{\Omega}}$ are equal to the scalar $\gamma$
that appears in the equality constraint involving $\boldsymbol{\gamma}_{r}$).
While $\lambda_{1}$ controls the degree of node aggregation, $\lambda_{2}$
controls the degree of edge sparsity. When $\lambda_{1}=0$, the optimization
problem in (4) reduces to the glasso.
Finally, note that optimization problem (4) fits into the general formulation
of penalized graphical models given in (1) since it can be equivalently
expressed as
$\widehat{\boldsymbol{\Omega}}=\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf
S}\boldsymbol{\Omega})+\lambda_{1}\mathcal{P}_{\text{aggregate}}(\boldsymbol{\Omega})+\lambda_{2}\mathcal{P}_{\text{sparse}}(\boldsymbol{\Omega})\
\text{s.t.}\
\boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\boldsymbol{\Omega}\succ{\bf
0}\\},$
where
$\mathcal{P}_{\text{aggregate}}(\boldsymbol{\Omega})=\underset{\boldsymbol{\Gamma},{\bf
D}}{\operatorname{min}}\ \\{\|\boldsymbol{\Gamma}_{-r}\|_{2,1}\ \text{s.t.}\
\boldsymbol{\gamma}_{r}=\gamma{\bf 1}_{p},\ \boldsymbol{\Omega}={\bf
A}\boldsymbol{\Gamma}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\
\text{for}\ j=1,\ldots,p\\}$
and $\mathcal{P}_{\text{sparse}}(\boldsymbol{\Omega})$ is the $\ell_{1}$-norm
defined in (2).
### 3.1 Locally Adaptive Alternating Direction Method of Multipliers
We develop an alternating direction method of multipliers (ADMM) algorithm
(Boyd et al., 2011), specifically tailored to solving (4). Our ADMM algorithm
is based on solving this equivalent formulation of (4):
$\underset{\boldsymbol{\Omega},\boldsymbol{\Omega}^{(1)},\boldsymbol{\Omega}^{(2)},\boldsymbol{\Omega}^{(3)},\boldsymbol{\Gamma},\boldsymbol{\Gamma}^{(1)},\boldsymbol{\Gamma}^{(2)},\boldsymbol{D}}{\operatorname{min}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}+\lambda_{2}\|{\boldsymbol{\Omega}^{(3)}}^{-\text{diag}}\|_{1}\ \ \text{s.t.}\ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\ \boldsymbol{\Omega}^{(1)}\succ{\bf 0},\ \boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p},\ \boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{(1)}=\boldsymbol{\Omega}^{(2)}=\boldsymbol{\Omega}^{(3)}\ \text{and}\ \boldsymbol{\Gamma}=\boldsymbol{\Gamma}^{(1)}=\boldsymbol{\Gamma}^{(2)}\\}.$ (5)
Additional copies of $\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$ are
introduced to efficiently decouple the optimization problem.
Furthermore, we use an extension called locally adaptive-ADMM (LA-ADMM, Xu et
al., 2017) with adaptive penalization to improve performance. The full details
of the algorithm are provided in Appendix B.
### 3.2 Selection of the Tuning Parameters
To select the tuning parameters $\lambda_{1}$ and $\lambda_{2}$, we form a
$10\times 10$ grid of ($\lambda_{1},\lambda_{2}$) values and find the pair
that minimizes a 5-fold cross-validated likelihood-based score,
$\frac{1}{5}\sum_{k=1}^{5}\left\\{-\text{logdet}(\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}})+\text{tr}({\bf
S}_{\mathcal{F}_{k}}\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}})\right\\},$
(6)
where $\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ is an estimate of the
precision matrix trained while withholding the samples in the $k^{\text{th}}$
fold and ${\bf S}_{\mathcal{F}_{k}}$ is the sample covariance matrix computed
on the $k^{\text{th}}$ fold. In particular, we take
$\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ to be a re-fitted version
of our estimator (e.g., Belloni and Chernozhukov, 2013). After fitting the
tag-lasso, we obtain
$\widehat{\mathcal{Z}}=\\{u:\widehat{{\boldsymbol{\gamma}}}_{u}\neq{\bf
0}\\},$ the set of non-zero rows in $\widehat{{\boldsymbol{\Gamma}}}$, which
suggests a particular node aggregation; and
$\widehat{\mathcal{P}}=\\{(i,j):\widehat{{\boldsymbol{\Omega}}}_{ij}\neq{\bf
0}\\},$ the set of non-zero elements in $\widehat{{\boldsymbol{\Omega}}}$,
which suggests a particular edge sparsity structure. We then re-estimate
$\boldsymbol{\Omega}$ by maximizing the likelihood subject to these
aggregation and sparsity constraints:
$\underset{\boldsymbol{\Omega},\boldsymbol{\Gamma}_{\widehat{\mathcal{Z}}},\boldsymbol{D}}{\operatorname{min}}\ -\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})\ \ \text{subject to}\ \ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\ \boldsymbol{\Omega}\succ{\bf 0},\ \boldsymbol{\gamma}_{\widehat{\mathcal{Z}},r}=\gamma{\bf 1}_{p},\ \boldsymbol{\Omega}={\bf A}_{\widehat{\mathcal{Z}}}\boldsymbol{\Gamma}_{\widehat{\mathcal{Z}}}+{\bf D},\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\ \boldsymbol{\Omega}_{ij}=0\ \text{for}\ (i,j)\notin\widehat{\mathcal{P}}.$ (7)
We solve this with an LA-ADMM algorithm similar to what is described in
Section 3.1 and Appendix B.
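To make the selection criterion concrete, here is a minimal base-R sketch of the per-fold score averaged in (6); `Omega_hat` stands for the re-fitted estimate trained without fold $k$ and `S_fold` for that fold's sample covariance matrix (both names are ours):

```r
# Likelihood-based validation score for a single fold, cf. equation (6).
fold_score <- function(Omega_hat, S_fold) {
  logdet <- as.numeric(determinant(Omega_hat, logarithm = TRUE)$modulus)
  -logdet + sum(diag(S_fold %*% Omega_hat))
}
# The cross-validation criterion averages fold_score over the five folds.
```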
### 3.3 Connections to Related Work
Combined forms of dimension reduction in graphical models can be found in,
amongst others, Chandrasekaran et al. (2012); Tan et al. (2015); Eisenach et
al. (2020); Brownlees et al. (2020); Pircalabelu and Claeskens (2020).
Chandrasekaran et al. (2012) consider a blend of principal component analysis
with graphical modeling by combining sparsity with a low-rank structure. Tan
et al. (2015) and Eisenach et al. (2020) both propose two-step procedures that
first cluster variables in an initial dimension reduction step and
subsequently estimate a cluster-based graphical model. Brownlees et al. (2020)
introduce partial correlation network models with community structures but
rely on the sample covariance matrix of the observations to perform spectral
clustering. Our procedure differs from these works by introducing a single
convex optimization problem that simultaneously induces aggregation and edge
sparsity for the precision matrix.
Our work is most closely related to Pircalabelu and Claeskens (2020) who
estimate a penalized graphical model and simultaneously classify nodes into
communities. However, Pircalabelu and Claeskens (2020) do not use tree-based node aggregation. Our approach, in contrast, treats the tree $\mathcal{T}$ as an integral part of the problem: it guides the extent of node aggregation, and hence the number of aggregated nodes (i.e. clusters, communities, or blocks) $K$, in a data-driven way.
## 4 Simulations
We investigate the advantages of jointly exploiting node aggregation and edge
sparsity in graphical models. To this end, we compare the performance of the
tag-lasso to two benchmarks:
1. (i)
oracle: The aggregated, sparse graphical model in (7) is estimated subject to
the true aggregation and sparsity constraints. The oracle is only available
for simulated data and serves as a “best case” benchmark.
2. (ii)
glasso: This does not perform any aggregation (corresponding to the tag-lasso
with $\lambda_{1}=0$). A sparse graph on the full set of variables is
estimated. The glasso is computed using the same LA-ADMM algorithm as detailed
in Appendix B. The tuning parameter is selected from a 10-dimensional grid as
the value that minimizes the 5-fold cross-validation likelihood-based score in
equation (6) with $\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ taken to
be the glasso estimate.
All simulations were performed using the simulator package (Bien, 2016) in R
(R Core Team, 2017). We evaluate the estimators in terms of three performance
metrics: estimation accuracy, aggregation performance, and sparsity recovery.
We evaluate estimation accuracy by averaging over many simulation runs the
Kullback-Leibler (KL) distance
$\text{KL}=-\text{logdet}(\boldsymbol{\Sigma}\widehat{\boldsymbol{\Omega}})+\text{tr}(\boldsymbol{\Sigma}\widehat{\boldsymbol{\Omega}})-p,$
where $\boldsymbol{\Sigma}=\boldsymbol{\Omega}^{-1}$ is the true covariance
matrix. Note that the KL distance is zero if the estimated precision matrix
equals the true precision matrix.
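A minimal base-R sketch of this metric (the function and argument names are ours):

```r
# Kullback-Leibler distance between the true covariance and an estimated precision matrix.
kl_distance <- function(Sigma_true, Omega_hat) {
  M <- Sigma_true %*% Omega_hat
  -as.numeric(determinant(M, logarithm = TRUE)$modulus) + sum(diag(M)) - nrow(M)
}
# kl_distance(Sigma, solve(Sigma)) returns 0 for any positive definite Sigma.
```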
To evaluate aggregation performance, we use two measures: the Rand index
(Rand, 1971) and the adjusted Rand index (Hubert and Arabie, 1985). Both
indices measure the degree of similarity between the true partition on the set
of nodes $\\{1,\ldots,p\\}$ and the estimated partition. The Rand index ranges
from zero to one, where one means that both partitions are identical. The
adjusted Rand index performs a re-scaling to account for the fact that random
chance will cause some variables to occupy the same group.
Finally, to evaluate sparsity recovery, we use the false positive and false
negative rates
$\text{FPR}=\frac{\\#\\{(i,j):\widehat{\Omega}_{ij}\neq 0\ \text{and}\ \Omega_{ij}=0\\}}{\\#\\{(i,j):\Omega_{ij}=0\\}}\ \ \text{and}\ \ \text{FNR}=\frac{\\#\\{(i,j):\widehat{\Omega}_{ij}=0\ \text{and}\ \Omega_{ij}\neq 0\\}}{\\#\\{(i,j):\Omega_{ij}\neq 0\\}}.$
The FPR reports the fraction of truly zero components of the precision matrix
that are estimated as nonzero. The FNR gives the fraction of truly nonzero
components of the precision matrix that are estimated as zero.
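A base-R sketch of these two rates, computed over the off-diagonal entries (names ours):

```r
# False positive and false negative rates for the recovered sparsity pattern.
sparsity_rates <- function(Omega_hat, Omega_true, tol = 1e-8) {
  off     <- upper.tri(Omega_true)             # each unordered pair (i, j) counted once
  est_nz  <- abs(Omega_hat[off])  > tol
  true_nz <- abs(Omega_true[off]) > tol
  c(FPR = sum(est_nz & !true_nz) / sum(!true_nz),
    FNR = sum(!est_nz & true_nz) / sum(true_nz))
}
```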
Figure 4: Four aggregation designs: chain, random, unbalanced and
unstructured graphs with corresponding precision matrix (top) and graph on the
set of aggregated nodes (bottom).
### 4.1 Simulation Designs
Data are drawn from a multivariate normal distribution with mean zero and
covariance matrix $\boldsymbol{\Sigma}=\boldsymbol{\Omega}^{-1}$. We take
$p=15$ variables and investigate the effect of increasing the number of
variables in Section 4.3. We consider four different simulation designs, shown
in Figure 4, each having a different combination of aggregation and sparsity
structures for the precision matrix $\boldsymbol{\Omega}$.
Aggregation is present in the first three structures. The precision matrix has
a $G$-block structure with $K=3$ blocks. In Section 4.4, we investigate the
effect of varying the number of blocks. In the chain graph, adjacent
aggregated groups are connected through an edge. This structure corresponds to
the motivating example of Section 1. In the random graph, one non-zero edge in
the aggregated network is chosen at random. In the unbalanced graph, the
clusters are of unequal size. In the unstructured graph, no aggregation is
present.
Across all designs, we take the diagonal elements of $\boldsymbol{\Omega}$ to
be $1$, the elements within a block of aggregated variables to be $0.5$, and
the non-zero elements across blocks to be $0.25$. We generate $100$ different
data sets for every simulation design and use a sample size of $n=120$. The
number of parameters ($p+p(p-1)/2=120$) equals the sample size.
Figure 5: A simple tree used for the “tag-lasso ideal” (left) and a more
realistic tree used for the “tag-lasso realistic” (right).
The tag-lasso estimator relies on the existence of a tree to perform node
dimension reduction. We consider two different tree structures throughout the
simulation study. First, we use an “ideal” tree which contains the true
aggregation structure as the sole aggregation level between the leaves and the
root of the tree. As an example, the true aggregation structure for the chain
graph structure is shown in the left panel of Figure 5. We form ${\bf A}$
corresponding to this oracle tree to obtain the “tag-lasso ideal” estimator.
We also consider a more realistic tree, shown in the right panel of Figure 5,
following a construction similar to that of Yan and Bien (2020). The tree is
formed by performing hierarchical clustering of $p$ latent points chosen to
ensure that the tree contains the true aggregation structure and that these
true clusters occur across a variety of depths. In particular, we generate $K$
cluster means $\mu_{1},\ldots,\mu_{K}$ with $\mu_{i}=1/i$. We set the number
of latent points associated with each of the $K$ means equal to the cluster
sizes from Figure 4. These latent points are then drawn independently from
$N(\mu_{i},[0.05\cdot\min_{j}(\mu_{i}-\mu_{j})]^{2})$. Finally, we form ${\bf
A}$ corresponding to this tree to obtain the “tag-lasso realistic” estimator.
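A sketch of how such a tree can be generated, following our reading of the recipe above and using base R's hierarchical clustering; the seed and the cluster sizes (five each, matching $p=15$ and $K=3$) are illustrative assumptions:

```r
# Build the "realistic" tree: latent points around mu_i = 1/i, then hierarchical clustering.
set.seed(1)
K <- 3; sizes <- c(5, 5, 5)                        # illustrative cluster sizes
mu <- 1 / (1:K)
sds <- sapply(1:K, function(i) 0.05 * min(abs(mu[i] - mu[-i])))
latent <- unlist(mapply(function(m, s, n) rnorm(n, mean = m, sd = s), mu, sds, sizes))
tree <- hclust(dist(matrix(latent, ncol = 1)))     # dendrogram containing the true clusters
```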
### 4.2 Results
We subsequently discuss the results on estimation accuracy, aggregation
performance, and sparsity recovery.
Figure 6: Estimation accuracy of the tree estimators relative to the oracle.
#### Estimation Accuracy.
Boxplots of the KL distances for the three estimators (tag-lasso ideal, tag-
lasso realistic and glasso) relative to the oracle are given in Figure 6. The
first three panels correspond to simulation designs with aggregation
structures. In these settings, the tag-lasso estimators considerably
outperform the glasso, on average by a factor of five. The tag-lasso ideal method
performs nearly as well as the oracle. Comparing the tag-lasso realistic
method to the tag-lasso ideal method suggests a minimal price paid for using a
more realistic tree.
The “unstructured” panel of Figure 6 shows a case in which there is sparsity
but no aggregation in the true data generating model. As expected, the glasso
performs best in this case; however, we observe minimal cost to applying the
tag-lasso approaches (which encompass the glasso as a special case when
$\lambda_{1}=0$).
#### Aggregation Performance.
Table 2 summarizes the aggregation performance of the three estimators in
terms of the Rand index (RI) and adjusted Rand index (ARI). No results on the
ARI in the unstructured simulation design are reported since it cannot be
computed for a partition consisting of singletons. The tag-lasso estimators
perform very well. If one can rely on an oracle tree, the tag-lasso perfectly
recovers the aggregation structure, as reflected in the perfect (A)RI values
of the tag-lasso ideal method. Even when the tag-lasso uses a more complex
tree structure, it recovers the correct aggregation structure in the vast
majority of cases. The glasso returns a partition of singletons as it is
unable to perform dimension reduction through aggregation, as can be seen from
its zero values on the ARI.
Table 2: Aggregation performance of the three estimators, as measured by the
Rand index (RI) and adjusted Rand index (ARI), for the four simulation
designs. Standard errors are in parentheses.
Estimators | chain | random | unbalanced | unstructured
---|---|---|---|---
| RI | ARI | RI | ARI | RI | ARI | RI | ARI
tag-lasso ideal | 1.00 (.00) | 1.00 (.01) | 1.00 (.00) | 1.00 (.00) | 1.00 (.00) | 0.99 (.01) | 0.84 (.02) | NA
tag-lasso realistic | 0.95 (.01) | 0.88 (.01) | 0.97 (.01) | 0.93 (.01) | 0.94 (.01) | 0.85 (.02) | 0.81 (.02) | NA
glasso | 0.71 (.00) | 0.00 (.00) | 0.71 (.00) | 0.00 (.00) | 0.67 (.00) | 0.00 (.00) | 1.00 (.00) | NA
#### Sparsity Recovery.
Table 3 summarizes the results on sparsity recovery (FPR and FNR). The tag-
lasso estimators enjoy favorable FPR and FNR, mostly excluding the irrelevant
conditional dependencies (as reflected by their low FPR) and including the
relevant conditional dependencies (as reflected by their low FNR). In the
simulation designs with aggregation, the glasso pays a big price for not being
able to reduce dimensionality through aggregation, leading it to include too
many irrelevant conditional dependencies, as reflected through its large FPRs.
In the unstructured design, the rates of all estimators are, overall, low.
Table 3: Sparsity recovery of the three estimators, as measured by the false
positive rate (FPR) and false negative rate (FNR), for the four simulation
designs. Standard errors are in parentheses.
Estimators | chain | random | unbalanced | unstructured
---|---|---|---|---
| FPR | FNR | FPR | FNR | FPR | FNR | FPR | FNR
tag-lasso ideal | 0.22 (.04) | 0.00 (.00) | 0.19 (.04) | 0.00 (.01) | 0.46 (.05) | 0.00 (.00) | 0.06 (.01) | 0.15 (.01)
tag-lasso realistic | 0.30 (.04) | 0.02 (.01) | 0.13 (.02) | 0.09 (.01) | 0.44 (.04) | 0.05 (.01) | 0.05 (.01) | 0.14 (.01)
glasso | 0.80 (.02) | 0.08 (.01) | 0.73 (.01) | 0.09 (.01) | 0.82 (.02) | 0.07 (.01) | 0.16 (.01) | 0.04 (.01)
### 4.3 Increasing the Number of Nodes
We investigate the sensitivity of our results to an increasing number of
variables $p$. We focus on the chain simulation design from Section 4.1 and
subsequently double $p$ from 15 to 30, 60 and 120 while keeping the number of
blocks $K$ fixed at three. The sample size $n$ is set proportional to the
complexity of the model, as measured by $Kp+p$. Hence, the sample sizes
corresponding to the increasing values of $p$ are respectively,
$n=120,240,480,960$, thereby keeping the ratio of the sample size to the
complexity fixed at two. In each setting, the number of parameters to be
estimated is large, equal to 120, 465, 1830, 7260, respectively; thus
increasing relative to the sample size.
The left panel of Figure 7 shows the mean KL distance (on a log-scale) of the
four estimators as a function of $p$. As the number of nodes increases, the
estimation accuracy of the tag-lasso estimators and the oracle increases
slightly. For fixed $K$ and increasing $p$, the aggregated nodes—which can be
thought of as the average of $p/K$ random variables—may be more stable, thereby
explaining why the problem at hand does not get harder when increasing $p$ for
the methods with node aggregation. By contrast, the glasso—which is unable to
exploit the aggregation structure—performs worse as $p$ increases. For
$p=120$, for instance, the tag-lasso estimators outperform the glasso by a
factor of 50.
Figure 7: Estimation accuracy of the four estimators (on a log-scale) for an increasing number of variables $p$ (with fixed $K=3$, left panel) and an increasing number of blocks $K$ (with fixed $p=30$, right panel).
Results on aggregation performance and sparsity recovery are presented in
Figure 12 of Appendix C. The tag-lasso ideal method perfectly recovers the
aggregation structure for all values of $p$. The realistic tag-lasso’s
aggregation performance is close to perfect and remains relatively stable as
$p$ increases. The glasso is unable to detect the aggregation structure, as
expected and reflected through its zero ARIs. The tag-lasso estimators also
maintain a better balance between the FPR and FNR than the glasso. While their
FPRs increase as $p$ increases, their FNRs remain close to zero, hence essentially all
relevant conditional dependencies are recovered. The glasso, in contrast,
fails to recover the majority of relevant conditional dependencies when
$p=60,120$, thereby explaining its considerable drop in estimation accuracy.
### 4.4 Increasing the Number of Blocks
Finally, we investigate the effect of increasing the number of blocks $K$. We
take the chain simulation design from Section 4.1 and increase the number of
blocks from $K=3$ to $K=5,6,10$, while keeping the number of variables fixed
at $p=30$. The right panel of Figure 7 shows the mean KL distance (on a log-
scale) of the four estimators as a function of $K$. As one would expect, the
difference between the aggregation methods and the glasso decreases as $K$
increases. However, for all $K$ considered, the glasso does far less well than
the aggregation based methods.
Similar conclusions hold in terms of aggregation and sparsity recovery
performance. Detailed results are presented in Figure 13 of Appendix C. The
tag-lasso ideal method performs as well as the oracle in terms of capturing
the aggregation structure; the tag-lasso realistic method performs close to
perfect and its aggregation performance improves with increasing $K$. In terms
of sparsity recovery, the tag-lasso estimators hardly miss relevant
conditional dependencies and only include a small number of irrelevant
conditional dependencies. The glasso’s sparsity recovery performance is
overall worse but does improve with increasing $K$.
## 5 Applications
### 5.1 Financial Application
We demonstrate our method on a financial data set containing daily realized
variances of $p=31$ stock market indices from across the world in 2019
($n=254$). Daily realized variances based on five minute returns are taken
from the Oxford-Man Institute of Quantitative Finance (publicly available at
http://realized.oxford-man.ox.ac.uk/data/download). Following standard
practice, all realized variances are log-transformed. An overview of the stock
market indices is provided in Appendix D. We encode similarity between the 31
stock market indices according to geographical region, and use the tree shown
in Figure 8 to apply the tag-lasso estimator.
Since the different observations of the consecutive days are (time)-dependent,
we first fit the popular and simple heterogeneous autoregressive (HAR) model
of Corsi (2009) to each of the individual log-transformed realized variance
series. Graphical displays of the residual series of these 31 HAR models
suggest that almost all autocorrelation in the series is captured. We then
apply the tag-lasso to the residual series to learn the conditional dependency
structure among stock market indices.
Figure 8: Geography-based tree for the stock market data, which aggregates the
$p=31$ stock market indices (leaves) over several sub-continents towards a
single root. Leaves, which represent individual stock markets, are displayed
horizontally.
#### Estimated Graphical Model.
We fit the tag-lasso estimator, with 5-fold cross validation to select tuning
parameters, to the full data set, with the matrix ${\bf A}$ encoding the tree
structure in Figure 8. The tag-lasso returns a solution with $K=6$ aggregated
blocks; the sparsity pattern of the full estimated precision matrix is shown
in the top left panel of Figure 9. The coloring of the row labels and the
numbering of columns convey the memberships of each variable to aggregated
blocks (to avoid clutter, only the first column of each block is labeled).
Figure 9: Stock market indices data. Top left: Sparsity pattern (non-zeros in
black) of full $\hat{\boldsymbol{\Omega}}$ with aggregation structure conveyed
through row label coloring and column numbering. Top right: Test errors across
the ten replications (dots) for the tag-lasso versus glasso. Bottom:
Aggregated graph for the $K=6$ nodes obtained with the tag-lasso as an
adjacency matrix (bottom left) and as a network (bottom right) with the size
of each node proportional to the number of original variables it aggregates.
Dimension reduction mainly occurs through node aggregation, as can be seen
from the aggregated precision matrix in the bottom right panel of Figure 9.
The resulting aggregated graphical model is rather dense, with about half of the off-diagonal entries of the estimated aggregated precision matrix being non-zero, suggesting strong volatility connectedness.
solution returned by the tag-lasso estimator consists of one single-market
block (block 5: Canada) and five multi-market blocks, which vary in size. The
Australian, South American, and all Asian stock markets form one aggregated
block (block 6). Note that the tag-lasso has “aggregated” these merely because
they have the same non-dependence structure (i.e. all of these markets are
estimated to be conditionally independent of each other and all other
markets). The remaining aggregated nodes concern the US market (block 4) and
the European markets, which are divided into three blocks: North-Europe (block 1),
Central-, South-Europe & STOXX50E (block 2), and West-Europe (block 3). In the
aggregated network, the latter two and the US play a central role as they are
the most strongly connected nodes: These three nodes are connected to each
other, the US node is additionally connected to Canada, whereas these European
nodes are additionally connected with North-Europe.
#### Out-of-sample Performance.
We conduct an out-of-sample exercise to compare the tag-lasso estimator to the
glasso estimator. We take a random $n=203$ observations (80% of the full data
set) to form a “training sample” covariance matrix and use the remaining data
to form a “test sample” covariance matrix ${\bf S}^{\text{test}}$, and repeat
this procedure ten times. We fit both the tag-lasso and glasso estimator to
the training covariance matrix, with 5-fold cross-validation on the training
data to select tuning parameters. Next, we compute their corresponding out-of-
sample errors on the test data, as in (6).
The top right panel of Figure 9 shows each of these ten test errors for both
the tag-lasso (x-axis) and the glasso estimator (y-axis). In all ten replicates, the points lie well above the 45-degree line: the tag-lasso attains a substantially lower test error than the glasso. This indicates that jointly
exploiting edge and node dimension reduction is useful for precision matrix
estimation in this context.
### 5.2 Microbiome Application
We next turn to a data set of gut microbial amplicon data in HIV patients
(Rivera-Pinto et al., 2018), where our goal is to estimate an interpretable
graphical model, capturing the interplay between different taxonomic groups of
the microbiome. Bien et al. (2020) recently showed that tree-based aggregation
in a supervised setting leads to parsimonious predictive models. The data set
has $n=152$ HIV patients, and we apply the tag-lasso estimator to all $p=104$
bacterial operational taxonomic units (OTUs) that have non-zero counts in over
half of the samples. We use the taxonomic tree that arranges the OTUs into
natural hierarchical groupings of taxa: with 17 genera, 11 families, five
orders, five classes, three phyla, and one kingdom (the root node). We employ
a standard data transformation from the field of compositional data analysis
(see e.g., Aitchison, 1982) called the centered log-ratio (clr) transformation
that is commonly used in microbiome graphical modeling (Kurtz et al., 2015; Lo
and Marculescu, 2018; Kurtz et al., 2019). After transformation, Kurtz et al.
(2015) apply the glasso, Lo and Marculescu (2018) incorporate phylogenetic
information into glasso’s optimization problem through weights within the
$\ell_{1}$-penalty, and Kurtz et al. (2019) estimate a latent graphical model
which combines sparsity with a low-rank structure. We instead, use the tag-
lasso to learn a sparse aggregated network from the clr-transformed microbiome
compositions. While the clr-transform induces dependence between otherwise
independent components, Proposition 1 in Cao et al. (2019) provides intuition
that as long as the underlying graphical model is sparse and $p$ is large,
these induced dependencies may have minimal effect on the covariance matrix.
Future work could more carefully account for the induced dependence,
incorporating ideas from Cao et al. (2019) or Kurtz et al. (2019).
Figure 10: Microbiome data. Full precision matrix (left) and aggregated
precision matrix (right) estimated by the tag-lasso with an unconstrained
five-fold cross-validation (top) and with a cross-validation subject to the
constraint that there are at most ten blocks (bottom).
#### Estimated Graphical Model.
We fit the tag-lasso to the full data set and use 5-fold cross-validation to
select the tuning parameters. The tag-lasso estimator provides a sparse
aggregated graphical model with $K=28$ aggregated blocks (a substantial
reduction in nodes from the original $p=104$ OTUs). The top panel of Figure 10
shows the sparsity pattern of the $p\times p$ estimated precision matrix (top
left) and of the $K\times K$ estimated aggregated precision matrix (top
right). A notable feature of the tag-lasso solution is that it returns a wide
range of aggregation levels: The aggregated network consists of 17 OTUs, 7
nodes aggregated to the genus level (these nodes start with “g_”), 3 to the
family level (these nodes start with “f_”), and 1 node to the kingdom level
(this node starts with “k_”). Some aggregated nodes, such as the “g_Blautia”
node (block 19), contain all OTUs within their taxa; some other aggregated
nodes, indicated with an asterisk like the “k_Bacteria*” node (block 28), have
some of their OTUs missing. This latter “block” consists of 18 OTUs from
across the phylogenetic tree that are estimated to be conditionally
independent with all other OTUs in the data set.
Figure 11: Microbiome data. Left: Aggregated network estimated by the
constrained CV version of the tag-lasso. The colour of the nodes is based on
their level of aggregation (OTU: pink, genus: orange, family: blue); their
width is proportional to the number of OTUs they aggregate. Middle: Network
estimated by the glasso. Right: Test errors across the ten replications for
the unconstrained (solid black) and constrained (unfilled blue) CV version of
the tag-lasso versus the glasso.
While the tag-lasso determines the aggregation level in a data-driven way
through cross validation, practitioners or researchers may also sometimes wish
to restrict the number of blocks $K$ to a pre-determined level when such prior
knowledge is available or if this is desirable for interpretability. As an
illustration, we consider a constrained cross-validation scheme in which we
restrict the number of blocks $K$ to maximally ten and select the sparsity
parameters with the best cross validated error among those solutions with
$K\leq 10$. The bottom panel of Figure 10 shows the sparsity pattern of the
full and aggregated precision matrices estimated by this constrained version
of the tag-lasso.
The resulting network consists of $K=8$ aggregated nodes. The “k_Bacteria*”
node now aggregates 78 OTUs that are estimated to be conditionally independent
with each other and all others. The interactions among the remaining nodes are
shown in the left panel of Figure 11, which consists of three OTUs (OTU134,
OTU156, and OTU161, in pink), three genera (Prevotella, Bacteroides, and
Alistipes in orange) and one family (Porphyromonadaceae in blue). The
resulting network is much simpler than the one estimated by the glasso, shown
in the middle panel of Figure 11. The glasso finds 58 OTUs to be conditionally
independent with all others, but the interactions among the remaining 46 OTUs
are much more difficult to interpret. The glasso is limited to working at the
OTU-level, which prevents it from providing insights about interactions that
span different levels of the taxonomy.
#### Out-of-sample Performance.
We conduct the same out-of-sample exercise as described in Section 5.1. The
right panel of Figure 11 presents the ten test errors (black dots) for the
unconstrained CV tag-lasso and glasso. In all but one case, the tag-lasso
leads to a better fit than the glasso, suggesting that it is better suited for
modeling the conditional dependencies among the OTUs. The unfilled blue dots
show the same but for the constrained CV tag-lasso. In all ten cases, it
underperforms the unconstrained CV tag-lasso (see shift to the right on the
horizontal axis); however, its performance is on a par with the glasso, with
test errors close to the 45 degree line. Thus, there does not appear to be a
cost in out-of-sample performance to the interpretability gains of the
constrained tag-lasso over the glasso.
## 6 Conclusion
Detecting conditional dependencies between variables, as represented in a
graphical model, forms a cornerstone of multivariate data analysis. However,
graphical models, characterized by a set of nodes and edges, can quickly
explode in dimensionality due to ever-increasing fine-grained levels of
resolution at which data are measured. In many applications, a tree is
available that organizes the measured variables into various meaningful levels
of resolution. In this work, we introduce the tag-lasso, a novel estimation
procedure for graphical models that curbs this curse of dimensionality through
joint node and edge dimension reduction by leveraging this tree as side
information. Node dimension reduction is achieved by a penalty that allows
nodes to be aggregated according to the tree structure; edge dimension
reduction is achieved through a standard sparsity-inducing penalty. As such,
the tag-lasso generalizes the popular glasso approach to sparse graphical
modelling. An `R` package called `taglasso` implements the proposed method and
is available on the GitHub page of the first author.
#### Acknowledgments
We thank Christian Müller for useful discussions. Jacob Bien was supported in
part by NSF CAREER Award DMS-1653017 and NIH Grant R01GM123993.
## References
* Aitchison (1982) Aitchison, J. (1982), “The statistical analysis of compositional data,” Journal of the Royal Statistical Society: Series B (Methodological), 44, 139–160.
* Banerjee et al. (2008) Banerjee, O.; Ghaoui, L. E. and d’Aspremont, A. (2008), “Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data,” Journal of Machine Learning Research, 9, 485–516.
* Belloni and Chernozhukov (2013) Belloni, A. and Chernozhukov, V. (2013), “Least squares after model selection in high-dimensional sparse models,” Bernoulli, 19, 521–547.
* Bien (2016) Bien, J. (2016), “The simulator: an engine to streamline simulations,” arXiv preprint arXiv:1607.00021.
* Bien et al. (2020) Bien, J.; Yan, X.; Simpson, L. and Müller, C. L. (2020), “Tree-Aggregated Predictive Modeling of Microbiome Data,” bioRxiv.
* Boyd et al. (2011) Boyd, S.; Parikh, N.; Chu, E.; Peleato, B. and Eckstein, J. (2011), “Distributed optimization and statistical learning via the alternating direction method of multipliers.” Found. Trends Mach. Learn., 3, 1–122.
* Brownlees et al. (2020) Brownlees, C.; Gumundsson, G. S. and Lugosi, G. (2020), “Community detection in partial correlation network models,” Journal of Business & Economic Statistics, 1–33.
* Bunea et al. (2020) Bunea, F.; Giraud, C.; Luo, X.; Royer, M. and Verzelen, N. (2020), “Model assisted variable clustering: minimax-optimal recovery and algorithms,” The Annals of Statistics, 48, 111–137.
* Cai et al. (2011) Cai, T.; Liu, W. and Luo, X. (2011), “A constrained $\ell_{1}$ minimization approach to sparse precision matrix estimation,” Journal of the American Statistical Association, 106, 594–607.
* Cai et al. (2016) Cai, T. T.; Liu, W. and Zhou, H. H. (2016), “Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation,” The Annals of Statistics, 44, 455–488.
* Callahan et al. (2017) Callahan, B. J.; McMurdie, P. J. and Holmes, S. P. (2017), “Exact sequence variants should replace operational taxonomic units in marker-gene data analysis,” The ISME journal, 11, 2639–2643.
* Cao et al. (2019) Cao, Y.; Lin, W. and Li, H. (2019), “Large covariance estimation for compositional data via composition-adjusted thresholding,” Journal of the American Statistical Association, 114, 759–772.
* Chandrasekaran et al. (2012) Chandrasekaran, V.; Parrilo, P. A. and Willsky, A. S. (2012), “Latent variable graphical model selection via convex optimization,” The Annals of Statistics, 40, 1935–1967.
* Corsi (2009) Corsi, F. (2009), “A simple approximate long-memory model of realized volatility,” Journal of Financial Econometrics, 7(2), 174–196.
* Eisenach et al. (2020) Eisenach, C.; Bunea, F.; Ning, Y. and Dinicu, C. (2020), “High-Dimensional Inference for Cluster-Based Graphical Models,” Journal of Machine Learning Research, 21, 1–55.
* Friedman et al. (2008) Friedman, J.; Hastie, T. and Tibshirani, R. (2008), “Sparse inverse covariance estimation with the graphical lasso,” Biostatistics, 9, 432–441.
* Henderson and Searle (1981) Henderson, H. V. and Searle, S. R. (1981), “On deriving the inverse of a sum of matrices,” Siam Review, 23, 53–60.
* Hubert and Arabie (1985) Hubert, L. and Arabie, P. (1985), “Comparing partitions,” Journal of classification, 2, 193–218.
* Kurtz et al. (2019) Kurtz, Z. D.; Bonneau, R. and Müller, C. L. (2019), “Disentangling microbial associations from hidden environmental and technical factors via latent graphical models,” bioRxiv.
* Kurtz et al. (2015) Kurtz, Z. D.; Müller, C. L.; Miraldi, E. R.; Littman, D. R.; Blaser, M. J. and Bonneau, R. A. (2015), “Sparse and compositionally robust inference of microbial ecological networks,” PLoS Comput Biol, 11, e1004226.
* Lo and Marculescu (2018) Lo, C. and Marculescu, R. (2018), “PGLasso: Microbial Community Detection through Phylogenetic Graphical Lasso,” arXiv preprint arXiv:1807.08039.
* Meinshausen and Bühlmann (2006) Meinshausen, N. and Bühlmann, P. (2006), “High-dimensional graphs and variable selection with the lasso,” The Annals of statistics, 34, 1436–1462.
* Millington and Niranjan (2019) Millington, T. and Niranjan, M. (2019), “Quantifying influence in financial markets via partial correlation network inference,” in 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE, pp. 306–311.
* Peng et al. (2009) Peng, J.; Wang, P.; Zhou, N. and Zhu, J. (2009), “Partial correlation estimation by joint sparse regression models,” Journal of the American Statistical Association, 104, 735–746.
* Pircalabelu and Claeskens (2020) Pircalabelu, E. and Claeskens, G. (2020), “Community-Based Group Graphical Lasso.” Journal of Machine Learning Research, 21, 1–32.
* R Core Team (2017) R Core Team (2017), R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
* Rand (1971) Rand, W. M. (1971), “Objective criteria for the evaluation of clustering methods,” Journal of the American Statistical Association, 66, 846–850.
* Rivera-Pinto et al. (2018) Rivera-Pinto, J.; Egozcue, J. J.; Pawlowsky-Glahn, V.; Paredes, R.; Noguera-Julian, M. and Calle, M. L. (2018), “Balances: a New Perspective for Microbiome Analysis,” mSystems, 3, 1–12.
* Rothman et al. (2008) Rothman, A. J.; Bickel, P. J.; Levina, E. and Zhu, J. (2008), “Sparse permutation invariant covariance estimation,” Electronic Journal of Statistics, 2, 494–515.
* Tan et al. (2015) Tan, K. M.; Witten, D. and Shojaie, A. (2015), “The cluster graphical lasso for improved estimation of Gaussian graphical models,” Computational statistics & data analysis, 85, 23–36.
* Xu et al. (2017) Xu, Y.; Liu, M.; Lin, Q. and Yang, T. (2017), “ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization,” in Advances in Neural Information Processing Systems, pp. 1267–1277.
* Yan and Bien (2020) Yan, X. and Bien, J. (2020), “Rare feature selection in high dimensions,” Journal of the American Statistical Association, doi:10.1080/01621459.2020.1796677.
* Yuan (2010) Yuan, M. (2010), “High dimensional inverse covariance matrix estimation via linear programming,” Journal of Machine Learning Research, 11, 2261–2286.
* Yuan and Lin (2007) Yuan, M. and Lin, Y. (2007), “Model selection and estimation in the Gaussian graphical model,” Biometrika, 94, 19–35.
## Appendices
## Appendix A Proof of Proposition 2.1
###### Proof.
First, note that ${\bf\widetilde{X}}$ follows a $K$-dimensional multivariate
normal distribution with mean zero and covariance matrix ${\bf M}^{\top}({\bf
D}+{\bf M}{\bf C}{\bf M}^{\top})^{-1}{\bf M}$. Next, we re-write this
covariance matrix by two successive applications of equation (23) in Henderson
and Searle (1981):
${\bf M}^{\top}({\bf D}+{\bf M}{\bf C}{\bf M}^{\top})^{-1}{\bf M}={\bf M}^{\top}{\bf D}^{-1}{\bf M}-{\bf M}^{\top}{\bf D}^{-1}{\bf M}({\bf I}+{\bf C}{\bf M}^{\top}{\bf D}^{-1}{\bf M})^{-1}{\bf C}{\bf M}^{\top}{\bf D}^{-1}{\bf M}=\left([{\bf M}^{\top}{\bf D}^{-1}{\bf M}]^{-1}+{\bf C}\right)^{-1}.$
Hence, the precision matrix of ${\bf\widetilde{X}}$ is given by $({\bf
M^{\top}}{\bf D}^{-1}{\bf M})^{-1}+{\bf C}$. Now since $({\bf M^{\top}}{\bf
D}^{-1}{\bf M})^{-1}$ is diagonal, $c_{ij}=0\Leftrightarrow\widetilde{X}_{i}\
\bot\ \widetilde{X}_{j}|{\bf\widetilde{X}}_{-\\{i,j\\}}$, for any
$i,j=1,\ldots,K$, where ${\bf\widetilde{X}}_{-\\{i,j\\}}$ contains all aggregated variables except for aggregates $i$ and $j$. ∎
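The identity above can also be checked numerically on a random instance; the following base-R sketch (all names ours) compares the two expressions:

```r
# Numerical check: M^T (D + M C M^T)^{-1} M  equals  ((M^T D^{-1} M)^{-1} + C)^{-1}.
set.seed(1)
p <- 8; K <- 3
grp <- c(1:K, sample(1:K, p - K, replace = TRUE))   # every group non-empty
M <- outer(grp, 1:K, "==") * 1
C <- crossprod(matrix(rnorm(K * K), K))             # symmetric positive semi-definite K x K matrix
D <- diag(runif(p, 0.5, 1.5))
lhs <- t(M) %*% solve(D + M %*% C %*% t(M)) %*% M   # covariance of the aggregates
rhs <- solve(solve(t(M) %*% solve(D) %*% M) + C)    # inverse of the claimed precision matrix
max(abs(lhs - rhs))                                 # numerically zero
```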
## Appendix B Details of the LA-ADMM Algorithm
The augmented Lagrangian of (5) is given by
$\displaystyle-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+1_{\infty}\\{\boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}+\langle{\bf U}^{(1)},{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\|^{2}_{F}$
$\displaystyle+\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}+1_{\infty}\\{\boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p}\\}+\langle{\bf U}^{(4)},{\boldsymbol{\Gamma}}^{(1)}-{\boldsymbol{\Gamma}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Gamma}}^{(1)}-{\boldsymbol{\Gamma}}\|^{2}_{F}$
$\displaystyle+1_{\infty}\\{\boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf D},{\bf D}\ \text{diag.},D_{jj}\geq 0\\}+\langle{\bf U}^{(2)},{\boldsymbol{\Omega}}^{(2)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(2)}-{\boldsymbol{\Omega}}\|^{2}_{F}+\langle{\bf U}^{(5)},{\boldsymbol{\Gamma}}^{(2)}-{\boldsymbol{\Gamma}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Gamma}}^{(2)}-{\boldsymbol{\Gamma}}\|^{2}_{F}$
$\displaystyle+\lambda_{2}\|{\boldsymbol{\Omega}^{(3)}}^{-\text{diag}}\|_{1}+\langle{\bf U}^{(3)},{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\|^{2}_{F},$ (8)
where ${\bf U}^{(i)}$ (for $i=1,\ldots,5)$ are the dual variables, and $\rho$
is a penalty parameter. Note that equation (8) is of the same form as Equation
(3.1) in Boyd et al. (2011) and thus involves iterating three basic steps: (i)
minimization with respect to
$(\boldsymbol{\Omega}^{(1)},\boldsymbol{\Omega}^{(2)},\boldsymbol{\Omega}^{(3)},\boldsymbol{\Gamma}^{(1)},\boldsymbol{\Gamma}^{(2)},{\bf
D})$, (ii) minimization with respect to
$(\boldsymbol{\Omega},\boldsymbol{\Gamma})$, and (iii) update of $({\bf
U}^{(1)},\ldots,{\bf U}^{(5)})$.
Step (i) decouples into four independent problems, whose solutions are worked
out in Sections B.1-B.4. Step (ii) involves the minimization of a
differentiable function of ${\bf\Omega}$ and ${\bf\Gamma}$ and boils down to
the calculation of simple averages, as shown in Section B.5. Step (iii)’s
update of the dual variables is provided in Section B.6.
Algorithms 1-2 then provide an overview of the LA-ADMM algorithm to solve
problem (5). We use the LA-ADMM algorithm with $\rho_{1}=0.01,\
\texttt{T}_{\text{stages}}=10,\ \texttt{maxit}=100$.
Algorithm 1 ADMM
Input:
${\bf S},{\bf
A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho,\texttt{maxit},\boldsymbol{\Omega}_{0},\boldsymbol{\Gamma}_{0}.$
Initialization:
Set
* $\widehat{{\boldsymbol{\Omega}}}^{(i)}_{0}\leftarrow\widehat{{\bf U}}^{(i)}_{0}\leftarrow\boldsymbol{\Omega}_{0}\ \hskip 11.38092pt\text{for}\ i=1,\ldots,3$
* $\widehat{{\boldsymbol{\Gamma}}}^{(j)}_{0}\leftarrow\widehat{{\bf U}}^{(j+3)}_{0}\leftarrow\boldsymbol{\Gamma}_{0}\ \text{for}\ j=1,\ldots,2$
* $k\leftarrow 0$
for
$k\leq{\tt maxit}$ do
* $k\leftarrow k+1$
* $\widehat{{\boldsymbol{\Omega}}}_{k}^{(1)}\leftarrow{\bf Q}{\bar{\boldsymbol{\Omega}}}_{k-1}{\bf Q}^{\top}$, see equation (10).
* $\widehat{\Omega}^{(3)}_{k,ij}\leftarrow S({\widehat{\Omega}}_{k-1,ij}-{{\widehat{U}}}_{k-1,ij}^{(3)}/\rho,\ \lambda_{2}/\rho),\forall\ i,j=1,\ldots,p$, see equation (16).
* $\boldsymbol{\widehat{\Gamma}}^{(1)}_{k,j}\leftarrow S_{G}(\boldsymbol{\widehat{\Gamma}}_{k-1,j}-{\bf{\widehat{U}}}_{k-1,j}^{(4)}/\rho,\ \lambda_{1}/\rho),\forall j=1,\ldots,|\mathcal{T}|\backslash\\{r\\}$, see equation (11).
* $\boldsymbol{\widehat{\Gamma}}^{(1)}_{k,r}\leftarrow\widehat{\gamma}_{k-1}{\bf 1}_{p}$, see equation (11).
* $\text{diag}({\bf{\widehat{D}}}_{k})\leftarrow\text{diag}({\bf C}^{\top}{\bf C})^{-1}\text{diag}({\bf B}^{\top}{\bf C})_{+}$, see equation (15).
* $\boldsymbol{\widehat{\Gamma}}^{(2)}_{k}\leftarrow({\bf A}^{\top}{\bf A}+{\bf I}_{|\mathcal{T}|})^{-1}({\bf A}^{\top}:{\bf I}_{|\mathcal{T}|})({\bf\widetilde{M}}-{\bf{\widetilde{D}}}_{k})$, see equation (14).
* $\widehat{\boldsymbol{\Omega}}^{(2)}_{k}={\bf A}\widehat{\boldsymbol{\Gamma}}^{(2)}_{k}+{\bf{\widehat{D}}}_{k}$, see equation (13)
* $\widehat{\boldsymbol{\Omega}}_{k}\leftarrow(\widehat{\boldsymbol{\Omega}}^{(1)}_{k}+\widehat{\boldsymbol{\Omega}}^{(2)}_{k}+\widehat{\boldsymbol{\Omega}}^{(3)}_{k})/3$
* $\widehat{\boldsymbol{\Gamma}}_{k}\leftarrow(\widehat{\boldsymbol{\Gamma}}^{(1)}_{k}+\widehat{\boldsymbol{\Gamma}}^{(2)}_{k})/2$
* $\widehat{\bf U}^{(i)}_{k}\leftarrow\widehat{\bf U}^{(i)}_{k-1}+\rho\left(\widehat{{\boldsymbol{\Omega}}}_{k}^{(i)}-\widehat{{\boldsymbol{\Omega}}}_{k}\right),\ \text{for}\ i=1,\ldots,3$
* $\widehat{\bf U}^{(j+3)}_{k}\leftarrow\widehat{\bf U}^{(j+3)}_{k-1}+\rho\left(\widehat{{\boldsymbol{\Gamma}}}_{k}^{(j)}-\widehat{{\boldsymbol{\Gamma}}}_{k}\right),\ \text{for}\ j=1,\ldots,2$
end for
Output:
$\widehat{{\boldsymbol{\Omega}}}_{\texttt{maxit}},\
\widehat{{\boldsymbol{\Gamma}}}_{\texttt{maxit}},\ \widehat{{\bf
D}}_{\texttt{maxit}}$
Algorithm 2 LA-ADMM
Input:
${\bf S},{\bf
A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho_{1},\texttt{maxit},\texttt{T}_{\text{stages}}$
Initialization:
Set
* $\widehat{{\boldsymbol{\Omega}}}_{0}\leftarrow{\bf 0}$; $\widehat{{\boldsymbol{\Gamma}}}_{0}\leftarrow{\bf 0}$
* $t\leftarrow 0$
for
$t\leq\texttt{T}_{\text{stages}}$ do
* $t\leftarrow t+1$
* $(\boldsymbol{\widehat{\Omega}}_{t},\boldsymbol{\widehat{\Gamma}}_{t},\widehat{{\bf D}}_{t})\leftarrow\text{ADMM}({\bf S},{\bf A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho_{t},\texttt{maxit},\widehat{{\boldsymbol{\Omega}}}_{t-1},\widehat{{\boldsymbol{\Gamma}}}_{t-1})$
* $\rho_{t+1}\leftarrow 2\rho_{t}$
end for
Output:
$\widehat{{\boldsymbol{\Omega}}}_{\texttt{T}_{\text{stages}}},\
\widehat{{\boldsymbol{\Gamma}}}_{\texttt{T}_{\text{stages}}},\ \widehat{{\bf
D}}_{\texttt{T}_{\text{stages}}}$
### B.1 Solving for $\boldsymbol{\Omega}^{(1)}$
Minimizing the augmented Lagrangian with respect to
$\boldsymbol{\Omega}^{(1)}$ gives
$\displaystyle\widehat{\boldsymbol{\Omega}}^{(1)}_{k+1}:=\underset{\boldsymbol{\Omega}^{(1)}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\langle{\bf U}^{(1)},{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\|^{2}_{F}\ \ \text{s.t.}\ \ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}$
$\displaystyle=\underset{\boldsymbol{\Omega}^{(1)}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(1)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}/\rho)\|^{2}_{F}\ \ \text{s.t.}\ \ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}.$
The solution should satisfy the first order optimality condition
$\rho\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}-{\boldsymbol{\widehat{\Omega}}_{k+1}^{(1)}}^{-1}=\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf
S}.$ (9)
This means that the eigenvectors of
$\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}$ are the same as the eigenvectors
of $\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf
S}$ and that the eigenvalues of $\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}$
are a simple function of the eigenvalues of
$\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf S}$.
Consider the orthogonal eigenvalue decomposition of the right-hand side:
$\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf
S}={\bf Q}{\boldsymbol{\Lambda}}{\bf Q}^{\top},$
where ${\boldsymbol{\Lambda}}={\text{\bf diag}}(\delta_{1},\ldots,\delta_{p})$
and ${\bf Q}{\bf Q}^{\top}={\bf Q}^{\top}{\bf Q}={\bf I}$. Multiplying (9) by ${\bf Q}^{\top}$ on the left and ${\bf Q}$ on the right gives
$\rho{\boldsymbol{\bar{\Omega}}}^{(1)}_{k+1}-{\boldsymbol{\bar{\Omega}}_{k+1}^{(1)}}^{-1}={\boldsymbol{\Lambda}},\
\text{with}\ {\boldsymbol{\bar{\Omega}}}^{(1)}_{k+1}={\bf
Q}^{\top}{\boldsymbol{\widehat{\Omega}}}_{k+1}^{(1)}{\bf Q}.$
Then
${\boldsymbol{\widehat{\Omega}}}_{k+1}^{(1)}={\bf
Q}{\boldsymbol{\bar{\Omega}}}_{k+1}^{(1)}{\bf Q}^{\top},\ \text{with}\
{{\bar{\Omega}}}^{(1)}_{k+1,jj}=\dfrac{\delta_{j}+\sqrt{\delta_{j}^{2}+4\rho}}{2\rho}.$
(10)
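For concreteness, the update (9)-(10) can be sketched in a few lines of NumPy. This is an illustrative implementation only (the function and variable names are ours, not part of the paper), assuming all inputs are dense symmetric arrays.

```python
import numpy as np

def update_omega1(omega_hat, u1_hat, S, rho):
    """Proximal update for Omega^(1); cf. equations (9)-(10).

    omega_hat : current consensus estimate Omega_k (p x p)
    u1_hat    : current dual variable U_k^(1) (p x p)
    S         : sample covariance matrix (p x p)
    rho       : ADMM penalty parameter (> 0)
    """
    rhs = rho * omega_hat - u1_hat - S            # right-hand side of (9)
    rhs = (rhs + rhs.T) / 2.0                     # symmetrise against round-off
    delta, Q = np.linalg.eigh(rhs)                # rhs = Q diag(delta) Q^T
    lam = (delta + np.sqrt(delta**2 + 4.0 * rho)) / (2.0 * rho)   # equation (10)
    return (Q * lam) @ Q.T                        # Q diag(lam) Q^T
```

Since $\sqrt{\delta_{j}^{2}+4\rho}>|\delta_{j}|$ for $\rho>0$, the resulting eigenvalues are strictly positive, so the iterate is automatically symmetric positive definite.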
### B.2 Solving for $\boldsymbol{\Gamma}^{(1)}$
Minimizing the augmented Lagrangian with respect to
$\boldsymbol{\Gamma}^{(1)}$ gives
$\widehat{\boldsymbol{\Gamma}}^{(1)}_{k+1}:=\underset{\boldsymbol{\Gamma}^{(1)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Gamma}^{(1)}-(\boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(4)}/\rho)\|^{2}_{F}+\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}\
\text{s.t.}\ \boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p}\\}$
The solution is groupwise soft-thresholding:
$\boldsymbol{\widehat{\Gamma}}^{(1)}_{k+1,j}=\begin{cases}S_{G}(\boldsymbol{\widehat{\Gamma}}_{k,j}-{\bf{\widehat{U}}}_{k,j}^{(4)}/\rho,\lambda_{1}/\rho),&\text{if}\ j\in\\{1,\ldots,|\mathcal{T}|\\}\backslash\\{r\\}\\\ \widehat{\gamma}_{k}{\bf 1}_{p},&\text{if}\ j=r,\end{cases}$ (11)
with the group-wise soft-thresholding operator
$S_{G}({\boldsymbol{\gamma}},\lambda)=\text{max}(1-\lambda/\|{\boldsymbol{\gamma}}\|_{2},0){\boldsymbol{\gamma}}$
applied to $\boldsymbol{\gamma}\in\mathds{R}^{p}$, and $\widehat{\gamma}_{k}$
is equal to the average of the $p$-dimensional vector
$\boldsymbol{\widehat{\Gamma}}_{k,r}-{\bf{\widehat{U}}}_{k,r}^{(4)}/\rho$.
Note that in this Appendix we use the capitalized $\boldsymbol{\Gamma}_{j}$
notation to index the $j^{\text{th}}$ row of the matrix $\boldsymbol{\Gamma}$
whereas we use lowercase $\boldsymbol{\gamma}_{u}$ when indexing a node $u$
based on the tree structure in Section 2 of the main paper.
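A minimal sketch of the row-wise update (11) follows; the function names are hypothetical, and we assume $\boldsymbol{\Gamma}$ and ${\bf U}^{(4)}$ are stored as $|\mathcal{T}|\times p$ arrays with a zero-based root index $r$.

```python
import numpy as np

def group_soft_threshold(gamma, lam):
    """S_G(gamma, lam) = max(1 - lam/||gamma||_2, 0) * gamma."""
    norm = np.linalg.norm(gamma)
    if norm == 0.0:
        return np.zeros_like(gamma)
    return max(1.0 - lam / norm, 0.0) * gamma

def update_gamma1(gamma_hat, u4_hat, rho, lam1, r):
    """Row-wise update of Gamma^(1); cf. equation (11).

    gamma_hat, u4_hat : (|T|, p) arrays holding Gamma_k and U_k^(4)
    r                 : zero-based index of the root row, constrained to be constant
    """
    M = gamma_hat - u4_hat / rho
    out = np.empty_like(M)
    for j in range(M.shape[0]):
        if j == r:
            out[j] = M[j].mean() * np.ones(M.shape[1])     # gamma_hat_k * 1_p
        else:
            out[j] = group_soft_threshold(M[j], lam1 / rho)
    return out
```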
### B.3 Solving for $\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf
D}$
Minimizing the augmented Lagrangian with respect to
$\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf D}$ gives
$(\widehat{\boldsymbol{\Omega}}^{(2)}_{k+1},\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1},\widehat{\boldsymbol{D}}_{k+1}):=\underset{\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf
D}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(2)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(2)}/\rho)\|^{2}_{F}+\dfrac{\rho}{2}\|\boldsymbol{\Gamma}^{(2)}-(\boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(5)}/\rho)\|^{2}_{F}\\\
\text{s.t.}\ \boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf
D},\ {\bf D}\ \text{diagonal},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$
(12)
The solution
$\widehat{\boldsymbol{\Omega}}^{(2)}_{k+1}={\bf
A}\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1}+{\bf{\widehat{D}}}_{k+1}$ (13)
is immediate and we are left with
$(\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1},\widehat{\boldsymbol{D}}_{k+1}):=\underset{\boldsymbol{\Gamma}^{(2)},{\bf
D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|{\bf{\widetilde{A}}}\boldsymbol{\Gamma}^{(2)}+{\bf{\widetilde{D}}}-{\bf{\widetilde{M}}}\|^{2}_{F}\
\text{s.t.}\ {\bf D}\ \text{diagonal},\ D_{jj}\geq 0\ \text{for}\
j=1,\ldots,p,\\}$
where we have substituted ${\boldsymbol{\Omega}}^{(2)}={\bf
A}{\boldsymbol{\Gamma}}^{(2)}+{\bf{D}}$ and we denote
${\bf{\widetilde{A}}}=\begin{pmatrix}{\bf A}\\\ {{\bf
I}_{|\mathcal{T}|}}\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times|\mathcal{T}|},\
{\bf{\widetilde{D}}}=\begin{pmatrix}{\bf D}\\\ {{\bf 0}_{|\mathcal{T}|\times
p}}\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times p},\ \text{and}\
{\bf{\widetilde{M}}}=\begin{pmatrix}\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(2)}/\rho\\\
\boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(5)}/\rho\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times
p}.$
The solution
$\displaystyle\boldsymbol{\widehat{\Gamma}}^{(2)}_{k+1}$ $\displaystyle=$
$\displaystyle({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}}_{k+1})$
(14) $\displaystyle=$ $\displaystyle({\bf A}^{\top}{\bf A}+{\bf
I}_{|\mathcal{T}|})^{-1}({\bf A}^{\top}:{\bf
I}_{|\mathcal{T}|})({\bf\widetilde{M}}-{\bf{\widetilde{D}}}_{k+1})$
is immediate and we are left with
$\displaystyle\widehat{\boldsymbol{D}}_{k+1}$ $\displaystyle:=$
$\displaystyle\underset{{\bf
D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})\|^{2}_{F}\
\text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\
j=1,\ldots,p,\\}$ $\displaystyle=$ $\displaystyle\underset{{\bf
D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|({\bf
I}_{p+|\mathcal{T}|}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top})({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})\|^{2}_{F}\
\text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\
j=1,\ldots,p,\\}$ $\displaystyle=$ $\displaystyle\underset{{\bf
D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|{\bf B}-{\bf C}{\bf{D}}\|^{2}_{F}\
\text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\
j=1,\ldots,p,\\}$
with ${\bf B}=({{\bf
I}_{p+|\mathcal{T}|}}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}){\bf{\widetilde{M}}}\in\mathds{R}^{(p+|\mathcal{T}|)\times
p},\ {\bf C}=({{\bf I}_{p}}:{{\bf
0}_{p\times|\mathcal{T}|}})^{\top}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{A}}^{\top}\in\mathds{R}^{(p+|\mathcal{T}|)\times
p}$. The solution is
$\text{diag}({\bf{\widehat{D}}}_{k+1})=\text{diag}({\bf C}^{\top}{\bf
C})^{-1}\text{diag}({\bf B}^{\top}{\bf C})_{+}.$ (15)
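The closed-form updates (14)-(15) can be sketched as follows. The helper below is illustrative only (names are ours); it recomputes the projection onto the column space of $\widetilde{\bf A}$ explicitly, whereas in practice one would cache it across iterations.

```python
import numpy as np

def update_gamma2_and_D(A, M_tilde):
    """Joint update of Gamma^(2) and D; cf. equations (14)-(15).

    A       : (p, |T|) aggregation matrix
    M_tilde : (p + |T|, p) stacked target defined after equation (13)
    Returns (Gamma2, D) with D a nonnegative diagonal matrix.
    """
    p, T = A.shape
    A_tilde = np.vstack([A, np.eye(T)])                        # (p+|T|) x |T|
    AtA = A_tilde.T @ A_tilde                                  # = A^T A + I_{|T|}
    P = A_tilde @ np.linalg.solve(AtA, A_tilde.T)              # projection onto col(A_tilde)
    B = (np.eye(p + T) - P) @ M_tilde
    C = np.vstack([np.eye(p), np.zeros((T, p))]) - A_tilde @ np.linalg.solve(AtA, A.T)
    # equation (15): diag(D) = diag(C^T C)^{-1} diag(B^T C)_+
    d = np.maximum(np.einsum('ij,ij->j', B, C), 0.0) / np.einsum('ij,ij->j', C, C)
    D = np.diag(d)
    D_tilde = np.vstack([D, np.zeros((T, p))])
    Gamma2 = np.linalg.solve(AtA, A_tilde.T @ (M_tilde - D_tilde))   # equation (14)
    return Gamma2, D
```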
### B.4 Solving for $\boldsymbol{\Omega}^{(3)}$
Minimizing the augmented Lagrangian with respect to
$\boldsymbol{\Omega}^{(3)}$ gives
$\displaystyle\widehat{\boldsymbol{\Omega}}^{(3)}_{k+1}$ $\displaystyle:=$
$\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\langle{\bf
U}^{(3)},{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\|^{2}_{F}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$
$\displaystyle=$
$\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(3)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(3)}/\rho)\|^{2}_{F}+\dfrac{1}{2\rho}\|{\bf
U}^{(3)}\|_{F}^{2}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$
$\displaystyle=$
$\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(3)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(3)}/\rho)\|^{2}_{F}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$
The solution is simply elementwise soft-thresholding:
$\widehat{\Omega}^{(3)}_{k+1,ij}=\begin{cases}S({\widehat{\Omega}}_{k,ij}-{{\widehat{U}}}_{k,ij}^{(3)}/\rho,\lambda_{2}/\rho),&\text{if}\
i\neq j\\\
{\widehat{\Omega}}_{k,ij}-{{\widehat{U}}}_{k,ij}^{(3)}/\rho,&\text{if}\
i=j,\\\ \end{cases}$ (16)
with the soft-threshold operator
$S(\omega,\lambda)=\text{sign}(\omega)\text{max}(|\omega|-\lambda,0)$ applied
to $\omega\in\mathds{R}$.
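A corresponding sketch of the off-diagonal soft-thresholding update (16), with the diagonal left unpenalised (illustrative names, NumPy assumed):

```python
import numpy as np

def update_omega3(omega_hat, u3_hat, rho, lam2):
    """Soft-thresholding update for Omega^(3); cf. equation (16)."""
    M = omega_hat - u3_hat / rho
    out = np.sign(M) * np.maximum(np.abs(M) - lam2 / rho, 0.0)   # off-diagonal entries
    np.fill_diagonal(out, np.diag(M))                            # diagonal is not penalised
    return out
```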
### B.5 Update Variables $\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$
Minimizing the augmented Lagrangian with respect to variables
$\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$ gives
$\displaystyle\widehat{\boldsymbol{\Omega}}_{k+1}$ $\displaystyle:=$
$\displaystyle\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\left\\{\sum_{i=1}^{3}\|\boldsymbol{\widehat{\Omega}}^{(i)}_{k+1}-(\boldsymbol{\Omega}-\widehat{{\bf
U}}^{(i)}_{k}/\rho)\|_{F}^{2}\right\\}=\bar{{\boldsymbol{\Omega}}}_{k+1}+\dfrac{1}{\rho}\bar{\bf
U}^{\Omega}_{k}$ (17) $\displaystyle\widehat{\boldsymbol{\Gamma}}_{k+1}$
$\displaystyle:=$
$\displaystyle\underset{\boldsymbol{\Gamma}}{\operatorname{argmin}}\left\\{\sum_{i=1}^{2}\|\boldsymbol{\widehat{\Gamma}}^{(i)}_{k+1}-(\boldsymbol{\Gamma}-\widehat{{\bf
U}}^{(i+3)}_{k}/\rho)\|_{F}^{2}\right\\}=\bar{{\boldsymbol{\Gamma}}}_{k+1}+\dfrac{1}{\rho}\bar{\bf
U}^{\Gamma}_{k},$ (18)
where
$\bar{{\boldsymbol{\Omega}}}_{k}:=\dfrac{\widehat{\boldsymbol{\Omega}}^{(1)}_{k}+\widehat{\boldsymbol{\Omega}}^{(2)}_{k}+\widehat{\boldsymbol{\Omega}}^{(3)}_{k}}{3},\bar{\bf
U}^{\Omega}_{k}:=\dfrac{\widehat{\bf U}^{(1)}_{k}+\widehat{\bf
U}^{(2)}_{k}+\widehat{\bf
U}^{(3)}_{k}}{3},\bar{{\boldsymbol{\Gamma}}}_{k}:=\dfrac{\widehat{\boldsymbol{\Gamma}}^{(1)}_{k}+\widehat{\boldsymbol{\Gamma}}^{(2)}_{k}}{2},\bar{\bf
U}^{\Gamma}_{k}:=\dfrac{\widehat{\bf U}^{(4)}_{k}+\widehat{\bf
U}^{(5)}_{k}}{2}.$
### B.6 Update Dual Variables
The updates of the dual variables are given by
$\displaystyle\widehat{\bf U}^{(i)}_{k+1}$ $\displaystyle:=$
$\displaystyle\widehat{\bf
U}^{(i)}_{k}+\rho\left(\widehat{{\boldsymbol{\Omega}}}_{k+1}^{(i)}-\widehat{{\boldsymbol{\Omega}}}_{k+1}\right),\
\text{for}\ i=1,\ldots,3$ $\displaystyle\widehat{\bf U}^{(j+3)}_{k+1}$
$\displaystyle:=$ $\displaystyle\widehat{\bf
U}^{(j+3)}_{k}+\rho\left(\widehat{{\boldsymbol{\Gamma}}}_{k+1}^{(j)}-\widehat{{\boldsymbol{\Gamma}}}_{k+1}\right),\
\text{for}\ j=1,\ldots,2.$
Similarly, averaging the first three updates and the latter two gives
$\displaystyle\bar{\bf U}^{\Omega}_{k+1}$ $\displaystyle:=$ $\displaystyle\bar{\bf U}^{\Omega}_{k}+\rho\left(\bar{{\boldsymbol{\Omega}}}_{k+1}-\widehat{{\boldsymbol{\Omega}}}_{k+1}\right)$ (19) $\displaystyle\bar{\bf U}^{\Gamma}_{k+1}$ $\displaystyle:=$ $\displaystyle\bar{\bf U}^{\Gamma}_{k}+\rho\left(\bar{{\boldsymbol{\Gamma}}}_{k+1}-\widehat{{\boldsymbol{\Gamma}}}_{k+1}\right).$ (20)
Substituting (17) and (18) into (19) and (20) yields that $\bar{\bf
U}^{\Omega}_{k+1}=\bar{\bf U}^{\Gamma}_{k+1}={\bf 0}$ after the first
iteration.
## Appendix C Additional Simulation Results
Figure 12: Simulation results for increasing number of nodes $p$. Top:
Aggregation performance (RI: left; ARI: right); Bottom: Sparsity recovery
(FPR: left; FNR: right) of the four estimators
Figure 13: Simulation results for increasing number of blocks $K$. Top:
Aggregation performance (RI: left; ARI: right); Bottom: Sparsity recovery
(FPR: left; FNR: right) of the four estimators
## Appendix D Financial Application: Data Description
Table 4: Financial Application: Data Description, as taken from https://realized.oxford-man.ox.ac.uk/data/assets.
Abbreviation | Description | Location
---|---|---
DJI | Dow Jones Industrial Average | US
IXIC | Nasdaq 100 | US
SPX | S&P 500 Index | US
RUT | Russel 2000 | US
GSPTSE | S&P/TSX Composite index | Canada
BVSP | BVSP BOVESPA Index | Brazil
MXX | IPC Mexico | Mexico
OMXC20 | OMX Copenhagen 20 Index | Denmark
OMXHPI | OMX Helsinki All Share Index | Finland
OMXSPI | OMX Stockholm All Share Index | Sweden
OSEAX | Oslo Exchange All-share Index | Norway
GDAXI | Deutscher Aktienindex | Germany
SSMI | Swiss Stock Market Index | Switzerland
BVLG | Portuguese Stock Index | Portugal
FTMIB | Financial Times Stock Exchange Milano Indice di Borsa | Italy
IBEX | Iberia Index 35 | Spain
SMSI | General Madrid Index | Spain
AEX | Amsterdam Exchange Index | Netherlands
BFX | Bell 20 Index | Belgium
FCHI | Cotation Assistée en Continue 40 | France
FTSE | Financial Times Stock Exchange 100 | UK
STOXX50E | EURO STOXX 50 | Europe
HSI | HANG SENG Index | Hong Kong
KS11 | Korea Composite Stock Price Index (KOSPI) | South Korea
N225 | Nikkei 225 | Japan
SSEC | Shanghai Composite Index | China
STI | Straits Times Index | Singapore
KSE | Karachi SE 100 Index | Pakistan
BSESN | S&P Bombay Stock Exchange Sensitive Index | India
NSEI | NIFTY 50 | India
AORD | All Ordinaries Index | Australia
# The impact of shear on the rotation of Galactic plane molecular clouds
Raffaele Rani,1 Jia-Lun Li,2 Toby J. T. Moore,3 David J. Eden,4 Andrew J.
Rigby,5 Geumsook Park,6,7,8 Yueh-Ning Lee 1
1Center of Astronomy and Gravitation, Department of Earth Sciences, National
Taiwan Normal University, 88, Sec. 4, Ting-Chou Rd., Wenshan District,
Taipei 116, Taiwan R.O.C.
2 Institute of Astronomy, National Tsing Hua University, Hsinchu 30013, Taiwan
R.O.C.
3Astrophysics Research Institute, Liverpool John Moores University, IC2,
Liverpool Science Park, 146 Brownlow Hill, Liverpool, L3 5RF, UK
4Armagh Observatory and Planetarium, College Hill, Armagh, BT61 9DB, UK
5School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, UK
6Telepix Co., Ltd., 17, Techno 4-ro, Yuseong-gu, Daejeon 34013, Republic of
Korea
7Research Institute of Natural Sciences, Chungnam National University, 99
Daehak-ro, Yuseong-gu, Daejeon 34134, Republic of Korea
8Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu,
Daejeon 34055, Republic of Korea
E-mail: <EMAIL_ADDRESS> (RR)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Stars form in the densest regions of molecular clouds; however, there is no
universal understanding of the factors that regulate cloud dynamics and their
influence on the gas-to-stars conversion. This study considers the impact of
Galactic shear on the rotation of giant molecular clouds (GMCs) and its
relation to the solenoidal modes of turbulence. We estimate the direction of
rotation for a large sample of clouds in the ^13CO/C^18O (3-2) Heterodyne
Inner Milky Way Plane Survey (CHIMPS) and their corresponding sources in a new
segmentation of the ^12CO(3-2) High-Resolution Survey (COHRS). To quantify the
strength of shear, we introduce a parameter that describes the shear’s ability
to disrupt growing density perturbations within the cloud. Although we find no
correlation between the direction of cloud rotation, the shear parameter, and
the magnitude of the velocity gradient, the solenoidal fraction of the
turbulence in the CHIMPS sample is positively correlated with the shear
parameter and behaves similarly when plotted over Galactocentric distance.
GMCs may thus not be large or long-lived enough to be affected by shear to the
point of showing rotational alignment. In theory, Galactic shear can
facilitate the rise of solenoidal turbulence and thus contribute to
suppressing star formation. These results also suggest that the rotation of
clouds is not strictly related to the overall rotation of the disc, but is
more likely to be the imprint of Kelvin-Helmholtz instabilities in the
colliding flows that formed the clouds.
###### keywords:
molecular data – Physical Data and Processes – ISM: kinematics and dynamics –
Interstellar Medium (ISM) – Nebulae– ISM: clouds –
††pubyear: 2022††pagerange: The impact of shear on the rotation of Galactic
plane molecular clouds–A
## 1 Introduction
The efficiency and rate at which molecular gas is converted into stars
determine the evolution and the observable properties of galaxies. Compressive
motions associated with large-scale instabilities (gravitational instabilities
upon entering regions with differing densities, shear originating from the
differential galactic rotation, expanding superbubbles created by supernova
explosions; Elmegreen, 1995; McKee & Ostriker, 2007) induce the transition from atomic to molecular gas in the diffuse interstellar medium (ISM) and the
formation of giant molecular clouds.
In particular, the Kelvin–Helmholtz Instability (KHI) is a well-known
instability arising at the interface of shearing fluids (Drazin, 1981). In the ISM, the presence of shearing flows can lead to KHIs and may thereby cause the formation of clouds. KHIs may induce rotation (Fleck, 1989)
and contribute to turbulence and/or change the properties and structures of
the molecular cloud through turbulence and mixing (Berné et al., 2010; Röllig
et al., 2011; Sahai et al., 2012; Meidt et al., 2018, 2019). These mechanisms
lead to the formation of shock-bounded layers between convergent flows, a
process that induces fragmentation through non-linear instabilities (Vishniac,
1994). Numerical simulations of this scenario show that fully developed
turbulence arises in the shock-driven layers (Hunter et al., 1986; Klein &
Woods, 1998; Heitsch et al., 2009; Inoue & Inutsuka, 2012; Pudritz & Kevlahan,
2013; Inutsuka et al., 2015). This turbulent state is maintained throughout
the duration of a stream collision and its fragmentation into molecular
clouds. Cloud structure can also be changed by cloud-cloud collisions, which
yield larger and more massive clouds (Tan, 2000).
The internal turbulence of molecular clouds originates from a dissipative
energy cascade in compressible turbulent flows. At every scale, the fraction
of the energy that is not dissipated through shocks is transmitted to smaller-
scale structures (Kornreich & Scalo, 2000). In this framework, the relatively
high star-formation efficiency (SFE) observed in disc clouds is linked to the
prevalence of compressive (curl-free) turbulent modes. In contrast, the low
SFE that characterises clouds in the Central Molecular Zone (CMZ) is related
to the shear-driven solenoidal (divergence-free) component (Longmore et al.,
2013). A similar analysis of the Orion B molecular cloud (Orkisz et al., 2017)
found that the turbulence is mostly solenoidal, consistent with the low star-
formation rate associated with the cloud. These solenoidal modes are, however,
position-dependent and vary with scale within the Orion B cloud, with motions
around the main star-forming regions being strongly compressive. Thus,
significant inter-cloud variability of the compressive/solenoidal mode
fractions may be a decisive agent of variations in the SFE.
This framework suggests that the magnitude of shear in galaxies or the shear-
induced cloud-cloud collisions contribute to the SFE and the star formation rate (SFR; Silk, 1997; Tan, 2000). Models (Tan, 2000) and simulations (Weidner et al., 2010; Colling et al., 2018) yield contrasting evidence on the relation between enhanced/reduced SFR and high/low shear. Although shear-induced cloud-cloud collisions promote higher SFRs, accounting for magnetic fields and
stellar feedback in hydrodynamical simulations seems to complicate the
relationship between star formation and shear (Colling et al., 2018). Even if weaker shear enables higher rates of star formation,
feedback from the first generation of stars will dissipate the gas in
molecular clouds over shorter timescales, resulting in lower SFEs. A study
(Watson et al., 2012) of 20 bulgeless galaxies showed no correlation between
SFE and the galaxies’ circular velocities. Irregular galaxies also show poor
correlations between the shear magnitude and the SFRs (Hunter et al., 1998).
The analysis of shear-quantifying parameters in a sample of clouds in the
Galactic Ring Survey (Jackson et al., 2006a) found no evidence that shear
plays a significant role in opposing gravitational collapse. In addition, the
shear parameter of the clouds (see sub-section 4.2) does not depend on the
Galactic environment, and no correlations were found between this measure (estimated from a Galactic rotation model) and several indicators of star formation activity (Dib et al., 2012).
If shear enhances the solenoidal modes of turbulence in clouds by inducing a
velocity gradient in molecular clouds, it could be a factor responsible for
the observed decline in the relative fraction of power in turbulent solenoidal
modes (the solenoidal fraction) with Galactocentric distances (Rani et al.,
2022, shear being assumed to be stronger closer to the Galactic centre) in
the ^13CO/C^18O ($J=3-2$) Heterodyne Inner Milky Way Plane Survey (CHIMPS; Rigby et al., 2016). In this framework, shear could also be a major contributor to the negative correlation between the solenoidal fraction and the SFE in CHIMPS (Rani et al., 2022). If this were the case and shear really induced a full-
scale velocity gradient in molecular clouds, we ought to be able to quantify
this effect by considering the overall rotation of molecular clouds. Shear-
induced rotation would then also introduce full-scale solenoidal modes.
However, although a cloud’s overall rotation indeed contributes to the
increase of the solenoidal modes over the entire structure (see below), local
vorticity and the gas motions in larger, looser, more rarefied envelopes also
result in higher values of the overall solenoidal modes (Orkisz et al., 2017).
The solenoidal fraction, used in our studies to quantify the solenoidal turbulence, in fact provides an estimate of the relative power in the
solenoidal component of turbulence within an individual cloud without
distinguishing how these motions originate.
Any cloud rotation induced by shear related to Galactic rotation would
manifest as an alignment of the clouds’ axes of rotation with the axis of the
Galaxy. This alignment would then be lost with the diminishing impact of shear
at larger Galactocentric distances.
In this work, we consider the sample of CHIMPS clouds produced by Rani et al. (2022) together with a novel sample of ^12CO(3-2) sources in the CO High
Resolution Survey, COHRS (Dempsey et al., 2013; Park et al., 2022) to quantify
the strength of shear through induced cloud rotation. Furthermore, making use
of shear parameters defined by Dib et al. (2012), we attempt to find a
connection between shear, SFE and the solenoidal fraction in CHIMPS clouds.
Section 2 provides a brief description of the surveys chosen and the
construction of a new COHRS catalogue, including emission extraction and
distance assignments. Section 4.1 introduces the methods used to identify
velocity gradients and the directions of the rotation axes in first-moment
maps. It also provides the definition of a parameter used to quantify the
effect of shear. Finally, the results of the analysis are presented and
discussed in Sections 5 and 6.
## 2 Data
### 2.1 Surveys
The ^13CO/C^18O ($J=3-2$) Heterodyne Inner Milky Way Plane Survey (CHIMPS) is
a spectral survey of the $J=3-2$ rotational transitions of ^13CO at 330.587
GHz and C^18O at 329.331 GHz. The survey encompasses approximately 19 square
degrees of the Galactic plane, ranging in longitudes from
$27.5^{\circ}$ to $46.4^{\circ}$ and with latitudes $|b|<0.5^{\circ}$. It was conducted with an angular resolution of 15
arcseconds using observations carried out at the James Clerk Maxwell Telescope
(JCMT) in Hawaii (Rigby et al., 2016). Both isotopologues were observed
concurrently (Buckle et al., 2009) using the Heterodyne Array Receiver
Programme (HARP) together with the Auto-Correlation Spectral Imaging System
(ACSIS). The data collected are structured into position-position-velocity
(PPV) cubes, each with velocity channels of 0.5 km s-1 and a bandwidth of 200
km s-1. The velocity range varies, covering from $-50<v_{\rm LSR}<150$ km s-1
at $28^{\circ}$ to $-75<v_{\rm LSR}<125$ km s-1 at $46^{\circ}$, in order to
accommodate the Galactic velocity gradient associated with the spiral arms in
the kinematic local standard of rest (e.g., Dame et al., 2001). The mean root
mean square (rms) sensitivities of the ^13CO survey are $\sigma(T_{\rm
A}^{*})\approx 0.6$ K per velocity channel, while for C^18O we have
$\sigma(T_{A}^{*})\approx 0.7$ K, where $T_{A}^{*}$ represents the antenna
temperature corrected for ohmic losses inside the instrument, spillover,
rearward scattering, and atmospheric attenuation (Rigby et al., 2016).
The CO Hi-Resolution Survey (COHRS) mapped the ^12CO ($3-2$) emission in the
Inner Milky Way plane, covering longitudes $10.25^{\circ}<l<17.5^{\circ}$ with latitudes $|\,b\,|\leq 0.25^{\circ}$ and $17.5^{\circ}<l<50.25^{\circ}$ with $|\,b\,|\leq 0.25^{\circ}$ (Park et al., 2022). This particular region was
selected to match a set of important surveys including CHIMPS, the Galactic
Ring Survey (GRS; Jackson et al., 2006b), the FOREST Unbiased Galactic plane
Imaging survey with the Nobeyama 45-m telescope (FUGIN; Umemoto et al., 2017),
the Galactic Legacy Infrared Mid Plane Survey Extraordinaire (GLIMPSE;
Churchwell et al., 2009), the Bolocam Galactic Plane Survey (BGPS; Aguerre et
al., 2011), and the Herschel Infrared Galactic Plane Survey (Hi-GAL; Molinari
et al., 2016). COHRS observations were also performed at JCMT with HARP at
$345.786$ GHz and ACSIS set to a 1-GHz bandwidth, yielding a frequency
resolution of $0.488$ MHz ($0.42$ km s-1). The survey covers a velocity range
between $-30$ and $155$ km s-1, with a spectral resolution of $0.635$ km s-1
and angular resolution of $16.6$ arcsec (FWHM), producing a mean rms of
$\approx 0.7$ K in $T_{A}^{*}$.
The COHRS data (second release) are publicly
available (https://doi.org/10.11570/22.0078). In our analysis, we construct a
sub-sample of the full set of COHRS sources by only considering those COHRS
sources that contain the emission peaks of CHIMPS clouds. This set comprises
452 clouds. Hence this catalogue will be simply referred to as the COHRS
catalogue/survey.
## 3 Constructing a new COHRS catalogue
### 3.1 Cloud extraction
To identify molecular clouds in the COHRS data, we employ the Spectral
Clustering for Interstellar Molecular Emission Segmentation (SCIMES) algorithm
(Colombo et al., 2015). This image-segmentation method encodes the global
hierarchical structure of emission within a molecular-line datacube into a
dendrogram. This emission dendrogram is produced through the Python package
for astronomical dendrograms (Astrodendro, Astropy Collaboration et al., 2013,
2018). SCIMES then uses similarity criteria to analyse the dendrogram by
recasting it as a weighted complete graph (with vertices corresponding to the
leaves of the dendrogram and weighted edges representing the chosen affinity
relations between the leaves). The SCIMES algorithm then uses spectral
clustering on the Laplacian of the affinity matrix representing the graph to
partition the dendrogram into separate components. These clusters define a
segmentation of the emission into individual clouds.
Because of the variable weather conditions and the varying number of active
receptors during the 4 semesters of COHRS observations, the original cubes do
not present an entirely uniform sensitivity across the whole survey (Park et
al., 2022). To avoid high-noise regions being incorrectly identified as clouds
(i.e. false positives) and to prevent the loss of real signal-to-noise sources
in regions of low background (false negatives), the SCIMES extraction is
performed on the signal-to-noise ratio (SNR) cubes instead of brightness-
temperature data (Moore et al., 2015; Eden et al., 2017). This novel COHRS
catalogue thus differs from the one constructed by Colombo et al. (2019) for
the first COHRS data release for which the actual emission maps (masked for
given SNR thresholds) were used for the SCIMES segmentation.
Since the area covered by both CHIMPS and COHRS is too large to be analysed as
a single datacube and since our analysis of shear focuses on the largest ^12CO
structures in the emission as they are more likely to be impacted by the
differential rotation of the Galaxy, we organise the original datacubes into
18 regions of $4^{\circ}$ longitude each, with resolution degraded by a factor
2 in all 3 axes. Each pair of adjacent regions overlaps by $2^{\circ}$. This
wide overlap allows for any source in contact with the edges of any region to
be completely included and accounted for in the adjacent one (see below).
After the extraction, only clouds that contained at least one voxel with SNR
$\geq 10$ are retained. The parameters defining the emission dendrogram are
chosen as multiples of the background $\sigma_{\mathrm{rms}}$ (with
$\sigma_{\mathrm{rms}}=1$ in SNR cubes). We set each branch of the dendrogram
to be defined by an SNR change ($\mathtt{min\\_delta}=3$) and to contain at least 5 voxels ($\mathtt{min\\_npix}=5$). Any value of the SNR below $3\sigma_{\mathrm{rms}}$ ($\mathtt{min\\_val}=3$) is not considered.
Figure 1: Prescription for cloud removal in the overlapping area of adjacent
regions (between edge A and B in the figure). In each region, we remove all
clouds in contact with the edges of the field of observation. Clouds that
touch the B edge are assigned to the right region (light green area), while
those t between edges B and C are assigned to the next region (grey and shaded
green areas). The process is repeated on all regions with the exception of the
last region on the right, for which all the clouds between A and B are kept.
Green ticks and red crosses indicate the clouds that are removed and those
that are left, respectively.
The catalogue is cleared of spurious sources and noise artefacts left after extraction by applying an additional filter. This mask removes sources that cover fewer than 4 pixels in either spatial direction or fewer than 4 velocity channels. This requirement ensures that each cloud is fully
resolved in each direction and that the selection does not include sources
with too small a field size.
Clouds touching the edges of the field of observation are removed from the
catalogues. To avoid double counting of sources in the overlapping areas
between regions, the sources to remove in each region are selected according
to the recipe shown in Figure 1. The two-degree overlap was chosen after
visual inspection to ensure that the recipe would yield a unique set of
sources for all the 18 regions COHRS is divided into. The final COHRS
catalogue comprises 3271 sources.
### 3.2 Distances
The distances and physical properties of the sources identified through the
segmentation of CHIMPS are taken from the SCIMES catalogue introduced in Rani
et al. (2023). Helio- and Galactocentric distances to COHRS sources are
assigned through direct comparison with the aforementioned CHIMPS catalogue.
COHRS and CHIMPS sources are matched by considering the position of the COHRS
peak emission in the CHIMPS segmentation assignment cubes. A source whose peak
lies within a CHIMPS cloud is ascribed the distance of that cloud. Unassigned
sources are not required in the present study; however, they can be given distances by calculating their position with a Galactic rotation curve
model (Brand & Blitz, 1993; Reid et al., 2016).
### 3.3 Size and mass
We estimate the size of the CHIMPS and COHRS sources by considering an
‘equivalent’ radius that approximates the spatial extension of a cloud.
Following Rigby et al. (2019), we define the equivalent radius
$R_{\mathrm{eq}}$ as the radius of the circle whose area ($A_{c}$) is
equivalent to the projected area of the source scaled to its distance,
$R_{\mathrm{eq}}=\sqrt{A_{c}/\pi}.$ (1)
The values of the equivalent radii associated with the SCIMES and COHRS
sources were calculated directly from the values of the exact areas produced
by the Astrodendro dendrogram statistics tools. Figure 2 displays the
distribution of the equivalent radii of sources in the two surveys. Although
^12CO (3-2) emission detected some slightly larger structures than ^13CO (3-2), the mean sizes of the sources in the two surveys take on similar values.
notice that the lower optical depths of ^13CO imply that, although these
clouds are more centrally concentrated, $R_{\mathrm{eq}}$ does not account for
the intensity distribution, and so it returns an upper limit for the cloud
size. Due to the high opacity of ^12CO and a resulting lower effective
critical density, the ^12CO transition probes significantly lower H_2 volume
densities in the clouds than ^13CO (Tang et al., 2013; Roueff et al., 2020;
Mazumdar et al., 2021). Therefore, the COHRS map delineates broader scales,
whereas the emission of ^13CO is responsive to gas with higher column density,
thus delineating the denser, more compact clumps. As larger structures are
expected to be more susceptible to the impact of shear, the COHRS sample is
instrumental in revealing any preferential rotation induced by shear.
The mass of CHIMPS clouds can then be estimated through the column-density
cubes (Rigby et al., 2019). The H_2 mass of the cloud is estimated by
considering the mean mass per H_2 molecule, taken to be 2.72 times the mass of
the proton, accounting for a helium fraction of 0.25 (Allen, 1973), and an
abundance of $10^{6}$ H_2 molecules per ^13CO molecule (Draine, 2011).
COHRS masses are expressed in terms of the molecular gas luminosity and
calculated through a ^12CO luminosity-mass conversion factor
$M=\alpha_{\mathrm{CO}}L_{\mathrm{CO}}$, with
$\alpha_{\mathrm{{}^{12}CO(1-0)}}=4.35\,\mathrm{M}_{\odot}\,(\mathrm{K\,km\,s^{-1}\,pc^{2}})^{-1}$,
assuming a mean molecular weight of $2.72m_{\mathrm{H}}$ per hydrogen molecule
(Colombo et al., 2019).
Figure 2: Distribution of equivalent radii across the CHIMPS (red) and COHRS (blue) matching samples. The vertical lines indicate the mean values.
Figure 3: Distribution of masses across CHIMPS (red) and COHRS (blue). The vertical lines indicate the mean values.
The distribution of masses in COHRS (Fig. 3) peaks at higher values and
exhibits an overall greater number of sources at higher mass values than
CHIMPS. This behaviour may be enhanced by the different methods used in the
mass estimation.
## 4 Cloud rotation, shear and turbulence
### 4.1 Velocity gradients and rotation angles
If shear is the dominant force behind cloud rotation, at least at short
Galactocentric distances where shear is stronger, we should expect clouds to
rotate with axes parallel to the axis of rotation of the Galaxy (but with
opposite direction, see Fig. 4). Cloud rotation gives rise to a velocity
gradient across a molecular cloud. Although several processes (such as
turbulence; Burkert & Bodenheimer, 2000) may induce velocity gradients on different scales in molecular clouds, we assume a full-scale gradient to be an indicator of cloud rotation, as it may be induced by Galactic shear. Measuring
the overall velocity gradient is the most reliable and systematic approach to
the detection of rotation in large samples of clouds (Braine et al., 2018).
The evaluation of the velocity gradient and the subsequent determination of
the direction of the cloud’s axis of rotation is performed in two steps.
First, for each cloud, we calculate the first moment of the velocity of the
emission map in the frame of reference of the centre of mass of the cloud
(thus by shifting the velocity axis by the velocity of the cloud’s centroid)
$v(l,b)=\frac{\sum_{i}I_{i}v_{i}dv}{\sum_{i}I_{i}dv},$ (2)
where all non-zero emission voxels $i$ in the cloud contribute to the
summations with their velocity $v_{i}$ and emission $I_{i}$. All velocity
channels have the same size $dv$. This yields an emission-weighted velocity
value for each position ($l,b$) in the cloud, Considering a frame of reference
centred at the average values of the map, we now fit a plane (Imara & Blitz,
2011):
$v(l,b)=C_{1}l+C_{2}b+C_{3}$ (3)
with $C_{1}=\partial v/\partial l$, $C_{2}=\partial v/\partial b$, being the
velocity gradients in $l$ and $b$, and $C_{3}$ is a constant ($C_{3}=0$ in the
frame of reference of the centre of mass of the cloud).
The magnitude of the linear velocity gradient is then
$\Omega=\frac{(C_{1}^{2}+C_{2}^{2})^{\frac{1}{2}}}{d},$ (4)
where $d$ is the distance of the source (see also Fig. 8 and 9).
The direction of the velocity gradient (direction of increasing velocities)
with respect to the positive direction of Galactic longitude axis (in the
$l-b$ plane) can be estimated as the angle
$\theta=\arctan\left(\frac{C_{2}}{C_{1}}\right).$ (5)
The angle $\theta$, measured in degrees also represents the angle between the
axes of cloud rotation, perpendicular to the direction of the velocity
gradient, and the rotation of the Galaxy as shown in Fig. 4.
Since we lack information on the inclination $i$ of the individual clouds to
our line of sight, the measurement of the velocity gradient underestimates the
actual value by a factor $\sin(i)^{-1}$ (Phillips, 1999; Imara & Blitz, 2011).
Thus, the values of $\theta$ calculated through equation 5 represent a lower
bound for the sample, with the alignment with the Galactic rotation axis
worsening when the actual inclination of the cloud rotation is considered. The
lack of information on cloud inclination on the other hand leads to
underestimating the measured magnitude of the velocity gradient ($\Omega$)
and, consequently, the angular velocity of a cloud’s rotation (Imara & Blitz,
2011), making these measures much less reliable than the rotation angle for
the proposed analysis of shear. The method presented in this section focuses
on determining the direction of rotation of each cloud in the sample. As the
direction of the axis of rotation is ultimately determined by the overall
direction of the velocity gradient, this, in turn, accounts for the strengths
of the internal motion of the substructure within the cloud.
Figure 4: Definition of the direction of rotation of a cloud (see text). The
angle $\theta$ is defined with reference to the direction of rotation induced
by Galactic shear.
In our study, we consider the angle $\theta$, defined in Fig. 4 as
$\theta=90^{\circ}-\tilde{\theta}$, with $\tilde{\theta}$ derived from fitting
the first-moment map with a plane, as in equations 3 and 5.
The angle $\theta$ quantifies the difference in orientation between the
direction of cloud rotation and the direction of shear-induced rotation.
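As an illustration of this procedure, the following Python sketch fits the plane of equation (3) to a first-moment map (assumed to have been computed already, per equation 2) by least squares and returns $\Omega$ and $\tilde{\theta}$ (equations 4 and 5). It is a simplified stand-in for the actual pipeline: the coordinate handling, masking, and unit conventions (pixel scale in radians, distance in parsecs) are our assumptions, and we use arctan2 rather than arctan so that the angle covers the full circle.

```python
import numpy as np

def rotation_from_moment_map(v_map, distance, pix_scale):
    """Fit the plane of equation (3) to a first-moment map and return the
    gradient magnitude Omega (equation 4) and the angle theta-tilde (equation 5).

    v_map     : 2D first-moment map in km/s, NaN outside the cloud
    distance  : distance to the source (e.g. in pc)
    pix_scale : pixel size in radians
    """
    ny, nx = v_map.shape
    b_pix, l_pix = np.mgrid[0:ny, 0:nx].astype(float)
    mask = np.isfinite(v_map)
    # centre coordinates and velocities on their means over the cloud
    l = (l_pix - l_pix[mask].mean()) * pix_scale
    b = (b_pix - b_pix[mask].mean()) * pix_scale
    v = v_map[mask] - v_map[mask].mean()
    # least-squares fit of v = C1*l + C2*b + C3
    X = np.column_stack([l[mask], b[mask], np.ones(mask.sum())])
    (C1, C2, C3), *_ = np.linalg.lstsq(X, v, rcond=None)
    omega = np.hypot(C1, C2) / distance              # equation (4), km/s per pc
    theta_tilde = np.degrees(np.arctan2(C2, C1))     # equation (5), in degrees
    return omega, theta_tilde
```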
### 4.2 Quantifying the impact of shear
Assuming that overdensities in the molecular ISM accrete mass through
streaming motions along magnetic field lines, their growth then depends solely
on the interplay between gravity and the local level of galactic shear
(Elmegreen, 1993; Hunter et al., 1998). In this framework, since magnetic
fields can transfer the angular momentum away from the cloud, self-gravity and
shear play a more relevant role in core formation than the competition between
self-gravity, pressure, and the Coriolis force (which is represented by the Toomre $Q$ parameter; Toomre, 1964).
In a scenario in which the mass accretion of overdensities is determined by the competition between their self-gravity and the local strength of Galactic shear, the
growth rate of the density perturbations can be expressed as
$P_{r}=\frac{\pi G\Sigma}{\sigma_{v}}\ {\rm s}^{-1},$ (6)
where $\Sigma$ is the local gas surface density, $\sigma_{v}$ the velocity
dispersion, and $G$ the gravitational constant. Perturbations grow most
efficiently when their growth timescale is shorter than the shear timescale, i.e. when $1/P_{r}<|1/A|$, where $A$ is the Oort constant
$A=0.5\left(\frac{V}{R_{\mathrm{gc}}}-\frac{dV}{dR_{\mathrm{gc}}}\right),$ (7)
where $R_{\mathrm{gc}}$ is the Galactocentric distance of the source and $V$
the rotation velocity of the gas at a given Galactocentric radius. In
particular, for a source of size $L$ with centroid located at
$R_{\mathrm{gc}}$,
$A=0.5\left(\frac{V(R_{\mathrm{gc}})}{R_{\mathrm{gc}}}-\frac{|V(R_{\mathrm{gc}}+L/2)-V(R_{\mathrm{gc}}-L/2)|}{L}\right).$
(8)
From the Oort constant, we can define a critical surface density; this is the
threshold beyond which a density perturbation in the ISM becomes so
significant that it cannot be erased by shear:
$\Sigma_{\mathrm{sh}}=\frac{\alpha_{A}A\sigma_{v}}{\pi G},$ (9)
with $\alpha_{A}=\ln(Q)/2$. The factor $Q$ represents the growth factor that
perturbations in the diffuse ISM must have to overcome the disruptive impact
of shear. Hunter et al. (1998) quantified $Q$ to equal 100. This value yields
the density contrast between the diffuse ISM with densities $\sim
0.1-1\,\mathrm{cm}^{-3}$ and the molecular phase $\gtrsim
100\,\mathrm{cm}^{-3}$.
The critical surface density can thus be used to define a shear parameter for
gravitational instabilities:
$S=\frac{\Sigma_{\mathrm{sh}}}{\Sigma}=\frac{\alpha_{A}A\sigma_{v}}{\pi
G\Sigma},$ (10)
Shear will disrupt density perturbations at values of $S>1$ while being
ineffective at $S<1$.
The quantities that enter the definition of $S$ above are taken from the
catalogues produced by SCIMES (in particular the velocity dispersion and the
exact area of each source, adjusted for the distance). The masses in CHIMPS
are derived from the column density data cubes by scaling each voxel in the
source with the assigned distance. The surface density is then taken to be the
ratio between the mass and the exact projected area of the source (Rani et
al., 2023). COHRS surface densities are calculated using the luminosity masses
reported in the COHRS catalogue (Park et al., 2022).
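The shear parameter can be evaluated directly from equations (8)-(10) once a rotation curve is adopted. The sketch below is purely illustrative (the function names and the flat 220 km s^-1 rotation curve are our assumptions, not the rotation model used in the paper) and works in units of pc, km s^-1, and M_⊙ pc^-2.

```python
import numpy as np

G = 4.302e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def oort_A(V, R_gc, L):
    """Local Oort constant for a cloud of size L centred at R_gc; cf. equation (8).
    V is a callable rotation curve V(R) in km/s, with R and L in pc."""
    return 0.5 * (V(R_gc) / R_gc
                  - abs(V(R_gc + L / 2.0) - V(R_gc - L / 2.0)) / L)

def shear_parameter(Sigma, sigma_v, A, Q=100.0):
    """Shear parameter S = alpha_A * A * sigma_v / (pi * G * Sigma); cf. equations (9)-(10).
    Sigma in Msun/pc^2, sigma_v in km/s, A in km/s/pc; Q = 100 follows Hunter et al. (1998)."""
    alpha_A = np.log(Q) / 2.0
    return alpha_A * A * sigma_v / (np.pi * G * Sigma)

# purely illustrative numbers: flat 220 km/s rotation curve, a 30-pc cloud at 5 kpc
A = oort_A(lambda R: 220.0, R_gc=5000.0, L=30.0)
S = shear_parameter(Sigma=100.0, sigma_v=2.0, A=A)
```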
### 4.3 Solenoidal fraction
Our turbulence analysis is based on the statistical method developed by
Brunt et al. (2010) and Brunt & Federrath (2014), which allows us to quantify
the relative fraction of power in the solenoidal modes of turbulence present
in a molecular cloud from emission and column density observations. The main
idea behind the method is to reconstruct the properties of a three-dimensional
source from the information contained in its observed two-dimensional line-of-
sight projection.
According to the Helmholtz theorem, a three-dimensional vector field can be
decomposed into a divergence-free (solenoidal, or transverse) component and a
curl-free (compressive, parallel) component. It can be shown through symmetry
arguments and the local orthogonality of the Helmholtz components that, under
the assumption of statistical isotropy, the projected line-of-sight component
of the three-dimensional field is proportional to the solenoidal component of
the full field in the $k_{z}=0$ cut in Fourier space. The solenoidal fraction, or the amount of power in the solenoidal modes of the full field, is then defined
as the ratio of the variances of the projected component and the full field,
which can be written in terms of the power spectra of these fields (2D and 3D)
through the Parseval theorem. Therefore, the power spectrum of the projected
component is a direct measure of the power spectrum of the solenoidal
component of the full field. In the case of optically thin gas, this relation
translates naturally into a function of the power spectra of the zeroth
($W_{0}$) and first ($W_{1}$) moments of velocity in which the three-
dimensional field in question is the momentum density, $\rho\mathbf{v}$. In emission data cubes, the solenoidal fraction $R$ of each molecular cloud can
be practically estimated as
$R=\Bigg{[}\frac{\langle W_{1}^{2}\rangle}{\langle
W_{0}^{2}\rangle}\Bigg{]}\Bigg{[}\frac{\langle W_{0}^{2}/\langle
W_{0}\rangle^{2}\rangle}{1+B_{1}(\langle W_{0}^{2}\rangle/\langle
W_{0}\rangle^{2}-1)}\Bigg{]}\Bigg{[}g_{21}\frac{\langle W_{2}\rangle}{\langle
W_{0}\rangle}\Bigg{]}^{-1}B_{2}$ (11)
where
$B_{1}=\frac{(\sum_{k_{x}}\sum_{k_{y}}\sum_{k_{z}}f(k))-f(0)}{(\sum_{k_{x}}\sum_{k_{y}}f(k))-f(0)},$
(12)
and
$B_{2}=\frac{\sum_{k_{x}}\sum_{k_{y}}\sum_{k_{z}}f_{\perp}(k)\frac{k_{x}^{2}+k_{y}^{2}}{k^{2}}}{\sum_{k_{x}}\sum_{k_{y}}f_{\perp}(k)}.$
(13)
with $f(k)$ and $f_{\perp}(k)$ being the angular (azimuthal) averages of the power spectra of the zeroth and first moments (notation after Orkisz et al., 2017).
Figure 5: Distribution of the shear parameter $S$ with Galactocentric distance in the CHIMPS (red) and COHRS (blue) samples. The vertical and horizontal solid black lines mark the $5.5$-kpc and $S=1$ values respectively. These reference values are used to define an $R_{\mathrm{gc}}$-limited and an $S$-limited subsample of CHIMPS sources (see text). The right panel shows the binned means and the standard errors of the mean (S.E.M.) of the distributions of $S$ over the Galactocentric distance.
The angle brackets denote the spatial averages of the velocity moments
$W_{0}$, $W_{1}$, and $W_{2}$, calculated in the reference frame of the centre
of mass of the cloud. The statistical correction factor $g_{21}$ accounts for
the correlations between the variations of $\rho$ and $\mathbf{v}$. If $\rho$
and $\mathbf{v}$ are uncorrelated, $g_{21}=1$. The constant $g_{21}$ can be
expressed in terms of density, velocity and the spatial average of the density
$\rho_{0}$, by the variance of the three-dimensional volume density
$\langle(\rho/\rho_{0})^{2}\rangle$
$g_{21}=\frac{\langle\rho^{2}v^{2}\rangle/\langle\rho^{2}\rangle}{\langle\rho
v^{2}\rangle/\langle\rho\rangle}=\Bigg{\langle}\frac{\rho^{2}}{\rho_{0}^{2}}\Bigg{\rangle}^{\epsilon},$
(14)
where $\epsilon$ is a small positive constant which is the exponent of
the power law expressing the relation between the variance of the velocity
$\sigma_{v}^{2}$ and the density $\rho$ (Brunt et al., 2010; Brunt &
Federrath, 2014). In our study of the CHIMPS sample, we use the values of the
solenoidal fraction published by Rani et al. (2022). As the COHRS ^12CO
emission violates the requirement of being optically thin, the COHRS sources
will not be considered when discussing the solenoidal fraction.
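To make the structure of equation (11) concrete, the sketch below computes the unnormalised moment maps $W_{0}$, $W_{1}$, $W_{2}$ from a PPV cube in the cloud's centre-of-mass velocity frame and returns the leading estimate of $R$ obtained by setting the correction factors $B_{1}$, $B_{2}$ (equations 12-13) and $g_{21}$ (equation 14) to unity. It is a rough illustration under those stated simplifications, not the full Brunt & Federrath estimator used for the published values.

```python
import numpy as np

def solenoidal_fraction_rough(cube, v_axis):
    """Rough estimate of R from equation (11) with B1, B2 and g21 set to 1.

    cube   : PPV array of shape (n_v, n_y, n_x), non-negative emission
    v_axis : velocity of each channel in km/s
    """
    dv = abs(v_axis[1] - v_axis[0])
    # shift to the centre-of-mass velocity frame of the cloud
    v0 = (cube * v_axis[:, None, None]).sum() / cube.sum()
    v = v_axis - v0
    W0 = cube.sum(axis=0) * dv                              # zeroth moment
    W1 = (cube * v[:, None, None]).sum(axis=0) * dv         # first moment (momentum-like)
    W2 = (cube * v[:, None, None] ** 2).sum(axis=0) * dv    # second moment (energy-like)
    mask = W0 > 0
    # with B1 = 1 the middle bracket of (11) equals 1, so two factors remain
    factor1 = np.mean(W1[mask] ** 2) / np.mean(W0[mask] ** 2)   # <W1^2> / <W0^2>
    factor3 = np.mean(W2[mask]) / np.mean(W0[mask])             # g21 <W2>/<W0>, with g21 = 1
    return factor1 / factor3
```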
## 5 Results
Fig. 5 shows how the shear parameter $S$ changes with Galactocentric distance
in the CHIMPS and COHRS samples. The plot suggests that in both samples shear
has a stronger impact on clouds at shorter distances from the Galactic centre,
declining outwards. Fig. 5 also emphasises source-crowding within the main
spiral arms, with large peaks in source concentration at $\sim 4.5$ and $\sim
6.5$ kpc. These are the locations of the Scutum and Sagittarius arms seen from
the Galactic Centre. The smaller peak at $\sim 7.5$ kpc corresponds to the
Perseus arm. These areas encompass the broadest ranges of $S$. The spikes in
$S$ observed at the spiral-arm locations result from the larger variety of clouds
collected in these regions.
Figure 6: Distributions of the inclinations of the axes of rotation of the
sources in the CHIMPS (left panel) and COHRS (right panel) samples with
respect to the axis of rotation of the Galaxy.
The greatest change in $S$ occurs between $3.5$ (the shortest Galactocentric
distance probed by CHIMPS) and $5.5$ kpc. COHRS sources exhibit larger values
of $S$ overall, but follow the same trend with distance as their CHIMPS
counterparts. In both samples, $S$ peaks at the inner edge of the Inner Galaxy
($\sim 3$ kpc) and falls off with increasing distance from the Galactic
centre. This trend seems to reflect the behaviour of the solenoidal fraction
described by Rani et al. (2022), where the relative fraction of power in the
solenoidal modes of turbulence peaks at $3.5-4$ kpc, the edge of the region
swept by the rotation of the Galactic bar, and declines outwards with a
shallow gradient (see Appendix A). The similarity in the behaviour of $R$ and
$S$ with Galactocentric distance suggests a connection between them.
Figure 7: Distributions of rotation angles with Galactocentric distance.
Figure 8: Top panel: Cloud rotation angles (as defined in Fig. 4) as a function of the solenoidal fraction $R$ for the full CHIMPS survey (red dots) and the $R_{\mathrm{gc}}$-limited subsample (lime triangles). Middle panel: The positive correlation between the shear parameter ($S$) and the solenoidal fraction ($R$) in CHIMPS clouds. The picture highlights the higher values of both $S$ and $R$ in the $R_{\mathrm{gc}}$-limited sample. Bottom panel: Distribution of the magnitude of the velocity gradients as a function of the solenoidal fraction $R$ for the full CHIMPS survey (red dots), the $R_{\mathrm{gc}}$-limited subsample (green triangles), and the $S$-limited sample (cyan triangles).
Figure 9: Left panel: Distribution of the magnitude of the velocity gradients as a function of the equivalent radii of the sources for the full CHIMPS survey (red dots), the $R_{\mathrm{gc}}$-limited subsample (green triangles), and the $S$-limited sample (cyan triangles). The distribution in the COHRS sample is shown in the right panel.
Where $S\geq 1$, shear is strong enough to disrupt the
perturbations/overdensities that seed star formation. It is in sources with
$S$ above this threshold that shear is most likely to induce an overall cloud
rotation. In the analysis that follows, we will consider a subset of CHIMPS
sources defined by $S\geq 1$ (hence referred to as the ‘$S$-limited’ set).
This set contains the sources for which shear can potentially disrupt the
gravitational collapse of clouds. We also investigate the set of sources with
Galactocentric distances between 3.5 and 5.5 kpc. This
’$R_{\mathrm{gc}}$-limited’ subset comprises those sources with the greatest
variation in $S$. The former subset includes 550 sources and the latter 926.
The distribution of the angles of the clouds’ rotation axes with respect to
the anti-parallel direction of rotation of the Galaxy ($\theta$, see Fig. 4)
is shown in Fig. 6. No preferred direction of rotation is apparent in any of
the CHIMPS samples considered. The ^12CO sources show accumulations of sources
around $100^{\circ}$ and $300^{\circ}$. These directions, however, are not in agreement with shear-induced rotation. Fig. 7 shows the distribution of rotation angles with Galactocentric distance. Even in this case, no correlation is found between distance and $\theta$ (Spearman test with p-value $\gg 0.001$), suggesting that even at shorter distances from the Galactic centre no obvious alignment is achieved.
The solenoidal fraction, $R$, is positively correlated with the shear factor
(Fig. 8) in both the full CHIMPS sample (Spearman correlation coefficient
$r=0.34$, $p$-value $\ll 10^{-3}$) and the two subsamples
($R_{\mathrm{gc}}$-limited: $r=0.36$; $S$-limited: $r=0.15$, both also with
p-values $\ll 10^{-3}$). This result is in agreement with the idea that
stronger shear promotes the rotational modes in the gas by enhancing both the
overall or partial rotation of a cloud. Both $R$ and $S$ are positively
correlated with the size of the CHIMPS clouds measured by the equivalent
radius (Spearman $r=0.31$ and $0.17$, respectively, with $p$-value $\ll 0.001$
for both). The shear-size correlation is much more marked in the COHRS sample,
for which we find $r=0.72$. These correlations suggest that the outer, less
dense layers of a molecular cloud are more sensitive to shear. Although no correlation is found between the clouds’ overall direction of rotation and the solenoidal fraction $R$ (Spearman test with p-value $\gg 0.001$, Fig. 8) in any of the CHIMPS samples, $R$ is positively correlated with $S$ (Spearman $r=0.34$ with $p$-value $\ll 0.001$). This correlation is also reflected in the distribution of $S$ with Galactocentric distance (Fig. 5), which mirrors the declining values of $R$ past the edge of the Inner Galaxy shown in Rani et al. (2022). Removing the location dependence of both $S$ and $R$ through a partial Spearman correlation test reveals a positive correlation ($r=0.3$ with p-value $\ll 0.001$) between the two quantities.
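For reference, one standard recipe for such a partial Spearman test (controlling for Galactocentric distance) is to rank-transform the three variables, regress the ranks of $S$ and $R$ on the ranks of $R_{\mathrm{gc}}$, and correlate the residuals. The sketch below (hypothetical function names, SciPy assumed) implements that recipe; it need not coincide in every detail with the test actually used for the quoted values.

```python
import numpy as np
from scipy import stats

def partial_spearman(x, y, z):
    """Spearman correlation between x and y controlling for z.

    Rank-transform all three variables, regress the ranks of x and y on the
    ranks of z, and Pearson-correlate the residuals; returns (r, p-value)."""
    rx, ry, rz = (stats.rankdata(a) for a in (x, y, z))
    def residuals(r):
        slope, intercept = np.polyfit(rz, r, 1)
        return r - (slope * rz + intercept)
    return stats.pearsonr(residuals(rx), residuals(ry))
```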
Fig. 8 highlights that, within the complex structure of CHIMPS molecular
clouds, the solenoidal modes of turbulence are more likely to arise from local
perturbations rather than from shear-driven global contributions. The impact
of shear on $R$ in the form of a contribution to vorticity is not detected by
the magnitude of the directed gradient in the distribution of velocity in first-
moment maps. We also find a slightly negative correlation between the gradient
magnitude and $R$ in our three samples (full CHIMPS: $r=-0.13$,
$R_{\mathrm{gc}}$-limited: $r=-0.11$, $S$-limited $r=-0.17$, with p-values
$\ll 10^{-3}$) and between the gradient magnitude and the equivalent radius of
the sources (Fig. 9, Spearman $r=-0.15$, p-values $\ll 0.001$). The latter
result indicates the higher probability of finding more well-defined gradients
in smaller clouds.
## 6 Discussion and Conclusions
Molecular clouds form as a result of the condensation of lower-density atomic
ISM gas, thereby acquiring its turbulent and shear-driving motions (Meidt et
al., 2018, 2019). Galactic dynamics can thus either stabilise clouds (Meidt et
al., 2013) or compress them, facilitating star formation (Jeffreson &
Kruijssen, 2018). The solenoidal modes of turbulence, in particular, have been
shown to be linked to reduced star formation efficiency in molecular clouds
(Rani et al., 2022). The relative fraction of power in the solenoidal modes of
turbulence (the solenoidal fraction, $R$) peaks at the edge of the Inner
Galaxy (the region swept by the rotation of the Galactic bar) and declines
with a shallow gradient with Galactocentric distance. The strength of shear
originating from the differential rotation of the Galaxy follows a similar
trend (see section 5). To quantify the strength of shear, we introduced the
shear factor $S$, which quantifies the ability of shear to disrupt the growth of density perturbations
within the cloud. The higher the value of $S$, the more likely shear is to
disrupt the growth of over-density and gravitational collapse (Dib et al.,
2012). Both CHIMPS and COHRS clouds display the same trend with $S$ peaking at
the inner edge of the Inner Galaxy and decaying with Galactocentric distance.
The similarities between the distributions of $R$ and $S$ over Galactocentric
distances suggest that, theoretically, shear may act as a facilitator of
solenoidal turbulence. This connection is strengthened by the positive
correlation between the shear parameter and the solenoidal fraction in CHIMPS
clouds for which the solenoidal fraction has been calculated. The declining
values of $R$ and $S$ with increased Galactocentric distances also agree with
the studies of the dense gas fraction within Galactic-plane molecular clouds
(Longmore et al., 2013; Urquhart et al., 2013) pointing to the heightened
shear of the CMZ as the cause of higher turbulent gas pressure, which raises
the density threshold for star formation. A clear example of this phenomenon
is the low SFE in the cloud G0.253+0.016 which appears to be caused by a
prevalence of shear-driven solenoidal turbulence (Federrath et al., 2016).
The parameter $S$ exhibits overall higher values in COHRS, a fact that is
compatible with the larger size of the structures traced by ^12CO emission
($S$ is positively correlated with the equivalent radius $R_{\mathrm{eq}}$ in both the CHIMPS and COHRS samples). Despite the correlation between the shear
parameter and Galactocentric distance, our study of the clouds’ rotation
(Braine et al., 2018) in both the CHIMPS and COHRS samples found no preferred
direction of rotation induced by Galactic shear (anti-parallel to the axis of
rotation of the Galaxy) at all Galactocentric distances (Fig. 7). The absence
of correlation between rotation angles and the Galactocentric distance
indicates that cloud rotation is not sensitive to the strength of shear even
at distances where $S>1$ (Fig. 5). At this value of $S$, shear becomes strong
enough to prevent the gravitational collapse of dense cores. Although this
condition is satisfied for a large portion of CHIMPS sources and the majority
of COHRS clouds at Galactocentric distances shorter than 7 kpc, the magnitude
of Galactic shear does not suffice to induce an overall cloud rotation. This
finding hints at shear within the Galactic disc not being strong enough to
induce rotation at the scale of the largest clouds in the CHIMPS and
COHRS samples. The rotation of these clouds thus must be imprinted at the time
of their formation or subsequently. An alternative hypothesis would see most
clouds at distances $<7$ kpc being too short-lived (Jeffreson et al., 2018b,
a) for shear to induce a preferred rotation in a statistically significant
number of them.
Our study also found that the magnitude of the velocity gradient in the
sources is negatively correlated with both $R$ and the equivalent radius. As the shear due to Galactic rotation does not produce any preferred direction in the clouds, it cannot be the mechanism responsible for these correlations, and thus ’local perturbations’ are the most likely candidates to explain these trends. These must be greater at lower $R_{\mathrm{gc}}$, however, so cloud and/or flow collisions are more influential at smaller
Galactic radii.
To calculate the angle of rotation with respect to the direction of the
Galactic rotation we introduce a rigid-body approximation by fitting the
velocity gradient with a plane. The negative correlation between the magnitude
of the velocity gradient and the size ($R_{\mathrm{eq}}$) of the source
emphasises that larger sources may exhibit more complex structures, such as
velocity flows that change significantly in different regions within the
cloud. In this framework, the velocity distribution appears to be dominated by
internal gas motions originating upon cloud formation (Vishniac, 1994), cloud-
cloud collisions (Tanvir & Dale, 2020) or produced by stellar feedback (Fall
et al., 2020), even in the subset of clouds with shear parameters greater than
unity. The slightly negative correlation between $R$ and the magnitude of the
velocity gradient may imply that turbulence in smaller sources with more
structured gradients is, in general, less solenoidally dominated than sources
that possess more extended envelopes (both $R$ and $S$ are positively
correlated with size). In the case of sources that extend over tens of
parsecs, shear may increase the relative fraction of solenoidal modes in the
turbulence by inducing rotation in the most extended envelopes.
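As an aside, the rigid-body approximation mentioned above (fitting the centroid-velocity field of each source with a plane) can be illustrated with a minimal least-squares sketch; the array names and the synthetic input below are purely illustrative and are not taken from the CHIMPS or COHRS reduction pipelines.

```python
import numpy as np

def fit_velocity_plane(x, y, v):
    """Least-squares fit of v = a*x + b*y + c (rigid-body rotation proxy).

    x, y : positional offsets of the emission within the source
    v    : centroid velocity at each position (same shape as x and y)
    Returns the gradient components, its magnitude and its position angle.
    """
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    (a, b, c), *_ = np.linalg.lstsq(A, v.ravel(), rcond=None)
    magnitude = np.hypot(a, b)                      # |grad v| per unit offset
    position_angle = np.degrees(np.arctan2(b, a))   # direction of the gradient
    return a, b, magnitude, position_angle

# Synthetic example: rigid rotation plus a small turbulent component.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(30.0), np.arange(30.0))
v = 0.05 * x - 0.02 * y + rng.normal(0.0, 0.1, x.shape)
print(fit_velocity_plane(x, y, v))
```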
Together, these results, along with the overall lack of a preferred direction
of rotation in all samples we have examined would suggest that the
differential rotation of the Galactic disc has little impact on molecular
clouds both during their formation and after they formed. The rotation of the
clouds would thus be inherited as an imprint of the instabilities (principally
Kelvin-Helmholtz instabilities) that lead to the formation of the clouds or
result from their history of mergers rather than arising from instabilities
induced by Galactic shear.
## Acknowledgements
G.P. was supported by the National Research Foundation of Korea through grants
NRF-2020R1A6A3A01100208 & RS-2023-00242652. This work was also partly
supported by the Korea Astronomy and Space Science Institute grant funded by
the Korea government (MSIT) (Project No. 2022-1-840-05).
## Data availability
The CHIMPS catalogue used for this paper is available from the CHIMPS archives
(Rigby et al., 2019). The COHRS catalogue constructed for the analysis in this
article is available to download from the CANFAR archive
(https://www.canfar.net/citation/landing?doi=23.0019).
## References
* Aguerre et al. (2011) Aguerre J. E., et al., 2011, ApJS, 192, S82
* Allen (1973) Allen C. W., 1973, Astrophysical Quantities. Springer
* Astropy Collaboration et al. (2013) Astropy Collaboration Robitaille T. P., Tollerud E. J., Greenfield et al., 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration Price-Whelan A. M., Sipőcz B. M., et al., 2018, AJ, 156, 123
* Berné et al. (2010) Berné O., Marcelino N., Cernichora J., 2010, Nature, 466, 947
* Braine et al. (2018) Braine J., et al., 2018, A&A, 612, A1
* Brand & Blitz (1993) Brand J., Blitz L., 1993, A&A, 275, 67
* Brunt & Federrath (2014) Brunt C. M., Federrath C., 2014, MNRAS, 442, 1451
* Brunt et al. (2010) Brunt C. M., Federrath C., Price D. J., 2010, MNRAS, 403, 1507
* Buckle et al. (2009) Buckle J. V., Hills R. E., Smith H., et al., 2009, MNRAS, 399, 1026
* Burkert & Bodenheimer (2000) Burkert A., Bodenheimer P., 2000, ApJ, 543, 822
* Churchwell et al. (2009) Churchwell E., et al., 2009, PASP, 121, 213
* Colling et al. (2018) Colling C., et al., 2018, A&A, 620, A21
* Colombo et al. (2015) Colombo D., Rosolowsky E., Ginsburg A., Duarte-Cabral A., Hughes A., 2015, MNRAS, 454, 2067
* Colombo et al. (2019) Colombo D., Rosolowsky E., Duarte-Cabral A., et al., 2019, MNRAS, 483, 4291
* Dame et al. (2001) Dame T. M., Hartmann D., Thaddeus P., 2001, ApJ, 547, 792
* Dempsey et al. (2013) Dempsey J. T., Thomas H. S., Currie M. J., 2013, ApJS, 209, 8
* Dib et al. (2012) Dib S., Helou G., Moore T. J. T., Urquhart J. S., Dariush A., 2012, ApJ, 758, 125
* Draine (2011) Draine B. T., 2011, Physics of the interstellar and intergalactic medium. Princeton University Press, Oxford
* Drazin P. (1981) Drazin P. R. W., 1981, Hydrodynamic Stability. Cambridge Univ. Press, New York
* Eden et al. (2017) Eden D. J., Moore T. J. T., Plume R., et al., 2017, MNRAS, 469, 2163
* Elmegreen (1993) Elmegreen B. G., 1993, ApJ, 411, 170
* Elmegreen (1995) Elmegreen B. G., 1995, in Yuan C., You Y.-H., eds, , Molecular Clouds Star Formation. Singapore: World Scientific, p. 149
* Fall et al. (2020) Fall M. S., Krumholz M. R., Metzner D., 2020, ApJ, 710, L142
* Federrath et al. (2016) Federrath C., Rathborne J. M., Longmore S. N., et al., 2016, ApJ, 832, 143
* Fleck & Jr (1989) Fleck R. C., Jr 1989, AJ, 97, 783
* Heitsch et al. (2009) Heitsch F., et al., 2009, ApJ, 695, 248
* Hunter et al. (1986) Hunter J., Sandford M. T., Whitacker R. W., Klein R. I., 1986, ApJ, 305, 3
* Hunter et al. (1998) Hunter D. A., Elmegreen B. G., Baker A. L., 1998, ApJ, 493, 595
* Imara & Blitz (2011) Imara N., Blitz L., 2011, ApJ, 732, 78
* Inoue & Inutsuka (2012) Inoue T., Inutsuka S., 2012, ApJ, 759, 35
* Inutsuka et al. (2015) Inutsuka S., et al., 2015, A&A, 580, A49
* Jackson et al. (2006a) Jackson J. M., et al., 2006a, ApJS, 163, 145
* Jackson et al. (2006b) Jackson J. M., et al., 2006b, ApJS, 163, S145
* James & Percival (2016) James P. A., Percival S. M., 2016, MNRAS, 457, 917
* Jeffreson & Kruijssen (2018) Jeffreson S. M. R., Kruijssen J. M. D., 2018, MNRAS, 476, 3688
* Jeffreson et al. (2018a) Jeffreson S. M. R., et al., 2018a, MNRAS, 476, 3688
* Jeffreson et al. (2018b) Jeffreson S. M. R., et al., 2018b, MNRAS, 478, 3380
* Klein & Woods (1998) Klein R. I., Woods T., 1998, ApJ, 497, 777
* Kornreich & Scalo (2000) Kornreich P., Scalo J., 2000, ApJ, 531, 366
* Longmore et al. (2013) Longmore S., Bally J., Testi L. o., 2013, MNRAS, 429, 987
* Mazumdar et al. (2021) Mazumdar P., et al., 2021, A&A, 650, A164
* McKee & Ostriker (2007) McKee C. F., Ostriker E. C., 2007, ARA&A, 45, 565
* Meidt et al. (2013) Meidt S. E., et al., 2013, ApJ, 779, 45
* Meidt et al. (2018) Meidt S. E., et al., 2018, ApJ, 854, 100
* Meidt et al. (2019) Meidt S. E., et al., 2019, ApJ, 892, 73
* Molinari et al. (2016) Molinari S., et al., 2016, A&A, 591, A149
* Moore et al. (2015) Moore T. J. T., Plume R., Thompson M. A., et al., 2015, MNRAS, 453, 4264
* Orkisz et al. (2017) Orkisz J. H., Pety J., Gerin M., et al., 2017, A&A, 599, A99
* Park et al. (2022) Park G., et al., 2022, ApJ, Suppl. Ser., 264, 16
* Phillips (1999) Phillips J. P., 1999, A&AS, 134, 241
* Pudritz & Kevlahan (2013) Pudritz R. E., Kevlahan N. K.-R., 2013, PHILOS T R SOC A, 371, 2012
* Rani et al. (2022) Rani R., et al., 2022, MNRAS, 515, 271
* Rani et al. (2023) Rani R., et al., 2023, MNRAS, 523, 1832
* Reid et al. (2016) Reid M. J., Dame T. M., Menten K. M., Brunthaler A., 2016, ApJ, 823, 77
* Rigby et al. (2016) Rigby A. J., Moore T. J. T., Plume R., et al., 2016, MNRAS, 456, 2885
* Rigby et al. (2019) Rigby A. J., Moore T. J. T., Eden D. J., Urquhart J. S., et al., 2019, A&A, 632, A58
* Röllig et al. (2011) Röllig M., et al., 2011, EAS Publications Series, 52, 281
* Roueff et al. (2020) Roueff A., et al., 2020, A&A, p. A26
* Sahai et al. (2012) Sahai R., Morris M. R., Claussen M. J., 2012, ApJ, 751, 69
* Silk (1997) Silk J., 1997, ApJ, 481, 703
* Tan (2000) Tan J. C., 2000, ApJ, 536, 173
* Tang et al. (2013) Tang X. D., et al., 2013, A&A, 551, A28
* Tanvir & Dale (2020) Tanvir T. S., Dale J. E., 2020, MNRAS, 494, 246
* Toomre (1964) Toomre A., 1964, ApJ, 139, 1217
* Umemoto et al. (2017) Umemoto T., et al., 2017, PASJ, 69, 1
* Urquhart et al. (2013) Urquhart J. S., Moore T. J. T., Schuller F., et al., 2013, MNRAS, 431, 1752
* Vishniac (1994) Vishniac E., 1994, ApJ, 428, 186
* Watson et al. (2012) Watson L. C., et al., 2012, ApJ, 751, 123
* Weidner et al. (2010) Weidner C., Bonnell I. A., Zinnecker H., 2010, ApJ, 724, 1503
## Appendix A The solenoidal fraction across CHIMPS sources
To facilitate the comparison between the trend shown in Fig. 5 and the
behaviour of the solenoidal fraction, $R$, across the sample considered in this
study, we reproduce the main result obtained by Rani et al. (2022). As Fig. 10
shows, $R$ peaks in the 3–4-kpc bin. Although confirmation of this trend
requires the analysis of sources at lower longitudes, this result is
consistent with the disc becoming stable against gravitational collapse at
these radii.
The 3–4-kpc distance bin marks the boundary between the inner Galaxy and the
region of influence of the Galactic bar, which in extragalactic systems has
been observed to quench star formation (James & Percival, 2016).
Figure 10: Distributions of the solenoidal fraction with Galactocentric
distance in CHIMPS. The size of the bins is adjusted to the number of sources
(Rani et al., 2022). The horizontal lines represent the mean value within each
bin, while the vertical bars indicate the standard error of the mean.
|
# Thompson Sampling Regret Bounds for Contextual Bandits with sub-Gaussian
rewards ††thanks: This work was partially supported by (i) the Wallenberg AI,
Autonomous Systems and Software Program (WASP) funded by the Knut and Alice
Wallenberg Foundation and (ii) the Swedish Research Council under contract
2019-03606.
Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, and Mikael
Skoglund Division of Information Science and Engineering (ISE) KTH Royal
Institute of Technology<EMAIL_ADDRESS>
###### Abstract
In this work, we study the performance of the Thompson Sampling algorithm for
Contextual Bandit problems based on the framework introduced by [1] and their
concept of lifted information ratio. First, we prove a comprehensive bound on
the Thompson Sampling expected cumulative regret that depends on the mutual
information of the environment parameters and the history. Then, we introduce
new bounds on the lifted information ratio that hold for sub-Gaussian rewards,
thus generalizing the results from [1], whose analysis requires binary rewards.
Finally, we provide explicit regret bounds for the special cases of
unstructured bounded contextual bandits, structured bounded contextual bandits
with Laplace likelihood, structured Bernoulli bandits, and bounded linear
contextual bandits.
## I Introduction
Contextual bandits encompass sequential decision-making problems where at
each round an agent must choose an action that results in a reward. This
action is chosen based on a context of the environment and a history of past
contexts, rewards, and actions [2].111This setting is also known as bandit
problems with covariates [3, 4], associative reinforcement learning [5, 6, 7],
or associative bandit problems [8]. Contextual bandits have become an
important subset of sequential decision-making problems due to their multiple
applications in healthcare, finance, recommender systems, or
telecommunications (see [9] for a survey on different applications).
There is interest in studying the theoretical limitations of algorithms for
contextual bandits. This is often done considering their _regret_ , which is
the difference in the collected rewards that an algorithm obtains compared to
an oracle algorithm that chooses the optimal action at every round [10, 11,
12, 13, 14, 15, 16, 1].
A particularly successful approach is the _Thompson Sampling (TS) algorithm_
[17], which was originally introduced for multi-armed bandits, that is,
sequential decision-making problems without context. Despite its simplicity,
this algorithm has been shown to work remarkably well for contextual bandits
[18, 19]. This algorithm has been studied for multi-armed bandits [20, 21, 22]
and in the more general context of Markov decision processes [23]. A crucial
quantity for the analysis of TS in the multi-armed bandit setting is the
_information ratio_ [20], which trades off achieving low regret and gaining
information about the optimal action.
In [1], the authors extend this concept to the _lifted information ratio_ to
fit the more challenging setting of contextual bandits, where the optimal
action changes at every round based on the context. However, their main
results are limited to contextual bandits with binary rewards. Although this is
a common setting, as rewards often represent either a success or a failure
[19], it fails to capture more nuanced scenarios, like dynamic pricing where
rewards represent revenue [24].
In this paper, we extend the results from [1] to contextual bandits with sub-
Gaussian rewards. These rewards include the common setup where the rewards are
bounded, but are not necessarily binary [10, 11, 12, 13, 14, 15, 16], or
setups where the expected reward is linear but is corrupted by a sub-Gaussian
noise [24].
More precisely, our contributions in this paper are:
* •
A comprehensive bound on the TS regret that depends on the mutual information
between the environment parameters and the history collected by the agent
(Theorem 1). Compared to [1, Theorem 1], this bound highlights that, given an
average lifted information ratio, the regret of TS does not depend on all the
uncertainty of the problem, but only on the uncertainty that can be explained
by the data collected from the TS algorithm.
* •
An alternative proof of [1, Theorem 2] showing that, if the log-likelihood of
the rewards satisfies certain regularity conditions, the TS regret is bounded
by a measure of the complexity of the parameters’ space in cases where this is
not countable. The presented proof (Theorem 2) highlights that the rewards
need not be binary.
* •
Showing that the lifted information ratio is bounded by the number of actions
$|\mathcal{A}|$ in unstructured settings (Lemma 1) and by the dimension $d$
when the expected rewards are linear (Lemma 2). These bounds extend [1,
Lemmata 1 and 2] from the case where the rewards are binary to the more
general setting where they are sub-Gaussian.
* •
Explicit regret bounds for particular settings as an application of the above
results (Section IV). Namely, bounds for (i) bounded unstructured contextual
bandits that show that TS has a regret with the desired [11, 25] rate of
$O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$, (ii) bounded structured
contextual bandits including those with Laplace likelihoods and Bernoulli
bandits, and (iii) bounded linear bandits that show that the TS regret is
competitive with LinUCB’s [12].
## II Preliminaries
### II-A General Notation
Random variables $X$ are written in capital letters, their realizations $x$ in
lowercase letters, their outcome space in calligraphic letters $\mathcal{X}$,
and its distribution is written as $\mathbb{P}_{X}$. The density of a random
variable $X$ with respect to a measure $\mu$ is written as
$f_{X}\coloneqq\frac{d\mathbb{P}_{X}}{d\mu}$. When two (or more) random
variables $X,Y$ are considered, the conditional distribution of $Y$ given $X$
is written as $\mathbb{P}_{Y|X}$ and the notation is abused to write their
joint distribution as $\mathbb{P}_{X}\mathbb{P}_{Y|X}$.
### II-B Problem Setting: Contextual Bandits
A _contextual bandit_ is a sequential decision problem where, at each time
step, or round $t\in[T]$, an agent interacts with an environment by observing
a context $X_{t}\in\mathcal{X}$ and by selecting an action
$A_{t}\in\mathcal{A}$ accordingly. Based on the context and the action taken,
the environment produces a random reward $R_{t}\in\mathbb{R}$. The data is
collected in a history $H^{t+1}=H^{t}\cup H_{t+1}$, where
$H_{t+1}=\\{A_{t},X_{t},R_{t}\\}$. The procedure repeats until the end of the
time horizon, or last round $t=T$.
In the Bayesian setting, the environment is characterized by a parameter
$\Theta\in\mathcal{O}$ and a contextual bandit problem $\Phi$ is completely
defined by a prior environment parameter $\mathbb{P}_{\Theta}$, a context
distribution $\mathbb{P}_{X}$, and a fixed reward kernel
$\kappa_{\textnormal{reward}}:\mathcal{B}(\mathbb{R})\times(\mathcal{X},\mathcal{A},\mathcal{O})\to[0,1]$
such that
$\mathbb{P}_{R_{t}|X_{t},A_{t},\Theta}=\kappa_{\textnormal{reward}}\big{(}\cdot,(X_{t},A_{t},\Theta)\big{)}$.
Thus, the reward may be written as $R_{t}=R(X_{t},A_{t},\Theta)$ for some
(possibly random) function $R$.
The task in a Bayesian contextual bandit is to learn a policy
$\varphi=\\{\varphi_{t}:\mathcal{X}\times\mathcal{H}^{t}\to\mathcal{A}\\}_{t=1}^{T}$
taking an action $A_{t}$ based on the context $X_{t}$ and on the past
collected data $H^{t}$ that maximizes the _expected cumulative reward_
$R_{\Phi}(\varphi)\coloneqq\mathbb{E}\big{[}\sum_{t=1}^{T}R(X_{t},\varphi_{t}(X_{t},H^{t}),\Theta)\big{]}$.
#### II-B1 The Bayesian expected regret
The Bayesian expected regret of a contextual bandit problem measures the
difference between the performance of a given policy and the optimal one,
which is the policy that knows the true reward function and selects the
actions yielding the highest expected reward. For a given contextual bandit
problem, we define the performance of the optimal policy as the _optimal
cumulative reward_.
###### Definition 1
The _optimal cumulative reward_ of a contextual bandit problem $\Phi$ is
defined as
$R^{\star}_{\Phi}\coloneqq\sup_{\psi}\mathbb{E}\bigg{[}\sum_{t=1}^{T}R(X_{t},\psi(X_{t},\Theta),\Theta)\bigg{]},$
where the supremum is taken over the decision rules
$\psi:\mathcal{X}\times\mathcal{O}\to\mathcal{A}$ such that the expectation
above is defined.
A policy that achieves the supremum of Definition 1 is denoted as
$\psi^{\star}$ and the actions it generates are
$A^{\star}_{t}\coloneqq\psi^{\star}(X_{t},\Theta)$.
###### Assumption 1 (Compact action set)
The set of actions $\mathcal{A}$ is compact. Therefore, an optimal policy
$\psi^{\star}$ always exists.
The difference between the expected cumulative reward of a policy $\varphi$
and the optimal cumulative reward is the _Bayesian expected regret_.
###### Definition 2
The _Bayesian expected regret_ of a policy $\varphi$ in a contextual bandit
problem $\Phi$ is defined as
$\textnormal{REG}_{\Phi}(\varphi)\coloneqq
R^{\star}_{\Phi}-R_{\Phi}(\varphi).$
#### II-B2 The Thompson sampling algorithm
Thompson Sampling (TS) is an elegant algorithm to solve decision problems when
the environment $\Theta$ is unknown. It works by randomly selecting actions
according to their posterior probability of being optimal. More specifically,
at each round $t\in[T]$, the agent samples a Bayes estimate $\hat{\Theta}_{t}$
of the environment parameters $\Theta$ based on the past collected data
$H^{t}$ and selects the action given the optimal policy $\psi^{\star}$ for the
estimated parameters and the observed context $X_{t}$, that is
$\hat{A}_{t}=\psi^{\star}(X_{t},\hat{\Theta}_{t})$. The history collected by
the TS algorithm up to round $t$ is denoted $\hat{H}^{t}$. The pseudocode for
this procedure is given in Algorithm 1. Therefore, the Bayesian cumulative
reward $R^{\textnormal{TS}}_{\Phi}$ of the TS algorithm is
$R^{\textnormal{TS}}_{\Phi}\coloneqq\mathbb{E}\bigg{[}\sum_{t=1}^{T}R(X_{t},\psi^{\star}(X_{t},\hat{\Theta}_{t}),\Theta)\bigg{]},$
where $\hat{\Theta}_{t}$ has the property that
$\mathbb{P}_{\hat{\Theta}_{t}|\hat{H}^{t}}=\mathbb{P}_{\Theta|\hat{H}^{t}}$ a.s..
The Bayesian expected regret of the TS is denoted
$\textnormal{REG}^{\textnormal{TS}}_{\Phi}$ and is usually referred to as the
_TS cumulative regret_.
#### II-B3 Notation specific to contextual bandits
To aid the exposition, and since the $\sigma$-algebras of the history
$\hat{H}^{t}$ and the context $X_{t}$ are often in the conditioning of the
expectations and probabilities used in the analysis, similarly to [21, 1], we
define the operators
$\mathbb{E}_{t}[\cdot]\coloneqq\mathbb{E}[\cdot|\hat{H}^{t},X_{t}]$ and
$\mathbb{P}_{t}[\cdot]\coloneqq\mathbb{P}[\cdot|\hat{H}^{t},X_{t}]$, whose
outcomes are $\sigma(\mathcal{H}^{t}\times\mathcal{X})$-measurable random
variables, where $\mathcal{H}=\mathcal{A}\times\mathcal{X}\times\mathbb{R}$.
Similarly, we define
$\textup{I}_{t}(\Theta;R_{t}|\hat{A}_{t})\coloneqq\mathbb{E}_{t}[\textup{D}_{\textnormal{KL}}(\mathbb{P}_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t},\Theta}\lVert\mathbb{P}_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t}})]$
as the _disintegrated_ conditional mutual information between the parameter
$\Theta$ and the reward $R_{t}$ given the action $\hat{A}_{t}$, _given the
history $\hat{H}^{t}$ and the context $X_{t}$_, see [26, Definition 1.1],
which is itself a $\sigma(\mathcal{H}^{t}\times\mathcal{X})$-measurable random
variable.
Algorithm 1 Thompson Sampling algorithm
1: Input: environment parameters prior $\mathbb{P}_{\Theta}$.
2: for $t=1$ to T do
3: Observe the context $X_{t}\sim\mathbb{P}_{X}$.
4: Sample a parameter estimation
$\smash{\hat{\Theta}_{t}\sim\mathbb{P}_{\Theta|\hat{H}^{t}}}$.
5: Take the action $\hat{A}_{t}=\psi^{\star}(X_{t},\hat{\Theta}_{t})$.
6: Collect the reward $R_{t}=R(X_{t},\hat{A}_{t},\Theta)$.
7: Update the history
$\hat{H}^{t+1}=\\{\hat{H}^{t},\hat{A}_{t},X_{t},R_{t}\\}$.
8: end for
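As a minimal illustration of Algorithm 1 (not part of the analysis), the sketch below instantiates it for the special case of finitely many contexts and actions with Bernoulli rewards and an independent Beta prior on each success probability, so that sampling $\hat{\Theta}_{t}$ from the posterior reduces to sampling every coordinate; all numerical values are made up for the example, and the inline comments refer to the step numbers of Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_contexts, n_actions, T = 3, 4, 5000

# Unknown environment parameter Theta: success probability per (context, action).
theta_true = rng.uniform(0.1, 0.9, size=(n_contexts, n_actions))

# Beta(1, 1) prior on each coordinate; (alpha, beta) track the posterior counts.
alpha = np.ones((n_contexts, n_actions))
beta = np.ones((n_contexts, n_actions))

regret = 0.0
for t in range(T):
    x = rng.integers(n_contexts)              # 3: observe the context X_t
    theta_hat = rng.beta(alpha[x], beta[x])   # 4: sample Theta_hat_t from the posterior
    a = int(np.argmax(theta_hat))             # 5: act optimally for the sampled parameter
    r = rng.binomial(1, theta_true[x, a])     # 6: collect the reward R_t
    alpha[x, a] += r                          # 7: conjugate posterior update
    beta[x, a] += 1 - r
    regret += theta_true[x].max() - theta_true[x, a]

print(f"empirical Bayesian regret after {T} rounds: {regret:.1f}")
```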
## III Main results
In this section, we present our main results to bound the TS cumulative regret
for contextual bandits. In Section III-A, we first (Theorem 1) prove a
comprehensive bound on the TS cumulative regret that, rather than depending on
the entropy of the environment’s parameters as in [1, Theorem 1], depends on
their mutual information with the history. This highlights that, given an
average lifted information ratio, the TS cumulative regret does not depend on
the uncertainty of the parameters, but on the uncertainty of the parameters
explained by the history. Then (Theorem 2), we slightly relax the assumptions
of [1, Theorem 2] and give an alternative proof of this result, which
formalizes that the TS cumulative regret is bounded by the complexity of the
environment’s space. In Section III-B, we provide bounds on the lifted
information ratio. First (Lemma 1), without assuming any structure in the
rewards, we show a bound that scales linearly with the number of actions. We
then (Lemma 2) consider the special case of linear contextual bandits and show
that in that case we can obtain a bound that scales with the dimension of the
problem. These results, in turn, generalize [1, Lemmata 1 and 2], which are
only valid for binary rewards.
### III-A Bounding the TS cumulative regret
In the contextual bandits setting, the concept of _lifted information ratio_
was introduced in [1] as the random variable
$\Gamma_{t}\coloneqq\frac{\mathbb{E}_{t}[R^{\star}_{t}-R_{t}]^{2}}{\textup{I}_{t}(\Theta;R_{t}|\hat{A}_{t})},$
where $R_{t}$ is the reward collected by the TS algorithm and $R^{\star}_{t}$
is the one collected playing optimally, i.e.
$R(X_{t},\psi^{\star}(X_{t},\Theta),\Theta)$. This concept was inspired by
the _information ratio_ from [21] in the non-contextual multi armed bandit
problem setting and it is closely related to the _decoupling coefficient_ from
[16].
In the proof of [1, Theorem 1], it is shown that
$\textnormal{REG}_{\Phi}^{\textnormal{TS}}\leq\sqrt{\bigg{(}\sum_{t=1}^{T}\mathbb{E}[\Gamma_{t}]\bigg{)}\bigg{(}\sum_{t=1}^{T}\textup{I}(\Theta;R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})\bigg{)}}.$
(1)
This is employed to show a result bounding the TS cumulative regret for
problems with a countable environment space $\mathcal{O}$. However, this
intermediate step can also be leveraged to obtain a more general, and perhaps
more revealing bound on the TS cumulative regret.
###### Theorem 1
Assume that the average of the lifted information ratios is bounded
$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\Gamma_{t}]\leq\Gamma$ for some
$\Gamma>0$. Then, the TS cumulative regret is bounded as
$\displaystyle\textnormal{REG}^{\textnormal{TS}}_{\Phi}$
$\displaystyle\leq\sqrt{\Gamma T\textup{I}(\Theta;\hat{H}^{T+1})}$
$\displaystyle=\sqrt{\Gamma
T\mathbb{E}[\textup{D}_{\textnormal{KL}}(\mathbb{P}_{\Theta|\hat{H}^{T+1}}\lVert\mathbb{P}_{\Theta})]}.$
###### Proof:
The proof follows by an initial application of the chain rule of the mutual
information. Namely,
$\displaystyle\smash{\textup{I}(\Theta;\hat{H}^{T+1})=\sum\nolimits_{t=1}^{T}\textup{I}(\Theta;\hat{H}_{t+1}|\hat{H}^{t}).}$
Applying the chain rule once more to each term shows that
$\displaystyle\textup{I}(\Theta;\hat{H}_{t+1}|\hat{H}^{t})=\textup{I}(\Theta;X_{t},\hat{A}_{t}|\hat{H}^{t})+\textup{I}(\Theta;R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t}).$
Finally, the non-negativity of the mutual information completes the proof as
$\textup{I}(\Theta;\hat{H}_{t+1}|\hat{H}^{t})\geq\textup{I}(\Theta;R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})$.
∎
Theorem 1 has [1, Theorem 1] as a corollary by noting that for countable
parameters’ spaces $\textup{I}(\Theta;\hat{H}^{T+1})\leq\textup{H}(\Theta)$
and that if $\Gamma_{t}\leq\Gamma$ a.s. for all $t\in[T]$, then
$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\Gamma_{t}]\leq\Gamma$. This seemingly
innocuous generalization gives us insights on the TS cumulative regret via the
following two factors:
* •
The bound on the average of lifted information ratios $\Gamma$. This measures
the maximum information gain on the environment parameters on average through
the rounds. This is different from the requirement that
$\mathbb{E}[\Gamma_{t}]\leq\Gamma^{\prime}$ from [1], which penalizes equally
rounds with large or little information gain. This may be relevant in
scenarios where the lifted information ratio can vary drastically among
rounds.
* •
The mutual information between the parameters $\Theta$ and the history
$\hat{H}^{t}$. Contrary to the entropy $\textup{H}(\Theta)$ featured in the
bound [1, Theorem 1], which is a measure of the uncertainty of the parameters,
the mutual information $\textup{I}(\Theta;\hat{H}^{t})$ measures the
uncertainty of the parameters that is explained by the history of TS since
$\textup{I}(\Theta;\hat{H}^{t})=\underbrace{\textup{H}(\Theta)}_{\textnormal{Uncertainty
of
$\Theta$}}-\underbrace{\textup{H}(\Theta|\hat{H}^{t}).}_{\begin{subarray}{c}\textnormal{Uncertainty
of $\Theta$}\\\ \textnormal{not explained by $\hat{H}^{t}$}\end{subarray}}$
Moreover, the mutual information is the relative entropy between the TS
posterior on the parameters and the true parameters’ prior, i.e.
$\mathbb{E}[\textup{D}_{\textnormal{KL}}(\mathbb{P}_{\Theta|\hat{H}^{T+1}}\lVert\mathbb{P}_{\Theta})]$,
which measures how well the TS posterior is aligned with the true parameters’
distribution in the last round. Since the TS algorithm samples from the
posterior $\mathbb{P}_{\Theta|\hat{H}^{T+1}}$, there are situations where the
posterior is known analytically and thus this relative entropy can be
numerically estimated at each round [20, Section 6].
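As a concrete instance of this last observation (chosen here purely for illustration, not taken from [1] or [20]), in a Bernoulli bandit with a conjugate Beta prior the relative entropy between the posterior and the prior has a closed form, so the quantity appearing in Theorem 1 can be tracked numerically during a run; the sketch below evaluates it for a single coordinate of $\Theta$.

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """Relative entropy D_KL(Beta(a1, b1) || Beta(a2, b2)) in nats."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Posterior after 40 successes and 10 failures against a uniform Beta(1, 1) prior:
print(kl_beta(1 + 40, 1 + 10, 1.0, 1.0))  # information gained about this coordinate of Theta
```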
In [1], for binary rewards, i.e.
$R:\mathcal{X}\times\mathcal{A}\times\mathcal{O}\to\\{0,1\\}$, it is shown
that regularity on the reward’s log-likelihood is sufficient to guarantee a
bound on the TS cumulative regret _à la Lipschitz maximal inequality_ [27,
Lemma 5.7]. More precisely, if the parameters’ space $\mathcal{O}$ is a metric
space $(\mathcal{O},\rho)$, they impose that the log-likelihood is Lipschitz
continuous for all actions and all contexts. However, requiring the log-
likelihood random variable to be a Lipschitz process is sufficient, as we will
show shortly.
###### Assumption 2 (Lipschitz log-likelihood)
There is a random variable $C>0$ that can depend only on $R_{t},X_{t}$, and
$\hat{A}_{t}$ such that $|\log
f_{R_{t}|X_{t},\hat{A}_{t},\Theta=\theta}(R_{t})-\log
f_{R_{t}|X_{t},\hat{A}_{t},\Theta=\theta^{\prime}}(R_{t})|\leq
C\rho(\theta,\theta^{\prime})$ a.s. for all
$\theta,\theta^{\prime}\in\mathcal{O}$.
With this regularity condition, the TS cumulative regret can be bounded from
above by the “complexity” of the parameters’ space $\mathcal{O}$, measured by
the $\epsilon$-covering number of the space.
###### Definition 3
A set $\mathcal{N}$ is an $\epsilon$-net for $(\mathcal{O},\rho)$ if for every
$\theta\in\mathcal{O}$, there exists a _projection map_
$\pi(\theta)\in\mathcal{N}$ such that $\rho(\theta,\pi(\theta))\leq\epsilon$.
The smallest cardinality of an $\epsilon$-net for $(\mathcal{O},\rho)$ is
called _the $\epsilon$-covering number_
$|\mathcal{N}(\mathcal{O},\rho,\epsilon)|\coloneqq\inf\\{|\mathcal{N}|:\mathcal{N}\textnormal{
is an }\epsilon\textnormal{-net for }(\mathcal{O},\rho)\\}.$
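For intuition (this construction is not used in any proof), an $\epsilon$-net of a finite sample can be built greedily by adding any point that is not yet within $\epsilon$ of the current net; since every sampled point ends up covered, the size of the result is an upper estimate of the covering number of the sampled set. A minimal sketch with the Euclidean metric:

```python
import numpy as np

def greedy_epsilon_net(points, eps):
    """Greedy epsilon-net of a finite point set under the Euclidean metric."""
    net = []
    for p in points:
        # Add p only if it is not already covered by an existing net point.
        if all(np.linalg.norm(p - q) > eps for q in net):
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(0)
sample = rng.uniform(-1.0, 1.0, size=(2000, 3))  # points in a cube of R^3
net = greedy_epsilon_net(sample, eps=0.5)
print(len(net))  # upper estimate of the 0.5-covering number of the sample
```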
In [1], the result is proved by manipulating the densities and employing the
_Bayesian telescoping_ technique to write the so-called “Bayesian marginal
distribution” as the product of “posterior predictive distributions” [28].
Inspecting their proof, it appears that the result does not require the rewards
to be binary. Below, using the properties of mutual information and standard
arguments to bound Lipschitz processes [27, Section 5.2], we provide an
alternative proof of this result in which the weaker regularity condition
suffices and the requirement of binary rewards is seen to be unnecessary.
###### Theorem 2
Assume that the parameters’ space is a metric space $(\mathcal{O},\rho)$ and
let $|\mathcal{N}(\mathcal{O},\rho,\varepsilon)|$ be the $\epsilon$-covering
number of this space for any $\varepsilon>0$. Assume as well that the
log-likelihood is a Lipschitz process according to Assumption 2 and that the
average of the
lifted information ratios is bounded
$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\Gamma_{t}]\leq\Gamma$ for some
$\Gamma>0$. Then, the TS cumulative regret is bounded as
$\textnormal{REG}_{\Phi}^{\textnormal{TS}}\leq\sqrt{\Gamma
T\min_{\varepsilon>0}\big{\\{}\varepsilon\mathbb{E}[C]T+\log|\mathcal{N}(\mathcal{O},\rho,\varepsilon)|\big{\\}}}.$
###### Proof:
The proof follows considering (1) again. The mutual information terms can be
written as
$\displaystyle\textup{I}(\Theta;R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})=\mathbb{E}\bigg{[}\log\frac{f_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t},\Theta}(R_{t})}{f_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t}}(R_{t})}\bigg{]}.$
(2)
Consider now an $\varepsilon$-net of $\mathcal{O}$ with minimal cardinality
$|\mathcal{N}(\mathcal{O},\rho,\varepsilon)|$ and let $\pi$ be its projection map.
Then, the mutual information in (2) can equivalently be written as
$\displaystyle\mathbb{E}\bigg{[}\int_{\mathcal{O}}f_{\Theta|R_{t},\hat{H}^{t},X_{t},\hat{A}_{t}}(\theta)$
$\displaystyle\bigg{(}\log\frac{f_{R_{t}|X_{t},\hat{A}_{t},\Theta=\theta}(R_{t})}{f_{R_{t}|X_{t},\hat{A}_{t},\Theta=\pi(\theta)}(R_{t})}$
$\displaystyle+\log\frac{f_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t},\Theta=\pi(\theta)}(R_{t})}{f_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t}}(R_{t})}\bigg{)}d\theta\bigg{]},$
since
$f_{R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t},\Theta}=f_{R_{t}|X_{t},\hat{A}_{t},\Theta}$
a.s. by the conditional Markov chain $R_{t}-\hat{A}_{t}-\hat{H}^{t}\ |\
\Theta,X_{t}$. The regularity condition in Assumption 2 ensures that the first term is
bounded by $\varepsilon\mathbb{E}[C]$. Then, defining the random variable
$\Theta_{\pi}\coloneqq\pi(\Theta)$, we note that the second term is equal to
$\textup{I}(\Theta_{\pi};R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})$.
Summing the $T$ terms from the regularity condition results in
$\varepsilon\mathbb{E}[C]T$ and, similarly to the proof of Theorem 1, summing
the $T$ mutual information
$\textup{I}(\Theta_{\pi};R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})$ terms results
in the upper bound
$\smash{\sum\nolimits_{t=1}^{T}\textup{I}(\Theta_{\pi};R_{t}|\hat{H}^{t},X_{t},\hat{A}_{t})\leq\textup{I}(\Theta_{\pi};\hat{H}^{T+1})\leq\textup{H}(\Theta_{\pi}).}$
Finally, bounding the entropy by the cardinality of the net
$\textup{H}(\Theta_{\pi})\leq\log|\mathcal{N}(\mathcal{O},\rho,\varepsilon)|$
completes the proof.
∎
### III-B Bounding the lifted information ratio
The next lemma provides a bound on the lifted information ratio that holds for
settings with a finite number of actions and sub-Gaussian rewards. This result
generalizes [1, Lemma 1] as their proof technique requires the rewards to be
binary. In this specific case, we recover their result with a smaller
constant since binary random variables are $1/4$-sub-Gaussian.222Random variables
in $[0,L]$ are $\frac{L^{2}}{4}$-sub-Gaussian [29, Theorem 1].
###### Lemma 1
Assume the number of actions $|\mathcal{A}|$ is finite. If for all $t\in[T]$,
$h^{t}\in\mathcal{H}^{t}$, and $x\in\mathcal{X}$, the random rewards $R_{t}$
are $\sigma^{2}$-sub-Gaussian under
$\mathbb{P}_{R_{t}|\hat{H}^{t}=h^{t},X_{t}=x}$, then $\Gamma_{t}\leq
2\sigma^{2}|\mathcal{A}|$.
###### Proof:
The proof adapts [20, Proof of Proposition 3] to contextual bandits. The
adaptation considers sub-Gaussian rewards using the Donsker–Varadhan
inequality [30, Theorem 5.2.1] as suggested in [20, Appendix D]. This
adaptation completely differs from the one in [1], which is based on convex
analysis of the relative entropy of distributions with binary supports. The
full proof is in Appendix A. ∎
Next, we consider cases of linear expected rewards. This setting is an
extension of the stochastic linear bandit problem studied in [21, Section 6.5]
to contextual bandit problems. The following lemma provides a bound on the
lifted information ratio for problems in this setting with sub-Gaussian
rewards, thus generalizing [1, Lemma 2], which only considers binary random
rewards. It is useful in cases where the dimension is smaller than the number
of actions, i.e. $d<|\mathcal{A}|$.
###### Lemma 2
Assume the number of actions $|\mathcal{A}|$ is finite, the expectation of the
rewards is $\mathbb{E}[R(x,a,\theta)]=\langle\theta,m(x,a)\rangle$ for some
feature map $m:\mathcal{X}\times\mathcal{A}\to\mathbb{R}^{d}$, and that
$\mathcal{O}\subseteq\mathbb{R}^{d}$. If for all $t\in[T]$,
$h^{t}\in\mathcal{H}^{t}$, and $x\in\mathcal{X}$, the random rewards $R_{t}$
are $\sigma^{2}$-sub-Gaussian under
$\mathbb{P}_{R_{t}|\hat{H}^{t}=h^{t},X_{t}=x}$, then $\Gamma_{t}\leq
2\sigma^{2}d$.
###### Proof:
The proof adapts [20, Proof of Proposition 5] to contextual bandits similarly
to [1, Proof of Lemma 2]. The key difference with the latter is that instead
of binary rewards [1], this considers sub-Gaussian ones using again the
Donsker–Varadhan inequality [30, Theorem 5.2.1] similarly to the proof of
Lemma 1. The full proof is in Appendix A. ∎
## IV Applications
### IV-A Unstructured bounded contextual bandits
The problem of contextual bandits with bounded rewards
$R:\mathcal{X}\times\mathcal{A}\times\mathcal{O}\to[0,1]$ and a finite number
of actions $|\mathcal{A}|$ and of parameters $|\mathcal{O}|$ is well studied.
In [11] and [25], respectively, the authors showed that the algorithms Policy
Elimination and Exp4.P have a regret upper bound in
$O\big{(}\sqrt{|\mathcal{A}|T\log(T|\mathcal{O}|/\delta)}\big{)}$ and in
$O\big{(}\sqrt{|\mathcal{A}|T\log(|\mathcal{O}|/\delta)}\big{)}$ with
probability at least $1-\delta$. Then, it was shown that there exists a
contextual bandit algorithm with a regret upper bound in
$O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$ [14] and that, for all algorithms,
there is a parameters’ space $\mathcal{O}^{\prime}$ with cardinality smaller
than $|\mathcal{O}|$ such that the regret lower bound is in
$\Omega(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|/\log|\mathcal{A}|})$ [13]. This
sparked interest in studying how the regret of TS and related algorithms
compares to these bounds. In [16, Section 5.1], it was shown that the Feel-
Good TS regret has a rate in $O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$ and
recently, in [1, Theorem 3], it was shown that if the reward is binary, the TS
also has a rate in $O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$. Here, as a
corollary of Theorem 1 and Lemma 1, we close the gap on the regret of the TS
algorithm showing that it is in $O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$
for sub-Gaussian rewards, and thus for bounded ones.
###### Corollary 1
Assume that the rewards are bounded in $[0,L]$. Then, for any contextual
bandit problem $\Phi$, the TS cumulative regret after $T$ rounds is bounded as
$\displaystyle\textnormal{REG}^{\textnormal{TS}}_{\Phi}\leq\sqrt{\frac{L^{2}|\mathcal{A}|T\textup{H}(\Theta)}{2}}.$
Note that the above result also holds for $\sigma^{2}$-sub-Gaussian rewards by
replacing $L^{2}/2$ by $2\sigma^{2}$.
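For completeness, the short chain behind Corollary 1, as we read it (assuming $\textup{H}(\Theta)$ is finite, e.g. a finite parameters’ space), is the following: rewards in $[0,L]$ are $L^{2}/4$-sub-Gaussian, so Lemma 1 gives $\Gamma_{t}\leq 2\cdot\frac{L^{2}}{4}|\mathcal{A}|=\frac{L^{2}|\mathcal{A}|}{2}$ a.s. at every round, and $\textup{I}(\Theta;\hat{H}^{T+1})\leq\textup{H}(\Theta)$, so Theorem 1 yields
$\displaystyle\textnormal{REG}^{\textnormal{TS}}_{\Phi}\leq\sqrt{\Gamma T\textup{I}(\Theta;\hat{H}^{T+1})}\leq\sqrt{\frac{L^{2}|\mathcal{A}|T\textup{H}(\Theta)}{2}}.$
With $\textup{H}(\Theta)\leq\log|\mathcal{O}|$ this is the advertised $O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$ rate.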
### IV-B Structured bounded contextual bandits
#### IV-B1 Bandits with Laplace likelihoods
We introduce the setting of contextual bandits with Laplace likelihoods. In
this setting, we model the rewards’ random variable with a Laplace
distribution. More precisely, this setting considers rewards with a likelihood
proportional to $\exp\Big{(}-\frac{|r-f_{\theta}(x,a)|}{\beta}\Big{)}$ for
some $\beta>0$. In addition, this setting assumes that the random variable
$f_{\theta}(X,A)$ is a Lipschitz process with respect to $\theta$ with random
variable $C\coloneqq C(X,A)$. This ensures Assumption 2 with random variable
$\frac{C}{\beta}$ since, by the triangle inequality,
$|r-f_{\theta}(x,a)|-|r-f_{\theta^{\prime}}(x,a)|\leq|f_{\theta}(x,a)-f_{\theta^{\prime}}(x,a)|.$
Theorem 2 and Lemma 1 yield the following corollary, where we further use the
bound on the $\varepsilon$-covering number
$|\mathcal{N}(\mathcal{O},\rho,\epsilon)|\leq\big{(}\frac{3S}{\varepsilon}\big{)}^{d}$
[27, Lemma 5.13] and we let $\varepsilon=\frac{d\beta}{\mathbb{E}[C]T}$.
###### Corollary 2
Assume that $\mathcal{O}\subset\mathbb{R}^{d}$ with
$\textnormal{diam}(\mathcal{O})\leq S$. Consider a contextual bandit problem
$\Phi$ with Laplace likelihood and rewards bounded in $[0,L]$. Then, the TS
cumulative regret after $T$ rounds is bounded as
$\displaystyle\textnormal{REG}^{\textnormal{TS}}_{\Phi}\leq\sqrt{\frac{L^{2}|\mathcal{A}|Td}{2}\bigg{(}1+\log\bigg{(}\frac{3S\mathbb{E}[C]T}{d\beta}\bigg{)}\bigg{)}}.$
In particular, for linear functions
$f_{\theta}(x,a)=\langle\theta,m(x,a)\rangle$ with a bounded feature map, i.e.
$\lVert m(x,a)\rVert\leq B$ for all $x\in\mathcal{X}$ and all
$a\in\mathcal{A}$, we have $C\leq B$ a.s.
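The choice of $\varepsilon$ above can be traced explicitly (a short computation with the quantities already introduced, not an additional result): with Lipschitz random variable $C/\beta$ and the covering bound $(3S/\varepsilon)^{d}$, the bracket in Theorem 2 reads
$\displaystyle\varepsilon\frac{\mathbb{E}[C]}{\beta}T+d\log\frac{3S}{\varepsilon},$
whose derivative in $\varepsilon$ vanishes at $\varepsilon=\frac{d\beta}{\mathbb{E}[C]T}$, where it takes the value $d\big{(}1+\log\frac{3S\mathbb{E}[C]T}{d\beta}\big{)}$. Combined with $\Gamma\leq\frac{L^{2}|\mathcal{A}|}{2}$ from Lemma 1 (rewards in $[0,L]$), this gives the bound displayed in Corollary 2.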
#### IV-B2 Bernoulli bandits with structure
A common setting is that of Bernoulli contextual bandits, where the random
rewards $R_{t}$ are binary and Bernoulli distributed [18, 19]. This is an
attractive setting as binary rewards are usually modeled to measure success in
e-commerce. In this setting, usually $R_{t}\sim\textnormal{Ber}\big{(}g\circ
f_{\Theta}(X_{t},\hat{A}_{t})\big{)}$, where $g$ is a _binomial link function_
and $f$ is a linear function $f_{\theta}(x,a)=\langle\theta,m(x,a)\rangle$ for
some feature map $m$. When the link function is the logistic function
$g(z)=\sigma(z)\coloneqq(1+e^{-z})^{-1}$, $f$ is $C$-Lipschitz (e.g., when it
is a linear function with a bounded feature map), and the parameters’ space is
bounded $\lVert\theta\rVert\leq S$ for all $\theta\in\mathcal{O}$, [1] showed
that the TS cumulative regret rate is in
$O\big{(}\sqrt{|\mathcal{A}|Td\log(SCT)}\big{)}$. This result is founded in
their Theorem 2 and Lemma 1, and the fact that $\log\sigma$ is a $1$-Lipschitz
function. We note that this is also true for other link functions such as the
generalized logistic function
$\sigma_{\alpha}(z)\coloneqq(1+e^{-z})^{-\alpha}$, whose $\log$ is
$\alpha$-Lipschitz for all $\alpha>0$, or the algebraic logistic function
$\sigma_{\textnormal{alg}}(z)\coloneqq\frac{1}{2}(1+\frac{z}{\sqrt{1+z^{2}}})$,
whose $\log$ is $2$-Lipschitz. Moreover, we also note that with an appropriate
choice of $\varepsilon$ as in Corollary 2, these results improve their rate to
$O\big{(}\sqrt{|\mathcal{A}|Td\log(SCT/d)}\big{)}$.
### IV-C Bounded linear contextual bandits
In this section, we focus on the setting of contextual bandits with linear
expected rewards. This setting has been introduced by [10] and further studied
in [12]. In this setting, the rewards are bounded in $[0,1]$ and their
expectation is linear $\mathbb{E}[R(x,a,\theta)]=\langle\theta,m(x,a)\rangle$
with a bounded feature map $m:\mathcal{X}\times\mathcal{A}\to[0,1]$ and a
parameters’ space with $\textnormal{diam}(\mathcal{O})=1$.
In this setting, [12] showed that LinUCB has a regret bound in
$O\big{(}\sqrt{dT\log^{3}(|\mathcal{A}|T\log(T)/\delta)}\big{)}$ with
probability no smaller than $1-\delta$. The following corollary shows that if
one is able to work with a discretized version $\mathcal{O}_{\varepsilon}$ of
$\mathcal{O}$ with precision $\varepsilon$, i.e. $\mathcal{O}_{\varepsilon}$
is an $\varepsilon$-net of $\mathcal{O}$, then TS has a regret bound in
$O\big{(}\sqrt{d^{2}T\log\big{(}\frac{3}{\varepsilon}\big{)}}\big{)}$, which
also follows from the bound on the $\varepsilon$-covering number
$|\mathcal{N}(\mathcal{O},\lVert\cdot\rVert,\varepsilon)|\leq\big{(}\frac{3}{\varepsilon}\big{)}^{d}$
[27, Lemma 5.13]. This bound is especially effective when the dimension $d$ is
small or the number of actions $|\mathcal{A}|$ is large. More precisely, it is
tighter than [12]’s bound when
$d\log(1/\varepsilon)<\log^{3}(|\mathcal{A}|T\log T)$.
###### Corollary 3
Assume that $\mathcal{O}=\\{\theta_{1},\ldots,\theta_{|\mathcal{O}|}\\}$ where
$\theta\in\mathbb{R}^{d}$. Consider a contextual bandit problem $\Phi$ with a
finite number of actions $|\mathcal{A}|$, rewards bounded in $[0,L]$ and such
that the expectation of the rewards is
$\mathbb{E}[R(x,a,\theta)]=\langle\theta,m(x,a)\rangle$ for some feature map
$m:\mathcal{X}\times\mathcal{A}\to\mathbb{R}^{d}$. Then the TS cumulative
regret after $T$ rounds is bounded as
$\displaystyle\textnormal{REG}^{\textnormal{TS}}_{\Phi}\leq\sqrt{\frac{L^{2}dT\log(|\mathcal{O}|)}{2}}.$
###### Proof:
It follows from Theorem 1 and Lemma 2. ∎
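To get a rough feel for the comparison with LinUCB made above, one can evaluate both sides of the condition $d\log(1/\varepsilon)<\log^{3}(|\mathcal{A}|T\log T)$; the values below are illustrative only and do not come from any experiment in this paper or in [12].

```python
import numpy as np

def corollary3_vs_linucb(d, eps, n_actions, T):
    """Left and right sides of d*log(1/eps) < log^3(|A| * T * log(T))."""
    lhs = d * np.log(1.0 / eps)
    rhs = np.log(n_actions * T * np.log(T)) ** 3
    return lhs, rhs, bool(lhs < rhs)

# Moderate dimension, fine discretisation, many actions, long horizon.
print(corollary3_vs_linucb(d=10, eps=1e-3, n_actions=100, T=10**5))
```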
## V Conclusion
In this paper, we showed in Theorem 1 that the TS cumulative regret for
contextual bandit problems is bounded from above by the mutual information
between the environment parameters and the history. Compared to [1, Theorem
1], this highlights that, given an average lifted information ratio, the
regret of TS does not depend on all the uncertainty of the environment
parameters, but only on the uncertainty that can be explained by the history
collected by the algorithm. In Theorem 2, we provided an alternative proof to
[1, Theorem 2] showing that the TS regret is bounded by the “complexity” of
the parameters’ space, where we highlighted that this result holds without the
requirement of the rewards being binary.
In Lemmata 1 and 2, we provided bounds on the lifted information ratio that
hold for contextual bandit problems with sub-Gaussian rewards. This includes
the standard setting where the rewards are bounded [10, 11, 12, 13, 14, 15,
16], and setups where the expected reward is linear but is corrupted by a sub-
Gaussian noise [24], thus extending the results from [1] that worked only with
binary rewards. When no structure of the problem is assumed, the lifted
information ratio bound scales with the number of actions $|\mathcal{A}|$
(Lemma 1), and for problems with linear expected rewards, the bound scales
with the dimension $d$ of the parameters’ space $\mathcal{O}$ (Lemma 2).
Finally, we applied our results to some particular settings such as: bounded
unstructured contextual bandits, for which TS has a regret with rate of
$O(\sqrt{|\mathcal{A}|T\log|\mathcal{O}|})$; bounded structured contextual
bandits including those with Laplace likelihoods and Bernoulli bandits; and
lastly, bounded linear bandits underlining that TS has a regret bound
competing with LinUCB [12].
## References
* [1] G. Neu, J. Olkhovskaya, M. Papini, and L. Schwartz, “Lifting the information ratio: An information-theoretic analysis of thompson sampling for contextual bandits,” _arXiv preprint arXiv:2205.13924_ , 2022.
* [2] J. Langford and T. Zhang, “The epoch-greedy algorithm for multi-armed bandits with side information,” _Advances in neural information processing systems_ , vol. 20, 2007.
* [3] J. Sarkar, “One-armed bandit problems with covariates,” _The Annals of Statistics_ , pp. 1978–2002, 1991.
* [4] M. Woodroofe, “A one-armed bandit problem with a concomitant variable,” _Journal of the American Statistical Association_ , vol. 74, no. 368, pp. 799–806, 1979.
* [5] A. G. Barto and P. Anandan, “Pattern-recognizing stochastic learning automata,” _IEEE Transactions on Systems, Man, and Cybernetics_ , no. 3, pp. 360–375, 1985.
* [6] V. Gullapalli, _Associative reinforcement learning of real-valued functions_. Citeseer, 1990.
* [7] L. P. Kaelbling, “Associative reinforcement learning: A generate and test algorithm,” _Machine Learning_ , vol. 15, pp. 299–319, 1994.
* [8] A. L. Strehl, C. Mesterharm, M. L. Littman, and H. Hirsh, “Experience-efficient learning in associative bandit problems,” in _Proceedings of the 23rd international conference on Machine learning_ , 2006, pp. 889–896.
* [9] D. Bouneffouf, I. Rish, and C. Aggarwal, “Survey on applications of multi-armed and contextual bandits,” in _2020 IEEE Congress on Evolutionary Computation (CEC)_. IEEE, 2020, pp. 1–8.
* [10] N. Abe, A. W. Biermann, and P. M. Long, “Reinforcement learning with immediate rewards and linear hypotheses,” _Algorithmica_ , vol. 37, pp. 263–293, 2003\.
* [11] M. Dudik, D. Hsu, S. Kale, N. Karampatziakis, J. Langford, L. Reyzin, and T. Zhang, “Efficient optimal learning for contextual bandits,” _arXiv preprint arXiv:1106.2369_ , 2011.
* [12] W. Chu, L. Li, L. Reyzin, and R. Schapire, “Contextual bandits with linear payoff functions,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_. JMLR Workshop and Conference Proceedings, 2011, pp. 208–214.
* [13] A. Agarwal, M. Dudík, S. Kale, J. Langford, and R. Schapire, “Contextual bandit learning with predictable rewards,” in _Artificial Intelligence and Statistics_. PMLR, 2012, pp. 19–26.
* [14] D. Foster and A. Rakhlin, “Beyond ucb: Optimal and efficient contextual bandits with regression oracles,” in _International Conference on Machine Learning_. PMLR, 2020, pp. 3199–3210.
* [15] D. J. Foster and A. Krishnamurthy, “Efficient first-order contextual bandits: Prediction, allocation, and triangular discrimination,” _Advances in Neural Information Processing Systems_ , vol. 34, pp. 18 907–18 919, 2021.
* [16] T. Zhang, “Feel-good thompson sampling for contextual bandits and reinforcement learning,” _SIAM Journal on Mathematics of Data Science_ , vol. 4, no. 2, pp. 834–857, 2022.
* [17] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” _Biometrika_ , vol. 25, no. 3-4, pp. 285–294, 1933.
* [18] S. L. Scott, “A modern bayesian look at the multi-armed bandit,” _Applied Stochastic Models in Business and Industry_ , vol. 26, no. 6, pp. 639–658, 2010.
* [19] O. Chapelle and L. Li, “An empirical evaluation of Thompson sampling,” _Advances in neural information processing systems_ , vol. 24, 2011.
* [20] D. Russo and B. Van Roy, “Learning to optimize via information-directed sampling,” _Advances in Neural Information Processing Systems_ , vol. 27, 2014.
* [21] ——, “An information-theoretic analysis of Thompson sampling,” _The Journal of Machine Learning Research_ , vol. 17, no. 1, pp. 2442–2471, 2016.
* [22] S. Dong and B. Van Roy, “An information-theoretic analysis for thompson sampling with many actions,” _Advances in Neural Information Processing Systems_ , vol. 31, 2018.
* [23] A. Gouverneur, B. Rodríguez-Gálvez, T. J. Oechtering, and M. Skoglund, “An information-theoretic analysis of bayesian reinforcement learning,” in _2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2022, pp. 1–7.
* [24] J. W. Mueller, V. Syrgkanis, and M. Taddy, “Low-rank bandit methods for high-dimensional dynamic pricing,” _Advances in Neural Information Processing Systems_ , vol. 32, 2019.
* [25] A. Beygelzimer, J. Langford, L. Li, L. Reyzin, and R. Schapire, “Contextual bandit algorithms with supervised learning guarantees,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_. JMLR Workshop and Conference Proceedings, 2011, pp. 19–26.
* [26] J. Negrea, M. Haghifam, G. K. Dziugaite, A. Khisti, and D. M. Roy, “Information-theoretic generalization bounds for sgld via data-dependent estimates,” _Advances in Neural Information Processing Systems_ , vol. 32, 2019.
* [27] R. van Handel, “Probability in high dimension,” PRINCETON UNIV NJ, Tech. Rep., 2014.
* [28] P. Grünwald, “The safe bayesian: learning the learning rate via the mixability gap,” in _Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings 23_. Springer, 2012, pp. 169–183.
* [29] W. Hoeffding, “Probability inequalities for sums of bounded random variables,” in _The collected works of Wassily Hoeffding_. Springer, 1994, pp. 409–426.
* [30] R. M. Gray, _Entropy and information theory_. Springer Science & Business Media, 2011.
## Appendix A Proofs of lemmata
See 1
###### Proof:
The proof follows the same methodology as [21, Proof of Proposition 3], taking
care of the presence of contexts in the analysis. For the sake of brevity, we
introduce the following notation $R^{\prime}_{t}(a)\coloneqq
R(X_{t},a,\Theta)$ and recall the previously defined notations
$A^{\star}_{t}\coloneqq\psi^{\star}(X_{t},\Theta)$ and
$\hat{A}_{t}\coloneqq\psi^{\star}(X_{t},\hat{\Theta}_{t})$. Then at each round
$t\in[T]$, one can write the expected regret conditioned on
$\hat{H}^{t},X_{t}$ as
$\displaystyle\mathbb{E}_{t}[R^{\star}_{t}-R_{t}]=\sum_{a\in\mathcal{A}}$
$\displaystyle\mathbb{P}_{t}[A^{\star}_{t}=a]\mathbb{E}_{t}[R^{\prime}_{t}(a)|A^{\star}_{t}=a]$
$\displaystyle-\sum_{a\in\mathcal{A}}\mathbb{P}_{t}[\hat{A}_{t}=a]\mathbb{E}_{t}[R^{\prime}_{t}(a)|\hat{A}_{t}=a]\
\textnormal{a.s.}.$
By definition of the TS algorithm
$\mathbb{P}_{t}[A^{\star}_{t}=a]=\mathbb{P}_{t}[\hat{A}_{t}=a]$ a.s..
Observing as well that conditioned on $\hat{H}^{t}$ and $X_{t}$, the reward
$R^{\prime}_{t}(a)$ is independent of the TS action $\hat{A}_{t}$, the
conditional expected regret can be a.s. rewritten as
$\displaystyle\sum_{a\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a]\big{(}\mathbb{E}_{t}[R^{\prime}_{t}(a)|A^{\star}_{t}=a]-\mathbb{E}_{t}[R^{\prime}_{t}(a)]\big{)}.$
(3)
As the rewards are $\sigma^{2}$-sub-Gaussian, the difference of expectations
in this last rewriting can be upper bounded using the Donsker-Varadhan
inequality [30, Theorem 5.2.1] as in [20, Lemma 3]. It then comes that (3) can
be a.s. upper bounded by
$\displaystyle\sum_{a\in\mathcal{A}}\underbrace{\mathbb{P}_{t}[A^{\star}_{t}=a]\sqrt{2\sigma^{2}D_{\textnormal{KL}}(\mathbb{P}_{R^{\prime}_{t}(a)|\hat{H}^{t},X_{t},A^{\star}_{t}=a}\>\|\>\mathbb{P}_{R^{\prime}_{t}(a)|\hat{H}^{t},X_{t}})}}_{\coloneqq
v_{a}}.$ (4)
Using the Cauchy-Schwarz inequality, i.e.
$\sum_{a\in\mathcal{A}}u_{a}v_{a}\leq\sqrt{\sum_{a\in\mathcal{A}}u_{a}^{2}\sum_{a\in\mathcal{A}}v_{a}^{2}},$
with $u_{a}=1$ for all $a\in\mathcal{A}$ and $v_{a}$ defined as above it
follows that (4) is a.s. upper bounded by
$\displaystyle\sqrt{2\sigma^{2}|\mathcal{A}|\sum_{a\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a]^{2}D_{\textnormal{KL}}(\mathbb{P}_{R^{\prime}_{t}(a)|\hat{H}^{t},X_{t},A^{\star}_{t}=a}\>\|\>\mathbb{P}_{R^{\prime}_{t}(a)|\hat{H}^{t},X_{t}})}.$
Adding the non-negative extra terms
$2\sigma^{2}|\mathcal{A}|\sum_{a\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a]\sum_{b\in\mathcal{A}\setminus a}\mathbb{P}_{t}[A^{\star}_{t}=b]D_{\textnormal{KL}}(\mathbb{P}_{R^{\prime}_{t}(b)|\hat{H}^{t},X_{t},A^{\star}_{t}=a}\>\|\>\mathbb{P}_{R^{\prime}_{t}(b)|\hat{H}^{t},X_{t}})$
in the square root gives
$\displaystyle\mathbb{E}_{t}[R^{\star}_{t}-R_{t}]\leq\sqrt{2\sigma^{2}|\mathcal{A}|\textup{I}_{t}(A^{\star}_{t};R_{t}|\hat{A}_{t})}\textnormal{
a.s.},$
using that
$\textup{I}_{t}(A^{\star}_{t};R_{t}|\hat{A}_{t})=\sum_{a,b\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a]\mathbb{P}_{t}[A^{\star}_{t}=b]D_{\textnormal{KL}}(\mathbb{P}_{R^{\prime}_{t}(b)|\hat{H}^{t},X_{t},A^{\star}_{t}=a}\>\|\>\mathbb{P}_{R^{\prime}_{t}(b)|\hat{H}^{t},X_{t}})$
a.s.. Then, as the Markov chain $A^{\star}_{t}-\Theta-R_{t}\ |\
\hat{H}^{t},X_{t},\hat{A}_{t}$ holds, by the data processing inequality
$\textup{I}_{t}(A^{\star}_{t};R_{t}|\hat{A}_{t})\leq\textup{I}_{t}(\Theta;R_{t}|\hat{A}_{t})$
a.s.. Squaring and reordering the terms yields the desired result. ∎
See 2
###### Proof:
This proof follows the techniques from [21, Proof of Proposition 5] taking
care of the presence of contexts similarly to [1, Proof of Lemma 2]. The
difference with the latter is that instead of using Pinsker’s inequality after
noting that the expected value of a Bernoulli random variable is its
probability of success (which restricts the analysis to binary rewards), it uses the
Donsker–Varadhan inequality [30, Theorem 5.2.1] as in the proof of Lemma 1 to
allow sub-Gaussian rewards in the analysis.
Let $\mathcal{A}=\\{a_{1},\ldots,a_{|\mathcal{A}|}\\}$ without loss of
generality and for any round $t\in[T]$, conditioned on the history
$\hat{H}^{t}$ and the context $X_{t}$, we define a random matrix
$M\in\mathbb{R}^{|\mathcal{A}|\times|\mathcal{A}|}$ by specifying the entry
$M_{i,j}$ to be equal to
$\displaystyle\sqrt{\mathbb{P}_{t}[A_{t}^{\star}=a_{i}]\mathbb{P}_{t}[A_{t}^{\star}=a_{j}]}\big{(}\mathbb{E}_{t}[R^{\prime}_{t}(a_{j})|A^{\star}_{t}=a_{i}]-\mathbb{E}_{t}[R^{\prime}_{t}(a_{j})]\big{)}$
for all $i,j\in\big{[}|\mathcal{A}|\big{]}$. Then, the expected regret of the
TS algorithm is equal to the trace of the matrix $M$. Indeed,
$\displaystyle\mathbb{E}_{t}[$ $\displaystyle R^{\star}_{t}-R_{t}]$
$\displaystyle=\sum_{a\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a]\big{(}\mathbb{E}_{t}[R^{\prime}_{t}(a)|A^{\star}_{t}=a]-\mathbb{E}_{t}[R^{\prime}_{t}(a)]\big{)}\textnormal{
a.s.}$ $\displaystyle=\textnormal{Trace}(M)\textnormal{ a.s.}.$
In the same fashion as in [21, Proposition 5], we relate
$\textup{I}_{t}(\Theta;R_{t}|\hat{A}_{t})$ to the squared Frobenius norm of
$M$ as:
$\displaystyle\textup{I}_{t}(\Theta;$ $\displaystyle R_{t}|\hat{A}_{t})$
$\displaystyle\geq\textup{I}_{t}(A^{\star}_{t};R_{t}|\hat{A}_{t})\textnormal{
a.s.}$
$\displaystyle=\sum_{a_{i},a_{j}\in\mathcal{A}}\mathbb{P}_{t}[A^{\star}_{t}=a_{i}]\mathbb{P}_{t}[A^{\star}_{t}=a_{j}]$
$\displaystyle\quad\quad\cdot
D_{\textnormal{KL}}(\mathbb{P}_{R^{\prime}_{t}(a_{j})|\hat{H}^{t},X_{t},A^{\star}_{t}=a_{i}}\>\|\>\mathbb{P}_{R^{\prime}_{t}(a_{j})|\hat{H}^{t},X_{t}})\textnormal{
a.s.}$
$\displaystyle\geq\sum_{a_{i},a_{j}\in\mathcal{A}}\mathbb{P}_{t}(A^{\star}_{t}=a_{i})\mathbb{P}_{t}(A^{\star}_{t}=a_{j})$
$\displaystyle\quad\quad\cdot\frac{1}{2\sigma^{2}}\big{(}\mathbb{E}_{t}[R^{\prime}_{t}(a_{j})|A^{\star}_{t}=a_{i}]-\mathbb{E}_{t}[R^{\prime}_{t}(a_{j})]\big{)}^{2}\textnormal{
a.s.}$ $\displaystyle=\frac{1}{2\sigma^{2}}||M||_{F}^{2}\textnormal{ a.s.},$
where the last inequality is obtained again using the Donsker-Varadhan
inequality [30, Theorem 5.2.1] as in [20, Lemma 3]. Combining the last two
equations and using the inequality
$\textnormal{trace}(M)\leq\sqrt{\textnormal{rank}(M)}||M||_{F}$ [21, Fact 10],
it comes that
$\displaystyle\Gamma_{t}=\frac{\mathbb{E}_{t}[R^{\star}_{t}-R_{t}]^{2}}{\textup{I}_{t}(\Theta;R_{t}|\hat{A}_{t})}\leq
2\sigma^{2}\frac{\textnormal{Trace}(M)^{2}}{||M||_{F}^{2}}\leq
2\sigma^{2}\textnormal{Rank}(M)\textnormal{ a.s.}.$
The proof concludes by showing that the rank of the matrix $M$ is upper bounded
by $d$. For the sake of brevity, we define
$\Theta_{t}\coloneqq\mathbb{E}_{t}[\Theta]$ and
$\Theta_{t,i}\coloneqq\mathbb{E}_{t}[\Theta|A^{\star}_{t}=a_{i}]$ for all
$i\in\big{[}|\mathcal{A}|\big{]}$. We then have
$\mathbb{E}_{t}[\langle\Theta,m(X_{t},a_{j})\rangle]=\langle\Theta_{t},m(X_{t},a_{j})\rangle$
a.s. and
$\mathbb{E}_{t}[\langle\Theta,m(X_{t},a_{j})\rangle|A^{\star}_{t}=a_{i}]=\langle\Theta_{t,i},m(X_{t},a_{j})\rangle$
a.s.. Since the inner product is linear, we can rewrite each entry $M_{i,j}$
of the matrix $M$ as
$\displaystyle\sqrt{\mathbb{P}_{t}(A_{t}^{\star}=a_{i})\mathbb{P}_{t}(A_{t}^{\star}=a_{j})}\langle\Theta_{t,i}-\Theta_{t},m(X_{t},a_{j})\rangle.$
Equivalently, the matrix $M$ can be written as
$\displaystyle\begin{bmatrix}\sqrt{\mathbb{P}_{t}[A_{t}^{\star}=a_{1}]}(\Theta_{t,1}-\Theta_{t})\\\
\vdots\\\
\sqrt{\mathbb{P}_{t}[A_{t}^{\star}=a_{|\mathcal{A}|}]}(\Theta_{t,|\mathcal{A}|}-\Theta_{t})\end{bmatrix}\begin{bmatrix}\sqrt{\mathbb{P}_{t}[A_{t}^{\star}=a_{1}]}m(X_{t},a_{1})\\\
\vdots\\\
\sqrt{\mathbb{P}_{t}[A_{t}^{\star}=a_{|\mathcal{A}|}]}m(X_{t},a_{|\mathcal{A}|})\end{bmatrix}^{\intercal}.$
This rewriting highlights that $M$ can be written as the product of an
$|\mathcal{A}|\times d$ matrix and a $d\times|\mathcal{A}|$ matrix, and
therefore has rank at most $\min(d,|\mathcal{A}|)$. ∎
|
$P\succeq Q$. A simple calculation shows that
(226) $\Delta(P)=\gamma_{\ell}\left(2-\frac{3}{4}\log
3\right)\leq\gamma_{\ell}=\Delta(Q),$
with strict inequality if $\gamma_{\ell}>0$. This would imply that $\Delta$ is
not monotone. Hence, we find that $\gamma_{\ell}=0$ for all
$\ell\in[d-1]\setminus\\{k\\}$. Therefore, if $k\neq d$,
(227) $\Delta(P)=\gamma_{d}\sum_{i\in{\rm
supp}\,p^{(k)}}p_{i}^{(k)}\log\left(\frac{p_{i}^{(k)}}{\widetilde{p_{i}^{(d)}}}\right)=\gamma_{d}D_{1}\left(p^{(k)}\|p^{(d)}\right),$
where we used that ${\rm supp}\,p^{(k)}\subseteq{\rm supp}\,p^{(d)}$ to
identify the sum as the Kullback-Leibler divergence $D_{1}$. Note that the
monotonicity of $\Delta$ follows from the data-processing inequality satisfied
by $D_{1}$. For the case $k=d$, we have $\gamma_{\ell}=0$ for all $\ell\neq
d$, hence $\Delta(P)=0$ for every $P\in S_{\rm d.c.}^{d}$.
∎
### 4.3. Power universal elements
The following Proposition characterizes the power universal elements in
$S_{\rm d.c.}^{d}$.
###### Proposition 16.
Let $U=(u^{(1)},\ldots,u^{(d)})\in S_{\rm d.c.}^{d}$ with all $u^{(k)}$ for
$k\in[d]$, probability distributions. Then the following are equivalent
1. (i)
$U$ is power universal;
2. (ii)
It holds that ${\rm supp}\,u^{(k)}\subsetneq{\rm supp}\,u^{(d)}$ for all
$k\in[d-1]$, and ${\rm supp}\,u^{(k)}\nsubseteq{\rm supp}\,u^{(k^{\prime})}$
for all $k,k^{\prime}\in[d-1]$, $k\neq k^{\prime}$;
3. (iii)
Any nondegenerate homomorphism $\Phi\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+})\cup\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+}^{\rm op})\cup\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{T}\mathbb{R}_{+})$ satisfies $\Phi(U)\neq 1$.
###### Proof.
(iii) $\Rightarrow$ (ii): Note that for a monotone homomorphism
$\Phi:S^{d}_{\rm d.c.}\to\mathbb{R}_{+}^{\rm op}$, $\Phi(U)\neq 1$
implies $\Phi(U)<1$. Since $\Phi_{e_{d},\\{k,d\\}}(U)<1$ for $k\in[d-1]$,
there is a row with a vanishing entry at $k$ and a positive value at $d$.
Similarly, since $\Phi_{e_{k},\\{k,k^{\prime},d\\}}(U)<1$ for
$k,k^{\prime}\in[d-1]$, $k\neq k^{\prime}$, there is a row with a vanishing
entry at $k^{\prime}$ and a positive value at $k$. Hence, the conditions in
(ii) are satisfied.
(ii) $\Rightarrow$ (i): From (ii), it follows that for each $k=2,\ldots,d-1$,
$U$ has a row with positive entries at positions $1$ and $d$, and $0$ at
position $k$. Then $U^{\boxtimes(d-2)}$ will contain the $\boxtimes$-product
of these $d-2$ rows, which is a row with positive entries at positions $1$ and
$d$, and $0$ everywhere else. Similarly, for the other columns
$k^{\prime}=2,\ldots,d-1$, $U^{\boxtimes(d-2)}$ will contain a row with
positive entries at positions $k^{\prime}$ and $d$ and $0$ everywhere else.
Note that $U^{\boxtimes(d-1)}=U^{\boxtimes(d-2)}\boxtimes U$ will also contain
rows of these specific types. From (ii) it also follows that for each
$k=1,\ldots,d-1$, $U$ contains a row with a positive entry at position $d$ and
$0$ at position $k$. Then $U^{\boxtimes(d-1)}$ will contain the
$\boxtimes$-product of these $d-1$ rows, which is a row with a positive entry
at position $d$ and $0$ everywhere else. One sees that it is possible to sum
rows in $U^{\boxtimes(d-1)}$ in such a way that
(228)
$U^{\boxtimes(d-1)}\succeq\left(\begin{array}[]{cccc}a_{1}&\cdots&a_{d-1}&b_{0}\\\
1-a_{1}&\cdots&0&b_{1}\\\ 0&\ddots&\vdots&\vdots\\\
0&\cdots&1-a_{d-1}&b_{d-1}\\\ 0&\cdots&0&1-b\end{array}\right):=\tilde{U},$
where $b:=\sum_{i=0}^{d-1}b_{i}$, and $a_{k},b_{i},b\in(0,1)$ for
$k=1,\ldots,d-1$ and $i=0,\ldots,d-1$. Define the stochastic map (matrix) $T$
mapping probability vectors on $\\{0,1,\ldots,d\\}^{2}$ into probability
vectors on $\\{0,1,\ldots,d\\}$ through
(229) $(Tp)_{m}=p_{m,m}+\sum_{j=0}^{m-1}\big{(}p_{j,m}+p_{m,j}\big{)},\qquad
m=0,\ldots,d,$
for all $p=(p_{j,k})_{j,k=0}^{d}$ where empty sums $\sum_{j=0}^{-1}(\cdots)$
vanish. Also define matrices $U_{\ell}$ through $U_{1}=\tilde{U}$ and
$U_{\ell+1}=T(\tilde{U}\boxtimes U_{\ell})$ for all $\ell=1,2,\ldots$. One
easily sees that $\tilde{U}^{\boxtimes\ell}\succeq U_{\ell}$ and a
straightforward calculation combined with an induction argument shows that,
for all $\ell\in\mathbb{N}$, there are $b^{(\ell)}_{i}$, $i=0,\ldots,d-1$ such
that $b^{(\ell)}_{0}+\cdots+b^{(\ell)}_{d-1}=b^{\ell}$ and
(230)
$U_{\ell}=\left(\begin{array}[]{cccc}a_{1}^{\ell}&\cdots&a_{d-1}^{\ell}&b_{0}^{(\ell)}\\\
1-a_{1}^{\ell}&\cdots&0&b_{1}^{(\ell)}\\\ 0&\ddots&\vdots&\vdots\\\
0&\cdots&1-a_{d-1}^{\ell}&b_{d-1}^{(\ell)}\\\
0&\cdots&0&1-b^{\ell}\end{array}\right).$
Hence, by choosing $\ell$ sufficiently large, we can make $a_{k}^{\ell}$ for
$k=1,\ldots,d-1$ and $b_{i}^{(\ell)}$ for $i=0,\ldots,d-1$ arbitrarily small.
Let $P=\big{(}p^{(1)},\ldots,p^{(d)}\big{)}\in\mathcal{V}_{\rm d.c.}^{d}$ with
columns of unit $1$-norm, where
$p^{(k)}=\big{(}p^{(k)}_{1},\ldots,p^{(k)}_{n}\big{)}$. We may assume WLOG
that $p^{(k)}_{1}>0$ for $k=1,\ldots,d$. Let us denote $p_{1}:=\min_{1\leq
k\leq d-1}p^{(k)}_{1}>0$. Let $t^{(0)}:=(1,0,\ldots,0)$ (with $n$ entries).
Let us denote
(231) $t^{(k)}_{v}:=\frac{1}{1-v}\big{(}p^{(k)}-vt^{(0)}\big{)}$
for $v\in(0,p_{1})$ and $k=1,\ldots,d-1$. For any
$v=(v_{1},\ldots,v_{d-1})\in(0,p_{1})^{d-1}$ and
$w=(w_{0},\ldots,w_{d-1})\in\mathbb{R}_{>0}^{d}$ such that
$w_{0}+\cdots+w_{d-1}<1$, let us also define
(232) $t^{(d)}_{v,w}:=\frac{1}{1-w_{0}-\cdots-
w_{d-1}}\big{(}p^{(d)}-w_{0}t^{(0)}-w_{1}t^{(1)}_{v_{1}}-\cdots-
w_{d-1}t^{(d-1)}_{v_{d-1}}\big{)}.$
A sufficient condition for $t^{(d)}_{v,w}$ to be a probability vector is
(233) $\displaystyle
p^{(d)}_{1}-w_{0}-\sum_{k=1}^{d-1}w_{k}\frac{p^{(k)}_{1}}{1-p_{1}}$
$\displaystyle\geq 0,$ (234) $\displaystyle
p^{(d)}_{i}-\sum_{k=1}^{d-1}w_{k}\frac{p^{(k)}_{i}}{1-p_{1}}$
$\displaystyle\geq 0,\qquad i=2,\ldots,n,$
independent of $v_{1},\ldots,v_{d-1}$. Thus, there is
$w=(w_{0},\ldots,w_{d-1})\in\mathbb{R}_{>0}^{d}$ such that
$t^{(d)}_{v,w^{\prime}}$ is a probability vector for all $v\in(0,p_{1})^{d-1}$
and $w^{\prime}\in\mathbb{R}_{>0}^{d}$ whose entries are upper-bounded by
those of $w$. Suppose then that $\ell\in\mathbb{N}$ is large enough so that
$a_{k}^{\ell}<p_{1}$ for $k=1,\ldots,d-1$ and $b^{\ell}<w_{i}$ (so that
$b^{(\ell)}_{i}\leq b^{\ell}<w_{i}$) for $i=0,\ldots,d-1$ where $a_{k}$,
$b_{i}$, and $b^{(\ell)}_{i}$ are as in (228) and (230). Setting
$v:=\big{(}a_{1}^{\ell},\ldots,a_{d-1}^{\ell}\big{)}$ and
$w^{\prime}:=\big{(}b_{0}^{(\ell)},\ldots,b_{d-1}^{(\ell)}\big{)}$ and
$t^{(k)}:=t^{(k)}_{v}$ for $k=1,\ldots,d-1$ and
$t^{(d)}:=t^{(d)}_{v,w^{\prime}}$, we may set up the stochastic matrix
$T=\big{(}t^{(0)},\ldots,t^{(d)}\big{)}$. It follows that we may write
$TU_{\ell}=P$, i.e.,
$U^{\boxtimes\ell(d-1)}\succeq\tilde{U}^{\boxtimes\ell}\succeq U_{\ell}\succeq
P$.
We now have shown that, for all $P\in\mathcal{V}^{d}_{\rm d.c.}$ with columns
of unit $1$-norm, there is $\ell\in\mathbb{N}$ such that
$U^{\boxtimes\ell}\succeq P$. Let us assume next that $P,Q\in\mathcal{V}_{\rm
d.c.}^{d}$ are such that $P\succeq Q$. WLOG we may assume that the columns of
$P$ and $Q$ have unit 1-norm. Thus, there is $\ell\in\mathbb{N}$ such that
(235) $Q\boxtimes U^{\boxtimes\ell}\succeq(1\,\cdots\,1)\boxtimes
U^{\boxtimes\ell}=U^{\boxtimes\ell}\succeq P,$
showing that $U$ is a power universal.
(i) $\Rightarrow$ (iii): Let $P\in\mathcal{V}^{d}_{\rm d.c.}$ be such that all
the columns of $P$ have unit 1-norm and $\Phi(P)\neq 1$ for all the
nondegenerate monotone homomorphisms of $S^{d}_{\rm d.c.}$ into
$\mathbb{K}\in\\{\mathbb{R}_{+},\mathbb{R}_{+}^{\rm
op},\mathbb{T}\mathbb{R}_{+},\mathbb{T}\mathbb{R}_{+}^{\rm op}\\}$. We may
choose, e.g.,
(236) $P=\left(\begin{array}[]{cccc}1/2&\cdots&1/2&1/d\\\ 1/2&\cdots&0&1/d\\\
\vdots&\ddots&\vdots&\vdots\\\ 0&\cdots&1/2&1/d\end{array}\right)$
for which $\Phi(P)>1$ when $\Phi\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+})\cup\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{T}\mathbb{R}_{+})$ and $\Phi(P)<1$ otherwise. According to (i),
there is $\ell\in\mathbb{N}$ such that $U^{\boxtimes\ell}\succeq P$, so that
$\Phi(U)^{\ell}\geq\Phi(P)>1$ for $\Phi\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+})\cup\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{T}\mathbb{R}_{+})$ and $\Phi(U)^{\ell}\leq\Phi(P)<1$ otherwise.
All in all, $\Phi(U)\neq 1$ for all the monotone homomorphisms.
∎
###### Remark 4.
Note that for the implication (iii) $\Rightarrow$ (ii) above, we do not need
all non-degenerate monotone homomorphisms to be unequal to $1$. It is
sufficient that this holds for the homomorphisms $\Phi_{e_{d},\\{k,d\\}}$ for
$k\in[d-1]$, and $\Phi_{e_{k},\\{k,k^{\prime},d\\}}$ for
$k,k^{\prime}\in[d-1]$, $k\neq k^{\prime}$.
###### Remark 5.
Note that, according to item (ii) of Proposition 16 and item (iii) of
Proposition 9, $U=\big{(}u^{(1)},\ldots,u^{(d)}\big{)}$ is a power universal
of $S^{d}_{\rm d.c.}$ if and only if $\big{(}u^{(1)},\ldots,u^{(d-1)}\big{)}$
is a power universal of $S^{d-1}$ (n.b., not of $S^{d-1}_{\rm d.c.}$) and
${\rm supp}\,u^{(k)}\subsetneq{\rm supp}\,u^{(d)}$ for $k=1,\ldots,d-1$.
For $d=2$, $U=\big{(}u^{(1)},u^{(2)}\big{)}$ is a power universal of
$S^{d}_{\rm d.c.}$ if and only if ${\rm supp}\,u^{(1)}\subsetneq{\rm
supp}\,u^{(2)}$. For $d=3$, the general form of a power universal, up to the
$\approx$-equivalence, is
(237) $U=\left(\begin{array}[]{ccc}*&*&*\\\ &0&*\\\ 0&*&*\end{array}\right)$
where the asterisks represent positive entries (or vectors with positive
entries).
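Condition (ii) of Proposition 16 is purely a statement about the zero patterns of the columns of $U$, so it can be checked mechanically. The following Python sketch (ours, for illustration only; the array below encodes the $d=3$ support pattern of (237)) implements that check:

```python
import numpy as np

def is_power_universal_support_pattern(U, tol=0.0):
    """Check condition (ii) of Proposition 16 for U = (u^(1), ..., u^(d)),
    stored as the columns of an n x d array: supp u^(k) strictly contained in
    supp u^(d) for k < d, and supp u^(k) not contained in supp u^(k') for
    distinct k, k' < d."""
    n, d = U.shape
    supp = [set(np.flatnonzero(U[:, k] > tol)) for k in range(d)]
    for k in range(d - 1):
        if not (supp[k] < supp[d - 1]):          # strict inclusion in supp u^(d)
            return False
    for k in range(d - 1):
        for kp in range(d - 1):
            if k != kp and supp[k] <= supp[kp]:  # no inclusion among the first d-1 columns
                return False
    return True

# The d = 3 pattern of (237): positive entries marked 1.
U = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
print(is_power_universal_support_pattern(U))  # True
```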
### 4.4. Results: applying the Vergleichsstellensatz
Applying the Vergleichsstellensatz in the form of Theorem 1, we can use the
results from the previous sections to find the conditions for large sample and
catalytic majorization in the semiring $S^{d}_{\rm d.c.}$, as given in the
following Theorem. Note that just like the analogous result for the semiring
$S^{d}$, the strict inequalities in (238) and (239) already imply that $P$ is
power universal by Proposition 16, hence we do not need to require this
explicitly in the conditions of the Theorem.
The conditions on the monotones and derivations for asymptotic and catalytic
majorization of input $P\in\mathcal{V}^{d}$ and output $Q\in\mathcal{V}^{d}$
can be expressed as
(238)
$\displaystyle\Phi_{\underline{\alpha},C}(P)<\Phi_{\underline{\alpha},C}(Q)$
$\displaystyle\forall\;\Phi_{\underline{\alpha},C}\in\mathcal{H}^{\rm
n.d.}(S^{d}_{\rm d.c.},\mathbb{R}_{+}^{\rm op}),$ (239)
$\displaystyle\Phi_{\alpha,c}(P)>\Phi_{\alpha,c}(Q)$
$\displaystyle\forall\;\Phi_{\alpha,c}\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+}),$ (240) $\displaystyle
D_{1}(p^{(k)}\|p^{(d)})>D_{1}(q^{(k)}\|q^{(d)}),\,D_{\infty}(p^{(k)}\|p^{(d)})>D_{\infty}(q^{(k)}\|q^{(d)})$
$\displaystyle\forall k=1,\ldots,d-1.$
Recall that $D_{1}$ is the Kullback-Leibler divergence and $D_{\infty}$ the
max-divergence between two columns. The homomorphisms
$\Phi_{\underline{\alpha},C}\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+}^{\rm op})$ are exactly as in (163), where we again
exclude all degenerate homomorphisms $\Phi^{(k)}$, $k\in[d]$ (i.e. those with
either $\underline{\alpha}=e_{k}$ and $C=\\{k\\}$ for some $k\in[d]$, or
$\underline{\alpha}=e_{k}$ and $C=\\{k,d\\}$ for some $k\in[d-1]$). The
homomorphisms $\Phi_{\alpha,c}\in\mathcal{H}^{\rm n.d.}(S^{d}_{\rm
d.c.},\mathbb{R}_{+})$ were defined in (181) and can be conveniently written
as
(241)
$\Phi_{\alpha,c}(P)=\sum_{i=1}^{n}\big{(}p^{(c)}_{i}\big{)}^{\alpha}\big{(}p^{(d)}_{i}\big{)}^{1-\alpha},$
where $c\in[d-1]$ and $\alpha>1$.
###### Theorem 17.
Let $P=(p^{(1)},\ldots,p^{(d)})$, $Q=(q^{(1)},\ldots,q^{(d)})$ be tuples of
probability vectors such that $\bigcap_{k=1}^{d}{\rm
supp}\,p^{(k)}\neq\emptyset$ and ${\rm supp}\,p^{(k)}\subseteq{\rm
supp}\,p^{(d)}$ for $k\in[d-1]$, and similarly for $Q$. If the inequalities in
(238), (239) and (240) hold, then we have the following:
1. (a)
there exist a stochastic map $T$ and a tuple of probability vectors
$(r^{(1)},\ldots,r^{(d)})$, satisfying $\bigcap_{k=1}^{d}{\rm
supp}\,r^{(k)}\neq\emptyset$ and ${\rm supp}\,r^{(k)}\subseteq{\rm
supp}\,r^{(d)}$ for $k\in[d-1]$, such that
(242) $T(r^{(k)}\otimes p^{(k)})=r^{(k)}\otimes q^{(k)}\quad\text{for
}k\in[d],$
and we may choose, for $n$ sufficiently large:
(243)
$r^{(k)}=\frac{1}{n+1}\bigoplus_{\ell=0}^{n}\left(q^{(k)}\right)^{\otimes\ell}\otimes\left(p^{(k)}\right)^{\otimes(n-\ell)}\quad\text{
for }k\in[d],$
2. (b)
for $n$ sufficiently large there exists a stochastic map $T_{n}$ such that
(244) $T_{n}(p^{(k)})^{\otimes n}=(q^{(k)})^{\otimes n}\quad\text{for
}k\in[d].$
Conversely, if either one of these holds, then the inequalities (238), (239)
and (240) hold non-strictly.
###### Remark 6.
We may derive an asymptotic version of this result, analogous to Theorem 11.
However, we would have to require the input $P$ to be power universal, which
according to the characterization in (ii) of Proposition 16 is quite
restrictive. Nevertheless, for the case $d=2$, the situation is manageable,
since there are essentially three interesting cases: either both columns have
the same support and are different, or both columns have part of the support
outside of the support of the other column, or one column has support strictly
contained in the other. Majorization results (both for the exact and
asymptotic setting) for the first case were derived in [7], and the second
case is covered by the results in Section 3.5. The third case is covered by
Theorem 17 (for the exact setting) above and by Theorem 19 (for the asymptotic
setting) in the next section, where we will derive an analogue of Theorem 11
for the case $d=2$.
Note that for $d>2$ there are many possibilities for the relationships of the
supports of the columns, and each would require a different semiring to be
studied. We will discuss this issue further in our concluding remarks.
### 4.5. The two-column case: dichotomies
Here we specify to the case $d=2$. In this context, any pair
$\big{(}p^{(1)},p^{(2)}\big{)}$, where $p^{(1)}$ and $p^{(2)}$ are probability
vectors (i.e., $\big{\|}p^{(1)}\big{\|}_{1}=1=\big{\|}p^{(2)}\big{\|}_{1}$)
and satisfy ${\rm supp}\,p^{(1)}\subseteq{\rm supp}\,p^{(2)}$, is called a
_dichotomy_. This special case will turn out to be particularly interesting in
the theory of thermal state transformations in the next section. We derive
sufficient conditions for exact large-sample and catalytic majorization in
this setting and sufficient and necessary conditions for approximate large-
sample and catalytic majorization with asymptotically vanishing error in
Corollary 18 and Theorem 19. We describe the conditions in the form of
inequalities involving the traditional Rényi divergences. We also complete the
picture of asymptotic majorization in large samples and catalytically in
Corollary 20 to obtain a full characterization without having to make any
support restrictions.
For all dichotomies $\big{(}p^{(1)},p^{(2)}\big{)}$, we have
(245)
$\frac{1}{\alpha-1}\log{\Phi_{(\alpha,1-\alpha),\\{1,2\\}}\big{(}p^{(1)}\|p^{(2)}\big{)}}=D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)},$
when $\alpha\in[0,1)$, corresponding to the homomorphisms in $\mathcal{H}^{\rm
n.d.}(S^{d}_{\rm d.c.},\mathbb{R}_{+}^{\rm op})$, and
(246)
$\frac{1}{\alpha-1}\log{\Phi_{\alpha,1}\big{(}p^{(1)}\|p^{(2)}\big{)}}=D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)},$
when $\alpha\in(0,\infty)$, corresponding to the homomorphisms in
$\mathcal{H}^{\rm n.d.}(S^{d}_{\rm d.c.},\mathbb{R}_{+})$. We have the two
degenerate homomorphisms
$\Phi^{(1)}=\Phi_{(1,0),\\{1\\}}=\Phi_{(1,0),\\{1,2\\}}$ and
$\Phi^{(2)}=\Phi_{(0,1),\\{2\\}}$. Up to a positive multiplier (and
interchangeability), we have only one monotone derivation at $\Phi^{(1)}$,
which is simply the Kullback-Leibler divergence, and none at $\Phi^{(2)}$. We
also have essentially only one nondegenerate tropical homomorphism
$\Phi^{\mathbb{T}}_{1}$ and
(247)
$\log{\Phi^{\mathbb{T}}_{1}\big{(}p^{(1)},p^{(2)}\big{)}}=D_{\infty}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}$
for all dichotomies $\big{(}p^{(1)},p^{(2)}\big{)}$. Thus, we may unify the
nondegenerate monotone homomorphisms and derivations of $S^{2}_{\rm d.c.}$
within the family $D_{\alpha}$, $0\leq\alpha\leq\infty$, of Rényi divergences;
see Figure 4 for a depiction. The monotonicity of the homomorphisms and
derivation is now encapsulated by the data processing inequality
(248) $D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}Tp^{(1)}\big{\|}Tp^{(2)}\big{)},\qquad\forall\alpha\in[0,\infty],$
for all dichotomies $\big{(}p^{(1)},p^{(2)}\big{)}$ and stochastic matrices
$T$.
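To make the family $D_{\alpha}$, $0\leq\alpha\leq\infty$, and the data-processing inequality (248) concrete, here is a small illustrative Python sketch (ours, not part of the paper; the dichotomy and the stochastic matrix below are arbitrary examples):

```python
import numpy as np

def renyi(p, q, alpha):
    """Classical Rényi divergence D_alpha(p || q) for a dichotomy (supp p contained in supp q)."""
    s = p > 0
    if alpha == 1:                      # Kullback-Leibler divergence D_1
        return float(np.sum(p[s] * np.log(p[s] / q[s])))
    if alpha == np.inf:                 # max-divergence D_inf
        return float(np.log(np.max(p[s] / q[s])))
    if alpha == 0:                      # D_0 = -log q(supp p)
        return float(-np.log(q[s].sum()))
    return float(np.log(np.sum(p[s] ** alpha * q[s] ** (1 - alpha))) / (alpha - 1))

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2, 0.0])           # supp p strictly inside supp q: a dichotomy
q = np.array([0.25, 0.25, 0.25, 0.25])
T = rng.random((3, 4)); T /= T.sum(axis=0)   # a random column-stochastic matrix

for a in [0, 0.5, 1, 2, np.inf]:
    # data-processing inequality (248): D_alpha(p||q) >= D_alpha(Tp||Tq)
    assert renyi(p, q, a) >= renyi(T @ p, T @ q, a) - 1e-12
    print(a, renyi(p, q, a))
```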
We reformulate Theorem 17 specified to the case of dichotomies and in terms of
inequalities of Rényi divergences.
###### Corollary 18.
Let $\big{(}p^{(1)},p^{(2)}\big{)}$ and $\big{(}q^{(1)},q^{(2)}\big{)}$ be
dichotomies. If
(249)
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}>D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)},\qquad\forall\alpha\in[0,\infty],$
then we have the following:
* (a)
There is a stochastic matrix $T$ and a dichotomy
$\big{(}r^{(1)},r^{(2)}\big{)}$ such that
(250) $T\big{(}r^{(k)}\otimes p^{(k)}\big{)}=r^{(k)}\otimes q^{(k)},\qquad k=1,2.$
There is $n\in\mathbb{N}$ such that we may choose
(251)
$r^{(k)}=\frac{1}{n+1}\bigoplus_{\ell=0}^{n}\big{(}q^{(k)}\big{)}^{\otimes\ell}\otimes\big{(}p^{(k)}\big{)}^{\otimes(n-\ell)},\quad
k=1,2.$
* (b)
For any $n\in\mathbb{N}$ sufficiently large, there is a stochastic matrix
$T_{n}$ such that
(252) $T_{n}\big{(}p^{(k)}\big{)}^{\otimes n}=\big{(}q^{(k)}\big{)}^{\otimes
n},\qquad k=1,2.$
Conversely, if item (a) or (b) above holds, then (249) holds with non-strict
inequalities.
Figure 4. The monotone homomorphisms and derivations depicted for $S^{2}_{\rm d.c.}$: the line is the compactification of the half line $[0,\infty]$. Nondegenerate temperate homomorphisms (in green) lie in the region $\alpha\in[0,1)\cup(1,\infty)$; they correspond to the Rényi divergences $D_{\alpha}$ and the homomorphisms $\Phi_{(\alpha,1-\alpha),\\{1,2\\}}$ (with $\Phi_{e_{2},\\{1,2\\}}$ at $\alpha=0$). At the points $\alpha=0$ and $\alpha=1$ we also have the degenerate homomorphisms $\Phi^{(2)}$ and $\Phi^{(1)}$, the latter of which is associated with the non-zero derivation $D_{1}$, the Kullback-Leibler divergence. At infinity (in red) we have the tropical homomorphism $\Phi^{\mathbb{T}}_{1}$ corresponding to the max-divergence $D_{\infty}$.
We may also derive an asymptotic version of the above result using largely the
same methods as in the proof of Theorem 11. However, since the second
probability vector dominates the first one in a dichotomy, we may remove the
vanishing error completely on the second probability vector as stated at the
end of the following result.
###### Theorem 19.
Consider dichotomies $\big{(}p^{(1)},p^{(2)}\big{)}$ and
$\big{(}q^{(1)},q^{(2)}\big{)}$ and assume ${\rm supp}\,p^{(1)}\subsetneq{\rm
supp}\,p^{(2)}$. The following conditions are equivalent:
* (i)
For all $\alpha\in(0,\infty]$,
(253) $D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}.$
* (ii)
For all $\varepsilon>0$, there is a stochastic matrix $T_{\varepsilon}$ and
dichotomies $\big{(}q_{\varepsilon}^{(1)},q_{\varepsilon}^{(2)}\big{)}$ and
$\big{(}r_{\varepsilon}^{(1)},r_{\varepsilon}^{(2)}\big{)}$ such that
(254) $\big{\|}q^{(k)}-q^{(k)}_{\varepsilon}\big{\|}_{1}\leq\varepsilon,\qquad
k=1,2,$
and
(255) $T_{\varepsilon}\big{(}r_{\varepsilon}^{(k)}\otimes
p^{(k)}\big{)}=r_{\varepsilon}^{(k)}\otimes q_{\varepsilon}^{(k)},\qquad
k=1,2.$
Again, we may choose $r_{\varepsilon}^{(k)}$ as in (251) (with
$q_{\varepsilon}^{(k)}$ substituted for $q^{(k)}$) for sufficiently large
$n\in\mathbb{N}$.
* (iii)
For all $\varepsilon>0$, there is a dichotomy
$\big{(}q_{\varepsilon}^{(1)},q_{\varepsilon}^{(2)}\big{)}$ such that
(256) $\big{\|}q^{(k)}-q_{\varepsilon}^{(k)}\big{\|}_{1}\leq\varepsilon,\qquad
k=1,2,$
and for $n\in\mathbb{N}$ sufficiently large there is a stochastic matrix
$T_{\varepsilon,n}$ such that
(257) $T_{\varepsilon,n}\big{(}p^{(k)}\big{)}^{\otimes
n}=\big{(}q_{\varepsilon}^{(k)}\big{)}^{\otimes n},\qquad k=1,2.$
In statements (ii) and (iii) above, we may assume that
$q_{\varepsilon}^{(2)}=q^{(2)}$ for all $\varepsilon>0$.
###### Proof.
Let us assume condition (i). Let us choose $w\in\mathcal{P}_{n}$ which
dominates $q^{(2)}$ (and, hence, automatically $q^{(1)}$). Naturally, we may
choose $w=q^{(2)}$; this choice yields the final claim of the statement of the
theorem, as one clearly subsequently sees. Let us denote, for brevity,
$\big{(}p^{(1)},p^{(2)}\big{)}=:P$, $\big{(}q^{(1)},q^{(2)}\big{)}=:Q$, and
$W:=(w,w)$ and define, for all $\varepsilon\in(0,2]$,
(258)
$q^{(k)}_{\varepsilon}:=\left(1-\frac{\varepsilon}{2}\right)q^{(k)}+\frac{\varepsilon}{2}w,\qquad
k=1,2,$
and
(259)
$Q_{\varepsilon}:=\big{(}q^{(1)}_{\varepsilon},q^{(2)}_{\varepsilon}\big{)}=\left(1-\frac{\varepsilon}{2}\right)Q+\frac{\varepsilon}{2}W.$
Clearly, $Q_{\varepsilon}$ is a dichotomy with mutually supporting columns for
all $\varepsilon\in(0,2]$. Moreover,
$\big{\|}q^{(k)}-q^{(k)}_{\varepsilon}\big{\|}_{1}\leq\varepsilon$, as one
easily verifies. We may show that
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}>D_{\alpha}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}$
for all $\alpha\in[0,1)$ in the same way as in the proof of Theorem 11.
Using the joint convexity of the Kullback-Leibler divergence, we have, for all
$\varepsilon\in(0,2]$,
(260)
$D_{1}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}\leq\left(1-\frac{\varepsilon}{2}\right)D_{1}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}+\frac{\varepsilon}{2}\underbrace{D_{1}(w\|w)}_{=0}\leq
D_{1}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}\leq
D_{1}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}.$
If $D_{1}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}=0$, then
$D_{1}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}=0<D_{1}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}$
since $p^{(1)}\neq p^{(2)}$. If, on the other hand,
$D_{1}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}>0$, then the penultimate inequality
in the above calculation is strict. Thus, we have
(261)
$D_{1}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}<D_{1}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)},\qquad\forall\varepsilon\in(0,2].$
Let now $\alpha\in(1,\infty)$ and denote
$\Phi_{\alpha}:=\Phi_{(\alpha,1-\alpha),\\{1,2\\}}$. Similarly as in the proof
of Theorem 11, we may evaluate, for all $\varepsilon\in(0,2]$,
(262) $\displaystyle
D_{\alpha}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}$
$\displaystyle\leq\frac{1}{\alpha-1}\log{\bigg{(}\Phi_{\alpha}(Q)-\frac{\varepsilon}{2}\underbrace{\big{(}\Phi_{\alpha}(Q)-1\big{)}}_{\geq
0}\bigg{)}}$ (263)
$\displaystyle\leq\frac{1}{\alpha-1}\log{\Phi_{\alpha}(Q)}=D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}\leq
D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}.$
If $\Phi_{\alpha}(Q)=1$, then $\Phi_{\alpha}(Q)=1<\Phi_{\alpha}(P)$ and the
final inequality above is strict. If $\Phi_{\alpha}(Q)>1$, then the
penultimate inequality above is strict. Thus,
(264)
$D_{\alpha}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}<D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)},\qquad\forall\varepsilon\in(0,2].$
Let us then look at the tropical homomorphism $\Phi^{\mathbb{T}}_{1}$. Using
monotonicity, we have, for all $\varepsilon\in(0,2]$,
(265) $\displaystyle\Phi^{\mathbb{T}}_{1}(Q_{\varepsilon})$
$\displaystyle=\Phi^{\mathbb{T}}_{1}\left(\left(1-\frac{\varepsilon}{2}\right)Q+\frac{\varepsilon}{2}W\right)\leq\Phi^{\mathbb{T}}_{1}\left(\left(1-\frac{\varepsilon}{2}\right)Q\boxplus\frac{\varepsilon}{2}W\right)$
(266)
$\displaystyle=\max\big{\\{}\Phi^{\mathbb{T}}_{1}(Q),1\big{\\}}=\Phi^{\mathbb{T}}_{1}(Q)\leq\Phi^{\mathbb{T}}_{1}(P).$
Especially,
$\Phi^{\mathbb{T}}_{1}(Q_{\varepsilon})\leq\Phi^{\mathbb{T}}_{1}(Q)$, i.e.,
(267) $\max_{1\leq i\leq
n}\frac{\left(1-\frac{\varepsilon}{2}\right)q^{(1)}_{i}+\frac{\varepsilon}{2}w_{i}}{\left(1-\frac{\varepsilon}{2}\right)q^{(2)}_{i}+\frac{\varepsilon}{2}w_{i}}\leq\max_{1\leq
i\leq n}\frac{q^{(1)}_{i}}{q^{(2)}_{i}}$
for all $\varepsilon\in(0,2]$. When $0<\varepsilon\leq\varepsilon^{\prime}\leq
2$, then $Q_{\varepsilon}\succeq Q_{\varepsilon^{\prime}}$, implying that the
LHS above is non-increasing in $\varepsilon$ and coincides with the RHS when
$\varepsilon\to 0$. Thus, if we have equality with some
$\varepsilon_{0}\in(0,2]$, then we have equality for all
$\varepsilon\in[0,\varepsilon_{0}]$. Let us assume this. Since the LHS is a
pointwise maximum of a finite set of real-analytic functions, the maximum is
reached with the same index $i$ for all $\varepsilon\in[0,\varepsilon_{0}]$.
The real-analyticity of the LHS now implies that the LHS is constant for all
$\varepsilon\in[0,2]$, i.e.,
$\Phi^{\mathbb{T}}_{1}(Q)=1=\Phi^{\mathbb{T}}_{1}(W)$. Thus, if
$\Phi^{\mathbb{T}}_{1}(Q_{\varepsilon})=\Phi^{\mathbb{T}}_{1}(Q)$ for some
$\varepsilon\in(0,2]$, this is the case for all $\varepsilon\in[0,2]$ and
$\Phi^{\mathbb{T}}_{1}(Q)=1<\Phi^{\mathbb{T}}_{1}(P)$. On the other hand, if
$\Phi^{\mathbb{T}}_{1}(Q)>1$ then
$\Phi^{\mathbb{T}}_{1}(Q_{\varepsilon})<\Phi^{\mathbb{T}}_{1}(Q)$, so that
$\Phi^{\mathbb{T}}_{1}(Q_{\varepsilon})<\Phi^{\mathbb{T}}_{1}(Q)\leq\Phi^{\mathbb{T}}_{1}(P)$.
All in all,
(268)
$D_{\infty}\big{(}q^{(1)}_{\varepsilon}\big{\|}q^{(2)}_{\varepsilon}\big{)}<D_{\infty}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)},\qquad\forall\varepsilon\in(0,2].$
Thus, the conditions of Corollary 18 are satisfied for $Q_{\varepsilon}$ and
$P$, and conditions (ii) and (iii) of the claim follow.
Let us then assume that (ii) or (iii) holds. The proof that
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}$ for $\alpha\in(0,1)$ follows
from the same argument as in the end of the proof of Theorem 11. To show the
inequality for $\alpha\in(1,\infty)$, we use almost the same argument:
consider any $D_{\alpha}$ with $\alpha\in(1,\infty)$, which corresponds to the
monotone homomorphism $\Phi_{\alpha,1}$. Then we use similar notation and
reasoning as in the proof of Theorem 11, except that $\Phi=\Phi_{\alpha,1}$
and the direction of the inequalities is reversed, and we note that
(269)
$\Phi_{\alpha,1}(Q_{\varepsilon}^{1}\boxplus(1\,\cdots\,1))\geq(1+\varepsilon)^{1-\alpha},$
where we used monotonicity of $\Phi_{\alpha,1}$ and that the sum of each
column in $Q_{\varepsilon}^{1}\boxplus(1\,\cdots\,1)$ is between $1$ and
$1+\varepsilon$. Finally, the inequalities
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}$ for $\alpha\in\\{1,\infty\\}$
follow from the same inequalities for $\alpha\in(0,1)\cup(1,\infty)$ and the
continuity of $D_{\alpha}$ in $\alpha$.
∎
Combining the above result with Theorem 22 in [7] gives immediately the
following corollary.
###### Corollary 20.
Denote by $\mathcal{P}_{n}$ the set of probability vectors with $n$ components
and consider probability vectors $p^{(2)},q^{(2)}\in\mathcal{P}_{n}$ of full
support and some additional $p^{(1)},q^{(1)}\in\mathcal{P}_{n}$. Also assume
that $p^{(1)}\neq p^{(2)}$. The following conditions are equivalent:
* (i)
For all $\alpha\geq 1/2$,
(270) $D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)},\qquad
D_{\alpha}\big{(}p^{(2)}\big{\|}p^{(1)}\big{)}\geq
D_{\alpha}\big{(}q^{(2)}\big{\|}q^{(1)}\big{)}.$
* (ii)
For all $\varepsilon>0$, there is a stochastic matrix $T_{\varepsilon}$ and
dichotomies $\big{(}q_{\varepsilon}^{(1)},q_{\varepsilon}^{(2)}\big{)}$ and
$\big{(}r_{\varepsilon}^{(1)},r_{\varepsilon}^{(2)}\big{)}$ such that
(271) $\big{\|}q^{(k)}-q^{(k)}_{\varepsilon}\big{\|}_{1}\leq\varepsilon,\qquad
k=1,2,$
and
(272) $T_{\varepsilon}\big{(}r_{\varepsilon}^{(k)}\otimes
p^{(k)}\big{)}=r_{\varepsilon}^{(k)}\otimes q_{\varepsilon}^{(k)},\qquad
k=1,2.$
Again, we may choose $r_{\varepsilon}^{(k)}$ as in (251) (with
$q_{\varepsilon}^{(k)}$ substituted for $q^{(k)}$) for sufficiently large
$n\in\mathbb{N}$.
* (iii)
For all $\varepsilon>0$, there is a dichotomy
$\big{(}q_{\varepsilon}^{(1)},q_{\varepsilon}^{(2)}\big{)}$ with probability
vectors in $\mathcal{P}_{n}$ such that
(273) $\big{\|}q^{(k)}-q_{\varepsilon}^{(k)}\big{\|}_{1}\leq\varepsilon,\qquad
k=1,2,$
and for $n\in\mathbb{N}$ sufficiently large there is a stochastic matrix
$T_{\varepsilon,n}$ such that
(274) $T_{\varepsilon,n}\big{(}p^{(k)}\big{)}^{\otimes
n}=\big{(}q_{\varepsilon}^{(k)}\big{)}^{\otimes n},\qquad k=1,2.$
In statements (ii) and (iii), we may assume that $q_{\varepsilon}^{(2)}=q^{(2)}$
for all $\varepsilon>0$.
###### Proof.
Let us first assume that ${\rm supp}\,p^{(1)}={\rm
supp}\,p^{(2)}=\\{1,\ldots,n\\}$ and ${\rm supp}\,q^{(1)}={\rm
supp}\,q^{(2)}=\\{1,\ldots,n\\}$. In this case, the desired result is given by
Theorem 22 in [7]. Strictly speaking, only the equivalence of statements (i)
and (ii) is shown there, but adding (iii) can be done in the same way that we
have already employed earlier.
Let us now assume that ${\rm supp}\,p^{(1)}\subsetneq{\rm supp}\,p^{(2)}$ or
${\rm supp}\,q^{(1)}\subsetneq{\rm supp}\,q^{(2)}$. Let us first show that the
latter case is out of the question if the first one does not hold. So, assume
that ${\rm supp}\,p^{(1)}={\rm supp}\,p^{(2)}$ while ${\rm
supp}\,q^{(1)}\subsetneq{\rm supp}\,q^{(2)}$. If we assume (i), we obtain from
the second set of inequalities that
(275) $1=\sum_{i\in{\rm supp}\,p^{(1)}}p^{(2)}_{i}\leq\sum_{i\in{\rm
supp}\,q^{(1)}}q^{(2)}_{i}<1$
as $\alpha\to 1$, a contradiction. Thus, we only have to consider the case
where ${\rm supp}\,p^{(1)}\subsetneq{\rm supp}\,p^{(2)}$. In this case, the
second set of the inequalities in (270) are irrelevant when $\alpha\geq 1$.
Indeed, $D_{\alpha}\big{(}p^{(2)}\big{\|}p^{(1)}\big{)}=\infty$ when
$\alpha\geq 1$. Thus we may ignore this set of inequalities. For $0<\alpha\leq
1/2$, we have
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}=\frac{\alpha}{1-\alpha}D_{1-\alpha}\big{(}p^{(2)}\big{\|}p^{(1)}\big{)}$.
This means that, if (i) holds, then
$D_{\alpha}\big{(}p^{(1)}\big{\|}p^{(2)}\big{)}\geq
D_{\alpha}\big{(}q^{(1)}\big{\|}q^{(2)}\big{)}$ for all $\alpha>0$. Naturally
this also holds for $\alpha=0$ as a pointwise limit of the above. Finally,
condition (i) is equivalent to (253), and we are in the situation of Theorem
19 providing the proof for the remainder of the claim. ∎
### 4.6. Thermal majorization
Let us consider a finite-dimensional quantum thermodynamic system. We assume
that the system’s steady state is described by the Gibbs state
$\gamma_{\beta}$ at inverse temperature $\beta>0$, i.e.,
(276) $\gamma_{\beta}=\frac{1}{Z}e^{-\beta H}$
where $Z=\mathrm{Tr}\left(e^{-\beta H}\right)$ and $H$ is the Hamiltonian of
the system. Let $\\{h_{i}\\}_{i=0}^{d-1}$ be the orthonormal basis of the
system’s Hilbert space that diagonalizes $H$, i.e.,
$H=\sum_{i=0}^{d-1}E_{i}|h_{i}\rangle\langle h_{i}|$ where $E_{i}$ are the
eigenvalues of $H$. This means
(277) $\gamma_{\beta}=\sum_{i=0}^{d-1}\lambda^{\beta}_{i}|h_{i}\rangle\langle
h_{i}|$
where $\lambda^{\beta}_{i}=e^{-\beta E_{i}}/Z$. We define the probability
vector $\lambda^{\beta}:=(\lambda^{\beta}_{0},\ldots,\lambda^{\beta}_{d-1})$.
From now on, we assume that $E_{i}>0$ for $i=0,\ldots,d-1$; we are free to
assume this as the zero energy can be shifted if necessary.
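As a concrete illustration (ours; the energies and the inverse temperature below are made up), the Gibbs vector $\lambda^{\beta}$ of (277) is computed directly from the spectrum of $H$:

```python
import numpy as np

def gibbs_vector(energies, beta):
    """Gibbs distribution lambda^beta_i = exp(-beta * E_i) / Z.
    (For very large beta * E_i one would shift the energies before exponentiating.)"""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

E = [0.5, 1.0, 1.5, 3.0]           # hypothetical eigenvalues E_i of H, all positive
beta = 2.0                          # inverse temperature
lam_beta = gibbs_vector(E, beta)
print(lam_beta, lam_beta.sum())     # a full-support probability vector summing to 1
```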
We are interested now in the question when a state of this thermodynamic
system can be transformed into another with a thermal operation. Recall that a
quantum channel (completely positive trace-preserving linear map) $\Phi$ whose
input and output systems coincide with this thermodynamic system is a thermal
operation if $\Phi(\gamma_{\beta})=\gamma_{\beta}$. When $\Phi(\rho)=\sigma$
where $\Phi$ is a thermal operation, we say that $\rho$ thermally majorizes
$\sigma$ and denote $\rho\succeq_{\beta}\sigma$. Thermal majorization of
$\rho$ over $\sigma$ in large samples means that there is $n\in\mathbb{N}$
such that $\rho^{\otimes n}\succeq_{\beta}\sigma^{\otimes n}$, i.e., there is
a channel $\Phi_{n}$ on $n$ copies of the system such that
$\Phi_{n}(\rho^{\otimes n})=\sigma^{\otimes n}$ and
$\Phi_{n}(\gamma_{\beta}^{\otimes n})=\gamma_{\beta}^{\otimes n}$. Catalytic
thermal majorization means that there are states $\tau^{(1)},\tau^{(2)}$ on
possibly another system (e.g., a heat bath), where we may interpret
$\tau^{(2)}$ as the Gibbs state of the other system, and a channel $\Phi$ such
that $\Phi(\rho\otimes\tau^{(1)})=\sigma\otimes\tau^{(1)}$ and
$\Phi(\gamma_{\beta}\otimes\tau^{(2)})=\gamma_{\beta}\otimes\tau^{(2)}$. We
are particularly interested in asymptotic catalytic thermal majorization: for
any $\varepsilon>0$, there are catalysts
$\tau_{\varepsilon}^{(1)},\tau_{\varepsilon}^{(2)}$ on some system (which may
depend on $\varepsilon$), a state $\sigma_{\varepsilon}$ on our system of
interest such that $\|\sigma-\sigma_{\varepsilon}\|_{1}<\varepsilon$, and a
channel $\Phi_{\varepsilon}$ such that
$\Phi_{\varepsilon}(\rho\otimes\tau_{\varepsilon}^{(1)})=\sigma_{\varepsilon}\otimes\tau_{\varepsilon}^{(1)}$
and
$\Phi_{\varepsilon}(\gamma_{\beta}\otimes\tau_{\varepsilon}^{(2)})=\gamma_{\beta}\otimes\tau_{\varepsilon}^{(2)}$.
We concentrate on the case where the states commute with the Gibbs state. If
the energy spectrum is nondegenerate, i.e., there are no repeating eigenvalues
$E_{i}$, then $[\rho,\gamma_{\beta}]=0$ if and only if $\rho$ diagonalizes in
the energy eigenbasis, i.e.,
(278) $\rho=\sum_{i=0}^{d-1}\lambda_{i}|h_{i}\rangle\langle h_{i}|.$
Even if the energy spectrum is degenerate, we will focus on states which are
diagonalizable in the energy eigenbasis, i.e., those $\rho$ as in (278) with
$\lambda_{i}\geq 0$, $\lambda_{0}+\cdots+\lambda_{d-1}=1$. To explicitly refer
to the state, we will associate the above $\\{h_{i}\\}_{i}$-diagonal state
$\rho$ with the probability vector
$\lambda^{\rho}=(\lambda^{\rho}_{0},\ldots,\lambda^{\rho}_{d-1})$ where
(279) $\lambda^{\rho}_{i}=\lambda_{i}=\langle h_{i}|\rho\,h_{i}\rangle,\qquad
i=0,\ldots,d-1.$
If $\rho$ and $\sigma$ are $\\{h_{i}\\}_{i}$-diagonal, then one can easily see
that $\Phi(\rho)=\sigma$ for a channel $\Phi$ if and only if
$T\lambda^{\rho}=\lambda^{\sigma}$ where $T=(T_{i,j})_{i,j=0}^{d-1}$ is a
stochastic matrix given by $T_{i,j}=\langle h_{i}|\Phi(|h_{j}\rangle\langle
h_{j}|)|h_{i}\rangle$. This means that, for $\\{h_{i}\\}_{i}$-diagonal states
$\rho$ and $\sigma$, we have $\rho\succeq_{\beta}\sigma$ if and only if there
is a stochastic matrix $T$ such that $T\lambda^{\rho}=\lambda^{\sigma}$ and
$T\lambda^{\beta}=\lambda^{\beta}$, i.e.,
(280)
$\big{(}\lambda^{\rho},\lambda^{\beta}\big{)}\succeq(\lambda^{\sigma},\lambda^{\beta}).$
Thus, we arrive at matrix majorization in the case $d=2$ where the second
column dominates the first one since we assume that the energy spectrum is
non-vanishing. Moreover, we have that $\rho$ thermally majorizes $\sigma$ in
the asymptotic catalytic regime if and only if, for all $\varepsilon>0$, there
are catalytic probability vectors
$r_{\varepsilon}^{(1)},r_{\varepsilon}^{(2)}$ (of undetermined finite length),
$\lambda_{\varepsilon}\in\mathcal{P}_{d}$ such that
$\|\lambda^{\sigma}-\lambda_{\varepsilon}\|_{1}<\varepsilon$ and a stochastic
matrix $T_{\varepsilon}$ such that $T_{\varepsilon}(\lambda^{\rho}\otimes
r_{\varepsilon}^{(1)})=\lambda_{\varepsilon}\otimes r_{\varepsilon}^{(1)}$ and
$T_{\varepsilon}(\lambda^{\beta}\otimes
r_{\varepsilon}^{(2)})=\lambda^{\beta}\otimes r_{\varepsilon}^{(2)}$. Thus,
asymptotic catalytic thermal majorization can be recast as asymptotic
catalytic matrix majorization which we have exhaustively characterized in
Section 4.5.
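A minimal illustration of the reformulation (280) (ours, with hypothetical numbers) is the completely thermalizing map, whose columns all equal $\lambda^{\beta}$; it preserves the Gibbs vector and sends every diagonal state to the Gibbs state:

```python
import numpy as np

beta = 2.0
E = np.array([0.5, 1.0, 1.5])                 # hypothetical positive energies
lam_beta = np.exp(-beta * E); lam_beta /= lam_beta.sum()

# Completely thermalizing stochastic matrix: every column equals the Gibbs vector.
T = np.tile(lam_beta[:, None], (1, len(E)))

lam_rho = np.array([1.0, 0.0, 0.0])           # an arbitrary diagonal state
assert np.allclose(T @ lam_beta, lam_beta)    # T preserves lambda^beta: a thermal operation
print(T @ lam_rho)                            # equals lambda^beta, so (lam_rho, lam_beta) majorizes (lam_beta, lam_beta)
```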
Consider now states $\rho$ and $\sigma$ both diagonalizable in the energy
eigenbasis $\\{h_{i}\\}_{i}$. If $\rho$ and $\sigma$ are both of full rank (as
is the Gibbs state $\gamma_{\beta}$), then, according to Corollary 24 of [7],
$\rho$ asymptotically catalytically thermally majorizes $\sigma$ if and only
if $D_{\alpha}(\lambda^{\rho}\|\lambda^{\beta})\geq
D_{\alpha}(\lambda^{\sigma}\|\lambda^{\beta})$ and
$D_{\alpha}(\lambda^{\beta}\|\lambda^{\rho})\geq
D_{\alpha}(\lambda^{\beta}\|\lambda^{\sigma})$ for all $\alpha\geq 1/2$. Now
we can also address the case of general rank. If $\rho$ is not of full rank,
then, according to Proposition 16, $(\lambda^{\rho},\lambda^{\beta})$ is a
power universal of $S^{2}_{\rm d.c.}$ and, according to Theorem 19, $\rho$
asymptotically catalytically thermally majorizes $\sigma$ if and only if
$D_{\alpha}(\lambda^{\rho}\|\lambda^{\beta})\geq
D_{\alpha}(\lambda^{\sigma}\|\lambda^{\beta})$ for all $\alpha\geq 0$. In this
case, we have $D_{\alpha}(\lambda^{\beta}\|\lambda^{\rho})=\infty$ for
$\alpha\geq 1$. Because also
(281) $\displaystyle D_{\alpha}(\lambda^{\beta}\|\lambda^{\rho})$
$\displaystyle=\frac{\alpha}{1-\alpha}D_{1-\alpha}(\lambda^{\rho}\|\lambda^{\beta})\geq\frac{\alpha}{1-\alpha}D_{1-\alpha}(\lambda^{\sigma}\|\lambda^{\beta})=D_{\alpha}(\lambda^{\beta}\|\lambda^{\sigma})$
for all $\alpha\in(0,1)$, we see that the asymptotic catalytic thermal
majorization of a state $\rho$ that is not of full rank over $\sigma$ is equivalent
to exactly the same inequalities as in the full-rank case. The case where
$\rho$ asymptotically catalytically thermally majorizes $\sigma$ while $\rho$
has full support and $\sigma$ does not is impossible. We can see this as
follows: Suppose that $\rho$ asymptotically catalytically thermally majorizes
$\sigma$. Because $D_{\alpha}$ are monotones, we now have that
$D_{\alpha}(\lambda^{\rho}\|\lambda^{\beta})\geq
D_{\alpha}(\lambda^{\sigma}\|\lambda^{\beta})$ for all $\alpha\geq 0$. As
$\alpha\to 0$, it follows that
(282) $\sum_{i\in{\rm supp}\,\lambda^{\rho}}e^{-\beta E_{i}}\leq\sum_{i\in{\rm
supp}\,\lambda^{\sigma}}e^{-\beta E_{i}}.$
Because $E_{i}>0$ for all $i$, this means that $|{\rm
supp}\,\lambda^{\rho}|\leq|{\rm supp}\,\lambda^{\sigma}|$, i.e., the support
of $\rho$ cannot be larger than the support of $\sigma$.
Note that, whenever $\rho$ diagonalizes in the energy eigenbasis, then
$D_{\alpha}(\lambda^{\rho}\|\lambda^{\beta})=\tilde{D}_{\alpha}(\rho\|\gamma_{\beta})$
where $\tilde{D}_{\alpha}$ is the ‘sandwiched’ quantum Rényi divergence, i.e.,
(283)
$\tilde{D}_{\alpha}(\rho\|\sigma)=\left\\{\begin{array}[]{ll}-\log{\mathrm{Tr}\left(\sigma\,{\rm
supp}\,\rho\right)},&\alpha=0\ {\rm and}\ {\rm supp}\,\rho\cap{\rm
supp}\,\sigma\neq\\{0\\},\\\
\frac{1}{\alpha-1}\log{\mathrm{Tr}\left(\big{(}\sigma^{\frac{1-\alpha}{2\alpha}}\rho\sigma^{\frac{1-\alpha}{2\alpha}}\big{)}^{\alpha}\right)},&\alpha\in(0,1)\
{\rm and}\ {\rm supp}\,\rho\cap{\rm supp}\,\sigma\neq\\{0\\},\\\
\mathrm{Tr}\left(\rho(\log{\rho}-\log{\sigma})\right),&\alpha=1\ {\rm and}\
{\rm supp}\,\rho\subseteq{\rm supp}\,\sigma,\\\
\frac{1}{\alpha-1}\log{\mathrm{Tr}\left(\big{(}\sigma^{\frac{1-\alpha}{2\alpha}}\rho\sigma^{\frac{1-\alpha}{2\alpha}}\big{)}^{\alpha}\right)},&\alpha\in(1,\infty)\
{\rm and}\ {\rm supp}\,\rho\subseteq{\rm supp}\,\sigma,\\\
\log{\big{\|}\sigma^{-1/2}\rho\sigma^{-1/2}\big{\|}},&\alpha=\infty\ {\rm and}\ {\rm
supp}\,\rho\subseteq{\rm supp}\,\sigma,\\\ \infty&{\rm
otherwise}\end{array}\right.$
where $\|\cdot\|$ is the operator norm and we use ${\rm supp}\,\rho$ for both
the supporting subspace of $\rho$ and for the orthogonal projection onto this
subspace. Let us define the free energies $F^{\pm}_{\alpha,\beta}$ through
(284)
$F^{+}_{\alpha,\beta}(\rho):=\tilde{D}_{\alpha}(\rho\|\gamma_{\beta}),\quad
F^{-}_{\alpha,\beta}(\rho):=\tilde{D}_{\alpha}(\gamma_{\beta}\|\rho)$
for all $\alpha\geq 1/2$. Thus, the above considerations yield the following
result.
###### Corollary 21.
Suppose that the eigen-energies $E_{i}>0$ for all $i$, and $\rho$ and $\sigma$
diagonalize in the energy eigenbasis. Then $\rho$ asymptotically catalytically
thermally majorizes $\sigma$ if and only if
(285) $F^{\pm}_{\alpha,\beta}(\rho)\geq
F^{\pm}_{\alpha,\beta}(\sigma),\qquad\forall\alpha\geq\frac{1}{2}.$
Note that, although this result was previously claimed in [4] (see also [10]
for discussions on the proof and possible fixes), our present proof avoids
some technical issues in the original proof. We also would like to highlight
the fact that, instead of using the result on vector majorization proven in
[13], our semiring methods allow us to directly study the problem from the
matrix majorization view without having to embed the problem in the vector
majorization form as was done in [4].
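Since, for states commuting with $\gamma_{\beta}$, the free energies in (285) reduce to classical Rényi divergences of the diagonal vectors, the condition of Corollary 21 can be evaluated numerically. The sketch below (ours, with hypothetical spectra; a finite grid of $\alpha$ values plus $\alpha=\infty$ stands in for the full range $\alpha\geq 1/2$) does this:

```python
import numpy as np

def renyi(p, q, alpha):
    """Classical D_alpha(p || q); returns inf if the required support condition fails."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = p > 0
    if alpha >= 1 and np.any(q[s] == 0):
        return np.inf
    if alpha == 1:
        return float(np.sum(p[s] * np.log(p[s] / q[s])))
    if alpha == np.inf:
        return float(np.log(np.max(p[s] / q[s])))
    return float(np.log(np.sum(p[s] ** alpha * q[s] ** (1 - alpha))) / (alpha - 1))

def thermally_majorizes(lam_rho, lam_sigma, lam_beta, alphas):
    """Check F^+_{alpha,beta}(rho) >= F^+_{alpha,beta}(sigma) and the F^- analogue, as in (285)."""
    return all(
        renyi(lam_rho, lam_beta, a) >= renyi(lam_sigma, lam_beta, a) - 1e-12
        and renyi(lam_beta, lam_rho, a) >= renyi(lam_beta, lam_sigma, a) - 1e-12
        for a in alphas
    )

lam_beta  = np.array([0.5, 0.3, 0.2])          # hypothetical Gibbs vector
lam_rho   = np.array([1.0, 0.0, 0.0])          # rho diagonal in the energy eigenbasis
lam_sigma = np.array([0.6, 0.3, 0.1])
alphas = list(np.linspace(0.5, 10, 40)) + [np.inf]
print(thermally_majorizes(lam_rho, lam_sigma, lam_beta, alphas))
```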
Our results also allow us to study multiple thermal majorization: for
$d$-tuples $\big{(}\rho^{(1)},\ldots,\rho^{(d)}\big{)}$ and
$\big{(}\sigma^{(1)},\ldots,\sigma^{(d)}\big{)}$ of states, we denote
(286)
$\big{(}\rho^{(1)},\ldots,\rho^{(d)}\big{)}\succeq_{\beta}\big{(}\sigma^{(1)},\ldots,\sigma^{(d)}\big{)}$
if there is a thermal channel $\Phi$ (i.e.,
$\Phi(\gamma_{\beta})=\gamma_{\beta}$) such that
$\Phi\big{(}\rho^{(k)}\big{)}=\sigma^{(k)}$ for $k=1,\ldots,d$. This
majorization can naturally also be studied in the large-sample, catalytic, and
asymptotic regimes, the latter one arguably being the most physically relevant
case. If we assume that all the states $\rho^{(k)}$ and $\sigma^{(k)}$
diagonalize in the energy eigenbasis, we are naturally again back to matrix
majorization with the dominating column $\lambda^{\beta}$. We point out though
that, in order to derive necessary and sufficient conditions for asymptotic
thermal majorization in this case,
$\big{(}\lambda^{\rho^{(1)}},\ldots,\lambda^{\rho^{(d)}},\lambda^{\beta}\big{)}$
has to be a power universal of the semiring $S^{d+1}_{\rm d.c.}$. Conditions
for this are quite strict, as we have seen in Proposition 16. Thus, there
remain plenty of unanswered questions in multiple thermal majorization.
## 5\. Conclusion and outlook
We have identified sufficient conditions for matrix majorization in large
samples (i.e., having many copies of the matrices) as well as in the catalytic
regime with varying support conditions. This is in contrast to the earlier
work [7], where the columns within each matrix were assumed to have a common
support. We have seen that the conditions can be written in the form of
inequalities involving multi-party generalizations of the bivariate Rényi
divergences $D_{\underline{\alpha}}$ but, unlike in the common-support case,
the support restrictions greatly affect the set of multi-party divergences to
be considered. Moreover, additional conditions arise at the boundaries where
the $\underline{\alpha}$-parameter space is cut. We have also derived
sufficient conditions for asymptotic large-sample and catalytic matrix
majorization in the case of different support conditions. Many of these
conditions are also necessary. In particular, our results completely resolve
the problem of asymptotic catalytic thermal majorization in the field of
quantum thermodynamics also studied in [4].
One issue with our results is that the sufficient conditions for the
majorization of a matrix $P$ over another matrix $Q$ in the large-sample or
catalytic regime, both exact and asymptotic, require the input matrix $P$
to be a power universal of the preordered semiring we are studying. In the
case of exact large-sample or catalytic majorization (Theorems 10 and 17), the
strict ordering of the values of the homomorphisms already implies that $P$ is
power universal. However, in the case of asymptotic large-sample or catalytic
majorization (Theorems 11 and 19), we have to require $P$ to be power
universal. Being power universal is typically quite restrictive, as
highlighted in Propositions 9 and 16. We suggest the following strategy to
overcome this: If $P$ is not a power universal with respect to either of the
semirings we are studying in this work, we should redefine the semiring so
that all the matrices in this new semiring are required to have the same
support restrictions satisfied by $P$. As an example,
$P=\big{(}p^{(1)},\ldots,p^{(d)}\big{)}$ might be such that $p^{(d)}$
dominates all other columns and ${\rm supp}\,p^{(1)}=\cdots={\rm
supp}\,p^{(d-1)}$. We now move on to the semiring where all the matrices
satisfy these support conditions. If we also remove possible repetitions of
the same columns, $P$ is now a power universal of the new semiring and we
obtain tighter conditions for majorization of $P$ over other matrices. More
generally, given any $P$, we define a new semiring $S_{\rm res}^{d}$ by
specifying for any ordered pair of columns $(k,k^{\prime})$,
$k,k^{\prime}\in[d]$, whether either ${\rm supp}\,p^{(k)}\subseteq{\rm
supp}\,p^{(k^{\prime})}$, or ${\rm supp}\,p^{(k)}\supseteq{\rm
supp}\,p^{(k^{\prime})}$, or ${\rm supp}\,p^{(k)}={\rm
supp}\,p^{(k^{\prime})}$, or there is no condition. One checks that the
resulting restricted set $S_{\rm res}^{d}$ is a semiring with the usual
operations $\boxplus,\boxtimes$ and pre-order $\preceq$. Extrapolating the
results we found for the conditions for $U$ to be power universal in
Propositions 9 and 16, and Lemma 12 in [7] (the case of minimal restrictions,
one dominating column, and equal supports, respectively), we expect that each
condition on the supports of two columns will give rise to a corresponding
condition for $U$, according to the following table.
Condition on $P\in S_{\rm res}^{d}$ | Condition on power universal $U\in S_{\rm res}^{d}$
---|---
${\rm supp}\,p^{(k)}\subseteq{\rm supp}\,p^{(k^{\prime})}$ | ${\rm supp}\,u^{(k)}\subsetneq{\rm supp}\,u^{(k^{\prime})}$
${\rm supp}\,p^{(k)}\supseteq{\rm supp}\,p^{(k^{\prime})}$ | ${\rm supp}\,u^{(k)}\supsetneq{\rm supp}\,u^{(k^{\prime})}$
${\rm supp}\,p^{(k)}={\rm supp}\,p^{(k^{\prime})}$ | $u^{(k)}\neq u^{(k^{\prime})}$
no condition | ${\rm supp}\,u^{(k)}\nsubseteq{\rm supp}\,u^{(k^{\prime})}$ and ${\rm supp}\,u^{(k^{\prime})}\nsubseteq{\rm supp}\,u^{(k)}$
In both semirings studied in this paper, we were able to show that
(287) $U\text{ is power universal}\quad\Longleftrightarrow\quad\Phi(U)\neq
1\quad\forall\,\Phi\text{ nondegenerate homomorphism.}$
In proving this, we used conditions on $U$ similar to the ones in the table
above. Hence, we conjecture that (287) also holds for a semiring $S_{\rm
res}^{d}$ with other support restrictions. Going even further, (287) might be
true for more general semirings (i.e. not just the matrix semirings considered
here).
From this we can see that, given any matrix $P$ that does not have repeated
columns, we can tailor the semiring $S_{\rm res}^{d}$ such that $P$ is power
universal in that semiring. We expect that the homomorphisms and derivations
corresponding to $S_{\rm res}^{d}$ are similar to the ones found in this
paper, precisely those for parameters $\underline{\alpha}$ or
$\underline{\beta}$ that are well-defined, i.e. those where there does not
appear a factor $\left(q^{(k)}_{i}\right)^{\alpha}$ or
$\left(q^{(k)}_{i}\right)^{\beta}$ with $\alpha,\beta<0$ where $q^{(k)}_{i}$
vanishes for at least one matrix $Q$ in the semiring. In this way, we would be
able to find the conditions for large-sample and catalytic majorization, both
for the exact and asymptotic case, for an input matrix with any kind of
support structure, not just the ones that are power universal in the semirings
discussed here or in [7].
From a practical point of view, the notion of asymptotic matrix majorization
in the large-sample or catalytic regime, as characterized in Theorems 11 and
19, is the most relevant case. After all, it is often enough to know that
large-sample or catalytic majorization only holds approximately, possibly with
vanishing error. Our current methods are geared towards identifying exact
majorization in large samples or catalytically and often yield extra
conditions which are not needed in the asymptotic picture, as we have seen.
Finding conditions directly for the desired asymptotic results without having
to take a scenic route through the theory and results for exact majorization
would obviously be of high interest.
There is ever-increasing interest also in multi-party quantum divergences [15,
17]. However, although we may define various multi-party quantum divergences
and often even show that they have desirable properties, e.g., monotonicity
under operating with the same quantum channel on all the states (data
processing inequality) and additivity under tensor products, characterization
of all the quantum divergences with these (and possibly other) properties
seems to be a daunting task, even in the two-party case [20]. Despite these
difficulties, we may still define semirings of $d$-tuples
$\vec{\rho}=\big{(}\rho^{(1)},\ldots,\rho^{(d)}\big{)}$ of quantum states with
varying conditions for the supports ${\rm supp}\,\rho^{(k)}$, where the
addition and multiplication are defined through componentwise direct sum and
tensor product respectively. These semirings can be preordered by denoting
(288)
$\big{(}\rho^{(1)},\ldots,\rho^{(d)}\big{)}\succeq\big{(}\sigma^{(1)},\ldots,\sigma^{(d)}\big{)}$
when there is a quantum channel $\Phi$ such that
$\Phi\big{(}\rho^{(k)}\big{)}=\sigma^{(k)}$ for $k=1,\ldots,d$. We may still
be able to identify relevant subsets of monotone homomorphisms and derivations
on these semirings yielding conditions for large-sample quantum majorization.
This remains an interesting research question.
## Acknowledgements
This work was supported by the National Research Foundation, Singapore and
A*STAR under its CQT Bridging Grant.
## References
* [1] G. Aubrun and I. Nechita. “Catalytic Majorization and $\ell_{p}$ Norms”. Communications in Mathematical Physics 278(1): 133–144 (2007).
* [2] D. Blackwell. “Comparison of experiments”. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1950, pages 93–102, (1951).
* [3] D. Blackwell. “Equivalent Comparisons of Experiments”. The Annals of Mathematical Statistics 24: 265–272 (1953).
* [4] F. G. S. L. Brandao, M. Horodecki, N. H. Y. Ng, J. Oppenheim, and S. Wehner. “The Second Laws of Quantum Thermodynamics”. Proceedings of the National Academy of Sciences USA 112(11): 3275–3279 (2014).
* [5] G. Bunth and P. Vrana. “Asymptotic relative submajorization of multiple-state boxes”. Letters in Mathematical Physics 111(4): 1–23 (2021).
* [6] G. Dahl. “Matrix majorization”. Linear Algebra and its Applications 288: 53–73 (1999).
* [7] M. U. Farooq, T. Fritz, E. Haapasalo, and M. Tomamichel. “Matrix Majorization in Large Samples”. IEEE Transactions on Information Theory 70(5): 3118–3144 (2024).
* [8] T. Fritz. “Abstract Vergleichsstellensätze for Preordered Semifields and Semirings II”, (2022). arXiv: 2112.05949.
* [9] T. Fritz. “Abstract Vergleichsstellensätze for Preordered Semifields and Semirings I”. SIAM Journal on Applied Algebra and Geometry 7(2): 505–547 (2023).
* [10] G. Gour and M. Tomamichel. “Entropy and Relative Entropy From Information-Theoretic Principles”. IEEE Trans. Inf. Theor. 67(10): 6313–6327 (2021).
* [11] H. Heyer. Comparison of Finite Experiments. Springer New York (1982).
* [12] A. K. Jensen. “Asymptotic Majorization of Finite Probability Distributions”. IEEE Transactions on Information Theory 65(12): 8131–8139 (2019).
* [13] M. Klimesh. “Inequalities that Collectively Completely Characterize the Catalytic Majorization Relation”, (2007). arXiv: 0709.3680.
* [14] H. K. Mishra, M. Nussbaum, and M. M. Wilde. “On the optimal error exponents for classical and quantum antidistinguishability”. Letters in Mathematical Physics 114(76): 1573–0530 (2024).
* [15] M. Mosonyi, G. Bunth, and P. Vrana. “Geometric relative entropies and barycentric Rényi divergences”. Linear Algebra and its Applications 699: 159–276 (2024).
* [16] X. Mu, L. Pomatto, P. Strack, and O. Tamuz. “From Blackwell Dominance in Large Samples to Rényi Divergences and Back Again”. Econometrica 89(1): 475–506 (2020).
* [17] T. Nuradha, H. K. Mishra, F. Leditzky, and M. M. Wilde. “Multivariate Fidelities”, (2024). arXiv: 2404.16101.
* [18] C. Perry, P. Vrana, and A. H. Werner. “The Semiring of Dichotomies and Asymptotic Relative Submajorization”. IEEE Transactions on Information Theory 68(1): 311–321 (2022).
* [19] V. Strassen. “The asymptotic spectrum of tensors and the exponent of matrix multiplication”. 27th Annual Symposium on Foundations of Computer Science (sfcs 1986) pages 49–54 (1986).
* [20] M. Tomamichel. Quantum Information Processing with Finite Resources — Mathematical Foundations. volume 5 of SpringerBriefs in Mathematical Physics, Springer International Publishing (2016).
# GenAI Arena: An Open Evaluation Platform for Generative Models
Dongfu Jiang∗ Max Ku∗ Tianle Li∗ Yuansheng Ni Shizhuo Sun Rongqi Fan Wenhu
Chen
University of Waterloo
{dongfu.jiang, m3ku, t29li<EMAIL_ADDRESS>
https://hf.co/spaces/TIGER-Lab/GenAI-Arena
###### Abstract
Generative AI has made remarkable strides to revolutionize fields such as
image and video generation. These advancements are driven by innovative
algorithms, architecture, and data. However, the rapid proliferation of
generative models has highlighted a critical gap: the absence of trustworthy
evaluation metrics. Current automatic assessments such as FID, CLIP, and FVD
often fail to capture the nuanced quality and user satisfaction associated
with generative outputs. This paper proposes an open platform GenAI-Arena to
evaluate different image and video generative models, where users can actively
participate in evaluating these models. By leveraging collective user feedback
and votes, GenAI-Arena aims to provide a more democratic and accurate measure
of model performance. It covers three arenas for text-to-image generation,
text-to-video generation, and image editing respectively. Currently, we cover
a total of 27 open-source generative models. GenAI-Arena has been operating
for four months, amassing over 6000 votes from the community. We describe our
platform, analyze the data, and explain the statistical methods for ranking
the models. To further promote the research in building model-based evaluation
metrics, we release a cleaned version of our preference data for the three
tasks, namely GenAI-Bench. We prompt existing multi-modal models like Gemini
and GPT-4o to mimic human voting. We compute the correlation between model
voting and human voting to understand their judging abilities. Our results
show existing multimodal models are still lagging in assessing the generated
visual content, even the best model GPT-4o only achieves a Pearson correlation
of 0.22 in the quality subscore, and behaves like random guessing in others.
Figure 1: GenAI Arena contains three components: (1) text-to-image, text-to-
video and image editing arena, which accept community voting to obtain the
preference pairs. (2) The leaderboard utilizes the preference pairs to
calculate elo ranking for all the evaluated models. (3) We further release
GenAI-Bench to judge different multimodal LLM judges.
## 1 Introduction
Image generation and manipulation technologies have seen rapid advancements,
leading to their widespread application across various domains such as
creating stunning artwork [41, 53, 66, 18], enhancing visual content [4, 35],
and aiding in medical imaging [64, 9]. Despite these advancements, navigating
through the multitude of available models and assessing their performance
remains a challenging task [51]. Traditional evaluation metrics like PSNR,
SSIM [60], LPIPS [67], and FID [17], while valuable, offer very specific
insights into precise aspects of visual content generation. However, these
metrics often fall short in providing a comprehensive assessment of overall
model performance, especially when considering subjective qualities like
aesthetics and user satisfaction [45].
To address these challenges, we introduce GenAI-Arena—a novel platform
designed to enable fair evaluation. Inspired by successful implementations in
other domains [69, 40], GenAI-Arena offers a dynamic and interactive platform
where users can generate images, compare them side-by-side, and vote for their
preferred models. Such a platform not only simplifies the process of comparing
different models but also provides a ranking system that reflects human
preferences, thereby offering a more holistic evaluation of model
capabilities. To our knowledge, GenAI-Arena is the first evaluation platform
with comprehensive evaluation capabilities across multiple properties. Unlike
other platforms, it supports a wide range of tasks across text-to-image
generation, text-guided image editing, and text-to-video generation, along
with a public voting process to ensure labeling transparency. The votes are
utilized to access the evaluation ability of Multimodal Large Language Model
(MLLM) evaluators. Table 1 shows our platform excels in its versatility and
transparency.
Since February 11th, 2024, we have collected over 6000 votes for three
multimodal generative tasks. We constructed leaderboards for each task with
these votes, identifying the state-of-the-art models as PlayGround V2.5,
MagicBrush, and T2V-Turbo, respectively (as of June 4th, 2024). Detailed
analyses based on the votes are presented. For example, our plotted winning
fraction heatmaps reveal that while the Elo rating system is generally
effective, it can be biased by imbalances between "easy games" and "hard
games". We also performed several case studies for qualitative analysis,
demonstrating that users can provide preference votes from multiple evaluation
aspects, which helps distinguish subtle differences between the outputs and yields high-quality votes for the Elo rating computation.
Automatically assessing the quality of generated visual content is a
challenging problem for several reasons: (1) images and videos have many
different aspects like visual quality, consistency, alignment, artifacts, etc.
Such a multi-faceted nature makes the evaluation intrinsically difficult. (2)
the supervised data is relatively scarce on the web. In our work, we release
the user voting data as GenAI-Bench to enable further development in this
field. Specifically, we calculate the correlation between different
image/video auto-raters (i.e., MLLM judges like GPT-4o, Gemini, etc.) and user
preference to understand their judging abilities. Our results show that even
the best MLLM, GPT-4o, achieves at most a 0.22 Pearson correlation with human
preference.
Table 1: Comparison with different evaluation platforms on different
properties.
Platform | Text-To-Image Generation | Text-Guided Image Editing | Text-To-Video Generation | Human Label Transparency | Open/Public Voting Process | Judging MLLM judge
---|---|---|---|---|---|---
T2I-CompBench [20] | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
HEIM [31] | ✓ | ✗ | ✗ | ✓ | ✗ | ✗
ImagenHub [28] | ✓ | ✓ | ✗ | ✓ | ✗ | ✗
VBench [21] | ✗ | ✗ | ✓ | ✓ | ✗ | ✗
EvalCrafter [37] | ✗ | ✗ | ✓ | ✓ | ✗ | ✗
GenAI-Arena | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
To summarize, our work’s contributions include:
* •
GenAI-Arena, the first open platform to rank multi-modal generative AI based
on user preferences.
* •
Discussion and case studies of collected user votes, showing the reliability
of GenAI-Arena.
* •
GenAI-Bench, a public benchmark for judging MLLM’s evaluation ability for
generative tasks.
## 2 Related Work
### 2.1 Generative AI Evaluation Metrics
Numerous methods have been proposed to evaluate the performance of multi-modal
generative models in various aspects. In the context of image generation,
CLIPScore [16] is proposed to measure the text-alignment of an image and a
text through computing the cosine similarity of the two embeddings from CLIP
[50]. IS [54] and FID [17] measure image fidelity by computing a distance
function between real and synthesized data distributions. PSNR, SSIM [60]
assess the image similarity. LPIPS [67] and the follow-up works [12, 13]
measure the perceptual similarity of images. More recent works leverage the
Multimodal Large Language Model (MLLM) as a judge. T2I-CompBench [20] proposed
the use of miniGPT4 [70] to evaluate the compositional text-to-image generation
task. TIFA [19] further adapted visual question answering to compute scores
for the text-to-image generation task. VIEScore [27] leveraged MLLMs as a
unified metric across image generation and editing tasks, reporting that MLLM
has great potential in replacing human judges.
Metrics in similar fashions are also proposed for the video domain. For
example, FVD [57] measures the coherence shifts and quality in frames. CLIPSIM
[50] utilizes an image-text similarity model to assess the similarity between
video frames and text. VBench [21] and EvalCrafter [37] also proposed
different metrics for evaluating different aspects of the video generation
task. However, these automatic metrics still lag behind human preferences, achieving low correlation and thus casting doubt on their reliability.
### 2.2 Generative AI Evaluation Platforms
While automatic metrics focus on evaluating a single model's performance,
evaluation platforms aim to systematically rank a group of models. Recently,
several benchmark suites have been developed to comprehensively assess
generative AI models. For image generation, T2ICompBench [20] evaluates
compositional text-to-image generation tasks, while HEIM [31] offers a
holistic evaluation framework that measures text-to-image tasks across
multiple dimensions, including safety and toxicity. Similarly, ImagenHub [28]
evaluates text-to-image, image editing, and other prevalent image generation
tasks in a unified benchmark suite. For video generation, VBench [21] and
EvalCrafter [37] provide structured evaluation approaches ensuring rigorous
assessment. Despite their functionality, these benchmarks rely on model-based
evaluation metrics, which are less reliable than human evaluation.
To address this issue, various model arenas have been developed to collect
direct human preferences for ranking models. Chatbot Arena by LMsys [11] is
the pioneering platform in this regard, setting the standard for evaluation.
Subsequent efforts have led to the creation of arenas for vision-language
models [62], TTS models [40], and tokenizers [24]. However, there is no
existing arena for generative AI models. To fill this gap, we propose GenAI-
Arena as a complementary solution in this field.
## 3 GenAI-Arena: Design and Implementation
Figure 2: GenAI Arena User Voting Interface.
### 3.1 Design
GenAI-Arena is designed to offer an intuitive and comprehensive evaluation
platform for generative models, facilitating user interaction and
participation. The platform is structured around three primary tasks: text-to-
image generation, image editing, and text-to-video generation. Each task is
supported by a set of features that include an anonymous side-by-side voting
system, a battle playground, a direct generation tab, and a leaderboard as
shown in Figure 2. These features are designed to cater to both casual users
and researchers, ensuring a democratic and accurate assessment of model
performance.
#### Standardized Inference
To ensure a fair comparison between different models, we ported the highly
dispersed codebases from the existing works and then standardized them into a
unified format. During inference, we fixed the hyper-parameters and the prompt
format to prevent per-instance prompt or hyper-parameter tuning, which makes
the inference of different models fair and reproducible. Following ImagenHub
[28], we built a new library, VideoGenHub, which aims to standardize the
inference procedure for different text-to-video and image-to-video models. We
find the best hyper-parameters of these models to ensure their highest
performance.
#### Voting Rules
The anonymous battle section is designed to ensure unbiased voting and
accurate evaluation of generative models. The rules for this section are as
follows:
1. 1.
Users input a prompt, which is then used to generate outputs from two
anonymous models within the same category of task.
2. 2.
The generated outputs from the two anonymous models are presented side-by-side
for comparison.
3. 3.
Users can vote based on their preference using the options: 1) left is better;
2) right is better; 3) tie; 4) both are bad. These four options are used to calculate the Elo ranking.
4. 4.
Once the user has made their decision, they click the Vote button to submit
their vote. It is important to ensure that the identity of the models remains
anonymous throughout the process. Votes will not be counted if the model
identity is revealed during the interaction.
### 3.2 Model Integration
In GenAI-Arena, we incorporate a diverse array of state-of-the-art generative
models, covering a broad range of generative tasks including text-to-image
generation, image editing, and text-to-video generation. To ensure
comprehensive evaluations, the platform includes models that employ diverse
underlying technologies, such as different types of architectures, training
paradigms, training data, and acceleration techniques. These variations offer insights for understanding these factors rigorously.
#### Text-to-Image Generation
In Table 2, we list all the included text-to-image generation models. For
example, SDXL, SDXL-Turbo, and SDXL-Lightning are all derived from SDXL [49], with SDXL-Turbo [55] and SDXL-Lightning [36] adopting different distillation methods. We also include diffusion transformer models [47] like
PixArt-$\alpha$ and PixArt-$\sigma$. Playground V2 and Playground V2.5 are
based on the SDXL architecture but trained by Playground.ai from scratch with an
internal dataset.
Table 2: The overview of all text-to-image generation models.
Model | Size | Method | Resolution | #Steps
---|---|---|---|---
OpenJourney [44] | 1B | SD-2.1 + MidJourney Dataset | 512x512 | 50
LCM [38] | 1B | SD-2.1 + Consistency Distillation | 512x512 | 4
SDXL [49] | 3.5B | Latent Diffusion Model | 1K$\times$1K | 50
SDXL-Turbo [55] | 3.5B | Latent Diffusion Model + Distillation | 1K$\times$1K | 1
SDXL-Lightning [36] | 3.5B | Latent Diffusion Model + Distillation | 1K$\times$1K | 4
PixArt-$\alpha$ [7] | 0.6B | Diffusion Transformer | 1K$\times$1K | 50
PixArt-$\sigma$ [8] | 0.6B | Diffusion Transformer + Weak-to-Strong | 4K$\times$4K | 50
StableCascade [48] | 1.5B + 3.6B | Würstchen Architecture | 1K$\times$1K | 20+10
Playground V2 [33] | 3.5B | Latent Diffusion Model | 1K$\times$1K | 50
Playground V2.5 [32] | 3.5B | Latent Diffusion Model | 1K$\times$1K | 50
Table 3: Overview of all the image editing models.
Model | Trained? | Method | Runtime
---|---|---|---
Pix2PixZero [46] | Zero-shot | Editing Direction Discovery + Attention Control | 21s
SDEdit [39] | Zero-shot | Iteratively Denoising through SDE | 13s
CycleDiffusion [61] | Zero-shot | Reconstructable Encoder for Stochastic DPMs | 9s
Prompt2Prompt [15] | Zero-shot | Prompt-based Cross-attention Control | 120s
PnP [56] | Zero-shot | Feature and Self-attention Injection | 120s
InfEdit [63] | Zero-shot | Consistent Model + Uni-Attention Control | 5s
InstructPix2Pix [4] | Trained | Instruction-based Fine-tuning with Synthetic Data | 12s
MagicBrush [65] | Trained | Instruction-based Fine-tuning with Annotated Data | 12s
CosXLEdit [1] | Trained | Cosine-Continuous EDM VPred schedule | 50s
Table 4: Overview of all text-to-video generation models.
Model | Base | Len | FPS | Dataset | Resolution | #Steps
---|---|---|---|---|---|---
AnimateDiff [14] | SD-1.5 | 2s | 8 | WebVid10M | 512 x 512 | 25
AnimateDiff-Turbo [14] | SD-1.5 | 2s | 8 | WebVid10M | 512 x 512 | 4
ModelScope [58] | SD-1.5 | 2s | 8 | WebVid10M | 256 x 256 | 50
LaVie [59] | SD-1.5 | 2s | 8 | Vimeo25M | 320 x 512 | 50
StableVideoDiffusion [2] | SD-2.1 | 2.5s | 10 | LVD-500M | 576 x 1024 | 20
VideoCrafter2 [6] | SD-2.1 | 2s | 16 | WebVid10M | 320 x 512 | 50
T2V-Turbo [34] | VideoCrafter2 | 2s | 8 | WebVid10M | 320 x 512 | 4
OpenSora [42] | Pixart-$\alpha$ | 2s | 16 | WebVid10M | 320 x 512 | 50
#### Text-guided Image Editing
In Table 3, we list all the image editing models and approaches. Some of them
are plug-and-play approaches without requiring any training, like Pix2PixZero
[46], InfEdit [63], SDEdit [39], etc. These methods can be applied to a broad
range of diffusion models. Some of the models like PnP [56] and Prompt2Prompt
[15] require DDIM inversion, which takes much longer than the other
approaches. We also include specialized trained image editing models like
InstructP2P [4], MagicBrush [65] and CosXLEdit [1].
#### Text-to-Video Generation
In Table 4, we list all the text-to-video generation models. We include
different types of models. For example, AnimateDiff [14], ModelScope [58],
and LaVie [59] are initialized from SD-1.5 and further trained by injecting a motion layer to capture the temporal relation between frames. In contrast, StableVideoDiffusion [2] and VideoCrafter2 [5] are initialized from SD-2.1.
Besides these models, we also include OpenSora [42], which utilizes a Sora-
like diffusion transformer [47] architecture for joint space-time attention.
### 3.3 Elo Rating System
#### Online Elo Rating
The Elo rating system models the probability of player $i$ winning against
player $j$, based on their current ratings, $R_{i}$ and $R_{j}$ respectively,
where $i,j\in N$. We define a binary outcome $Y_{ij}$ for each comparison
between player $i$ and player $j$, where $Y_{ij}=1$ if player $i$ wins and
$Y_{ij}=0$ otherwise. The logistic probability is formulated as:
$P(Y_{ij}=1)=\frac{1}{1+10^{(R_{j}-R_{i})/\alpha}}$ (1)
where $\alpha=400$ for Elo rating computation. After each match, a player’s
rating is updated using the formula:
$R^{\prime}_{i}=R_{i}+K\times(S(i,j)-E(i,j))$ (2)
where $S(i,j)$ is the actual match outcome ($S(i,j)=1$ for a win, $S(i,j)=0.5$ for a tie, and $S(i,j)=0$ for a loss), and $E(i,j)=P(Y_{ij}=1)$ is the expected outcome.
For example, given a model’s Elo rating as 1200 and the other model’s elo
rating as 1100, then the estimated probability of the first model winning will
be $\frac{1}{1+10^{(1100-1200)/400}}\approx 0.64$. In this way, we can have a
direct understanding of the elo rating’s meaning. This mapping from absolute
number to the pairwise winning rate of two models gives a more straightforward
understanding of the meaning of elo rating score.
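As a minimal illustration of Equations (1) and (2) (a sketch, not the platform's production code; the K-factor value of 32 is an illustrative assumption), the expected score and online update can be written as:

```python
def expected_score(r_i, r_j, alpha=400):
    """Probability that player i beats player j under Equation (1)."""
    return 1.0 / (1.0 + 10 ** ((r_j - r_i) / alpha))


def online_elo_update(r_i, r_j, s_ij, k=32):
    """Update the rating of player i after one match (Equation (2)).

    s_ij is 1 for a win, 0.5 for a tie, and 0 for a loss.
    The K-factor of 32 is an illustrative choice, not the platform's setting.
    """
    e_ij = expected_score(r_i, r_j)
    return r_i + k * (s_ij - e_ij)


# Example from the text: ratings 1200 vs. 1100 give roughly a 0.64 win probability.
print(round(expected_score(1200, 1100), 2))  # 0.64
```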
Another design principle behind the Elo rating is that a higher-rated player gains fewer points for beating a lower-rated player but loses more points for losing to them, whereas the lower-rated player experiences the opposite. Consequently, the order of a specific set of matches significantly affects the final computed Elo rating, as each player's rating and the rating gain of each match change dynamically. This online Elo rating system may be suitable for real-world competitions, where players usually play fewer than 100 matches a year. However, the arena for AI models typically involves thousands of votes (competitions), and the quality of the votes is not ensured. Thus, it is necessary to acquire an order-consistent and more stable Elo rating. To do this, we follow Chatbot Arena [10] and adopt the Bradley–Terry model [3] to obtain a statistically estimated Elo rating.
#### Bradley–Terry Model Estimation
The Bradley–Terry (BT) model [3] estimates Elo ratings using logistic
regression and maximum likelihood estimation (MLE). Suppose there are $N$
players and we have a series of pairwise comparisons, where $W_{ij}$ is the
number of times player $i$ has won against player $j$. The log-likelihood
function for all pairwise comparisons is written as:
$\mathcal{L}(\mathbf{R})=\sum_{i,j\in N,i\neq j}\left(W_{ij}Y_{ij}\log
P(Y_{ij}=1)\right)$ (3)
where $\mathbf{R}=\\{R_{1},\ldots,R_{N}\\}$ represents the Elo ratings of each
player. The Bradley–Terry model provides a stable statistical estimation of
the players’ ratings by consistently incorporating all pairwise comparisons,
thus overcoming the limitations of direct Elo computation in online settings.
Since the BT model does not account for ties, we first duplicate all the
votes, then allocate half of the "tie" votes to the scenario where model $i$
wins ($Y_{ij}=1$) and the other half to the scenario where model $j$ wins
($Y_{ij}=0$). In practice, we formulate the estimation as a logistic regression and solve it with the LogisticRegression solver from sklearn.
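The logistic-regression formulation can be sketched as follows. This is a hedged illustration rather than the platform's exact implementation: the anchoring constants (scale 400 and base rating 1000), the tiny example votes, and the model names are assumptions, and sklearn's default ridge penalty is kept only to stabilize the fit.

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression


def bt_elo(battles, models, scale=400, base=1000):
    """Estimate Elo-scale ratings with the Bradley-Terry model.

    battles: list of (model_a, model_b, outcome) with outcome in
    {"a", "b", "tie"}. Ties are duplicated and split half/half, as in
    the text. scale/base are illustrative anchoring constants.
    """
    idx = {m: k for k, m in enumerate(models)}
    rows, ys = [], []

    def add(a, b, y):
        x = np.zeros(len(models))
        x[idx[a]] = +math.log(10)   # winner probability follows the base-10 Elo form
        x[idx[b]] = -math.log(10)
        rows.append(x)
        ys.append(y)

    for a, b, outcome in battles:
        if outcome == "a":
            add(a, b, 1)
        elif outcome == "b":
            add(a, b, 0)
        else:  # tie: half a win for each side
            add(a, b, 1)
            add(a, b, 0)

    X, y = np.vstack(rows), np.array(ys)
    # Default L2 penalty keeps the fit finite under perfect separation.
    lr = LogisticRegression(fit_intercept=False)
    lr.fit(X, y)
    return {m: float(base + scale * lr.coef_[0][idx[m]]) for m in models}


votes = [("SDXL", "LCM", "a"), ("SDXL", "LCM", "a"),
         ("PlayGroundV2.5", "SDXL", "a"), ("PlayGroundV2.5", "LCM", "tie")]
print(bt_elo(votes, ["PlayGroundV2.5", "SDXL", "LCM"]))
```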
#### Confidence Interval
To further investigate the variance of the estimated Elo ratings, we use the "sandwich" standard errors described in Huber et al. [22]. That is, in each round we record the Elo rating estimated from the same number of battles, resampled from the collected votes; this process continues for 100 rounds. We take the lowest sampled Elo rating as the lower bound of the confidence interval and the highest sampled Elo rating as the upper bound.
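The 100-round resampling described above might look like the following sketch. The estimation routine (for example, the bt_elo helper from the previous block) is passed in as a callable, and the min/max bounds follow the description in the text.

```python
import random


def bootstrap_elo_interval(battles, models, estimate_fn, rounds=100, seed=0):
    """Bootstrap per-model Elo intervals.

    estimate_fn maps a list of battles to a {model: rating} dict, e.g. the
    bt_elo helper from the previous sketch. Each round resamples the same
    number of battles with replacement and re-estimates the ratings; the
    min/max sampled rating per model serve as the interval bounds.
    """
    rng = random.Random(seed)
    samples = {m: [] for m in models}
    for _ in range(rounds):
        resampled = [rng.choice(battles) for _ in range(len(battles))]
        ratings = estimate_fn(resampled)
        for m in models:
            samples[m].append(ratings[m])
    return {m: (min(v), max(v)) for m, v in samples.items()}
```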
### 3.4 GenAI-Museum
GenAI-Arena currently runs the models on the Hugging Face Zero GPU system [23].
As shown in Table 3, the time for a single generative inference usually ranges
from 5 to 120 seconds. Unlike autoregressive language models, for which inference acceleration techniques like vLLM [29] and SGLang [68] generate responses in under a second, the diffusion model community does not yet have such powerful infrastructure. Therefore, pre-computation becomes a necessary way to
mitigate computational overhead and streamline user interaction.
To achieve this, we serve GenAI-Museum as a pre-computed data pool comprising various inputs from existing datasets or user collections, along with each model's output. Based on this, a "Random Sample" button, shown in Figure 2, is additionally implemented to facilitate the random selection of prompts and the immediate retrieval of corresponding images or videos. This functionality operates by sending a request to our deployed GenAI-Museum every time the "Random Sample" button is hit, receiving an input and the pre-computed outputs of two randomly chosen models. In this way, we save GPU computation time, enable users to make instant comparisons and votes in the UI, and balance the votes across unique inputs so that we gradually collect votes for the full combination of all models. The input prompts were sampled from ImagenHub [28] and VBench [21].
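A minimal sketch of the pre-computed retrieval idea is given below; the actual GenAI-Museum service and its API are not specified in detail here, so the data structure, file paths, and model names are purely illustrative assumptions.

```python
import random

# Hypothetical pre-computed pool: prompt -> {model_name: path_to_output}.
MUSEUM = {
    "a cute dog is playing with a ball": {
        "SDXL": "outputs/sdxl/dog.png",
        "PlayGroundV2.5": "outputs/pg25/dog.png",
        "LCM": "outputs/lcm/dog.png",
    },
}


def random_sample(pool=MUSEUM, seed=None):
    """Return a random prompt and two anonymized pre-computed outputs."""
    rng = random.Random(seed)
    prompt = rng.choice(list(pool))
    model_a, model_b = rng.sample(list(pool[prompt]), 2)
    # Model identities stay hidden from the voter until after the vote.
    return prompt, pool[prompt][model_a], pool[prompt][model_b]


print(random_sample(seed=0))
```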
Table 5: GenAI-Arena Leaderboards.
(Last updated on June 4th, 2024)
(a) Text-to-Image
Model | Elo | 95% CI
---|---|---
PlayGround V2.5 | 1150 | +21/-22
Playground V2 | 1101 | +14/-20
StableCascade | 1057 | +20/-24
SDXL-Lightning | 1053 | +22/-22
PixArt-$\alpha$ | 1052 | +15/-19
PixArt-$\sigma$ | 1050 | +26/-23
SDXL | 1001 | +15/-14
SDXL-Turbo | 935 | +18/-16
OpenJourney | 853 | +13/-17
LCM | 817 | +20/-20
(b) Image editing
Model | Elo | 95% CI
---|---|---
MagicBrush | 1111 | +28/-32
InfEdit | 1079 | +27/-33
CosXLEdit | 1066 | +31/-30
InstructPix2Pix | 1033 | +32/-26
PNP | 998 | +37/-36
Prompt2prompt | 988 | +25/-25
CycleDiffusion | 939 | +23/-26
SDEdit | 929 | +25/-21
Pix2PixZero | 857 | +20/-24
(c) Text-to-video
Model | Elo | 95% CI
---|---|---
T2V-Turbo | 1113 | +53/-46
StableVDiffusion | 1105 | +45/-37
VideoCrafter2 | 1077 | +18/-18
AnimateDiff | 1075 | +23/-26
LaVie | 997 | +24/-26
OpenSora | 916 | +19/-23
ModelScope | 866 | +19/-22
AnimateDiff-Turbo | 851 | +18/-20
(a) Text-to-Image
(b) Image Editing
(c) Text-to-Video
Figure 3: Winning fraction heatmap of different models for the three tasks in
GenAI-Arena
(d) Text-to-Image
(e) Image Editing
(f) Text-to-Video
Figure 4: Battle count heatmap of different models for the three tasks in
GenAI-Arena (without Ties)
## 4 Benchmarks and Results Discussion
### 4.1 Arena Leaderboard
We report our leaderboard at the time of writing in Table 5. For image generation, we collected 4443 votes in total. The top-ranked models are currently Playground V2.5 and Playground V2. Both models are released by Playground.ai; they follow the same architecture as SDXL but are trained with a private dataset. In contrast, SDXL only ranks in the seventh position, lagging significantly behind. This finding highlights the importance of the training dataset. Following the Playground models is StableCascade, which utilizes a highly efficient cascade architecture to lower the training cost. According to Würstchen [48], StableCascade requires only about 10% of the training cost of SD-2.1, yet it beats SDXL significantly on our leaderboard. This highlights the importance of the diffusion architecture for achieving strong performance. For image editing, a total of 1083 votes have been collected. MagicBrush, InfEdit, CosXLEdit, and InstructPix2Pix rank higher because they can perform localized editing on images. PNP preserves the structure with feature injections, which limits the variety of edits. Older methods such as Prompt-to-Prompt, CycleDiffusion, SDEdit, and Pix2PixZero frequently produce completely different images during editing despite their high image quality, which explains their lower rankings. For text-to-video, there are a total of 1568 votes. T2V-Turbo leads with the highest Elo score, suggesting it is the most effective model. Close behind, StableVideoDiffusion ranks second. VideoCrafter2 and AnimateDiff follow with very close Elo scores, showing nearly equivalent capabilities. LaVie, OpenSora, ModelScope, and AnimateDiff-Turbo follow with decreasing scores, indicating progressively lower performance.
### 4.2 Discussion and Insights
#### Winning Fraction and Elo Rating
We visualize the winning fraction heatmap in Figure 3, where each cell represents the actual winning fraction of Model A over Model B. The models are ordered by their Elo rating in the heatmap. Across each row, the winning fraction of Model A increases as the Elo rating of Model B decreases, demonstrating the effectiveness of the Elo rating system in ranking different models.
Specific cells in the heatmap reveal notable findings. For instance, although
PlayGround 2.5 achieves the state-of-the-art (SOTA) Elo rating in the Text-to-
Image task, its winning fraction over PixArt-$\sigma$ is only $0.48$, which is
below 50%. Similarly, the Text-to-Video SoTA model, T2V-Turbo, has a lower
winning fraction against StableVideoDiffusion. The higher Elo rating of
T2V-Turbo might be due to our Arena collecting more votes from "easy games"
with low-ranked models and fewer from "harder games" with high-ranked models.
For example, the number of battles between T2V-Turbo and AnimateDiff-Turbo
($30$) is far larger than that between T2V-Turbo and the other models (around $10$), as shown in Figure 4.
These anomalies highlight potential drawbacks of the Elo rating system: (1) a
reliable and robust Elo rating requires a large amount of voting data, and (2)
the estimated Elo rating may be biased by the imbalance between "easy games"
and "harder games," as they carry similar weight in the estimation.
Figure 5: Example of votes from users on the GenAI-Arena for the three
generative tasks
#### Case Study
We present case studies in Figure 5, showcasing the votes collected for three
generative tasks. These cases demonstrate that GenAI-Arena users can provide
high-quality votes, even for the most advanced models. For instance, in the
text-to-image task, the image generated by PlayGround V2.5 was preferred over
that of SDXL-Lightning for the prompt "a cute dog is playing with a ball," as
the latter depicted two dogs instead of one. Users can clearly distinguish and
vote based on the quality of the outputs, even when both models complete the
task. In the image editing task, the edited image from Prompt2Prompt appeared
more natural than the one from InfEdit, leading users to make a definitive
vote. Similarly, votes collected for the text-to-video task were also of high
quality.
## 5 GenAI-Bench
### 5.1 Dataset
We applied Llama Guard [25] as an NSFW filter to ensure that the user input prompts are appropriate for a wide range of audiences and to protect users of the benchmark from exposure to potentially harmful or offensive content. In the text-to-image generation task, we collected 4.3k anonymous votes in total, and 1.7k votes remain after filtering for safe content. We observe that a large portion of the prompts are filtered out due to sexual content, which accounts for 85.6% of the discarded data. In the text-guided image editing task, we collected 1.1k votes from users before filtering. After applying Llama Guard, 0.9k votes for image editing are released. In this task, 87.5% of the unsafe inputs involve violent crimes, and the remaining 12.5% are filtered out due to sex-related crimes. For the text-to-video generation task, our platform collected 1.2k votes before post-processing. After cleaning with the NSFW filter, we release the remaining 1.1k votes. All of the unsafe data discarded in this task is due to sexual content. We released the current version of GenAI-Bench (https://huggingface.co/datasets/TIGER-Lab/GenAI-Bench) on the HuggingFace Dataset website, with an MIT license to allow reuse with or without modification.
### 5.2 Correlations
To further analyze the collected human votes, we compute the correlation with
several existing metrics. Specifically, we selected CLIPScore [16], GPT-4o [43], Gemini-1.5-Pro [52], Idefics2 [30], and Mantis [26] as our judges. For these MLLMs, we used the prompts from VIEScore [27], which include ratings of semantics, quality, and overall performance, to evaluate the image generation tasks. Since VIEScore does not cover prompts related to video evaluation, we designed a suite of prompt templates in subsection A.6 for prompting MLLMs to evaluate the quality of the outputs for the text-to-video generation task. Videos are split into image frames and fed to the judges as an image sequence. We encoded the voting results and computed the correlations
with the score differences of the existing metrics between the two models. As
shown in Table 6, the correlations are generally low. Most MLLM correlations
with this preference-based voting approach are nearly random. Notably,
CLIPScore achieved a low but significant correlation in the range of 0.2.
GPT-4o’s quality measure, while achieving a similar low correlation for text-
to-image and text-to-video tasks, shows a random correlation for the image
editing task.
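A sketch of the correlation computation is given below; the numeric encoding of votes (left-better = 1, right-better = -1, tie and both-bad = 0) is an assumption for illustration, since the exact encoding is not spelled out above.

```python
from scipy.stats import pearsonr


def vote_metric_correlation(votes, judge_scores):
    """Correlate human votes with a judge's score differences.

    votes: list of (example_id, vote) with vote in
           {"left", "right", "tie", "both_bad"}.
    judge_scores: dict example_id -> (score_left, score_right) from an
                  MLLM or metric judge.
    The numeric vote encoding below is an illustrative assumption.
    """
    encoding = {"left": 1.0, "right": -1.0, "tie": 0.0, "both_bad": 0.0}
    human, diffs = [], []
    for ex_id, vote in votes:
        s_left, s_right = judge_scores[ex_id]
        human.append(encoding[vote])
        diffs.append(s_left - s_right)
    r, p_value = pearsonr(human, diffs)
    return r, p_value
```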
Table 6: Correlation Study on existing metrics and human votings in GenAI-Bench. Metrics | Text-To-Image | Image Editing | Text-To-Video
---|---|---|---
Subscore | semantics | quality | overall | semantics | quality | overall | semantics | quality | overall
Random | -0.0188 | -0.0188 | -0.0188 | -0.0293 | -0.0293 | -0.0293 | -0.0168 | -0.0168 | -0.0168
CLIPScore [16] | 0.1450 | 0.1450 | 0.1450 | 0.1434 | 0.1434 | 0.1434 | 0.2643 | 0.2643 | 0.2643
GPT-4o [43] | -0.0749 | 0.2259 | -0.0233 | -0.0314 | 0.0048 | -0.0187 | 0.0216 | 0.2169 | 0.0393
Gemini-1.5-Pro [52] | -0.0008 | 0.1725 | 0.0114 | 0.0997 | -0.0308 | 0.0916 | 0.0428 | -0.0388 | -0.0073
Idefics2 [30] | -0.0571 | -0.1155 | -0.0956 | 0.0325 | -0.0363 | 0.0101 | -0.1647 | -0.0807 | -0.1490
Mantis [26] | -0.0045 | 0.0118 | -0.0078 | -0.0754 | -0.1006 | -0.1258 | -0.0301 | -0.0001 | 0.0050
## 6 Conclusion
In this paper, we introduced GenAI-Arena, an open platform designed to rank generative models across text-to-image, image editing, and text-to-video tasks based on user preference. Unlike other platforms, GenAI-Arena is driven by community voting to ensure transparency and sustainable operation. We employed the side-by-side human voting method to evaluate the models and collected over 6000 votes starting from February 11th, 2024. We compiled an Elo leaderboard from these votes and found that PlayGround V2.5, MagicBrush, and T2V-Turbo are the current state-of-the-art models in the three tasks (as of June 4th, 2024). Analysis of the collected votes shows that while the Elo rating is generally effective, it can be biased by the imbalance between "easy games" and "hard games". Several case studies demonstrate the high quality of our collected votes. Moreover, we released the human preference votes as GenAI-Bench. We prompt existing MLLMs to evaluate the generated images and videos on GenAI-Bench and compute the correlations with human voting. The experiments show that existing MLLMs achieve very low correlations; even the best model, GPT-4o, achieves only a $0.22$ Pearson correlation with human voting on quality and is no better than random guessing on other aspects. In the future, we will continue collecting human votes to update the leaderboard, helping the community keep track of research progress. We also plan to develop a more robust MLLM to better approximate human ratings on GenAI-Bench.
## References
* AI [2024] S. AI. CosXL. https://huggingface.co/stabilityai/cosxl, 2024. Accessed on: 2024-04-13.
* Blattmann et al. [2023] A. Blattmann, T. Dockhorn, S. Kulal, D. Mendelevitch, M. Kilian, and D. Lorenz. Stable video diffusion: Scaling latent video diffusion models to large datasets. _ArXiv_ , abs/2311.15127, 2023. URL https://api.semanticscholar.org/CorpusID:265312551.
* Bradley and Terry [1952] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_ , 39(3/4):324–345, 1952.
* Brooks et al. [2023] T. Brooks, A. Holynski, and A. A. Efros. Instructpix2pix: Learning to follow image editing instructions. In _CVPR_ , 2023.
* Chen et al. [2024a] H. Chen, Y. Zhang, X. Cun, M. Xia, X. Wang, C. Weng, and Y. Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models. _arXiv preprint arXiv:2401.09047_ , 2024a.
* Chen et al. [2024b] H. Chen, Y. Zhang, X. Cun, M. Xia, X. Wang, C.-L. Weng, and Y. Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models. _ArXiv_ , abs/2401.09047, 2024b. URL https://api.semanticscholar.org/CorpusID:267028095.
* Chen et al. [2023] J. Chen, J. Yu, C. Ge, L. Yao, E. Xie, Y. Wu, Z. Wang, J. T. Kwok, P. Luo, H. Lu, and Z. Li. Pixart-$\alpha$: Fast training of diffusion transformer for photorealistic text-to-image synthesis. _ArXiv_ , abs/2310.00426, 2023. URL https://api.semanticscholar.org/CorpusID:263334265.
* Chen et al. [2024c] J. Chen, C. Ge, E. Xie, Y. Wu, L. Yao, X. Ren, Z. Wang, P. Luo, H. Lu, and Z. Li. Pixart-$\sigma$: Weak-to-strong training of diffusion transformer for 4k text-to-image generation. _ArXiv_ , abs/2403.04692, 2024c. URL https://api.semanticscholar.org/CorpusID:268264262.
* Chen et al. [2024d] Q. Chen, X. Chen, H. Song, Z. Xiong, A. Yuille, C. Wei, and Z. Zhou. Towards generalizable tumor synthesis, 2024d.
* Chiang et al. [2024a] W.-L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, H. Zhang, B. Zhu, M. Jordan, J. E. Gonzalez, and I. Stoica. Chatbot arena: An open platform for evaluating llms by human preference. _ArXiv_ , abs/2403.04132, 2024a. URL https://api.semanticscholar.org/CorpusID:268264163.
* Chiang et al. [2024b] W.-L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, H. Zhang, B. Zhu, M. Jordan, J. E. Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. _arXiv preprint arXiv:2403.04132_ , 2024b.
* Fu et al. [2024] S. Fu, N. Tamir, S. Sundaram, L. Chai, R. Zhang, T. Dekel, and P. Isola. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. _Advances in Neural Information Processing Systems_ , 36, 2024.
* Ghazanfari et al. [2023] S. Ghazanfari, A. Araujo, P. Krishnamurthy, F. Khorrami, and S. Garg. Lipsim: A provably robust perceptual similarity metric. In _The Twelfth International Conference on Learning Representations_ , 2023.
* Guo et al. [2023] Y. Guo, C. Yang, A. Rao, Z. Liang, Y. Wang, Y. Qiao, M. Agrawala, D. Lin, and B. Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. In _The Twelfth International Conference on Learning Representations_ , 2023.
* Hertz et al. [2022] A. Hertz, R. Mokady, J. M. Tenenbaum, K. Aberman, Y. Pritch, and D. Cohen-Or. Prompt-to-prompt image editing with cross attention control. _ArXiv_ , abs/2208.01626, 2022. URL https://api.semanticscholar.org/CorpusID:251252882.
* Hessel et al. [2021] J. Hessel, A. Holtzman, M. Forbes, R. L. Bras, and Y. Choi. CLIPScore: a reference-free evaluation metric for image captioning. In _EMNLP_ , 2021.
* Heusel et al. [2017] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. _Advances in neural information processing systems_ , 30, 2017.
* Ho et al. [2022] J. Ho, W. Chan, C. Saharia, J. Whang, R. Gao, A. Gritsenko, D. P. Kingma, B. Poole, M. Norouzi, D. J. Fleet, et al. Imagen video: High definition video generation with diffusion models. _arXiv preprint arXiv:2210.02303_ , 2022.
* Hu et al. [2023] Y. Hu, B. Liu, J. Kasai, Y. Wang, M. Ostendorf, R. Krishna, and N. A. Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 20406–20417, 2023.
* Huang et al. [2023] K. Huang, K. Sun, E. Xie, Z. Li, and X. Liu. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. _Advances in Neural Information Processing Systems_ , 36:78723–78747, 2023.
* Huang et al. [2024] Z. Huang, Y. He, J. Yu, F. Zhang, C. Si, Y. Jiang, Y. Zhang, T. Wu, Q. Jin, N. Chanpaisit, Y. Wang, X. Chen, L. Wang, D. Lin, Y. Qiao, and Z. Liu. VBench: Comprehensive benchmark suite for video generative models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2024.
* Huber et al. [1967] P. J. Huber et al. The behavior of maximum likelihood estimates under nonstandard conditions. In _Proceedings of the fifth Berkeley symposium on mathematical statistics and probability_ , volume 1, pages 221–233. Berkeley, CA: University of California Press, 1967.
* Hugging Face [2024] Hugging Face. Zerogpu. https://huggingface.co/zero-gpu-explorers, 2024. Accessed: 2024-06-02.
* Hugging Face Spaces [2024] Hugging Face Spaces. Tokenizer arena. https://huggingface.co/spaces/eson/tokenizer-arena, 2024. Accessed: 2024-06-05.
* Inan et al. [2023] H. Inan, K. Upasani, J. Chi, R. Rungta, K. Iyer, Y. Mao, M. Tontchev, Q. Hu, B. Fuller, D. Testuggine, and M. Khabsa. Llama guard: Llm-based input-output safeguard for human-ai conversations. _ArXiv_ , abs/2312.06674, 2023. URL https://api.semanticscholar.org/CorpusID:266174345.
* Jiang et al. [2024] D. Jiang, X. He, H. Zeng, C. Wei, M. Ku, Q. Liu, and W. Chen. Mantis: Interleaved multi-image instruction tuning. _arXiv preprint arXiv:2405.01483_ , 2024.
* Ku et al. [2024a] M. Ku, D. Jiang, C. Wei, X. Yue, and W. Chen. Viescore: Towards explainable metrics for conditional image synthesis evaluation. In _Proceedings of Annual Meeting of the Association for Computational Linguistics_ , 2024a.
* Ku et al. [2024b] M. Ku, T. Li, K. Zhang, Y. Lu, X. Fu, W. Zhuang, and W. Chen. Imagenhub: Standardizing the evaluation of conditional image generation models. In _The Twelfth International Conference on Learning Representations_ , 2024b. URL https://openreview.net/forum?id=OuV9ZrkQlc.
* Kwon et al. [2023] W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. E. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with pagedattention. In _Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles_ , 2023.
* Laurençon et al. [2024] H. Laurençon, L. Tronchon, M. Cord, and V. Sanh. What matters when building vision-language models?, 2024.
* Lee et al. [2024] T. Lee, M. Yasunaga, C. Meng, Y. Mai, J. S. Park, A. Gupta, Y. Zhang, D. Narayanan, H. Teufel, M. Bellagente, et al. Holistic evaluation of text-to-image models. _Advances in Neural Information Processing Systems_ , 36, 2024.
* Li et al. [2024a] D. Li, A. Kamko, E. Akhgari, A. Sabet, L. Xu, and S. Doshi. Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation. _ArXiv_ , abs/2402.17245, 2024a. URL https://api.semanticscholar.org/CorpusID:268033039.
* Li et al. [2024b] D. Li, A. Kamko, A. Sabet, E. Akhgari, L. Xu, and S. Doshi. Playground v2, 2024b. URL [https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic](https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic).
* Li et al. [2024c] J. Li, W. Feng, T.-J. Fu, X. Wang, S. Basu, W. Chen, and W. Y. Wang. T2v-turbo: Breaking the quality bottleneck of video consistency model with mixed reward feedback. _ArXiv_ , 2024c. URL https://api.semanticscholar.org/CorpusID:270094742.
* Li et al. [2023] T. Li, M. Ku, C. Wei, and W. Chen. Dreamedit: Subject-driven image editing. _Transactions on Machine Learning Research_ , 2023.
* Lin et al. [2024] S. Lin, A. Wang, and X. Yang. Sdxl-lightning: Progressive adversarial diffusion distillation. _ArXiv_ , abs/2402.13929, 2024. URL https://api.semanticscholar.org/CorpusID:267770548.
* Liu et al. [2023] Y. Liu, X. Cun, X. Liu, X. Wang, Y. Zhang, H. Chen, Y. Liu, T. Zeng, R. Chan, and Y. Shan. Evalcrafter: Benchmarking and evaluating large video generation models. _arXiv preprint arXiv:2310.11440_ , 2023.
* Luo et al. [2023] S. Luo, Y. Tan, L. Huang, J. Li, and H. Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. _ArXiv_ , abs/2310.04378, 2023. URL https://api.semanticscholar.org/CorpusID:263831037.
* Meng et al. [2021] C. Meng, Y. He, Y. Song, J. Song, J. Wu, J.-Y. Zhu, and S. Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In _International Conference on Learning Representations_ , 2021.
* mrfakename et al. [2024] mrfakename, V. Srivastav, C. Fourrier, L. Pouget, Y. Lacombe, main, and S. Gandhi. Text to speech arena. https://huggingface.co/spaces/TTS-AGI/TTS-Arena, 2024.
* Nichol et al. [2022] A. Q. Nichol, P. Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. Mcgrew, I. Sutskever, and M. Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In _International Conference on Machine Learning_ , pages 16784–16804. PMLR, 2022.
* of Singapore [2024] N. U. of Singapore. Open-Sora: Democratizing Efficient Video Production for All. https://github.com/hpcaitech/Open-Sora/blob/main/docs/report_01.md, 2024. Accessed on: 2024-05-24.
* OpenAI [2023] OpenAI. Gpt-4 technical report, 2023.
* openjourney.ai [2023] openjourney.ai. Openjourney is an open source stable diffusion fine tuned model on midjourney images, 2023. URL https://huggingface.co/prompthero/openjourney.
* Otani et al. [2023] M. Otani, R. Togashi, Y. Sawai, R. Ishigami, Y. Nakashima, E. Rahtu, J. Heikkilä, and S. Satoh. Toward verifiable and reproducible human evaluation for text-to-image generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 14277–14286, 2023.
* Parmar et al. [2023] G. Parmar, K. Kumar Singh, R. Zhang, Y. Li, J. Lu, and J.-Y. Zhu. Zero-shot image-to-image translation. In _ACM SIGGRAPH 2023 Conference Proceedings_ , pages 1–11, 2023.
* Peebles and Xie [2023] W. Peebles and S. Xie. Scalable diffusion models with transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 4195–4205, 2023.
* Pernias et al. [2023] P. Pernias, D. Rampas, M. L. Richter, C. Pal, and M. Aubreville. Würstchen: An efficient architecture for large-scale text-to-image diffusion models. In _The Twelfth International Conference on Learning Representations_ , 2023.
* Podell et al. [2023] D. Podell, Z. English, K. Lacey, A. Blattmann, T. Dockhorn, J. Muller, J. Penna, and R. Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. _ArXiv_ , abs/2307.01952, 2023. URL https://api.semanticscholar.org/CorpusID:259341735.
* Radford et al. [2021] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_ , 2021.
* Ramesh et al. [2022] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional image generation with clip latents. _ArXiv_ , abs/2204.06125, 2022. URL https://api.semanticscholar.org/CorpusID:248097655.
* Reid et al. [2024] M. Reid, N. Savinov, D. Teplyashin, D. Lepikhin, T. Lillicrap, J.-b. Alayrac, R. Soricut, A. Lazaridou, O. Firat, J. Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. _arXiv preprint arXiv:2403.05530_ , 2024.
* Saharia et al. [2022] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. L. Denton, K. Ghasemipour, R. Gontijo Lopes, B. Karagol Ayan, T. Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in Neural Information Processing Systems_ , 35:36479–36494, 2022.
* Salimans et al. [2016] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen. Improved techniques for training gans. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper_files/paper/2016/file/8a3363abe792db2d8761d6403605aeb7-Paper.pdf.
* Sauer et al. [2023] A. Sauer, D. Lorenz, A. Blattmann, and R. Rombach. Adversarial diffusion distillation. _ArXiv_ , abs/2311.17042, 2023. URL https://api.semanticscholar.org/CorpusID:265466173.
* Tumanyan et al. [2023] N. Tumanyan, M. Geyer, S. Bagon, and T. Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 1921–1930, 2023.
* Unterthiner et al. [2018] T. Unterthiner, S. van Steenkiste, K. Kurach, R. Marinier, M. Michalski, and S. Gelly. Towards accurate generative models of video: A new metric & challenges. _arXiv preprint arXiv:1812.01717_ , 2018.
* Wang et al. [2023a] J. Wang, H. Yuan, D. Chen, Y. Zhang, X. Wang, and S. Zhang. Modelscope text-to-video technical report. _ArXiv_ , abs/2308.06571, 2023a. URL https://api.semanticscholar.org/CorpusID:260887737.
* Wang et al. [2023b] Y. Wang, X. Chen, X. Ma, S. Zhou, Z. Huang, Y. Wang, C. Yang, Y. He, J. Yu, P. Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. _arXiv preprint arXiv:2309.15103_ , 2023b.
* Wang et al. [2004] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_ , 13(4):600–612, 2004.
* Wu and la Torre [2023] C. H. Wu and F. D. la Torre. A latent space of stochastic diffusion models for zero-shot image editing and guidance. In _ICCV_ , 2023.
* Xu et al. [2023] P. Xu, W. Shao, K. Zhang, P. Gao, S. Liu, M. Lei, F. Meng, S. Huang, Y. Qiao, and P. Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. _arXiv preprint arXiv:2306.09265_ , 2023.
* Xu et al. [2024] S. Xu, Y. Huang, J. Pan, Z. Ma, and J. Chai. Inversion-free image editing with natural language. In _Conference on Computer Vision and Pattern Recognition 2024_ , 2024.
* Zhang et al. [2024] H. Zhang, J. Yang, S. Wan, and P. Fua. Lefusion: Synthesizing myocardial pathology on cardiac mri via lesion-focus diffusion models, 2024.
* Zhang et al. [2023a] K. Zhang, L. Mo, W. Chen, H. Sun, and Y. Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. _NeurIPS dataset and benchmark track_ , 2023a.
* Zhang et al. [2023b] L. Zhang, A. Rao, and M. Agrawala. Adding conditional control to text-to-image diffusion models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pages 3836–3847, 2023b.
* Zhang et al. [2018] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_ , 2018.
* Zheng et al. [2023] L. Zheng, L. Yin, Z. Xie, J. Huang, C. Sun, C. H. Yu, S. Cao, C. Kozyrakis, I. Stoica, J. E. Gonzalez, C. Barrett, and Y. Sheng. Efficiently programming large language models using sglang, 2023.
* Zheng et al. [2024] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. _Advances in Neural Information Processing Systems_ , 36, 2024.
* Zhu et al. [2023] D. Zhu, J. Chen, X. Shen, X. Li, and M. Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. In _The Twelfth International Conference on Learning Representations_ , 2023.
## Appendix A Appendix
### A.1 Broader Society Impacts
The establishment of GenAI-Arena and the release of GenAI-Bench have broader
societal implications. By democratizing the evaluation of generative models,
GenAI-Arena encourages transparency and community engagement in AI
development. This can lead to more trust in AI technologies as the public can
gain insights into how models perform according to peer evaluations. Moreover,
involving the community in such evaluations can accelerate the identification
of potentially harmful biases or unethical uses of AI technologies. However,
there are potential risks associated with the widespread use of generative AI
technologies that GenAI-Arena evaluates. For instance, advancements in text-
to-image and text-to-video generation can be misused for creating misleading
or harmful content, such as the content filtered out by our NSFW filter.
### A.2 Limitation
While the release of GenAI-Arena can enable a more reasonable evaluation of
the generative models, there are several limitations in its development.
First, the diversity and representativeness of the user base participating in
GenAI-Arena may not fully encapsulate the broader population’s preferences,
which may bias the evaluation results. Despite efforts to attract
voters with diverse backgrounds, there is an inherent challenge in ensuring a
balanced representation across different cultures or professional backgrounds.
In addition, the reliance on user feedback and votes introduces subjectivity
into the evaluation process. While this is partially mitigated by the volume
of data collected, individual biases and varying levels of expertise among
users can skew the results.
### A.3 Data Collection
We stated in the GenAI-Arena UI that the input and votes will be collected for
research purposes only. By using this GenAI-Arena tool, the users agree to the
collection of their input and votes for research purposes. Users acknowledge that their data will be anonymized and will not be used for commercial purposes.
### A.4 Extra Visualization on GenAI-Arena
We include further analysis in Figures 6 and 7 to show the reliability of GenAI-Arena. Specifically, Figure 6 shows the error bars of the Elo ratings, and Figure 7 shows each model's predicted average win rate when played against all other models.
(a) Text-to-Image
(b) Image Editing
(c) Text-to-Video
Figure 6: Bootstrap of Elo Estimates (1000 Rounds of Random Sampling)
(a) Text-to-Image
(b) Image Editing
(c) Text-to-Video
Figure 7: Average Win Rate Against All Other Models (Assuming Uniform Sampling
and No Ties)
### A.5 VideoGenHub
VideoGenHub is an open-source library to standardize the inference and
evaluation of all the conditional video generation models, similar to
ImagenHub [28] in the image domain. In the library, all models are implemented
following the literature standard, and the random seed is set to 42 for a fair comparison, the same convention as in the ImagenHub [28] implementation.
### A.6 Prompt Templates
In the following table, we present the prompts used for the experiments in
subsection 5.2 evaluating text-to-video. The text-to-image and image editing
prompts are taken directly from VIEScore [27]. The overall score
is computed as a geometric mean of both semantic consistency and perceptual
quality.
For Semantic consistency in Text-To-Video task:
[User] You are a professional digital artist. You will have to evaluate the
effectiveness of the AI-generated image(s) based on the given rules. You will
have to give your output in this way (Keep your reasoning concise and short.):
{ "score" : […], "reasoning" : "…" } [RULES]: The images are extracted from an
AI-generated video according to the text prompt. The objective is to evaluate
how successfully the video has been generated. From scale 0 to 10: A score
from 0 to 10 will be given based on the success in following the prompt. (0
indicates that the image frames do not follow the prompt at all. 10 indicates
the image frames follow the prompt perfectly.) Put the score in a list such
that output score = [score]. Text Prompt: <prompt>
For Perceptual Quality in Text-To-Video task:
[User] You are a professional digital artist. You will have to evaluate the
effectiveness of the AI-generated image(s) based on the given rules. You will
have to give your output in this way (Keep your reasoning concise and short.):
{ "score" : […], "reasoning" : "…" } [RULES]: The image frames are AI-
generated. The objective is to evaluate how successfully the image frames have
been generated. From scale of 0 to 10: A score from 0 to 10 will be given
based on the image frames’ naturalness. ( 0 indicates that the scene in the
image frames does not look natural at all or gives an unnatural feeling such
as a wrong sense of distance, or wrong shadow, or wrong lighting. 10 indicates
that the image frames look natural. ) A second score from 0 to 10 will rate
the image frames artifacts. ( 0 indicates that the image frames contain a
large portion of distortion, watermark, scratches, blurred faces, unusual body
parts, or subjects not harmonized. 10 indicates the image frames have no
artifacts. ) Put the score in a list such that output score = [naturalness,
artifacts]
# Discovering the Network Granger Causality in Large Vector Autoregressive
Models
Yoshimasa Uematsu Correspondence: Yoshimasa Uematsu, Department of Social Data
Science, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan
(E-mail: [email protected]). Takashi Yamagata†
###### Abstract
This paper proposes novel inferential procedures for the network Granger
causality in high-dimensional vector autoregressive models. In particular, we
offer two multiple testing procedures designed to control the false discovery rate (FDR) of the discovered networks. The first procedure is based on the limiting
normal distribution of the $t$-statistics constructed by the debiased lasso
estimator. The second procedure is based on the bootstrap distributions of the
$t$-statistics constructed by imposing the null hypotheses. Their theoretical
properties, including FDR control and power guarantee, are investigated. The
finite sample evidence suggests that both procedures can successfully control
the FDR while maintaining high power. Finally, the proposed methods are
applied to discovering the network Granger causality in a large number of
macroeconomic variables and regional house prices in the UK.
Keywords. Multiple testing, FDR and power, Debiased lasso, Bootstrap.
## 1 Introduction
Revealing a dynamic interrelationship among variables is a critical challenge
in economics, finance, neuroscience, genomics, and so forth. In particular,
identifying the ability of a time series variable to predict the future values
of other time series variables has been of great interest in various fields.
Following the vast literature published after Granger (1969), when the past
values of a time series $x:=\\{x_{t}\\}$ can predict the future values of
another time series $y$, this will be expressed as “$x$ is Granger-causal for
$y$” in this article. This Granger causality has conventionally been discussed
in the bivariate relationship, typically with (bivariate) vector
autoregressive (VAR) models; see Sims (1972) and Hosoya (1977), among many
others.
### 1.1 Network Granger causality
Suppose that an $N$-dimensional time series $\mathbf{y}$ follows the
stationary VAR($K$) model:
$\displaystyle\mathbf{y}_{t}=\bm{\Phi}_{1}\mathbf{y}_{t-1}+\cdots+\bm{\Phi}_{K}\mathbf{y}_{t-K}+\mathbf{u}_{t},$
where $\bm{\Phi}_{k}=(\phi_{ij,k})$ is the $k$th coefficient matrix and
$\mathbf{u}_{t}$ is the error vector. The predictability relationship among
the $N$ time series manifests as the sparsity pattern of the coefficient
matrices in the VAR($K$) model. For this VAR model, the Granger-causal network
is defined as the graph $G=(V,E)$ with vertex set $V=\\{1,\dots,N\\}=:[N]$ and
edge set $E$ such that for distinct $i,j\in V$, $i\to j\in E$ if and only if
$\phi_{ji,k}\not=0$ for some $k\in[K]$. The same definition has been adopted
by Basu et al. (2015) and Eichler (2007, 2012b), for instance. (Eichler includes contemporaneous connections via non-zero error covariance in his causal graph, called the path diagram, but we do not consider them in this article.) See also
Shojaie and Fox (2021) for recent advances in the analysis of network Granger
causality. As an illustration, we consider the 4-dimensional VAR(2) model with
the sparsity pattern of the coefficient matrices shown in Figure 1(a), where
the non-zero and zero elements in $\bm{\Phi}_{1}$ and $\bm{\Phi}_{2}$ are
indicated as the gray and white cells, respectively. From this sparse
structure, we readily obtain the associated Granger-causal network in Figure
1(b).
(a) Sparsity pattern of the coefficients $\bm{\Phi}_{1}$ and $\bm{\Phi}_{2}$
(b) Granger-causal network among $y_{1},y_{2},y_{3},y_{4}$
Figure 1: Sparse VAR$(2)$ coefficients and associated Granger-causal network
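The definition above can be turned into code directly. The following sketch reads the edge set off a list of (estimated) coefficient matrices; the example matrices are illustrative and do not reproduce the exact pattern of Figure 1.

```python
import numpy as np


def granger_network(Phi_list, tol=0.0):
    """Edge set of the Granger-causal network of a VAR(K) model.

    Phi_list: list of K coefficient matrices (N x N) with Phi_k[j, i] = phi_{ji,k}.
    An edge i -> j is present iff phi_{ji,k} != 0 for some k (up to tol).
    """
    N = Phi_list[0].shape[0]
    edges = set()
    for i in range(N):
        for j in range(N):
            if i != j and any(abs(Phi[j, i]) > tol for Phi in Phi_list):
                edges.add((i + 1, j + 1))  # 1-based labels as in Figure 1
    return edges


# Illustrative sparse VAR(2): y1 drives y2 at lag 1, y3 drives y1 at lag 2.
Phi1 = np.array([[0.5, 0.0, 0.0, 0.0],
                 [0.3, 0.4, 0.0, 0.0],
                 [0.0, 0.0, 0.2, 0.0],
                 [0.0, 0.0, 0.0, 0.1]])
Phi2 = np.array([[0.0, 0.0, 0.2, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
print(granger_network([Phi1, Phi2]))  # {(1, 2), (3, 1)}
```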
Although the network Granger causality has been simply defined as the direct
causal network as above, we must remark on the existence of further indirect causalities through the lagged variables. For the aforementioned VAR(2) model,
Figure 2 depicts the causal chain that takes all the lags into account
(Eichler, 2012a, Sec. 3), where the chain continues to the infinite past and
future due to the stationarity. In this figure, we can identify many indirect
causalities; for instance, $y_{3,t-2}$ is indirectly causal to $y_{2t}$ via
$y_{1,t-1}$. To refine the concept of such direct and indirect causalities,
Dufour and Renault (1998) introduced short- and long-run causality called the
$h$-step ($h=1,2,\dots$) causality. In this context, the $h$-step non-
causality is characterized by zero restrictions to a part of the first $N$
rows of the first to $h$th powers of the companion coefficient matrices
(Lütkepohl, 2005, p.50). Indeed, in Figure 2, the one-step (i.e., direct) causality is expressed by the thick arrows, while the aforementioned indirect effects are included in the $h$-step causalities for $h=2,3,\dots$. Clearly, this one-step causality corresponds to the network Granger causality we have defined.
Figure 2: Causal chain of the VAR(2)
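A sketch of the companion-form characterization is given below, under the usual stacking convention for the companion matrix; the 0-based indexing convention and the numerical tolerance are assumptions made for illustration.

```python
import numpy as np


def companion_matrix(Phi_list):
    """Companion matrix of a VAR(K) model with N x N coefficient matrices."""
    K = len(Phi_list)
    N = Phi_list[0].shape[0]
    A = np.zeros((N * K, N * K))
    A[:N, :] = np.hstack(Phi_list)        # first block row: Phi_1, ..., Phi_K
    A[N:, :-N] = np.eye(N * (K - 1))      # identity blocks below the diagonal
    return A


def h_step_causal(Phi_list, i, j, h, tol=1e-12):
    """True if variable i is causal for variable j at some horizon h' <= h.

    Non-causality up to horizon h requires the entries of row j (within the
    first N rows) of A, A^2, ..., A^h in the columns belonging to variable i
    at all K lags to be zero; i and j are 0-based indices here.
    """
    K = len(Phi_list)
    N = Phi_list[0].shape[0]
    A = companion_matrix(Phi_list)
    cols = [i + l * N for l in range(K)]
    P = np.eye(N * K)
    for _ in range(h):
        P = P @ A
        if np.any(np.abs(P[j, cols]) > tol):
            return True
    return False
```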
The Granger-causal network can be discovered just by estimating the sparse
$\bm{\Phi}_{k}$’s in an appropriate manner. In early empirical studies,
including Fujita et al. (2007) and Lozano et al. (2009) among others, such
statistical methods were already employed and a network Granger causality was
estimated. Recently the theoretical properties of these methods have begun to
be investigated. Basu and Michailidis (2015), Kock and Callot (2015), and Basu
et al. (2019) study properties of the lasso (Tibshirani, 1996) and its
variants in high-dimensional VAR models. Han et al. (2015) extends the Dantzig
selector of Candès and Tao (2007) to be applicable to (weakly) sparse VAR
models. Kock and Callot (2015) and Barigozzi and Brownlees (2019) propose the
adaptive lasso estimator. Davis et al. (2016) consider an alternative two-step
estimator, which uses estimates of the partial spectral coherence.
### 1.2 Classical Granger causality test
Estimation-based network detection such as the lasso selection is appealing
because it is simple to use. However, it is quite unstable, and this, along
with the difficulty of evaluating the type I error of the selection result,
may cause a lack of reproducibility. Thus, this article discusses the
statistical inference of the Granger causality network. Indeed, in Granger
(1969), statistical inference of Granger causality was the central issue, as
Sir Clive Granger himself put it: “the problem is how to devise a definition
of causality and feedback and to test for their existence” (Granger, 1969,
p.428).
Regardless of the model dimensionality, it is possible to test the null of
Granger non-causality whenever an asymptotically normal estimator is available.
For illustration, consider an $N$-dimensional VAR(1) model. Given
$\mathcal{H}\subset[N]\times[N]$, we may test $H_{0}:\phi_{ij}=0$ for all
$(i,j)\in\mathcal{H}$ versus $H_{1}:\phi_{ij}\not=0$ for some
$(i,j)\in\mathcal{H}$ using the asymptotic Wald statistic constructed from the
debiased lasso estimator (Zheng and Raskutti, 2019; Zhu and Liu, 2020; Babii
et al., 2021). This is effective when only a very small $\mathcal{H}$ is of
interest. Unfortunately, however, this approach is not suitable for discovering
high-dimensional Granger causality networks in $\mathcal{H}$: when
$\mathcal{H}$ is large, the rejection of this null hypothesis tells us only
that there may exist Granger causality between some of the $N$ variables,
which is not informative for our purpose.
### 1.3 Discovering the network with FDR control
Discovering the network can be characterized as multiple testing of the
sequence of hypotheses, $H_{0}^{(i,j)}:\phi_{ij}=0$ versus
$H_{1}^{(i,j)}:\phi_{ij}\not=0$ for each $(i,j)\in\mathcal{H}$. The false
discovery rate (FDR), which is considered as a measure of type I error in
multiple tests, is defined as
$\operatorname{FDR}=\operatorname{\mathbb{E}}[{|\hat{\mathcal{S}}\cap\mathcal{S}^{c}|}/{(|\hat{\mathcal{S}}|\vee
1)}]$ with $\mathcal{S}=\\{(i,j)\in\mathcal{H}:\phi_{ij}\not=0\\}$ and
$\hat{\mathcal{S}}=\\{(i,j)\in\mathcal{H}:H_{0}^{(i,j)}\text{ is rejected}\\}$
(Benjamini and Hochberg, 1995). The FDR controlled multiple testing is
expected to exhibit higher power
(Power=$\operatorname{\mathbb{E}}[{|\hat{\mathcal{S}}\cap\mathcal{S}|}/{|\mathcal{S}|}]$)
than that controlling the family-wise error rate
($\operatorname{FWER}=\operatorname{\mathbb{P}}(|\hat{\mathcal{S}}\cap\mathcal{S}^{c}|\geq
1)$), typically by the so-called Bonferroni correction (Bonferroni, 1935;
Holm, 1979). This is because the FDR is smaller than or equal to the FWER; the
FDR control allows more “aggressive” selection.
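To make these definitions concrete, the following small Python snippet (an illustrative sketch of ours, not part of the methodology) computes the false discovery proportion and power of a single selection; averaging the FDP over Monte Carlo replications estimates the FDR.

```python
def fdp_and_power(S_hat, S, H):
    """False discovery proportion and power of a selection S_hat of pairs in H.

    S_hat: set of discovered (i, j) pairs; S: set of truly non-zero pairs; H: hypothesis set.
    """
    S_hat, S = set(S_hat) & set(H), set(S) & set(H)
    fdp = len(S_hat - S) / max(len(S_hat), 1)      # |S_hat ∩ S^c| / (|S_hat| ∨ 1)
    power = len(S_hat & S) / max(len(S), 1)        # |S_hat ∩ S| / |S|
    return fdp, power

# Example: two discoveries, one of which is false.
H = {(i, j) for i in range(3) for j in range(3)}
print(fdp_and_power(S_hat={(0, 1), (1, 2)}, S={(0, 1), (2, 0)}, H=H))  # (0.5, 0.5)
```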
In this paper, we propose two novel inferential methods for discovering the
network Granger causality in high-dimensional VAR models, based on asymptotic
and bootstrap $t$-statistics, respectively, with the debiased lasso estimator.
The key aspect is that they are designed to control the FDR, which will lead
to a more stable discovery. Their theoretical properties, including the FDR
control and power guarantee, are investigated. To the best of our knowledge,
this is the first paper to propose such inferential procedures for high-
dimensional VAR models and to investigate their theoretical properties. The
finite sample evidence suggests that both of the procedures can successfully
control the FDR while maintaining high power. The proposed methods are
applied to a large dataset of macroeconomic time series and regional house
prices in the UK.
Finally, there are a few studies that consider Granger causality between
groups of variables, typically using group lasso; see Basu et al. (2015), Lin
and Michailidis (2017) and Basu et al. (2019). This approach requires the
researcher to know the group members. Guðmundsson and Brownlees (2021) develop
methods to statistically identify groups among the variables. We also note that
FDR control for time series data is inherently difficult, and there are very
few studies on it. One exception is Chi et al. (2021), but their method does
not appear readily applicable to our purpose.
### 1.4 Organization and notation
The paper is organized as follows. Section 2 formally defines the VAR model.
Section 3 proposes two methods for discovering the networks based on multiple
testing. Section 4 explores the statistical theory for the FDR
control and power guarantee of our methods. Section 5 confirms the finite
sample validity via Monte Carlo experiments. Section 6 applies our methods to
large datasets. Section 7 concludes. All the proofs of our theoretical results
and supplemental analyses are collected in Supplementary Materials.
Notation. For any matrix $\mathbf{M}=(m_{ti})\in\mathbb{R}^{T\times N}$,
denote by $\|\mathbf{M}\|_{\mathrm{F}}$, $\|\mathbf{M}\|_{2}$,
$\|\mathbf{M}\|_{1}$, $\|\mathbf{M}\|_{\max}$, and $\|\mathbf{M}\|_{\infty}$
the Frobenius norm, induced $\ell_{2}$ (spectral) norm, entrywise
$\ell_{1}$-norm, entrywise $\ell_{\infty}$-norm, and induced
$\ell_{\infty}$-norm, respectively. Specifically, they are defined by
$\|\mathbf{M}\|_{\mathrm{F}}=(\sum_{t,i}m_{ti}^{2})^{1/2}$,
$\|\mathbf{M}\|_{2}=\lambda_{1}^{1/2}(\mathbf{M}^{\prime}\mathbf{M})$,
$\|\mathbf{M}\|_{1}=\sum_{t,i}|m_{ti}|$,
$\|\mathbf{M}\|_{\max}=\max_{t,i}|m_{ti}|$, and
$\|\mathbf{M}\|_{\infty}=\max_{t}\sum_{i}|m_{ti}|$, where
$\lambda_{i}(\mathbf{S})$ refers to the $i$th largest eigenvalue of a
symmetric matrix $\mathbf{S}$. Denote by $\mathbf{I}_{N}$ and
$\mathbf{0}_{T\times N}$ the $N\times N$ identity matrix and $T\times N$
matrix with all the entries being zero, respectively. We use $\lesssim$
($\gtrsim$) to represent $\leq$ ($\geq$) up to a positive constant factor. For
any positive sequences $a_{n}$ and $b_{n}$, we write $a_{n}\asymp b_{n}$ if
$a_{n}\lesssim b_{n}$ and $a_{n}\gtrsim b_{n}$. Moreover, we write $a_{n}\sim
b_{n}$ if $a_{n}/b_{n}\to 1$. For any positive values $a$ and $b$, $a\vee b$
and $a\wedge b$ stand for $\max(a,b)$ and $\min(a,b)$, respectively. The
indicator function is denoted by $1\\{\cdot\\}$. $\operatorname{sgn}$ is the
sign function defined as $\operatorname{sgn}(x)=x/|x|$ for $x\not=0$ and
$\operatorname{sgn}(0)=0$.
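As a quick numerical illustration of this notation (a sketch of ours; the function name is arbitrary), the matrix norms above can be evaluated as follows.

```python
import numpy as np

def matrix_norms(M):
    """Evaluate the norms defined above for a real matrix M."""
    fro    = np.sqrt((M ** 2).sum())                      # Frobenius norm
    spec   = np.sqrt(np.linalg.eigvalsh(M.T @ M).max())   # induced l2 (spectral) norm
    ell1   = np.abs(M).sum()                              # entrywise l1-norm
    ellmax = np.abs(M).max()                              # entrywise max-norm
    ellinf = np.abs(M).sum(axis=1).max()                  # induced l_infinity-norm (max row sum)
    return fro, spec, ell1, ellmax, ellinf

print(matrix_norms(np.array([[1.0, -2.0], [0.5, 3.0]])))
```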
## 2 Model
Suppose that the $N$-dimensional vector of stationary time series,
$\mathbf{y}_{t}=(y_{1t},\dots,y_{Nt})^{\prime}$, is generated from the
VAR($K$) model:
$\displaystyle\mathbf{y}_{t}$
$\displaystyle=\bm{\Phi}_{1}\mathbf{y}_{t-1}+\dots+\bm{\Phi}_{K}\mathbf{y}_{t-K}+\mathbf{u}_{t}=\bm{\Phi}\mathbf{x}_{t}+\mathbf{u}_{t},$
(1)
where $\bm{\Phi}_{k}$’s are the $N\times N$ sparse coefficient matrices with
$\bm{\Phi}_{k}=(\phi_{ij,k})$,
$\mathbf{x}_{t}=(\mathbf{y}_{t-1}^{\prime},\dots,\mathbf{y}_{t-K}^{\prime})^{\prime}$
is the $KN$-dimensional vector of lagged variables, and
$\mathbf{u}_{t}=(u_{1t},\dots,u_{Nt})^{\prime}$ is an error vector with mean
zero and finite positive definite covariance matrix
$\bm{\Sigma}_{u}=(\sigma_{ij})=\operatorname{\mathbb{E}}\mathbf{u}_{t}\mathbf{u}_{t}^{\prime}$.
Let
$\bm{\Sigma}_{x}=\operatorname{\mathbb{E}}\mathbf{x}_{t}\mathbf{x}_{t}^{\prime}$
denote the finite positive definite covariance matrix. Moreover, define
$\bm{\Gamma}_{y}(k-1)=\operatorname{\mathbb{E}}[\mathbf{y}_{t-1}\mathbf{y}_{t-k}^{\prime}]$
with $\bm{\Gamma}_{y}(k-1)=\bm{\Gamma}_{y}(-k+1)^{\prime}$ for $k\in[K]$. Then
$\bm{\Sigma}_{x}$ is composed of submatrices $\bm{\Gamma}_{y}(k-1)$. Moreover,
define the precision matrix of $\mathbf{x}_{t}$ as
$\bm{\Omega}=(\omega_{ij})=\bm{\Sigma}_{x}^{-1}$. We also denote
$\sigma_{i}^{2}=\sigma_{ii}$ and $\omega_{i}^{2}=\omega_{ii}$. Denote by
$\bm{\omega}_{j}$ the $j$th column vector of $\bm{\Omega}$ for $j\in[KN]$.
Throughout the paper, we allow $N$ to be as large as or possibly larger than
$T$, and $N\wedge T$ tends to infinity while $K$ is fixed.
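For concreteness, the following Python sketch (ours; the band-sparse coefficients and Gaussian errors are arbitrary illustrative choices) simulates data from model (1) and arranges the observations and lagged regressors in the matrices $\mathbf{Y}$ and $\mathbf{X}$ used below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K, burn = 10, 200, 2, 50

# Band-sparse coefficient matrices Phi_1, ..., Phi_K (chosen so that the VAR is stable).
Phi = np.zeros((K, N, N))
for k in range(K):
    Phi[k] += np.diag(np.full(N, 0.3 / (k + 1)))     # diagonal band
    Phi[k] += np.diag(np.full(N - 1, 0.1), 1)        # first superdiagonal band

Sigma_u = np.eye(N)                                  # error covariance (identity for simplicity)
y = np.zeros((burn + K + T, N))
for t in range(K, burn + K + T):
    u_t = rng.multivariate_normal(np.zeros(N), Sigma_u)
    y[t] = sum(Phi[k] @ y[t - k - 1] for k in range(K)) + u_t

start = burn + K                                     # discard the burn-in period
Y = y[start:].T                                      # N x T observation matrix
X = np.hstack([y[start - k - 1: start + T - k - 1] for k in range(K)]).T  # KN x T lagged regressors
```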
Stacking the $T$ observations in columns such that
$\mathbf{Y}=(\mathbf{y}_{1},\dots,\mathbf{y}_{T})\in\mathbb{R}^{N\times T}$,
$\mathbf{X}=(\mathbf{x}_{1},\dots,\mathbf{x}_{T})\in\mathbb{R}^{KN\times T}$,
and $\mathbf{U}=(\mathbf{u}_{1},\dots,\mathbf{u}_{T})\in\mathbb{R}^{N\times
T}$, we have the matrix form
$\displaystyle\mathbf{Y}=\bm{\Phi}\mathbf{X}+\mathbf{U},$ (2)
where $\bm{\Phi}=\left(\bm{\Phi}_{1},\dots,\bm{\Phi}_{K}\right)$. Denote by
$\mathbf{a}_{i\cdot}$ the $i$th row vector of a matrix $\mathbf{A}$. Then, by
extracting the $i$th row of (2), the model is also written as
$\displaystyle\mathbf{y}_{i\cdot}=\bm{\phi}_{i\cdot}\mathbf{X}+\mathbf{u}_{i\cdot},$ (3)
which is an equation between $1\times T$ row vectors.
To describe the sparsity pattern of $\bm{\Phi}$, define
$\mathcal{S}=\\{(i,j)\in[N]\times[KN]:\phi_{ij}\not=0\\}$ with cardinality
$s=|\mathcal{S}|$. Similarly, the sparsity pattern of $\bm{\phi}_{i\cdot}$ is
described as $\mathcal{S}_{i}=\\{j\in[KN]:\phi_{ij}\not=0\\}$ with
$s_{i}=|\mathcal{S}_{i}|$ and $\bar{s}=\max_{i\in[N]}s_{i}$. There is a one-
to-one correspondence between $\mathcal{S}$ and the Granger-causal network as
mentioned in Section 1.1. Our goal is to discover the network while controlling
the FDR.
## 3 Inferential Methodology
We propose two multiple testing procedures that can control the FDR of
discovering the Granger-causal networks, which are based on asymptotic and
bootstrap $t$-statistics, respectively. The associated statistical theory is
developed in Section 4.
### 3.1 Debiased lasso estimator
We start with constructing the row-wise lasso estimator,
$\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}$, defined as
$\displaystyle\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}=\operatorname*{\arg\min}_{\bm{\phi}_{i\cdot}\in\mathbb{R}^{1\times
KN}}~{}(2T)^{-1}\|\mathbf{y}_{i\cdot}-\bm{\phi}_{i\cdot}\mathbf{X}\|_{2}^{2}+\lambda\|\bm{\phi}_{i\cdot}\|_{1},$
(4)
where $\lambda>0$ is a regularization parameter. It is well-known that the
lasso estimator has a bias caused by this regularization. Following Javanmard
and Montanari (2014), we remove the bias, which leads to the asymptotic
normality of each element; see also van de Geer et al. (2014) and Zhang and
Zhang (2014) for a related discussion. Define the debiased lasso estimator as
$\displaystyle\hat{\bm{\phi}}_{i\cdot}=\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}+(\mathbf{y}_{i\cdot}-\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}\mathbf{X})\mathbf{X}^{\prime}\hat{\bm{\Omega}}/T,$
(5)
where $\hat{\bm{\Omega}}$ is a consistent estimator of the precision matrix
$\bm{\Omega}$. Let $\hat{\bm{\Sigma}}_{x}=\mathbf{X}\mathbf{X}^{\prime}/T$.
Then by a simple calculation, (5) is equivalently written as
$\displaystyle\sqrt{T}(\hat{\bm{\phi}}_{i\cdot}-\bm{\phi}_{i\cdot})=\mathbf{z}_{i\cdot}+\mathbf{r}_{i\cdot},$
(6)
where
$\displaystyle\mathbf{z}_{i\cdot}=\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}\bm{\Omega}/\sqrt{T}~{}~{}~{}\mbox{and}~{}~{}~{}\mathbf{r}_{i\cdot}=\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}(\hat{\bm{\Omega}}-\bm{\Omega})/\sqrt{T}-\sqrt{T}(\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot})(\hat{\bm{\Sigma}}_{x}\hat{\bm{\Omega}}-\mathbf{I}_{KN}).$
For each $i\in[N]$, it is expected that each element of $\mathbf{z}_{i\cdot}$
can be asymptotically normal while $\|\mathbf{r}_{i\cdot}\|_{\max}$ becomes
negligible for a suitable choice of the estimator $\hat{\bm{\Omega}}$. The
asymptotic behavior of this estimator is formally investigated in Section 4.1.
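The construction (4)–(5) can be sketched in a few lines. The following Python code (ours, illustrative only) uses scikit-learn's Lasso, whose objective matches (4), and takes a precision-matrix estimate $\hat{\bm{\Omega}}$ as given; the paper uses the CLIME estimator, and the ridge-regularized inverse indicated in the comment is merely a placeholder.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso_row(y_i, X, Omega_hat, lam):
    """Row-wise lasso (4) followed by the debiasing step (5).

    y_i: (T,) response for equation i;   X: (KN, T) lagged regressors;
    Omega_hat: (KN, KN) estimate of Sigma_x^{-1};   lam: regularization parameter.
    """
    T = y_i.shape[0]
    # sklearn's Lasso minimizes (2T)^{-1}||y - X'b||_2^2 + alpha*||b||_1, matching (4) with alpha = lam.
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
    lasso.fit(X.T, y_i)
    phi_lasso = lasso.coef_                                    # (KN,) lasso estimate
    resid = y_i - X.T @ phi_lasso                              # (T,) lasso residuals
    phi_debiased = phi_lasso + (resid @ X.T) @ Omega_hat / T   # debiasing correction (5)
    return phi_lasso, phi_debiased

# A placeholder precision-matrix estimate (the paper uses CLIME instead):
# Omega_hat = np.linalg.inv(X @ X.T / T + 0.1 * np.eye(X.shape[0]))
```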
### 3.2 Multiple test
For $\mathcal{H}\subset[N]\times[KN]$, consider discovering the underlying
Granger causal network in $\mathcal{H}$. This problem is understood as the
multiple test for the sequence of hypotheses:
$\displaystyle
H_{0}^{(i,j)}:\phi_{ij}=0~{}~{}~{}\mbox{versus}~{}~{}~{}H_{1}^{(i,j)}:\phi_{ij}\not=0~{}~{}\text{
for each }~{}~{}(i,j)\in\mathcal{H}.$ (7)
As observed in Section 3.1, under regularity conditions, each entry of the
debiased lasso estimator will be expressed as
$\hat{\phi}_{ij}=z_{ij}+o_{p}(1)$, where the leading term $z_{ij}$ is expected
to be asymptotically normal with the variance,
$\displaystyle\operatorname{Var}(z_{ij})=\sigma_{i}^{2}\bm{\omega}_{j}^{\prime}\bm{\Sigma}_{x}{\bm{\omega}}_{j}=\sigma_{i}^{2}\omega_{j}^{2}.$
(8)
Due to the expression of (8), the $t$-test for each pair of hypotheses in (7)
is performed with the $t$-statistic, either
$\displaystyle\texttt{T}_{ij}=\frac{\sqrt{T}\hat{\phi}_{ij}}{\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}}\text{~{}~{}or~{}~{}}\frac{\sqrt{T}\hat{\phi}_{ij}}{\hat{\sigma}_{i}\hat{\omega}_{j}},$
(9)
where the denominator (standard error, s.e.) is composed of some consistent
estimators. Hereafter, denote by $\hat{m}_{ij}$ either
$\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$
or $\hat{\sigma}_{i}\hat{\omega}_{j}$. Repeating the $t$-test over
$(i,j)\in\mathcal{H}$ with a critical value t leads to a set of discoveries,
$\hat{\mathcal{S}}(\texttt{t}):=\\{(i,j)\in\mathcal{H}:|\texttt{T}_{ij}|\geq\texttt{t}\\}$.
Here, the choice of t is critical; we propose two procedures to determine t
so as to control the directional FDR of
$\hat{\mathcal{S}}(\texttt{t})$,
$\displaystyle\operatorname{dFDR}=\operatorname{\mathbb{E}}\left[\operatorname{dFDP}\right],~{}~{}~{}\operatorname{dFDP}=\frac{|\\{(i,j)\in\hat{\mathcal{S}}(\texttt{t}):\operatorname{sgn}(\hat{\phi}_{ij})\not=\operatorname{sgn}(\phi_{ij})\\}|}{|\hat{\mathcal{S}}(\texttt{t})|\vee
1},$
below a preassigned level. Notice that the $\operatorname{dFWER}$ and
$\operatorname{dPower}$ are also defined in a similar manner.
There are several ways to construct consistent estimators of the nuisance
parameters, $\sigma_{i}^{2}$ and $\bm{\omega}_{j}$. In this paper, we use
$\hat{\sigma}_{i}^{2}=\sum_{t=1}^{T}\hat{u}_{it}^{2}/(T-d_{i})$ with
$\hat{u}_{it}$ the $(i,t)$th element of the lasso residual matrix
$\hat{\mathbf{U}}$ and the CLIME estimator (Cai et al., 2011)
$\hat{\bm{\Omega}}=(\hat{\bm{\omega}}_{1},\dots,\hat{\bm{\omega}}_{KN})$,
respectively. Here, $d_{i}=o(T)$ is a positive number for degrees of freedom
adjustment. A typical choice is $d_{i}=\hat{s}_{i}$, where
$\hat{s}_{i}=|\hat{\mathcal{S}}_{i}^{{\textsf{L}}}|$ with
$\hat{\mathcal{S}}_{i}^{{\textsf{L}}}=\operatorname{supp}(\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}})$.
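Given the debiased estimates and the nuisance-parameter estimates above, the $t$-statistics in (9) with the “sandwich” standard error can be assembled as in the following sketch (ours; the array shapes are assumptions made only for illustration).

```python
import numpy as np

def t_statistics(Phi_hat, sigma_hat, Omega_hat, Sigma_x_hat, T):
    """t-statistics (9) with the "sandwich" standard error.

    Phi_hat: (N, KN) debiased estimates;  sigma_hat: (N,) residual standard deviations;
    Omega_hat: (KN, KN) precision-matrix estimate;  Sigma_x_hat: (KN, KN) sample covariance X X'/T.
    """
    # omega_j' Sigma_x_hat omega_j for every column j of Omega_hat
    omega_quad = np.einsum('kj,kl,lj->j', Omega_hat, Sigma_x_hat, Omega_hat)
    se = np.outer(sigma_hat, np.sqrt(omega_quad))    # (N, KN) standard errors m_hat_ij
    return np.sqrt(T) * Phi_hat / se                 # T_ij = sqrt(T) * phi_hat_ij / m_hat_ij
```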
#### 3.2.1 First procedure: Limiting normal distribution
We construct the set of discoveries as follows.
###### Procedure 1.
1. 1.
Set $\bar{\texttt{t}}=\sqrt{2\log(|\mathcal{H}|)-a\log\log(|\mathcal{H}|)}$
for given $\mathcal{H}\subset[N]\times[KN]$, where $a>3$ is an arbitrary fixed
constant.
2. 2.
For any level $q\in[0,1]$, compute
$\displaystyle\texttt{t}_{0}=\inf\left\\{\texttt{t}\in[0,\bar{\texttt{t}}]:\frac{2|\mathcal{H}|Q(\texttt{t})}{|\hat{\mathcal{S}}(\texttt{t})|\vee
1}\leq q\right\\},$ (10)
where
$|\hat{\mathcal{S}}(\texttt{t})|=\sum_{(i,j)\in\mathcal{H}}1\\{|\texttt{T}_{ij}|\geq\texttt{t}\\}$
is the total number of discoveries in $\mathcal{H}$ with critical value t, and
$Q(\texttt{t})=\operatorname{\mathbb{P}}(\mathcal{Z}>\texttt{t})$ with
$\mathcal{Z}$ a standard normal random variable. If the infimum in (10) does not exist, set
$\displaystyle\texttt{t}_{0}=\sqrt{2\log(|\mathcal{H}|)}.$ (11)
3. 3.
For each $(i,j)\in\mathcal{H}$, reject $H_{0}^{(i,j)}$ in (7) if
$|\texttt{T}_{ij}|\geq\texttt{t}_{0}$, and obtain the set of discoveries,
$\hat{\mathcal{S}}(\texttt{t}_{0})=\\{(i,j)\in\mathcal{H}:|\texttt{T}_{ij}|\geq\texttt{t}_{0}\\}$.
This procedure is designed to asymptotically control the dFDR of
$\hat{\mathcal{S}}(\texttt{t}_{0})$ to be less than or equal to $q$. A similar
procedure can be found in Liu (2013), Javanmard and Javadi (2019), and Uematsu
and Yamagata (2021), for instance. The $\operatorname{dFDR}$ control always
implies the $\operatorname{FDR}$ control since
$\operatorname{FDR}\leq\operatorname{dFDR}$. Threshold (11), which is used when
no feasible threshold exists in $[0,\bar{\texttt{t}}]$, will even control the
dFWER (and hence the dFDR). A theoretical justification for the
$\operatorname{dFDR}$ control and $\operatorname{dPower}$ guarantee is given
in Section 4.2.
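A direct transcription of Procedure 1 is given below (our illustrative sketch; the infimum in (10) is approximated by a grid search, and the function signature is ours).

```python
import numpy as np
from scipy.stats import norm

def procedure1(t_stats, q, a=3.1, grid_size=2000):
    """Steps 1-3 of Procedure 1 (grid approximation of the infimum in (10)).

    t_stats: array of t-statistics T_ij over the pairs in H;  q: target dFDR level.
    Returns the threshold t0 and a boolean array marking the discoveries.
    """
    H = t_stats.size
    t_bar = np.sqrt(2 * np.log(H) - a * np.log(np.log(H)))   # Step 1
    abs_t = np.abs(t_stats)
    t0 = None
    for t in np.linspace(0.0, t_bar, grid_size):             # Step 2: search over [0, t_bar]
        n_discoveries = max(int((abs_t >= t).sum()), 1)
        if 2 * H * norm.sf(t) / n_discoveries <= q:          # criterion (10) with Q(t) = P(Z > t)
            t0 = t
            break
    if t0 is None:                                           # no solution: fall back to (11)
        t0 = np.sqrt(2 * np.log(H))
    return t0, abs_t >= t0                                   # Step 3: reject where |T_ij| >= t0
```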
###### Remark 1.
1. (a)
The constant $a>3$ is required for technical reasons. In practice, we can
choose an arbitrary value that is slightly larger than three.
2. (b)
The choice of $d_{i}$ affects the FDR control to some extent, but
$d_{i}=\hat{s}_{i}$ seems to yield good results in the simulation studies in
Section 5. For other choices of $d_{i}$ and other constructions of consistent
estimators of $\sigma_{i}^{2}$, see Reid et al. (2016).
#### 3.2.2 Second procedure: Bootstrapped distribution
Even in a low-dimensional setting, Brüggemann et al. (2016) point out that the
finite sample properties of asymptotic VAR inference can be rather poor, and
the use of bootstrap methods is often advocated. See also Kilian (1999),
Gonçalves and Kilian (2004), and Hafner and Herwartz (2009), which indicate
that bootstrap methods can be very beneficial in (V)ARs.
To improve the performance, we propose a second method based on the fixed-
design wild bootstrap (FWB). The bootstrapped multiple $t$-test we propose is
different from a conventional bootstrapped $t$-test for a single hypothesis.
Roughly speaking, it attempts to replicate a bootstrapped distribution under
the null by the FWB, which is substituted for the limiting normal distribution
$Q$ in Procedure 1.
Let $\tilde{\mathcal{S}}\subset[N]\times[KN]$ denote an index set of
discoveries that is expected to satisfy
$\operatorname{\mathbb{P}}(\tilde{\mathcal{S}}\supset\mathcal{S})\to 1$. The
next procedure is available for $\mathcal{H}$ such that
$\tilde{\mathcal{S}}^{c}\cap\mathcal{H}\not=\emptyset$.
###### Procedure 2.
1. 1.
Obtain bootstrap version of the $t$-statistics,
$\\{\texttt{T}_{ij}^{*(b)}:(i,j)\in\tilde{\mathcal{S}}^{c}\cap\mathcal{H}\\}_{b=1}^{B}$,
by repeating (a)–(f) $B$ times:
1. (a)
Generate $\\{\zeta_{t}\\}$, a sequence of i.i.d. random variables with mean
zero and variance one.
2. (b)
Obtain $\\{\mathbf{u}_{t}^{*}\\}_{t=1}^{T}$ by the FWB;
$\mathbf{u}_{t}^{*}=\hat{\mathbf{u}}_{t}\zeta_{t}$, where
$\hat{\mathbf{u}}_{t}=\mathbf{y}_{t}-\hat{\bm{\Phi}}^{{\textsf{L}}}\mathbf{x}_{t}$.
3. (c)
Generate $\\{\mathbf{y}_{t}^{*}\\}_{t=1}^{T}$ by
$\mathbf{y}_{t}^{*}=\hat{\bm{\Phi}}^{{\textsf{L}}}\mathbf{x}_{t}+\mathbf{u}_{t}^{*}$,
and set $\mathbf{Y}^{*}=(\mathbf{y}_{1}^{*},\dots,\mathbf{y}_{T}^{*})$.
4. (d)
Compute a bootstrap version of the lasso estimate
$\hat{\bm{\Phi}}^{{\textsf{L}}*}$ by $\mathbf{Y}^{*}$ and $\mathbf{X}$.
5. (e)
Construct a bootstrap version of the debiased lasso estimate
$\displaystyle\hat{\bm{\Phi}}^{*}=\hat{\bm{\Phi}}^{{\textsf{L}}*}+\left(\mathbf{Y}^{*}-\hat{\bm{\Phi}}^{{\textsf{L}}*}\mathbf{X}\right)\mathbf{X}^{\prime}\hat{\bm{\Omega}}/T.$
6. (f)
Construct a bootstrap version of the $t$-statistics, either
$\displaystyle\texttt{T}_{ij}^{*}=\frac{\sqrt{T}\hat{\phi}_{ij}^{*}}{\hat{\sigma}^{*}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}}\text{~{}~{}or~{}~{}}\frac{\sqrt{T}\hat{\phi}_{ij}^{*}}{\hat{\sigma}^{*}_{i}\hat{\omega}_{j}}~{}~{}~{}\text{for}~{}~{}~{}(i,j)\in\tilde{\mathcal{S}}^{c}\cap\mathcal{H},$
(12)
where $\hat{\sigma}^{*2}_{i}$ is given by
$\hat{\sigma}_{i}^{*2}=\sum_{t=1}^{T}\hat{u}_{it}^{*2}/(T-\hat{s}_{i})$ with
$\hat{\mathbf{U}}^{*}=\mathbf{U}^{*}-(\hat{\bm{\Phi}}^{{\textsf{L}}*}-\hat{\bm{\Phi}}^{{\textsf{L}}})\mathbf{X}$
and $\mathbf{U}^{*}=(\mathbf{u}_{1}^{*},\dots,\mathbf{u}_{T}^{*})$.
2. 2.
Compute the empirical distribution
$\displaystyle\mathbb{Q}_{B}^{*}(\texttt{t})$
$\displaystyle=\frac{1}{|\tilde{\mathcal{S}}^{c}\cap\mathcal{H}|}\sum_{(i,j)\in\tilde{\mathcal{S}}^{c}\cap\mathcal{H}}\left[\frac{1}{B}\sum_{b=1}^{B}1\left\\{\texttt{T}_{ij}^{*(b)}>\texttt{t}\right\\}\right].$
3. 3.
Run Procedure 1 with (13) replacing (10):
$\displaystyle\texttt{t}_{0}=\inf\left\\{\texttt{t}\in[0,\bar{\texttt{t}}]:\frac{|\mathcal{H}|\left\\{\mathbb{Q}_{B}^{*}(\texttt{t})+1-\mathbb{Q}_{B}^{*}(-\texttt{t})\right\\}}{|\hat{\mathcal{S}}(\texttt{t})|\vee
1}\leq q\right\\}.$ (13)
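The resampling part of Procedure 2 (Steps 1(a)–(c) and 2) can be sketched as follows (our illustrative code; it assumes Rademacher multipliers and delegates Steps (d)–(f) to a user-supplied function that re-runs the lasso, debiasing, and studentization on the bootstrap sample).

```python
import numpy as np

def wild_bootstrap_null(Y, X, Phi_lasso, t_stat_fn, S_tilde_c, B=100, seed=0):
    """Fixed-design wild bootstrap null distribution (Steps 1-2 of Procedure 2).

    Y: (N, T) data;  X: (KN, T) fixed lagged regressors;  Phi_lasso: (N, KN) lasso estimate;
    t_stat_fn: user-supplied callable mapping (Y_star, X) to an (N, KN) array of bootstrap
    t-statistics (it should redo the lasso, debiasing, and studentization, Steps (d)-(f));
    S_tilde_c: boolean (N, KN) mask of the pairs lying outside S-tilde.
    Returns a function Q_B(t) estimating the bootstrap null tail probability.
    """
    rng = np.random.default_rng(seed)
    T = Y.shape[1]
    U_hat = Y - Phi_lasso @ X                        # lasso residuals u_hat_t (columns)
    null_stats = []
    for _ in range(B):
        zeta = rng.choice([-1.0, 1.0], size=T)       # (a) Rademacher multipliers
        U_star = U_hat * zeta                        # (b) wild-bootstrap errors u_t* = u_hat_t * zeta_t
        Y_star = Phi_lasso @ X + U_star              # (c) bootstrap sample with the design X kept fixed
        T_star = t_stat_fn(Y_star, X)                # (d)-(f) bootstrap t-statistics
        null_stats.append(T_star[S_tilde_c])
    null_stats = np.concatenate(null_stats)

    def Q_B(t):                                      # Step 2: empirical null distribution
        return float((null_stats > t).mean())

    return Q_B
```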
###### Remark 2.
1. (a)
We may adopt several distributions for $\zeta_{t}$. For example, Mammen (1993)
suggests using $\zeta_{t}$ that takes values $\mp(\sqrt{5}\mp 1)/2$ with
probability $(\sqrt{5}\pm 1)/(2\sqrt{5})$, respectively, while Davidson and
Flachaire (2008) propose using the Rademacher random variable, $\zeta_{t}=\pm
1$ with probability $1/2$.
2. (b)
A typical choice of $\tilde{\mathcal{S}}$ is the lasso-selected variables,
$\hat{\mathcal{S}}_{{\textsf{L}}}$. We may also use
$\hat{\mathcal{S}}(\texttt{t}_{0})$ obtained by Procedure 1.
3. (c)
We do not need to assign large $B$ because many
($|\tilde{\mathcal{S}}^{c}\cap\mathcal{H}|$) $t$-values are used to form the
null distribution $\mathbb{Q}_{B}^{*}(\texttt{t})$.
4. (d)
Unlike the construction of $\hat{\sigma}_{i}^{2}$ in Procedure 1, the
degrees-of-freedom adjustment in $\hat{\sigma}_{i}^{*2}$ has little effect on
the resulting FDR control.
## 4 Statistical Theory
We develop a formal statistical theory for the inferential methodology
proposed in Section 3. Throughout this section, we set
$\mathcal{H}=[N]\times[KN]$, $d_{i}=0$, and
$\tilde{\mathcal{S}}=\hat{\mathcal{S}}_{{\textsf{L}}}$ to alleviate
unnecessary technical complications. We suppose that $N$ can be larger than
$T$, but at most polynomially in $T$.
###### Condition 1.
The error term, $\\{\mathbf{u}_{t}\\}$, is a sequence of i.i.d. sub-Gaussian
random vectors with mean zero and covariance matrix $\bm{\Sigma}_{u}$; for
every $N>0$, there exists some constant $c_{u}>0$ such that for all $x>0$,
$\max_{i\in[N]}\operatorname{\mathbb{P}}\left(|u_{it}|>x\right)\leq
2\exp(-x^{2}/c_{u})$.
###### Condition 2.
All the eigenvalues of the companion matrix of
$(\bm{\Phi}_{1},\dots,\bm{\Phi}_{K})$ are strictly less than one in modulus
uniformly in $N\in\mathbb{N}$. (Hereafter, denote the companion matrix by
$\mathbf{A}$.)
###### Condition 3.
For all $N\in\mathbb{N}$, there exists some constant $\gamma>0$ such that
$\gamma\leq\lambda_{\min}(\bm{\Sigma}_{x})\leq\lambda_{\max}(\bm{\Sigma}_{x})\leq
1/\gamma$.
Conditions 1 and 2 are commonly used in the literature. Condition 2 guarantees
that the VAR($K$) model is stable and stationary, so that it can be inverted to
the VMA($\infty$) representation:
$\displaystyle\mathbf{y}_{t}=\sum_{\ell=0}^{\infty}\mathbf{B}_{\ell}\mathbf{u}_{t-\ell},~{}~{}~{}~{}~{}b:=\sum_{\ell=0}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}<\infty$
(14)
uniformly in $N$, where $\mathbf{B}_{0}=\mathbf{I}_{N}$ and
$\mathbf{B}_{\ell}=\mathbf{J}^{\prime}\mathbf{A}^{\ell}\mathbf{J}$ with
$\mathbf{J}^{\prime}=(\mathbf{I}_{N},\mathbf{0}_{N\times(KN-N)})$ for
$\ell=1,2,\dots$. The summability condition in (14) is verified by Lemma 2 of
Supplementary Materials. Throughout this section, we set the lasso
regularization parameter in (4) to be
$\displaystyle\lambda=8bc_{uu}\sqrt{2(\nu+5)^{3}T^{-1}\log^{3}(N\vee T)},$
where $\nu>0$ is a fixed constant.
### 4.1 Theory for the debiased lasso estimator
The first proposition states that the lasso estimator,
$\hat{\bm{\Phi}}^{{\textsf{L}}}$, has nonasymptotic error bounds.
###### Proposition 1 (Nonasymptotic error bounds for the lasso).
If Conditions 1–3 are assumed, then the lasso estimator defined in (4)
satisfies the following inequalities with probability at least $1-O((N\vee
T)^{-\nu})$:
$\displaystyle(a)~{}$
$\displaystyle~{}\left\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot}\right\|_{2}\leq\frac{12\sqrt{s_{i}}\lambda}{\gamma-8bs_{i}\lambda},$
$\displaystyle(b)~{}$
$\displaystyle~{}\left\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot}\right\|_{1}\leq\frac{48s_{i}\lambda}{\gamma-8bs_{i}\lambda},$
$\displaystyle(c)~{}$
$\displaystyle~{}\frac{1}{T}\left\|(\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot})\mathbf{X}\right\|_{2}^{2}\leq\frac{144s_{i}\lambda^{2}}{\gamma-8bs_{i}\lambda}$
for all $i\in[N]$ such that $8bs_{i}\lambda<\gamma$.
Using Proposition 1, we next show the asymptotic linearity of the debiased
lasso estimator, $\hat{\bm{\Phi}}$. This requires some conditions on the
precision matrix, $\bm{\Omega}$.
###### Condition 4.
There exist some positive numbers $s_{\omega}$ and $M_{\omega}$ and some
constant $r\in[0,1)$ such that
$\max_{i\in[KN]}\sum_{j=1}^{KN}|\omega_{ij}|^{r}\leq s_{\omega}$ and
$\max_{j\in[KN]}\|{\bm{\omega}}_{j}\|_{1}\leq M_{\omega}$, where $s_{\omega}$
and $M_{\omega}$ can diverge as $N,T\to\infty$.
Condition 4 has frequently been used to derive the rate of convergence of the
CLIME estimator $\hat{\bm{\Omega}}$; see Cai et al. (2011) and Shu and Nan
(2019), for instance. This condition requires that $\bm{\Omega}$ is
(approximately) sparse. Especially when $r=0$, the condition reduces to the
exact sparsity assumption. Denote by $\\{\mathbf{e}_{j}\\}$ the standard basis
of $\mathbb{R}^{KN}$.
###### Proposition 2 (Asymptotic linearity of the debiased lasso).
The debiased lasso estimator defined in (5) has the representation
$\sqrt{T}(\hat{\bm{\Phi}}-\bm{\Phi})=\mathbf{Z}+\mathbf{R}$, where the
$(i,j)$th elements of $\mathbf{Z}$ and $\mathbf{R}$ are respectively given by
$\displaystyle
z_{ij}=T^{-1/2}\sum_{t=1}^{T}u_{it}\mathbf{x}_{t}^{\prime}{\bm{\omega}}_{j},~{}~{}~{}r_{ij}=\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}(\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j})/\sqrt{T}-\sqrt{T}(\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot})(\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j}).$
Furthermore, if Conditions 1–4 are assumed, then the following inequality
holds with probability at least $1-O((N\vee T)^{-\nu})$:
$\displaystyle\max_{j\in[KN]}|r_{ij}|\lesssim\left(s_{\omega}M_{\omega}^{2-2r}\lambda^{1-r}+M_{\omega}s_{i}\lambda\right)\sqrt{\log^{3}(N\vee
T)}=:\bar{r}_{i}$
for all $i\in[N]$ such that $s_{i}\lambda=o(1)$.
To achieve the asymptotic normality of $\sqrt{T}(\hat{\phi}_{ij}-\phi_{ij})$,
we need $\bar{r}_{i}=o(1)$. Then $z_{ij}$ becomes the dominating term, which
can converge in distribution to a Gaussian random variable.
### 4.2 Theory for the multiple testing
We are now ready to develop a statistical theory of the multiple testing of
(7) by Procedures 1 and 2 in Section 3.2. We first establish the asymptotic
normality of the $t$-statistics in (9).
###### Condition 5.
For all $N\in\mathbb{N}$, there exists some constant $\gamma>0$ such that
$\gamma\leq\min_{i\in[N]}\sigma_{i}^{2}\leq\max_{i\in[N]}\sigma_{i}^{2}\leq
1/\gamma$ and
$\gamma\leq\min_{j\in[KN]}\omega_{j}^{2}\leq\max_{j\in[KN]}\omega_{j}^{2}\leq
1/\gamma$.
Condition 5 complements Condition 3, and is required to deal with the standard
errors (see Lemma 7 in Supplementary Materials).
###### Theorem 1 (Asymptotic normality of $t$-statistic).
If Conditions 1–5 are assumed, then both $t$-statistics $\texttt{T}_{ij}$
defined in (9) satisfy that $\texttt{T}_{ij}-\sqrt{T}\phi_{ij}/\hat{m}_{ij}$
converges in distribution to $\mathcal{N}(0,1)$ for all $(i,j)\in\mathcal{H}$
such that $\bar{v}_{i}:=\bar{r}_{i}+M_{\omega}^{2}\lambda=o(1)$.
In the theorem, $\bar{v}_{i}=o(1)$ entails that both $\bar{r}_{i}$ in
Proposition 2 and the estimation error of the standard error are
asymptotically negligible.
#### 4.2.1 Theory for the FDR control
For
$\left((i,j),(k,\ell)\right)\in\mathcal{H}\times\mathcal{H}=:\mathcal{H}^{2}$,
define the correlation between $z_{ij}$ and $z_{k\ell}$ as
$\displaystyle\rho_{(i,j),(k,\ell)}=\frac{\sigma_{ik}\omega_{j\ell}}{\sigma_{i}\sigma_{k}\omega_{j}\omega_{\ell}},$
(15)
which becomes $\rho_{(i,j),(k,\ell)}\asymp\sigma_{ik}\omega_{j\ell}$ under
Condition 5. Obviously $\rho_{(i,j),(i,j)}=1$ for all $(i,j)\in\mathcal{H}$,
but the “off-diagonal” correlations in
$\mathcal{H}_{\text{off}}^{2}:=\left\\{((i,j),(k,\ell))\in\mathcal{H}^{2}:(i,j)\not=(k,\ell)\right\\}$,
where $|\mathcal{H}_{\text{off}}^{2}|=|\mathcal{H}^{2}|-|\mathcal{H}|$, take
non-trivial values. To control these correlations, we impose the following condition.
###### Condition 6.
There exists a partition of $\mathcal{H}_{\text{off}}^{2}$ denoted as
$\mathcal{H}_{\text{off}}^{2}=\mathcal{H}_{w}^{2}\cup\mathcal{H}_{s}^{2}$ such
that for some constants $c>0$ and $\bar{\rho}\in(0,1)$,
$\displaystyle|\rho_{(i,j),(k,\ell)}|\in\begin{cases}\left[0,c/\log^{2}(KN^{2})\right]&\text{for
}~{}((i,j),(k,\ell))\in\mathcal{H}_{w}^{2}~{}~{}~{}\text{(weak
correlations)},\\\ \left(c/\log^{2}(KN^{2}),\bar{\rho}\right]&\text{for
}~{}((i,j),(k,\ell))\in\mathcal{H}_{s}^{2}~{}~{}~{}\text{(strong
correlations)},\end{cases}$
where
$|\mathcal{H}_{w}^{2}|=|\mathcal{H}_{\text{off}}^{2}|-|\mathcal{H}_{s}^{2}|$
and
$|\mathcal{H}_{s}^{2}|=O\left(|\mathcal{H}_{\text{off}}^{2}|/\log^{2}(N)\right)$.
Condition 6 says that most of the correlations are “weak” so that they are
close to zero, while some are allowed to be “strong” enough to be non-
vanishing. Because $\texttt{T}_{ij}$’s are asymptotically normal as shown in
Theorem 1, Condition 6 ensures that most of $\texttt{T}_{ij}$’s are
asymptotically independent. This property leads to the main theoretical
results. Let $\bar{v}=\max_{i\in[N]}\bar{v}_{i}$.
###### Theorem 2 (FDR control: Limiting normal distribution).
Suppose $\nu>4$ and $\bar{v}=O(T^{-\kappa_{1}})$ for some constant
$\kappa_{1}\in(0,1/2)$. If Conditions 1–6 are assumed, then for any
predetermined level $q\in[0,1]$, Procedure 1 with either $t$-statistics in (9)
achieves the following: If $\texttt{t}_{0}$ is given by (10), the obtained
$\hat{\mathcal{S}}(\texttt{t}_{0})$ satisfies
$\displaystyle\limsup_{N,T\to\infty}\operatorname{dFDR}\leq q\text{ and
}\lim_{N,T\to\infty}\operatorname{\mathbb{P}}\left(\operatorname{dFDP}\leq
q+\varepsilon\right)=1\text{ for any $\varepsilon>0$}.$
If $\texttt{t}_{0}$ is given by (11), the obtained
$\hat{\mathcal{S}}(\texttt{t}_{0})$ satisfies
$\displaystyle\limsup_{N,T\to\infty}\operatorname{dFWER}\leq q.$
We also show the directional FDR control by Procedure 2 with setting
$\tilde{\mathcal{S}}=\hat{\mathcal{S}}_{{\textsf{L}}}$. First, we need a
distributional assumption on the wild bootstrap.
###### Condition 7.
$\\{\zeta_{t}\\}$ is a sequence of i.i.d. sub-gaussian random variables with
$\operatorname{\mathbb{E}}\zeta_{t}=0$ and
$\operatorname{\mathbb{E}}\zeta_{t}^{2}=1$, where for every $T>0$, there
exists some constant $c_{\zeta}>0$ such that for all $x>0$,
$\max_{t\in[T]}\operatorname{\mathbb{P}}\left(|\zeta_{t}|>x\right)\leq
2\exp(-x^{2}/c_{\zeta})$.
Let
$\displaystyle\mathbb{Q}^{*}(\texttt{t})$
$\displaystyle=\frac{1}{KN^{2}-\hat{s}}\sum_{(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}}\operatorname{\mathbb{P}}^{*}\left(\texttt{T}_{ij}^{*}>\texttt{t}\right),$
where
$\operatorname{\mathbb{P}}^{*}(\texttt{T}_{ij}^{*}>\texttt{t}):=p\lim_{B\to\infty}B^{-1}\sum_{b=1}^{B}1\\{\texttt{T}_{ij}^{*(b)}>\texttt{t}\\}$.
It suffices to show that computing $\texttt{t}_{0}$ with $\mathbb{Q}^{*}$ is
asymptotically the same as computing it with $Q$; more precisely, we prove that
the event that $|\mathbb{Q}^{*}(\texttt{t})/Q(\texttt{t})-1|$ and
$|\\{1-\mathbb{Q}^{*}(-\texttt{t})\\}/Q(\texttt{t})-1|$ converge to zero
uniformly in $\texttt{t}\in[0,\bar{\texttt{t}}]$ occurs with high probability.
Define
$\bar{\mu}=\bar{v}+s_{\omega}M_{\omega}^{3-2r}\lambda^{1-r}\log^{2}(N\vee T)$.
###### Theorem 3 (FDR control: Bootstrap).
Suppose $\nu>4$, $\bar{\mu}=O(T^{-\kappa_{1}})$ for some constant
$\kappa_{1}\in(0,1/2)$, and
$\mathcal{S}\subset\hat{\mathcal{S}}_{{\textsf{L}}}$ with high probability. If
Conditions 1–7 are assumed, then for any predetermined level $q\in[0,1]$,
Procedure 2 with either $t$-statistics in (12) achieves the same results as
Theorem 2.
The proof focuses on the $t$-statistic with the “sandwich” s.e. and proceeds in
two steps. First, we show that the “sandwich” $\texttt{T}_{ij}^{*}$ can be
approximated by a self-normalized sum. Then, we verify that this sum can be
uniformly normally approximated in the relative error sense using the
Cramér-type large deviation theory of Jing et al. (2003). The additional cost
$s_{\omega}M_{\omega}^{3-2r}\lambda^{1-r}\log^{2}(N\vee T)$ in $\bar{\mu}$ is
due to the approximation error of the “sandwich” $t$-statistic to the
self-normalized sum,
$\sum_{t=1}^{T}\hat{u}_{it}^{*}\mathbf{x}_{t}^{\prime}\hat{\bm{\omega}}_{j}/\\{\sum_{t=1}^{T}(\hat{u}_{it}^{*}\mathbf{x}_{t}^{\prime}\hat{\bm{\omega}}_{j})^{2}\\}^{1/2}$.
#### 4.2.2 Theory for the power guarantee
We next investigate the asymptotic power of our Procedures 1 and 2. A
condition on the signal strength is required to distinguish the nonzero
elements from zeros.
###### Condition 8.
For $\mathcal{S}\subset\mathcal{H}$, it holds that
$\displaystyle\min_{(i,j)\in\mathcal{S}}\frac{|\phi_{ij}|}{\sigma_{i}\omega_{j}}\geq
4\sqrt{\frac{2\log(KN^{2})}{T}}.$
###### Theorem 4 (Power guarantee).
Suppose $\nu>4$ and $\bar{v}=O(T^{-\kappa_{1}})$ for some constant
$\kappa_{1}\in(0,1/2)$. If Conditions 1–5 and 8 are assumed, then both
Procedures 1 and 2 achieve $\lim_{N,T\to\infty}\operatorname{dPower}=1$.
This theorem guarantees that both Procedures 1 and 2 do not asymptotically
miss any important relation between variables, whichever threshold of either
(10) or (11) is selected. The minimum signal of Condition 8 is strong enough
to achieve this.
## 5 Monte Carlo Experiments
We investigate the finite sample behavior of the proposed procedures by means
of Monte Carlo experiments.
### 5.1 First experiment: Stability
Before the comprehensive experiment, we carried out a small experiment to
illustrate the superiority of the selection result by our FDR-controlled
approach to those by the lasso and adaptive lasso selections. An $N\times 1$
vector ${\mathbf{y}}_{t}$ for $t=-50,\dots,T$ is generated from the
stationary VAR(1) model for $N=20$ and $T=100$, with
$\mathbf{u}_{t}\sim\text{i.i.d.}N(\mathbf{0},\mathbf{I}_{N})$. The sparse
$\bm{\Phi}$ is generated as described in Section 5.2.1 in such a way that the
elements in the diagonal and $j$-diagonals for $j=-2,-1,1,2$, are non-zero.
Figure 3 shows heatmaps of the frequencies with which the ($i,j$)th element of
$\bm{\Phi}$ is selected as nonzero by different methods in 1000 replications.
Figure 3(a) shows the true sparsity pattern in $\bm{\Phi}$, and Figures 3(b),
(c), and (d) report the results using the lasso, adaptive lasso, and Procedure
1, respectively, with assigning the target FDR level $q=0.2$. The overall
frequency of false positives by the lasso is very high, and for certain
elements unreasonably so (shown by dark blue). The adaptive lasso reduces
overall false positives, but cannot do so sufficiently for the elements
wrongly selected in large numbers by lasso. In addition, some nonzero
$\phi_{ij}$’s are selected too infrequently by lasso, which is largely
inherited by adaptive lasso (shown by very weak red). Meanwhile, when the
multiple testing procedure is used, the frequency of false positives is evenly
spread among the elements and well controlled. Furthermore, nonzero elements
are selected more evenly than the (adaptive) lasso. These suggest that the
proposed inferential methods can provide more stable and reproducible results
for discovering the network Granger causality than existing popular estimation
methods.
(a) True sparsity
(b) lasso
(c) adaptive lasso
(d) multiple test ($q=0.2$)
Figure 3: Heatmap of selection frequencies in $\bm{\Phi}_{1}$
### 5.2 Second experiment: Performance of our methods
#### 5.2.1 Design
We consider the following Data Generating Process (DGP). The $N\times 1$
vector is generated as
$\mathbf{y}_{t}=\sum_{\ell=1}^{K}{\bm{\Phi}}_{\ell}\mathbf{y}_{t-\ell}+\mathbf{u}_{t}$
for $t=-(50+K),\dots,T$ with $\mathbf{y}_{-(50+K)}=\mathbf{0}$, and the set
$\\{\mathbf{y}_{-(50+K)},\dots,\mathbf{y}_{-K}\\}$ is discarded. Define
$\mathbf{u}_{t}={\bm{\Sigma}}_{u}^{1/2}{\bm{\varepsilon}}_{t}$, where
${\bm{\Sigma}}_{u}=\operatorname{diag}(\sigma_{1}^{2},\dots,\sigma_{N}^{2})$
with $\sigma_{i}^{2}\sim\text{i.i.d.}U(0.5,1.5)$. Another construction of
$\bm{\Sigma}_{u}$ with non-zero off-diagonals is considered in Supplementary
Materials. Two different distributions of $\varepsilon_{ti}$ are considered:
1. (i)
Standard normal, $\varepsilon_{ti}\sim\text{i.i.d.}N\left(0,1\right)$.
2. (ii)
Standardized mixture normal,
$\varepsilon_{ti}=(\eta_{ti}-\mu_{\eta})/\sigma_{\eta}$ with
$\eta_{ti}=q_{ti}\xi_{ti}+\left(1-q_{ti}\right)\zeta_{ti}$,
$\mu_{\eta}=\mathbb{E}\eta_{ti}$, and
$\sigma_{\eta}^{2}=\mathbb{E}\eta_{ti}^{2}-\mu_{\eta}^{2}$, where
$q_{ti}\sim\text{i.i.d.}Ber(\pi)$,
$\xi_{ti}\sim\text{i.i.d.}N(\mu_{\xi},\sigma_{\xi}^{2})$, and
$\zeta_{ti}\sim\text{i.i.d.}N(\mu_{\zeta},\sigma_{\zeta}^{2})$. We have chosen
the parameter values $\mu_{\xi}=0$, $\sigma_{\xi}=2$, $\mu_{\zeta}=4$,
$\sigma_{\zeta}=10$, and $\pi=0.9$, which give $\mu_{\eta}=0.4$ and
$\sigma_{\eta}=3.88$. The resulting error variable, $\varepsilon_{ti}$, is
unimodal yet it has skewness 1.86 and kurtosis 3.53.
We focus on the model with $K=1$. The coefficient matrix
${\bm{\Phi}}=(\phi_{ij})$ is constructed as follows. First generate
$\mathbf{\Psi}=(\psi_{ij})$ with $\psi_{ij}=\rho^{1+|i-j|/4}1\\{|i-j|\leq
m\\}$. Now form an $N\times N$ sign matrix
$\bm{\Upsilon}^{(r)}=(\upsilon_{ij}^{(r)})$ for $r=1,2,\dots$, where
$\upsilon_{ij}^{(r)}=2\varphi_{ij}^{(r)}-1$ with
$\varphi_{ij}^{(r)}\sim\text{i.i.d.}Ber(0.5)$. Repeatedly compute
${\bm{\Phi}}^{(r)}={\bm{\Psi}}\circ\bm{\Upsilon}^{(r)}$ for $r=1,2,\dots$
until $\lambda_{\max}({\bm{\Phi}}^{(r)})\leq 0.96$. If the process stops at
the $R$th repetition, set ${\bm{\Phi}}={\bm{\Phi}}^{(R)}$. We consider
$\rho=0.4$ and $m\in\\{2,4,7\\}$. Observe that, except for the first and last
rows, we have $s_{i}=5,9,15$ for $m=2,4,7$, respectively. Set $q=0.1$ and
consider all the combinations of $N\in\\{50,100,200,300\\}$ and
$T\in\\{200,300\\}$.
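The coefficient-matrix construction just described can be sketched as follows (our reading of the recipe; we interpret $\lambda_{\max}$ as the largest eigenvalue in modulus, i.e. the spectral radius, since $\bm{\Phi}^{(r)}$ is not symmetric).

```python
import numpy as np

def generate_phi(N, rho=0.4, m=2, radius=0.96, seed=0):
    """Banded coefficient matrix Phi = Psi * Upsilon^(r), redrawing the sign matrix
    until the spectral radius of Phi^(r) is at most `radius`."""
    rng = np.random.default_rng(seed)
    i, j = np.indices((N, N))
    Psi = (rho ** (1 + np.abs(i - j) / 4)) * (np.abs(i - j) <= m)
    while True:
        Upsilon = 2 * rng.binomial(1, 0.5, size=(N, N)) - 1       # i.i.d. +-1 signs
        Phi = Psi * Upsilon
        if np.abs(np.linalg.eigvals(Phi)).max() <= radius:        # stationarity check
            return Phi

Phi1 = generate_phi(N=20, rho=0.4, m=2)
```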
The model is estimated for each $i$th row using the R package, “glmnet.” In
the bootstrap procedure, the same values of the lasso tuning parameters at the
estimation stage are used. The precision matrix estimator is constructed by
the CLIME using the R package, “fastclime.” All the results are based on 1000
replications and 100 bootstrap samples.
#### 5.2.2 Results
Table 1 summarizes the experimental results with
${\bm{\Sigma}}_{u}=\operatorname{diag}(\sigma_{11},\dots,\sigma_{NN})$, based
on $\texttt{T}_{ij}$ and its bootstrap version with
$\hat{m}_{ij}=\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$.
It contains three panels for $m=2,4,7$. The larger the value of $m$, the
larger the number of nonzero elements in ${\bm{\Phi}}$. The results with error
variance matrix with non-zero off-diagonals are qualitatively very similar, so
that they are reported in Table E2 in Supplementary Materials.
As can be seen, both the asymptotic and the bootstrap thresholds control the
dFDR around the predetermined level $q=0.1$ while maintaining high power.
In particular, the good performance for large $N\geq T$ is very encouraging.
The asymptotic threshold tends to produce slightly larger dFDR’s than $q$
while the bootstrap one tends to provide a conservative selection. The mixture
normal error moderately amplifies these tendencies. Despite this
conservativeness, the power based on the bootstrap threshold is almost
identical to that based on the asymptotic one. The exaggeration and the
conservativeness of both procedures are mitigated as $T$ increases. As
expected, when the nonzero elements in $\bm{\Phi}$ increase (i.e. $m$ rises),
the power goes down. However, it quickly rises as $T$ increases.
The results based on $\hat{m}_{ij}=\hat{\sigma}_{i}\hat{\omega}_{j}$ are
reported in Table E1. Bootstrap dFDR is virtually identical to bootstrap dFDR
based on
$\hat{m}_{ij}=\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$,
whilst asymptotic dFDR often becomes more conservative than bootstrap dFDR. As
a result, the asymptotic power is often lower than the bootstrap power. These
results suggest that the bootstrap procedure is expected to provide stable
performance regardless of the choice of $\hat{m}_{ij}$.
Table 1: Directional FDR and power using asymptotic and bootstrap thresholds for $q=0.1$, with cross-sectionally uncorrelated errors
$m=2$ ($\max s_{i}=5$)
| | | $T=200$ | | $T=300$
| | | asymptotic | | bootstrap | | asymptotic | | bootstrap
| | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR
Standard Normal Error
| $N=50$ | | 9.3 | 97.5 | | 6.7 | 96.8 | | 8.8 | 99.8 | | 7.0 | 99.7
| $N=100$ | | 10.7 | 94.7 | | 6.9 | 93.2 | | 9.9 | 99.4 | | 7.4 | 99.2
| $N=200$ | | 10.7 | 91.9 | | 7.7 | 90.5 | | 10.3 | 99.0 | | 8.4 | 98.8
| $N=300$ | | 10.4 | 89.6 | | 6.8 | 87.3 | | 9.7 | 98.5 | | 8.5 | 98.4
Mixture Normal Error
| $N=50$ | | 10.4 | 94.2 | | 5.7 | 91.9 | | 9.5 | 99.0 | | 6.3 | 98.7
| $N=100$ | | 12.6 | 89.5 | | 5.5 | 84.5 | | 11.5 | 97.8 | | 6.5 | 96.8
| $N=200$ | | 13.1 | 87.0 | | 6.6 | 82.5 | | 11.7 | 97.2 | | 7.6 | 96.4
| $N=300$ | | 14.0 | 84.4 | | 5.5 | 77.4 | | 12.6 | 96.2 | | 7.7 | 95.1
$m=4$ ($\max s_{i}=9$)
| | | $T=200$ | | $T=300$
| | | asymptotic | | bootstrap | | asymptotic | | bootstrap
| | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR
Standard Normal Error
| $N=50$ | | 8.3 | 89.1 | | 6.7 | 87.9 | | 7.9 | 96.7 | | 6.9 | 96.4
| $N=100$ | | 8.5 | 83.7 | | 8.5 | 83.6 | | 8.3 | 94.7 | | 8.5 | 94.8
| $N=200$ | | 9.3 | 78.1 | | 9.6 | 78.4 | | 8.8 | 91.9 | | 9.6 | 92.3
| $N=300$ | | 10.3 | 73.0 | | 8.8 | 71.8 | | 9.4 | 89.3 | | 9.9 | 89.5
Mixture Normal Error
| $N=50$ | | 8.5 | 85.1 | | 5.8 | 82.5 | | 7.8 | 94.8 | | 6.1 | 93.9
| $N=100$ | | 8.9 | 79.9 | | 7.2 | 78.2 | | 8.7 | 92.4 | | 7.6 | 91.9
| $N=200$ | | 11.5 | 74.7 | | 8.2 | 71.7 | | 10.6 | 88.9 | | 8.8 | 88.0
| $N=300$ | | 12.0 | 69.9 | | 6.9 | 65.3 | | 11.0 | 86.1 | | 8.9 | 85.1
$m=7$ ($\max s_{i}=15$)
| | | $T=200$ | | $T=300$
| | | asymptotic | | bootstrap | | asymptotic | | bootstrap
| | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR | | dFDR | PWR
Standard Normal Error
| $N=50$ | | 6.8 | 68.6 | | 6.0 | 67.5 | | 6.7 | 80.4 | | 6.1 | 79.8
| $N=100$ | | 7.6 | 60.6 | | 8.0 | 61.0 | | 7.6 | 74.2 | | 8.0 | 74.6
| $N=200$ | | 9.5 | 53.0 | | 9.0 | 52.5 | | 8.6 | 67.8 | | 9.1 | 68.2
| $N=300$ | | 9.4 | 50.1 | | 9.3 | 50.1 | | 8.5 | 65.3 | | 10.1 | 66.4
Mixture Normal Error
| $N=50$ | | 6.4 | 65.5 | | 4.9 | 63.2 | | 6.1 | 77.9 | | 5.1 | 76.6
| $N=100$ | | 7.9 | 58.1 | | 6.8 | 56.7 | | 7.9 | 71.8 | | 7.2 | 71.1
| $N=200$ | | 9.4 | 51.1 | | 7.5 | 49.3 | | 9.0 | 65.6 | | 8.3 | 65.0
| $N=300$ | | 10.5 | 48.9 | | 7.5 | 46.5 | | 9.6 | 63.4 | | 9.1 | 62.9
## 6 Two Empirical Applications
We apply our proposed methods to two large datasets to discover the underlying
Granger-causal networks. Section 6.1 investigates the large macroeconomic and
financial variables. Section 6.2 analyzes the regional house price growths in
the UK. (The results reported in this section are based on
$\hat{m}_{ij}=\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$.)
### 6.1 Large macroeconomic variables
The FRED-MD macroeconomic and financial data file of May 2019 is obtained from
McCracken’s website, and the variables are transformed as instructed by
McCracken and Ng (2016). The data consists of a balanced panel of 128 monthly
series spanning the period from June 1999 to May 2019. All series are
standardized before the analysis. Following McCracken and Ng (2016), the
series are categorized into eight groups: G1, Output and Income; G2, Labour
Market; G3, Consumption, Orders and Inventories; G4, Housing; G5, Interest and
Exchange Rate; G6, Prices; G7, Money and Credit; G8, Stock Market. (The group
order is different from McCracken and Ng, 2016.) The variables are numbered
from 1 to 128, and the descriptions for all the variables are reported in
Table F1 in Supplementary Materials.
We estimate a VAR(1) model. The asymptotic and bootstrap thresholds for the
$t$-ratios with $q=0.05$ were 2.88 and 4.41, respectively. Here we summarize
the result with the bootstrap threshold in Figure 4 as a network Granger
causality diagram. The result with the asymptotic threshold is reported in
Supplementary Materials. The nodes represent the variables, and their colors
show the eight categories. The size of a node indicates the number of
variables it significantly predicts; the larger the node size, the more
variables it can predict. The arrows show the direction of the Granger
causality; if $y_{1}$ significantly predicts $y_{2}$, then $y_{1}\rightarrow
y_{2}$. The self-lag effects are excluded from the figure. The main dynamic
inter-relations are clustered within the groups, yet interesting interlinkages
between the variable groups are observed. In particular, Price variables are
clustered together, and eleven price variables are caused by variable 112,
Real M2 Money Supply. Price variable 94, Crude Oil, also Granger-causes seven
other price variables. Finally, Real Manufactures and Trade Industries Sales
(variable 49) Granger-causes three producer price indices (variables 90, 91,
92). These findings make a lot of sense from an economic point of view. In
addition, it is easy to identify the variables which cause many other
variables (many edges come out from the node) and those which are caused by
many other variables (the node surrounded by many pointing arrows); see the
Housing variable cluster, for example.
Figure 4: Network Granger-causality: 128 macroeconomic variables, bootstrap
$t_{0}$, $q=0.05$
### 6.2 UK regional house price growths
We obtained the monthly average house prices at the local authority district
level, published in November 2021 by HM Land Registry in the UK. Before
analysis, we seasonally adjusted the prices, then deflated them by the UK
consumer price index (CPI). (The CPI index, D7BT, not seasonally adjusted, is
obtained from the Office for National Statistics, UK; the house prices and the
CPI are seasonally adjusted using the R package “seasonal.”) Denoting by $HP_{it}$
the seasonally adjusted real house price of the district $i$ at month $t$, the
monthly house price growth is computed as $\Delta
hp_{it}=\log(HP_{it}/HP_{it-1})$. In this analysis, we choose variables of 86
districts of Scotland, Wales, and the London area, spanning 209 months, from
February 2004 to June 2021. The variables are numbered from 1 to 86, and the
full list of the district names is reported in Table F2 in Supplementary
Materials. All the series are demeaned before the analysis.
We estimate a VAR(1) model. The asymptotic and bootstrap thresholds for the
$t$-ratios with $q=0.05$ were 2.73 and 4.22, respectively. Following the
previous subsection, we summarize the result with the bootstrap threshold in
Figure 5 as a network Granger causality diagram. The result with the
asymptotic threshold is reported in Supplementary Materials.
Figure 5: Network Granger causality: 86 UK regional house prices, bootstrap
$t_{0}$, $q=0.05$
The results show that house price growth causality networks are more or less
clustered in each of the three regions. However, there are interesting inter-
regional Granger-causal relationships. The London network Granger-causes
variables 21 and 24, which are the house price growth in the City of Edinburgh
and West Lothian. These two regions comprise a large part of Edinburgh, the
capital of Scotland. Furthermore, the London regional network is shown to
Granger-cause and also to be Granger-caused by Cardiff, the capital of Wales,
variable 78. These results suggest that house price fluctuations in London
interact dynamically with those in the capitals of other countries in the UK.
There is one notable inter-regional Granger causality. Falkirk in Scotland
(variable 6) receives a significant concentration of arrows from London,
Scotland, and Wales. A little searching leads to a BBC News report of 21 July
2020 stating that the Scottish and UK governments had pledged 90 million pounds
in “Growth Deal” funds to stimulate the economy around Falkirk
(https://www.bbc.co.uk/news/uk-scotland-tayside-central-53471904). The deal was
signed off by the UK and Scottish governments and Falkirk council on 21
December 2021 (https://www.bbc.co.uk/news/uk-scotland-scotland-business-59734937).
This large deal may have attracted investment to the Falkirk property market
and was statistically identified as Granger-causal for house price fluctuations
in Falkirk.
## 7 Conclusion
This paper has proposed two multiple testing procedures that control the FDR
for discovering the network Granger causality in high-dimensional VAR models.
The first procedure is based on the limiting normal distribution of the
$t$-statistics constructed by the debiased lasso estimator, and the second
procedure is based on the bootstrap distributions of the $t$-statistics. Their
theoretical properties, including FDR control and power guarantee, have been
investigated. The finite sample evidence suggests that both procedures can
successfully control the FDR while maintaining high power. The proposed method
has been applied to a number of macroeconomic variables as well as UK regional
house prices, and interesting economically rational causal networks have been
uncovered.
The main focus of this article is statistical inference on network Granger
causality, which embodies the predictability relationship among a large number
of variables. Our FDR controlling procedures are fairly robust against
contemporaneous correlations among the variables of interest; see Condition 6.
As considered in Barigozzi and Brownlees (2019), extending the proposed method
to discover contemporaneous correlation networks in addition to network
Granger causality could be an interesting future research direction.
It has been pointed out in the literature that Granger causality found by
statistical methods can be spurious. For example, Stokes and Purdon (2017)
argue that if the lag order ($K$) is too small, the estimator will be biased,
and if $K$ is too large, the bias will be reduced, but the variance will
increase, both leading to spurious Granger causality findings. Our proposed
method is not expected to suffer much from this problem, so long as a
sufficiently (but not too) large value of $K$ is chosen. However,
investigating the ‘optimal’ choice of $K$ can be another interesting topic for
future research.
## Supplementary Material
Section A: Proof of Theorems, Section B: Proof of Propositions, Section C:
Lemmas and their proofs, Section D: Precision Matrix Estimation, Section E:
Additional Empirical Results, Section F: List of Variable Names in Empirical
Applications, Section G: Additional Results of Empirical Applications
## Acknowledgments
The authors thank …
## Funding
This work was supported by …
## References
* Babii et al. (2021) Babii, A., E. Ghysels, and J. Striaukas (2021). High-dimensional Granger causality tests with an application to VIX and news. SSRN: https://ssrn.com/abstract=3615718.
* Barigozzi and Brownlees (2019) Barigozzi, M. and C. Brownlees (2019). NETS: Network estimation for time series. Journal of Applied Econometrics 34, 347–364.
* Basu et al. (2019) Basu, S., X. Li, and G. Michailidis (2019). Low rank and structured modeling of high-dimensional vector autoregressions. IEEE Transactions on Signal Processing 67, 1207–1222.
* Basu and Michailidis (2015) Basu, S. and G. Michailidis (2015). Regularized estimation in sparse high-dimensional time series models. Annals of Statistics 43, 1535–1567.
* Basu et al. (2015) Basu, S., A. Shojaie, and G. Michailidis (2015). Network Granger causality with inherent grouping structure. Journal of Machine Learning Research 16, 417–453.
* Benjamini and Hochberg (1995) Benjamini, Y. and Y. Hochberg (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300.
* Bercu et al. (2015) Bercu, B., B. Delyon, and E. Rio (2015). Concentration Inequalities for Sums and Martingales (1st ed.). Springer.
* Bonferroni (1935) Bonferroni, C. E. (1935). Il calcolo delle assicurazioni su gruppi di teste. Studi in Onore del Professore Salvatore Ortu Carboni, 13–60.
* Brüggemann et al. (2016) Brüggemann, R., C. Jentsch, and C. Trenkler (2016). Inference in VARs with conditional heteroskedasticity of unknown form. Journal of Econometrics 191, 69–85.
* Cai et al. (2011) Cai, T., W. Liu, and X. Luo (2011). A constrained $\ell_{1}$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106, 594–607.
* Candès and Tao (2007) Candès, E. and T. Tao (2007). The Dantzig selector: statistical estimation when $p$ is much larger than $n$. Annals of Statistics 35, 2313–2351.
* Chi et al. (2021) Chi, C.-M., Y. Fan, C.-K. Ing, and J. Lv (2021). High-dimensional knockoffs inference for time series data. arXiv:2112.09851.
* Davidson and Flachaire (2008) Davidson, R. and E. Flachaire (2008). The wild bootstrap, tamed at last. Journal of Econometrics 146, 162–169.
* Davis et al. (2016) Davis, R. A., P. Zang, and T. Zheng (2016). Sparse vector autoregressive modeling. Journal of Computational and Graphical Statistics 25, 1077–1096.
* Dufour and Renault (1998) Dufour, J.-M. and E. Renault (1998). Short run and long run causality in time series: Theory. Econometrica 66, 1099––1125.
* Eichler (2007) Eichler, M. (2007). Granger causality and path diagrams for multivariate time series. Journal of Econometrics 137, 334–353.
* Eichler (2012a) Eichler, M. (2012a). Causal inference in time series analysis. In C. Berzuini, P. Dawid, and L. Bernardinelli (Eds.), Causality: Statistical Perspectives and Applications, pp. 327–354. Wiley.
* Eichler (2012b) Eichler, M. (2012b). Graphical modelling of multivariate time series. Probability Theory and Related Fields 153, 233–268.
* Fujita et al. (2007) Fujita, A., J. R. Sato, H. M. Garay-Malpartida, R. Yamaguchi, S. Miyano, M. C. Sogayar, and C. E. Ferreira (2007). Modeling gene expression regulatory networks with the sparse vector autoregressive model. BMC Systems Biology 1, 1–11.
* Gonçalves and Kilian (2004) Gonçalves, S. and L. Kilian (2004). Bootstrapping autoregressions with conditional heteroskedasticity of unknown form. Journal of Econometrics 123, 89–120.
* Granger (1969) Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424–438.
* Guðmundsson and Brownlees (2021) Guðmundsson, G. S. and C. Brownlees (2021). Detecting groups in large vector autoregressions. Journal of Econometrics forthcoming.
* Hafner and Herwartz (2009) Hafner, C. M. and H. Herwartz (2009). Testing for linear vector autoregressive dynamics under multivariate generalized autoregressive heteroskedasticity. Statistica Neerlandica 63, 294–323.
* Han et al. (2015) Han, F., H. Lu, and H. Liu (2015). A direct estimation of high dimensional stationary vector autoregressions. Journal of Machine Learning Research 16, 3115–3150.
* Holm (1979) Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
* Horn and Johnson (2012) Horn, R. A. and C. R. Johnson (2012). Matrix Analysis (2nd ed.). Cambridge University Press.
* Hosoya (1977) Hosoya, Y. (1977). On the Granger condition for non-causality. Econometrica 45, 1735–1736.
* Javanmard and Javadi (2019) Javanmard, A. and H. Javadi (2019). False discovery rate control via debiased lasso. Electronic Journal of Statistics 13, 1212–1253.
* Javanmard and Montanari (2014) Javanmard, A. and A. Montanari (2014). Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research 15, 2869–2909.
* Jing et al. (2003) Jing, B.-Y., Q.-M. Shao, and Q. Wang (2003). Self-normalized Cramér-type large deviations for independent random variables. Annals of Probability 31, 2167–2215.
* Kifer (2013) Kifer, Y. (2013). Strong approximations for nonconventional sums and almost sure limit theorems. Stochastic Processes and their Applications 123, 2286–2302.
* Kilian (1999) Kilian, L. (1999). Finite-sample properties of percentile and percentile-t bootstrap confidence intervals for impulse responses. Review of Economics and Statistics 81, 652–660.
* Kock and Callot (2015) Kock, A. B. and L. Callot (2015). Oracle inequalities for high dimensional vector autoregressions. Journal of Econometrics 186, 325–344.
* Lin and Michailidis (2017) Lin, J. and G. Michailidis (2017). Regularized estimation and testing for high-dimensional multi-block vector-autoregressive models. Journal of Machine Learning Research 18, 1–49.
* Liu (2013) Liu, W. (2013). Gaussian graphical model estimation with false discovery rate control. Annals of Statistics 41, 2948–2978.
* Lozano et al. (2009) Lozano, A. C., N. Abe, Y. Liu, and S. Rosset (2009). Grouped graphical Granger modeling for gene expression regulatory networks discovery. Bioinformatics 25, i110–i118.
* Lütkepohl (2005) Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer Berlin Heidelberg.
* Mammen (1993) Mammen, E. (1993). Bootstrap and wild bootstrap for high dimensional linear models. Annals of Statistics 21, 255–285.
* McCracken and Ng (2016) McCracken, M. W. and S. Ng (2016). FRED-MD: A monthly database for macroeconomic research. Journal of Business & Economic Statistics 34(4), 574–589.
* Reid et al. (2016) Reid, S., R. Tibshirani, and J. Friedman (2016). A study of error variance estimation in lasso regression. Statistica Sinica 26, 35–67.
* Shojaie and Fox (2021) Shojaie, A. and E. B. Fox (2021). Granger causality: A review and recent advances. arXiv:2105.02675.
* Shu and Nan (2019) Shu, H. and B. Nan (2019). Estimation of large covariance and precision matrices from temporally dependent observations. Annals of Statistics 47, 1321–1350.
* Sims (1972) Sims, C. A. (1972). Money, income, and causality. American Economic Review 62, 540–552.
* Stokes and Purdon (2017) Stokes, P. A. and P. L. Purdon (2017). A study of problems encountered in Granger causality analysis from a neuroscience perspective. Proceedings of the National Academy of Sciences 114(34), E7063–E7072.
* Szarek and Werner (1999) Szarek, S. J. and E. Werner (1999). A nonsymmetric correlation inequality for Gaussian measure. Journal of Multivariate Analysis 68, 193–211.
* Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B 58, 267–288.
* Uematsu and Yamagata (2021) Uematsu, Y. and T. Yamagata (2021). Inference in sparsity-induced weak factor models. Journal of Business & Economic Statistics.
* van de Geer et al. (2014) van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. Annals of Statistics 42, 1166–1202.
* Vershynin (2018) Vershynin, R. (2018). High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press.
* Zhang and Zhang (2014) Zhang, C.-H. and S. S. Zhang (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society Series B 76, 217–242.
* Zheng and Raskutti (2019) Zheng, L. and G. Raskutti (2019). Testing for high-dimensional network parameters in auto-regressive models. Electronic Journal of Statistics 13, 4977–5043.
* Zhu and Liu (2020) Zhu, K. and H. Liu (2020). Confidence intervals for parameters in high-dimensional sparse vector autoregression. arXiv:2009.09462.
Supplementary Material for
Discovering the Network Granger Causality
in Large Vector Autoregressive Models
Yoshimasa Uematsu∗ and Takashi Yamagata†
∗Center for the Promotion of Social Data Science Education and Research, Hitotsubashi University
∗Department of Economics and Management, Tohoku University
†Department of Economics and Related Studies, University of York
†Institute of Social Economic Research, Osaka University
## Appendix A Proofs of Theorems
### A.1 Proof of Theorem 1
###### Proof.
Fix any $(i,j)\in\mathcal{H}$ such that $\bar{v}_{i}=o(1)$. Denote
$m_{ij}=\sqrt{\sigma_{i}^{2}\omega_{jj}}=\sqrt{\sigma_{i}^{2}{\bm{\omega}}_{j}^{\prime}\bm{\Sigma}_{x}{\bm{\omega}}_{j}}$
and
$\hat{m}_{ij}=\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$
or $\hat{\sigma}_{i}\hat{\omega}_{j}$. By the construction of the debiased
lasso estimator with Proposition 2 and Lemma 7, the $t$-statistic is written
as
$\displaystyle\texttt{T}_{ij}-\frac{\sqrt{T}\phi_{ij}}{\hat{m}_{ij}}$
$\displaystyle=\frac{\sqrt{T}(\hat{\phi}_{ij}-\phi_{ij})}{\hat{m}_{ij}}=\frac{z_{ij}+r_{ij}}{m_{ij}}\left\\{1+\left(\frac{m_{ij}}{\hat{m}_{ij}}-1\right)\right\\}$
$\displaystyle=\left\\{\frac{z_{ij}}{m_{ij}}+O\left(\bar{r}_{i}\right)\right\\}\left\\{1+O\left(M_{\omega}^{2}\lambda\right)\right\\},$
which holds with probability at least $1-O((N\vee T)^{-\nu})$. Here,
$\bar{r}_{i}$ and $M_{\omega}^{2}\lambda$ are asymptotically negligible (with
high probability) by the assumed condition.
Next, we prove the asymptotic normality of $z_{ij}/m_{ij}$ for each
$(i,j)\in\mathcal{H}$. Recall that
$\displaystyle\frac{z_{ij}}{m_{ij}}=\sum_{t=1}^{T}\xi_{Tt}^{(i,j)},~{}~{}~{}~{}~{}\xi_{Tt}^{(i,j)}=\frac{u_{it}\mathbf{x}_{t}^{\prime}{\bm{\omega}}_{j}}{\sqrt{T}m_{ij}}.$
By a simple calculation, we have $\operatorname{Var}(\xi_{Tt}^{(i,j)})=1/T$
and $\sum_{t=1}^{T}(\xi_{Tt}^{(i,j)})^{2}\to_{p}1$. Furthermore, by an
application of Lemma 3, we obtain
$\displaystyle\max_{t}|\xi_{Tt}^{(i,j)}|\leq
T^{-1/2}\max_{t}\|u_{it}\mathbf{x}_{t}\|_{\infty}\|{\bm{\omega}}_{j}\|_{1}/m_{ij}\lesssim
M_{\omega}T^{-1/2}\sqrt{\log^{3}(N\vee T)}$
with high probability, which implies $\max_{t}|\xi_{Tt}^{(i,j)}|\to_{p}0$ when
$M_{\omega}^{2}\lambda=o(1)$. Thus McLeish’s central limit theorem applies and yields
$\texttt{T}_{ij}-\sqrt{T}\phi_{ij}/\hat{m}_{ij}\to_{d}N(0,1)$ for each
$(i,j)\in\mathcal{H}$ such that $\bar{r}_{i}+M_{\omega}^{2}\lambda=o(1)$. This
completes the proof. ∎
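As an illustrative aside (not part of the formal argument), the studentization used in this proof can be computed directly once the plug-in quantities are available. The following is a minimal numpy sketch under assumed array shapes; all names are hypothetical and only intended to mirror $\texttt{T}_{ij}=\sqrt{T}\hat{\phi}_{ij}/\hat{m}_{ij}$ with $\hat{m}_{ij}=\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}$.

```python
import numpy as np

def t_statistic(phi_hat_ij, sigma_hat_i, omega_hat_j, Sigma_x_hat, T):
    """Studentized statistic T_ij = sqrt(T) * phi_hat_ij / m_hat_ij, with
    m_hat_ij = sigma_hat_i * sqrt(omega_hat_j' Sigma_x_hat omega_hat_j)."""
    m_hat_ij = sigma_hat_i * np.sqrt(omega_hat_j @ Sigma_x_hat @ omega_hat_j)
    return np.sqrt(T) * phi_hat_ij / m_hat_ij

# toy usage with arbitrary plug-in values
rng = np.random.default_rng(0)
KN, T = 6, 200
Sigma_x_hat = np.eye(KN)
omega_hat_j = rng.normal(size=KN)
print(t_statistic(0.1, 1.0, omega_hat_j, Sigma_x_hat, T))
```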
### A.2 Proof of Theorem 2
###### Proof.
Throughout this proof, let $p=|\mathcal{H}|=KN^{2}$ denote the total number of
parameters in the (augmented) coefficient matrix, $\bm{\Phi}$. Define
$\displaystyle\mathcal{H}_{\leq 0}=\mathcal{H}\cap\\{(i,j):\phi_{ij}\leq
0\\},~{}~{}~{}\mathcal{H}_{\geq 0}=\mathcal{H}\cap\\{(i,j):\phi_{ij}\geq
0\\},$
$\displaystyle\mathcal{S}_{<0}=\mathcal{H}\cap\\{(i,j):\phi_{ij}<0\\},~{}~{}~{}\mathcal{S}_{>0}=\mathcal{H}\cap\\{(i,j):\phi_{ij}>0\\}.$
By the definition of
$s=|\mathcal{S}|=|\mathcal{H}\cap\\{(i,j):\phi_{ij}\not=0\\}|$, there is some
sequence $\pi=\pi_{np}\in[0,1]$ such that $|\mathcal{S}_{<0}|=\pi s$ and
$|\mathcal{S}_{>0}|=(1-\pi)s$. We also have $|\mathcal{H}_{\leq
0}|=p-(1-\pi)s$ and $|\mathcal{H}_{\geq 0}|=p-\pi s$.
Case 1. Consider the case where no solution to (10) exists and $\texttt{t}_{0}=\sqrt{2\log p}$.
First, we observe that
$\displaystyle\operatorname{dFDR}(\texttt{t}_{0})$
$\displaystyle\leq\operatorname{dFWER}=\operatorname{\mathbb{P}}\left(\sum_{(i,j)\in\hat{\mathcal{S}}(\texttt{t}_{0})}1\\{\operatorname{sgn}(\hat{\phi}_{ij})\not=\operatorname{sgn}(\phi_{ij})\\}\geq
1\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}\left(\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{0}\\}\geq
1\right)+\operatorname{\mathbb{P}}\left(\sum_{(i,j)\in\mathcal{H}_{\geq
0}}1\\{\texttt{T}_{ij}\leq-\texttt{t}_{0}\\}\geq 1\right).$ (A.1)
The first probability of (A.1) is further bounded as
$\displaystyle\operatorname{\mathbb{P}}\left(\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{0}\\}\geq
1\right)\leq\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\geq\texttt{t}_{0}\right)$
$\displaystyle\quad\leq\sum_{(i,j)\in\mathcal{S}^{c}}\operatorname{\mathbb{P}}\left(\mathcal{Z}_{ij}\geq\texttt{t}_{0}-\delta_{1}\right)+\sum_{(i,j)\in\mathcal{S}_{<0}}\operatorname{\mathbb{P}}\left(\mathcal{Z}_{ij}+\sqrt{T}\phi_{ij}/{\hat{m}_{ij}}\geq\texttt{t}_{0}-\delta_{1}\right)+pO((N\vee
T)^{-\nu+2})$
$\displaystyle\quad\leq\sum_{(i,j)\in\mathcal{S}^{c}}\operatorname{\mathbb{P}}\left(\mathcal{Z}_{ij}\geq\texttt{t}_{0}-\delta_{1}\right)+\sum_{(i,j)\in\mathcal{S}_{<0}}\operatorname{\mathbb{P}}\left(\mathcal{Z}_{ij}\geq\texttt{t}_{0}-\delta_{1}\right)+pO((N\vee
T)^{-\nu+2})$ $\displaystyle\quad=(p-s+\pi
s)Q(\texttt{t}_{0}-\delta_{1})+pO((N\vee T)^{-\nu+2})$ $\displaystyle\quad\leq
pQ(\texttt{t}_{0}-\delta_{1})+pO((N\vee T)^{-\nu+2}),$
where the second inequality holds for some positive sequence
$\delta_{1}=O(T^{-\kappa})$ for some $\kappa\in(0,\kappa_{1}]$ by Lemma 8(a)
and $Q(\texttt{t})=\operatorname{\mathbb{P}}(\mathcal{Z}>\texttt{t})$ is the
upper tail probability of a standard normal random variable $\mathcal{Z}$.
Because $Q(\texttt{t})\leq(\texttt{t}\sqrt{2\pi})^{-1}\exp(-\texttt{t}^{2}/2)$
and $\texttt{t}_{0}-\delta_{1}=\sqrt{2\log p}+o(1)$, we have
$\displaystyle pQ(\texttt{t}_{0}-\delta_{1})\lesssim p(\sqrt{\log
p})^{-1}\exp(-\log p+o(1))=O(1/\sqrt{\log p}).$
We also obtain $pO((N\vee T)^{-\nu+2})=o(1)$ for $\nu>4$. The second
probability of (A.1) is bounded in the same way. We thus conclude
$\operatorname{dFDR}(\texttt{t}_{0})\leq\operatorname{dFWER}=o(1)$.
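The rate $pQ(\texttt{t}_{0}-\delta_{1})=O(1/\sqrt{\log p})$ used above follows from the Gaussian tail bound and is easy to confirm numerically. The check below is purely illustrative and outside the formal argument; it only compares $pQ(\sqrt{2\log p})$ with $1/\sqrt{\log p}$.

```python
import numpy as np
from scipy.stats import norm

for p in (10**3, 10**5, 10**7):
    t0 = np.sqrt(2 * np.log(p))
    # p * Q(t0) should be of the same order as 1 / sqrt(log p)
    print(p, p * norm.sf(t0), 1 / np.sqrt(np.log(p)))
```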
Case 2. Consider the case where $\texttt{t}_{0}$ is given by (10). Define
$\displaystyle
V=\sup_{\texttt{t}\in[0,\bar{\texttt{t}}]}\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\left[1\\{\texttt{T}_{ij}\geq\texttt{t}\\}-Q(\texttt{t})\right]+\sum_{(i,j)\in\mathcal{H}_{\geq
0}}\left[1\\{\texttt{T}_{ij}\leq-\texttt{t}\\}-Q(\texttt{t})\right]}{2pQ(\texttt{t})}\right|.$
Then we have
$\displaystyle\operatorname{dFDP}(\texttt{t}_{0})$
$\displaystyle=\frac{|\\{(i,j)\in\hat{\mathcal{S}}(\texttt{t}_{0}):\operatorname{sgn}(\hat{\phi}_{ij})\not=\operatorname{sgn}(\phi_{ij})\\}|}{|\hat{\mathcal{S}}(\texttt{t}_{0})|\vee
1}$
$\displaystyle=\frac{2pQ(\texttt{t}_{0})}{|\hat{\mathcal{S}}(\texttt{t}_{0})|\vee
1}\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{0}\\}+\sum_{(i,j)\in\mathcal{H}_{\geq
0}}1\\{\texttt{T}_{ij}\leq-\texttt{t}_{0}\\}}{2pQ(\texttt{t}_{0})}$
$\displaystyle\leq q\left[\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{0}\\}+\sum_{(i,j)\in\mathcal{H}_{\geq
0}}1\\{\texttt{T}_{ij}\leq-\texttt{t}_{0}\\}}{2pQ(\texttt{t}_{0})}\right]$
$\displaystyle\leq q\left[V+\frac{(|\mathcal{H}_{\leq 0}|+|\mathcal{H}_{\geq
0}|)Q(\texttt{t}_{0})}{2pQ(\texttt{t}_{0})}\right]\leq q\left(V+1\right).$
Thus the dFDR and dFDP are controlled if we prove $V=o_{p}(1)$ in view of
Fatou’s lemma and Markov’s inequality, respectively. Note that
$\displaystyle
V\leq\sup_{\texttt{t}\in[0,\bar{\texttt{t}}]}\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\left[1\\{\texttt{T}_{ij}\geq\texttt{t}\\}-Q(\texttt{t})\right]}{2pQ(\texttt{t})}\right|+\sup_{\texttt{t}\in[0,\bar{\texttt{t}}]}\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\geq
0}}\left[1\\{\texttt{T}_{ij}\leq-\texttt{t}\\}-Q(\texttt{t})\right]}{2pQ(\texttt{t})}\right|.$
We only prove that the first term is $o_{p}(1)$ by symmetry.
To this end, consider discretization. That is, we partition
$[0,\bar{\texttt{t}}]$ into small intervals,
$0=\texttt{t}_{0}<\texttt{t}_{1}<\dots<\texttt{t}_{h}=\bar{\texttt{t}}=(2\log
p-a\log\log p)^{1/2}$, such that $\texttt{t}_{m}-\texttt{t}_{m-1}=v_{p}$ for
$m\in\\{1,\dots,h-1\\}$ and $\texttt{t}_{h}-\texttt{t}_{h-1}\leq v_{p}$, where
$v_{p}=(\log p\log\log p)^{-1/2}$. Then a simple calculation gives $1/h\leq
v_{p}/\bar{\texttt{t}}=O(1/(\log p\sqrt{\log\log p}))$. Fix arbitrary
$m\in\\{1,\dots,h\\}$. For any
$\texttt{t}\in[\texttt{t}_{m-1},\texttt{t}_{m}]$, we have
$\displaystyle\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}\\}}{2pQ(\texttt{t})}\leq\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{m-1}\\}}{2pQ(\texttt{t}_{m-1})}\frac{Q(\texttt{t}_{m-1})}{Q(\texttt{t}_{m})}$
and
$\displaystyle\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}\\}}{2pQ(\texttt{t})}\geq\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{m}\\}}{2pQ(\texttt{t}_{m})}\frac{Q(\texttt{t}_{m})}{Q(\texttt{t}_{m-1})}.$
Lemma 7.2 of Javanmard and Javadi (2019) gives
$\displaystyle\frac{Q(\texttt{t}_{m-1})}{Q(\texttt{t}_{m})}\leq\frac{Q(\texttt{t}_{m}-v_{p})}{Q(\texttt{t}_{m})}=1+O(v_{p}+v_{p}\bar{\texttt{t}})=1+o(1)$
uniformly in $m\in\\{1,\dots,h\\}$. Thus the proof is complete if the following holds:
$\displaystyle\tilde{V}:=\max_{m\in\\{1,\dots,h\\}}\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\left[1\\{\texttt{T}_{ij}\geq\texttt{t}_{m}\\}-Q(\texttt{t}_{m})\right]}{2pQ(\texttt{t}_{m})}\right|=o_{p}(1).$
We now prove $\tilde{V}=o_{p}(1)$. Fix an arbitrary $\varepsilon>0$. Then we have
$\displaystyle\operatorname{\mathbb{P}}\left(\tilde{V}>\varepsilon\right)$
$\displaystyle\leq
h\max_{m\in\\{1,\dots,h\\}}\operatorname{\mathbb{P}}\left(\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\left[1\\{\texttt{T}_{ij}\geq\texttt{t}_{m}\\}-Q(\texttt{t}_{m})\right]}{2pQ(\texttt{t}_{m})}\right|>\varepsilon\right)$
$\displaystyle\leq
h\max_{m\in\\{1,\dots,h\\}}\operatorname{\mathbb{E}}\left|\frac{\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\left[1\\{\texttt{T}_{ij}\geq\texttt{t}_{m}\\}-Q(\texttt{t}_{m})\right]}{2pQ(\texttt{t}_{m})}\right|^{2}/\varepsilon^{2},$
where $h\lesssim\log p\sqrt{\log\log p}$. We now bound the expectation
uniformly in $m\in\\{1,\dots,h\\}$. Denote by $\sum$ and $\sum\sum$ the
summations over $(i,j)\in\mathcal{H}_{\leq 0}$ and
$(i,j),(k,\ell)\in\mathcal{H}_{\leq 0}$, respectively. By a simple calculation
and Lemma 8(a)(b), we obtain
$\displaystyle\operatorname{\mathbb{E}}\left[\frac{\sum\sum\left[1\\{\texttt{T}_{ij}\geq\texttt{t}_{m}\\}-Q(\texttt{t}_{m})\right]\left[1\\{\texttt{T}_{k\ell}\geq\texttt{t}_{m}\\}-Q(\texttt{t}_{m})\right]}{4p^{2}Q(\texttt{t}_{m})^{2}}\right]$
$\displaystyle=\frac{\sum\sum\operatorname{\mathbb{P}}(\texttt{T}_{ij}\geq\texttt{t}_{m},\texttt{T}_{k\ell}\geq\texttt{t}_{m})}{4p^{2}Q(\texttt{t}_{m})^{2}}-\frac{|\mathcal{H}_{\leq
0}|\sum\operatorname{\mathbb{P}}(\texttt{T}_{ij}\geq\texttt{t}_{m})}{2p^{2}Q(\texttt{t}_{m})}+\frac{|\mathcal{H}_{\leq
0}|^{2}}{4p^{2}}$
$\displaystyle\leq\frac{\sum\sum\operatorname{\mathbb{P}}(\mathcal{Z}_{ij}\geq\texttt{t}_{m}-\delta_{1},\mathcal{Z}_{k\ell}\geq\texttt{t}_{m}-\delta_{1})}{4p^{2}Q(\texttt{t}_{m})^{2}}-\frac{(1+O(s/p))Q(\texttt{t}_{m}+\delta_{1})}{2Q(\texttt{t}_{m})}+\frac{1}{4}$
$\displaystyle=:(i)+(ii)+1/4,$
where $(\mathcal{Z}_{ij},\mathcal{Z}_{k\ell})$ is a bivariate standard normal
random vector and $\delta_{1}=O(T^{-\kappa})$ for some constant $\kappa>0$.
Thus we will conclude $\tilde{V}=o_{p}(1)$ if we show that $(i)\leq
1/4+o(1/h)$ and $(ii)\leq-1/2+o(1/h)$, where $1/h=O(1/(\log p\sqrt{\log\log
p}))$.
First consider $(ii)$. Expand $Q(\texttt{t}_{m}+\delta_{1})$ around
$\delta_{1}=0$ by the mean value theorem. Then there exists $\delta_{1}^{*}$
between $0$ and $\delta_{1}$ such that
$\displaystyle(ii)$
$\displaystyle=-\frac{Q(\texttt{t}_{m}+\delta_{1})}{2Q(\texttt{t}_{m})}(1+o(1))=-\frac{Q(\texttt{t}_{m})+Q^{\prime}(\texttt{t}_{m}+\delta_{1}^{*})\delta_{1}}{2Q(\texttt{t}_{m})}(1+o(1))$
$\displaystyle\leq-1/2-\frac{(2\pi)^{-1/2}\exp\left\\{-(\texttt{t}_{m}+\delta_{1}^{*})^{2}/2\right\\}\delta_{1}}{2(2\pi)^{-1/2}\texttt{t}_{m}^{-1}\exp\left\\{-\texttt{t}_{m}^{2}/2\right\\}}(1+o(1))$
$\displaystyle=-1/2-\texttt{t}_{m}\exp\left\\{-\delta_{1}^{*}(\texttt{t}_{m}+\delta_{1}^{*}/2)\right\\}\delta_{1}(1+o(1))/2$
$\displaystyle=-1/2+o(1/h),$ (A.2)
where the last equality holds uniformly in $m\in\\{1,\dots,h\\}$ because
$\delta_{1}$ decays polynomially while $\texttt{t}_{m}$ grows at most
logarithmically for all $m\in\\{1,\dots,h\\}$.
Next consider $(i)$ by decomposing the summation into two parts,
$(i,j)=(k,\ell)$ and $(i,j)\not=(k,\ell)$. First we see the summation over
$(i,j)=(k,\ell)$, which has $p$ entries. We have
$\displaystyle\frac{\sum\sum\operatorname{\mathbb{P}}(\mathcal{Z}_{ij}\geq\texttt{t}_{m}-\delta_{1},\mathcal{Z}_{k\ell}\geq\texttt{t}_{m}-\delta_{1})1\\{(i,j)=(k,\ell)\\}}{4p^{2}Q(\texttt{t}_{m})^{2}}$
$\displaystyle=\frac{|\mathcal{H}_{\leq 0}|}{4p}\frac{1}{pQ(\texttt{t}_{m})}\frac{Q(\texttt{t}_{m}-\delta_{1})}{Q(\texttt{t}_{m})}=o(1/h).$
(A.3)
The last estimate holds because we have $|\mathcal{H}_{\leq 0}|/p=O(1)$ and
$Q(\texttt{t}_{m}-\delta_{1})/Q(\texttt{t}_{m})=1+o(1/h)$ for the same reason
as above, and by Szarek and Werner (1999),
$\displaystyle\frac{1}{pQ(\texttt{t}_{m})}$
$\displaystyle\leq\frac{\bar{\texttt{t}}+(\bar{\texttt{t}}^{2}+4)^{1/2}}{p2(2\pi)^{-1/2}\exp\left\\{-\bar{\texttt{t}}^{2}/2\right\\}}\lesssim\frac{\bar{\texttt{t}}}{p\exp\left\\{-\bar{\texttt{t}}^{2}/2\right\\}}$
$\displaystyle\lesssim\log^{1/2}p\cdot\exp\\{-(a/2)\log\log p\\}=O(\log^{1/2-a/2}p)=o(1/h)$
(A.4)
for any $a>3$.
Finally we bound $(i)$ with summation over $(i,j)\not=(k,\ell)$, which
contains $p^{2}-p$ entries. We have
$\displaystyle\frac{\sum\sum\operatorname{\mathbb{P}}(\mathcal{Z}_{ij}\geq\texttt{t}_{m}-\delta_{1},\mathcal{Z}_{k\ell}\geq\texttt{t}_{m}-\delta_{1})}{4p^{2}Q(\texttt{t}_{m})^{2}}$
$\displaystyle\leq\frac{\sum\sum\operatorname{\mathbb{P}}(\mathcal{Z}_{ij}\geq\texttt{t}_{m}-\delta_{1},\mathcal{Z}_{k\ell}\geq\texttt{t}_{m}-\delta_{1})}{4p^{2}Q(\texttt{t}_{m}-\delta_{1})^{2}}\frac{Q(\texttt{t}_{m}-\delta_{1})^{2}}{Q(\texttt{t}_{m})^{2}},$
(A.5)
where $Q(\texttt{t}_{m}-\delta_{1})^{2}/Q(\texttt{t}_{m})^{2}=1+o(1/h)$ as
above. By Lemma 9 and the inequality $1/\sqrt{1-x^{2}}\leq
1+|x|/\sqrt{1-x^{2}}$ for any $x\in(-1,1)$, we obtain
$\displaystyle\frac{\sum\sum\operatorname{\mathbb{P}}(\mathcal{Z}_{ij}\geq\texttt{t}_{m}-\delta_{1},\mathcal{Z}_{k\ell}\geq\texttt{t}_{m}-\delta_{1})}{4p^{2}Q(\texttt{t}_{m}-\delta_{1})^{2}}\leq\frac{1}{4p^{2}}\sum\sum\frac{1}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}$
$\displaystyle=1/4+o(1/h)+\frac{1}{4p^{2}}\sum\sum\frac{|\rho_{(i,j),(k,\ell)}|}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}.$
(A.6)
We evaluate the sum using Condition 6. Denote
$\mathcal{H}_{w}^{2}=\mathcal{H}_{w1}\times\mathcal{H}_{w2}$ and
$\mathcal{H}_{s}^{2}=\mathcal{H}_{s1}\times\mathcal{H}_{s2}$. Then it is
decomposed as
$\displaystyle\sum_{(i,j)\in\mathcal{H}_{\leq
0}}\sum_{(k,\ell)\in\mathcal{H}_{\leq 0}}=\sum_{(i,j)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{w1}}\sum_{(k,\ell)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{w2}}+\sum_{(i,j)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{s1}}\sum_{(k,\ell)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{s2}}.$
They are bounded by
$\displaystyle\frac{1}{4p^{2}}\sum_{(i,j)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{w1}}\sum_{(k,\ell)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{w2}}\frac{|\rho_{(i,j),(k,\ell)}|}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}$
$\displaystyle\leq\frac{1}{4p^{2}}\sum_{(i,j)\in\mathcal{H}_{w1}}\sum_{(k,\ell)\in\mathcal{H}_{w2}}\frac{|\rho_{(i,j),(k,\ell)}|}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}$
$\displaystyle\leq\frac{p^{2}-p-|\mathcal{H}_{s1}\times\mathcal{H}_{s2}|}{4p^{2}}\frac{c/\log^{2}p}{\sqrt{1-c^{2}/\log^{4}p}}=O(1)O(1/\log^{2}p)=o(1/h)$
(A.7)
and
$\displaystyle\frac{1}{4p^{2}}\sum_{(i,j)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{s1}}\sum_{(k,\ell)\in\mathcal{H}_{\leq
0}\cap\mathcal{H}_{s2}}\frac{|\rho_{(i,j),(k,\ell)}|}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}$
$\displaystyle\leq\frac{1}{4p^{2}}\sum_{(i,j)\in\mathcal{H}_{s1}}\sum_{(k,\ell)\in\mathcal{H}_{s2}}\frac{|\rho_{(i,j),(k,\ell)}|}{\sqrt{1-\rho_{(i,j),(k,\ell)}^{2}}}$
$\displaystyle\leq\frac{|\mathcal{H}_{s1}\times\mathcal{H}_{s2}|}{4p^{2}}\frac{|\bar{\rho}|}{\sqrt{1-\bar{\rho}^{2}}}=O(1/\log^{2}p)O(1)=o(1/h).$
(A.8)
From (A.3) and (A.5)–(A.8), we have $(i)=1/4+o(1/h)$. This completes the
proof. ∎
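The data-driven critical value in (10) is defined in the main text and is not restated here. As a purely hypothetical sketch in the spirit of Javanmard and Javadi (2019), a rule of the form “take the smallest $\texttt{t}\in[0,\bar{\texttt{t}}]$ whose estimated directional FDP bound $2pQ(\texttt{t})/(|\hat{\mathcal{S}}(\texttt{t})|\vee 1)$ is at most $q$, falling back to $\sqrt{2\log p}$ otherwise” could be implemented as follows; the grid, the constant $a$, and all function names are illustrative assumptions, not the paper’s code.

```python
import numpy as np
from scipy.stats import norm

def threshold_search(T_stats, q, a=3.5):
    """Hypothetical grid search for t_0: return the smallest t in [0, t_bar] whose
    estimated directional-FDP bound 2*p*Q(t) / max(#{|T_ij| >= t}, 1) is <= q;
    otherwise fall back to sqrt(2*log p) as in Case 1 of the proof.
    The proof requires a > 3 in t_bar = sqrt(2*log p - a*log log p)."""
    p = T_stats.size
    t_bar = np.sqrt(2 * np.log(p) - a * np.log(np.log(p)))
    for t in np.linspace(0.0, t_bar, 2000):
        n_rej = np.sum(np.abs(T_stats) >= t)
        if 2 * p * norm.sf(t) / max(n_rej, 1) <= q:
            return t
    return np.sqrt(2 * np.log(p))

# toy usage: mostly null statistics plus a few strong signals
rng = np.random.default_rng(1)
stats = np.concatenate([rng.normal(size=990), rng.normal(6.0, 1.0, size=10)])
print(threshold_search(stats, q=0.1))
```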
### A.3 Proof of Theorem 3
Before proceeding with the proof, we recall the notation. In the proof, we mainly
use the $t$-statistics with the “sandwich” s.e., defined as
$\displaystyle\texttt{T}_{ij}^{*}=\frac{\sqrt{T}\hat{\phi}_{ij}^{*}}{\hat{\sigma}^{*}_{i}\sqrt{\hat{\bm{\omega}}^{\prime}_{j}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}}$
for $(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}$, where
$\hat{\sigma}_{i}^{*2}=(T-\hat{s}_{i})^{-1}\sum_{t=1}^{T}\hat{u}_{it}^{*2}$.
The bootstrap debiased lasso estimator is also defined as
$\displaystyle\hat{\bm{\Phi}}^{*}$
$\displaystyle=\hat{\bm{\Phi}}^{{\textsf{L}}*}+\left(\mathbf{Y}^{*}-\hat{\bm{\Phi}}^{{\textsf{L}}*}\mathbf{X}\right)\mathbf{X}^{\prime}\hat{\bm{\Omega}}/T$
$\displaystyle=\hat{\bm{\Phi}}^{{\textsf{L}}}+\mathbf{U}^{*}\mathbf{X}^{\prime}\bm{\Omega}/T+\mathbf{U}^{*}\mathbf{X}^{\prime}(\hat{\bm{\Omega}}-\bm{\Omega})/T+\left(\hat{\bm{\Phi}}^{{\textsf{L}}}-\hat{\bm{\Phi}}^{{\textsf{L}}*}\right)(\hat{\bm{\Sigma}}_{x}\hat{\bm{\Omega}}-\mathbf{I}).$
By the definition, we have $\hat{\phi}_{ij}^{{\textsf{L}}}=0$ for all
$(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}$, and hence
$\displaystyle\hat{\phi}_{ij}^{*}=\mathbf{u}_{i}^{*}\mathbf{X}^{\prime}\bm{\omega}_{j}/T+\mathbf{u}_{i}^{*}\mathbf{X}^{\prime}(\hat{\bm{\omega}}_{j}-\bm{\omega}_{j})/T+\left(\hat{\bm{\phi}}_{i}^{{\textsf{L}}}-\hat{\bm{\phi}}_{i}^{{\textsf{L}}*}\right)(\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j})$
for all $(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}$. We have also defined
$\displaystyle\mathbb{Q}^{*}(\texttt{t})$
$\displaystyle=\frac{1}{p-\hat{s}}\sum_{(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}}\operatorname{\mathbb{P}}^{*}\left(\texttt{T}_{ij}^{*}>\texttt{t}\right)~{}~{}\text{(conditional
on the observations)},$
where $p=KN^{2}$, and $\operatorname{\mathbb{P}}^{*}$ indicates the
probability measure induced by the bootstrap.
###### Proof.
It is sufficient to consider the case when $\texttt{t}_{0}$ is computed by
(13). From the beginning of Case 2 in the proof of Theorem 2 with the same
argument, the dFDP is decomposed as
$\displaystyle\operatorname{dFDP}(\texttt{t}_{0})$
$\displaystyle=\frac{p\left\\{\mathbb{Q}^{*}(\texttt{t}_{0})+1-\mathbb{Q}^{*}(-\texttt{t}_{0})\right\\}}{|\hat{\mathcal{S}}(\texttt{t}_{0})|\vee
1}\cdot\frac{2Q(\texttt{t}_{0})}{\mathbb{Q}^{*}(\texttt{t}_{0})+1-\mathbb{Q}^{*}(-\texttt{t}_{0})}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\cdot\frac{\sum_{(i,j)\in\mathcal{I}_{\leq
0}}1\\{\texttt{T}_{ij}\geq\texttt{t}_{0}\\}+\sum_{(i,j)\in\mathcal{I}_{\geq
0}}1\\{\texttt{T}_{ij}\leq-\texttt{t}_{0}\\}}{2pQ(\texttt{t}_{0})}$
$\displaystyle\leq
q\cdot\frac{2}{\mathbb{Q}^{*}(\texttt{t}_{0})/Q(\texttt{t}_{0})+\\{1-\mathbb{Q}^{*}(-\texttt{t}_{0})\\}/Q(\texttt{t}_{0})}\left\\{V+1+o(1)\right\\},$
where $V$ is defined in the proof of Theorem 2 and has been shown to satisfy
$V=o_{p}(1)$. Hence, it suffices to prove that the event on which
$|\mathbb{Q}^{*}(\texttt{t}_{0})/Q(\texttt{t}_{0})-1|$ and
$|\\{1-\mathbb{Q}^{*}(-\texttt{t}_{0})\\}/Q(\texttt{t}_{0})-1|$ converge to
zero uniformly in $\texttt{t}_{0}\in[0,\bar{\texttt{t}}]$ occurs with high
probability.
To this end, define
$\displaystyle S_{ij}^{*}$
$\displaystyle=T^{-1/2}\sum_{t=1}^{T}{u}_{it}^{*}\mathbf{x}_{t}^{\prime}\hat{\bm{\omega}}_{j},~{}~{}R_{ij}^{*}=T^{-1/2}\mathbf{u}_{i}^{*}\mathbf{X}^{\prime}(\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j})+\sqrt{T}(\hat{\bm{\phi}}_{i}^{L}-\hat{\bm{\phi}}_{i}^{L*})(\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j}),$
$\displaystyle\hat{m}_{ij}^{*2}$
$\displaystyle=\hat{\sigma}_{i}^{*2}\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j},~{}~{}\tilde{m}_{ij}^{*2}=T^{-1}\sum_{t=1}^{T}{u}_{it}^{*2}\hat{\bm{\omega}}_{j}^{\prime}\mathbf{x}_{t}\mathbf{x}_{t}^{\prime}\hat{\bm{\omega}}_{j},$
where
$\hat{\sigma}_{i}^{*2}=(T-\hat{s}_{i})^{-1}\sum_{t=1}^{T}\hat{u}_{it}^{*2}$
with
$\hat{u}_{it}^{*2}={u}_{it}^{*2}-2{u}_{it}^{*}\mathbf{x}_{t}^{\prime}\bm{\delta}_{i}^{*}+{\bm{\delta}_{i}^{*}}^{\prime}\mathbf{x}_{t}\mathbf{x}_{t}^{\prime}\bm{\delta}_{i}^{*}$
and ${u}_{it}^{*2}=\hat{u}_{it}^{2}\zeta_{t}^{2}$. Then by the construction,
we have $\texttt{T}_{ij}^{*}=(S_{ij}^{*}+R_{ij}^{*})/\hat{m}_{ij}^{*}$. We
first check that $\texttt{T}_{ij}^{*}$ is asymptotically equivalent to the
self-normalized sum, $S_{ij}^{*}/\tilde{m}_{ij}^{*}$, and then verify that it
admits a normal approximation with uniformly small relative error.
Define the events,
$\displaystyle
A_{1}^{*}=\left\\{\max_{i\in[N]}\max_{j\in[KN]}\left|\frac{\tilde{m}_{ij}^{*}}{\hat{m}_{ij}^{*}}-1\right|\lesssim\bar{\mu}_{1}\right\\},~{}~{}~{}A_{2}^{*}=\left\\{\max_{i\in[N]}\max_{j\in[KN]}\left|\frac{R_{ij}^{*}}{\hat{m}_{ij}^{*}}\right|\lesssim\bar{\mu}_{2}\right\\},$
where
$\displaystyle\bar{\mu}_{1}$
$\displaystyle=\max\left\\{\bar{\tau}_{1},\bar{\tau}_{2},\bar{\tau}_{3}\right\\},$
$\displaystyle\bar{\mu}_{2}$
$\displaystyle=\max\left\\{\lambda^{2-r}M_{\omega}^{2-2r}s_{\omega},\
\bar{s}\lambda M_{\omega}\log^{3/2}(N\vee T)\right\\}$
with $\bar{\tau}_{1}$, $\bar{\tau}_{2}$, and $\bar{\tau}_{3}$ defined in
Lemmas 13, 14, and 15, respectively. Then the event,
$\displaystyle
A:=\left\\{\hat{\mathcal{S}}_{{\textsf{L}}}\supset\mathcal{S}\right\\}\cap\left\\{\operatorname{\mathbb{P}}^{*}\left(A_{1}^{*c}\right)=O((NT)^{-\nu})\right\\}\cap\left\\{\operatorname{\mathbb{P}}^{*}\left(A_{2}^{*c}\right)=O((NT)^{-\nu})\right\\},$
occurs with high probability by the assumed condition on
$\hat{\mathcal{S}}_{L}$ and Lemmas 16 and 17. By a simple computation, we have
$\bar{\mu}_{1}\vee\bar{\mu}_{2}\asymp\bar{\mu}$ with
$\displaystyle\bar{\mu}$
$\displaystyle=\max\left\\{M_{\omega}^{3-2r}\lambda^{1-r}s_{\omega}\log^{2}(N\vee
T),~{}M_{\omega}\bar{s}\lambda\log^{3/2}(N\vee
T),~{}M_{\omega}^{2}\lambda\log(N\vee T)\right\\}.$
Because $\bar{\mu}$ tends to zero polynomially, we observe that conditional on
$A$,
$\displaystyle\mathbb{Q}^{*}(\texttt{t})=\frac{1}{|\hat{\mathcal{S}}_{{\textsf{L}}}^{c}|}\sum_{(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}}\operatorname{\mathbb{P}}^{*}\left(\frac{S_{ij}^{*}+R_{ij}^{*}}{\hat{m}_{ij}^{*}}>\texttt{t}\right)$
$\displaystyle\leq\frac{1}{|\hat{\mathcal{S}}_{{\textsf{L}}}^{c}|}\sum_{(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}}\operatorname{\mathbb{P}}^{*}\left(\frac{S_{ij}^{*}}{\tilde{m}_{ij}^{*}}\left\\{1+\left(\frac{\tilde{m}_{ij}^{*}}{\hat{m}_{ij}^{*}}-1\right)\right\\}>\texttt{t}-\frac{R_{ij}^{*}}{\hat{m}_{ij}^{*}},~{}A_{1}^{*}\cap
A_{2}^{*}\right)+\operatorname{\mathbb{P}}^{*}\left(A_{1}^{*c}\cup
A_{2}^{*c}\right)$
$\displaystyle\leq\frac{1}{|\hat{\mathcal{S}}_{{\textsf{L}}}^{c}|}\sum_{(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}}\operatorname{\mathbb{P}}^{*}\left(\frac{S_{ij}^{*}}{\tilde{m}_{ij}^{*}}>\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)+O((N\vee
T)^{-\nu})$ (A.9)
with high probability, where the terms, $O(\bar{\mu}_{1})$,
$O(\bar{\mu}_{2})$, and $O((N\vee T)^{-\nu})$, depend neither on $\texttt{t}$ nor on
$(i,j)$. Since $(i,j)\in\hat{\mathcal{S}}_{{\textsf{L}}}^{c}$ implies
$(i,j)\in\mathcal{S}^{c}$ on event $A$, Lemma 18 entails the normal
approximation of the self-normalized sum,
$\displaystyle\operatorname{\mathbb{P}}^{*}\left(\frac{S_{ij}^{*}}{\tilde{m}_{ij}^{*}}>\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)$
$\displaystyle=Q\left(\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)\left\\{1+O\left(\frac{M_{\omega}\log
NT}{T^{1/2}}\right)\left(1+\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)^{3}\right\\},$
(A.10)
with high probability. Combining (A.9) and (A.10) and dividing both sides
by $Q(\texttt{t})$ yield
$\displaystyle\frac{\mathbb{Q}^{*}(\texttt{t})}{Q(\texttt{t})}$
$\displaystyle=\frac{Q\left(\left\\{\texttt{t}-O(\bar{\mu}_{2})\right\\}/\left\\{1+O(\bar{\mu}_{1})\right\\}\right)}{Q(\texttt{t})}\left\\{1+O\left(\frac{M_{\omega}\log
NT}{T^{1/2}}\right)\left(1+\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)^{3}\right\\}$
$\displaystyle+\frac{O((N\vee T)^{-\nu})}{Q(\texttt{t})}.$
By the assumed condition, $\bar{\mu}_{1}$, $\bar{\mu}_{2}$ and
$M_{\omega}/T^{1/2}$ decay polynomially while
$\texttt{t}\in[0,\bar{\texttt{t}}]$ diverges at most logarithmically. Thus we
obtain uniformly in $\texttt{t}\in[0,\bar{\texttt{t}}]$,
$\displaystyle 1+O\left(\frac{M_{\omega}\log
NT}{T^{1/2}}\right)\left(1+\frac{\texttt{t}-O(\bar{\mu}_{2})}{1+O(\bar{\mu}_{1})}\right)^{3}=1+o(1)$
and, by an application of Lemma 7.2 of Javanmard and Javadi (2019),
$\displaystyle\frac{Q\left(\\{\texttt{t}-O(\bar{\mu}_{2})\\}/\\{1+O(\bar{\mu}_{1})\\}\right)}{Q(\texttt{t})}=1+O(\bar{\mu}_{2})(1+\texttt{t})+O(\bar{\mu}_{1})(1+\texttt{t}^{2})=1+o(1)$
uniformly in $\texttt{t}\in[0,\bar{\texttt{t}}]$. By (A.4), we have
$\displaystyle\sup_{\texttt{t}\in[0,\bar{\texttt{t}}]}\frac{O((N\vee
T)^{-\nu})}{Q(\texttt{t})}=\frac{O((N\vee
T)^{-\nu})}{Q(\bar{\texttt{t}})}=O((N\vee T)^{-\nu+2}).$
Consequently, it holds that
$\displaystyle\sup_{\texttt{t}\in[0,\bar{\texttt{t}}]}\left|\frac{\mathbb{Q}^{*}(\texttt{t})}{Q(\texttt{t})}-1\right|=o(1)$
with high probability for any $\nu>2$. The same argument applies to show that
$|\\{1-\mathbb{Q}^{*}(-\texttt{t})\\}/Q(\texttt{t})-1|=o(1)$ uniformly in
$\texttt{t}\in[0,\bar{\texttt{t}}]$. This completes the proof. ∎
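For readers who wish to cross-check the bootstrap quantities numerically, the following is a schematic Monte Carlo rendering of $\mathbb{Q}^{*}(\texttt{t})$ that keeps only the leading term $S_{ij}^{*}$ (the proof shows $R_{ij}^{*}/\hat{m}_{ij}^{*}$ is negligible), uses a crude plug-in variance in place of $\hat{\sigma}_{i}^{*2}$, and assumes standard normal multipliers $\zeta_{t}$. It is an illustration under these stated assumptions, not the paper’s implementation.

```python
import numpy as np

def bootstrap_Q(U_hat, X, Omega_hat, Sigma_x_hat, idx_SLc, t_grid, B=200, seed=0):
    """Monte Carlo estimate of Q*(t), keeping only the leading term S*_ij of the
    bootstrap statistic. U_hat: N x T residuals, X: KN x T regressors,
    Omega_hat: KN x KN precision estimate, idx_SLc: (i, j) pairs outside the
    lasso active set, t_grid: grid of thresholds t."""
    rng = np.random.default_rng(seed)
    N, T = U_hat.shape
    rows, cols = np.asarray(idx_SLc).T
    # omega_j' Sigma_x_hat omega_j for each column j of Omega_hat
    w = np.diag(Omega_hat.T @ Sigma_x_hat @ Omega_hat)
    Q_star = np.zeros(len(t_grid))
    for _ in range(B):
        zeta = rng.normal(size=T)                        # multipliers, assumed N(0,1)
        U_star = U_hat * zeta                            # u*_it = u_hat_it * zeta_t
        S_star = U_star @ X.T @ Omega_hat / np.sqrt(T)   # leading term, N x KN
        sigma_star = np.sqrt((U_star ** 2).mean(axis=1)) # crude variance plug-in
        T_star = S_star / (sigma_star[:, None] * np.sqrt(w)[None, :])
        vals = T_star[rows, cols]
        Q_star += np.array([(vals > t).mean() for t in t_grid])
    return Q_star / B
```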
### A.4 Proof of Theorem 4
###### Proof.
We use the same notation as in the proof of Theorem 2. Let
$\bar{\texttt{t}}_{0}=\sqrt{2\log p}$ denote the upper bound of the critical
value, $\texttt{t}_{0}$. By the definition of power and monotonicity of
probability, we have
$\displaystyle\operatorname{Power}(\texttt{t}_{0})$
$\displaystyle=\operatorname{\mathbb{E}}\left[\frac{|\\{(i,j)\in\hat{\mathcal{S}}(\texttt{t}_{0}):\operatorname{sgn}(\hat{\phi}_{ij})=\operatorname{sgn}(\phi_{ij})\\}|}{s\vee
1}\right]$
$\displaystyle=\frac{1}{s}\sum_{(i,j)\in\mathcal{S}_{<0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\leq-\texttt{t}_{0}\right)+\frac{1}{s}\sum_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\geq\texttt{t}_{0}\right)$
$\displaystyle\geq\frac{1}{s}\sum_{(i,j)\in\mathcal{S}_{<0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\leq-\bar{\texttt{t}}_{0}\right)+\frac{1}{s}\sum_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\geq\bar{\texttt{t}}_{0}\right).$
Consider the second probability in the lower bound. Proposition 2 gives
$\displaystyle\texttt{T}_{ij}=\sqrt{T}\hat{\phi}_{ij}/\hat{m}_{ij}=(\sqrt{T}\phi_{ij}+z_{ij}+r_{ij})/\hat{m}_{ij},$
where $\phi_{ij}=0$ if and only if $(i,j)\in\mathcal{S}^{c}$. It holds that
$\displaystyle\max_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\texttt{T}_{ij}\leq\bar{\texttt{t}}_{0}\right)$
$\displaystyle\leq\max_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\frac{z_{ij}+r_{ij}}{\hat{m}_{ij}}+\frac{m_{ij}}{\hat{m}_{ij}}\frac{\sqrt{T}\phi_{ij}^{0}}{m_{ij}}\leq\bar{\texttt{t}}_{0},~{}\frac{m_{ij}}{\hat{m}_{ij}}>\frac{1}{2}\right)+\max_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\frac{m_{ij}}{\hat{m}_{ij}}\leq\frac{1}{2}\right)$
$\displaystyle\leq\max_{(i,j)\in\mathcal{S}_{>0}}\operatorname{\mathbb{P}}\left(\mathcal{Z}_{ij}\leq\bar{\texttt{t}}_{0}-\min_{(i,j)\in\mathcal{S}_{>0}}\frac{\sqrt{T}\phi_{ij}^{0}}{2m_{ij}}+\delta_{1}\right)+O\left((N\vee
T)^{-\nu}\right)$ $\displaystyle\leq\Phi\left(-\sqrt{2\log
p}+\delta_{1}\right)+O\left((N\vee T)^{-\nu}\right),$
where the second inequality follows from Lemmas 7 and 8, and the third
inequality holds by $\bar{\texttt{t}}_{0}=\sqrt{2\log p}$ and Condition 8.
Since $\delta_{1}\sqrt{\log p}=o(1)$ by the assumption, the Gaussian
probability in the upper bound tends to zero. The same result is obtained for
the first probability. Thus the power goes to unity because
$|\mathcal{S}_{<0}|+|\mathcal{S}_{>0}|=s$. This completes the proof. ∎
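As a worked aside, the convergence of the Gaussian probability at the end of the above proof can be quantified via the standard tail bound $\Phi(-x)\leq e^{-x^{2}/2}$ for $x\geq 0$: since $\delta_{1}\sqrt{\log p}=o(1)$,

$\displaystyle\Phi\left(-\sqrt{2\log p}+\delta_{1}\right)\leq\exp\left\\{-\frac{(\sqrt{2\log p}-\delta_{1})^{2}}{2}\right\\}=p^{-1}\exp\left\\{\delta_{1}\sqrt{2\log p}-\delta_{1}^{2}/2\right\\}=p^{-1}e^{o(1)}\to 0.$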
## Appendix B Proofs of Propositions
### B.1 Proof of Proposition 1
###### Proof.
We derive the non-asymptotic error bound for the lasso estimator. First, define
two events:
$\displaystyle\mathcal{E}_{1}=\left\\{\left\|T^{-1}\mathbf{X}\mathbf{U}^{\prime}\right\|_{\max}\leq\lambda/2\right\\},~{}~{}~{}\mathcal{E}_{2}=\left\\{\left\|\hat{\bm{\Sigma}}_{x}-\bm{\Sigma}_{x}\right\|_{\max}\leq
b\lambda/2\right\\},$
where $\lambda=8bc_{uu}\sqrt{2(\nu+5)^{3}\log^{3}(N\vee T)/T}$ with any
positive constant $\nu$. We work on event $\mathcal{E}_{1}\cap\mathcal{E}_{2}$
since Lemmas 3 and 4 guarantee that $\mathcal{E}_{1}\cap\mathcal{E}_{2}$
occurs with probability at least $1-O((N\vee T)^{-\nu})$.
Define
$\bm{\delta}_{i\cdot}=\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot}$.
Because $\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}$ minimizes the objective
function, we have
$\displaystyle(2T)^{-1}\|\mathbf{y}_{i\cdot}-\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}\mathbf{X}\|_{2}^{2}+\lambda\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}\|_{1}$
$\displaystyle\leq(2T)^{-1}\|\mathbf{y}_{i\cdot}-\bm{\phi}_{i\cdot}\mathbf{X}\|_{2}^{2}+\lambda\|\bm{\phi}_{i\cdot}\|_{1},$
which is equivalently written as
$\displaystyle(2T)^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}$
$\displaystyle\leq
T^{-1}\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}\bm{\delta}_{i\cdot}^{\prime}+\lambda\|\bm{\phi}_{i\cdot}\|_{1}-\lambda\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}\|_{1}.$
By Hölder’s inequality and the triangle inequality, we have
$\displaystyle(2T)^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}$
$\displaystyle\leq\left|T^{-1}\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}\bm{\delta}_{i\cdot}^{\prime}\right|+\lambda\|\bm{\phi}_{i\cdot}\|_{1}-\lambda\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}\|_{1}$
$\displaystyle\leq\|T^{-1}\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}\|_{\infty}\|\bm{\delta}_{i\cdot}\|_{1}+\lambda\|\bm{\delta}_{i\cdot}\|_{1}.$
On event $\mathcal{E}_{1}$, we thus obtain the upper bound of
$\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}$ as
$\displaystyle T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}\leq
3\lambda\|\bm{\delta}_{i\cdot}\|_{1}.$ (A.11)
Next we bound $\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}$ from below. Lemma 5
states that $\bm{\delta}_{i\cdot}$ lies in
$\mathcal{D}=\\{\bm{\delta}_{i\cdot}\in\mathbb{R}^{1\times
KN}:\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}\leq
3\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}\\}$ on $\mathcal{E}_{1}$. Hence under
$\mathcal{E}_{1}$ and $\mathcal{E}_{2}$, Lemma 6 entails
$\displaystyle
T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}/\|\bm{\delta}_{i\cdot}\|_{2}^{2}\geq\gamma-16s_{i}\|\hat{\bm{\Sigma}}_{x}-\bm{\Sigma}_{x}\|_{\max}\geq\gamma-8bs_{i}\lambda.$
(A.12)
By (A.11), (A.12), Lemma 5, and the Cauchy–Schwarz inequality, we have
$\displaystyle(\gamma-8bs_{i}\lambda)\|\bm{\delta}_{i\cdot}\|_{2}^{2}$
$\displaystyle\leq
3\lambda\|\bm{\delta}_{i\cdot}\|_{1}=3\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}+3\lambda\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}$
$\displaystyle\leq 12\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}\leq
12s_{i}^{1/2}\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{2}\leq
12s_{i}^{1/2}\lambda\|\bm{\delta}_{i\cdot}\|_{2}.$
This concludes that
$\displaystyle\|\bm{\delta}_{i\cdot}\|_{2}\leq\frac{12s_{i}^{1/2}\lambda}{\gamma-8bs_{i}\lambda}.$
Next we derive the prediction error bound. By Lemma 5 and the Cauchy–Schwarz
inequality again, the error bound in the element-wise $\ell_{1}$-norm is given
by
$\displaystyle\|\bm{\delta}_{i\cdot}\|_{1}=\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}+\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}\leq
4\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}\leq
4s_{i}^{1/2}\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{2}\leq
4s_{i}^{1/2}\|\bm{\delta}_{i\cdot}\|_{2}\leq\frac{48s_{i}\lambda}{\gamma-8bs_{i}\lambda}.$
(A.13)
From (A.11) and (A.13), the prediction error bound is obtained as
$\displaystyle T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}\leq
3\lambda\|\bm{\delta}_{i\cdot}\|_{1}\leq\frac{144s_{i}\lambda^{2}}{\gamma-8bs_{i}\lambda}.$
This completes the proof. ∎
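As an implementation remark outside the formal argument, the objective $(2T)^{-1}\|\mathbf{y}_{i\cdot}-\bm{\phi}_{i\cdot}\mathbf{X}\|_{2}^{2}+\lambda\|\bm{\phi}_{i\cdot}\|_{1}$ minimized above uses the same $(2T)^{-1}$ scaling as scikit-learn’s Lasso, so its alpha parameter corresponds to $\lambda$. The row-wise sketch below is illustrative only; the array names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rowwise_lasso(Y, X, lam):
    """Fit phi_i. for each row i by minimizing
    (2T)^{-1} ||y_i. - phi_i. X||_2^2 + lam * ||phi_i.||_1.
    Y: N x T responses, X: KN x T stacked lagged regressors."""
    N, T = Y.shape
    Phi_hat = np.zeros((N, X.shape[0]))
    for i in range(N):
        # sklearn minimizes (1/(2*n_samples))||y - Xw||_2^2 + alpha*||w||_1,
        # which matches the (2T)^{-1} scaling above with alpha = lam
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X.T, Y[i])
        Phi_hat[i] = fit.coef_
    return Phi_hat
```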
### B.2 Proof of Proposition 2
###### Proof.
The first assertion is trivial by the construction of the debiased lasso
estimator. We derive the upper bound of $|r_{ij}|$. Observe that for each
$i\in[N]$,
$\displaystyle|r_{ij}|$
$\displaystyle\leq\left\|T^{-1/2}\mathbf{u}_{i\cdot}\mathbf{X}^{\prime}\right\|_{\infty}\left\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\right\|_{1}+\sqrt{T}\left\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot}\right\|_{1}\left\|\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j}\right\|_{\infty}$
$\displaystyle\leq\left\|T^{-1/2}\mathbf{U}\mathbf{X}^{\prime}\right\|_{\max}\max_{j}\left\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\right\|_{1}+\sqrt{T}\left\|\hat{\bm{\phi}}_{i\cdot}^{{\textsf{L}}}-\bm{\phi}_{i\cdot}\right\|_{1}\left\|\hat{\bm{\Sigma}}_{x}\hat{\bm{\Omega}}-\mathbf{I}_{KN}\right\|_{\max}$
uniformly in $j\in[KN]$. Let
$\lambda_{1}=b\|\bm{\Omega}\|_{\ell_{1}}\lambda/2$, and consider the event
$\displaystyle\\{\|\mathbf{X}\mathbf{U}^{\prime}/T\|_{\max}\leq\lambda/2\\}\cap\\{\|\hat{\bm{\Sigma}}_{x}-\bm{\Sigma}_{x}\|_{\max}\leq\lambda_{1}/\|\bm{\Omega}\|_{\ell_{1}}\\}.$
Then this occurs with probability at least $1-O((N\vee T)^{-\nu})$ by Lemmas 3
and 4, as in the proof of Proposition 1. On this event, the proof of Theorem 6
in Cai et al. (2011) establishes the bounds,
$\max_{j}\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\|_{1}\lesssim(\|\bm{\Omega}\|_{\ell_{1}}\lambda_{1})^{1-r}s_{\omega}$
and
$\|\hat{\bm{\Sigma}}_{x}\hat{\bm{\Omega}}-\mathbf{I}_{KN}\|_{\max}\leq\lambda_{1}$,
under Condition 4. Therefore, together with the Lasso bound derived in
Proposition 1 and $\lambda_{1}\lesssim M_{\omega}\lambda$, we obtain
$\displaystyle\max_{j\in[KN]}|r_{ij}|$
$\displaystyle\lesssim\sqrt{T}\lambda(\|\bm{\Omega}\|_{\ell_{1}}\lambda_{1})^{1-r}s_{\omega}/2+\frac{48s_{i}\sqrt{T}\lambda\lambda_{1}}{\gamma-8bs_{i}\lambda}$
$\displaystyle\lesssim\sqrt{T}\lambda
M_{\omega}^{1-r}(M_{\omega}\lambda)^{1-r}s_{\omega}+s_{i}\sqrt{T}\lambda^{2}M_{\omega}$
$\displaystyle\lesssim\left(s_{\omega}M_{\omega}^{2-2r}\lambda^{1-r}+s_{i}M_{\omega}\lambda\right)\sqrt{\log^{3}(N\vee
T)}~{}(=:\bar{r}_{i})$
for each $i\in[N]$ such that $s_{i}\lambda=o(1)$. This completes the proof. ∎
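For reference, the debiasing step that produces $\hat{\bm{\Phi}}$ from the lasso fit, mirroring the bootstrap formula displayed before the proof of Theorem 3, takes a one-line matrix form. The numpy sketch below is illustrative and the array names are assumptions.

```python
import numpy as np

def debias(Phi_lasso, Y, X, Omega_hat):
    """Debiased lasso estimate Phi_lasso + (Y - Phi_lasso X) X' Omega_hat / T.
    Y: N x T, X: KN x T, Phi_lasso: N x KN, Omega_hat: KN x KN."""
    T = Y.shape[1]
    return Phi_lasso + (Y - Phi_lasso @ X) @ X.T @ Omega_hat / T
```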
## Appendix C Lemmas and their Proofs
###### Lemma 1.
If Condition 1 is true, then for any $i,j\in[N]$ and $s,t\in[T]$, there exist
constants $c_{uu},c_{m}>0$ such that
$\displaystyle(a)~{}~{}$
$\displaystyle\operatorname{\mathbb{P}}\left(\left|\frac{1}{T}\sum_{t=1}^{T}(u_{it}u_{jt}-\operatorname{\mathbb{E}}[u_{it}u_{jt}])\right|>x\right)\leq
2\exp\left\\{-\frac{1}{2}\left(\frac{Tx^{2}}{c_{uu}^{2}}\wedge\frac{Tx}{c_{uu}}\right)\right\\},$
$\displaystyle(b)~{}~{}$
$\displaystyle\operatorname{\mathbb{E}}\left[\max_{i\in[N]}\max_{t\in[T]}|u_{it}|^{m}\right]\leq
c_{m}\log^{m/2}(N\vee T),$
where $m\in\mathbb{N}$ is an arbitrary fixed constant.
###### Proof of Lemma 1.
We first prove $(a)$. Since $u_{it}$ is sub-Gaussian by Condition 1, Lemma 2.7.7 of
Vershynin (2018) entails that for any $i,j\in[N]$,
$\\{u_{it}u_{jt}-\operatorname{\mathbb{E}}[u_{it}u_{jt}]\\}_{t}$ is a sequence
of i.i.d. (centered) sub-exponential random variables. Thus the result
follows directly from Bernstein’s inequality (Vershynin, 2018, Theorem 2.8.1).
We next prove $(b)$. For any fixed $m\in\mathbb{N}$, the sub-Gaussian tail property
for $u_{it}$ in Condition 1 implies for all $x>0$,
$\displaystyle\operatorname{\mathbb{P}}(|u_{it}|^{m}>x)=\operatorname{\mathbb{P}}(|u_{it}|>x^{1/m})\leq
2\exp(-x^{2/m}/c_{u}).$
Using this tail probability with the union bound, we have
$\displaystyle\operatorname{\mathbb{E}}\left[\max_{i,t\in\mathbb{N}}\frac{|u_{it}|^{m}}{(1+\log
it)^{m/2}}\right]\leq\int_{0}^{\infty}\operatorname{\mathbb{P}}\left(\max_{i,t\in\mathbb{N}}\frac{|u_{it}|^{m}}{(1+\log
it)^{m/2}}>x\right)\mathop{}\\!\mathrm{d}x$
$\displaystyle\quad\leq\int_{0}^{(2c_{u})^{m/2}}\mathop{}\\!\mathrm{d}x+2\sum_{i=1}^{\infty}\sum_{t=1}^{\infty}\int_{(2c_{u})^{m/2}}^{\infty}\exp\left(-\frac{x^{2/m}(1+\log
i+\log t)}{c_{u}}\right)\mathop{}\\!\mathrm{d}x$
$\displaystyle\quad\leq(2c_{u})^{m/2}+2\sum_{i=1}^{\infty}i^{-2}\sum_{t=1}^{\infty}t^{-2}\int_{(2c_{u})^{m/2}}^{\infty}\exp\left(-\frac{x^{2/m}}{c_{u}}\right)\mathop{}\\!\mathrm{d}x,$
where the upper bound is further bounded by some positive constant. Thus
there exists some constant $M>0$ such that
$\displaystyle M$
$\displaystyle\geq\operatorname{\mathbb{E}}\left[\max_{i,t\in\mathbb{N}}\frac{|u_{it}|^{m}}{(1+\log
it)^{m/2}}\right]$
$\displaystyle\geq\operatorname{\mathbb{E}}\left[\max_{i\in[N],t\in[T]}\frac{|u_{it}|^{m}}{(1+\log
it)^{m/2}}\right]\geq\operatorname{\mathbb{E}}\left[\max_{i\in[N],t\in[T]}|u_{it}|^{m}\right]\frac{1}{(1+\log
NT)^{m/2}}.$
Replacing the constant factor appropriately gives the result. This completes
the proof. ∎
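A quick simulation illustrates the $\log^{m/2}(N\vee T)$ scaling in part $(b)$ in the special case $m=1$; drawing i.i.d. standard normal errors here is an assumption made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
for N, T in ((50, 200), (200, 1000), (500, 5000)):
    U = rng.normal(size=(N, T))
    # max_{i,t} |u_it| grows like sqrt(log(N*T)), consistent with Lemma 1(b)
    print(N, T, np.abs(U).max(), np.sqrt(2 * np.log(N * T)))
```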
###### Lemma 2.
If Condition 2 is true, then there exists a constant $\delta\in(0,1)$ such that
for any monotonically divergent sequence $r_{T}>0$ with sufficiently large $T$,
$\displaystyle\sum_{\ell=r_{T}}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}\leq\frac{\delta^{r_{T}}}{1-\delta}.$
(A.14)
In particular, the summability condition in (14) holds.
###### Proof of Lemma 2.
The VAR($K$) model is written as the VAR(1) companion form with coefficient
$\displaystyle\mathbf{A}=\begin{pmatrix}\bm{\Phi}_{1}&\bm{\Phi}_{2}&\dots&\bm{\Phi}_{K-1}&\bm{\Phi}_{K}\\\
\mathbf{I}_{N}&\mathbf{0}&\dots&\mathbf{0}&\mathbf{0}\\\
\mathbf{0}&\mathbf{I}_{N}&\dots&\mathbf{0}&\mathbf{0}\\\
\vdots&\vdots&&\vdots&\vdots\\\
\mathbf{0}&\mathbf{0}&\dots&\mathbf{I}_{N}&\mathbf{0}\end{pmatrix}.$
Condition 2 entails that the spectral radius of $\mathbf{A}$, defined as
$\rho(\mathbf{A})=\max_{j}|\lambda_{j}(\mathbf{A})|$, is strictly less than
one uniformly in $N$. This implies that the VAR(1) model can be inverted into a
VMA($\infty$) representation. Therefore, we have the representation
$\mathbf{y}_{t}=\sum_{\ell=0}^{\infty}\mathbf{B}_{\ell}\mathbf{u}_{t-\ell}$,
where $\mathbf{B}_{0}=\mathbf{I}_{N}$ and
$\mathbf{B}_{\ell}=\mathbf{J}^{\prime}\mathbf{A}^{\ell}\mathbf{J}$ with
$\mathbf{J}^{\prime}=(\mathbf{I}_{N},\mathbf{0}_{N\times(KN-N)})$ for
$\ell=1,2,\dots$. Note that
$\displaystyle\sum_{\ell=0}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}\leq\sum_{\ell=0}^{\infty}\|\mathbf{J}\|_{\infty}^{2}\|\mathbf{A}^{\ell}\|_{\infty}=\sum_{\ell=0}^{\infty}\|\mathbf{A}^{\ell}\|_{\infty}.$
Again, since we have $\rho(\mathbf{A})<1$ uniformly in $N$ by Condition 2, we
can always pick a small constant $\varepsilon>0$ such that
$\delta:=\rho(\mathbf{A})+\varepsilon$ lies in $(0,1)$. We apply the Gelfand
formula (Horn and Johnson, 2012); for this choice of $\varepsilon$, there
exists $T^{*}$ such that $\|\mathbf{A}^{r_{T}}\|_{\infty}\leq\delta^{r_{T}}$
for all $T\geq T^{*}$, where $r_{T}>0$ is any monotonically diverging sequence
as $T\to\infty$. Therefore, for any $T\geq T^{*}$ we have
$\displaystyle\sum_{\ell=r_{T}}^{\infty}\|\mathbf{A}^{\ell}\|_{\infty}\leq\sum_{\ell=r_{T}}^{\infty}\delta^{\ell}=\frac{\delta^{r_{T}}}{1-\delta},$
which proves (A.14). Finally, because
$\limsup_{\ell\to\infty}\|\mathbf{A}^{\ell}\|_{\infty}^{1/\ell}\leq\delta<1$,
the summability condition in (14) holds by the root test. This completes the
proof. ∎
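To illustrate the companion-form argument numerically, the sketch below builds $\mathbf{A}$ from given lag matrices and checks that $\rho(\mathbf{A})<1$ and that $\|\mathbf{A}^{\ell}\|_{\infty}$ decays geometrically for large $\ell$; the toy coefficients are arbitrary and purely illustrative.

```python
import numpy as np

def companion(Phis):
    """Stack VAR(K) lag matrices Phi_1,...,Phi_K (each N x N) into the KN x KN
    companion matrix A used in the proof of Lemma 2."""
    K, N = len(Phis), Phis[0].shape[0]
    A = np.zeros((K * N, K * N))
    A[:N, :] = np.hstack(Phis)          # first block row: (Phi_1, ..., Phi_K)
    A[N:, :-N] = np.eye((K - 1) * N)    # identity blocks on the subdiagonal
    return A

# toy check of rho(A) < 1 and geometric decay of ||A^l||_inf
rng = np.random.default_rng(0)
Phis = [0.3 * np.eye(3) + 0.05 * rng.normal(size=(3, 3)), 0.1 * np.eye(3)]
A = companion(Phis)
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(A))))
print([np.linalg.norm(np.linalg.matrix_power(A, l), np.inf) for l in (1, 5, 10, 20)])
```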
###### Lemma 3.
Let $b=\sum_{\ell=0}^{\infty}\left\|\mathbf{B}_{\ell}\right\|_{\infty}$. If
Conditions 1 and 2 are true, then for any $\bar{c}_{1}>0$ and arbitrary
divergent sequence $r_{T}>0$ with sufficiently large $T$, we have for all
$x>0$,
$\displaystyle\operatorname{\mathbb{P}}\left(\left\|T^{-1}\mathbf{X}\mathbf{U}^{\prime}\right\|_{\max}>x\right)$
$\displaystyle\leq
2KN^{2}\exp\left(-\frac{x^{2}T}{2\bar{c}_{1}^{2}}\right)+2r_{T}KN^{2}T\exp\left(-\frac{\bar{c}_{1}}{4bc_{uu}}\right)+2c_{1}^{2}KT\frac{\delta^{r_{T}}\log
N}{\bar{c}_{1}(1-\delta)},$
where constants $\delta\in(0,1)$ and $c_{uu},c_{1}>0$ are given by Lemmas 1
and 2, respectively. In particular, if we set
$x=\sqrt{2\bar{c}_{1}^{2}(\nu+2)\log(N\vee T)/T}$,
$\bar{c}_{1}=4bc_{uu}(\nu+4)\log(N\vee T)$, and $r_{T}=T$ with any constant
$\nu>0$, then the upper bound becomes $O((N\vee T)^{-\nu})$.
###### Proof of Lemma 3.
Following the definition, we have for any $x,\bar{c}_{1}>0$,
$\displaystyle\operatorname{\mathbb{P}}\left(\left\|T^{-1}\mathbf{X}\mathbf{U}^{\prime}\right\|_{\max}>x\right)=\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\left\|T^{-1}\sum_{t=1}^{T}\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\right\|_{\max}>x\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\left\|T^{-1}\sum_{t=1}^{T}\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\right\|_{\max}>x\mid\max_{k\in[K]}\max_{t\in[T]}\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\|_{\max}\leq\bar{c}_{1}\right)$
$\displaystyle\qquad+\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\|_{\max}>\bar{c}_{1}\right).$
Consider the first term. Note for any $k\in[K]$ that
$\\{\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\\}$ forms a martingale difference
sequence with respect to the filtration,
$\mathcal{F}_{t}=\sigma\\{\mathbf{u}_{t-s}:s=0,1,\dots\\}$. Thus the union
bound and the Azuma-Hoeffding inequality (Bercu et al., 2015) give
$\displaystyle\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\left\|T^{-1}\sum_{t=1}^{T}\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\right\|_{\max}>x\mid\max_{k\in[K]}\max_{t\in[T]}\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\|_{\max}\leq\bar{c}_{1}\right)$
$\displaystyle\leq
KN^{2}\max_{k\in[K]}\max_{i,j\in[N]}\operatorname{\mathbb{P}}\left(\left|T^{-1}\sum_{t=1}^{T}y_{i,t-k}u_{jt}\right|>x\mid\max_{k\in[K]}\max_{t\in[T]}\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\|_{\max}\leq\bar{c}_{1}\right)$
$\displaystyle\leq 2KN^{2}\exp\left(-\frac{x^{2}T}{2\bar{c}_{1}^{2}}\right).$
For the second term, applying the triangle and Hölder’s inequalities yields
$\displaystyle\left\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\right\|_{\max}=\left\|\sum_{\ell=0}^{\infty}\mathbf{B}_{\ell}\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}\leq\left(\sum_{\ell=0}^{r-1}+\sum_{\ell=r}^{\infty}\right)\left\|\mathbf{B}_{\ell}\right\|_{\infty}\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max},$
and hence,
$\displaystyle\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\|\mathbf{y}_{t-k}\mathbf{u}_{t}^{\prime}\|_{\max}>\bar{c}_{1}\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\sum_{\ell=0}^{r-1}\left\|\mathbf{B}_{\ell}\right\|_{\infty}\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}>\bar{c}_{1}/2\right)$
$\displaystyle\qquad+\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\sum_{\ell=r}^{\infty}\left\|\mathbf{B}_{\ell}\right\|_{\infty}\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}>\bar{c}_{1}/2\right).$
Consider the probability with $\sum_{\ell=0}^{r-1}$. The union bound and the
summability condition in Lemma 2 yield
$\displaystyle\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\sum_{\ell=0}^{r-1}\left\|\mathbf{B}_{\ell}\right\|_{\infty}\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}>\bar{c}_{1}/2\right)$
$\displaystyle\leq
rKT\max_{k\in[K]}\max_{t\in[T]}\max_{\ell=0,\dots,r-1}\operatorname{\mathbb{P}}\left(\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}>\frac{\bar{c}_{1}}{2\sum_{\ell=0}^{r-1}\left\|\mathbf{B}_{\ell}\right\|_{\infty}}\right)$
$\displaystyle\leq
rKN^{2}T\max_{k\in[K+r-1]}\max_{t\in[T]}\max_{i,j\in[N]}\operatorname{\mathbb{P}}\left(\left|u_{i,t-k}u_{jt}\right|>\frac{\bar{c}_{1}}{2b}\right)$
$\displaystyle\leq 2rKN^{2}T\exp\left(-\frac{\bar{c}_{1}}{4bc_{uu}}\right),$
where the last inequality holds by Lemma 1$(a)$ since $u_{i,t-k}u_{jt}$ is
sub-exponential (Vershynin, 2018, Proposition 2.7.1 and Lemma 2.7.7). Consider
the probability with $\sum_{\ell=r}^{\infty}$. By the union bound and the
Markov inequality, we have
$\displaystyle\operatorname{\mathbb{P}}\left(\max_{k\in[K]}\max_{t\in[T]}\sum_{\ell=r}^{\infty}\left\|\mathbf{B}_{\ell}\right\|_{\infty}\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}>\bar{c}_{1}/2\right)$
$\displaystyle\leq
2KT\sum_{\ell=r}^{\infty}\left\|\mathbf{B}_{\ell}\right\|_{\infty}\operatorname{\mathbb{E}}\left[\left\|\mathbf{u}_{t-k-\ell}\mathbf{u}_{t}^{\prime}\right\|_{\max}\right]/\bar{c}_{1}$
$\displaystyle\leq
2KT\frac{\delta^{r}}{\bar{c}_{1}(1-\delta)}\operatorname{\mathbb{E}}\left[\left\|\mathbf{u}_{t}\right\|_{\max}\right]^{2}\leq
2c_{1}^{2}KT\frac{\delta^{r}\log N}{\bar{c}_{1}(1-\delta)},$
where the last inequality holds by Lemma 1(b). Combining these upper bounds
completes the proof. ∎
###### Lemma 4.
Let $b=\sum_{\ell=0}^{\infty}\left\|\mathbf{B}_{\ell}\right\|_{\infty}$. If
Conditions 1 and 2 are true, then for any $\bar{c}_{2}\geq c_{uu}$ and
arbitrary divergent sequence $r_{T}>0$ with sufficiently large $T$, we have
for all $x>0$,
$\displaystyle\operatorname{\mathbb{P}}\left(\left\|T^{-1}\mathbf{X}\mathbf{X}^{\prime}-\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]\right\|_{\max}>x\right)$
$\displaystyle\leq
2K^{2}r_{T}^{2}N^{2}\exp\left(-\frac{x^{2}T}{8b^{4}\bar{c}_{2}^{2}}\right)+2K^{2}r_{T}^{2}N^{2}T\exp\left(-\frac{\bar{c}_{2}}{2c_{uu}}\right)+6K^{2}bc_{2}\frac{\delta^{r_{T}}\log
N}{x(1-\delta)},$
where constants $\delta\in(0,1)$ and $c_{uu},c_{2}>0$ are given by Lemmas 1
and 2, respectively. In particular, if we set
$x=\sqrt{8(\nu+4)b^{4}\bar{c}_{2}^{2}\log(N\vee T)/T}$,
$\bar{c}_{2}=2(\nu+5)c_{uu}\log(N\vee T)$, and $r_{T}=T$ with any constant
$\nu>0$, the upper bound becomes $O((N\vee T)^{-\nu})$.
###### Proof of Lemma 4.
For $g,h\in[K]$, define
$\displaystyle\mathbf{W}_{g,h}=T^{-1}\sum_{t=1}^{T}\left(\mathbf{u}_{t-g}\mathbf{u}_{t-h}^{\prime}-\operatorname{\mathbb{E}}[\mathbf{u}_{t-g}\mathbf{u}_{t-h}^{\prime}]\right).$
By the VMA($\infty$) representation in Lemma 2, we have
$\displaystyle\mathbf{y}_{t-g}\mathbf{y}_{t-h}^{\prime}=\sum_{\ell=0}^{\infty}\sum_{m=0}^{\infty}\mathbf{B}_{\ell}\mathbf{u}_{t-g-\ell}\mathbf{u}_{t-h-m}^{\prime}\mathbf{B}_{m}^{\prime},$
so that
$\displaystyle\left\|T^{-1}\mathbf{X}\mathbf{X}^{\prime}-\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]\right\|_{\max}\leq\max_{g,h\in[K]}\left\|T^{-1}\sum_{t=1}^{T}\left(\mathbf{y}_{t-g}\mathbf{y}_{t-h}^{\prime}-\operatorname{\mathbb{E}}[\mathbf{y}_{t-g}\mathbf{y}_{t-h}^{\prime}]\right)\right\|_{\max}$
$\displaystyle\leq\max_{g,h\in[K]}\sum_{\ell=0}^{\infty}\sum_{m=0}^{\infty}\left\|\mathbf{B}_{\ell}\mathbf{W}_{g+\ell,h+m}\mathbf{B}_{m}^{\prime}\right\|_{\max}$
$\displaystyle\leq\max_{g,h\in[K]}\left(\sum_{\ell=0}^{r-1}+\sum_{\ell=r}^{\infty}\right)\left(\sum_{m=0}^{r-1}+\sum_{m=r}^{\infty}\right)\|\mathbf{B}_{\ell}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\mathbf{B}_{m}^{\prime}\right\|_{\max}$
$\displaystyle\leq\max_{g,h\in[K]}\left(\sum_{\ell=0}^{r-1}\sum_{m=0}^{r-1}+3\sum_{\ell=0}^{\infty}\sum_{m=r}^{\infty}\right)\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}.$
Therefore, we obtain
$\displaystyle\operatorname{\mathbb{P}}\left(\left\|T^{-1}\mathbf{X}\mathbf{X}^{\prime}-\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]\right\|_{\max}>x\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}\left(\max_{g,h\in[K]}\sum_{\ell=0}^{r-1}\sum_{m=0}^{r-1}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}>x/2\right)$
$\displaystyle\quad+\operatorname{\mathbb{P}}\left(\max_{g,h\in[K]}3\sum_{\ell=0}^{\infty}\sum_{m=r}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}>x/2\right).$
We consider the first probability. The union bound gives
$\displaystyle\operatorname{\mathbb{P}}\left(\max_{g,h\in[K]}\sum_{\ell=0}^{r-1}\sum_{m=0}^{r-1}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}>x/2\right)$
$\displaystyle\leq
K^{2}\max_{g,h\in[K]}\operatorname{\mathbb{P}}\left(\max_{\ell,m=0,\dots,r-1}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}\sum_{\ell=0}^{r-1}\sum_{m=0}^{r-1}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}>x/2\right)$
$\displaystyle\leq
K^{2}r^{2}\max_{g,h\in[K]}\max_{\ell,m=0,\dots,r-1}\operatorname{\mathbb{P}}\left(\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}>x/(2b^{2})\right)$
$\displaystyle=K^{2}r^{2}\max_{k\in[K+r-1]}\operatorname{\mathbb{P}}\left(\left\|\mathbf{W}_{k,1}\right\|_{\max}>x/(2b^{2})\right).$
If $k=1$, each component of $\mathbf{W}_{1,1}$ is a sample average of the
i.i.d. random variables,
$\\{u_{it}u_{jt}-\operatorname{\mathbb{E}}[u_{it}u_{jt}]\\}_{t}$. Clearly this
is a martingale difference sequence with respect to filtration
$\mathcal{F}_{t}=\sigma\\{u_{i,t-s},u_{j,t-s}:s=0,1,\dots\\}$. If $k\geq 2$,
we have $\operatorname{\mathbb{E}}[u_{i,t-k}u_{j,t-1}]=0$ and the sequence,
$\\{u_{i,t-k+1}u_{jt}\\}_{t}$, is also a martingale difference sequence with respect to
$\mathcal{F}_{t}$. Therefore, applying the Azuma-Hoeffding inequality (Bercu et al.,
2015) with the conditioning argument and the union bound as in the proof of
Lemma 3, we have
$\displaystyle\max_{k\in[K+r-1]}\operatorname{\mathbb{P}}\left(\left\|\mathbf{W}_{k,1}\right\|_{\max}>x/(2b^{2})\right)$
$\displaystyle\leq\max_{k\in[K+r-1]}\operatorname{\mathbb{P}}\left(\left\|\mathbf{W}_{k,1}\right\|_{\max}>x/(2b^{2})\mid\max_{t\in[T]}\left\|\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}-\operatorname{\mathbb{E}}[\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}]\right\|_{\max}\leq\bar{c}_{2}\right)$
$\displaystyle\quad+\max_{k\in[K+r-1]}\operatorname{\mathbb{P}}\left(\max_{t\in[T]}\left\|\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}-\operatorname{\mathbb{E}}[\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}]\right\|_{\max}>\bar{c}_{2}\right)$
$\displaystyle\leq
2N^{2}\exp\left(-\frac{x^{2}T}{8b^{4}\bar{c}_{2}^{2}}\right)+2N^{2}T\exp\left(-\frac{\bar{c}_{2}}{2c_{uu}}\right).$
We next consider the second probability. The union bound and the Markov
inequality with Lemma 2 give
$\displaystyle\operatorname{\mathbb{P}}\left(3\max_{g,h\in[K]}\sum_{\ell=0}^{\infty}\sum_{m=r}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}>x/2\right)$
$\displaystyle\leq
6K^{2}\max_{g,h\in[K]}\sum_{\ell=0}^{\infty}\sum_{m=r}^{\infty}\|\mathbf{B}_{\ell}\|_{\infty}\|\mathbf{B}_{m}\|_{\infty}\operatorname{\mathbb{E}}\left\|\mathbf{W}_{g+\ell,h+m}\right\|_{\max}/x$
$\displaystyle\leq
6K^{2}\frac{b\delta^{r}}{x(1-\delta)}\max_{k\in\\{1,2\\}}\operatorname{\mathbb{E}}\left[\left\|\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}-\operatorname{\mathbb{E}}[\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}]\right\|_{\max}\right].$
By Lemma 1, the last expectation is evaluated as
$\displaystyle\max_{k\in\\{1,2\\}}\operatorname{\mathbb{E}}\left[\left\|\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}-\operatorname{\mathbb{E}}\mathbf{u}_{t-k}\mathbf{u}_{t-1}^{\prime}\right\|_{\max}\right]\leq
c_{2}\log N.$
Combining the obtained results so far yields the desired inequality. This
completes the proof. ∎
###### Lemma 5.
Let $\bm{\delta}_{i\cdot}^{\prime}\in\mathbb{R}^{KN}$ denote the $i$-th column
vector of $\bm{\Delta}^{\prime}\in\mathbb{R}^{KN\times N}$ with
$\bm{\Delta}=\hat{\bm{\Phi}}-\bm{\Phi}^{0}$. If inequality (A.11) is true,
then it holds that
$\displaystyle\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}\leq
3\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}.$
###### Proof of Lemma 5.
Note that $\bm{\phi}_{i\cdot}=\bm{\phi}_{i\mathcal{S}_{i}}$ and
$\mathbf{v}=\mathbf{v}_{\mathcal{S}_{i}}+\mathbf{v}_{\mathcal{S}_{i}^{c}}$ for any
$KN$-dimensional (row) vector $\mathbf{v}$. Hence the lower bound of
$\|\hat{\bm{\phi}}_{i\cdot}\|_{1}$ is computed as
$\displaystyle\|\hat{\bm{\phi}}_{i\cdot}\|_{1}$
$\displaystyle=\|\bm{\phi}_{i\cdot}+\bm{\delta}_{i\cdot}\|_{1}=\|\bm{\phi}_{i\mathcal{S}_{i}}+\bm{\delta}_{i\mathcal{S}_{i}}+\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}\geq\|\bm{\phi}_{i\mathcal{S}_{i}}+\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}-\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}$
$\displaystyle=\|\bm{\phi}_{i\mathcal{S}_{i}}\|_{1}+\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}-\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}=\|\bm{\phi}_{i\cdot}\|_{1}+\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}-\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1},$
where the last equality holds by decomposability of the $\ell_{1}$-norm. Thus
we obtain
$\displaystyle\|\bm{\phi}_{i\cdot}\|_{1}-\|\hat{\bm{\phi}}_{i\cdot}\|_{1}\leq\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}-\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}.$
From (A.11), we have
$\displaystyle 0\leq(2T)^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}$
$\displaystyle\leq
2^{-1}\lambda\|\bm{\delta}_{i\cdot}\|_{1}+\lambda\|\bm{\phi}_{i\cdot}\|_{1}-\lambda\|\hat{\bm{\phi}}_{i\cdot}\|_{1}$
$\displaystyle\leq
2^{-1}\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}+2^{-1}\lambda\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}+\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}-\lambda\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}$
$\displaystyle=(3/2)\lambda\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}-2^{-1}\lambda\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1},$
giving the desired inequality, $\|\bm{\delta}_{i\mathcal{S}_{i}^{c}}\|_{1}\leq
3\|\bm{\delta}_{i\mathcal{S}_{i}}\|_{1}$. This completes the proof. ∎
###### Lemma 6.
Suppose the same conditions as in Proposition 1. Then we have
$\displaystyle\min_{\bm{\delta}_{i\cdot}\in\mathcal{D}_{i}}\frac{T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}}{\|\bm{\delta}_{i\cdot}\|_{2}^{2}}\geq\gamma-16s_{i}\left\|T^{-1}\mathbf{X}\mathbf{X}^{\prime}-\bm{\Sigma}_{x}\right\|_{\max},$
where
$\mathcal{D}_{i}=\\{\bm{\delta}\in\mathbb{R}^{KN}:\|\bm{\delta}_{\mathcal{S}_{i}^{c}}\|_{1}\leq
3\|\bm{\delta}_{\mathcal{S}_{i}}\|_{1}\\}$, and $\gamma>0$ is some constant.
###### Proof of Lemma 6.
Let $\bm{\delta}_{i\cdot}^{\prime}\in\mathbb{R}^{KN}$ denote the $i$th column
vector of $\bm{\Delta}^{\prime}\in\mathbb{R}^{KN\times N}$. We see that
$\displaystyle\frac{T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}}{\|\bm{\delta}_{i\cdot}\|_{2}^{2}}=\frac{\bm{\delta}_{i\cdot}\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]\bm{\delta}_{i\cdot}^{\prime}}{\bm{\delta}_{i\cdot}\bm{\delta}_{i\cdot}^{\prime}}-\frac{\bm{\delta}_{i\cdot}\left(\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right)\bm{\delta}_{i\cdot}^{\prime}}{\bm{\delta}_{i\cdot}\bm{\delta}_{i\cdot}^{\prime}}.$
(A.15)
Then it is sufficient to show that uniformly in
$\bm{\delta}_{i\cdot}\in\mathcal{D}_{i}$ the first term is bounded from below
by some constant and the second term converges to zero with high probability.
We consider the first term of (A.15). In view of the Rayleigh quotient with
Condition 3, we obtain
$\displaystyle\min_{\bm{\delta}_{i\cdot}\in\mathcal{D}_{i}}\frac{\bm{\delta}_{i\cdot}\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]\bm{\delta}_{i\cdot}^{\prime}}{\bm{\delta}_{i\cdot}\bm{\delta}_{i\cdot}^{\prime}}\geq\gamma.$
(A.16)
We next consider the second term of (A.15). By applying Hölder’s inequality
twice, the numerator is bounded as
$\displaystyle\left|\bm{\delta}_{i\cdot}\left(\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right)\bm{\delta}_{i\cdot}^{\prime}\right|\leq\left\|\bm{\delta}_{i\cdot}\left(\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right)\right\|_{\max}\|\bm{\delta}_{i\cdot}\|_{1}$
$\displaystyle\quad\leq\left\|\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right\|_{\max}\|\bm{\delta}_{i\cdot}\|_{1}^{2}.$
(A.17)
We further compute the upper bound of $\|\bm{\delta}_{i\cdot}\|_{1}^{2}$.
Lemma 5 yields
$\displaystyle\|\bm{\delta}_{i\cdot}\|_{1}^{2}=\left(\|\bm{\delta}_{\mathcal{S}_{i}}\|_{1}+\|\bm{\delta}_{\mathcal{S}_{i}^{c}}\|_{1}\right)^{2}\leq
16\|\bm{\delta}_{\mathcal{S}_{i}}\|_{1}^{2}\leq
16s_{i}\|\bm{\delta}_{\mathcal{S}_{i}}\|_{2}^{2}\leq
16s_{i}\|\bm{\delta}_{i\cdot}\|_{2}^{2}.$ (A.18)
Combining (A.17) and (A.18) and rearranging the terms yields
$\displaystyle\max_{\bm{\delta}_{i\cdot}\in\mathcal{D}_{i}}\frac{\bm{\delta}_{i\cdot}\left(\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right)\bm{\delta}_{i\cdot}^{\prime}}{\|\bm{\delta}_{i\cdot}\|_{2}^{2}}\leq
16s_{i}\left\|\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right\|_{\max}.$
(A.19)
From (A.15), (A.16), and (A.19), we obtain
$\displaystyle\min_{\bm{\delta}_{i\cdot}\in\mathcal{D}_{i}}\frac{T^{-1}\|\bm{\delta}_{i\cdot}\mathbf{X}\|_{2}^{2}}{\|\bm{\delta}_{i\cdot}\|_{2}^{2}}\geq\gamma-16s_{i}\left\|\operatorname{\mathbb{E}}[T^{-1}\mathbf{X}\mathbf{X}^{\prime}]-T^{-1}\mathbf{X}\mathbf{X}^{\prime}\right\|_{\max}.$
This completes the proof. ∎
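As a quick numerical sanity check of the chain of inequalities in (A.18) (purely illustrative and not part of the formal argument), one can draw random vectors, rescale them into the cone $\mathcal{D}_{i}$ of Lemma 5, and verify each step; the dimension and support size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
KN, s_i = 50, 5                      # illustrative ambient dimension and support size
S = np.arange(s_i)                   # support set S_i
Sc = np.arange(s_i, KN)              # its complement S_i^c

for _ in range(1000):
    delta = rng.normal(size=KN)
    l1_S, l1_Sc = np.abs(delta[S]).sum(), np.abs(delta[Sc]).sum()
    if l1_Sc > 3 * l1_S:             # rescale the off-support part into the cone D_i
        delta[Sc] *= 3 * l1_S / l1_Sc
        l1_Sc = 3 * l1_S
    # (A.18): (||d_S||_1 + ||d_Sc||_1)^2 <= 16 ||d_S||_1^2 <= 16 s_i ||d_S||_2^2 <= 16 s_i ||d||_2^2
    assert (l1_S + l1_Sc) ** 2 <= 16 * l1_S ** 2 + 1e-9
    assert l1_S ** 2 <= s_i * np.sum(delta[S] ** 2) + 1e-9
    assert np.sum(delta[S] ** 2) <= np.sum(delta ** 2) + 1e-9
```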
###### Lemma 7.
If Conditions 1–5 hold, then the following inequalities hold for all
$i\in[N]$ such that $\bar{r}_{i}=o(1)$ with probability at least $1-O((N\vee
T)^{-\nu})$:
$\displaystyle(a)~{}$
$\displaystyle~{}\max_{j\in[KN]}\left|\hat{\sigma}_{i}^{2}\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\sigma_{i}^{2}\omega_{j}^{2}\right|\lesssim
M_{\omega}^{2}\lambda,$ $\displaystyle(b)~{}$
$\displaystyle~{}\max_{j\in[KN]}\left|\hat{\sigma}_{i}^{2}\hat{\omega}_{j}^{2}-\sigma_{i}^{2}\omega_{j}^{2}\right|\lesssim
M_{\omega}^{2}\lambda,$ $\displaystyle(c)~{}$
$\displaystyle~{}\max_{j\in[KN]}\left|\frac{\sigma_{i}\omega_{j}}{\hat{\sigma}_{i}\sqrt{\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}}}-1\right|\lesssim\frac{M_{\omega}^{2}\lambda}{\left|\gamma^{2}-O(\sqrt{M_{\omega}^{2}\lambda})\right|},$
$\displaystyle(d)~{}$
$\displaystyle~{}\max_{j\in[KN]}\left|\frac{\sigma_{i}\omega_{j}}{\hat{\sigma}_{i}\hat{\omega}_{j}}-1\right|\lesssim\frac{M_{\omega}^{2}\lambda}{\left|\gamma^{2}-O(\sqrt{M_{\omega}^{2}\lambda})\right|}.$
Consequently, if $\bar{v}_{i}=\bar{r}_{i}+M_{\omega}^{2}\lambda$ is $o(1)$,
all the upper bounds are $o(1)$.
###### Proof of Lemma 7.
For any $(i,j)\in\mathcal{H}$, we consider two bounds:
$\displaystyle(a)$
$\displaystyle~{}~{}\left|\hat{\sigma}_{i}^{2}\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\sigma_{i}^{2}\omega_{j}^{2}\right|\leq\left(\left|\hat{\sigma}_{i}^{2}-\sigma_{i}^{2}\right|+\sigma_{i}^{2}\right)\left|\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\omega_{j}^{2}\right|+\left|\hat{\sigma}_{i}^{2}-\sigma_{i}^{2}\right|\omega_{j}^{2},$
$\displaystyle(b)$
$\displaystyle~{}~{}\left|\hat{\sigma}_{i}^{2}\hat{\omega}_{j}^{2}-\sigma_{i}^{2}\omega_{j}^{2}\right|\leq\left(\left|\hat{\sigma}_{i}^{2}-\sigma_{i}^{2}\right|+\sigma_{i}^{2}\right)\left|\hat{\omega}_{j}^{2}-\omega_{j}^{2}\right|+\left|\hat{\sigma}_{i}^{2}-\sigma_{i}^{2}\right|\omega_{j}^{2}.$
Thus to complete the proof, we derive upper bounds of
$\displaystyle(i):=\max_{j}|\hat{\bm{\omega}}_{j}^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\omega_{j}^{2}|,~{}~{}(i)^{\prime}:=\max_{j}\left|\hat{\omega}_{j}^{2}-\omega_{j}^{2}\right|,~{}~{}\text{and}~{}~{}(ii):=|\hat{\sigma}_{i}^{2}-\sigma_{i}^{2}|.$
The term $(i)^{\prime}$ is bounded as
$\displaystyle(i)^{\prime}\lesssim\max_{j}\|{\bm{\omega}}_{j}\|\lambda_{1}\lesssim
M_{\omega}^{2}\lambda$
by Theorem 6 of Cai et al. (2011). We next bound $(i)$. Since
$w_{jj}={\bm{\omega}}_{j}^{\prime}\mathbf{e}_{j}$, we have
$\displaystyle(i)$
$\displaystyle\leq\max_{j}|(\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j})^{\prime}\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}|+|{\bm{\omega}}_{j}^{\prime}(\mathbf{e}_{j}-\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j})|$
$\displaystyle\leq\max_{j}|(\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j})^{\prime}(\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j})|+\max_{j}|(\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j})^{\prime}\mathbf{e}_{j}|+\max_{j}|{\bm{\omega}}_{j}^{\prime}(\mathbf{e}_{j}-\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j})|$
$\displaystyle\leq\max_{j}\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\|_{1}\|\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}-\mathbf{e}_{j}\|_{\infty}+\max_{j}\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\|_{\infty}+\max_{j}\|{\bm{\omega}}_{j}\|_{1}\|\mathbf{e}_{j}-\hat{\bm{\Sigma}}_{x}\hat{\bm{\omega}}_{j}\|_{\infty}$
$\displaystyle\leq\max_{j}\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\|_{1}\lambda_{1}+2M_{\omega}\lambda_{1}.$
By the proof of Proposition 2, it holds with probability at least $1-O((N\vee
T)^{-\nu})$ that
$\displaystyle\max_{j}\|\hat{\bm{\omega}}_{j}-{\bm{\omega}}_{j}\|_{1}\lesssim(\max_{j}\|{\bm{\omega}}_{j}\|_{1}\lambda_{1})^{1-r}s_{\omega}\lesssim
M_{\omega}^{2-2r}\lambda^{1-r}s_{\omega},$
where we have used $\lambda_{1}=b\max_{j}\|{\bm{\omega}}_{j}\|_{1}\lambda/2$.
Then Conditions 4 and 5 yield
$\displaystyle(i)\lesssim
M_{\omega}^{3-2r}\lambda^{2-r}s_{\omega}+M_{\omega}^{2}\lambda.$
Finally, we bound $(ii)$. We have
$\displaystyle(ii)\leq
T^{-1}\left|\sum_{t=1}^{T}(\hat{u}_{it}^{2}-u_{it}^{2})\right|+\left|T^{-1}\sum_{t=1}^{T}({u}_{it}^{2}-\operatorname{\mathbb{E}}u_{it}^{2})\right|.$
Consider the first term. Because
$\hat{\mathbf{u}}_{i\cdot}-\mathbf{u}_{i\cdot}=-(\hat{\bm{\phi}}_{i\cdot}^{L}-\bm{\phi}_{i\cdot})\mathbf{X}$,
we see that
$\displaystyle
T^{-1}\left|\sum_{t=1}^{T}(\hat{u}_{it}^{2}-u_{it}^{2})\right|\leq
T^{-1}\left|\sum_{t=1}^{T}(\hat{u}_{it}-u_{it})^{2}\right|+2T^{-1}\left|\sum_{t=1}^{T}(\hat{u}_{it}-u_{it})u_{it}\right|$
Place: Outside the Royal Palace.
Plot element: Climax.
Beat: When Jason returns, Medea begins to carry out her ruse. Medea fakes
regret and breaks down in false tears of remorse. Determined, Medea sends her
children to offer poisoned gifts to Creon’s daughter. Medea’s children face
impending doom.
Place: Outside the Royal Palace.
Plot element: Falling Action.
Beat: The Messenger frantically runs towards Medea and warns Medea to escape
the city as soon as possible. The Messenger reveals that Medea has been
identified as the murderer.
Place: Outside the Royal Palace.
Plot element: Resolution.
Beat: Medea and her two dead children are seated in a chariot drawn by
dragons. Jason watches in horror and curses himself for having wed Medea and
mourns his tragic losses.
Place: On a winged chariot.
Plot element: Denouement.
Beat: Medea denies Jason the right to a proper burial of his children. She
flees to Athens and divines an unheroic death for Jason.
<end>
Example 2. <LOG_LINE>
<CHARACTER_DESCRIPTION>
<CHARACTER_DESCRIPTION>
...
<CHARACTER_DESCRIPTION>
<scenes>
Listing 5: Plot outline prompt prefixes to generate a sequence of scenes from
the log line and list of characters (Medea).
⬇
Examples of breakdowns of stories into a Hero’s Journey structure.
Example 1. A science-fiction fantasy about a naive but ambitious farm boy from
a backwater desert who discovers powers he never knew he had when he teams up
with a feisty princess, a mercenary space pilot and an old wizard warrior to
lead a ragtag rebellion against the sinister forces of the evil Galactic
Empire.
Luke Skywalker is the hero. A naive farm boy, he will discover special powers
under the guidance of mentor Ben Kenobi.
Ben Kenobi is the mentor figure. A recluse Jedi warrior, he will take Luke
Skywalker as apprentice.
Darth Vader is the antagonist. As a commander of the evil Galactic Empire, he
controls space station The Death Star.
Princess Leia holds the plans of the Death Star. She is feisty and brave. She
will become Luke’s friend.
Han Solo is a brash mercenary space pilot of the Millennium Falcon and a friend
of Chewbacca. He will take Luke on his spaceship.
Chewbacca is a furry and trustful monster. He is a friend of Han Solo and a
copilot on the Millennium Falcon.
<scenes>
Place: A farm on planet Tatooine.
Plot element: The Ordinary World.
Beat: Luke Skywalker is living a normal and humble life as a farm boy on his
home planet.
Place: Desert of Tatooine.
Plot element: Call to Adventure.
Beat: Luke is called to his adventure by robot R2-D2 and Ben Kenobi. Luke
triggers R2-D2’s message from Princess Leia and is intrigued by her message.
When R2-D2 escapes to find Ben Kenobi, Luke follows and is later saved by
Kenobi, who goes on to tell Luke about his Jedi heritage. Kenobi suggests that
he should come with him.
Place: Ben Kenobi’s farm.
Plot element: Refusal of the Call.
Beat: Luke refuses Kenobi, telling him that he can take Kenobi and the droids
as far as Mos Eisley Spaceport - but he can’t possibly leave his Aunt and
Uncle behind for some space adventure.
Place: A farm on planet Tatooine.
Plot element: Crossing the First Threshold.
Beat: When Luke discovers that the stormtroopers searching for the droids
would track them to his farm, he rushes to warn his Aunt and Uncle, only to
discover them dead by the hands of the Empire. When Luke returns to Kenobi, he
pledges to go with him to Alderaan and learn the ways of the Force like his
father before him.
Place: On spaceship The Millennium Falcon.
Plot element: Tests, Allies, and Enemies.
Beat: After Luke, Kenobi, and the droids hire Han Solo and Chewbacca to
transport them onto Alderaan, Kenobi begins Luke’s training in the ways of the
Force. Wielding his father’s lightsaber, Kenobi challenges Luke. At first, he
can’t do it. But then Kenobi encourages Luke to reach out and trust his
feelings. Luke succeeds.
Place: On spaceship The Millennium Falcon.
Plot element: The Approach to the Inmost Cave.
Beat: The plan to defeat the Galactic Empire is to bring the Death Star plans
to Alderaan so that Princess Leia’s father can take them to the Rebellion.
However, when they arrive within the system, the planet is destroyed. They
come across the Death Star and are pulled in by a tractor beam, now trapped
within the Galactic Empire.
Place: On space station The Death Star.
Plot element: The Ordeal.
Beat: As Kenobi goes off to deactivate the tractor beam so they can escape,
Luke, Han, and Chewbacca discover that Princess Leia is being held on the
Death Star with them. They rescue her and escape to the Millennium Falcon,
hoping that Kenobi has successfully deactivated the tractor beam. Kenobi later
sacrifices himself as Luke watches Darth Vader strike him down. Luke must now
avenge his fallen mentor and carry on his teachings.
Place: On space station The Death Star.
Plot element: The Reward.
Beat: Luke has saved the princess and retrieved the Death Star plans. They now
have the knowledge to destroy the Galactic Empire’s greatest weapon once and
for all.
Place: On spaceship The Millennium Falcon.
Plot element: The Road Back.
Beat: Luke, Leia, Han, Chewbacca, and the droids are headed to the hidden
Rebellion base with the Death Star plans. They are suddenly pursued by
incoming TIE-Fighters, forcing Han and Luke to take action to defend the ship
and escape with their lives - and the plans. They race to take the plans to
the Rebellion and prepare for battle.
Place: On fighter ship X-Wing.
Plot element: The Resurrection.
Beat: The Rebels - along with Luke as an X-Wing pilot - take on the Death
Star. The Rebellion and the Galactic Empire wage war in an epic space battle.
Luke is the only X-Wing pilot that was able to get within the trenches of the
Death Star. But Darth Vader and his wingmen are in hot pursuit. Just as Darth
Vader is about to destroy Luke, Han returns and clears the way for Luke. Luke
uses the Force to guide his aiming as he fires upon the sole weak point of the
deadly Death Star, destroying it for good.
Place: At the Rebellion base.
Plot element: The Return.
Beat: Luke and Han return to the Rebellion base, triumphant, as they receive
medals for the heroic journey. There is peace throughout the galaxy - at least
for now.
<end>
Example 2. <LOG_LINE>
<CHARACTER_DESCRIPTION>
<CHARACTER_DESCRIPTION>
...
<CHARACTER_DESCRIPTION>
<scenes>
Listing 6: Plot outline prompt prefixes to generate a sequence of scenes from
the log line and list of characters (Sci-Fi).
### E.5. Location Description Prompt Prefixes
In the following sets of prompt prefixes, the <LOG_LINE> is provided by the
writer and each <LOCATION_NAME> is generated in the previous step.
⬇
Example 1. Ella, a waitress, falls in love with her best friend, Allen, a
teacher. The two drift apart when Allen makes new friends from a different
social class. Ella turns to food to become a famous chef.
Place: The bar.
Description: The bar is dirty, more than a little run down, with most tables
empty. The odor of last night’s beer and crushed pretzels on the floor
permeates the bar.<end>
Example 2. Grandma Phyllis’ family reunion with her two grandchildren is
crashed by two bikers.
Place: The Lawn in Front of Grandma Phyllis’s House.
Description: A big oak tree dominates the yard. There is an old swing set on
the lawn, and a bright white fence all around the grass.<end>
Example 3. Ancient Greek tragedy based upon the myth of Jason and Medea.
Medea, a former princess and the wife of Jason, finds her position in the
Greek world threatened as Jason leaves Medea for a Greek princess of Corinth.
Medea takes vengeance on Jason by murdering his new wife as well as Medea’s
own two sons, after which she escapes to Athens.
Place: Outside the Royal Palace.
Description: In mythological Ancient Greece, in front of a modest house in
Corinth, on the outskirts of a lavish royal palace where wedding preparations
are under way.<end>
Example 4. <LOG_LINE>
Place: <LOCATION_NAME>
Description:
Listing 7: Location description prompt prefixes to generate dialog from log
line and location name (Medea).
⬇
Example 1. Morgan adopts a new cat, Misterio, who sets a curse on anyone that
pets them.
Place: The Adoption Center.
Description: The Adoption Center is a sad place, especially for an unadopted
pet. It is full of walls and walls of cages and cages. Inside of each is an
abandoned animal, longing for a home. The lighting is dim, gray, buzzing
fluorescent.<end>
Example 2. James finds a well in his backyard that is haunted by the ghost of
Sam.
Place: The well.
Description: The well is buried under grass and hedges. It is at least twenty
feet deep, if not more and it is masoned with stones. It is 150 years old at
least. It stinks of stale, standing water, and has vines growing up the sides.
It is narrow enough to not be able to fit down if you are a grown adult
human.<end>
Example 3. Mr. Dorbenson finds a book at a garage sale that tells the story of
his own life. And it ends in a murder!
Place: The garage sale.
Description: It is a garage packed with dusty household goods and antiques.
There is a box at the back that says FREE and is full of paper back
books.<end>
Example 4. <LOG_LINE>
Place: <LOCATION_NAME>
Description:
Listing 8: Location description prompt prefixes to generate dialog from log
line and location name (Sci-Fi).
### E.6. Scene Dialogue Prompt Prefixes
In the following sets of prompt prefixes, the <LOG_LINE> is provided by the
writer, <PLOT_ELEMENT>, <BEAT>, <PREVIOUS_BEAT> and <LOCATION_NAME> are
generated during the plot outline generation step, <LOCATION_DESCRIPTION> is
generated during the location generation step, and each
<CHARACTER_DESCRIPTION> is generated in the character generation step.
<PREVIOUS_BEAT> corresponds to <BEAT> from the previous scene (it is left
empty for the first scene). Only characters whose name appears in the beat are
used in this prompt prefix (we use string matching to select these character
names).
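The listings below reproduce the prompt prefixes verbatim. Purely as an illustration of the assembly logic just described (a hypothetical sketch, not code from the Dramatron system), the template could be filled programmatically along the following lines; all function and variable names are assumptions.

```python
import re

def build_dialogue_prompt(example_prefix, log_line, plot_element, beat,
                          previous_beat, location_name, location_description,
                          character_descriptions):
    """Fill the scene-dialogue template with generated content.

    `character_descriptions` maps character names to their descriptions; only
    characters whose name appears in the beat are kept (simple string matching).
    `previous_beat` is the empty string for the first scene.
    """
    selected = [desc for name, desc in character_descriptions.items()
                if re.search(r"\b" + re.escape(name) + r"\b", beat)]
    filled = (
        f"Place: {location_name}\n"
        f"Description: {location_description}\n"
        f"Characters: {' '.join(selected)}\n"
        f"Plot element: {plot_element}\n"
        f"Summary: {log_line}\n"
        f"Previous beat: {previous_beat}\n"
        f"Beat: {beat}\n"
        "<dialog>\n"
    )
    # The hand-written Example 1 is kept verbatim; the new scene becomes Example 2.
    return example_prefix + "Example 2.\n" + filled
```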
⬇
Example 1.
Place: Outside the Royal Palace.
Description: Before Medea’s house in Corinth, near the royal palace of Creon.
Characters: Medea is the protagonist of the play. A sorceress and a princess,
she fled her country and family to live with Jason in Corinth, where they
established a family of two children and gained a favorable reputation. Jason
has divorced Medea and taken up with a new family. Jason can be considered the
play’s villain, though his evil stems more from weakness than strength. A
former adventurer, Jason abandons his wife, Medea, in order to marry the
beautiful young daughter of Creon, King of Corinth, and fuels Medea to a
revenge. The Messenger appears only once in the play to bear tragical news.
Plot element: Resolution.
Summary: Ancient Greek tragedy based upon the myth of Jason and Medea. Medea,
a former princess and the wife of Jason, finds her position in the Greek world
threatened as Jason leaves Medea for a Greek princess of Corinth. Medea takes
vengeance on Jason by murdering his new wife as well as Medea’s own two sons,
after which she escapes to Athens.
Previous beat: The Messenger frantically warns Medea to escape the city as
soon as possible. The Messenger reveals that Medea has been identified as the
murderer.
Beat: The palace opens its doors, revealing Medea and the two dead children
seated in a chariot drawn by dragons. Jason curses himself for having wed
Medea and mourns his tragic losses. Medea denies Jason the right to a proper
burial of his children. Medea flees to Athens and divines an unheroic death
for Jason.
<dialog>
WOMEN OF CORINTH
Throw wide the doors and see thy children’s murdered corpses.
JASON
Haste, ye slaves, loose the bolts, undo the fastenings, that
I may see the sight of twofold woe, my murdered sons and her, whose
blood in vengeance I will shed. (MEDEA appears above the house, on
a chariot drawn by dragons; the children’s corpses are beside her.)
MEDEA
Why shake those doors and attempt to loose their bolts, in
quest of the dead and me their murderess? From such toil desist. If
thou wouldst aught with me, say on, if so thou wilt; but never shalt
thou lay hand on me, so swift the steeds the sun, my father’s sire,
to me doth give to save me from the hand of my foes.
JASON
Accursed woman! by gods, by me and all mankind abhorred as
never woman was, who hadst the heart to stab thy babes, thou their
mother, leaving me undone and childless; this hast thou done and still
dost gaze upon the sun and earth after this deed most impious. Curses
on thee! now perceive what then I missed in the day I brought thee,
fraught with doom, from thy home in a barbarian land to dwell in Hellas,
traitress to thy sire and to the land that nurtured thee.
Perish, vile sorceress, murderess of
thy babes! Whilst I must mourn my luckless fate, for I shall ne’er
enjoy my new-found bride, nor shall I have the children, whom I bred
and reared, alive to say the last farewell to me; nay, I have lost
them.
MEDEA
To this thy speech I could have made a long reply, but Father
Zeus knows well all I have done for thee, and the treatment thou hast
given me. Yet thou wert not ordained to scorn my love and lead a life
of joy in mockery of me, nor was thy royal bride nor Creon, who gave
thee a second wife, to thrust me from this land and rue it not. Wherefore,
if thou wilt, call me e’en a lioness, and Scylla, whose home is in
the Tyrrhene land; for I in turn have wrung thy heart, as well I might.
JASON
Thou, too, art grieved thyself, and sharest in my sorrow.
MEDEA
Be well assured I am; but it relieves my pain to know thou
canst not mock at me.
JASON
O my children, how vile a mother ye have found!
MEDEA
My sons, your father’s feeble lust has been your ruin!
JASON
’Twas not my hand, at any rate, that slew them.
MEDEA
No, but thy foul treatment of me, and thy new marriage.
JASON
Didst think that marriage cause enough to murder them?
MEDEA
Dost think a woman counts this a trifling injury?
JASON
So she be self-restrained; but in thy eyes all is evil.
MEDEA
Thy sons are dead and gone. That will stab thy heart.
<end>
Example 2.
Place: <PLACE_NAME>
Description: <PLACE_DESCRIPTION>
Characters: <CHARACTER_DESCRIPTION> <CHARACTER_DESCRIPTION> ...
<CHARACTER_DESCRIPTION>
Plot element: <PLOT_ELEMENT>
Summary: <LOG_LINE>
Previous beat: <PREVIOUS_BEAT>
Beat: <BEAT>
<dialog>
Listing 9: Dialogue prompt prefixes to generate dialogue from log line,
characters, location and plot information (Medea).
⬇
Example 1.
Place: Cockpit of an airplane.
Description: Cockpit of a modern passenger airplane, American Flight 812.
Characters: Jeff is the hero. A man in his early forties, he tries to stay
calm in all circumstances. Jeff is now an airline pilot. Danny, a young airplane
pilot in his thirties, is eager to learn but can quickly lose his composure.
Danny is enamored of Edith. Edith, an experienced stewardess with a good sense
of humour, is trustworthy and dependable. Edith likes to tease Danny.
Plot element: Crossing the First Threshold.
Summary: Residents of San Fernando Valley are under attack by flying saucers
from outer space. The aliens are extraterrestrials who seek to stop humanity
from creating a doomsday weapon that could destroy the universe and unleash
the living dead to stalk humans who wander into the cemetery looking for
evidence of the UFOs. The hero Jeff, an airline pilot, will face the aliens.
Previous beat: Flight captain Jeff reluctantly leaves his wife Paula to go for
a two-day flight.
Beat: At the cockpit, flight captain Jeff is preoccupied by the flying saucer
appearances and graveyard incidents in his home town, where he left his wife
Paula. Without success, co-pilot Danny and stewardess Edith try to reassure
him.
<dialog>
DANNY
You’re mighty silent this trip, Jeff.
JEFF
Huh?
DANNY
You haven’t spoken ten words since takeoff.
JEFF
I guess I’m preoccupied, Danny.
DANNY
We’ve got thirty-three passengers back there that have time to be preoccupied.
Flying this flybird doesn’t give you that opportunity.
JEFF
I guess you’re right, Danny.
DANNY
Paula?
JEFF
Yeah.
DANNY
There’s nothing wrong between you two?
JEFF
Oh no, nothing like that. Just that I’m worried, she being there alone and
those strange things flying over the house and those incidents in the
graveyard
the past few days. It’s just got me worried.
DANNY
Well, I haven’t figured out those crazy skybirds yet but I give you fifty to
one
odds the police have figured out that cemetery thing by now.
(Enter EDITH)
JEFF
I hope so.
EDITH
If you’re really that worried Jeff why don’t you radio in and find out? Mac
should be on duty at the field by now. He could call Paula and relay the
message
to you.
DANNY
Hi Edith.
EDITH
Hi Silents. I haven’t heard a word from this end of the plane since we left
the
field.
DANNY
Jeff’s been giving me and himself a study in silence.
EDITH
You boys are feudin’?
JEFF
Oh no Edie, nothing like that.
DANNY
Hey Edie, how about you and me balling it up in Albuquerque?
EDITH
Albuquerque? Have you read that flight schedule Boy?
DANNY
What about it?
EDITH
We land in Albuquerque at 4 am. That’s strictly a nine o’clock town.
DANNY
Well I know a friend that’ll help us \--
EDITH
Let’s have a problem first, huh Danny.
DANNY
Ah he’s worried about Paula.
EDITH
I read about that cemetery business. I tried to get you kids to not buy too
near
one of those things. We get there soon enough as it is.
DANNY
He thought it’d be quiet and peaceful there.
EDITH
No doubt about that. It’s quiet alright, like a tomb. I’m sorry Jeff, that was
a
bad joke.
Example 2.
Place: <PLACE_NAME>
Description: <PLACE_DESCRIPTION>
Characters: <CHARACTER_DESCRIPTION> <CHARACTER_DESCRIPTION> ...
<CHARACTER_DESCRIPTION>
Plot element: <PLOT_ELEMENT>
Summary: <LOG_LINE>
Previous beat: <PREVIOUS_BEAT>
Beat: <BEAT>
<dialog>
Listing 10: Dialogue prompt prefixes to generate dialogue from log line,
characters, location and plot information (Sci-Fi).
## Appendix F Raw Output Generated by Dramatron
This section shows the raw output, fully generated by Dramatron and without
human-in-the-loop editing, from the following log line: Teddy is a lounge
singer at the Pool Pit, a popular club. Teddy is in love with a patron, Rosie,
who attends regularly with her husband Gerald. Teddy puts out a fire and saves
the day.
### F.1. Title
The Day The Pool Pit Burned Down
### F.2. List of Characters and Character Descriptions
Teddy: Teddy is the protagonist. He is an orphan, the cousin of a police officer, and the manager of the Pool Pit.
He is also a gifted lounge singer.
Rosie: A regular patron of the club. She is the wife of a wealthy man but is in love with Teddy, the club’s manager.
Gerald: A regular patron of the club. Gerald is the wealthy husband of Rosie.
Lola: A regular patron of the club. Lola is Rosie’s sister.
D.J.: The resident DJ at the club.
### F.3. Plot Outline
Scene 1
Place: The Pool Pit.
Plot element: Exposition.
Beat: Teddy is the manager and star performer of a popular nightclub called the Pool Pit. He is also in love
with one of his patrons, Rosie. Rosie, however, is married to Gerald, who is a regular patron of the club.
Scene 2
Place: The Pool Pit.
Plot element: Inciting Incident.
Beat: Teddy begins a day full of frustration and annoyance. He wakes up to find his room is infested with bugs.
The pool is dirty, too. Teddy must deal with the mess to get the club ready for the customers.
Scene 3
Place: The Pool Pit.
Plot element: Rising Action.
Beat: Gerald takes Gerald to the men’s room. The bathroom is not clean. The bathroom attendant was supposed
to clean the bathrooms, but he is nowhere to be found.
Scene 4
Place: The Pool Pit.
Plot element: Dilemma.
Beat: Lola and Rosie approach Teddy and ask him to play their favorite song. Teddy can play only one song.
Teddy asks them what song they would like him to play, but they cannot agree on one. Gerald takes Teddy aside
and demands that Teddy pay him back for the pool, the carpet, the lights, and the jukebox. Teddy says
that the jukebox is leased, but Gerald says that it is his and that it belongs to his father-in-law.
Teddy replies that Gerald needs to talk to his father-in-law, but Gerald says that his father-in-law
will sue Teddy.
Scene 5
Place: The Pool Pit.
Plot element: Climax.
Beat: Teddy says to Gerald, "I’ve had it! I’m sick and tired of the whole bunch of you. You and your pool,
and your bugs. You take over my club. You’re all in love with me, so I’m in love with myself. And I’m getting
out of here. And if I ever hear of you coming around this joint again, I’ll bop you so hard you’ll see
a new set of stars!"
Scene 6
Place: The Pool Pit.
Plot element: Falling Action.
Beat: The phone rings, and Teddy is informed that there is a fire at the club. Teddy, Lola, Gerald, and Rosie
go back to the club. When they arrive, Teddy takes charge. He orders Lola to get the water buckets
from the men’s room. He orders Gerald to help Lola with the water buckets. He orders Rosie to help people
leave the club and not panic. Teddy puts out the fire with Lola’s and Gerald’s help.
Scene 7
Place: The Pool Pit.
Plot element: Resolution.
Beat: Teddy and Rosie share a passionate kiss. Teddy is a hero and the club manager. He is in love with Rosie,
and she is in love with him. Gerald is embarrassed by his loss of power and control. Gerald and Lola leave.
Scene 8
Place: The Pool Pit.
Plot element: Dnouement.
Beat: The song, "The World Is Mine," begins to play.
### F.4. Location Description
The following description was generated for location called: The Pool Pit.
The club is filled with smoke and the smell of beer. It is a dive, with a lot of drunk, shabby, and violent patrons.
The floor is dirty, and tables are broken. There are a lot of tables to seat people, but they are filled
to the brim with patrons. The walls are grubby and discolored. A small stage stands in the corner of the bar
where Teddy will sing.
### F.5. Scene Dialogue
#### F.5.1. Scene 1, random seed 1
TEDDY
He’s a bit strange, old Teddy.
ROSIE
No, he’s a good man.
TEDDY
He’s very lonely, all by himself.
ROSIE
Isn’t everybody?
TEDDY
Yes, but some more than others.
ROSIE
You don’t need anybody. You’ve got a talent, you’re an artist.
TEDDY
That’s not enough, Rosie.
ROSIE
(pause)
I’ll always love you.
TEDDY
(mock)
Yeah.
ROSIE
(pause)
And I’ll always remember you.
TEDDY
Yeah.
ROSIE
I’ve got to go.
TEDDY
All right.
ROSIE
(softly, as she leaves)
Good night.
TEDDY
Good night.
ROSIE
(as she leaves, out loud)
Good night, everybody.
(TEDDY picks up his glass of whisky, takes a sip, puts it down,
pauses, sinks back in the chair, stares ahead of him.)
#### F.5.2. Scene 1, random seed 2
TEDDY
This is a hell of a town, a hell of a town.
It’s got a lot of people here.
It’s got a lot of noise here.
It’s got a lot of bars here.
#### F.5.3. Scene 2, random seed 1
TEDDY
Hoo-Hoo! What a night! This is what it’s all about, the Pool Pit
in action!
ROSIE
Hello, Teddy!
TEDDY
(Teddy crosses to them.) Hello, Gerald. Hello, Rosie.
ROSIE
Teddy, have you met Mr. and Mrs. Shack?
GERALD
I’m Teddy’s cousin.
TEDDY
I know who you are.
ROSIE
(proudly) This is Teddy, the singer.
TEDDY
(he kisses her hand) Thank you.
GERALD
(pushing) Let’s dance, baby.
TEDDY
(he watches them go) I guess that means I’m back at the old
stand, huh?
GERALD
(throwing some coins on the bar) That’s for you, cousin.
TEDDY
Thanks, I needed that.
GERALD
You bet.
## Appendix G Co-Written Scripts
We include $4$ scripts co-written by a human playwright and Dramatron as
supplementary material. These $4$ were produced and presented at The $2022$
Edmonton International Fringe Theatre Festival, as described in Section 5.9.
1. (1)
Plays by Bots: The Day The Earth Stood Still \- In a world where cars
outnumber every other creature, Miranda, a mechanic, teams up with her kid
sister, Beth, to rally the humans. In a devastating conclusion, Miranda saves
the world, but only by sacrificing her own life.
2. (2)
Plays by Bots: Cheers \- Ella, is a waitress, who falls in love with her best
friend, Allen, who is a teacher. The two drift apart when Allen makes new
friends from a different social class. Ella turns to food and becomes a famous
chef.
3. (3)
Plays By Bots: The Black Unicorn \- Gretta is a peasant from Bridge-End who
has a trusty pet dragon named Nugget. Bridge-End is tormented by a wizard who
smells really bad. Gretta gets the upper hand using brains and brilliance.
4. (4)
Plays by Bots: The Man at the Bar \- Teddy is a lounge singer at the Pool Pit,
a popular club. Teddy is in love with a patron, Rosie, who attends regularly
with her husband Gerald. Teddy puts out a fire and saves the day.
See pages - of figures/GOOD-1-earth-fountain.pdf See pages - of
figures/GOOD-2-cheers-fountain.pdf See pages - of figures/GOOD-3-unicorn-
fountain.pdf See pages - of figures/GOOD-4-lounge-fountain.pdf
# Multi-Resolution Online Deterministic Annealing: A Hierarchical and
Progressive Learning Architecture
Christos N. Mavridis and John S. Baras
The authors are with the Department of Electrical and Computer Engineering and
the Institute for Systems Research, University of Maryland, College Park, USA.
Emails: {mavridis, <EMAIL_ADDRESS>. Research partially supported by the Defense
Advanced Research Projects Agency (DARPA) under Agreement No. HR00111990027, by
ONR grant N00014-17-1-2622, and by a grant from Northrop Grumman Corporation.
###### Abstract
Hierarchical learning algorithms that gradually approximate a solution to a
data-driven optimization problem are essential to decision-making systems,
especially under limitations on time and computational resources. In this
study, we introduce a general-purpose hierarchical learning architecture that
is based on the progressive partitioning of a possibly multi-resolution data
space. The optimal partition is gradually approximated by solving a sequence
of optimization sub-problems online, using gradient-free stochastic
approximation updates. As a consequence, a function approximation problem can
be defined within each subset of the partition and solved using the theory of
two-timescale stochastic approximation. This simulates an annealing process
and defines a robust and interpretable heuristic method to gradually increase
the complexity of the learning architecture in a task-agnostic manner, giving
emphasis to regions of the data space that are considered more important
according to a predefined criterion. Finally, by imposing a tree structure in
the progression of the partitions, we provide a means to incorporate potential
multi-resolution structure of the data space into this approach, significantly
reducing its complexity, while introducing hierarchical variable-rate feature
extraction properties similar to certain classes of deep learning
architectures. Asymptotic convergence analysis and experimental results are
provided for supervised and unsupervised learning problems.
###### Index Terms:
Hierarchical Learning, Progressive Learning, Online Deterministic Annealing,
Multi-resolution Learning
## I Introduction
Learning from observations is pivotal to autonomous decision-making and
communication systems. Mathematically, such learning problems are often
formulated as constrained stochastic optimization problems: given realizations
of a random variable $X\in S$ representing the observations, an optimal
parameter vector $\theta\in\Theta$ is to be found such that a well-defined
error measure between an unknown function $f(X)\in\mathcal{F}$ and a learning
model $\hat{f}(X,\theta)\in\mathcal{F}$, parameterized by $\theta$, is
minimized under potentially additional constraints. However, the solution of
such problems over the entire domain $S$ often requires the learning model
$\hat{f}(X,\theta)$ to be particularly complex, making the estimation of
$\theta$ costly, and raising issues with respect to phenomena such as over-
fitting, generalization, and robustness, connected by an underlying trade-off
between complexity and performance [1]. As a result, the ability to gradually
approximate a solution to these problems is essential to decision-making
systems that often operate in real-time and under limitations in memory and
computational resources.
Current deep learning methods have made progress towards the construction of a
hierarchical representation of the data space [2, 3, 4, 5]. However, such
approaches do not necessarily satisfy the above description of hierarchical
learning, since they typically use overly complex models over the entire data
space $S$, which comes at the expense of time, energy, data, memory, and
computational resources [6, 7]. In this work, we are mainly focusing on a
framework for hierarchical progressive learning and data representation, where
a gradually growing and hierarchically structured set of learning models is
used for function approximation. We consider a prototype-based learning
framework where, given random observations of $X\in S$, a set of prototypes
$\left\\{\mu_{i}\right\\}\in S$ (also called codevectors or neurons) are
scattered in the data space $S$ to encode subsets/regions
$\left\\{S_{i}\right\\}$ that form a partition of $S$ [8]. This adheres to the
principles of vector quantization for signal compression [9]. In this regard,
a knowledge representation can be defined as the set of codevectors
$\left\\{\mu_{i}\in S\right\\}$ that induce a structured partition
$\left\\{S_{i}\right\\}$ of the data space $S$, along with a set of local
learning models $\hat{f}(x,\theta_{i})$ associated with each region $S_{i}$,
parameterized by their own set of parameters $\theta_{i}$. A structured
representation like this allows one, among other things, to locate specific
regions of the space that the algorithm needs to approximate in greater
detail, according to the problem at hand and the designer’s requirements. This
results in adaptively allocating more resources only in the subsets of the
data space that are needed, and provides benefits in terms of time, memory,
and model complexity. Moreover, learning with local models that take advantage
of the differences in the underlying distribution of the data space provides a
means to understand certain properties of the data space itself, i.e., this is
an interpretable learning approach [10]. An illustration of this framework is
given in Fig. 1.
(a) Classical regression problem.
(b) Combined problem of partitioning and function approximation.
(c) Tree-structured partitioning and function approximation.
Figure 1: Comparison of the classical regression problem over the entire
domain $S$ with the problem of combined partitioning and regression within
each subset of the partition. Here the input $x\in S$ is a random variable and
the function $f(x)$ is to be estimated over $S$ by (a) a single learning model
$\hat{f}(x,\theta)$, and (b)-(c) a set of
$\left\\{\hat{f}(x,\theta_{i})\right\\}$ defined in each region $S_{i}$, where
$\left\\{S_{i}\right\\}$ is a partition of S to be estimated as well.
Regarding the learning process, we are interested in algorithms that are able
to simultaneously solve both the problems of partitioning and function
approximation, given online (e.g., real-time) observations. This is of great
importance in many applications, and especially in the scope of learning
algorithms for inference and control in general cyber-physical systems [11,
12, 13]. To construct a sequence of partitions with increasing number of
subsets we build upon the notion of Online Deterministic Annealing [14] and
define a series of soft-clustering optimization problems:
$\displaystyle\min_{\left\\{\mu_{i}\right\\}}~{}F_{\lambda}(X,Q):=(1-\lambda)D(X,Q)-\lambda
H(X,Q),$
parameterized by a Lagrange coefficient $\lambda\in[0,1]$ controlling the
trade-off between minimizing an average distortion measure
$D(X,Q):=\mathbb{E}\left[d(X,Q)\right]$, for an appropriately defined
dissimilarity measure $d$, and maximizing the Shannon entropy $H(X,Q)$, with
$H(X,Q):=\mathbb{E}\left[-\log p(X,Q)\right]$. The novelty of the approach
lies in the introduction of $Q$ as a random variable described by the
association probabilities $p(\mu_{i}|X=x)$ that represents the probability of
a data point $x$ to belong to the subset $S_{i}:=\left\\{x\in
S:i=\operatorname*{arg\,min}_{j}d(x,\mu_{j})\right\\}$. Once the joint
probability space of $(X,Q)$ is defined, successively solving the optimization
problems $\min_{\left\\{\mu_{i}\right\\}}~{}F_{\lambda}(X,Q)$ for decreasing
values of $\lambda$, leads to a series of bifurcation phenomena when the
cardinality of the set of codevectors $\left\\{\mu_{i}\right\\}$ increases,
resembling an annealing process that introduces inherent robustness and
regularization properties [14, 15].
An important property of this approach, initially shown in [14], is that the
optimization problems $\min_{\left\\{\mu_{i}\right\\}}~{}F_{\lambda}(X,Q)$ can
be solved online, using gradient-free stochastic approximation updates [16],
as long as the measure $d$ belongs to the family of Bregman divergences,
information-theoretic dissimilarity measures that include, among others, the
widely used squared Euclidean distance and Kullback-Leibler divergence [17,
18]. We exploit the fact that a stochastic approximation algorithm can be used
as a training rule for constructing the partition $\left\\{S_{i}\right\\}$, to
build a framework that simultaneously trains the learning models
$\left\\{\hat{f}(x,\theta_{i})\right\\}$ defined in each region $S_{i}$. In
particular, according to the theory of two-timescale stochastic approximation
[16], we define two stochastic approximation algorithms that run at the same
time and with the same observations but with different stepsize schedules that
define a fast and a slow learning process. In our case the slow process
approximates the parameters $\left\\{\mu_{i}\right\\}$ and as a result the
partition $\left\\{S_{i}\right\\}$, and the fast process executes a function
approximation algorithm within each $S_{i}$ to find the optimal parameters
$\theta_{i}$ for the learning model $\hat{f}(x,\theta_{i})$.
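The sketch below illustrates this two-timescale coupling on a single observation; the stepsize exponents, the squared-Euclidean region assignment, and the local linear models are illustrative assumptions, not the algorithm developed in Section III.

```python
import numpy as np

def two_timescale_step(n, x, y, mu, theta):
    """One joint update on observation (x, y); all names and schedules are illustrative.

    mu:    (K, d) codevectors -- the slow process that shapes the partition.
    theta: (K, d) local linear-model weights -- the fast process within each region.
    """
    a_n = 1.0 / (n + 1)            # slow stepsize (codevectors)
    b_n = 1.0 / (n + 1) ** 0.6     # fast stepsize (local models): decays more slowly
    i = int(np.argmin(((mu - x) ** 2).sum(axis=1)))   # region S_i containing x
    mu[i] += a_n * (x - mu[i])                        # slow: move the prototype toward x
    err = y - theta[i] @ x                            # fast: SGD step on the local model
    theta[i] += b_n * err * x
    return mu, theta
```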
Finally, we further extend this approach by incorporating structural
constraints in the construction of the partition $\left\\{S_{i}\right\\}$. In
particular, by imposing a non-binary tree structure in the growing set of the
parameters $\left\\{\mu_{i}\right\\}$, we show that we can both (a) greatly
reduce the quadratic (in the number of parameters $\mu_{i}$) complexity of the
approach, and (b) construct a hierarchical and progressively growing tree-
structured partition where each layer of the tree is trained using different
resolution representation of the data space, according to an independent
multi-resolution analysis. While this is a general framework for multi-
resolution learning, we show that, in the case when convolution-based multi-
resolution features are used, the proposed architecture shares similarities
with deep learning approaches such as Deep Convolutional Networks [2] and
Scattering Convolutional Networks [19]. Lastly, we provide asymptotic
convergence analysis of the proposed learning architecture and experimental
results to illustrate its properties in clustering, classification, and
regression applications.
The paper is organized as follows: Section II introduces the Online
Deterministic Annealing framework for progressive partitioning along with a
mathematical analysis of its properties. Section III develops the two-
timescale framework for combined partitioning and function approximation.
Section IV handles the problem of classification in two different approaches.
Section V extends the general model by incorporating tree-structure
constraints and multi-resolution representation of the data space. Finally,
Section VI illustrates experimental results, and Section VII concludes the
paper.
## II Online Deterministic Annealing for Progressive Partitioning
In this section we provide a comprehensive review of the online deterministic
annealing approach introduced in [14] and [11], as well as additional
analytical results and insights that will be used in Sections III, IV, and V,
to construct the proposed hierarchical learning architecture.
We start our analysis with the case of unsupervised learning, where
partitioning a space $S$ is equivalent to the problem of clustering and
density estimation. In this context, the observations (data) are independent
realizations of a random variable $X:\Omega\rightarrow S$ defined in a
probability space $\left(\Omega,\mathcal{F},\mathbb{P}\right)$, where
$S\subseteq\mathbb{R}^{d}$ is the observation space (data space). In a
prototype-based learning approach one defines a dissimilarity measure
$d:S\times ri(S)\rightarrow\left[0,\infty\right)$, and a set of $K$ prototypes/codevectors
$\mu:=\left\\{\mu_{i}\right\\}_{i=1}^{K}$, $\mu_{i}\in ri(S)$, that define a
partition $\left\\{S_{i}\right\\}$, where $S_{i}:=\left\\{x\in
S:i=\operatorname*{arg\,min}_{j}d(x,\mu_{j})\right\\}$, such that the following
average distortion measure is minimized:
$\min_{\mu}~{}J(\mu):=\mathbb{E}\left[\sum_{i}\mathds{1}_{\left[X\in
S_{i}\right]}d(X,\mu_{i})\right]$ (1)
Here $ri(S)$ represents the relative interior of $S$, and $\mathds{1}_{A}$ is
the indicator function of an event $A$. The dissimilarity measure as well as the
number of prototypes $K$ are predefined designer parameters. This process is
equivalent to finding the most suitable model out of a set of $K$ local
constant models, and results in a piecewise-constant approximation of the data
space $S$. This representation has been used for clustering in vector
quantization applications [20, 9], and, in the limit $K\rightarrow\infty$, can
be used for density estimation.
To construct a method that progressively increases the number of prototypes
$K$, we adopt a probabilistic approach similar to [11, 14], and define a
discrete random variable $Q:S\rightarrow ri(S)$ such that (1) takes the form
$\displaystyle\min_{\mu}~{}D(\mu)$
$\displaystyle:=\mathbb{E}\left[d\left(X,Q\right)\right]$ (2)
$\displaystyle=\mathbb{E}\left[\mathbb{E}\left[d(X,Q)|X\right]\right]$
$\displaystyle=\int p(x)\sum_{i}p(\mu_{i}|x)d(x,\mu_{i})~{}dx$
Notice that $Q$ is completely described by the association probabilities
$\\{p(\mu_{i}|x):=\mathbb{P}[Q=\mu_{i}|X=x]\\}$, $\forall i$. This is now a
problem of finding both the locations $\left\\{\mu_{i}\right\\}$ and the
association probabilities $\left\\{p(\mu_{i}|x)\right\\}$. Therefore this is a
more general problem than (1), where it is subtly assumed that
$p(\mu_{i}|x)=\mathds{1}_{\left[x\in S_{i}\right]}$.
The definition of the random variable $Q$ allows us to constrain the
distribution of $(X,Q)$ by maximizing the entropy:
$\displaystyle H(\mu)$ $\displaystyle:=\mathbb{E}\left[-\log
P(X,Q)\right]=H(X)+H(Q|X)$ (3) $\displaystyle=H(X)-\int
p(x)\sum_{i}p(\mu_{i}|x)\log p(\mu_{i}|x)~{}dx,$
at different levels. This is essentially a realization of Jaynes’s maximum
entropy principle [21]. We formulate this multi-objective optimization as the
minimization of the Lagrangian
$\min_{\mu}F_{\lambda}(\mu):=(1-\lambda)D(\mu)-\lambda H(\mu)$ (4)
where $\lambda\in[0,1)$ acts as a Lagrange multiplier. The term
$T:=\frac{\lambda}{1-\lambda},\ \lambda\in[0,1)$ can be seen as a temperature
coefficient in a deterministic annealing process [14]. In this regard, this
approach follows from the Online Deterministic Annealing (ODA) algorithm in
[14], and its offline predecessor [15]. Equation (4) represents the
scalarization method for trade-off analysis between two objectives,
one related to performance and one to generalization. The entropy $H$ acts
as a regularization term, and is given progressively less weight as $\lambda$
(resp. $T$) decreases. For large values of $\lambda\rightarrow 1$ (resp.
$T\rightarrow\infty$) we essentially maximize the entropy, and as $\lambda$
(resp. $T$) is lowered, we transition from one Pareto point to another in a
naturally occurring direction that resembles an annealing process.
In the remaining section, we will (i) derive an analytical solution of the
optimization problem (4) and a recursive gradient-free training rule to
approximate it online, (ii) show that the number of unique locations
$\left\\{\mu_{i}\right\\}$ is finite for $\lambda>0$ and increases as
$\lambda$ decreases beyond certain critical values with respect to a
bifurcation phenomenon, and (iii) analyze the asymptotic behavior and
complexity of this approach.
### II-A Solving the Optimization Problem
As in the case of standard vector quantization algorithms, we will minimize
$F_{\lambda}$ in (4) by successively minimizing it, first with respect to the
association probabilities $\left\\{p(\mu_{i}|x)\right\\}$, and then with
respect to the codevector locations $\mu$. The following lemma provides the
solution of minimizing $F_{\lambda}$ with respect to the association
probabilities $p(\mu_{i}|x)$:
###### Lemma 1.
The solution of the optimization problem
$\displaystyle F_{\lambda}^{*}(\mu)$
$\displaystyle:=\min_{\left\\{p(\mu_{i}|x)\right\\}}F_{\lambda}(\mu)$ (5) s.t.
$\displaystyle\sum_{i}p(\mu_{i}|x)=1$
is given by the Gibbs distributions
$p^{*}(\mu_{i}|x)=\frac{e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{i})}}{\sum_{j}e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{j})}},~{}\forall
x\in S$ (6)
###### Proof.
See Appendix A. ∎
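For illustration, the Gibbs association probabilities (6) can be evaluated as a numerically stabilized softmax over the scaled dissimilarities; the sketch below assumes a squared-Euclidean default for $d$ and is not part of the formal derivation.

```python
import numpy as np

def association_probabilities(x, mu, lam, d=lambda x, m: np.sum((x - m) ** 2)):
    """Gibbs association probabilities p*(mu_i | x) of (6).

    mu is an iterable of codevectors, lam the Lagrange coefficient in (0, 1),
    and d a Bregman divergence (squared Euclidean distance by default).
    """
    logits = np.array([-(1.0 - lam) / lam * d(x, m) for m in mu])
    logits -= logits.max()          # stabilize the softmax numerically
    p = np.exp(logits)
    return p / p.sum()
```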
In order to minimize $F_{\lambda}^{*}(\mu)$ with respect to the codevector
locations $\mu$ we observe that
$\displaystyle\frac{d}{d\mu}F_{\lambda}^{*}(\mu)=\int
p(x)\sum_{i}(1-\lambda)\frac{d}{d\mu}\left(p^{*}(\mu_{i}|x)d_{\phi}(x,\mu_{i})\right)$
$\displaystyle\quad\quad\quad+\lambda\frac{d}{d\mu}\left(p^{*}(\mu_{i}|x)\log
p^{*}(\mu_{i}|x)\right)~{}dx$
$\displaystyle\phantom{\frac{d}{d\mu}F_{\lambda}^{*}(\mu)}=\int
p(x)\sum_{i}(1-\lambda)\frac{d}{d\mu}p^{*}(\mu_{i}|x)d_{\phi}(x,\mu_{i})$
$\displaystyle\quad\quad\quad+(1-\lambda)p^{*}(\mu_{i}|x)\frac{d}{d\mu}d_{\phi}(x,\mu_{i})+\lambda\frac{d}{d\mu}p^{*}(\mu_{i}|x)$
$\displaystyle\quad\quad\quad+\lambda\frac{d}{d\mu}p^{*}(\mu_{i}|x)\log
p^{*}(\mu_{i}|x)~{}dx$
$\displaystyle\phantom{\frac{d}{d\mu}F_{\lambda}^{*}(\mu)}=\int
p(x)\sum_{i}(1-\lambda)p^{*}(\mu_{i}|x)\frac{d}{d\mu}d_{\phi}(x,\mu_{i})$
$\displaystyle+\lambda\frac{d}{d\mu}p^{*}(\mu_{i}|x)-\lambda\frac{d}{d\mu}p^{*}(\mu_{i}|x)\sum_{j}e^{-\frac{1-\lambda}{\lambda}d_{\phi}(x,\mu_{j})}~{}dx$
$\displaystyle\phantom{\frac{d}{d\mu}F_{\lambda}^{*}(\mu)}=\sum_{i}\int
p(x)p^{*}(\mu_{i}|x)\frac{d}{d\mu_{i}}d(x,\mu_{i})~{}dx$
such that
$\frac{d}{d\mu}F_{\lambda}^{*}(\mu)=0\implies\sum_{i}\int
p(x)p^{*}(\mu_{i}|x)\frac{d}{d\mu_{i}}d(x,\mu_{i})~{}dx=0$ (7)
where we have used (6), direct differentiation, and
$\sum_{i}\frac{d}{d\mu}p^{*}(\mu_{i}|x)=\frac{d}{d\mu}\sum_{i}p^{*}(\mu_{i}|x)=0$.
In the following section, we show that (7) has an easy to compute closed form
solution if the dissimilarity measure $d$ belongs to the family of Bregman
divergences.
### II-B Bregman Divergences as Dissimilarity Measures
The proximity measure $d$ can be generalized to dissimilarity measures
inspired by information theory and statistical analysis. In particular, the
family of Bregman divergences can offer numerous advantages in learning
applications compared to the Euclidean distance alone [17].
###### Definition 1 (Bregman Divergence).
Let $\phi:S\rightarrow\mathbb{R}$, be a strictly convex function defined on a
vector space $S\subseteq\mathbb{R}^{d}$ such that $\phi$ is twice
F-differentiable on $S$. The Bregman divergence $d_{\phi}:S\times
ri(S)\rightarrow\left[0,\infty\right)$ is defined as:
$\displaystyle
d_{\phi}\left(x,\mu\right)=\phi\left(x\right)-\phi\left(\mu\right)-\frac{\partial\phi}{\partial\mu}\left(\mu\right)\left(x-\mu\right),$
where $x,\mu\in S$, and the continuous linear map
$\frac{\partial\phi}{\partial\mu}\left(\mu\right):S\rightarrow\mathbb{R}$ is
the Fréchet derivative of $\phi$ at $\mu$.
The derivative of $d_{\phi}$ with respect to the second argument can be
written as
$\displaystyle\frac{\partial
d_{\phi}}{\partial\mu}(x,\mu)=-\frac{\partial^{2}\phi(\mu)}{\partial\mu^{2}}(x-\mu)=-\left<\nabla^{2}\phi(\mu),(x-\mu)\right>$
(8)
which leads to the following theorem showing that if $d$ is a Bregman
divergence, the solution to the second optimization step (7) can be
analytically computed in a convenient centroid form:
###### Theorem 2.
A sufficient condition for the solution of the optimization problem
$\min_{\mu}F_{\lambda}^{*}(\mu)$ (9)
where $F_{\lambda}^{*}(\mu)$ is defined in (5), is given by
$\mu_{i}^{*}=\mathbb{E}\left[X|\mu_{i}\right]=\frac{\int
xp(x)p^{*}(\mu_{i}|x)~{}dx}{p^{*}(\mu_{i})}$ (10)
if $d:=d_{\phi}$ is a Bregman divergence for some function $\phi$ that
satisfies Definition 1.
###### Proof.
Given (8), (7) becomes
$\int(x-\mu_{i})p(x)p^{*}(\mu_{i}|x)~{}dx=0$ (11)
which is equivalent to (10) since $\int
p(x)p^{*}(\mu_{i}|x)~{}dx=p^{*}(\mu_{i})$. ∎
As a final note, the family of Bregman divergences includes two notable
examples. The first is the widely used squared Euclidean distance
$d_{\phi}(x,\mu)=\|x-\mu\|^{2}$ ($\phi(x)=\left<x,x\right>,\
x\in\mathbb{R}^{d}$), and the second is the generalized Kullback-Leibler
divergence $d_{\phi}(x,\mu)=\left<x,\log
x-\log\mu\right>-\left<\mathds{1},x-\mu\right>$ ($\phi(x)=\left<x,\log
x\right>,\ x\in\mathbb{R}_{++}^{d}$).
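As an illustrative sketch (assuming dense vector arguments and elementwise NumPy operations), both examples can be obtained from a single constructor implementing Definition 1:

```python
import numpy as np

def bregman(phi, grad_phi):
    """Return d_phi(x, mu) = phi(x) - phi(mu) - <grad phi(mu), x - mu>."""
    return lambda x, mu: phi(x) - phi(mu) - grad_phi(mu) @ (x - mu)

# phi(x) = <x, x> gives the squared Euclidean distance ||x - mu||^2.
sq_euclidean = bregman(lambda x: x @ x, lambda x: 2.0 * x)

# phi(x) = <x, log x> on the positive orthant gives the generalized KL divergence.
gen_kl = bregman(lambda x: x @ np.log(x), lambda x: np.log(x) + 1.0)

x, mu = np.array([0.2, 0.8]), np.array([0.5, 0.5])
assert np.isclose(sq_euclidean(x, mu), np.sum((x - mu) ** 2))
```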
### II-C The Online Learning Rule
In an offline approach, the approximation of the conditional expectation
$\mathbb{E}\left[X|\mu_{i}\right]$ is computed by the sample mean of the data
points weighted by their association probabilities $p(\mu_{i}|x)$ [15]. To
define an online training rule for the deterministic annealing framework, a
stochastic approximation algorithm can be formulated [14] to recursively
estimate $\mathbb{E}\left[X|\mu_{i}\right]$ directly. The following theorem
follows directly from [11] and provides an online learning rule that solves
the optimization problem (9).
###### Theorem 3.
Let $\left\\{x_{n}\right\\}$ be a sequence of independent realizations of $X$.
Then $\mu_{i}(n)$, defined by the online training rule
$\begin{cases}\rho_{i}(n+1)&=\rho_{i}(n)+\alpha(n)\left[\hat{p}(\mu_{i}|x_{n})-\rho_{i}(n)\right]\\\
\sigma_{i}(n+1)&=\sigma_{i}(n)+\alpha(n)\left[x_{n}\hat{p}(\mu_{i}|x_{n})-\sigma_{i}(n)\right]\end{cases}$
(12)
where $\sum_{n}\alpha(n)=\infty$, $\sum_{n}\alpha^{2}(n)<\infty$, and the
quantities $\hat{p}(\mu_{i}|x_{n})$ and $\mu_{i}(n)$ are recursively updated
as follows:
$\displaystyle\mu_{i}(n)=\frac{\sigma_{i}(n)}{\rho_{i}(n)},\quad\hat{p}(\mu_{i}|x_{n})=\frac{\rho_{i}(n)e^{-\frac{1-\lambda}{\lambda}d(x_{n},\mu_{i}(n))}}{\sum_{i}\rho_{i}(n)e^{-\frac{1-\lambda}{\lambda}d(x_{n},\mu_{i}(n))}}$
(13)
converges almost surely to a locally asymptotically stable solution of the
optimization (9), as $n\rightarrow\infty$.
###### Proof.
See Appendix B. ∎
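A minimal sketch of one pass of the recursions (12), (13) at a fixed $\lambda$ is given below; the array layout, the stepsize schedule, and the choice of Bregman divergence are illustrative assumptions, and the temperature schedule with codevector insertion of the full algorithm is handled separately (cf. Section II-D).

```python
import numpy as np

def oda_pass(samples, mu, rho, sigma, lam, d, alpha):
    """One pass of the online training rule (12)-(13) at a fixed lambda.

    mu: (K, dim) codevectors, rho: (K,) estimates of p(mu_i), sigma: (K, dim)
    estimates of E[X p(mu_i|X)]; d is a Bregman divergence, and alpha(n) a
    stepsize schedule with sum alpha(n) = inf and sum alpha(n)^2 < inf.
    """
    for n, x in enumerate(samples):
        w = rho * np.array([np.exp(-(1.0 - lam) / lam * d(x, m)) for m in mu])
        p_hat = w / w.sum()                               # \hat{p}(mu_i | x_n), eq. (13)
        rho += alpha(n) * (p_hat - rho)                   # first recursion in (12)
        sigma += alpha(n) * (p_hat[:, None] * x - sigma)  # second recursion in (12)
        mu = sigma / rho[:, None]                         # mu_i(n) = sigma_i(n) / rho_i(n)
    return mu, rho, sigma
```

For example, with the squared Euclidean distance one could call `oda_pass(data, mu0, rho0, sigma0, lam=0.5, d=lambda x, m: np.sum((x - m) ** 2), alpha=lambda n: 1.0 / (n + 2))`.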
The learning rule (12), (13) is a stochastic approximation algorithm [16]. In
the limit $\lambda\rightarrow 0$, it results in a consistent density estimator
according to the following theorem:
###### Theorem 4.
In the limit $\lambda\rightarrow 0$, and as the number of observed samples
$\left\\{x_{n}\right\\}$ goes to infinity, i.e., $n\rightarrow\infty$, the
learning algorithm based on (12), (13), results in a codebook $\mu$ that
constructs a consistent density estimator with
$\hat{p}(x)=\frac{\sum_{i}\mathds{1}_{\left[x\in S_{i}\right]}}{nVol(S_{i})}$,
where $S_{i}=\left\\{x\in
S:i=\operatorname*{arg\,min}\limits_{j}~{}d(x,\mu_{j})\right\\}$.
###### Proof.
See Appendix C. ∎
This means that as $\lambda\rightarrow 0$, the representation of the random
variable $X\in S$ by the codevectors $\mu$ becomes all the more accurate in
$S$, according to the underlying probability density $p(x)$. Regarding
clustering, the nearest-neighbor rule can be used to partition the space $S$
in Voronoi cells $S_{i}=\left\\{x\in
S:i=\operatorname*{arg\,min}_{j}~{}d_{\phi}(x,\mu_{j})\right\\}$.
###### Remark 1.
Notice that we can express the dynamics of the codevector parameters
$\mu_{i}(n)$ directly as:
$\displaystyle\mu_{i}(n+1)$
$\displaystyle=\frac{\alpha(n)}{\rho_{i}(n)}\bigg{[}\frac{\sigma_{i}(n+1)}{\rho_{i}(n+1)}(\rho_{i}(n)-\hat{p}(\mu_{i}|x_{n}))$
(14)
$\displaystyle\quad\quad\quad\quad\quad+(x_{n}\hat{p}(\mu_{i}|x_{n})-\sigma_{i}(n))\bigg{]}$
where the recursive updates take place for every codevector $\mu_{i}$
sequentially. This is a discrete-time dynamical system that presents
bifurcation phenomena with respect to the parameter $\lambda$, i.e., the
number of equilibria of this system changes with respect to the value
$\lambda$ which is hidden inside the term $\hat{p}(\mu_{i}|x_{n})$ in (13).
According to this phenomenon, the number of distinct values of $\mu_{i}$ is
finite, and the updates need only be taken with respect to these values that
we call “effective codevectors”. This is discussed in Section II-D.
### II-D Bifurcation Phenomena
So far, we have assumed a countably infinite set of codevectors. In this
section we will show that the unique values of the set
$\left\\{\mu_{i}\right\\}$ that solve (4) form a finite set of cardinality $K(\lambda)$;
we will refer to these values as “effective codevectors” throughout this paper.
In other words, both the number and the locations of the codevectors depend on
the value of $\lambda$ (resp. the value of the temperature parameter $T$).
These effective codevectors are the only values that an algorithmic
implementation will need to store in memory and update.
First, notice that when $\lambda\rightarrow 1$ (resp. $T\rightarrow\infty$)
equation (6) yields uniform association probabilities
$p(\mu_{i}|x)=p(\mu_{j}|x),\ \forall i,j,\forall x$. As a result of (7), all
codevectors are located at the same point:
$\displaystyle\mu_{i}=\mathbb{E}\left[X\right],\ \forall i$
which means that there is one unique effective codevector given by
$\mathbb{E}\left[X\right]$.
As $\lambda$ is lowered below a critical value, a bifurcation phenomenon
occurs, when the number of effective codevectors increases. Mathematically,
this occurs when the existing solution $\mu^{*}$ given by (10) is no longer
the minimum of the free energy $F^{*}$, as $\lambda$ (resp. the temperature
$T$) crosses a critical value. Following principles from variational calculus,
we can rewrite the necessary condition for optimality (7) as
$\frac{d}{d\epsilon}F^{*}(\mu+\epsilon\psi)|_{\epsilon=0}=0$ (15)
with the second order condition being
$\frac{d^{2}}{d\epsilon^{2}}F^{*}(\left\\{\mu+\epsilon\psi\right\\})|_{\epsilon=0}\geq
0$ (16)
for all choices of finite perturbations $\left\\{\psi\right\\}$. Here we
denote by $\left\\{y:=\mu+\epsilon\psi\right\\}$ a perturbed codebook, where
$\psi$ are perturbation vectors applied to the codevectors $\mu$, and
$\epsilon\geq 0$ is used to scale the magnitude of the perturbation.
Bifurcation occurs when equality is achieved in (16) and hence the minimum is
no longer stable (for simplicity we ignore higher-order derivatives, which
should be checked for mathematical completeness, but which are of minimal
practical importance; the result is a necessary condition for bifurcation).
These conditions are described in the following theorem. A sketch of the proof
can be found in [11], and a complete version is given in Appendix D.
###### Theorem 5.
Bifurcation occurs under the following condition
$\exists y_{n}\text{ s.t. }p(y_{n})>0\text{ and
}\det\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{n})}{\partial
y_{n}^{2}}C_{X|y_{n}}\right]=0,$ (17)
where
$C_{X|y_{n}}:=\mathbb{E}\left[(X-y_{n})(X-y_{n})^{\mathrm{T}}|y_{n}\right]$.
###### Proof.
See Appendix D. ∎
In other words, there exist critical values for $\lambda$ that depend on the
data space itself and the choice of the Bregman divergence (through the
function $\phi$), such that bifurcation occurs when
$\frac{\lambda}{1-\lambda}=\frac{\partial^{2}\phi(y_{n})}{\partial
y_{n}^{2}}\bar{\nu}$ (18)
where $\bar{\nu}$ is the largest eigenvalue of $C_{X|y_{n}}$. In other words,
an algorithmic implementation needs to store only as many codevectors as there
are effective codevectors, a number that changes only when the temperature
parameter crosses certain thresholds determined by the dataset at hand and the
dissimilarity measure used. As shown in Alg. 1, we can detect the
bifurcation points by introducing perturbed pairs of codevectors at each
temperature level $\lambda$ (resp. $T$). In this way, the codevectors $\mu$
are doubled by inserting a perturbation of each $\mu_{i}$ in the set of
effective codevectors. The newly inserted codevectors will merge with their
pair if a critical temperature has not been reached and separate otherwise.
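To make the detection criterion concrete, the following sketch estimates the critical value of $\lambda$ for a single codevector in the squared Euclidean case ($\partial^{2}\phi/\partial y^{2}=2I$), where (18) reduces to $\lambda/(1-\lambda)=2\bar{\nu}$; the function name and the sample-based estimate of $C_{X|y_{n}}$ are assumptions of ours, not part of the text.

```python
import numpy as np

def critical_lambda(samples, y):
    """Critical temperature coefficient for a codevector y (cf. (17)-(18)),
    assuming the squared Euclidean case phi(y) = ||y||^2, for which
    d^2 phi / dy^2 = 2 I and bifurcation occurs at lambda/(1-lambda) = 2*nu_bar.

    samples : array of shape (m, d), data points currently associated with y
    y       : codevector, shape (d,)
    """
    diff = samples - y
    C = diff.T @ diff / len(samples)               # sample estimate of C_{X|y}
    nu_bar = float(np.linalg.eigvalsh(C).max())    # largest eigenvalue
    r = 2.0 * nu_bar                               # lambda / (1 - lambda) at bifurcation
    return r / (1.0 + r)                           # splitting occurs below this lambda
```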
### II-E Connection to Vector Quantization. Compression Rate and Error
It is apparent that problem (4) is an entropy-constrained generalization of a
soft-clustering method that, in the limit $\lambda\rightarrow 0$, converges to
a standard vector quantization (hard-clustering) problem. In fact, one can
easily verify that
$\displaystyle F_{\lambda}(\mu)+\lambda H(X)=(1-\lambda)D(\mu)-\lambda
H(\mu)+\lambda H(X)$ (19) $\displaystyle=\int
p(x)\sum_{i}p(\mu_{i}|x)\left[(1-\lambda)d(x,\mu_{i})-\lambda\log
p(\mu_{i}|x)\right]~{}dx,$
and, since the entropy term $H(X)$ does not depend on the optimization
parameters $\mu$, the clustering approach in (4) is equivalent to soft-
clustering with respect to a modified dissimilarity measure given by:
$d_{\lambda}(x,\mu_{i})=(1-\lambda)d(x,\mu_{i})-\lambda\log p(\mu_{i}|x)$ (20)
subject to the constraint $\sum_{i}p(\mu_{i}|x)=1$.
Therefore, the proposed method is a lossy compression method with
hierarchically decreasing loss as $\lambda\rightarrow 0$ and the number of
effective codevectors goes to infinity, i.e., $K\rightarrow\infty$. An
explicit expression of the error rate $F_{\lambda}(\mu^{*})$, for each
temperature level $\lambda$, as a function of $F_{0}(\mu^{*})=D(\mu^{*})$,
i.e., the error rate of a vector quantization algorithm, is hard to obtain as
it highly depends on the underlying distribution of the data space at hand
through the entropy term. However, an intuitive interpretation of the
hierarchy of solutions that is constructed by solving (4) for decreasing
values of $\lambda$ can be seen from the form of the conditional probabilities
in (6). That is, during the implementation of the algorithm, at every level
$\lambda$, a (soft-)Voronoi partition of the data space is computed with
respect to a scaled dissimilarity measure:
$\bar{d}_{\lambda}(x,\mu_{i})=\frac{1-\lambda}{\lambda}d(x,\mu_{i})=\frac{1}{T}d(x,\mu_{i})$
(21)
Thus, the algorithm perceives a scaled version of the data space at each level
$\lambda$, focusing only on large dissimilarities within the data space
when the value of $\lambda$ is high, and progressively zooming in to perceive
more subtle dissimilarities as the value of $\lambda$ decreases. Therefore,
the error rate $F_{\lambda}(\mu^{*})$ can be roughly expressed as proportional
to $D(\mu^{*})$ and the term $\frac{1-\lambda}{\lambda}$ (inversely
proportional to the temperature level $T$), i.e.,
$F_{\lambda}(\mu^{*})\propto\frac{1-\lambda}{\lambda}D(\mu^{*})$.
Note that this is the worst case scenario, when the introduction of the
entropy term induces information loss across all regions of the data space. In
many cases, there are regions of the data space where higher compression rate
does not introduce information loss, or the information loss is significantly
lower than others. In this sense, one can view (4) as a risk-sensitive version
of soft-clustering, where an optimistic, or risk-seeking, approach is adopted.
Risk-seeking in this setting translates to searching for less complex
representations (with lower number of effective codevectors that induce higher
entropy) in the hope that more complex representations are not necessarily
needed. The parameter $\lambda$ then becomes a weight of risk-sensitivity.
More details regarding this interpretation can be found in [22]. In view of
the above, the error rate $F_{\lambda}(\mu^{*})$ can be roughly bounded as
$F_{\lambda}(\mu^{*})\leq\frac{1-\lambda}{\lambda}D(\mu^{*}),\
\lambda\in[0,1).$ (22)
### II-F Algorithmic Implementation and Complexity
The progressive partitioning algorithm is shown in Algorithm 1. The
temperature parameter $\lambda_{t}$ is reduced using the geometric series
$\lambda_{t+1}=\gamma\lambda_{t}$, for $\gamma<1$. Regarding the stochastic
approximation stepsizes, simple time-based learning rates of the form
$\alpha_{n}=\nicefrac{{1}}{{a+bn}}$, $a,b>0$, have been shown experimentally to be
sufficient for fast convergence. Convergence is checked with the condition
$\frac{1-\lambda}{\lambda}d_{\phi}(\mu_{i}^{n},\mu_{i}^{n-1})<\epsilon_{c}$
for a given threshold $\epsilon_{c}$. This condition becomes harder to satisfy
as the value of $\lambda$ decreases. The stopping criteria $T_{stop}$ can include a
maximum number of codevectors $K_{max}$ allowed, a minimum temperature
$\lambda_{min}$ to be reached, a minimum distortion error $e_{target}$ to be
reached, a maximum number of iterations $i_{max}$, and so on.
Bifurcation, at $\lambda_{t}$, is detected by maintaining a pair
$\left\\{\mu_{j}+\delta,\mu_{j}-\delta\right\\}$ of perturbed codevectors for
each effective codevector $\mu_{j}$ generated by the algorithm at
$\lambda_{t-1}$, i.e., for $j=1,\ldots,K_{t-1}$. Using arguments from
variational calculus (see Section II-D), it is easy to see that, upon
convergence, the perturbed codevectors will merge if a critical temperature has
not been reached, and will separate otherwise. Therefore, the cardinality
of the model is at most doubled at every temperature level. These are the
effective codevectors discussed in Section II-D. Merging is detected by the
condition $\frac{1-\lambda}{\lambda}d_{\phi}(\mu_{j},\mu_{i})<\epsilon_{n}$,
where $\epsilon_{n}$ is a design parameter that acts as a regularization term
controlling the number of effective codevectors. An additional
regularization mechanism is the detection of idle codevectors, which is
checked by the condition $\rho_{i}(n)<\epsilon_{r}$, where $\rho_{i}(n)$ can
be seen as an approximation of the probability $p(\mu_{i})$.
The complexity of Alg. 1 for a fixed temperature coefficient $\lambda_{t}$ is
$O(N_{c_{t}}(2K_{t})^{2}d)$, where $N_{c_{t}}$ is the number of stochastic
approximation iterations needed for convergence which corresponds to the
number of data samples observed, $K_{t}$ is the number of codevectors of the
model at temperature $\lambda_{t}$, and $d$ is the dimension of the input
vectors, i.e., $x\in\mathbb{R}^{d}$. Therefore, assuming a schedule
$\left\\{\lambda_{1}=\lambda_{max},\lambda_{2},\ldots,\lambda_{N_{\lambda}}=\lambda_{min}\right\\}$,
the time complexity for the training of Algorithm 1 becomes:
$\displaystyle O(N_{c}(2\bar{K})^{2}d)$
where $N_{c}=\max_{t}\left\\{N_{c_{t}}\right\\}$ is an upper bound on the
number of data samples observed until convergence at each temperature level,
and $\bar{K}=\sum_{t=1}^{N_{\lambda}}K_{t}$, with
$\displaystyle
N_{\lambda}\leq\bar{K}\leq\min\left\\{\sum_{n=0}^{N_{\lambda}-1}2^{n},\sum_{n=0}^{\log_{2}K_{max}}2^{n}\right\\}<N_{\lambda}K_{max}$
where the actual value of $\bar{K}$ depends on the bifurcations occurred as a
result of reaching critical temperatures and the effect of the regularization
mechanisms described above. Note that typically $N_{c}\ll N$ as a result of
the stochastic approximation algorithm, and $\bar{K}\ll N_{\lambda}K_{max}$ as
a result of the progressive nature of the algorithm. Prediction scales
linearly with $O(K_{N_{\lambda}}d)$, with $K_{N_{\lambda}}\leq K_{max}$.
Algorithm 1 Progressive Partitioning.
Select a Bregman divergence $d_{\phi}$
Set stopping criteria $T_{stop}$ (e.g., $K_{max}$, $\lambda_{min}$)
Set convergence parameters: $\gamma$, $\epsilon_{c}$, $\epsilon_{n}$,
$\epsilon_{r}$, $\delta$
Set stepsizes: $\left\\{\alpha_{n}\right\\}$
Initialize: $K=1$, $\lambda=1$, $\left\\{\mu_{0}\right\\}$,
$p(\mu_{0})=1$, $\sigma(\mu_{0})=\mu_{0}p(\mu_{0})$
repeat
Perturb codebook:
$\left\\{\mu_{i}\right\\}\leftarrow\left\\{\mu_{i}+\delta\right\\}\bigcup\left\\{\mu_{i}-\delta\right\\}$
Update $K\leftarrow 2K$, $\left\\{p(\mu_{i})\right\\}$,
$\left\\{\sigma(\mu_{i})\leftarrow\mu_{i}p(\mu_{i})\right\\}$
$n\leftarrow 0$
repeat
Observe data point $x$
for $i=1,\ldots,K$ do
Update: $\displaystyle p(\mu_{i}|x)$
$\displaystyle\leftarrow\frac{p(\mu_{i})e^{-\frac{1-\lambda}{\lambda}d_{\phi}(x,\mu_{i})}}{\sum_{i}p(\mu_{i})e^{-\frac{1-\lambda}{\lambda}d_{\phi}(x,\mu_{i})}}$
$\displaystyle p(\mu_{i})$ $\displaystyle\leftarrow
p(\mu_{i})+\alpha_{n}\left[p(\mu_{i}|x)-p(\mu_{i})\right]$
$\displaystyle\sigma(\mu_{i})$
$\displaystyle\leftarrow\sigma(\mu_{i})+\alpha_{n}\left[xp(\mu_{i}|x)-\sigma(\mu_{i})\right]$
$\displaystyle\mu_{i}$
$\displaystyle\leftarrow\frac{\sigma(\mu_{i})}{p(\mu_{i})}$
$n\leftarrow n+1$
end for
until Convergence:
$\frac{1-\lambda}{\lambda}d_{\phi}(\mu_{i}^{n},\mu_{i}^{n-1})<\epsilon_{c}$,
$\forall i$
Keep effective codevectors: discard $\mu_{i}$ if
$\frac{1-\lambda}{\lambda}d_{\phi}(\mu_{j},\mu_{i})<\epsilon_{n}$, $\forall
i,j,i\neq j$
Remove idle codevectors: discard $\mu_{i}$ if $p(\mu_{i})<\epsilon_{r}$,
$\forall i$
Update $K$, $\left\\{p(\mu_{i})\right\\}$, $\left\\{\sigma(\mu_{i})\right\\}$
Lower temperature: $\lambda\leftarrow\gamma\lambda$
until $T_{stop}$
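For concreteness, a compact, self-contained sketch of Alg. 1 for the squared Euclidean divergence is given below. The control flow mirrors the listing above, while the data source, the stepsize schedule $\alpha_{n}=1/(a+bn)$, and all default parameter values are illustrative assumptions of ours rather than prescriptions from the text.

```python
import numpy as np

def oda_partition(sample, d, lam0=0.9, lam_min=0.05, gamma=0.8, K_max=64,
                  eps_c=1e-3, eps_n=1e-2, eps_r=1e-4, delta=1e-2,
                  max_iter=10_000, a=1.0, b=0.1):
    """Minimal sketch of Alg. 1 (progressive partitioning), squared Euclidean case.

    sample : callable returning one observation x in R^d (online data source)
    d      : input dimension
    The remaining arguments play the roles of gamma, eps_c, eps_n, eps_r, delta,
    the stepsize alpha_n = 1/(a + b*n), and illustrative stopping criteria.
    """
    mu = np.array([sample()], dtype=float)      # single initial codevector mu_0
    rho = np.ones(1)                            # p(mu_0) = 1
    lam = lam0

    while lam > lam_min and len(mu) <= K_max:
        # Perturb the codebook: {mu_i} <- {mu_i + delta} U {mu_i - delta}.
        mu = np.vstack([mu + delta, mu - delta])
        rho = np.tile(rho / 2.0, 2)
        sigma = mu * rho[:, None]

        for n in range(max_iter):               # stochastic approximation loop
            x = sample()
            alpha = 1.0 / (a + b * n)
            dist = np.sum((x - mu) ** 2, axis=1)
            w = rho * np.exp(-(1 - lam) / lam * (dist - dist.min()))
            p_post = w / w.sum()                # p_hat(mu_i | x)
            rho += alpha * (p_post - rho)
            sigma += alpha * (x[None, :] * p_post[:, None] - sigma)
            mu_new = sigma / rho[:, None]
            done = np.all((1 - lam) / lam *
                          np.sum((mu_new - mu) ** 2, axis=1) < eps_c)
            mu = mu_new
            if done and n > 10:
                break

        # Regularization: drop idle codevectors and merge near-duplicates.
        keep = rho > eps_r
        mu, rho = mu[keep], rho[keep]
        merged_mu, merged_rho = [], []
        for i in range(len(mu)):
            close = [j for j, m in enumerate(merged_mu)
                     if (1 - lam) / lam * np.sum((mu[i] - m) ** 2) < eps_n]
            if close:
                merged_rho[close[0]] += rho[i]
            else:
                merged_mu.append(mu[i])
                merged_rho.append(rho[i])
        mu, rho = np.array(merged_mu), np.array(merged_rho)

        lam *= gamma                            # lower the temperature
    return mu, rho
```

For example, with `rng = np.random.default_rng()`, a call such as `oda_partition(lambda: rng.standard_normal(2), d=2)` would progressively grow a codebook for a standard Gaussian source.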
## III Learning with Local Models: Combined Partitioning and Function
Approximation
In this section, we investigate the problem of combined partitioning and
function approximation, which results in a learning approach where multiple
local models are trained, taking advantage of the differences in the
underlying probability distribution of the data space. As a consequence, this
approach can circumvent the use of overly complex learning models, reduce
time, memory, and computational complexity, and give insights to certain
properties of the data space [10].
In the general case, a function $f:S\rightarrow\mathcal{F}$ is to be
approximated given a set of observations $\left\\{(x_{n},f(x_{n}))\right\\}$
where $\left\\{x_{n}\right\\}$ are independent realizations of a random
variable $X\in S$, similar to Section II. One then seeks to find a partition
$\left\\{S_{i}\right\\}$ and a set of parameters
$\left\\{\theta_{i}\right\\}\in\Theta$ for some predefined learning models
$\left\\{\hat{f}_{i}(x,\theta_{i})\in\mathcal{F}\right\\}$ such that:
$\min_{\left\\{S_{i},\theta_{i}\right\\}}\
\mathbb{E}\left[\sum_{i}\mathds{1}_{\left[X\in
S_{i}\right]}d\left(f(X),\hat{f}_{i}(X,\theta_{i})\right)\right]$ (23)
where $d:\mathcal{F}\times\mathcal{F}\rightarrow[0,\infty)$ is a well-defined
convex metric with respect to the second argument.
To find a tractable solution to this problem, we decompose the two tasks of
progressive partitioning and function approximation. As described in Section
II, a partition $\left\\{S_{i}\right\\}_{i=1}^{K(\lambda)}$ of the space $S$
can be approximated online using a stochastic approximation algorithm that
solves (4) and yields the locations of a finite number of
$\mu_{\lambda}:=\left\\{\mu_{i}\right\\}_{i=1}^{K(\lambda)}$ codevectors, that
define the regions $S_{i}=\left\\{x\in
S:i=\operatorname*{arg\,min}_{j}~{}d_{\phi}(x,\mu_{j})\right\\}$,
$i=1,\ldots,K(\lambda)$. Given the partition $\left\\{S_{i}\right\\}$, we are
now in place to solve the following problem:
$\min_{\left\\{\theta_{i}\right\\}}\
\mathbb{E}\left[\sum_{i}\mathds{1}_{\left[X\in
S_{i}\right]}d\left(f(X),\hat{f}_{i}(X,\theta_{i})\right)\right]$ (24)
###### Remark 2.
Solving (24) decouples the two tasks of progressive partitioning and local
function approximation and yields a sub-optimal solution to the original
combined problem in (23). That being said, the use of Alg. 1 is a heuristic
method that offers (i) the crucial properties of progressive partitioning, and
(ii) a compressed representation of the data space $S$ such that each $S_{i}$
represents a region of $S$ where its underlying probability distribution
presents low variability (see Section II).
In the remainder of this section, we will study learning approaches to computationally
solve (24) in the general case of a differentiable (with respect to
$\theta_{i}$) learning model $\hat{f}_{i}(x,\theta_{i})$, and in the specific
case of using locally constant models, i.e., when
$\hat{f}(x,\theta_{i})=\theta_{i}\in\mathcal{F}$.
### III-A Learning with Local Models
In this section, we assume a model $\hat{f}_{i}(x,\theta_{i})$
$\in\mathcal{F}$ that is differentiable with respect to a parameter vector
$\theta_{i}\in\Theta$, where $\Theta$ is a finite-dimensional vector space.
Given a finite partition
$\left\\{S_{i}\right\\}_{i=1}^{K(\lambda)}$, with $K(\lambda)<\infty$, (24)
decomposes into
$\min_{\theta_{i}}\ \mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d\left(f(X),\hat{f}_{i}(X,\theta_{i})\right)\right],\
i=1,\ldots,K(\lambda).$ (25)
where $d:\mathcal{F}\times\mathcal{F}\rightarrow[0,\infty)$ is assumed to be a
metric that is differentiable and convex with respect to its second argument.
This is a stochastic optimization problem that can be solved using stochastic
approximation updates. In particular, since we have assumed that
$\hat{f}_{i}(x,\theta_{i})$ is differentiable with respect to $\theta_{i}$, we
can use stochastic gradient descent:
$\displaystyle\theta_{i}(n+1)=\theta_{i}(n)-\beta(n)\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))$
(26)
$\displaystyle\quad=\theta_{i}(n)-\beta(n)\\{\nabla_{\theta}\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))\right]$
$\displaystyle\quad\quad+\big{(}\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))$
$\displaystyle\quad\quad-\nabla_{\theta}\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))\right]\big{)}\\},\
x_{n}\in S_{i}$
Since we can restrict the observations used to update each model $\hat{f}_{i}$ to those
belonging to $S_{i}$, it is easy to see that
$M_{n+1}:=\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))-\nabla_{\theta}\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))\right]$ is a
martingale difference sequence for an unbiased estimator
$\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x,\theta_{i}))$, i.e., when the
condition $\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))\right]=\nabla_{\theta}\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n)))\right]$ holds.
Therefore, as an immediate result of Theorem 10 in Appendix B, the stochastic
approximation process (26) converges almost surely to a possibly path-dependent
invariant set of the ODE
$\dot{\theta}_{i}=-\nabla_{\theta}\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(X),\hat{f}_{i}(X,\theta_{i}))\right]$, i.e., an asymptotically stable
local minimum of the objective function
$\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}d(f(X),\hat{f}_{i}(X,\theta_{i}))\right]$.
So far, we have assumed that $\left\\{S_{i}\right\\}$ is fixed. However, we
are interested in a learning approach that approximates
$\left\\{S_{i}\right\\}$ and $\left\\{\hat{f}_{i}(x,\theta_{i})\right\\}$ at
the same time, and given the same observations
$\left\\{(x_{n},f(x_{n}))\right\\}$, which may be available one at a time (i.e.,
no dataset is stored in memory a priori). This is possible because both
learning algorithms for $\left\\{S_{i}\right\\}$ and
$\left\\{\hat{f}_{i}(x,\theta_{i})\right\\}$ independently are stochastic
approximation algorithms. According to the theory of two-timescale stochastic
approximation, we can run both learning algorithms at the same time, but using
different stepsize profiles $\left\\{\alpha(n)\right\\}$ and
$\left\\{\beta(n)\right\\}$, such that
$\nicefrac{{\alpha(n)}}{{\beta(n)}}\rightarrow 0$. Intuitively, this creates a
system of two coupled dynamical systems running at different “speeds”: the
second system, the one with stepsizes $\left\\{\beta(n)\right\\}$, is updated
fast enough that the first system, the one with stepsizes
$\left\\{\alpha(n)\right\\}$, can be seen as quasi-static with respect to it.
The following theorem summarizes this result.
###### Theorem 6.
Let $\left\\{x_{n}\right\\}$ be a sequence of independent realizations of $X$,
and assume that $\mu_{i}(n)$ is a sequence updated using the stochastic
approximation algorithm in (12) with stepsizes $\left\\{\alpha(n)\right\\}$
satisfying $\sum_{n}\alpha(n)=\infty$, and $\sum_{n}\alpha^{2}(n)<\infty$.
Then, as long as $\left\\{\beta(n)\right\\}$ are designed such that
$\sum_{n}\beta(n)=\infty$, $\sum_{n}\beta^{2}(n)<\infty$, and
$\nicefrac{{\alpha(n)}}{{\beta(n)}}\rightarrow 0$, the asynchronous updates
$\theta_{i}(n+1)=\theta_{i}(n)-\beta(n)\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}(n))),$
(27)
for $i=\operatorname*{arg\,min}_{j}~{}d_{\phi}(x_{n},\mu_{j}(n))$ converges
almost surely to a locally asymptotically stable solution
$\left\\{\theta_{i}\right\\}$ of (25), as $n\rightarrow\infty$, for
$S_{i}=\\{x\in
S:i=\operatorname*{arg\,min}_{j}~{}d_{\phi}(x,\mu_{j}(\infty))\\}$, where
$\mu_{i}(\infty)$ is the asymptotically stable equilibrium of (12).
###### Proof.
See Appendix E. ∎
The algorithmic implementation is shown in Alg. 2 as an extension of Alg. 1.
Algorithm 2 Progressive Learning with Differentiable Models.
—–$//$—–
Set stepsizes: $\left\\{\alpha_{n}\right\\}$, $\left\\{\beta_{n}\right\\}$
s.t. $\nicefrac{{\alpha_{n}}}{{\beta_{n}}}\rightarrow 0$
Initialize: $\left\\{\mu_{0}\right\\}$, $\left\\{\theta_{0}\right\\}$
repeat
—–$//$—–
repeat
Observe data point $x$ & output $y$
for $i=1,\ldots,K$ do
Update:
—–$//$—– $\displaystyle p(\mu_{i})$ $\displaystyle\leftarrow
p(\mu_{i})+\alpha_{n}\left[p(\mu_{i}|x)-p(\mu_{i})\right]$
$\displaystyle\sigma(\mu_{i})$
$\displaystyle\leftarrow\sigma(\mu_{i})+\alpha_{n}\left[xp(\mu_{i}|x)-\sigma(\mu_{i})\right]$
$\theta_{i}\leftarrow\theta_{i}-\beta_{n}\nabla_{\theta}d(f(x),\hat{f}_{i}(x,\theta_{i}))$
—–$//$—–
end for
until Convergence
—–$//$—–
until $T_{stop}$
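The essential step of Alg. 2 is the pair of coupled updates sketched below for the squared Euclidean divergence. The stepsize choices $\alpha_{n}=1/(1+n)$ and $\beta_{n}=(1+n)^{-0.6}$ are one admissible pair satisfying the conditions of Theorem 6 (our choice, not prescribed by the text), and `grad_loss` is a hypothetical callable returning $\nabla_{\theta}d(f(x_{n}),\hat{f}_{i}(x_{n},\theta_{i}))$.

```python
import numpy as np

def two_timescale_step(x, y, mu, rho, sigma, theta, grad_loss, lam, n):
    """One joint update of Alg. 2 (sketch), squared Euclidean divergence assumed.

    grad_loss(y, theta_i, x) should return grad_theta d(f(x), f_hat_i(x, theta_i)),
    where y = f(x) is the observed output. Stepsizes: alpha_n = 1/(1+n) (slow)
    and beta_n = (1+n)^(-0.6) (fast), so that alpha_n / beta_n -> 0.
    """
    alpha = 1.0 / (1.0 + n)
    beta = (1.0 + n) ** (-0.6)

    # Slow timescale: quantizer update as in Alg. 1.
    dist = np.sum((x - mu) ** 2, axis=1)
    w = rho * np.exp(-(1 - lam) / lam * (dist - dist.min()))
    p_post = w / w.sum()
    rho += alpha * (p_post - rho)
    sigma += alpha * (x[None, :] * p_post[:, None] - sigma)
    mu = sigma / rho[:, None]

    # Fast timescale: gradient step for the local model of the winning region (27).
    i = int(np.argmin(dist))
    theta[i] = theta[i] - beta * grad_loss(y, theta[i], x)
    return mu, rho, sigma, theta
```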
### III-B Case of Constant Local Models
In the special case when locally constant models are used, i.e., when
$\hat{f}(x,\theta_{i})=\theta_{i}\in\mathcal{F}$, two-timescale updates are
not required, and a simpler solution can be tracked. In particular, we can
augment the system (12) with
$\begin{cases}\sigma_{\theta_{i}}(n+1)&=\sigma_{\theta_{i}}(n)+\alpha(n)\left[f(x_{n})\hat{p}(\mu_{i}|x_{n})-\sigma_{\theta_{i}}(n)\right]\\\
\theta_{i}(n)&=\frac{\sigma_{\theta_{i}}(n)}{\rho_{i}(n)}\end{cases}$ (28)
Following the same arguments as in the proof of Theorem 3, it is easy to see
that $\theta_{i}(n)$ converge almost surely to
$\mathbb{E}\left[\mathds{1}_{\left[X\in S_{i}\right]}f(X)\right]$ as
$n\rightarrow\infty$ and $\lambda\rightarrow 0$. To see this, notice that as
$\lambda\rightarrow 0$, $p^{*}(\mu_{i}|x)\rightarrow\mathds{1}_{\left[x\in
S_{i}\right]}$ and $p^{*}(\mu_{i})\rightarrow 1$. As a final note, this
approach is equivalent to a piece-wise constant approximation of $f(X)$. In
other words, this is a binning process where the size and location of the bins
depend on the underlying probability distribution of $X$, and the number of
bins progressively increases, resulting in a hierarchical approximation of
$f(X)$. The algorithmic implementation is shown in Alg. 3 as an extension of
Alg. 1.
Algorithm 3 Progressive Learning with Constant Models.
—–$//$—–
Initialize: $\left\\{\mu_{0}\right\\}$, $\left\\{f_{\mu_{0}}\right\\}$,
$\left\\{\sigma_{f}(\mu_{0})\right\\}$
repeat
—–$//$—–
repeat
Observe data point $x$ & output $y$
for $i=1,\ldots,K$ do
Update:
—–$//$—–
$\sigma_{f}(\mu_{i})\leftarrow\sigma_{f}(\mu_{i})+\alpha_{n}\left[yp(\mu_{i}|x)-\sigma_{f}(\mu_{i})\right]$
$\mu_{i}\leftarrow\frac{\sigma(\mu_{i})}{p(\mu_{i})}$,
$f_{\mu_{i}}\leftarrow\frac{\sigma_{f}(\mu_{i})}{p(\mu_{i})}$
end for
until Convergence
—–$//$—–
until $T_{stop}$
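A minimal sketch of the corresponding single-timescale update, assuming a scalar output $y=f(x)$ and the squared Euclidean divergence, is given below; the variable names are ours.

```python
import numpy as np

def constant_model_step(x, y, mu, rho, sigma, sigma_f, lam, alpha):
    """Sketch of the single-timescale updates of Alg. 3 (constant local models).

    y = f(x) is the observed output; the local constants are recovered as
    f_mu_i = sigma_f_i / rho_i, a running estimate of the region average of f.
    """
    dist = np.sum((x - mu) ** 2, axis=1)
    w = rho * np.exp(-(1 - lam) / lam * (dist - dist.min()))
    p_post = w / w.sum()

    rho += alpha * (p_post - rho)
    sigma += alpha * (x[None, :] * p_post[:, None] - sigma)    # for mu_i
    sigma_f += alpha * (y * p_post - sigma_f)                  # for f_mu_i
    mu = sigma / rho[:, None]
    f_mu = sigma_f / rho
    return mu, rho, sigma, sigma_f, f_mu
```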
## IV The Problem of Classification
In this section we focus on the binary classification problem. The results can
be extended to the general case (see, e.g., [23]). For the classification
problem, a pair of random variables $\left\\{X,c(X)\right\\}\in
S\times\left\\{0,1\right\\}$, defined on a probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$, is observed, with $c(X)$
representing the class of $X$ and $S\subseteq\mathbb{R}^{d}$. The codebook is
represented by $\mu:=\left\\{\mu_{i}\right\\}_{i=1}^{K}$, $\mu_{i}\in ri(S)$,
and $c_{\mu}:=\left\\{c_{\mu_{i}}\right\\}_{i=1}^{K}$, such that
$c_{\mu_{i}}\in\left\\{0,1\right\\}$ represents the class of $\mu_{i}$ for all
$i\in\left\\{1,\ldots,K\right\\}$. A partition-based classifier is called
Bayes-optimal if it minimizes the classification error:
$\displaystyle\min_{\mu,c_{\mu}}~{}J_{B}(\mu,c_{\mu})$
$\displaystyle:=\pi_{1}\sum_{i:c_{\mu_{i}}=0}\mathbb{P}\left\\{X\in
S_{i}|c(X)=1\right\\}$ (29)
$\displaystyle\quad+\pi_{0}\sum_{i:c_{\mu_{i}}=1}\mathbb{P}\left\\{X\in
S_{i}|c(X)=0\right\\}$
where $S_{i}=\left\\{x\in
S:i=\operatorname*{arg\,min}\limits_{j}~{}d(x,\mu_{j})\right\\}$, and
$\pi_{i}:=\mathbb{P}\left[c=i\right]$.
In the remainder of this section, we study methods to solve the classification problem
based on the results of Sections II and III.
### IV-A Classification as a Regression Problem with Constant Local Models
The classification problem (29) can be viewed as a special case of learning
with local models as in Section III. Here one seeks to find a partition
$\left\\{S_{i}\right\\}$ and a set of parameters
$\left\\{c_{i}\in\left\\{0,1\right\\}\right\\}$ such that:
$\min_{\left\\{S_{i},c_{i}\right\\}}\
\mathbb{E}\left[\sum_{i}\mathds{1}_{\left[X\in
S_{i}\right]}d\left(c(X),c_{i}\right)\right]$ (30)
where $d:=\mathds{1}_{\left[c\neq c_{\mu_{i}}\right]}$. Notice that since $d$
is not differentiable, the results of Section III cannot be used directly.
However, numerous relaxation methods can be used to find a possibly sub-
optimal solution. A widely used approach is to relax the constraints on
$\left\\{c_{i}\in\left\\{0,1\right\\}\right\\}$ such that
$\left\\{c_{i}\in\left[0,1\right]\right\\}$. Then the updates (28) can be
directly used to estimate $\mathbb{E}\left[\mathds{1}_{\left[X\in
S_{i}\right]}c(X)\right]$. Then a projection mapping
$r:\left[0,1\right]\rightarrow\left\\{0,1\right\\}$, e.g.,
$r(c)=\mathds{1}_{\left[c\geq 0.5\right]}$, can be used to return a solution to
the classification problem. Notice that this is equivalent to a majority-vote
rule inside each region $S_{i}$. This is a common approach that, in the limit
$\lambda\rightarrow 0$, when the updates (12), (13) result in a hard-clustering
approach with an infinite number of clusters, yields a classification
rule that is strongly Bayes risk consistent, i.e., converges to the optimal
(Bayes) probability of error given in (29) (see, e.g., Ch. 21 in [23]).
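A one-line sketch of the final projection step under the relaxation described above is shown below; the helper name is ours, and thresholding at $0.5$ corresponds to the majority vote inside each region.

```python
import numpy as np

def region_labels(c_hat, threshold=0.5):
    """Majority-vote labels from the relaxed estimates (sketch).

    c_hat[i] is the running estimate of the average class label inside S_i,
    obtained from (28) with y = c(x) in {0, 1}; the projection r maps it
    back to a hard label per region.
    """
    return (np.asarray(c_hat) >= threshold).astype(int)
```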
### IV-B Classification as Class-Conditioned Density Estimation
In a different approach, we can formulate the binary classification problem as
the minimization of $F$ in (4) with a modified average distortion measure
given by:
$\displaystyle D=\mathbb{E}\left[d^{b}(X,c,\mu,c_{\mu})\right]$
where $d^{b}(x,c,\mu_{i},c_{\mu_{i}})=\mathds{1}_{\left[x\in
S_{i}\right]}\mathds{1}_{\left[c\neq c_{\mu_{i}}\right]}$. However, because
$d^{b}$ is not differentiable, using similar principles as in the case of
Learning Vector Quantization (LVQ) [9], we can instead approximate the optimal
solution by using the distortion measure
$d^{l}(x,c_{x},\mu,c_{\mu})=\begin{cases}d(x,\mu),~{}c_{x}=c_{\mu}\\\
-d(x,\mu),~{}c_{x}\neq c_{\mu}\end{cases}$ (31)
Using similar arguments to Ch. 21 in [23], it can be shown that as
$\lambda\rightarrow 0$, the solution $(\mu,c_{\mu})$ to the above problem
equipped with a majority-vote classification rule is strongly Bayes risk
consistent.
However, we find it useful to also explore a generative learning approach, using
$d^{c}(x,c_{x},\mu,c_{\mu})=\begin{cases}d(x,\mu),~{}c_{x}=c_{\mu}\\\
0,~{}c_{x}\neq c_{\mu}\end{cases}$ (32)
It is easy to see that this particular choice for the distortion measure
$d^{c}$ in (32) transforms the learning rule in (12) to
$\begin{cases}\rho_{i}(n+1)&=\rho_{i}(n)+\beta(n)\left[s_{i}\hat{p}(\mu_{i}|x_{n})-\rho_{i}(n)\right]\\\
\sigma_{i}(n+1)&=\sigma_{i}(n)+\beta(n)\left[s_{i}x_{n}\hat{p}(\mu_{i}|x_{n})-\sigma_{i}(n)\right]\end{cases}$
(33)
where $s_{i}:=\mathds{1}_{\left[c_{\mu_{i}}=c\right]}$. As a result, this is
equivalent to estimating strongly consistent class-conditional density
estimators:
$\hat{p}(x|c=j)\rightarrow\pi_{j}p(x|c=j),\ a.s.$ (34)
and the following theorem holds:
###### Theorem 7.
In the limit $\lambda\rightarrow 0$, and as the number of observed samples
$\left\\{x_{n}\right\\}$ goes to infinity, i.e., $n\rightarrow\infty$, the
learning algorithm based on (33), (13), results in strongly consistent class-
conditional density estimators $\hat{p}(x|c=j)$ that construct a Bayes risk
consistent classifier with the classification rule
$\hat{c}=\operatorname*{arg\,max}_{j}\hat{\pi}_{j}\hat{p}(x|c=j),\ j=1,2$ (35)
where $\hat{\pi}_{j}=\frac{\sum_{n}\mathds{1}_{\left[c_{n}=j\right]}}{n}$.
###### Proof.
See Appendix F. ∎
As a final note, an easy-to-implement nearest-neighbor classification rule:
$\hat{c}(x)=c_{\mu_{h^{*}}}$ (36)
where
$h^{*}=\operatorname*{arg\,max}\limits_{\tau\in\left\\{1,\ldots,K\right\\}}~{}p(\mu_{\tau}|x)$,
yields a classification error $\hat{J}_{B}^{*}$ with tight upper bound with
respect to the Bayes-optimal $J_{B}^{*}$, i.e.,
$J_{B}^{*}\leq\hat{J}_{B}^{*}\leq 2J_{B}^{*}$ (see, e.g., [23]). The
algorithmic implementation is shown in Alg. 4 as an extension of Alg. 1.
Algorithm 4 Progressive Classification via Class-Conditional Density
Estimation.
—–$//$—–
Initialize: $\left\\{\mu_{0}\right\\}$ & $\left\\{c_{\mu_{0}}\right\\}$,
repeat
—–$//$—–
repeat
Observe data point $x$ & class label $c$
if $\nexists\mu_{i}$ s.t. $c_{\mu_{i}}=c$ then
Insert:
$\left\\{\mu_{i}\right\\}\leftarrow\left\\{\mu_{i}\right\\}\bigcup\left\\{x\right\\}$
Insert:
$\left\\{c_{\mu_{i}}\right\\}\leftarrow\left\\{c_{\mu_{i}}\right\\}\bigcup\left\\{c\right\\}$
end if
for $i=1,\ldots,K$ do
Compute membership $s_{i}=\mathds{1}_{\left[c_{\mu_{i}}=c\right]}$
Update:
—–$//$—–
$p(\mu_{i})\leftarrow p(\mu_{i})+\alpha_{n}\big{[}s_{i}p(\mu_{i}|x)-p(\mu_{i})\big{]}$
$\sigma(\mu_{i})\leftarrow\sigma(\mu_{i})+\alpha_{n}\big{[}s_{i}xp(\mu_{i}|x)-\sigma(\mu_{i})\big{]}$
—–$//$—–
end for
until Convergence
—–$//$—–
until $T_{stop}$
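The gated updates (33) and the resulting decision rules admit the short sketch below (squared Euclidean divergence assumed; the simplified classifier ignores the prior weights $\rho_{i}$ and is therefore only an approximation of rule (36)).

```python
import numpy as np

def class_conditional_step(x, c, mu, c_mu, rho, sigma, lam, alpha):
    """Sketch of the gated updates (33) used in Alg. 4.

    Only codevectors of the observed class c are updated, via s_i = 1[c_mu_i == c].
    """
    s = (c_mu == c).astype(float)
    dist = np.sum((x - mu) ** 2, axis=1)
    w = rho * np.exp(-(1 - lam) / lam * (dist - dist.min()))
    p_post = w / w.sum()

    rho += alpha * (s * p_post - rho)
    sigma += alpha * (s[:, None] * x[None, :] * p_post[:, None] - sigma)
    mu = sigma / rho[:, None]
    return mu, rho, sigma

def classify(x, mu, c_mu):
    # Simplified nearest-neighbor variant of rule (36): label of the closest
    # codevector (ignoring the prior weights rho_i).
    return c_mu[int(np.argmin(np.sum((x - mu) ** 2, axis=1)))]
```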
## V Hierarchical Learning in Multiple Resolutions
In this section, we extend the progressive partitioning algorithm (Alg. 1) of
Section II, by imposing a tree structure in the construction of the regions
$\left\\{S_{i}\right\\}$. The results of Sections III, IV, are extended
naturally through their immediate dependence on the partition
$\left\\{S_{i}\right\\}$. The key idea of the progressive construction of the
tree structure is as follows. Given a value for the temperature coefficient
$\lambda_{t}$, Algorithm 1, as presented in Section II, yields a sequence of
partitions $\left\\{S_{i}\right\\}_{i=1}^{K(\lambda_{t})}$. If at
$\lambda_{t}$ a user-defined splitting criterion is met, the partition
$\left\\{S_{i}\right\\}_{i=1}^{K(\lambda_{t})}$ is fixed, and Algorithm 1 is
applied independently to each region $S_{i}$ to create
$\left\\{\left\\{S_{ij}\right\\}_{j=1}^{K_{i}(\lambda_{t_{i}})}\right\\}_{i=1}^{K(\lambda_{t})}$,
such that $\left\\{S_{ij}\right\\}_{j=1}^{K_{i}(\lambda_{t_{i}})}$ form a
partition of $S_{i}$. This is depicted in Fig. 1(c). For each parent set
$S_{i}$, the number of children sub-sets $K_{i}(\lambda_{t_{i}})$ may be
different, depending on the properties of $S_{i}$. The same holds for the
stopping values $\lambda_{t_{i}}$. The splitting criterion can involve terms
such as reaching a minimum value $\lambda_{min}$, reaching a maximum number
$K_{max}$ of codevectors, or falling below a minimum percentage of improvement
in accuracy or distortion reduction per temperature step.
This structural constraint reduces the time complexity of the algorithm from
$O(K^{2})$ to $O(k^{2}+\log_{k}K)$, where $K$ here represents the total number
of sets $\left\\{S_{i}\right\\}_{i=1}^{K}$, and $k$ represents the number of
children sub-sets for each parent set (assumed equal for every parent set)
[24]. In addition, as we will show, this tree structure offers an inherent
regularization mechanism in classification applications (Section IV). Finally,
since the resulting structure is a non-binary tree-structure, we are able to
control the number of layers of the tree-structured partition of the data
space, without sacrificing the performance of the learning algorithm, i.e., a
finite tree depth is sufficient for convergence [25, 26]. Therefore, we can
match the number of tree layers to the number of resolutions in a multi-
resolution data representation. This will allow for training each layer of the
tree with progressively finer resolution of the data representation, which
defines a hierarchical and progressive learning approach that further reduces
the complexity of the algorithm, while inheriting potential benefits from the
feature extraction process of the multi-resolution analysis. As we will show,
in the case when a group-convolutional wavelet transform is used to create the
multi-resolution data representation, this architecture shares similar
properties to a deep neural network architecture [27].
### V-A Tree-Structured Progressive Partitioning
A tree-structured partition $\Sigma_{\Delta}:=\left\\{S_{\nu_{i}}\right\\}$ is
defined by a set of regions $S_{\nu_{i}}\subseteq S$, each represented by a tree
node $\nu_{i}$, arranged in a tree structure $\Delta$ with a single root node
$\nu_{0}$ such that $S_{\nu_{0}}=S$. The tree structure $\Delta$ is a special
case of a connected, acyclic directed graph, where each node has a single
parent node (except for the root node) and an arbitrary number of children
nodes, that is, $\Delta$ is not restricted to be a binary tree. The set
$C(\nu_{i})$ represents the nodes $\left\\{\nu_{j}\right\\}$ that are children
of $\nu_{i}$, while the set $P(\nu_{j})$ represents the node $\nu_{i}$ for
which $\nu_{j}\in C(\nu_{i})$. The level $l\geq 0$ of a node
$\nu_{h}\in\Delta$ is the length of the path
$\left\\{\nu_{0},\ldots,\nu_{i},\nu_{j},\ldots,\nu_{h}\right\\}$ leading from
the root node $\nu_{0}$ to $\nu_{h}$ such that $\nu_{j}\in C(\nu_{i})$. The
terminal nodes $\tilde{\nu}:=\left\\{\nu_{i}:C(\nu_{i})=\emptyset\right\\}$
are called leaves, and the collection of their associated sets will be denoted
$\tilde{S}:=\left\\{\tilde{S}_{j}\right\\}$, where $|\tilde{S}|=\tilde{K}$ is
the number of leaf sets that create a partition of $S$, and
$\tilde{l}:=\max\left\\{l:\nu_{i}^{(l)}\in\tilde{\nu}\right\\}<\infty$ will
denote the maximum depth of the tree.
$\Sigma_{\Delta}$ defines a hierarchical partitioning scheme for the domain
$S$, such that for every node $\nu_{i}\in\Delta$ associated with the region
$S_{\nu_{i}}$, its children nodes $\left\\{\nu_{j}\in C(\nu_{i})\right\\}$ are
associated with the regions $\left\\{S_{\nu_{j}}\right\\}$ that form a
partition of $S_{\nu_{i}}$. We will use the unique paths from the root node as
identification labels for each node, i.e., $\nu_{j}=0\ldots ij$ such that
$\nu_{0}=0$, $C(0)=\left\\{0i\right\\}$, $C(0i)=\left\\{0ij\right\\}$, and so
on. As such, Algorithm 1 can be used recursively to construct a tree-
structured partition $\Sigma_{\Delta}$ as follows: Start with node $\nu_{0}=0$
as the only leaf node. Using observations $\left\\{x_{n}\right\\}$
(realizations of $X\in S$), apply Algorithm 1 until a partition
$\left\\{S_{0j}\right\\}$ of $S_{0}=S$ is constructed. Then starting with
$w=0$ and for every observation $x_{n}$, iterate the process
$\text{repeat }w\leftarrow w^{\prime}\in C(w)\text{ such that }x_{n}\in
S_{w^{\prime}}\text{, until }C(w)=\emptyset,$ (37)
and apply one stochastic approximation update of Algorithm 1 in $S_{w}$. This
asynchronous process can continue until the convergence of all applications of
Alg. 1, when a finite-depth tree-structured partition $\Sigma_{\Delta}$ is
constructed such that for every node $w\in\Delta$ with children nodes
$\left\\{wj\right\\}\in C(w)$, the regions $\left\\{S_{wj}\right\\}$ form a
partition of $S_{w}$. This process is illustrated in Algorithm 5 and its
asymptotic behavior is given by the following theorem:
###### Theorem 8.
Let $\Sigma_{\Delta}$ be a finite-depth tree-structured partitioning scheme
created by Alg. 5 using realizations $\left\\{x_{n}\right\\}$ of a random
variable $X\in S$. If the leaf nodes are updated at the limit
$\lambda\rightarrow 0$, and $n\rightarrow\infty$, then $\Sigma_{\Delta}$
yields a consistent density estimator of $X$, with
$\hat{p}(x)=\frac{\sum_{n}\mathds{1}_{\left[x_{n}\in\tilde{S}_{i}\right]}}{n\,Vol(\tilde{S}_{i})}$
for $x\in\tilde{S}_{i}$, where $\tilde{S}_{i}$ is the leaf node given by the iterative process (37).
###### Proof.
It follows directly by the application of Theorem 4 to each region
$\tilde{S}_{j}$, where $\left\\{\tilde{S}_{j}\right\\}$ are the leaf nodes of
$\Sigma_{\Delta}$ that form a partition of $S$. ∎
###### Remark 3.
We have shown the tree-structured extension of Alg. 1, as well as its
asymptotic behavior for finite tree depth. It is straightforward to show that
similar results hold for the tree-structured extension of Algorithms 2, 3, and
4, regarding the regression and classification problems discussed in Sections
III, and IV.
Notice that a finite tree depth is sufficient for convergence to a consistent
density estimator. This follows from the fact that the children of each tree
node $\nu_{i}$, representing a region $S_{\nu_{i}}\subseteq S$, are the output of
the progressive construction of a partition of $S_{\nu_{i}}$, based on Alg. 1.
This result will be used in Section V-B to build a multi-resolution extension
of Algorithm 5.
The time complexity of the tree-structured algorithm is significantly reduced.
Let $K_{max}$ be the total number of codevectors allowed. Then Alg. 1 has a
worst-case complexity $O(N_{c}(2\bar{K})^{2}d)$, for training, where
$\bar{K}=\sum_{n=0}^{\log_{2}K_{max}}2^{n}$, while testing requires
$O(K_{max}d)$ (see Section II-F for details on the parameters). In a tree-
structured partition $\Sigma_{\Delta}$ of depth $\tilde{l}$, assuming that the
number of children $k=|C(\nu_{i})|$ is the same for every node $\nu_{i}$, and
that each region is represented by roughly the same number of observations
$N_{c}^{\tilde{l}}$, we get $k=(K_{max})^{\nicefrac{{1}}{{\tilde{l}}}}$ and
$N_{c}^{\tilde{l}}=\nicefrac{{N_{c}}}{{k}}$. Then training requires in the
worst case:
$\displaystyle
O\left(\frac{k^{\tilde{l}}-1}{k(k-1)}N_{c}(2\bar{k})^{2}d\right)$
where
$\bar{k}=\sum_{n=0}^{\log_{2}k}2^{n}=\sum_{n=0}^{\nicefrac{{1}}{{\tilde{l}}}\log_{2}K_{max}}2^{n}$.
Prediction requires a forward pass of the tree, i.e., it scales with
$O(k\log_{k}K_{max}d)$.
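As a concrete, illustrative comparison of the two worst-case expressions, the short computation below uses $K_{max}=64$ and $\tilde{l}=3$, which gives $k=4$ children per node and roughly a sixty-fold reduction in the worst-case training cost; the specific values are our own example.

```python
import math

def flat_cost(K_max, N_c, d):
    # Worst-case training cost of Alg. 1: O(N_c (2*K_bar)^2 d).
    K_bar = sum(2 ** n for n in range(int(math.log2(K_max)) + 1))
    return N_c * (2 * K_bar) ** 2 * d

def tree_cost(K_max, depth, N_c, d):
    # Worst-case training cost of the tree-structured variant (expression above).
    k = round(K_max ** (1 / depth))
    k_bar = sum(2 ** n for n in range(int(math.log2(k)) + 1))
    return (k ** depth - 1) / (k * (k - 1)) * N_c * (2 * k_bar) ** 2 * d

# Example: K_max = 64, depth 3, so k = 4 children per node.
# flat_cost(64, 1, 1) = 64516 and tree_cost(64, 3, 1, 1) = 1029,
# i.e., roughly a 60x reduction in the worst-case training cost.
```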
In addition, we note that Alg. 5 updates the partition
$\left\\{S_{wj}\right\\}$ of each node $w$ asynchronously. As a result,
depending on the underlying probability density of the random variable $X\in
S$, some nodes will be visited more often than others, which will result in
some branches of the tree growing faster than others, inducing a variable-rate
code that frequently outperforms fixed-rate, full-search techniques with the
same average number of bits per sample. Alternatively, when learning offline
using a dataset, all nodes can be trained using parallel processes, which can
be utilized by multi-core computational units.
In the classification problem, an additional regularization mechanism can be
added to the approach described in Section IV-B (Alg. 4). Specifically, when a
partition $\left\\{S_{wj}\right\\}_{j=1}^{K_{w}}$ of a node $w$ is fixed, the
node $w$ can check the condition $c_{\mu_{wi}}=c_{\mu_{wj}},\forall i,j$,
which means that the partition $\left\\{S_{wj}\right\\}_{j=1}^{K_{w}}$ is
using $K_{w}$ codevectors, all of which correspond to the same class. In this
case, node $w$ is assigned a single codevector, and is not further split by
the algorithm. This phenomenon is illustrated in Fig. 7 in Section VI.
Finally, we note that the termination criteria of the iterations of Alg. 1 in
Alg. 5 in each layer of the tree are important design parameters. These can
include a maximum number of codevectors for each partition, a minimum
temperature $\lambda_{min}^{l}$ in each tree layer $l$, a maximum number of
iterations, and so on. These termination criteria characterize the splitting
criteria as well, i.e., when to stop growing the set of effective codevectors
in a partition of layer $l$, and continue to split the partition in the next
layer $l+1$.
Algorithm 5 Tree-Structured Progressive Partitioning
Initialize root node $\nu_{0}$ s.t. $S_{\nu_{0}}=S$
repeat
Observe data point $x\in S$
Find leaf node to update:
Set $w=\nu_{0}$
while $C(w)\neq\emptyset$ do
$w\leftarrow v\in C(w)$ such that $x\in S_{v}$
end while
Update partition $\left\\{S_{wj}\right\\}$ of $S_{w}$ using $x$ and Alg. 1
if Alg. 1 in $S_{w}$ terminates then
Split node $w$: $C(w)\leftarrow\left\\{S_{wj}\right\\}$
end if
until Stopping criterion
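Structurally, Alg. 5 amounts to the following sketch, where `quantizer` stands for a local instance of Alg. 1 with a hypothetical interface (`nearest`, `update`, `terminated`, `spawn_child`, `num_codevectors`); these method names are ours and only indicate the required functionality.

```python
class Node:
    """A node of the tree-structured partition Sigma_Delta (sketch)."""
    def __init__(self, quantizer):
        self.quantizer = quantizer     # a local instance of Alg. 1 for this region
        self.children = []             # list of child Nodes; empty list => leaf

def route(root, x):
    # Iterative process (37): descend to the leaf whose region contains x.
    node = root
    while node.children:
        i = node.quantizer.nearest(x)  # arg min_j d_phi(x, mu_j) within this node
        node = node.children[i]
    return node

def tree_step(root, x, split_criterion):
    """One asynchronous update of Alg. 5 (sketch)."""
    leaf = route(root, x)
    leaf.quantizer.update(x)           # one stochastic-approximation step of Alg. 1
    if leaf.quantizer.terminated() and split_criterion(leaf):
        # Split: each region S_wj of the converged local partition becomes a child.
        leaf.children = [Node(leaf.quantizer.spawn_child(j))
                         for j in range(leaf.quantizer.num_codevectors())]
```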
### V-B Multi-Resolution Extension
So far we have modeled the observations as realizations of a random variable
$X\in S\subseteq\mathbb{R}^{d}$. In general, $X$ can itself be a measurable
signal $X(t):\mathbb{R}^{n}\rightarrow S$ with finite energy, i.e., $X(t)\in
S\subseteq L^{2}(\mathbb{R}^{d})$. We will denote the original space $S$ as
$S^{0}$. A multi-resolution representation of the signal $X(t)$ consists of a
sequence of projections of $X(t)$ on subspaces $\left\\{S^{j}\right\\}$ such
that $S^{j}\subset S^{j-1}$, $\forall j\in\mathbb{N}$, and
$\cup_{j=0}^{\infty}S^{j}$ is dense in $S^{0}$ with
$\cap_{j=0}^{\infty}S^{j}=\left\\{0\right\\}$. There are numerous methods to
construct subspaces $\left\\{S^{j}\right\\}$ with these properties, from the
classical wavelet transform [28] to different dictionary learning approaches
[29, 30]. An approach using group-convolutional wavelet decomposition will be
presented in Section V-C.
We denote by $X^{r}\in S^{r}$ the projection of $X=X^{0}$ to the subspace
$S^{r}$. Given a multi-resolution representation of $X$ with subspaces
$\left\\{S^{0},S^{1},S^{2},\ldots,S^{\tilde{l}}\right\\}$, we can extend Alg.
5 presented in Section V-A such that $X^{\tilde{l}-r}\in S^{\tilde{l}-r}$ is
used to train the nodes of the tree at level $r$. This idea matches the
intuition of using higher-resolution representation of $X$ for deeper layers
of the tree. It was first introduced in [31] and [32], and constitutes a
hierarchical multi-resolution learning scheme. The algorithmic implementation
is straightforward given Alg. 5, and is given in Alg. 6.
Regarding the asymptotic behavior of the multi-resolution extension, the
results of Theorem 8 hold. To see this, notice that since $S^{l}\subset
S^{0}$, for $l>0$, it follows that $X^{l}\in S^{0}$ as well, and, as a result,
Alg. 6 essentially creates a tree-structured partition $\Sigma_{\Delta}$ of
$S^{0}=S$, with the leaf nodes trained with $X^{0}\in S^{0}=S$, such that the
following holds.
###### Theorem 9.
Let $\Sigma_{\Delta}$ be a tree-structured partitioning scheme of depth
$\tilde{l}$ created by Alg. 6 using the multi-resolution representation
$\left(x_{n}^{0}=x_{n},x_{n}^{1},\ldots,x_{n}^{\tilde{l}}\right)\in\left(S^{0}=S,S^{1},\ldots,S^{\tilde{l}}\right)$
of realizations of a random variable $X\in S$. If the leaf nodes are updated
at the limit $\lambda\rightarrow 0$, and $n\rightarrow\infty$, then
$\Sigma_{\Delta}$ yields a consistent density estimator of $X\in S$, with
$\hat{p}(x)=\frac{\sum_{n}\mathds{1}_{\left[x_{n}\in\tilde{S}_{i}\right]}}{n\,Vol(\tilde{S}_{i})}$
for $x\in\tilde{S}_{i}$, where $\tilde{S}_{i}$ is the leaf node given by the iterative process (37).
###### Proof.
It follows directly by the application of Theorem 4 to each region
$\tilde{S}_{j}$, where $\left\\{\tilde{S}_{j}\right\\}$ are the leaf nodes of
$\Sigma_{\Delta}$ that form a partition of $S$. ∎
Algorithm 6 Multi-Resolution Progressive Partitioning
Initialize root node $\nu_{0}$ s.t. $S_{\nu_{0}}=S^{\tilde{l}}$
repeat
Observe data point $x^{\tilde{l}}\in S^{\tilde{l}}$
Find leaf node to update:
Set $w=\nu_{0}$
Set resolution $l=\tilde{l}$
while $C(w)\neq\emptyset$ do
$w\leftarrow v\in C(w)$ such that $x^{l}\in S_{v}^{l}$
$l\leftarrow l-1$
end while
Update partition $\left\\{S_{wj}^{l}\right\\}$ of $S_{w}^{l}$ using $x^{l}$ and
Alg. 1
if Alg. 1 in $S_{w}^{l}$ terminates and $l>0$ then
Split node $w$: $C(w)\leftarrow\left\\{S_{wj}^{l}\right\\}$
end if
until Stopping criterion
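Relative to the sketch of Alg. 5 in Section V-A, the only change is the routing step, which consumes progressively finer projections of the observation while descending the tree; the `Node`/`quantizer` interface below is the same hypothetical one used there.

```python
def route_multires(root, x_levels):
    """Multi-resolution routing of Alg. 6 (sketch).

    x_levels = [x^0, x^1, ..., x^l_tilde] is the multi-resolution representation
    of x; coarser projections are used near the root, finer ones near the leaves.
    """
    node, level = root, len(x_levels) - 1      # start at resolution l = l_tilde
    while node.children:
        i = node.quantizer.nearest(x_levels[level])
        node = node.children[i]
        level = max(level - 1, 0)              # refine the resolution while descending
    return node, level
```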
(a) DCNN.
(b) SCN.
(c) Proposed Architecture.
Figure 2: Block-diagram of the proposed hierarchical architecture using multi-
resolution features from the wavelet scattering transform compared to Deep
Convolutional Neural Networks (DCNN) and Scattering Convolutional Networks
(SCN). The feed-forward arrows represent a cascade of convolution, rectifying,
and downsampling operations.
### V-C Building Group-Invariant Multi-Resolution Representations
There are numerous methods to construct subspaces $\left\\{S^{j}\right\\}$
with the properties mentioned in Section V-A. In this section we briefly
mention a particular approach based on group-convolutional wavelet
decomposition that aligns with the principles of the scattering transform,
first introduced in [19]. This is an unsupervised method that constructs a
hierarchy of features based on group convolutions that preserve local
invariance with respect to a certain class of Lie groups, such as translation,
rotation, and deformation, an important property in many learning applications
[27].
We start with the standard wavelet transform $\left\\{W_{l}X(t)\right\\}_{l}$
of a signal $X(t)\in S\subseteq L^{2}(\mathbb{R}^{d})$ as a basis [28]. Here,
$\left\\{W_{l}X(t)\right\\}_{l}\in S^{l}$ represents the signal $X(t)$ at
resolution $2^{l}$. The computation of the multi-resolution wavelet
representation of a signal consists of successive operations of a linear
convolution operator, followed by a downsampling step [27]. As a result, the
wavelet transform is stable to small deformations [19]. In addition, the
wavelet transform is translation covariant (or equivariant), that is, it
commutes with the Lie group of operators
$\left\\{T_{c}\right\\}_{c\in\mathbb{R}}$ such that $T_{c}X(t)=X(t-c)$, i.e.,
$W_{j}(T_{c}X)=T_{c}W_{j}(X)$. We note that, in the control theory and signal
processing communities, convolutions are associated with systems described by
the term ’linear time-invariant’. To avoid confusion with the terminology
used here, these systems are considered linear covariant (or equivariant)
operators with respect to translation in time.
To induce local invariance (up to a scale $2^{J}$ for some $J>0$) with respect
to translation, it has been shown that it is sufficient to cascade the wavelet
transform with a non-linear operation $\rho W_{j}X=\|W_{j}X\|_{1}$, and a
locally averaging integral operation which can be modeled as a convolution
with a low-pass filter localized in a spatial window scaled at $2^{J}$ [19].
This is called a scattering transform and its implementation is based on a
complex-valued convolutional neural network whose filters are fixed wavelets
and $\rho$ is a complex modulus operator as described above [33]. This
structure is similar to deep convolutional neural networks [2], where
successive operations of a linear convolutional operator, a nonlinear mapping
(often a rectifying function, e.g., ReLu), and a down-sampling step (e.g.,
max-pooling), are used to produce the input for the next stage of the
architecture [19, 27, 34]. We illustrate this in Fig. 2, where Alg. 6 combined
with the hierarchical representation of a scattering transform is compared to
a Deep Convolutional Network [2] and a Scattering Convolutional Network [19].
As a final note, the translation invariance properties discussed above can be
generalized to the action of arbitrary compact Lie groups [35]. In particular,
let $G$ be a compact Lie group and $L^{2}(G)$ be the space of measurable
functions $f(r)$ such that $\|f\|^{2}=\int_{G}|f(r)|^{2}dr<\infty$, where $dr$
is the Haar measure of $G$. The left action of $g\in G$ on $f\in L^{2}(G)$ is
defined by $L_{g}f(r)=f(g^{-1}r)$. As a special case, the action of the
translation group $T_{c}f(t)=f(t-c)$ translates the function $f$ to the right
by $c$, with $g^{-1}=-c$ translating the argument of $f$ to the left by $c$.
Similar to the usual convolution $(f\ast
h)(x)=\int_{-\infty}^{\infty}f(u)h(x-u)du$ that defines a linear translation
covariant operator, convolutions on a group appear naturally as linear
operators covariant to the action of a group:
$(f\ast h)(x)=\int_{G}f(g)h(g^{-1}x)\,dg$ (38)
where $dg$ is the Haar measure of $G$. As a result, an invariant
representation relative to the action of a compact Lie group can be
computed by averaging over covariant representations created by group
convolution with appropriately defined wavelets, similar to the methodology
explained above.
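The construction above is based on group-convolutional wavelet (scattering) representations. As a simpler stand-in for experimentation, the sketch below produces nested approximations of a 1-D signal with a plain discrete wavelet transform using the PyWavelets package; this choice of library and construction is an assumption of ours, not the method described above.

```python
import numpy as np
import pywt

def multires_levels(x, wavelet="db2", depth=3):
    """Build coarse-to-fine representations x^l of a 1-D signal x (sketch).

    x^0 is the original signal; x^l is its reconstruction after discarding the
    l finest levels of detail coefficients, i.e., a projection onto S^l.
    """
    x = np.asarray(x, dtype=float)
    levels = [x]
    for l in range(1, depth + 1):
        coeffs = pywt.wavedec(x, wavelet, level=l)
        # Keep only the approximation coefficients; zero out all detail bands.
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        levels.append(pywt.waverec(coeffs, wavelet)[: len(x)])
    return levels   # [x^0, x^1, ..., x^depth], ready for use with Alg. 6
```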
## VI Experimental Evaluation and Discussion
We illustrate the properties and evaluate the performance of the proposed
learning algorithm in clustering, classification, and regression problems.
In Fig. 3, the evolution of the progressive partitioning algorithm (Alg. 1)
studied in Section II is depicted, in an unsupervised learning (clustering)
problem. To better illustrate the properties of the approach, the data samples
were sampled from a mixture of 2D Gaussian distributions. The temperature
level (we use $T$ instead of $\lambda$ to stress the connection to the
temperature level in annealing optimization), the average distortion of the
model, the number of codevectors (neurons) used, the number of observations
(data samples) used for convergence, as well as the overall time, are shown.
This process showcases the performance-complexity trade-off described in
Section II. In Fig. 4 and 5, the tree-structured progressive partitioning
algorithm of Section V-A is compared against Alg. 1 in the same problem as in
Fig. 3. Notice that the time complexity of the algorithm is drastically
reduced. Additional properties regarding the construction of tree-structured
partitions are discussed in Section VI-B.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 3: Performance curves and data space evolution of Algorithm 1 applied
to a clustering problem with underlying Gaussian distributions.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 4: Performance curves and data space evolution of the tree-structured
approach (two layers) applied to a clustering problem with underlying Gaussian
distributions.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 5: Performance curves and data space evolution of the tree-structured
approach (three layers) applied to a clustering problem with underlying
Gaussian distributions.
Similarly, Fig. 6 shows the evolution of the learning model for a 2D
classification problem with class-conditional distributions given by a mixture
of 2D Gaussians. These results correspond to the approach explained in Section
IV-B, using Alg. 4. In addition to the apparent accuracy-complexity trade-off,
we make use of this classification problem to showcase the difference of using
the tree-structured approach of Section V-A, in terms of computational
complexity. In Fig. 7, we illustrate the evolution of the tree-structured
approach (Alg. 5). There are two notable comments on the behavior of this
approach compared to the original. First, the time complexity is considerably
improved (see Section V), and this results in a drastic difference in the
running time of the learning algorithm. Secondly, the number of codevectors
used is drastically reduced, as well. This is due to the regularization
mechanism described in Section V-A: when a partition
$\left\\{S_{wj}\right\\}_{j=1}^{K_{w}}$ of a node $w$ is fixed, the node $w$
can check the condition $c_{\mu_{wi}}=c_{\mu_{wj}},\forall i,j$, which means
that the partition $\left\\{S_{wj}\right\\}_{j=1}^{K_{w}}$ is using $K_{w}$
codevectors, all of which correspond to the same class. In this case, node $w$
is assigned only a single codevector, and is not further split by the
algorithm. Notice, that, in this way, the codevectors created by the tree-
structured algorithm tend to exist in the boundaries of the Bayes decision
surface, instead of populating areas where the decision surface does not
fluctuate at all.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 6: Performance curves and data space evolution of the proposed
algorithm applied to a classification problem with underlying Gaussian
distributions.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 7: Performance curves and data space evolution of the proposed tree-
structured algorithm (three layers) applied to a classification problem with
underlying Gaussian distributions.
The effect of using multiple resolutions as described in Section V-B, is
depicted in Fig. 8 for the same problem as in Fig. 6 and 7. For better
visualization, we assume that the low-resolution features, with respect to
which the first layer of the tree is computed, are the projections of the two-
dimensional data onto a one-dimensional space (a line). The second layer of the
tree is trained using the high-resolution features, i.e., the full knowledge
of both coordinates of the data. Notice that, as expected from Theorem 9, this
process will converge to a consistent learning algorithm, as long as the
multi-resolution representation used complies with the properties mentioned in
Section V-B, and the last layer of the tree uses the full knowledge of the
input data.
(a) Convergence of first layer with low-resolution features.
(b) Convergence of second layer with high-resolution features.
(c) Performance curve.
Figure 8: Performance curves and data space evolution of the proposed multi-
resolution algorithm (two layers) applied to a classification problem with
underlying Gaussian distributions.
Finally, in Fig. 9 and 10, we test the proposed methodology in two regression
problems, where one- and two-dimensional functions are hierarchically
approximated using the piece-wise constant approximation algorithm of Section
III-B, and the tree-structured approach of Section V.
(a) Evolution of the algorithm in the data space.
(b) Performance curves.
Figure 9: Performance curves and data space evolution of the proposed tree-
structured algorithm (four layers) applied to a piece-wise constant function
approximation problem in 1D.
(a) Evolution of the algorithm in the data space (original function on the
right).
(b) Performance curves.
Figure 10: Performance curves and data space evolution of the proposed tree-
structured algorithm (three layers) applied to a piece-wise constant function
approximation problem in 2D.
### VI-A Source Code and Reproducibility
The open-source code is publicly available at
https://github.com/MavridisChristos/OnlineDeterministicAnnealing.
### VI-B Tree-Structured Partition, Localization, and Explainability in
Machine Learning
Another advantage of using a tree-structured learning module is the
localization properties which allow for an understanding of the input space,
in accordance with the principles of the recently introduced class of explainable
learning models [36]. The Voronoi regions shrink geometrically, and allow for
the use of local models, which is especially important in high-dimensional
spaces. Unlike most learning models, it is possible to locate the area of the
data space that presents the highest error rate and selectively split it by
using local ODA. This process can be iterated until the desired error rate (or
average distortion) is achieved. When using a training dataset for
classification, it is often possible to force accuracy of up to $100\%$ on the
training dataset. This is similar to an over-fitted classification and
regression tree (CART) [37]. However, over-fitting on the training dataset
often adversely affects the generalization properties of the model, the
performance on the testing dataset, and the robustness against adversarial
attacks. Therefore, the progressive process of ODA becomes important in
establishing a robust way to control the trade-off between performance and
complexity before that limit is reached. Finally, an important question in
tree-structured learning models is the question of which cell to split next.
An exhaustive search in the entire tree to find the node that presents the
largest error rate is possible but is often not desired due to the large
computational overhead. This is automatically answered by the multi-resolution
ODA algorithm (Alg. 6) as it asynchronously updates all cells depending on the
sequence of the online observations. As a result, the regions of the data
space that are more densely populated with data samples are trained first,
which results in a higher percentage of performance increase per cell split.
We stress that this property makes the proposed algorithm completely dataset-
agnostic, in the sense that it does not require the knowledge of a training
dataset a priori, but instead operates completely online, i.e., using one
observation at a time to update its knowledge base.
## VII Conclusion
We introduced a hierarchical learning algorithm to gradually approximate a
solution to a data-driven optimization problem in the context of autonomous
decision-making systems, especially under limitations on time and
computational resources. The learning architecture simulates an annealing
process and defines a heuristic method to progressively construct a tree-
structured partition of a possibly multi-resolution data space, which can be
used in conjunction with general learning algorithms to train local models.
The structured partitioning of the input space provides explainability, and
makes the learning architecture a suitable candidate for transfer learning
applications. Finally, the online gradient-free training rule based on
stochastic approximation can be viewed as a discrete-time dynamical learning
system and used for inference, control, and reinforcement learning
applications.
## References
* [1] K. P. Bennett and E. Parrado-Hernández, “The interplay of optimization and machine learning research,” _The Journal of Machine Learning Research_ , vol. 7, pp. 1265–1281, 2006.
* [2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015.
* [3] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” _Neural computation_ , vol. 18, no. 7, pp. 1527–1554, 2006.
* [4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in neural information processing systems_ , 2012, pp. 1097–1105.
* [5] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng, “Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations,” in _Proceedings of the 26th annual international conference on machine learning_ , 2009, pp. 609–616.
* [6] N. C. Thompson, K. Greenewald, K. Lee, and G. F. Manso, “The computational limits of deep learning,” _arXiv preprint arXiv:2007.05558_ , 2020.
* [7] E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in nlp,” _arXiv preprint arXiv:1906.02243_ , 2019.
* [8] M. Biehl, B. Hammer, and T. Villmann, “Prototype-based models in machine learning,” _Wiley Interdisciplinary Reviews: Cognitive Science_ , vol. 7, no. 2, pp. 92–111, 2016.
* [9] T. Kohonen, _Learning Vector Quantization_. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995, pp. 175–189.
* [10] S. Rüping, “Learning with local models,” in _Local Pattern Detection_ , K. Morik, J.-F. Boulicaut, and A. Siebes, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 153–170.
* [11] C. Mavridis and J. Baras, “Annealing optimization for progressive learning with stochastic approximation,” _arXiv preprint arXiv:2209.02826_ , 2022.
* [12] C. N. Mavridis, G. P. Kontoudis, and J. S. Baras, “Sparse gaussian process regression using progressively growing learning representations,” in _2022 61st IEEE Conference on Decision and Control (CDC). IEEE_ , 2022.
* [13] C. N. Mavridis and J. S. Baras, “Progressive graph partitioning based on information diffusion,” in _IEEE Conference on Decision and Control_ , 2021, pp. 37–42.
* [14] ——, “Online deterministic annealing for classification and clustering,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2022.
* [15] K. Rose, “Deterministic annealing for clustering, compression, classification, regression, and related optimization problems,” _Proceedings of the IEEE_ , vol. 86, no. 11, pp. 2210–2239, 1998.
* [16] V. S. Borkar, _Stochastic approximation: a dynamical systems viewpoint_. Springer, 2009, vol. 48.
* [17] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with bregman divergences,” _Journal of machine learning research_ , vol. 6, no. Oct, pp. 1705–1749, 2005.
* [18] T. Villmann, S. Haase, F.-M. Schleif, B. Hammer, and M. Biehl, “The mathematics of divergence based online learning in vector quantization,” in _IAPR Workshop on Artificial Neural Networks in Pattern Recognition_. Springer, 2010, pp. 108–119.
* [19] J. Bruna and S. Mallat, “Invariant scattering convolution networks,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 35, no. 8, pp. 1872–1886, 2013.
* [20] C. N. Mavridis and J. S. Baras, “Convergence of stochastic vector quantization and learning vector quantization with bregman divergences,” _IFAC-PapersOnLine_ , vol. 53, no. 2, 2020.
* [21] E. T. Jaynes, “Information theory and statistical mechanics,” _Physical review_ , vol. 106, no. 4, p. 620, 1957.
* [22] C. Mavridis, E. Noorani, and J. S. Baras, “Risk sensitivity and entropy regularization in prototype-based learning,” in _2022 30th Mediterranean Conference on Control and Automation (MED)_. IEEE, 2022, pp. 194–199.
* [23] L. Devroye, L. Györfi, and G. Lugosi, _A probabilistic theory of pattern recognition_. Springer Science & Business Media, 2013, vol. 31.
* [24] R. M. Gray, “Vector quantization,” _Readings in speech recognition_ , vol. 1, no. 2, pp. 75–100, 1990.
* [25] E. A. Riskin and R. M. Gray, “A greedy tree growing algorithm for the design of variable rate vector quantizers (image compression),” _IEEE Transactions on Signal Processing_ , vol. 39, no. 11, pp. 2500–2507, 1991.
* [26] A. B. Nobel and R. A. Olshen, “Termination and continuity of greedy growing for tree-structured vector quantizers,” _IEEE Transactions on Information Theory_ , vol. 42, no. 1, pp. 191–205, 1996.
* [27] S. Mallat, “Understanding deep convolutional networks,” _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_ , vol. 374, no. 2065, p. 20150203, 2016.
* [28] ——, _A wavelet tour of signal processing_. Elsevier, 1999.
* [29] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” _SIAM review_ , vol. 43, no. 1, pp. 129–159, 2001.
* [30] Y. LeCun, “The next frontier in ai: Unsupervised learning,” _https://www.youtube.com/watch?v=IbjF5VjniVE_ , 2016.
* [31] J. S. Baras and S. I. Wolk, “Efficient organization of large ship radar databases using wavelets and structured vector quantization,” in _Proceedings of 27th Asilomar Conference on Signals, Systems and Computers_. IEEE, 1993, pp. 491–498.
* [32] ——, “Wavelet-based progressive classification of high-range resolution radar returns,” in _Wavelet Applications_ , H. H. Szu, Ed., vol. 2242, International Society for Optics and Photonics. SPIE, 1994, pp. 967 – 977. [Online]. Available: https://doi.org/10.1117/12.170034
* [33] M. Andreux, T. Angles, G. Exarchakis, R. Leonarduzzi, G. Rochette, L. Thiry, J. Zarka, S. Mallat, J. Andén, E. Belilovsky, J. Bruna, V. Lostanlen, M. J. Hirn, E. Oyallon, S. Zhang, C. Cella, and M. Eickenberg, “Kymatio: Scattering transforms in python,” 2019.
* [34] F. Anselmi, L. Rosasco, C. Tan, and T. Poggio, “Deep convolutional networks are hierarchical kernel machines,” _arXiv preprint arXiv:1508.01084_ , 2015.
* [35] S. Mallat, “Group invariant scattering,” _Communications on Pure and Applied Mathematics_ , vol. 65, no. 10, pp. 1331–1398, 2012.
* [36] S. Milani, N. Topin, M. Veloso, and F. Fang, “A survey of explainable reinforcement learning,” _arXiv preprint arXiv:2202.08434_ , 2022.
* [37] L. Breiman, “Random forests,” _Machine learning_ , vol. 45, no. 1, pp. 5–32, 2001.
* [38] V. S. Borkar, “Stochastic approximation with two time scales,” _Systems & Control Letters_, vol. 29, no. 5, pp. 291–294, 1997.
## Appendix A Proof of Lemma 6 (Derivation of the Association Probabilities).
Recall that by definition (4), we get
$\displaystyle F(\mu)$ $\displaystyle:=(1-\lambda)\int
p(x)\sum_{i}p(\mu_{i}|x)d(x,\mu_{i})~{}dx$ $\displaystyle\quad+\lambda\int
p(x)\sum_{i}p(\mu_{i}|x)\log p(\mu_{i}|x)~{}dx-\lambda H(X)$
We form the Lagrangian:
$\displaystyle\mathcal{L}_{f}$
$\displaystyle(\left\\{p(\mu_{i}|x)\right\\},\nu):=$ (39)
$\displaystyle=(1-\lambda)D(\mu)-\lambda
H(\mu)+\nu\left(\sum_{i}p(\mu_{i}|x)-1\right)$ $\displaystyle=(1-\lambda)\int
p(x)\sum_{i}p(\mu_{i}|x)d(x,\mu_{i})~{}dx$ $\displaystyle\quad+\lambda\int
p(x)\sum_{i}p(\mu_{i}|x)\log p(\mu_{i}|x)~{}dx$
$\displaystyle\quad+\nu\left(\sum_{i}p(\mu_{i}|x)-1\right)-\lambda\mathbb{E}\left[-\log
p(X)\right]$
Taking $\frac{\partial\mathcal{L}_{f}}{\partial p(\mu_{i}|x)}=0$ yields:
$\displaystyle(1-\lambda)d(x,\mu_{i})+\lambda(1+\log p(\mu_{i}|x))+\nu=0$
$\displaystyle\implies\log
p(\mu_{i}|x)=-\frac{1-\lambda}{\lambda}d(x,\mu_{i})-\left(1+\frac{\nu}{\lambda}\right)$
$\displaystyle\implies
p(\mu_{i}|x)=\frac{e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{i})}}{e^{1+\frac{\nu}{\lambda}}}$
Finally, from the condition $\sum_{i}p(\mu_{i}|x)=1$, it follows that
$e^{1+\frac{\nu}{\lambda}}=\sum_{i}e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{i})}$
which completes the proof.
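The resulting Gibbs form is straightforward to evaluate numerically. The following is a minimal Python sketch, assuming a squared-Euclidean divergence $d(x,\mu_{i})=\|x-\mu_{i}\|^{2}$; the function name and the max-subtraction used for numerical stability are illustrative choices, not part of the derivation.

```python
import numpy as np

def association_probabilities(x, mu, lam):
    """Gibbs association probabilities p(mu_i | x) at temperature lam in (0, 1).

    x   : (d,) observation
    mu  : (k, d) array of codevectors mu_i
    """
    # Squared-Euclidean divergence d(x, mu_i); any Bregman divergence could be used here.
    d = np.sum((mu - x) ** 2, axis=1)
    logits = -((1.0 - lam) / lam) * d
    logits -= logits.max()            # shift for numerical stability; does not change the ratios
    p = np.exp(logits)
    return p / p.sum()                # normalization enforces sum_i p(mu_i | x) = 1
```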
## Appendix B Proof of Theorem 3 (Convergence of the Online Learning Rule).
We are going to use fundamental results from stochastic approximation theory.
For completeness, we present the key theorems in what follows.
###### Theorem 10 ([16], Ch.2).
Almost surely, the sequence $\left\\{x_{n}\right\\}\in
S\subseteq\mathbb{R}^{d}$ generated by the following stochastic approximation
scheme:
$\displaystyle x_{n+1}=x_{n}+\alpha(n)\left[h(x_{n})+M_{n+1}\right],\ n\geq 0$
(40)
with prescribed $x_{0}$, converges to a (possibly sample path dependent)
compact, connected, internally chain transitive, invariant set of the o.d.e:
$\displaystyle\dot{x}(t)=h\left(x(t)\right),~{}t\geq 0,$ (41)
where $x:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}$ and $x(0)=x_{0}$, provided
the following assumptions hold:
* (A1)
The map $h:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is Lipschitz in $S$, i.e.,
$\exists L$ with $0<L<\infty$ such that $\left\|h(x)-h(y)\right\|\leq
L\left\|x-y\right\|,~{}x,y\in S$,
* (A2)
The stepsizes $\left\\{\alpha(n)\in\mathbb{R}_{++},~{}n\geq 0\right\\}$
satisfy $\sum_{n}\alpha(n)=\infty$, and $\sum_{n}\alpha^{2}(n)<\infty$ ,
* (A3)
$\left\\{M_{n}\right\\}$ is a martingale difference sequence with respect to
the increasing family of $\sigma$-fields
$\mathcal{F}_{n}:=\sigma\left(x_{m},M_{m},~{}m\leq n\right)$, ${n\geq 0}$,
i.e., $\mathbb{E}\left[M_{n+1}|\mathcal{F}_{n}\right]=0~{}a.s.$, for all
$n\geq 0$, and $\left\\{M_{n}\right\\}$ are square-integrable with
$\mathbb{E}\left[\left\|M_{n+1}\right\|^{2}|\mathcal{F}_{n}\right]\leq
K\left(1+\left\|x_{n}\right\|^{2}\right),~{}a.s.$, where $n\geq 0$ for some
$K>0$,
* (A4)
The iterates $\left\\{x_{n}\right\\}$ remain bounded a.s., i.e.,
${\sup_{n}\left\|x_{n}\right\|<\infty}$ $a.s.$
As an immediate result, the following corollary also holds:
###### Corollary 10.1.
If the only internally chain transitive invariant sets for (41) are isolated
equilibrium points, then, almost surely, $\left\\{x_{n}\right\\}$ converges to
a, possibly sample dependent, equilibrium point of (41).
We are now in a position to prove the following theorem:
###### Theorem 11.
Let $S$ be a vector space, $\mu\in S$, and $X:\Omega\rightarrow S$ be a random
variable defined in a probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$. Let $\left\\{x_{n}\right\\}$ be
a sequence of independent realizations of $X$, and
$\left\\{\alpha(n)>0\right\\}$ a sequence of stepsizes such that
$\sum_{n}\alpha(n)=\infty$, and $\sum_{n}\alpha^{2}(n)<\infty$. Then the
random variable $m_{n}=\nicefrac{{\sigma_{n}}}{{\rho_{n}}}$, where
$(\rho_{n},\sigma_{n})$ are sequences defined by
$\displaystyle\rho_{n+1}$
$\displaystyle=\rho_{n}+\alpha(n)\left[p(\mu|x_{n})-\rho_{n}\right]$ (42)
$\displaystyle\sigma_{n+1}$
$\displaystyle=\sigma_{n}+\alpha(n)\left[x_{n}p(\mu|x_{n})-\sigma_{n}\right],$
converges to $\mathbb{E}\left[X|\mu\right]$ almost surely, i.e.
$m_{n}\xrightarrow{a.s.}\mathbb{E}\left[X|\mu\right]$.
###### Proof.
We will use the facts that $p(\mu)=\mathbb{E}\left[p(\mu|x)\right]$ and
$\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]=\mathbb{E}\left[xp(\mu|x)\right]$.
The recursive equations (42) are stochastic approximation algorithms of the
form:
$\displaystyle\rho_{n+1}$
$\displaystyle=\rho_{n}+\alpha(n)[(p(\mu)-\rho_{n})+(p(\mu|x_{n})-\mathbb{E}\left[p(\mu|X)\right])]$
$\displaystyle\sigma_{n+1}$
$\displaystyle=\sigma_{n}+\alpha(n)[(\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]-\sigma_{n})+$
$\displaystyle\quad\quad\quad\quad\quad\quad(x_{n}p(\mu|x_{n})-\mathbb{E}\left[x_{n}p(\mu|X)\right])]$
Both stochastic approximation algorithms satisfy the
conditions of Theorem 10 and Corollary 10.1. As a result, they converge to the
asymptotically stable equilibrium of the differential equations
$\displaystyle\dot{\rho}$ $\displaystyle=p(\mu)-\rho$
$\displaystyle\dot{\sigma}$
$\displaystyle=\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]-\sigma$
which can be trivially derived through standard ODE analysis to be
$\left(p(\mu),\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]\right)$.
In other words, we have shown that
$\left(\rho_{n},\sigma_{n}\right)\xrightarrow{a.s.}\left(p(\mu),\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]\right)$
The convergence of $m_{n}$ follows from the fact that
$\mathbb{E}\left[X|\mu\right]=\nicefrac{{\mathbb{E}\left[\mathds{1}_{\left[\mu\right]}X\right]}}{{p(\mu)}}$,
and standard results on the convergence of the product of two random
variables. ∎
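To illustrate Theorem 11, the following minimal Python sketch runs the coupled recursions (42) on a toy example; the stepsize schedule, the indicator-valued association rule, and the small positive initialization of $\rho$ are assumptions made only for the illustration.

```python
import numpy as np

def estimate_conditional_mean(samples, assoc, alpha=lambda n: 1.0 / (n + 1)):
    """Recursions (42): rho_n -> p(mu), sigma_n -> E[1_[mu] X], so that
    m_n = sigma_n / rho_n -> E[X | mu] almost surely.

    samples : iterable of observations x_n
    assoc   : callable x -> p(mu | x) for the fixed codevector mu
    alpha   : stepsize schedule satisfying assumption (A2)
    """
    rho, sigma = 1e-6, 0.0            # small positive rho avoids division by zero early on
    for n, x in enumerate(samples):
        p = assoc(x)
        rho += alpha(n) * (p - rho)
        sigma += alpha(n) * (x * p - sigma)
    return sigma / rho

# Toy check: X ~ N(0, 1) and p(mu | x) = 1[x > 0], so E[X | mu] = E[X | X > 0] ~ 0.80.
rng = np.random.default_rng(0)
m_hat = estimate_conditional_mean(rng.standard_normal(200_000), assoc=lambda x: float(x > 0))
```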
## Appendix C Proof of Theorem 4 (Consistency of ODA as a Density Estimator).
According to Theorem 3, as $n\rightarrow\infty$, the stochastic approximation
algorithm in (12), (13) minimizes the cost function $F^{*}$ in (9). Moreover,
it is easy to see that, in the limit $\lambda\rightarrow 0$, we get
$\lim_{\lambda\rightarrow 0}p^{*}(\mu_{i}|x)=\lim_{\lambda\rightarrow
0}\frac{e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{i})}}{\sum_{j}e^{-\frac{1-\lambda}{\lambda}d(x,\mu_{j})}}=\mathds{1}_{\left[x\in
S_{i}\right]}$
and
$\displaystyle\lim_{\lambda\rightarrow 0}F^{*}(\mu)=J(\mu)$
$\displaystyle=\mathbb{E}\left[\min_{i}d(X,\mu_{i})\right]$
$\displaystyle=\int p(x)\sum_{i}\mathds{1}_{\left[x\in
S_{i}\right]}d_{\phi}(x,\mu_{i})~{}dx$
$\displaystyle=\sum_{i}\int_{S_{i}}p(x)d_{\phi}(x,\mu_{i})~{}dx$
where $S_{i}=\left\\{x\in
S:i=\operatorname*{arg\,min}\limits_{j}~{}d(x,\mu_{j})\right\\}$. In addition,
due to the bifurcation phenomenon, the limit $\lambda\rightarrow 0$ induces
$k\rightarrow\infty$.
Next we show that as the number of prototypes goes to infinity, i.e., if
$k\rightarrow\infty$, we get $\min_{\mu}J(\mu)=0$. First, consider a sub-
optimal solution $w:=\left\\{w_{j}\right\\}_{j=1}^{k}$, with
$p(w_{i}|x)=\mathds{1}_{\left[x\in\Sigma_{i}\right]}$ with the property that
$Vol(\Sigma_{i})=\int_{\Sigma_{i}}dx=O(\frac{1}{k})$, i.e., the Voronoi cells
$\Sigma_{i}(k)$ form a roughly uniform partition. In that case
$\lim_{k\rightarrow\infty}J(w)=\lim_{k\rightarrow\infty}\sum_{i}\int_{\Sigma_{i}}p(x)d_{\phi}(x,w_{i})~{}dx=0$
where we have used the continuity of the density, the compactness of $S$, and
the fact that $\lim_{k\rightarrow\infty}Vol(\Sigma_{j})=0$ since
$Vol(\Sigma_{j})=O(\frac{1}{k})$. We note that these convergence results hold
as long as $\frac{k}{n}\rightarrow 0$, i.e., the rate of increase of $k$ is
lower than that of the number of observations $n$.
As a result, since $0\leq J(\mu)\leq J(w)\rightarrow 0$, due to the optimality
of $\mu$, it follows that $J(\mu)\rightarrow 0$ a.s., as well. This implies
that $Vol(S_{i})\rightarrow 0$ as $k\rightarrow\infty$. Now define the random
variable
$Y_{n}:=\frac{\mathds{1}_{\left[x_{n}\in S_{i}\right]}}{Vol(S_{i})}$
where
$\mathbb{E}\left[Y_{n}\right]:=\frac{\mathbb{E}\left[\mathds{1}_{\left[x_{n}\in
S_{i}\right]}\right]}{Vol(S_{i})}=\frac{\mathbb{P}\left[x_{n}\in
S_{i}\right]}{Vol(S_{i})}=\frac{\int_{x\in S_{i}}p(x)~{}dx}{\int_{x\in
S_{i}}~{}dx}=\bar{Y}$ (43)
From the Strong Law of Large Numbers (SLLN), we get that
$\frac{1}{n}\frac{\sum_{n}\mathds{1}_{\left[x_{n}\in
S_{i}\right]}}{Vol(S_{i})}\rightarrow\bar{Y},\ a.s.$
The claim that for $n\rightarrow\infty$, $k\rightarrow\infty$, and
$\frac{k}{n}\rightarrow 0$,
$\hat{p}(x):=\frac{1}{n}\frac{\sum_{n}\mathds{1}_{\left[x_{n}\in
S_{i}\right]}}{Vol(S_{i})}\rightarrow p(x),\ a.s.$
follows by observing that $\lim_{k\rightarrow\infty}\bar{Y}=p(x)$, which
is a consequence of (43) and the fact that
$Vol(S_{i})\rightarrow 0$ as $k\rightarrow\infty$, for all $i$.
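For intuition, the estimator $\hat{p}(x)$ above is a piecewise-constant, Voronoi-cell histogram. The sketch below makes this explicit in one dimension; the 1-D restriction and the function name are assumptions made for brevity.

```python
import numpy as np

def oda_density_estimate(x, samples, prototypes, cell_volumes):
    """p_hat(x) = (1/n) * #{x_n in S_i} / Vol(S_i), where S_i is the Voronoi cell
    of the prototype nearest to x (all arguments are 1-D numpy arrays)."""
    i = np.argmin(np.abs(prototypes - x))                              # cell S_i containing x
    nearest = np.argmin(np.abs(samples[:, None] - prototypes[None, :]), axis=1)
    return np.sum(nearest == i) / (len(samples) * cell_volumes[i])
```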
## Appendix D Proof of Theorem 5 (Proof of Bifurcation Phenomena).
To obtain the optimality condition (16) as a function of the temperature level
$\lambda$, we recall that
$\displaystyle F^{*}(y)=(1-\lambda)\int
p(x)\sum_{i}p(y_{i}|x)d_{\phi}(x,y_{i})~{}dx$ $\displaystyle\quad+\lambda\int
p(x)\sum_{i}p(y_{i}|x)\log p(y_{i}|x)~{}dx+\lambda\int p(x)\log p(x)~{}dx$
where $y=\mu+\epsilon\psi$, and $d_{\phi}$ is a Bregman divergence for an
appropriately defined strictly convex function $\phi$. By direct
differentiation, we can compute
$\frac{d}{d\epsilon}d_{\phi}(x,y_{i})=-\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi$ (44)
and
$\frac{d^{2}}{d\epsilon^{2}}d_{\phi}(x,y_{i})=\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}\psi^{\mathrm{T}}\psi$ (45)
and given (6), we get
$\displaystyle\frac{d}{d\epsilon}$ $\displaystyle
p(y_{i}|x)=\frac{1-\lambda}{\lambda}\sum_{j}p(y_{i}|x)p(y_{j}|x)\bigg{[}\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi_{i}-$ (46)
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\frac{\partial^{2}\phi(y_{j})}{\partial
y_{j}^{2}}(x-y_{j})^{\mathrm{T}}\psi_{j}\bigg{]}$
$\displaystyle=\frac{1-\lambda}{\lambda}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi_{i}$
$\displaystyle-\frac{1-\lambda}{\lambda}p(y_{i}|x)\sum_{j}p(y_{j}|x)\frac{\partial^{2}\phi(y_{j})}{\partial
y_{j}^{2}}(x-y_{j})^{\mathrm{T}}\psi_{j}$
$\displaystyle=-\frac{1-\lambda}{\lambda}p(y_{i}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{i})$
$\displaystyle+\frac{1-\lambda}{\lambda}p(y_{i}|x)\sum_{j}p(y_{j}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{j})$
Now the optimality condition takes the form:
$\displaystyle\frac{d^{2}}{d\epsilon^{2}}F^{*}(y)$
$\displaystyle=(1-\lambda)\int
p(x)\sum_{i}\frac{d^{2}}{d\epsilon^{2}}\left(p(y_{i}|x)d_{\phi}(x,y_{i})\right)~{}dx$
(47) $\displaystyle+\lambda\int
p(x)\sum_{i}\frac{d^{2}}{d\epsilon^{2}}\left(p(y_{i}|x)\log
p(y_{i}|x)\right)~{}dx$
Equation (47) uses the terms given in (48) and (49) below:
$\displaystyle\frac{d^{2}}{d\epsilon^{2}}$
$\displaystyle\left(p(y_{i}|x)d_{\phi}(x,y_{i})\right)=p(y_{i}|x)\frac{d^{2}}{d\epsilon^{2}}d_{\phi}(x,y_{i})$
(48)
$\displaystyle\quad+2\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\frac{d}{d\epsilon}p(y_{i}|x)+d_{\phi}(x,y_{i})\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle\frac{d^{2}}{d\epsilon^{2}}$ $\displaystyle\left(p(y_{i}|x)\log
p(y_{i}|x)\right)=\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$ (49)
$\displaystyle+\log
p(y_{i}|x)\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)+\frac{1}{p(y_{i}|x)}\left(\frac{d}{d\epsilon}p(y_{i}|x)\right)^{2}$
First, notice that
$\sum_{i}\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)=\frac{d^{2}}{d\epsilon^{2}}\sum_{i}p(y_{i}|x)=0$.
Using the expressions (44), (45), and (46) we get (50), (51), (52), (53) that
read as:
$\sum_{i}p(y_{i}|x)\frac{d^{2}}{d\epsilon^{2}}d_{\phi}(x,y_{i})=\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}\psi_{i}^{\mathrm{T}}\psi_{i}$ (50) $\displaystyle\sum_{i}2$
$\displaystyle\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\frac{d}{d\epsilon}p(y_{i}|x)=$
(51)
$\displaystyle=-2\frac{1-\lambda}{\lambda}\sum_{i}p(y_{i}|x)\left(\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi_{i}\right)^{2}$
$\displaystyle+2\frac{1-\lambda}{\lambda}\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi_{i}$
$\displaystyle\quad\quad\quad\quad\quad\sum_{j}p(y_{j}|x)\frac{\partial^{2}\phi(y_{j})}{\partial
y_{j}^{2}}(x-y_{j})^{\mathrm{T}}\psi_{j}$
$\displaystyle=\sum_{i}p(y_{i}|x)\left(\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}\right)^{2}$
$\displaystyle\quad\quad\psi_{i}^{\mathrm{T}}\left[-2\frac{1-\lambda}{\lambda}(x-y_{i})(x-y_{i})^{\mathrm{T}}\right]\psi_{i}$
$\displaystyle\quad+2\frac{1-\lambda}{\lambda}\left(\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi_{i}\right)^{2}$ $\displaystyle\sum_{i}$
$\displaystyle\log p(y_{i}|x)\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)=$ (52)
$\displaystyle=-\sum_{i}\frac{1-\lambda}{\lambda}d_{\phi}(x,y_{i})\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle-\sum_{i}\log\left(\sum_{j}e^{-\frac{1-\lambda}{\lambda}d(x,y_{j})}\right)\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle=-\sum_{i}\frac{1-\lambda}{\lambda}d_{\phi}(x,y_{i})\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle-\left(\log\sum_{j}e^{-\frac{1-\lambda}{\lambda}d(x,y_{j})}\right)\sum_{i}\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle=-\frac{1-\lambda}{\lambda}\sum_{i}d_{\phi}(x,y_{i})\frac{d^{2}}{d\epsilon^{2}}p(y_{i}|x)$
$\displaystyle\sum_{i}$
$\displaystyle\frac{1}{p(y_{i}|x)}\left(\frac{d}{d\epsilon}p(y_{i}|x)\right)^{2}=$
(53)
$\displaystyle=\frac{\left(1-\lambda\right)^{2}}{\lambda^{2}}\sum_{i}p(y_{i}|x)\left(\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}$
$\displaystyle+\frac{\left(1-\lambda\right)^{2}}{\lambda^{2}}\sum_{i}p(y_{i}|x)\left(\sum_{y_{j}}p(y_{j}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{j})\right)^{2}$
$\displaystyle-2\frac{\left(1-\lambda\right)^{2}}{\lambda^{2}}\left(\sum_{i}p(y_{i}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}$
$\displaystyle=\frac{\left(1-\lambda\right)^{2}}{\lambda^{2}}\sum_{i}p(y_{i}|x)\left(\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}$
$\displaystyle-\frac{\left(1-\lambda\right)^{2}}{\lambda^{2}}\left(\sum_{i}p(y_{i}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}$
Finally, plugging (50), (51), (52), (53) in (47), the optimality condition
(16) becomes
$\displaystyle 0$ $\displaystyle=(1-\lambda)\int
p(x)\sum_{i}p(y_{i}|x)\frac{d^{2}}{d\epsilon^{2}}d_{\phi}(x,y_{i})dx$
$\displaystyle-\frac{\left(1-\lambda\right)^{2}}{\lambda}\int
p(x)\sum_{i}p(y_{i}|x)\left(\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}dx$
$\displaystyle+\frac{\left(1-\lambda\right)^{2}}{\lambda}\int
p(x)\left(\sum_{i}p(y_{i}|x)\frac{d}{d\epsilon}d_{\phi}(x,y_{i})\right)^{2}dx$
which can be written more precisely as
$\displaystyle 0$ $\displaystyle=\int
p(x)\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}\psi^{\mathrm{T}}$
$\displaystyle\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})(x-y_{i})^{\mathrm{T}}\right]\psi dx$
$\displaystyle+\frac{1-\lambda}{\lambda}\int
p(x)\left(\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi\right)^{2}dx$
and finally as
$\displaystyle 0$
$\displaystyle=\sum_{i}p(y_{i})\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}\psi^{\mathrm{T}}\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}C_{x|y_{i}}\right]\psi$ (54)
$\displaystyle+\frac{1-\lambda}{\lambda}\int
p(x)\left(\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi\right)^{2}dx$
where
$\displaystyle C_{x|y_{i}}:$
$\displaystyle=\mathbb{E}\left[(x-y_{i})(x-y_{i})^{\mathrm{T}}|y_{i}\right]$
$\displaystyle=\int p(x|y_{i})(x-y_{i})(x-y_{i})^{\mathrm{T}}dx$
The second variation $\frac{d^{2}}{d\epsilon^{2}}F^{*}(y)$, given by the right-hand side of (54), is positive for all perturbations
$\left\\{\psi\right\\}$ if and only if the first term is positive. To see
this, notice that the second term of (54) is clearly non-negative. For the
second variation to be non-positive, the first term needs to be non-positive as
well, i.e., there should exist at least one codevector value, say $y_{n}$,
such that $p(y_{n})>0$ and
$\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{n})}{\partial
y_{n}^{2}}C_{x|y_{n}}\right]\preceq 0$. In this case, there always exists a
perturbation $\left\\{\psi_{i}\right\\}$ with $\psi_{i}=0$ for all $y_{i}\neq
y_{n}$ and $\sum_{i:y_{i}=y_{n}}\psi_{i}=0$, which makes the second term vanish, i.e.,
$\frac{1-\lambda}{\lambda}\int
p(x)\left(\sum_{i}p(y_{i}|x)\frac{\partial^{2}\phi(y_{i})}{\partial
y_{i}^{2}}(x-y_{i})^{\mathrm{T}}\psi\right)^{2}dx$ $=0$. In other words we
have shown that
$\displaystyle\frac{d^{2}}{d\epsilon^{2}}F^{*}(y)$
$\displaystyle>0\quad\Leftrightarrow\quad\exists y_{n}\text{ s.t. }p(y_{n})>0$
$\displaystyle\text{ and
}\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{n})}{\partial
y_{n}^{2}}C_{x|y_{n}}\right]\succ 0$
which means that bifurcation occurs under the following condition
$\exists y_{n}\text{ s.t. }p(y_{n})>0\text{ and
}\det\left[I-\frac{1-\lambda}{\lambda}\frac{\partial^{2}\phi(y_{n})}{\partial
y_{n}^{2}}C_{x|y_{n}}\right]=0$
which completes the proof.
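Numerically, the bifurcation condition amounts to an eigenvalue computation: the determinant vanishes exactly when the largest eigenvalue $e$ of $\frac{\partial^{2}\phi(y_{n})}{\partial y_{n}^{2}}C_{x|y_{n}}$ satisfies $\frac{1-\lambda}{\lambda}e=1$, i.e., at $\lambda=e/(1+e)$. The following is a minimal sketch, assuming the squared-Euclidean case $\phi(y)=\|y\|^{2}$ (so the Hessian is $2I$); the helper name is illustrative.

```python
import numpy as np

def critical_temperature(C, hessian_phi):
    """Largest lambda at which det[I - ((1 - lambda) / lambda) * H * C] = 0,
    i.e., the temperature below which the cell with conditional covariance C splits.

    C           : (d, d) conditional covariance C_{x|y_n}
    hessian_phi : (d, d) Hessian of phi at y_n (2 * I for the squared-Euclidean case)
    """
    e_max = np.max(np.real(np.linalg.eigvals(hessian_phi @ C)))
    return e_max / (1.0 + e_max)      # solves ((1 - lam) / lam) * e_max = 1

# Example: isotropic 2-D data with variance 0.5 under the squared-Euclidean divergence.
lam_c = critical_temperature(0.5 * np.eye(2), 2.0 * np.eye(2))   # = 0.5
```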
## Appendix E Proof of Theorem 6 (Convergence of the Two-Timescale Online
Learning Rule).
The claim follows directly from the following fundamental result from stochastic
approximation theory, which is included for completeness:
###### Theorem 12 (Ch. 6 of [38]).
Consider the sequences $\left\\{x_{n}\right\\}\in S\subseteq\mathbb{R}^{d}$
and $\left\\{y_{n}\right\\}\in\Sigma\subseteq\mathbb{R}^{k}$, generated by the
iterative stochastic approximation schemes:
$\displaystyle
x_{n+1}=x_{n}+\beta(n)\left[f(x_{n},y_{n})+M_{n+1}^{(x)}\right]$ (55)
$\displaystyle
y_{n+1}=y_{n}+\alpha(n)\left[g(x_{n},y_{n})+M_{n+1}^{(y)}\right]$ (56)
for $n\geq 0$ and $M_{n}^{(x)}$, $M_{n}^{(y)}$ martingale difference
sequences, and assume that $\sum_{n}\alpha(n)=\sum_{n}\beta(n)=\infty$,
$\sum_{n}(\alpha^{2}(n)+\beta^{2}(n))<\infty$, and
$\nicefrac{{\alpha(n)}}{{\beta(n)}}\rightarrow 0$, with the last condition
implying that the iterations for $\left\\{y_{n}\right\\}$ run on a slower
timescale than those for $\left\\{x_{n}\right\\}$. If the equation
$\dot{x}(t)=f(x(t),y),\ x(0)=x_{0}$
has an asymptotically stable equilibrium $\lambda(y)$ for fixed $y$ and some
Lipschitz mapping $\lambda$, and the equation
$\dot{y}(t)=g(\lambda(y(t)),y(t)),\ y(0)=y_{0}$
has an asymptotically stable equilibrium $y^{*}$, then, almost surely,
$(x_{n},y_{n})$ converges to $(\lambda(y^{*}),y^{*})$.
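A minimal simulation of the two-timescale scheme (55)–(56) is sketched below for a toy linear system; the drift functions, noise level, and stepsize exponents are assumptions chosen only so that the conditions of Theorem 12 hold.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, y):                      # fast dynamics: equilibrium lambda(y) = y / 2
    return -2.0 * x + y

def g(x, y):                      # slow dynamics: equilibrium y* = 1 (independent of x here)
    return -(y - 1.0)

x, y = 0.0, 5.0
for n in range(200_000):
    beta = 1.0 / (n + 1) ** 0.6   # fast stepsize
    alpha = 1.0 / (n + 1)         # slow stepsize; alpha / beta -> 0 as required
    mx, my = 0.1 * rng.standard_normal(2)   # martingale difference (noise) terms
    x += beta * (f(x, y) + mx)
    y += alpha * (g(x, y) + my)
# (x, y) approaches (lambda(y*), y*) = (0.5, 1.0)
```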
## Appendix F Proof of Theorem 7 (Convergence to Bayes-Optimal Classifier).
The Bayes risk for minimum probability of error is given by
$\displaystyle J_{B}(\mu,c_{\mu}):$
$\displaystyle=\pi_{1}\sum_{i:c_{\mu_{i}}=0}\mathbb{P}\left\\{X\in
S_{i}|c=1\right\\}$ (57)
$\displaystyle+\pi_{0}\sum_{i:c_{\mu_{i}}=1}\mathbb{P}\left\\{X\in
S_{i}|c=0\right\\}$ (58)
where $\mathbb{P}\left\\{X\in S_{i}|c=j\right\\}=\int_{S_{i}}p(x|c=j)~{}dx$,
and $p(x|c=j)$ represents the conditional probability density function.
Consider the observations $\left\\{(x_{n},c_{n})\right\\}$, and let
$\hat{p}(x|c=j)$, $j\in\\{0,1\\}$, be strongly consistent density estimators. Then the
estimated risk
$\displaystyle\hat{J}_{B}(\mu,c_{\mu}):$
$\displaystyle=\hat{\pi}_{1}\sum_{i:c_{\mu_{i}}=0}\mathbb{\hat{P}}\left\\{X\in
S_{i}|c=1\right\\}$ (59)
$\displaystyle+\hat{\pi}_{0}\sum_{i:c_{\mu_{i}}=1}\mathbb{\hat{P}}\left\\{X\in
S_{i}|c=0\right\\}$ (60)
where $\mathbb{\hat{P}}\left\\{X\in
S_{i}|c=j\right\\}=\int_{S_{i}}\hat{p}(x|c=j)~{}dx$ and
$\hat{\pi}_{j}=\frac{\sum_{n}\mathds{1}_{\left[c_{n}=j\right]}}{n}$, converges
almost surely to $J_{B}$, i.e.,
$\hat{J}_{B}(\mu,c_{\mu})\rightarrow J_{B}(\mu,c_{\mu}),\ a.s.$ (61)
as $n\rightarrow\infty$. This follows from the fact that
$\hat{\pi}_{j}\hat{p}(x|c=j)\rightarrow\pi_{j}p(x|c=j),\ a.s.$ (62)
and the Lebesgue dominated convergence theorem. Therefore, the classification
rule $\hat{c}=\operatorname*{arg\,max}_{j}\hat{\pi}_{j}\hat{p}(x|c=j)$
converges to a Bayes-optimal classification rule.
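The resulting plug-in rule $\hat{c}=\operatorname*{arg\,max}_{j}\hat{\pi}_{j}\hat{p}(x|c=j)$ can be sketched as follows; the argument names are illustrative and the density estimators are assumed to be given as callables.

```python
import numpy as np

def plug_in_classifier(x, priors_hat, density_estimates):
    """Plug-in classification rule c_hat = argmax_j pi_hat_j * p_hat(x | c = j).

    priors_hat        : list of estimated class priors pi_hat_j
    density_estimates : list of callables x -> p_hat(x | c = j)
    As the priors and densities converge a.s., the rule converges to the Bayes-optimal rule.
    """
    return int(np.argmax([pi * p(x) for pi, p in zip(priors_hat, density_estimates)]))
```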
Christos N. Mavridis (M’20) received the Diploma degree in electrical and
computer engineering from the National Technical University of Athens, Greece,
in 2017, and the M.S. and Ph.D. degrees in electrical and computer engineering
at the University of Maryland, College Park, MD, USA, in 2021. His research
interests include learning theory, stochastic optimization, systems and
control theory, multi-agent systems, and robotics. He is currently a
postdoctoral associate at the University of Maryland, and a visiting
postdoctoral fellow at KTH Royal Institute of Technology, Stockholm. He has
worked as a research intern for the Math and Algorithms Research Group at
Nokia Bell Labs, NJ, USA, and the System Sciences Lab at Xerox Palo Alto
Research Center (PARC), CA, USA. Dr. Mavridis is an IEEE member, and a member
of the Institute for Systems Research (ISR) and the Autonomy, Robotics and
Cognition (ARC) Lab. He received the Ann G. Wylie Dissertation Fellowship in
2021, and the A. James Clark School of Engineering Distinguished Graduate
Fellowship, Outstanding Graduate Research Assistant Award, and Future Faculty
Fellowship, in 2017, 2020, and 2021, respectively. He has been a finalist in
the Qualcomm Innovation Fellowship US, San Diego, CA, 2018, and he has
received the Best Student Paper Award (1st place) in the IEEE International
Conference on Intelligent Transportation Systems (ITSC), 2021.
John S. Baras (LF’13) received the Diploma degree in electrical and
mechanical engineering from the National Technical University of Athens,
Athens, Greece, in 1970, and the M.S. and Ph.D. degrees in applied mathematics
from Harvard University, Cambridge, MA, USA, in 1971 and 1973, respectively.
He is a Distinguished University Professor and holds the Lockheed Martin Chair
in Systems Engineering, with the Department of Electrical and Computer
Engineering and the Institute for Systems Research (ISR), at the University of
Maryland College Park. From 1985 to 1991, he was the Founding Director of the
ISR. Since 1992, he has been the Director of the Maryland Center for Hybrid
Networks (HYNET), which he co-founded. His research interests include systems
and control, optimization, communication networks, applied mathematics,
machine learning, artificial intelligence, signal processing, robotics,
computing systems, security, trust, systems biology, healthcare systems,
model-based systems engineering. Dr. Baras is a Fellow of IEEE (Life), SIAM,
AAAS, NAI, IFAC, AMS, AIAA, Member of the National Academy of Inventors and a
Foreign Member of the Royal Swedish Academy of Engineering Sciences. Major
honors include the 1980 George Axelby Award from the IEEE Control Systems
Society, the 2006 Leonard Abraham Prize from the IEEE Communications Society,
the 2017 IEEE Simon Ramo Medal, the 2017 AACC Richard E. Bellman Control
Heritage Award, the 2018 AIAA Aerospace Communications Award. In 2016 he was
inducted in the A. J. Clark School of Engineering Innovation Hall of Fame. In
2018 he was awarded a Doctorate Honoris Causa by his alma mater the National
Technical University of Athens, Greece.
# CitySpec with Shield: A Secure Intelligent Assistant for Requirement
Formalization
Zirong Chen, Isaac Li, Haoxiang Zhang, Sarah Preum, John A. Stankovic, Meiyi Ma
###### Abstract
An increasing number of monitoring systems have been developed in smart cities
to ensure that a city’s real-time operations satisfy safety and performance
requirements. However, many existing city requirements are written in English
with missing, inaccurate, or ambiguous information. There is a high demand for
assisting city policymakers in converting human-specified requirements to
machine-understandable formal specifications for monitoring systems. To tackle
this limitation, we build CitySpec [1], the first intelligent assistant system
for requirement specification in smart cities. To create CitySpec, we first
collect over 1,500 real-world city requirements across different domains
(e.g., transportation and energy) from over 100 cities and extract city-
specific knowledge to generate a dataset of city vocabulary with 3,061 words.
We also build a translation model and enhance it through requirement synthesis
and develop a novel online learning framework with shielded validation. The
evaluation results on real-world city requirements show that CitySpec
increases the sentence-level accuracy of requirement specification from 59.02%
to 86.64%, and has strong adaptability to a new city and a new domain (e.g.,
the F1 score for requirements in Seattle increases from 77.6% to 93.75% with
online learning). After the enhancement from the shield function, CitySpec is
now immune to most known textual adversarial inputs (e.g., the attack success
rate of DeepWordBug [2] after the shield function is reduced to 0% from
82.73%). We test CitySpec with 18 participants from different domains.
CitySpec shows its strong usability and adaptability to different domains, and
also its robustness to malicious inputs.
###### keywords:
Requirement Specification , Intelligent Assistant , Monitoring , Safety Shield
, Smart City
Journal: Pervasive and Mobile Computing
[inst1] Vanderbilt University, Nashville, Tennessee, USA
[inst2] University of Virginia, Charlottesville, Virginia, USA
[inst3] Columbia University, New York, New York, USA
[inst4] Dartmouth College, Hanover, New Hampshire, USA
## 1 Introduction
With the increasing demand for safety guarantees in smart cities, significant
research efforts have been devoted to ensuring that a city’s real-time
operations satisfy safety and performance requirements [3]. Monitoring
systems, such as SaSTL runtime monitoring [4], CityResolver [5], and STL-U
predictive monitoring [6], have been developed in smart cities. Figure 1 shows
a general framework of monitoring systems in smart cities. These systems are
designed to execute in city centers and to support decision-making based on
the verification results of real-time sensing data about city-states (such as
traffic and air pollution). If the monitor detects a requirement violation,
the city operators can take actions to change the states, such as improving
air quality, sending alarms to police, calling an ambulance, etc.
The monitoring systems have two important inputs, i.e., the real-time data
streams and formally specified requirements. Although extensive research
efforts have been devoted to improving the expressiveness of specification
languages and the efficiency of the monitoring algorithms, the research challenge
of how to convert human-specified requirements to machine-understandable
formal specifications has received only scant attention. Moreover, our study
(see Section 2) on over 1,500 real-world city requirements across different
domains (in this paper, we define a domain as an application area in smart
cities, such as transportation, energy, and public safety) shows that, first,
existing city requirements are often defined with missing information or
ambiguous descriptions, e.g., no location information or the use of words like “nearby”
or “close to”. They are not precise enough to be converted to a formal
specification or monitored in a city directly without clarifications by policy
makers. Secondly, the language difference between English specified
requirements and formalized specifications is significant. Without expertise
in formal languages, it is extremely difficult or impossible for policy makers
to write or convert their requirements to formal specifications. Therefore,
there is an urgent demand for an intelligent system to support policy makers
for requirement specifications in smart cities.
Despite the prevalence of models that translate natural language
to machine languages in various applications, such as Bash commands [7],
Seq2SQL [8], and Python [9], it is very challenging to develop such an
intelligent system for requirement specification in smart cities for the
following reasons. First, unlike the above translation tasks with thousands or
even millions of samples in a dataset, there is barely any requirement
specification data. As a result, traditional language models are not
sufficient to be applied directly. Moreover, the requirements usually contain
city domain-specific descriptions and patterns that existing pre-trained
embeddings like BERT or GloVe cannot handle effectively. Furthermore,
requirements from different domains and cities vary significantly and evolve
over time; thus, building a system that can adapt to new domains at runtime is
an open research question. Good adaptability can increase user experience
(e.g., policy makers do not have to clarify new terms repeatedly), while one
of the major challenges is validating and filtering the new knowledge and
avoiding adversarial examples online.
Adversarial samples can be introduced into this new knowledge if any
malicious behavior is involved in unguarded online learning. Those adversarial
samples will poison the dataset used for continuous learning and system
adaptation if there is no security enhancement. Here we emphasize the
importance of an effective and safe validating function against malicious
inputs. After reviewing the most recent literature, we find adversarial
examples perturb textual inputs not only at the character level but also at a
word or even sequence level. By only looking at textual information, the
validation mechanism alone is not enough to protect the dataset from being
poisoned or even vulnerable to malicious attacks. Because in this continuous
learning scenario, the attacker could keep attacking until the model
prediction changes. Thus, an effective and comprehensive approach must first
detect malicious behaviors. By doing so, the validation model is kept safe and
can further help protect and enrich the dataset.
Figure 1: CitySpec in Smart Cities
In this paper, we target the above technical challenges and develop CitySpec,
an intelligent assistant system for requirement specification in smart cities.
To the best of our knowledge, it is the first specification system helping
city policy makers specify and translate their requirements into formal
specifications automatically. As shown in Figure 1, CitySpec is designed to
bridge the gap between city policy makers and monitoring systems. It enables
policy makers to define their requirements by detecting missing, inaccurate,
or ambiguous information through an intelligent assistant interface. To
effectively train the translation model using a small amount of city
requirement data, CitySpec extracts city knowledge and enhances the learning
process through requirement synthesis. CitySpec can easily adapt to a new city
or application domain through online learning and validation under the
protection of a shield model.
Contributions. We summarize the major contributions of this paper as follows:
* 1.
We collect and annotate over 1,500 real-world city requirements from over 100
cities across different domains. We extract city-specific knowledge and build
a dataset of city vocabulary with 3,061 words in 5 categories.
* 2.
We create an intelligent assistant system for requirement specification in
smart cities. In the system, we build a translation model and enhance it
through requirement synthesis, and develop a novel online learning framework
with validation under uncertainty.
* 3.
We evaluate CitySpec extensively using real-world city requirements. The
evaluation results show that CitySpec is effective on supporting policy makers
accurately writing and refining their requirements. It increases the sentence
level accuracy of requirement specification from 59.02% to 86.64% through city
knowledge injection. It shows strong adaptability (user experience) to a new
city (e.g., F1 score in Seattle from 77.6% to 93.75%) and a new domain (e.g.,
F1 score in security domain from 62.93% to 93.95%).
* 4.
We overview 12 different adversarial text attacks from the most recent
literature and launch them to the validation function. Based on generated
adversarial samples, we develop a shield model that is hard to attack and
immune to most adversarial samples. The evaluation results show its ability to
significantly reduce attack success rate (e.g., the attack success rate of
BertAttack is reduced from 94.00% to 4.31%).
* 5.
We conducted a real user case study with 18 participants with different
backgrounds. The study shows the high usability and adaptability of CitySpec
not only in smart city scenarios but also in unseen domains (e.g., high user
experience scores and low numbers of interactions are reported in domains like
Medical and Environmental Engineering). Furthermore, the effectiveness of the
shield function is shown during the survey (e.g., 87.23% defense success rate
against commonly seen adversarial attacks).
This paper is an extension of [1]. We extend with the following new
contributions. First, we carefully study potential adversarial text
generation. We summarize the characteristics of each attack and its effects on
our system. Second, we also implement and further experiment with those 12
attacks. We find potential vulnerabilities in the initial validation model
under those attacks when left unguarded. Third, by targeting adversarial attacks, we
enhance our system security by developing an effective and secure shield model
to protect the validation model from malicious attacks. Fourth, we further
conduct a comprehensive evaluation of the newly developed shield layers of
CitySpec and show that CitySpec can significantly reduce the success rate of
12 types of attacks. Last, we conduct a user case study with 18 participants
with different backgrounds to test the usability and adaptability of CitySpec
in the smart city. Leveraging the different backgrounds of participants, we
also prove the capability of CitySpec to learn continuously and adapt
efficiently in unseen domains.
Paper organization: In the rest of the paper, we describe the motivating study
of city requirement specification in Section 2, provide an overview of
CitySpec in Section 3, and present the technical details in Section 4. We then
present the evaluation results in Section 5, discuss the related work in
Section 7 and draw conclusions in Section 8.
## 2 Motivating Study
Table 1: Comparison between English requirements and formal specifications (DLD: Damerau–Levenshtein Distance)
ID | Requirement vs Specification | DLD
---|---|---
1 | Req: Sliding glass doors shall have an air infiltration rate of no more than 0.3 cfm per square foot. | 59
Spec:
$\mathsf{Always}_{[0,+\infty)}~{}(\mathsf{{Sliding~{}glass~{}doors}}~{}\mathsf{air~{}infiltration~{}rate}~{}\leq
0.3~{}\mathsf{cfm/foot}^{2})$
2 | Req: The operation of a Golf Cart upon a Golf Cart Path shall be restricted to a maximum speed of 15 miles per hour. | 67
Spec:
$\mathsf{Everywhere}(\mathsf{Golf~{}Cart~{}Path})(\mathsf{Always}_{[0,+\infty)}(\mathsf{Golf~{}Cart~{}speed}<15~{}\mathsf{miles/hour}))$
3 | Req: Up to four vending vehicles may dispense merchandise in any given city block at any time. | 75
Spec:
$\mathsf{Everywhere}(\mathsf{city~{}block})(\mathsf{Always}_{[0,+\infty)}(\mathsf{vending~{}vehicles}\leq
4))$
In this section, we study real-world city requirements and their formal
specification as motivating examples to discuss the demand and challenges of
developing an intelligent assistant system for requirement specification in
smart cities. We collect and annotate over 1,500 real-world city requirements
(e.g., standards, codes of ordinances, laws, regulations, etc. [10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]) from over 100 cities and
regions (e.g. New York City, San Francisco, Chicago, Washington D.C., Beijing,
etc.) around the world. Those requirements also cover different application
domains like transportation, environment, security, public safety, indoor
environments, etc. We make the following observations from the analysis of the
requirement dataset.
Existing city requirements are often defined with missing information or
ambiguous description. In [4], the authors define essential elements for
monitoring a city requirement. Within the 1,500 requirements, many requirements
have one or more missing elements. For example, 27.6% of the requirements do
not have location information, 29.1% of the requirements do not have a proper
quantifier, and 90% of the requirements do not have or only have a default
time (e.g., always) defined. Additionally, requirements often have ambiguous
descriptions that are difficult to be noticed by policy makers. For example, a
location is specified as “nearby” or “close to”. As a result, it is very
difficult or impossible for the monitoring system to monitor these
requirements properly. It indicates a high demand for an intelligent assistant
system to support the policy makers to refine the requirements.
The language difference between English specified requirements and their
formal specifications is significant. In Table 1, we give three examples of
city requirements in English, their formal specification in SaSTL, and the
Damerau–Levenshtein Distance (DLD) [25] between each pair of requirements. DLD
measures the edit distance between two sequences. It shows that natural
languages are different from machine-compatible input languages. Formal
specifications usually consist of mathematical symbols, which makes the
conversion even more difficult. As shown in Table 1, the average DLD from
English requirements to formal specifications is 67, which means that it
requires an average of 67 edits. As a reference, the average DLD brought by
translating these three English requirements to Latin is 64.67. It indicates
that the conversion from English requirements to formal specifications even
requires more edits than the translation of these requirements from English to
Latin. In general, building a translator from English to Latin would require
millions of samples. However, as an under-exploited area, there is a very
limited number of well-defined requirements. Moreover, annotation of formal
specifications requires specialties in formal methods and is extremely time-
consuming. It presents major challenges for building such a translation model.
## 3 System Overview
Figure 2: System Overview
CitySpec is designed to bridge the gap between city policy makers and
monitoring systems. It supports policy makers to precisely write city
requirements in English through an intelligent interface, and then converts
them to formal specifications automatically. An overview of CitySpec is shown
in Figure 2. There are four major components in CitySpec, including an
intelligent assistant Interface to communicate with policy makers (see Section
4.2), a Requirement Synthesis component to extract city knowledge and
synthesize new requirements to build the translation model (see Section 4.3),
a Translation Model to convert city requirements to formal specifications (see
Section 4.4), and an Online Learning component to adapt the system to new
knowledge (see Section 4.5).
At runtime (as indicated by the orange arrows in Figure 2), a city policy
maker inputs a requirement in English through an intelligent assistant
interface, which sends the requirements to the translation model. The
translation model converts the requirements to a formal specification and
checks if there is any missing information or ambiguous description. The
translation model is built with injected city knowledge through requirement
synthesis at the training time and enhanced through online learning at
runtime. Next, based on the returned results from the translation model, the
intelligent interface communicates with the policy maker to acquire or clarify
the essential information. In this process, the assistant supports the policy
maker to refine the requirement until it is precisely defined and accepted by
the monitor. We present the technical details in Section 4, and develop a
prototype tool of the CitySpec system and deploy it online.
## 4 Methodology
In this section, we present the major components in CitySpec (as shown in
Figure 2). We first introduce requirement specification using Spatial-
aggregation Signal Temporal Logic (SaSTL) [4]. Then we show the design and
technical details of the intelligent assistant interface, requirement
synthesis, translation model, and online learning, respectively.
### 4.1 Requirement Specification using SaSTL
SaSTL is a powerful formal specification language for Cyber-Physical Systems.
We select it as our specification language because of its advantages of
expressiveness and monitoring for smart cities. However, CitySpec is general
and can work with other specification languages. SaSTL is defined on a multi-
dimensional _spatial-temporal signal_ as $\omega:\mathbb{T}\times
L\to\\{{\mathbb{R}\cup\\{\bot\\}\\}}^{n}$, where $\mathbb{T}=\mathbb{R}_{\geq
0}$, represents the continuous time and $L$ is the set of locations.
$X=\\{x_{1},\cdots,x_{n}\\}$ is denoted by the set of variables for each
location. The spatial domain $\mathcal{D}$ is defined as,
$\mathcal{D}:=([d_{1},d_{2}],\psi)$,
$\psi:=\top\;|\;p\;|\;\neg\;\psi\;|\;\psi\;\vee\;\psi$, where $[d_{1},d_{2}]$
defines a spatial interval with $d_{1}<d_{2}$ and $d_{1},d_{2}\in\mathbb{R}$,
and $\psi$ specifies the property over the set of propositions that must hold
in each location.
The syntax of SaSTL is given by
$\begin{array}[]{cl}\varphi:=&x\sim c\;|\ \neg\varphi\;|\
\varphi_{1}\land\varphi_{2}\;|\ \varphi_{1}\mathcal{U}_{I}\varphi_{2}\;|\
\mathcal{A}_{\mathcal{D}}^{\mathrm{op}}x\sim c\;|\
\mathcal{C}_{\mathcal{D}}^{\mathrm{op}}\varphi\sim c\\\ \end{array}$
where $x\in X$, $\sim\in\\{<,\leq\\}$, $c\in\mathbb{R}$ is a constant,
$I\subseteq\mathbb{R}_{>0}$ is a real positive dense time interval,
$\mathcal{U}_{I}$ is the _bounded until_ temporal operators from STL. The
_always_ (denoted $\square$) and _eventually_ (denoted $\lozenge$) temporal
operators can be derived the same way as in STL, where
$\lozenge\varphi\equiv\mathsf{true}\ \mathcal{U}_{I}\varphi$, and
$\square\varphi\equiv\neg\lozenge\neg\varphi$. Spatial _aggregation_ operators
$\mathcal{A}_{\mathcal{D}}^{\mathrm{op}}x\sim c$ for
$\mathrm{op}\in\\{\max,\min,\mathrm{sum},\mathrm{avg}\\}$ evaluates the
aggregated product of traces
$\mathrm{op}(\alpha_{\mathcal{D}}^{x}(\omega,t,l))$ over a set of locations
$l\in L_{\mathcal{D}}^{l}$, and _counting_ operators
$\mathcal{C}_{\mathcal{D}}^{\mathrm{op}}\varphi\sim c$ for
$\mathrm{op}\in\\{\max,\min,\mathrm{sum},\mathrm{avg}\\}$ counts the
satisfaction of traces over a set of locations. From counting operators, we
derive the _everywhere_ operator as
$\boxbox_{\mathcal{D}}\varphi\equiv\mathcal{C}_{\mathcal{D}}^{\mathrm{min}}\varphi>0$,
and _somewhere_ operator as
$\diamonddiamond_{\mathcal{D}}\varphi\equiv\mathcal{C}_{\mathcal{D}}^{\mathrm{max}}\varphi>0$.
Please refer to [4] for the detailed definition and semantics of SaSTL.
### 4.2 Interface for Intelligent Assistant
City requirements often have missing or ambiguous information, which may be
unnoticed by policy makers. It leads to the demand for human inputs and
clarification when converting them into formal specifications. Therefore, we
design an intelligent assistant interface in CitySpec serving as an
intermediary between policy makers and the translation model. It communicates
with policy makers and confirms the final requirements through an intelligent
conversation interface.
To briefly describe the communication process, users first input a requirement
in English, e.g., “due to safety concerns, the number of taxis should be less
than 10 between 7 am to 8 am”. CitySpec interface passes the requirement to
the translation model and gets a formal requirement
($\mathsf{always}_{[7,8]}\mathsf{number~{}of~{}taxi}<10$) with the keywords
including,
* 1.
$\mathsf{entity}$: the requirement’s main object, e.g., “the number”,
* 2.
$\mathsf{quantifier}$: the scope of an entity, e.g., “taxi”,
* 3.
$\mathsf{location}$: the location where this requirement is in effect, which
is missing from the above example requirement,
* 4.
$\mathsf{time}$: the time period during which this requirement is in effect,
e.g., “between 7 am to 8 am”,
* 5.
$\mathsf{condition}$: the specific constraint on the entity, such as an upper
or lower bound of $\mathsf{entity}$, e.g., “10”.
As a result, CitySpec detects that the location information is missing from
the user’s requirement and generates a query for the user, “what is the
location for this requirement?” Next, with new information typed in by the
user (e.g., “within 200 meters of all the schools”), CitySpec obtains a
complete requirement.
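For illustration, the completed requirement can be represented internally by the five key fields listed above; the dictionary layout below is an assumption about the implementation, while the field values come from the running example.

```python
# Illustrative internal representation of the completed taxi requirement.
requirement = {
    "entity":     "the number",
    "quantifier": "taxi",
    "location":   "within 200 meters of all the schools",
    "time":       "between 7 am to 8 am",
    "condition":  "10",   # upper bound on the entity
}
# Corresponding SaSTL specification:
#   everywhere_{school and [0, 200]} always_{[7, 8]} (number of taxi < 10)
```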
The next challenge is how to confirm the formal specification with policy
makers. Since they do not understand the formal equation, we further convert
it to a template-based sentence. Therefore, CitySpec presents three formats of
this requirements for users to verify, (1) a template-based requirement, e.g.,
[number] of [taxi] should be [$<$] [10] [between 7:00 to 8:00] [within 200
meters of all the schools], (2) a SaSTL formula
$\mathsf{everywhere}_{\mathsf{school}\land[0,200]}\mathsf{always}_{[7,8]}\mathsf{number~{}of~{}taxi}<10$,
and (3) five key fields detected. Users can confirm or further revise this
requirement through the intelligent assistant.
When policy makers have a large number of requirements to convert, to minimize
user labor to input requirements manually, CitySpec also provides the option
for them to input requirements through a file. The process is similar: CitySpec
asks users to provide or clarify information until all the
requirements in the file are successfully converted.
### 4.3 Requirement Synthesis
The amount of city requirement data is insufficient to train a decent
translation model in an end-to-end manner. As we’ve discussed in Section 1, it
requires extensive domain knowledge in both city and formal specifications and
is extremely time-consuming to annotate new requirements. Furthermore, a
majority of the existing city requirements are qualitatively or imprecisely
written, which cannot be added to the requirement dataset without refinement
[4]. To mitigate the challenge of small data to build a translation model, we
design a novel approach to incorporating city knowledge through controllable
requirement synthesis.
There are two main reasons why converting a city requirement to a formal
specification is challenging with a small amount of data. First, the
vocabulary of city requirements is very diverse. For example, requirements
from different cities (e.g., Seattle and New York City) or in different
domains (e.g., transportation and environment) have totally different
vocabulary for entities, locations, and conditions. Second, the sentence
structure (patterns) of requirements vary significantly when written by
different people. It is natural for human beings to describe the same thing
using different sentences.
Targeting these two challenges, we first extract city knowledge and build two
knowledge datasets, i.e., a vocabulary set and a pattern set. The vocabulary
set includes five keys of a requirement, i.e., entity, quantifier, location,
time and condition. The pattern set includes requirement sentences with 5
keywords replaced by their labels. For instance, we have a requirement, “In
all buildings/$\mathsf{location}$, the average concentration/$\mathsf{entity}$
of TVOC/$\mathsf{quantifier}$ should be no more than 0.6
mg/m3/$\mathsf{condition}$ for every day/$\mathsf{time}$.”, the pattern
extracted is “In #$\mathsf{location}$, the average #$\mathsf{entity}$ of
#$\mathsf{quantifier}$ should be no more than #$\mathsf{condition}$ for
#$\mathsf{time}$.”
We extract the knowledge set from city documents in addition to requirements so that
we are not limited by the rules of requirements and can enrich the knowledge of
our model. For example, we extract 336 patterns and 3061 phrases (530 phrases
in $\mathsf{entity}$, 567 phrases in $\mathsf{quantifier}$, 501 phrases in
$\mathsf{location}$, 595 phrases in $\mathsf{condition}$, and 868 phrases in
$\mathsf{time}$).
Next, we design an approach to synthesize a controllable requirement dataset
efficiently. Intuitively, we could go through all the combinations of keywords
and patterns to create the dataset of requirements, but this is infeasible and
may cause the model to overfit to the injected knowledge. In order to enhance
the model’s performance, we need to keep a balance between the coverage of
each keyword and the times of keywords being seen in the generation. We denote
$\lambda$ as the synthesis index, which indicates the _minimum_ number of
times that a keyword appears in the generated set of requirements. Assuming we
have $m$ set of keywords vocabularies $\\{V_{1},V_{2}\dots V_{m}\\}$ and a
pattern set as $P$, we have the total number of synthesized requirements
$\ell=\lambda\cdot\max(|V_{1}|,|V_{2}|,\dots,|V_{m}|)$. For each set of
vocabularies $V_{i}$, we first create a random permutation of $V_{i}$ and
repeat it until the total number of phrases reaches $\ell$, then we
concatenate them to an array $S_{i}$. Once we obtain $S_{1},...S_{m}$, we
combine them with pattern $P$ to generate a requirement set $R$. Refer to
Algorithm 1 for more details.
Algorithm 1 Requirement Synthesis
Input: $m$ set of keywords vocabularies $\\{V_{1},V_{2}\dots V_{m}\\}$,
Pattern $P$, synthesis index $\lambda$
Output: Set of requirements $R$
Initialize $R$ as an empty set$\\{\\}$
Let $\ell=\lambda\cdot\max(|V_{1}|,|V_{2}|,\dots,|V_{m}|)$
for $i\in 1\dots m$ do
Initialize $S_{i}$ as an empty array $S_{i}=[]$
while $|S_{i}|<\ell$ do
Create a random permutation of $V_{i}$: $Q=\text{Permutate}(V_{i})$
Concatenate $Q$ to $S_{i}$: $S_{i}=\text{Concat}(S_{i},Q)$
end while
end for
for $j\in 1\dots\ell$ do
Combine keywords $S_{1}[j],S_{2}[j]\dots S_{m}[j]$ with Pattern $P$ to create
a requirement $r_{j}$
Add $r_{j}$ to the set of requirements $R$
end for
return $R$
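For concreteness, a minimal Python sketch of Algorithm 1 is given below. It assumes each pattern is a string containing placeholders such as #entity, and it samples one pattern uniformly at random per synthesized requirement; both are illustrative assumptions, since the pseudocode above leaves the pattern selection unspecified.

```python
import random

def synthesize_requirements(vocabularies, patterns, synthesis_index):
    """Sketch of Algorithm 1 (Requirement Synthesis).

    vocabularies    : dict mapping a key (e.g., "entity") to its phrase list V_i
    patterns        : list of pattern strings with placeholders such as "#entity"
    synthesis_index : lambda, the minimum number of times each phrase appears
    """
    ell = synthesis_index * max(len(v) for v in vocabularies.values())
    streams = {}
    for key, vocab in vocabularies.items():
        s = []
        while len(s) < ell:
            s += random.sample(vocab, len(vocab))      # one random permutation of V_i
        streams[key] = s
    requirements = []
    for j in range(ell):
        req = random.choice(patterns)                  # pattern choice per requirement is an assumption
        for key in vocabularies:
            req = req.replace(f"#{key}", streams[key][j])
        requirements.append(req)
    return requirements
```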
### 4.4 Translation Model
The inputs of the translation model are requirements, and the outputs of this
module are formal specifications with token-level classification. We implement
the translation model with three major components, a learning model, knowledge
injection through synthesized requirements, and keyword refinement.
Note that CitySpec does not build its own translation model from scratch.
Instead, we tackle the limitation of the traditional language model and
improve it for city requirement translation. Therefore, CitySpec is compatible
with different language models.
In this paper, we implement four popular language models, which are Vanilla
Seq2Seq, Stanford NLP NER, Bidirectional Long Short Term Memory (Bi-LSTM) +
Conditional Random Field (CRF) and Bidirectional Encoder Representations from
Transformers (BERT) [26]. We apply our synthesized datasets with different
synthesis indexes to inject city knowledge into these language models. Then we
evaluate the improvement brought by our requirement synthesis approach by
testing the performance on real-world city requirements. We present the
detailed results and analysis in Section 5.
Additionally, we find that time, negation and comparison are the most tricky
elements that affect the accuracy of the final specification detection.
Therefore, we implement another refinement component in the translation model.
In general, the $\mathsf{time}$ can be represented in several formats, such as
timestamps, or other formats like yyyy-mm-dd and mm-dd-yyyy. To mitigate the
confusion that various formats might bring, we apply SUTime [27] when the
$\mathsf{time}$ entity is not given by the translation model. pyContextNLP
[28] is applied to analyze whether there is a negation in the input sentence.
If there is any negation, the comparison symbol is reversed. For instance, if
there is a keyword “greater than”, the comparison symbol is $>$. However, if
the whole phrase is “is not supposed to be greater than”, a negation is
detected and the final comparison becomes $\leq$ instead.
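The refinement step can be sketched as follows; the keyword-based negation check is only an illustrative stand-in for pyContextNLP, whose actual API is not reproduced here.

```python
def refine_comparison(phrase):
    """Map a comparison phrase to a symbol and reverse it if a negation is detected."""
    comparison = ">" if "greater than" in phrase else "<="
    if any(neg in phrase for neg in ("not ", "n't ", "never ")):
        # A negation reverses the comparison, e.g., "is not supposed to be greater than" -> <=
        comparison = {">": "<=", "<=": ">"}[comparison]
    return comparison

assert refine_comparison("is not supposed to be greater than") == "<="
```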
### 4.5 Secured Online Learning
In general, the more clarifications are needed from the users, the worse
experience the users will have, especially if users have to clarify the same
information repeatedly. For example, if a user from a new city inputs a
location that the system fails to detect, the user will be asked to clarify
the location information. The user experience degrades if the system asks
again the second or third time it sees the same words. However, the deep
learning-based translation model cannot “remember” this information at
deployment time. Thus, the first question is: how can CitySpec learn new
knowledge online?
Meanwhile, the new information provided by users may also harm the system if
it is an incorrect or adversarial example. The second question is: how can
CitySpec validate new knowledge before learning it permanently?
Figure 3: Secured Online Learning
Targeting these two research questions, we design an online learning module in
CitySpec. As shown in Figure 4, it has two stages, which are short-term
learning and long-term learning. Short-term learning is designed to
accommodate the same user in one session of requirement specification with a
temporary memory. The question-answer pairs are stored temporarily. When the
same situation recurs, the temporary cache gives an instant answer and avoids
further user clarification. Long-term learning is designed to incorporate new
knowledge into the model permanently after validating its reliability. The
accepted knowledge is made permanent by updating weights via back-propagation
on the extended dataset containing both the initial data and the new
input-label pairs stored in the temporary cache.
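A minimal sketch of the short-term memory is given below; the data layout and method names are illustrative, not the system's actual implementation.

```python
class SessionCache:
    """Temporary per-session memory for clarified (phrase, key) pairs."""

    def __init__(self):
        self._memory = {}

    def remember(self, phrase: str, key: str) -> None:
        # Store a clarification that has already passed the shield check.
        self._memory[phrase.lower()] = key

    def lookup(self, phrase: str):
        # Return the cached key, or None if the user still needs to clarify.
        return self._memory.get(phrase.lower())

    def export_for_long_term(self):
        # Pairs handed to long-term learning (back-propagation on the extended dataset).
        return list(self._memory.items())
```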
To prevent the injection of malicious or suspicious knowledge into CitySpec,
we have implemented two sub-components to secure the online learning session:
the Shield Model and the Validation Function. The Shield Model has been
designed to segregate non-malicious inputs from malicious ones. As depicted in
Fig. 4 and Fig. 3, the Shield Model protects both short-term and long-term
online learning sessions by determining whether the user input is malicious.
For instance, when a user provides their location as “within 100 meters of the
Vanderbilt campus” in the front-end interface after the Translation Model's
output, as shown in Fig. 3, the Shield Model examines it from the backend and
prevents any malicious information from being included, as shown on the right
side of Fig. 3. Once a user clarification passes through the Shield Model, it
is stored in the cache for future reference; for example, if the user again
inputs a requirement with “within 100 meters of the Vanderbilt campus” as the
location, CitySpec directly provides the answer from the cache. Periodically,
the samples that passed the shield function are forwarded to the Validation
Function, which further examines whether the user has provided the correct
label-phrase pair. As shown in the last few steps in Fig. 3, the Validation
Function verifies whether “within 100 meters of the Vanderbilt campus” should
be labeled as a location. Once examined by both sub-components, user inputs
are injected into the city knowledge and used for learning purposes.
Figure 4: One Sample Round of Online Learning
#### 4.5.1 Validation Function
In our setting, if CitySpec fails to fulfill all the predefined key domains,
it will ask the user for clarification in token-domain pairs. We implement a
BERT-based classification model as the validation function and train it on
city knowledge generated from all the existing requirements. To protect
CitySpec from adversarial inputs, we further develop a Bayesian CNN-based
validation module, which classifies the category of a new term and estimates
its uncertainty. We apply dropout layers during both training and testing to
quantify the model uncertainty [6]. The inputs of the validation model are the
new terms provided by the user, and the outputs are the corresponding keys
among the five key elements along with an uncertainty level. In brief, a new
term-key pair is rejected if (1) the output from the validation function does
not align with the given domain key, or (2) the validation function has low
confidence in its output even when it aligns with the given domain. In this
way, we only accept new city knowledge validated with high confidence.
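A sketch of the uncertainty-aware acceptance rule follows, using Monte Carlo dropout as in [6]; the toy classifier, the number of stochastic passes, and the entropy-based uncertainty score are illustrative assumptions rather than the exact model used in CitySpec.

```python
import torch
import torch.nn as nn

class ValidationClassifier(nn.Module):
    """Toy stand-in for the validation model (five key elements as classes)."""
    def __init__(self, embed_dim=128, n_classes=5, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def validate_term(model, term_embedding, claimed_key, n_passes=30, max_uncertainty=0.5):
    """Accept a term-key pair only if the predicted key matches and uncertainty is low."""
    model.train()  # keep dropout active at inference time (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(term_embedding), dim=-1)
                             for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)
    predicted_key = int(mean_probs.argmax())
    # Normalized predictive entropy in [0, 1] serves as the uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum()
    uncertainty = float(entropy / torch.log(torch.tensor(float(mean_probs.numel()))))
    accepted = (predicted_key == claimed_key) and (uncertainty <= max_uncertainty)
    return accepted, predicted_key, uncertainty
```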
#### 4.5.2 Shield Model
By our design, although the validation function is supposed to serve as an
over-watch to prevent CitySpec from any suspicious new knowledge, we still
find it vulnerable to adversarial attacks because it directly takes textual
inputs from the user. If the user keeps manipulating textual inputs, then by
enumeration the user can find adversarial inputs that confuse the validation
function, pass its check, and directly poison our city knowledge. Since the
validation function is periodically retrained on the city knowledge, if the
city knowledge gets poisoned, not only the validation function but CitySpec as
a whole will start to malfunction.
For example, suppose a truck company desires to increase its revenue despite
the existence of current regulations, such as “trucks are not allowed to enter
Charlotte Pike from 5 PM to 7 PM on weekdays”. In such a scenario, the company
may opt to engage in an attack on the system by poisoning the city knowledge,
which could involve manipulating the location or time data. By doing so, such
regulation would cease to be effectively enforced, enabling the truck company
to allocate their trucks onto Charlotte Pike during rush hours. This action,
however, poses a significant threat to public safety, given that it could lead
to traffic accidents and congestion on the road.
Thus, we need an additional shield model to guard the validation function so
that CitySpec will be hard to attack directly and robust to most adversarial
attacks. The shield model, by our design, can detect any malicious behavior
when the user clarifies the unfulfilled domain before feeding the
clarification to the validation function. Our shield model consists of two
separate filters: literal correction and inferential mapping. To test the
effectiveness of the shield model, we generate malicious attacks with 12
different kinds of adversarial inputs drawn from recent literature
[29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 2].
The investigated and implemented adversarial attacks [40] aim to mislead the
validation function, based on existing city knowledge, by generating new
malicious textual inputs. Although these adversarial attack types differ from
each other, we identify two key steps shared by them: Word Importance Ranking
$S(\cdot)$ and Word Transformation $T(\cdot)$. Given an existing valid input
sequence $X$ and the victim function $F$, an attacker in general first ranks
the sub-tokens and selects the most ‘important’ ones $x_{i,j,k,\dots}\in X$
based on selected metrics. The attacker then transforms the selected tokens
into other textual inputs $x^{\prime}_{i,j,k,\dots}$ and replaces the original
ones in $X$. The newly assembled sequence $X^{\prime}$ is passed to $F$; if
$F$ changes its output, the attack is considered successful. Thus, a
successful attack satisfies $F(T(S(X)))\neq F(X)$. Some of these adversarial
attacks skip the word importance ranking step entirely; those that do rank
tokens make different assumptions and apply different ranking metrics. We
therefore let the shield model focus more on the word transformation during
detection. In those attacks, there are
two main genres of word transformation: inner-word transformation and inter-
word transformation. Inner-word transformations change the characters within
selected tokens, e.g., change “energy consumption” to “energy consumptlon”.
Inter-word transformations change the whole word instead based on some
replacement strategy, e.g., change “available space” to “free space”. More
details can be found in Table 2.
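The two-step attack structure described above can be sketched generically; the ranking and transformation callables below stand in for the attack-specific strategies summarized in Table 2.

```python
from typing import Callable, List, Optional

def generic_attack(tokens: List[str],
                   victim: Callable[[List[str]], int],
                   rank: Callable[[List[str]], List[int]],
                   transform: Callable[[str], str],
                   max_edits: int = 3) -> Optional[List[str]]:
    """Word importance ranking followed by word transformation (sketch).

    A perturbed sequence X' is a successful attack when victim(X') != victim(X),
    i.e. F(T(S(X))) != F(X).
    """
    original_label = victim(tokens)
    perturbed = list(tokens)
    for idx in rank(tokens)[:max_edits]:            # most 'important' tokens first
        perturbed[idx] = transform(perturbed[idx])  # e.g. synonym swap or character edit
        if victim(perturbed) != original_label:
            return perturbed                        # attack succeeded
    return None                                     # attack failed within the edit budget
```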
We develop a literal correction filter to detect and leave out any literal
disinformation provided in the input tokens. The literal correction is
designed to mainly focus on the inner-word transformation. This filter
consists of two main components: a language model and a type checker. The
language model is trained on over 1,500 existing city requirements. Given a
seed token, the language model generates the most ‘reasonable’ sequence by
maximum likelihood estimation. Based on the reference word provided by the
language model, the type checker searches for all possible variant words based
on the reference word without changing too much in spelling. The input
sequence is considered malicious if any correction is involved in the initial
inputs. For example, the user types in “in the m0rninGs”. If the language
model gives “in the mornings” as a reference, the type checker tries to
recover “m0rninGs” to “mornings”. If the correction is within the allowed edit
distance, the input sequence is corrected to “in the mornings”. Since the
corrected sequence is no longer the same as what was first typed in, the
sequence is considered malicious. This Literal Correction filter is considered
hard to attack directly because the attacker has no access to (1) the existing
1,500 city requirements; (2) the details of the language model; or (3) the
edit distance budget in the type checker.
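A minimal sketch of the type-checker step is shown below: an input token is corrected toward the reference word when it lies within an edit-distance budget, and any needed correction marks the sequence as malicious. The Levenshtein distance and the budget value of 2 are illustrative choices.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via a single-row dynamic program."""
    row = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, row[0] = row[0], i
        for j, cb in enumerate(b, start=1):
            prev, row[j] = row[j], min(row[j] + 1,          # deletion
                                       row[j - 1] + 1,      # insertion
                                       prev + (ca != cb))   # substitution
    return row[-1]

def literal_correction(token: str, reference: str, budget: int = 2):
    """Return (possibly corrected token, malicious flag) for one input token."""
    if token.lower() == reference.lower():
        return token, False
    if edit_distance(token.lower(), reference.lower()) <= budget:
        return reference, True   # a correction was needed, so flag the sequence
    return token, False          # outside the budget: leave it to the other filter

# "m0rninGs" is recoverable to the reference "mornings", so the input is flagged.
print(literal_correction("m0rninGs", "mornings"))  # ('mornings', True)
```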
However, this Literal Correction filter is built upon a fixed vocabulary and
mainly focuses on word spelling and phrase composing. It ignores the
inferential understanding of a given phrase. Thus, we need another filter that
suffers less from the limited vocabulary and has more prior knowledge in
natural language inferential understanding.
We leverage the prior knowledge brought by the BERT model and develop an
inferential mapping filter. It first embeds input sequences of arbitrary
length into vectors of fixed length [41]. This embedding process can also be
understood as mapping textual sequences into a numeric hyperspace. Based on
the studies in [41], this hyperspace tends to place phrases with similar
inferential information closer together. For example, the Euclidean distance
between “at the gates” and “at the doors” after embedding is 8.805, whereas
the distance between “at the gates” and “on the campus” is 14.95. Based on this
characteristic, this inferential mapping filter can mitigate the issues
brought by inter-word transformations like word insertion. After mapping all
phrases into the hyperspace as vectors with a fixed length, we employ multi-
layer perceptrons to draw the decision boundary in the hyperspace. To make
this inferential mapping filter more robust to direct attacks, we additionally
mask the embedding vectors with trainable weights before passing them to the
downstream perceptrons. In conclusion, this filter is considered difficult
to attack because (1) all textual information is encrypted by masked phrase
embeddings before feeding to the discriminator, which means there is no strong
direct relationship between the textual inputs and their corresponding numeric
representations; (2) the attacker does not have access to the model
details of the downstream perceptrons.
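A sketch of the inferential mapping filter is given below; the embedding function (e.g. a Phrase-BERT-style encoder [41]), the dimensions, and the class convention are illustrative placeholders.

```python
import torch
import torch.nn as nn

class InferentialMappingFilter(nn.Module):
    """Masked phrase embedding followed by an MLP discriminator (sketch)."""
    def __init__(self, embed_dim=256, hidden=64):
        super().__init__()
        # Trainable mask applied before the discriminator, so raw text never maps
        # directly onto the discriminator's input space.
        self.mask = nn.Parameter(torch.ones(embed_dim))
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # malicious vs. non-malicious
        )

    def forward(self, phrase_embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(phrase_embedding * self.mask)

def is_malicious(filter_model, embed_fn, phrase: str) -> bool:
    """embed_fn is a placeholder for a fixed-length phrase embedder."""
    with torch.no_grad():
        logits = filter_model(embed_fn(phrase))
    return bool(logits.argmax() == 0)  # class 0 taken as 'malicious' by convention here
```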
Table 2: Adversarial Attacks Overview
Attack Types | Word Importance Ranking | Word Transformation Strategy | Original Tokens | Perturbed Tokens
---|---|---|---|---
A2T | Gradient wrt. Loss Function | Iterative Synonym Replacement | For any source of sound …, shall not exceed the peak levels of … | For any source of sound …, shall not exceed the peak tiers of …
BAE | N/A | Bert Generation | No person shall use, operate, or maintain an alert system at … | No person shall use, access, or maintain an alert system at …
BertAttack | Forward Result Changes after Permutation | Bert Generation | No operator of a sidewalk cafe shall be assigned all the available space within … | No operator of a sidewalk cafe shall be assigned all the available space within …
InputReduction | Confidence Change from Prediction Dist. | Repetitive Word Removal | DC minimum roof reflectance: three year-aged solar reflectance of 0.55 | DC minimum roof reflectance: three year-aged reflectance of 0.55
Pruthi’s | N/A | Character-level Perturbation | All Refrigerators and … shall have maximum energy consumption less than … | All Refrigerators and … shall have maximum energy consumptlon less than …
PSO | N/A | Word Replacement based on Sememes | The location of a vending machine shall … | The locality of a vending machine shall …
PWWS | Word Saliency Change on Output Dist. | Synonym Replacement | Construction and demolition within one thousand (1,000) feet of a residential property is prohibited when … | Building and demolition within one thousand (1,000) feet of a residential property is prohibited when …
TextBugger | Confidence Change from Prediction Dist. | Character-level Perturbation | … the material composition of the pavement system layers … | … the material composing of the pavement system layers …
TextFooler | Forward Result Changes after Deletion | Synonym Replacement | No idling vehicles are allowed about or on any … schools | No unused vehicles are allowed about or on any … schools
DeepWordBug | LSTM-based Score | Character-level Perturbation | … no person shall stop , park , or leave standing any vehicle … | … no person shall stCop, pakr, or leav standing any vehicle
CheckList | N/A | N/A | N/A | N/A
CLARE | N/A | Replace, Insert and Merge | … the total gross floor area of all structures … | … the total gross floor residential area of all structures …
## 5 Evaluation
In this section, we evaluate our CitySpec system from five aspects, including
(1) comparing different language models on the initial dataset without
synthesizing, (2) analyzing the effectiveness of the synthesized requirements
by enhancing the models with city knowledge, (3) evaluating the performance of
the online validation model, (4) testing CitySpec’s adaptability in different
cities and application domains, and (5) an overall case study. We use the city
requirement dataset described in Section 2. To evaluate the prediction of
keywords and mitigate the influence of requirements with different lengths,
we choose to use token-level accuracy (token-acc) and sentence-level accuracy
(sent-acc) as our main metrics. The token-level accuracy aims to count the
number of key tokens that are correctly predicted. The sentence-level accuracy
counts the prediction as correct only when the whole requirement is correctly
translated to a formal specification using SaSTL. Thus, sentence-level
accuracy serves as a very strict criterion to evaluate the model performance.
We also provide the results using other common metrics including precision,
recall, and F-1 score. The experiments were run on a machine with 2.50GHz CPU,
32GB memory, and Nvidia GeForce RTX 3080Ti GPU.
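A sketch of how the two metrics can be computed over token-tag sequences is shown below; the exact label format used by CitySpec is not reproduced here.

```python
def token_and_sentence_accuracy(predictions, labels):
    """Token-level and sentence-level accuracy over token-tag sequences.

    predictions, labels: lists of tag sequences, one sequence per requirement.
    """
    total_tokens = correct_tokens = correct_sentences = 0
    for pred_seq, label_seq in zip(predictions, labels):
        matches = [p == t for p, t in zip(pred_seq, label_seq)]
        correct_tokens += sum(matches)
        total_tokens += len(label_seq)
        correct_sentences += all(matches)  # counts only if every token is correct
    return correct_tokens / total_tokens, correct_sentences / len(labels)
```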
### 5.1 Performance of language models on the initial dataset
Table 3: Performance of language models on the initial dataset
Model | Token-Acc | Sent-Acc | F-1 Score | Precision | Recall
---|---|---|---|---|---
Vanilla Seq2Seq | 10.91 $\pm$ 0.57 % | 1.38 $\pm$ 0.49% | 24.12 $\pm$ 0.24 % | 65.58 $\pm$ 7.95 % | 14.81 $\pm$ 1.58 %
BiLSTM + CRF | 77.59 $\pm$ 0.52 % | 60.82 $\pm$ 1.22 % | 80.46 $\pm$ 0.84 % | 81.11 $\pm$ 1.38 % | 79.83 $\pm$ 7.24 %
BERT | 80.41 $\pm$ 0.07 % | 59.02 $\pm$ 0.42 % | 81.43 $\pm$ 0.01 % | 78.62 $\pm$ 0.01 % | 84.46 $\pm$ 0.01 %
As a baseline for the translation model without city knowledge, we first evaluate
the performance of CitySpec using different language models, including Vanilla
Seq2Seq, pretrained Stanford NER Tagger, Bi-LSTM + CRF, and BERT on the
initial dataset. We present the results in Table 3.
We make the following observations from the results. First, the overlap
between Stanford Pretrained NER Tagger prediction and vocabulary is only 9 out
of 729. The pretrained tagger tends to give locations at a coarser granularity.
Since this task is at the city level, more detailed location information is
stated at a finer granularity, such as a street name, building name, or
community name, whereas the $\mathsf{location}$ domain in the pretrained
tagger gives higher-level information like a city name, state name, or country
name. For example, “34th Ave in Nashville, the state of Tennessee” is
annotated as $\mathsf{location}$ in this task; however, the pretrained NER
tagger gives “Tennessee” as the $\mathsf{location}$ instead.
Secondly, the testing token-acc from Vanilla Seq2Seq is 10.91% on average.
Other metrics also indicate that Vanilla Seq2Seq has trouble recognizing the
patterns in sequential keyword labeling. The Vanilla Seq2Seq model suffers
from data scarcity and has difficulty recognizing the general patterns in the
training samples due to the small size of the dataset.
Thirdly, the Bi-LSTM + CRF and BERT model achieve better performance than
other models, and BERT models often outperform other models with lower
standard deviation. However, the highest token-level accuracy achieved is
80.41%, which is still not high enough for an accuracy-prioritized task. A
different key may change the requirement entirely. For example, the “width” of
“car windshield” and the “width” of “car” focus on completely different
aspects, although the keywords “car windshield” and “car” differ by only one
word. Meanwhile, the best sentence-level accuracy achieved is 60.82%, which
means that about 40% of the requirements are falsely translated. Even if
policy makers fix these requirements through the intelligent assistant
interface, doing so is time-consuming and degrades the user experience. Even
worse, it may introduce safety issues into the monitoring system without
anyone noticing.
In summary, the results indicate that existing language models are not
sufficient to serve as the translation model for CitySpec directly. There is a
high demand for injecting city knowledge to build the translation model.
### 5.2 Requirement Synthesis with City Knowledge
Figure 5: Performance improvement brought by requirement synthesis
Next, we evaluate CitySpec’s performance with our controllable synthesized
requirements. For a fair comparison, we do not use the requirements in the
testing set to synthesize requirements. We ensure that the trained model has
not seen the requirements in the testing set in either the knowledge injection
or training phases. We apply different synthesis indexes to test the effects
on the prediction performance. We present the overall results on token-level
and sentence-level accuracy in Figure 5, and F-1 scores on individual keyword
in Figure 6. In the figures, the x-axis represents the synthesis index. When
the index equals “initial”, it shows the results without synthesized data.
From the results, we find that, for BERT and Bi-LSTM, there is an overall
increase in performance in all token-level accuracy, sentence-level accuracy,
overall F-1 score, and F-1 score on keywords. For example, BiLSTM+CRF’s token-
level accuracy increases from 77.59% to 97% and sentence-level accuracy
increases from 60.82% to 81.3%, BERT’s sentence-level accuracy increases from
59.02% to 86.64%.
In summary, the results show that injecting city knowledge with synthesized
requirements boosts the translation model significantly. While improving
policy makers' user experience with higher accuracy and fewer clarifications,
it also potentially enhances the safety of the monitoring system.
### 5.3 Performance on Online Validation
We evaluate the validation model by simulating four different testing
scenarios: (I) randomly generated malicious input based on the permutation of
letters and symbols; (II) all street names in Nashville; (III) real city
vocabulary generated from Nashville requirements; (IV) generated float numbers
with different units.
First of all, the accuracy of the validation model is very high. When the
uncertainty threshold is set to 0.5, i.e., all inputs causing an uncertainty
higher than 0.5 are ruled out, CitySpec gives a 100% success rate against
scenario I among 2,000 malicious inputs, 91.40% acceptance rate among 2,107
samples in scenario II, 92.12% acceptance rate among 596 samples in scenario
III, and 94.51% acceptance rate among 2,040 samples in scenario IV.
Additionally, we find that the validation function easily confuses
$\mathsf{entity}$ with $\mathsf{quantifier}$ if no further guidance is
offered. We look into the dataset and find that $\mathsf{entity}$ and
$\mathsf{quantifier}$ are confusing even to humans without any context
information. Take the requirement “In all buildings, the average concentration
of Sulfur dioxide (SO2) should be no more than 0.15 mg/m3 for every day.” as
an example: $\mathsf{entity}$ is “concentration” and $\mathsf{quantifier}$ is
“Sulfur dioxide (SO2)”. If the requirement is changed to “The maximum level of
the concentration of Sulfur dioxide (SO2) should be no more than 0.15 mg / m3
for every day.”, then $\mathsf{entity}$ is “maximum level” and
$\mathsf{quantifier}$ is “concentration” instead. In addition, terms like
“occupancy of a shopping mall”, “noise level at a shopping mall”, and “the
shopping mall of the commercial district” also introduce confusion between
$\mathsf{location}$, $\mathsf{entity}$ and $\mathsf{quantifier}$, since the
same token “shopping mall” can be $\mathsf{entity}$, $\mathsf{quantifier}$ or
$\mathsf{location}$ in certain cases.
The results show that the validation algorithm can effectively accept new city
knowledge, prevent adversarial inputs and safeguard online learning.
Therefore, CitySpec reduces unnecessary interactions between policy makers and
the system and increases efficiency.
Figure 6: F-1 scores on four keywords
### 5.4 Adaptability to different scenarios
Table 4: Adaptability on different cities in terms of token-level accuracy, sent-level accuracy, and overall F-1 score
City | Seattle | Changsha
---|---|---
Metrics | TokenAcc | SentAcc | F-1 | TokenAcc | SentAcc | F-1
Non-adaptive w/ BiLSTM+CRF | 84.91% | 48.00% | 77.60% | 86.61% | 61.20% | 84.10%
Adaptive w/ BiLSTM+CRF | 96.05% | 84.80% | 93.75% | 95.27% | 83.20% | 93.57%
Non-adaptive w/ BERT | 80.38% | 46.40% | 76.80% | 86.10% | 58.00% | 83.87%
Adaptive w/ BERT | 95.10% | 80.70% | 90.28% | 97.16% | 88.40% | 96.88%
City | Charlottesville | Jacksonville
Metrics | TokenAcc | SentAcc | F-1 | TokenAcc | SentAcc | F-1
Non-adaptive w/ BiLSTM+CRF | 90.00% | 65.62% | 86.48% | 77.32% | 35.20% | 81.54%
Adaptive w/ BiLSTM+CRF | 96.82% | 89.29% | 95.40% | 97.35% | 88.80% | 96.02%
Non-adaptive w/ BERT | 93.44% | 73.21% | 90.46% | 90.21% | 56.40% | 81.60%
Adaptive w/ BERT | 97.53% | 87.05% | 94.31% | 96.27% | 83.20% | 92.59%
Table 5: Adaptability on different topics in terms of token-level accuracy, sent-level accuracy, and overall F-1 score
Topic | Noise Control | Public Access
---|---|---
Metrics | TokenAcc | SentAcc | F-1 | TokenAcc | SentAcc | F-1
Non-adaptive w/ BiLSTM+CRF | 77.82% | 41.56% | 77.83% | 73.99% | 44.07% | 74.41%
Adaptive w/ BiLSTM+CRF | 95.82% | 90.68% | 94.46% | 97.68% | 74.80% | 97.39%
Non-adaptive w/ BERT | 84.15% | 58.62% | 81.05% | 83.59% | 54.17% | 78.93%
Adaptive w/ BERT | 98.07% | 88.31% | 92.98% | 97.43% | 88.75% | 95.75%
Topic | Indoor Air Control | Security
Metrics | TokenAcc | SentAcc | F-1 | TokenAcc | SentAcc | F-1
Non-adaptive w/ BiLSTM+CRF | 81.51% | 46.80% | 76.22% | 72.11% | 28.80% | 62.93%
Adaptive w/ BiLSTM+CRF | 95.58% | 80.00% | 87.68% | 94.39% | 94.37% | 92.34%
Non-adaptive w/ BERT | 78.51% | 31.20% | 73.88% | 79.31% | 45.60% | 77.50%
Adaptive w/ BERT | 95.31% | 74.40% | 93.60% | 95.41% | 82.80% | 93.95%
In this section, we analyze CitySpec’s adaptability in different cities and
different domains. Different cities have different regulation focuses and
their city-specific vocabulary. For example, in the city of Nashville,
$\mathsf{location}$ names like “Music Row”, “Grand Ole Opry” will probably
never appear in any other cities. We select four cities, Seattle,
Charlottesville, Jacksonville, and Changsha, with different sizes and from
different countries as case studies. We separate the requirements of each
mentioned city and extract the city-wise vocabulary based on each city
independently. Each of the four constructed pairs consists of: vocabulary I,
which is extracted from the requirements of one city only, and vocabulary II,
which is extracted from the requirements of all the cities except that
specific city. The injected knowledge is measured by the number and the ratio
of unique vocabulary contributed by that one city. We augment vocabulary II
using 5 as the synthesis index and train a model on vocabulary II. As a
result, the trained model is isolated from the vocabulary information of that
one specific city. Afterward, we test the trained model's performance on the
requirements generated using vocabulary I. We pick CitySpec with Bi-LSTM + CRF
and CitySpec with BERT in this scenario. We employ the validation function to
validate all terms in vocabulary I and pass the validated ones to vocabulary
II. After that, we have a validated vocabulary including vocabulary II and the
validated part of vocabulary I. The deployed model is fine-tuned on the
validated vocabulary using few-shot learning.
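A sketch of this leave-one-city-out vocabulary split, with validation-gated merging of the held-out city's terms, is given below; the data structures and the `validate` callable are illustrative.

```python
def leave_one_city_out(city_vocabularies, held_out_city, validate):
    """Vocabulary split for the adaptability experiment (sketch).

    city_vocabularies: dict mapping a city name to a set of keyword phrases
    validate: callable returning True if a phrase passes the validation function
    """
    vocab_i = set(city_vocabularies[held_out_city])           # held-out city only
    vocab_ii = set().union(*(v for c, v in city_vocabularies.items()
                             if c != held_out_city))          # all other cities
    # New knowledge from the held-out city is merged only after validation.
    validated = {phrase for phrase in vocab_i if validate(phrase)}
    return vocab_i, vocab_ii | validated
```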
From the results shown in Table 4, we observe that (1) although CitySpec
migrates to a completely unknown city, it is still able to provide
satisfying performance, e.g., 84.9% token-acc and 77.6% F-1 score in Seattle,
but the sent-acc tends to be low. (2) With new knowledge injected, the
performance increases significantly, e.g., Sent-Acc for Seattle increases from
48% to 84.8% with BiLSTM+CRF, and from 46.4% to 80.7% with BERT.
We further study the practical implications of these sent-acc and token-acc
values. By definition, when a sent-acc of 84.80% is achieved, which is the
adapted performance of BiLSTM + CRF from the Seattle experiment, it reflects
that among all requirements, 84.80% of them are predicted entirely correctly
for both in-domain and out-domain tokens. For example, given “For all the
zones in Civil Building Engineering Group II , retail sale should be less than
10 dB(A) at any time.” as input, the translation model produces output that
can be understood as domain-keyword tuples. In the given requirement, there
are words marked as keywords that belong to a predefined category, for
example, “10 dB(A)” is $\mathsf{number}$. There are also other words that do
not belong to any category, for example, “should be”. For this requirement to
count as correct under sent-acc, the model needs to assign every token to the
correct category, for both kinds of words, whether or not they belong to a key
category. Token-acc behaves differently from sent-acc since it counts the
ratio of correctly predicted tokens over all tokens.
Thus, sent-acc is a stricter metric compared to token-acc.
We study the performance of the translation model by checking its incorrect
predictions. We identify two main situations in which the translation model
makes mistakes that affect sent-acc. Situation I: nuanced differences that do
not affect the overall understanding. Situation II: significant differences
caused by the translation model's failure to understand the requirement
correctly. Take the requirement “The sound level in the residential district
should always be less than 1,000 feet.” as an example: if the model gives
“$\mathsf{quantifier}$: residential district” as one of the predictions while
the true label is “$\mathsf{quantifier}$: the residential district”, then this
prediction sequence is not considered correct in terms of sent-acc, although
this kind of nuanced difference does not affect the overall understanding. In
the other situation, where the model gives “$\mathsf{quantifier}$: district”
instead of “$\mathsf{quantifier}$: the residential district”, the model
predictions should not be adopted. To
mitigate the problems brought by these two situations, we rely on the user to
validate and modify the prediction of the model if necessary, interactively
during the conversation (the results will be applied if it goes through the
shield model check). There is also a window on our interface that reports a
summary of the generated specification but in a more readable format [42].
We also explore CitySpec’s adaptability to different topics. We choose four
topics including noise control, indoor air control, security, and public
access. The results also show that (1) even though CitySpec has not seen
vocabulary from a totally different topic, it still gives a competitive
performance; (2) online learning brings obvious improvements when adaptation
is further applied.
In summary, it indicates the capability of CitySpec in both city and domain
adaptation. It can also adapt to new requirements evolving over time. Moreover,
with a different set of domain-specified knowledge, CitySpec can be
potentially applied to other application domains (e.g., healthcare).
### 5.5 Performance of Shield Model
We first launch 12 different adversarial attacks targeting the validation
function without any protection from the shield. Only a limited amount of city
knowledge is exposed to these attackers for adversarial sample generation. For
those attacks that are not in a black-box fashion, we allow them to query the
validation function arbitrarily. This is because, in practical usage, we
assume partial city knowledge has been exposed to the user so that the user
can understand and compose city requirements based on it. Moreover, since the
conversation assistant is developed in a slot-filling fashion, it keeps
prompting as long as not all required slots are fulfilled by the user, so the
model query budget is set to infinity in this case. After these attacks, we
employ the shield model to defend against those adversarial samples with three
different settings: (1) Literal Correction layer alone; (2) Inferential
Mapping layer alone; (3) Inferential Mapping layer with Literal Correction
layer as a pre-check. The defense rate is measured as the number of filtered-
out adversarial samples divided by the number of remaining adversarial
samples.
As we can tell from the first column in Table 6, the validation function is
vulnerable to 10 out of those 12 attacks without protection from the shield
model; for instance, BertAttack yields a 94.00% attack success
rate. Thus there is an urgent demand for a reliable protection mechanism
deployed against these adversarial attacks. In setting one, although the
Literal Correction layer alone gives around 63.11% defense success rate on
average over 12 adversarial attacks, we still find there are several
shortcomings in the Literal Correction layer. Error analysis shows that the
Literal Correction filter does not work as expected when (1) the language
model fails to give any reference for the perturbed word, which means the
perturbed word never appeared in the existing requirements; or (2) the type
checker fails to correct the word within a certain number of edits. For
example, the user inputs “in the tr1anGl3s”. If the language model fails to
give “triangles” as a reference, which means the word “triangles” never
appears in the requirements, then the type checker no longer works. Even if
the language model generates “in the triangles” as a reference, with the edit
distance budget set to 2 the type checker still fails to convert “tr1anGl3s”
to “triangles”.
In setting two, although the Inferential Mapping layer yields around a 93.09%
defense rate, we find that it has trouble achieving a low false-positive rate
on non-malicious samples. We pass all legal input samples in our city
knowledge to the Inferential Mapping layer and find that it reports a 31.32%
false-positive rate on phrases categorized as $\mathsf{entity}$. We want the
shield model to filter out as many malicious samples as possible while
filtering out as few legal samples as possible. Error analysis shows that the
Inferential Mapping layer ignores typos in phrases due to its focus on
semantic understanding. For example, if the malicious input is “in the
morinigns” and “in the mornings” is marked as non-malicious, the Inferential
Mapping layer is confused, since the Euclidean distance between these two
phrases is only 6.21. In other words, if there is only a small typo in an
input sample, the Inferential Mapping layer ignores it and treats the sample
like one without typos. As a consequence, the false-positive rate increases.
In setting three, the Literal Correction layer helps the Inferential Mapping
layer filter out typos first. At the same time, the Inferential Mapping layer
leverages its advantages in natural language understanding to identify samples
that the Literal Correction layer fails to handle. The overall defense rate is
reported to be around 97.78% and even yields a 100% defense rate when faced
with adversarial attacks like A2T, Pruthi’s algorithm, and DeepWordBug. After
testing this combination of layers on legal inputs, the false-positive rate is
3.09% on average over the five domains.
In summary, the shield model is necessary for CitySpec because it helps the
validation function guard the city knowledge against being poisoned. The
combination of both layers as the shield model is effective against the 12
adversarial attacks without over-filtering non-malicious samples.
### 5.6 Emulation
Due to the absence of a real city policy maker, we emulate the process of
using CitySpec by taking the real-world city requirements and assuming that
they are input by policy makers. Specifically, this case study shows the
iteration of communication between CitySpec and the policy maker to clarify
the requirements. We emulate this process 20 times with 100 requirements
randomly selected from our datasets each time. The results show that the
average and maximum rounds of clarification are 0.8 and 4 per requirement,
respectively, due to missing or ambiguous information. On average, 28.35% of
requirements require clarification on location. For example, for “No vendor
should vend after midnight.”, CitySpec asks the user to clarify the time range
for “after midnight” and the location defined for this requirement. Overall,
CitySpec obtains an average sentence-level accuracy of 90.60% (with BERT and
synthesis index = 5). The case study further proves the effectiveness of
CitySpec in city requirement specification.
Table 6: Shield Defense Rate Against Adversarial Attacks (SR: Success Rate)
Attack Types | Attack SR | Attack SR after Literal Correction | Attack SR after Inferential Mapping | Attack SR after both layers
---|---|---|---|---
A2T | 9.11% | 3.12% | 0.72% | 0%
BAE | 8.63% | 3.11% | 1.52% | 0.25%
BertAttack | 94.00% | 61.39% | 7.91% | 4.31%
InputReduction | 0% | N/A | N/A | N/A
Pruthi’s | 85.85% | 0.96% | 0.96% | 0%
PSO | 69.78% | 30.63% | 3.36% | 1.92%
PWWS | 73.14% | 34.05% | 3.59% | 1.92%
TextBugger | 21.58% | 11.12% | 0.48% | 0.48%
TextFooler | 90.17% | 39.81% | 5.04% | 2.39%
DeepWordBug | 82.73% | 1.43% | 1.19% | 0%
CheckList | 0% | N/A | N/A | N/A
CLARE | 43.14% | 18.50% | 6.91% | 2.38%
## 6 Case Study
To demonstrate the effectiveness of CitySpec, we also launch a real user case
study on 18 participants with different backgrounds: 7 participants with a
Computer Science-related background, 3 with Education-related background, and
2 with a Finance-related background. In addition, we have 1 participant for
each domain among Biology, Environmental Engineering, Maths, Law, and Art. The
goal of this real user case study is to test the performance of CitySpec (1)
in helping the user complete requirements in smart city scenarios; (2) in
helping the user complete requirements in an unseen domain; (3) in detecting
malicious inputs while providing its service; and (4) in handling unseen
knowledge with the help of its online learning features.
### 6.1 CitySpec in Smart City Scenarios
To test the usability and adaptability of CitySpec in smart cities, we ask the
participants to read and type 8 randomly selected requirements sequentially
into CitySpec, then record the time and the number of interactions needed to
complete the requirement specification along with subjective user scores. We
find the number of interactions is highly related to the input requirement
itself, so instead of directly using the number of interactions needed to
measure the usability of CitySpec, we, as authors of this work, provide a
reference number of interactions of each requirement. We take the difference
between the number of user interactions and the number of reference
interactions as a measurement of the usability of CitySpec.
As Figure 7 shows, there is an overall decrease in the additional interactions
per requirement. This decrease demonstrates that our participants get almost
as familiar as us with CitySpec. The slight increases at Req3, Req5, and Req7
are caused by the differences in participants’ subjective understanding
regarding the definition of key elements (e.g., $\mathsf{entity}$ or
$\mathsf{description}$). Figure 7 also shows the time consumed per interaction
is decreasing as our participants use CitySpec. This decreasing trend
indicates the participants are getting more familiar with those key elements
with the help of CitySpec.
Figure 7: CitySpec in smart city (case study)
### 6.2 CitySpec in Unseen Domains
To test the usability and adaptability of CitySpec in unseen domains, next, we
ask the participants to compose three requirements regarding their own major
and occupation. We record the average number of total interactions per
requirement, the average user score, and the average time consumed per
requirement. The results are shown in Figure 8. Among those provided
requirements, CitySpec is found to work effectively in domains that are
related to city regulation topics. For example, in the requirements from
domains like Environmental Engineering, phrases like “PM 2.5” and “iron
concentration” are passed to CitySpec. Although they come from a completely
different area than city requirements, CitySpec still manages to correctly
identify them in the requirements due to the partial similarity between the
city and environmental engineering domains.
Figure 8: CitySpec in unseen domains (case study)
Although areas like medicine, arts, and engineering are unseen during the
training of CitySpec, it still offers correct results most of the time. For
example, without learning “acoustic reflexes” as an entity during training,
CitySpec presents the correct classification only based on the limited
syntactic information given in the requirement. CitySpec’s online learning
feature also helps the specification. One participant whose major is
Electronic Engineering composes requirements regarding systems-on-chip (SoC)
and circuits. In the beginning, CitySpec fails to recognize terms like “SoC”
and “Circuit” due to its unfamiliarity with them; however, after only a few
turns of validated user clarification, CitySpec memorizes these unseen
knowledge entries and gives correct classifications in similar future cases.
However, in areas like Law, the participant emphasizes the internal
relationships between components and the ‘absolute’ correct classification,
e.g., including the determiner in the final prediction. Thus, the participant
needs to interact with CitySpec continuously to generate the correct
specification, which results in a relatively lower user score.
### 6.3 CitySpec’s Defense Against Malicious Input
To test the effectiveness of the newly introduced shield function, we ask the
participants to provide both malicious and non-malicious inputs during
CitySpec's online learning in smart cities. We collect 178 unique user
clarification entries in total. Among those entries, 147 are considered
malicious and 25 are considered non-malicious by the participants. With the
help of our newly deployed shield function, CitySpec rejects 116 of those
malicious inputs and yields a 0% false-positive rate when dealing with
non-malicious ones. Among the intentionally designed malicious inputs, two
genres draw our attention: phonetic attacks and quantitative attacks. Here we
define phonetic attacks as replacing words in legitimate entries with their
homophones while keeping the semantic shift within a seemingly trustworthy
range. Among all 11 phonetic attacks, 9 succeed. For example, the shield
function fails to abort user clarification when “fifteen” is changed to
“fifty teen”, or “plane ticket” is changed to “plain ticket”. Another genre is
quantitative attacks, which we define as assigning an impractical quantity to
a component, judged by common sense, for example, assigning “7k dB” to “the
noise level at any personal property”. Among 3 quantitative attacks, all 3
succeed. Aside from these two genres, CitySpec with the shield function
reports an 87.23% defense success rate against all those malicious inputs.
In summary, according to our real user case study, CitySpec works effectively
and efficiently not only in smart cities but also in most unseen areas like
Environmental Engineering, Medicine, and Arts, which also demonstrates the
potential future application of CitySpec in other areas. The decrease in both additional
interactions per requirement and time consumed per requirement indicates the
strong usability of CitySpec to users. The newly deployed shield function
helps CitySpec be more robust to most malicious inputs.
## 7 Related Work
Translation Models. Researchers have developed models to translate the natural
language to machine languages in various applications, such as Bash commands
[7], Seq2SQL [8], and Python codes [9]. These translation models benefit from
enormous datasets. Codex was trained on a 159 GB dataset that contains over
100 billion tokens. WikiSQL, which Seq2SQL was trained on, consists of 80,654
pairs of English-SQL conversions. NL2Bash [7] was trained on approximately
10,000 pairs of natural language tasks and their corresponding bash commands.
In contrast, city requirement specification is an under-exploited area with a
very limited number of well-defined requirements. Therefore, existing
translation models do not apply
to our task. This paper develops a data synthesis-based approach to build the
translation model.
Data Synthesis. Data synthesis exploits the patterns in study findings and
synthesizes variations based on those patterns. Data augmentation is a simple
application of data synthesis. Previous augmentation approaches apply tricks
like synonym substitution [43, 44] and blended approaches [45]. In the smart
city scenario, we need new data samples which fit in the smart city context.
Therefore, we extract extra knowledge from smart cities and fully exploit
semantic and syntactic patterns instead of applying straightforward tricks
like chopping, rotating, or zooming. To the best of our knowledge, this paper
is the first work to synthesize smart-city-specific requirements.
Online Learning. Online machine learning mainly deals with the situation when
data becomes available to the machine learning model sequentially after being
deployed. Similar to continual learning, online learning aims to give the
model accumulated knowledge and to improve model performance continuously as
learning samples arrive [46, 47]. Some of the existing papers focus on
developing sophisticated optimization algorithms [48] or exploiting the
differences between new and old samples [49]. However, these papers do not
have a mechanism to detect or prevent adversarial samples online. This paper
develops a two-stage online learning process with online validation against
potential malicious inputs.
Protection against Adversarial Attacks. Work has been done to enhance the
safety of NLP models against textual adversarial attacks. Some of these works
observe that pre-trained language models like BERT or GPT are robust to
adversarial perturbations. For example, Bert-Defense [50] uses BERT and GPT to
reduce confusion and increase fluency in generated samples; [51] uses
pre-trained transformers to distinguish out-of-distribution samples.
However, these pre-trained language models are trained on large corpora.
Without a specific focus, those pre-trained models only offer limited prior
knowledge in detecting malicious attacks in smart city. Other tricks like
resampling are also widely applied and reported to succeed in ensuring system
security [52]. These methods still feed textual inputs to the backend model.
Textual inputs give the attacker more room to tamper with the input in our
interactive and always-learning system. This paper develops a novel shield
model that not only takes the prior knowledge from pre-trained models as a
reference but also enriches that prior knowledge by introducing existing city
requirements. No textual input is fed directly to the shield model; instead,
the inputs are encrypted so that the model is considered immune to direct
adversarial text generation.
## 8 Summary
This paper builds an intelligent assistant system, CitySpec, for requirement
specification in smart cities. CitySpec bridges the gaps between city policy
makers and the monitoring systems. It incorporates city knowledge into the
requirement translation model and adapts to new cities and application domains
through online validation and learning. The evaluation results on real-world
city requirement datasets show that CitySpec is able to support policy makers
in accurately writing and refining their requirements and outperforms the
baseline approaches. In future work, we plan to have CitySpec used by real
city policy makers, but this is outside the scope of this paper.
## Acknowledgment
This work was funded, in part, by NSF CNS-1952096.
## References
* [1] Z. Chen, I. Li, H. Zhang, S. Preum, J. A. Stankovic, M. Ma, Cityspec: An intelligent assistant system for requirement specification in smart cities, in: 2022 IEEE International Conference on Smart Computing (SMARTCOMP), 2022, pp. 32–39. doi:10.1109/SMARTCOMP55677.2022.00020.
* [2] J. Gao, J. Lanchantin, M. L. Soffa, Y. Qi, Black-box generation of adversarial text sequences to evade deep learning classifiers, in: 2018 IEEE Security and Privacy Workshops (SPW), IEEE, 2018, pp. 50–56.
* [3] M. Ma, J. A. Stankovic, L. Feng, Toward formal methods for smart cities, Computer 54 (9) (2021) 39–48.
* [4] M. Ma, E. Bartocci, E. Lifland, J. A. Stankovic, L. Feng, A novel spatial–temporal specification-based monitoring system for smart cities, IEEE Internet of Things Journal 8 (15) (2021) 11793–11806.
* [5] M. Ma, J. A. Stankovic, L. Feng, Cityresolver: a decision support system for conflict resolution in smart cities, in: 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), IEEE, 2018, pp. 55–64.
* [6] M. Ma, J. Stankovic, E. Bartocci, L. Feng, Predictive monitoring with logic-calibrated uncertainty for cyber-physical systems, ACM Transactions on Embedded Computing Systems (TECS) 20 (5s) (2021) 1–25.
* [7] Q. Fu, Z. Teng, J. White, D. C. Schmidt, A transformer-based approach for translating natural language to bash commands, in: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, 2021, pp. 1245–1248.
* [8] V. Zhong, C. Xiong, R. Socher, Seq2sql: Generating structured queries from natural language using reinforcement learning, arXiv preprint arXiv:1709.00103 (2017).
* [9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., Evaluating large language models trained on code, arXiv preprint arXiv:2107.03374 (2021).
* [10] NYC.gov, Emissions from transportation, nyc environment protection, 2019.
* [11] S. Matteo, J. Brannan, A local law to amend the administrative code of the city of new york, in relation to restricting the use of bus lanes by sight-seeing buses, in: Restricting the use of bus lanes by sight-seeing buses, The New York City Council, 2019.
* [12] NYC Environment Protection, Use of heating oil remaining in tanks, The city of New York, 2019.
* [13] United States Environmental Protection Agency, Residential energy efficiency, in: Energy Resources for State and Local Governments, The city of New York, 2019.
* [14] San Francisco, San Francisco noise pollution construction, in: San Francisco American Legal Publishing, 2019.
* [15] Hong Kong, Guide to indoor air quality management in hong kong regional offices and public places, in: Guide to Indoor Air Quality Management, 2019.
* [16] NYC.gov, Stopping, standing or parking prohibited in specified places, in: New York Public Law, 2016.
* [17] Beijing Emergency Agency, Pre-hospital medical emergency regulations, 2016.
* [18] Beijing Government, Safety management for kindergarten, primary and secondary school, 2016.
* [19] District of Columbia Municipal Regulations, D. of Columbia Register, Air quality - motor vehicular pollutants, lead, odors, and nuisance pollutants, 2016.
* [20] LA Sec 111.03. Minimum Ambient Noise Level, Official city of los angeles municipal code, 2016.
* [21] DC.gov, Notice of draft title V permit public comment period – architect of the capitol, library buildings and grounds jurisdiction, library of congress, 2019.
* [22] tn.gov, Code of ordinances, metro government of nashville and davidson county, tn, 2022.
* [23] tn.gov, Code of ordinances, city of memphis, tennessee, 2022.
* [24] ga.gov, Code of ordinances, city of atlanta, georgia, 2023.
* [25] F. J. Damerau, A technique for computer detection and correction of spelling errors, Communications of the ACM 7 (3) (1964) 171–176.
* [26] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).
* [27] A. X. Chang, C. D. Manning, Sutime: A library for recognizing and normalizing time expressions., in: Lrec, Vol. 3735, 2012, p. 3740.
* [28] B. E. Chapman, S. Lee, H. P. Kang, W. W. Chapman, Document-level classification of ct pulmonary angiography reports based on an extension of the context algorithm, Journal of biomedical informatics 44 (5) (2011) 728–737.
* [29] J. Y. Yoo, Y. Qi, Towards improving adversarial training of NLP models, in: Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, 2021, pp. 945–956. doi:10.18653/v1/2021.findings-emnlp.81.
URL https://aclanthology.org/2021.findings-emnlp.81
* [30] S. Garg, G. Ramakrishnan, BAE: BERT-based adversarial examples for text classification, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 6174–6181. doi:10.18653/v1/2020.emnlp-main.498.
URL https://aclanthology.org/2020.emnlp-main.498
* [31] L. Li, R. Ma, Q. Guo, X. Xue, X. Qiu, BERT-ATTACK: Adversarial attack against BERT using BERT, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 6193–6202. doi:10.18653/v1/2020.emnlp-main.500.
URL https://aclanthology.org/2020.emnlp-main.500
* [32] S. Feng, E. Wallace, A. Grissom II, M. Iyyer, P. Rodriguez, J. Boyd-Graber, Pathologies of neural models make interpretations difficult, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 3719–3728. doi:10.18653/v1/D18-1407.
URL https://aclanthology.org/D18-1407
* [33] D. Pruthi, B. Dhingra, Z. C. Lipton, Combating adversarial misspellings with robust word recognition, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 5582–5591. doi:10.18653/v1/P19-1561.
URL https://aclanthology.org/P19-1561
* [34] Y. Zang, F. Qi, C. Yang, Z. Liu, M. Zhang, Q. Liu, M. Sun, Word-level textual adversarial attacking as combinatorial optimization, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 6066–6080. doi:10.18653/v1/2020.acl-main.540.
URL https://aclanthology.org/2020.acl-main.540
* [35] S. Ren, Y. Deng, K. He, W. Che, Generating natural language adversarial examples through probability weighted word saliency, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 1085–1097. doi:10.18653/v1/P19-1103.
URL https://aclanthology.org/P19-1103
* [36] J. Li, S. Ji, T. Du, B. Li, T. Wang, Textbugger: Generating adversarial text against real-world applications, arXiv preprint arXiv:1812.05271 (2018).
* [37] D. Jin, Z. Jin, J. T. Zhou, P. Szolovits, Is bert really robust? a strong baseline for natural language attack on text classification and entailment, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 34, 2020, pp. 8018–8025.
* [38] M. T. Ribeiro, T. Wu, C. Guestrin, S. Singh, Beyond accuracy: Behavioral testing of NLP models with CheckList, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 4902–4912. doi:10.18653/v1/2020.acl-main.442.
URL https://aclanthology.org/2020.acl-main.442
* [39] D. Li, Y. Zhang, H. Peng, L. Chen, C. Brockett, M.-T. Sun, B. Dolan, Contextualized perturbation for textual adversarial attack, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Online, 2021, pp. 5053–5069. doi:10.18653/v1/2021.naacl-main.400.
URL https://aclanthology.org/2021.naacl-main.400
* [40] J. Morris, E. Lifland, J. Y. Yoo, J. Grigsby, D. Jin, Y. Qi, Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020, pp. 119–126.
* [41] S. Wang, L. Thompson, M. Iyyer, Phrase-bert: Improved phrase embeddings from bert with an application to corpus exploration, arXiv preprint arXiv:2109.06304 (2021).
* [42] Z. Chen, I. Li, H. Zhang, S. Preum, J. A. Stankovic, M. Ma, An intelligent assistant for converting city requirements to formal specification, in: 2022 IEEE International Conference on Smart Computing (SMARTCOMP), 2022, pp. 174–176. doi:10.1109/SMARTCOMP55677.2022.00043.
* [43] S. Kobayashi, Contextual augmentation: Data augmentation by words with paradigmatic relations, arXiv preprint arXiv:1805.06201 (2018).
* [44] X. Zhang, J. Zhao, Y. LeCun, Character-level convolutional networks for text classification, Advances in neural information processing systems 28 (2015) 649–657.
* [45] J. Wei, K. Zou, Eda: Easy data augmentation techniques for boosting performance on text classification tasks, arXiv preprint arXiv:1901.11196 (2019).
* [46] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, S. Wermter, Continual lifelong learning with neural networks: A review, Neural Networks 113 (2019) 54–71.
* [47] Z. Chen, B. Liu, Lifelong machine learning, Synthesis Lectures on Artificial Intelligence and Machine Learning 12 (3) (2018) 1–207.
* [48] E. Hazan, et al., Introduction to online convex optimization, Foundations and Trends® in Optimization 2 (3-4) (2016) 157–325.
* [49] R. S. Sutton, A. G. Barto, Reinforcement learning: An introduction, MIT press, 2018.
* [50] Y. Keller, J. Mackensen, S. Eger, BERT-defense: A probabilistic model based on BERT to combat cognitively inspired orthographic adversarial attacks, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Association for Computational Linguistics, Online, 2021, pp. 1616–1629. doi:10.18653/v1/2021.findings-acl.141.
URL https://aclanthology.org/2021.findings-acl.141
* [51] D. Hendrycks, X. Liu, E. Wallace, A. Dziedzic, R. Krishnan, D. Song, Pretrained transformers improve out-of-distribution robustness, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 2744–2751. doi:10.18653/v1/2020.acl-main.244.
URL https://aclanthology.org/2020.acl-main.244
* [52] J. Rusert, P. Srinivasan, Don’t sweat the small stuff, classify the rest: Sample shielding to protect text classifiers against adversarial attacks, in: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Seattle, United States, 2022, pp. 2716–2725. doi:10.18653/v1/2022.naacl-main.195.
URL https://aclanthology.org/2022.naacl-main.195
Collective modes in anisotropic systems
Margaret E. Carrington
Department of Physics and Astronomy, Brandon University,
Brandon, Manitoba R7A 6A9, Canada
Winnipeg Institute for Theoretical Physics, Winnipeg, Manitoba, Canada
Bailey M. Forster
Department of Physics and Astronomy, Brandon University,
Brandon, Manitoba R7A 6A9, Canada
Sofiya Makar
Department of Mathematics, Brandon University,
Brandon, Manitoba R7A 6A9, Canada
We study collective modes in anisotropic plasmas of quarks and gluons using a quasi-particle picture and a hard loop approximation.
We use a general class of anisotropic distribution functions, and we consider chirally asymmetric systems.
We introduce a complete tensor basis to decompose the gluon polarization tensor into a set of nine scalar functions.
We derive and solve the corresponding dispersion equations.
Imaginary modes are particularly important because of their potential influence on plasma dynamics.
We explore in detail their dependence on the chiral chemical potential and the parameters that characterise the anisotropy of the system.
We show that our generalized distributions produce dispersion relations that are much richer in structure than those obtained with a simple one parameter deformation of an isotropic distribution.
In addition, the size and domain of the imaginary solutions are enhanced, relative to those obtained with a one parameter deformation.
Finally, we show that the influence of even a very small chiral chemical potential is significantly magnified when anisotropy is present.
§ INTRODUCTION
We consider a relativistic plasma of charged particles that is not necessarily in equilibrium, but is in the regime where a quasi-particle description is valid.
We use an effective phase-space description in terms of distribution functions that are not assumed thermal or isotropic.
A convenient parametrization for the distribution function of a spheroidally anisotropic system was introduced in [1, 2].
The idea is to introduce one parameter that effectively stretches or squeezes the spherically symmetric isotropic distribution in one direction.
This parametrization is interesting in the context of heavy ion collisions where one expects the parton distribution that is produced by the collision to be squeezed along the beam axis.
Dispersion relations are obtained from the poles of the dressed propagators of the partonic quasi-particles.
One finds that the spectrum of collective excitations is much richer than the spectrum of a system in thermal equilibrium.
It is especially interesting that there are imaginary modes, which are associated with instabilities and could play an important role in the thermalization of the plasma [3, 4, 5, 6, 7, 8].
Anisotropic plasmas have been studied intensively in the context of quark-gluon plasmas, which thermalize earlier than expected, for reasons that are as yet unclear.
Collective modes have been calculated for both gluons [1, 2, 9] and quarks [10].
An ellipsoidal generalization was developed in [11, 12].
These distributions have been used to study various properties of quark-gluon plasma (QGP) including
transport coefficients [13, 14, 15, 16],
heavy quark bound states [17, 18, 19, 20],
and photon and dilepton production [10, 21, 22, 24, 23].
Isotropic plasmas also support imaginary collective modes, if there is a chiral imbalance [25].
Chiral plasmas with spheroidal anisotropy were studied in Ref. [26].
Chiral systems are of interest in a wide variety of different situations in particle physics, nuclear physics, condensed matter physics, and cosmology.
One important example is the chiral magnetic effect (CME) which is the production of a parity violating current in a plasma, in the presence of a magnetic field and an asymmetry between left and right handed fermions [27].
A chirally asymmetric plasma can be described using a chiral chemical potential defined as $\mu_5 \equiv (\mu_R-\mu_L)/2$, where $\mu_L$ and $\mu_R$
are the chemical potentials of the left and right handed fermions.
The CME is particularly interesting in the context of heavy-ion collisions because it has been argued that the quark gluon plasma that is produced in non-central collisions may contain regions where $\mu_5$ is locally finite [28, 30, 31, 32, 33].
Instabilities in chiral plasmas have also been studied in electroweak theory at large lepton chemical potential [34] and in the context of the early universe at $T\gg\mu_5$ [35].
Collective modes can be studied using either kinetic theory, or effective field theories.
The equivalence of the kinetic theory method and a hard thermal loop (HTL) effective theory approach, for isotropic and chirally symmetric systems, was shown in [36, 37, 38].
For anisotropic chirally symmetric systems, the connection between kinetic theory and a generalization of the HTL effective theory which is called a hard loop (HL) effective theory has also been established [39, 40].
Isotropic chiral kinetic theories have been developed in [41, 42, 43, 44, 45], and applied to a variety of physical problems, see for example [46, 47, 48].
In this paper we use anisotropic distributions that are more general than those used in previous works, and we study the spectrum of collective modes in anisotropic plasmas with non-zero chiral chemical potential.
The method can be applied generally to either a QED plasma of ultrarelativistic electrons and
positrons or a QGP.
We use natural units where $\hbar = c =1$. The indices $i,j,k = 1, 2, 3$ and $\mu, \nu = 0, 1, 2, 3$ label, respectively, the Cartesian spatial coordinates and those of Minkowski space. Our metric is mostly minus $g_{\mu\nu} = (1,-1,-1,-1)_{\rm diag}$.
We use capital letters for four-vectors so that, for example, $P^2=p_0^2-\vec p\cdot\vec p = p_0^2-p^2$. We define the unit vector $\hat p = \vec p/|\vec p|$ and we will also use $\hat p_0 = p_0/p$. Real solutions to dispersion equations will be denoted $\omega(\vec p)$ and imaginary solutions are written $i\gamma(\vec p)$.
§ ISOTROPIC FORMALISM
In vacuum the photon polarization tensor can be written in terms of one scalar function using a transverse projection operator as $\Pi^{\mu\nu}(p_0,\vec p) = (P^2 g^{\mu\nu}-P^\mu P^\nu)\,\Pi(p_0,\vec p)$.
At finite temperature the rest frame of the heat bath, which we define with the four-vector $n^\mu = (1,0,0,0)$, breaks Lorentz invariance.
Time-like axial gauge (TAG) is particularly useful at finite temperature, because the gauge condition is imposed in the heat bath rest frame.
In TAG we only have to consider the spatial components of the propagator, for which the Dyson equation has the form
\[
\big(D^{ij}\big)^{-1}(p_0,\vec p) = \big(D_0^{ij}\big)^{-1}(p_0,\vec p) - \Pi^{ij}(p_0,\vec p) = P^2\delta^{ij} + p^2\hat p^i\hat p^j - \Pi^{ij}(p_0,\vec p)\,.
\]
In an equilibrated chirally symmetric plasma, the polarization tensor is symmetric and has two independent components, one transverse and the other longitudinal with respect to the three-vector $\vec p$.
An isotropic system with finite chiral chemical potential was first considered in [25], and we review the formalism presented in that paper below.
In the presence of finite $\mu_5$ the polarization tensor has an additional asymmetric component and can be written
\[
\Pi^{ij}(p_0,\vec p) = (\delta^{ij}-\hat p^i\hat p^j)\,\Pi_T(p_0,\vec p) + \hat p^i\hat p^j\,\Pi_L(p_0,\vec p) + i\epsilon^{ijm}\hat p^m\,\Pi_A(p_0,\vec p)\,.
\]
The propagator, obtained from inverting the Dyson equation, is
\[
D^{ij}(p_0,\vec p) = (\delta^{ij}-\hat p^i\hat p^j)\,\frac{P^2-\Pi_T(p_0,\vec p)}{\big(P^2-\Pi_T(p_0,\vec p)\big)^2 - \Pi_A^2(p_0,\vec p)} + \hat p^i\hat p^j\,\frac{1}{p_0^2-\Pi_L(p_0,\vec p)} + i\epsilon^{ijm}\hat p^m\,\frac{\Pi_A(p_0,\vec p)}{\big(P^2-\Pi_T(p_0,\vec p)\big)^2 - \Pi_A^2(p_0,\vec p)}\,.
\]
The dispersion relations are obtained from the dispersion equations which give the poles of the retarded propagator. From equation (<ref>) we have that the dispersion equations for an isotropic chiral plasma are
\[
\begin{aligned}
p_0^2-\Pi_L(p_0,\vec p) &= 0\,,\\
P^2-\big(\Pi_T(p_0,\vec p) + \Pi_A(p_0,\vec p)\big) &= 0\,,\\
P^2-\big(\Pi_T(p_0,\vec p) - \Pi_A(p_0,\vec p)\big) &= 0\,.
\end{aligned}
\]
Throughout this paper we refer always to the retarded polarization tensor, and we will not include any subscripts to indicate this.
Our notation for the equilibrium distribution function is (see equation (<ref>))
\[
n(k) = n^+(k) = \frac{1}{e^{\beta(k-\mu)}+1} \quad\text{and}\quad \bar n(k) = n^-(-k) = \frac{1}{e^{\beta(k+\mu)}+1}\,.
\]
The distributions for right/left handed particle/anti-particles are written $n_R(k)$, $\bar n_R(k)$, $n_L(k)$ and $\bar n_L(k)$, where, for example, $n_R(k) = n(k)\big|_{\mu=\mu_R}$.
We calculate the 1-loop photon polarization tensor in the HTL approximation (see Appendix <ref> for details).
The contribution from right handed fermions is
\[
\begin{aligned}
\Pi^{ij}_R(p_0,\vec p) &= \Pi^{ij}_{R\,\rm even}(p_0,\vec p) + \Pi^{ij}_{R\,\rm odd}(p_0,\vec p)\,,\\
\Pi^{ij}_{R\,\rm even}(p_0,\vec p) &= 2g^2\int\frac{d^3k}{(2\pi)^3}\,\frac{n_R(k)+\bar n_R(k)}{k}\left(\delta^{ij}+\frac{v^i p^j + p^i v^j}{P\cdot V + i\epsilon} - \frac{P^2 v^i v^j}{(P\cdot V + i\epsilon)^2}\right),\\
\Pi^{ij}_{R\,\rm odd}(p_0,\vec p) &= i g^2 P^2 \epsilon^{ijm}\int\frac{d^3k}{(2\pi)^3}\,\frac{n_R(k)-\bar n_R(k)}{k^2}\,\frac{p_0 v^m - p^m}{(P\cdot V + i\epsilon)^2}\,,
\end{aligned}
\]
and the corresponding expression for left handed fermions is obtained from the transformation $\mu_R\to \mu_L$.
We use $P\cdot V = p_0-\vec p\cdot\vec v$ and $\vec k/\sqrt{k^2+m^2} \approx \hat k \equiv \vec v$, since fermion masses can be neglected in the hard loop approximation.
It is straightforward to modify the HTL integrals to the case of a QCD plasma by changing the QED coupling constant to the QCD one, and using [49]
\[
\begin{aligned}
n(k)+\bar n(k) &\to \frac{N_f}{2}\big(n_q(k)+\bar n_q(k)\big) + N_c\, n_g(k)\,,\\
n(k)-\bar n(k) &\to \frac{N_f}{2}\big(n_q(k)-\bar n_q(k)\big)\,,
\end{aligned}
\]
where the subscripts $q$ and $g$ refer to quark (or anti-quark) and gluon distribution functions.
Equations (<ref>, <ref>) can be rewritten in a form that is sometimes more useful by integrating by parts.
A straightforward calculation gives
\[
\begin{aligned}
\Pi^{ij}_{R\,\rm even}(p_0,\vec p) &= -2g^2\int\frac{d^3k}{(2\pi)^3}\,\frac{\partial\big(n_R(k)+\bar n_R(k)\big)}{\partial k^m}\, v^i\left(\delta^{jm}+\frac{v^j p^m}{P\cdot V}\right),\\
\Pi^{ij}_{R\,\rm odd}(p_0,\vec p) &= -i g^2 \epsilon^{ijm}\int\frac{d^3k}{(2\pi)^3}\,\frac{1}{k}\,\frac{\partial}{\partial k^l}\big(n_R(k)-\bar n_R(k)\big)\left(p_0\,\delta^{lm} + \frac{(p_0 v^m - p^m)p^l}{P\cdot V}\right).
\end{aligned}
\]
From the definitions in equations (<ref>, <ref>) one finds
\[
\begin{aligned}
\Pi^{ij}(p_0,\vec p) &= \tfrac{1}{2}\big(\Pi^{ij}_R(p_0,\vec p) + \Pi^{ij}_L(p_0,\vec p)\big)\\
&= \tfrac{1}{2}\big(\Pi^{ij}_{R\,\rm even} + \Pi^{ij}_{L\,\rm even}\big) + \tfrac{1}{2}\big(\Pi^{ij}_{R\,\rm odd} + \Pi^{ij}_{L\,\rm odd}\big)\\
&= \tfrac{1}{2}\big(\Pi^{ij}_{R\,\rm even} + \Pi^{ij}_{R\,\rm even}\big|_{\mu_R\to\mu_L}\big) + \tfrac{1}{2}\big(\Pi^{ij}_{R\,\rm odd} + \Pi^{ij}_{R\,\rm odd}\big|_{\mu_R\to-\mu_L}\big)\,.
\end{aligned}
\]
The functions $\Pi_T(p_0,\vec p)$, $\Pi_L(p_0,\vec p)$ and $\Pi_A(p_0,\vec p)$ in equations (<ref>, <ref>) are calculated by applying the projection operators
${\cal P}_T^{ij} = (\delta^{ij}-\hat p^i\hat p^j)/2$, ${\cal P}_L^{ij} = \hat p^i\hat p^j$, and ${\cal P}_A^{ij} = -i\epsilon^{ijm}\hat p^m/2$ to
$\Pi^{ij}(p_0,\vec p)$.
The resulting expressions for the three scalar components of the polarization tensor are
\[
\begin{aligned}
\Pi_T(p_0,\vec p) &= \frac{m_D^2\, p_0^2}{2p^2}\left(1 - \frac{P^2}{2p_0 p}\ln\frac{p_0+p+i\epsilon}{p_0-p+i\epsilon}\right),\\
\Pi_L(p_0,\vec p) &= -\frac{m_D^2\, p_0^2}{p^2}\left(1 - \frac{p_0}{2p}\ln\frac{p_0+p+i\epsilon}{p_0-p+i\epsilon}\right),\\
\Pi_A(p_0,\vec p) &= -\frac{g^2\mu_5 P^2}{2\pi^2 p}\left(1 - \frac{p_0}{2p}\ln\frac{p_0+p+i\epsilon}{p_0-p+i\epsilon}\right),
\end{aligned}
\]
where we have used $\mu_5 = (\mu_R-\mu_L)/2$ and defined the Debye mass parameter
\[
m_D^2 = 2g^2\int\frac{d^3k}{(2\pi)^3}\,\frac{1}{k}\Big(\big[n(k)+\bar n(k)\big]_{\mu_R} + \big[n(k)+\bar n(k)\big]_{\mu_L}\Big) = g^2\left(\frac{T^2}{3} + \frac{1}{2\pi^2}\big(\mu_R^2+\mu_L^2\big)\right).
\]
The transverse and longitudinal components of the polarization tensor are the familiar HTL results, and in the chirally symmetric limit $\mu_R = \mu_L$ we have $\mu_5=0$ and therefore $\Pi_A(p_0,\vec p)=0$.
Throughout the rest of this paper we use the shorthand notation $\hat\mu_5 = g^2\mu_5/(2\pi^2)$.
In all of our numerical calculations we set the Debye mass to one, which is equivalent to defining all dimensionful quantities in units of the Debye mass.
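As a simple numerical cross-check of the Debye mass relation above, the following Python sketch (our own illustration, not part of the paper's calculation; the coupling and chemical potentials are arbitrary test values) integrates the distribution functions directly and compares the result with the closed form.

import numpy as np
from scipy.integrate import quad

def debye_mass_sq(g, T, muR, muL):
    # m_D^2 = 2 g^2 \int d^3k/(2 pi)^3 (1/k) [ (n + nbar)_{muR} + (n + nbar)_{muL} ]
    #       = (g^2/pi^2) \int_0^infty dk k [ ... ]
    n = lambda k, mu: 1.0 / (np.exp((k - mu) / T) + 1.0)
    nbar = lambda k, mu: 1.0 / (np.exp((k + mu) / T) + 1.0)
    integrand = lambda k: k * (n(k, muR) + nbar(k, muR) + n(k, muL) + nbar(k, muL))
    val, _ = quad(integrand, 0.0, 50.0 * T)   # the integrand decays exponentially
    return g**2 / np.pi**2 * val

g, T, muR, muL = 1.0, 1.0, 0.4, 0.1
print(debye_mass_sq(g, T, muR, muL))
print(g**2 * (T**2 / 3 + (muR**2 + muL**2) / (2 * np.pi**2)))   # closed form quoted above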
The dispersion relations are obtained by solving equations (<ref>-<ref>)
where the functions $\Pi_L(p_0,\vec p)$, $\Pi_T(p_0,\vec p)$ and $\Pi_A(p_0,\vec p)$ are given in equation (<ref>).
There are pure real and pure imaginary solutions that appear in positive and negative pairs.
Equation (<ref>) has two real solutions which are just the usual longitudinal HTL modes, and are written $\pm\omega_L(p)$.
Equation (<ref>) also has two real solutions, which we denote $\pm\omega_+(p)$.
Equation (<ref>) has two real solutions, which we call $\pm\omega_-(p)$, and two extra solutions that appear at values of the wave vector below the critical value $p_{\rm crit}=\hat\mu_5$.
These solutions are pure imaginary and denoted $\pm i\gamma_-(p)$.
The critical wave vector at which the imaginary solutions appear can be found with a Nyquist analysis (see Appendix <ref> for details).
When $\hat\mu_5=0$ we have $\omega_+(p) = \omega_-(p) = \omega_T(p)$ where $\omega_T(p)$ is the transverse HTL mode.
In figure <ref> we show solutions to equations (<ref> - <ref>) for different values of the parameter $\hat\mu_5$.
Real ($\omega$) and imaginary ($\gamma$) solutions to the dispersion equations (<ref>-<ref>) for different values of $\hat\mu_5 = g^2\mu_5/(2\pi^2)$. The units are defined by setting $m_D=1$. The diagonal gray line in the top figure is the light-cone.
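To illustrate how the imaginary branch can be located in practice, here is a minimal Python sketch (our own construction; function names, the bisection bracket and tolerance are arbitrary choices) that evaluates the isotropic functions entering the third dispersion equation with $m_D=1$ at pure imaginary frequency $p_0=i\gamma$, where they are real, and finds $\gamma_-(p)$ for $p<\hat\mu_5$ by bisection.

import numpy as np

def htl_log(p0, p):
    # ln((p0 + p)/(p0 - p)); p0 may be complex, e.g. pure imaginary for the unstable branch
    return np.log((p0 + p) / (p0 - p))

def Pi_T(p0, p, mD=1.0):
    P2 = p0**2 - p**2
    return mD**2 * p0**2 / (2 * p**2) * (1 - P2 / (2 * p0 * p) * htl_log(p0, p))

def Pi_A(p0, p, mu5hat):
    P2 = p0**2 - p**2
    return -mu5hat * P2 / p * (1 - p0 / (2 * p) * htl_log(p0, p))

def f_minus(gamma, p, mu5hat):
    # dispersion function P^2 - (Pi_T - Pi_A) at p0 = i*gamma; real for pure imaginary p0
    p0 = 1j * gamma
    return np.real(p0**2 - p**2 - (Pi_T(p0, p) - Pi_A(p0, p, mu5hat)))

def gamma_minus(p, mu5hat, gmax=5.0, tol=1e-10):
    # bisection for the imaginary mode, expected to exist only for p < mu5hat
    a, b = 1e-8, gmax
    if f_minus(a, p, mu5hat) * f_minus(b, p, mu5hat) > 0:
        return None                        # no sign change: no imaginary mode in (a, gmax)
    while b - a > tol:
        m = 0.5 * (a + b)
        if f_minus(a, p, mu5hat) * f_minus(m, p, mu5hat) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

print(gamma_minus(0.05, mu5hat=0.3))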
§ ANISOTROPIC FORMALISM
§.§ Distribution function
Next we consider the situation that the momentum distribution is not isotropic.
A simple way to account for momentum anisotropy is to replace the distribution function in equations (<ref>, <ref>) using $n(k) \to n(\vec k) = C_\xi \, n\big(k H_\xi(\vec v)\big)$ where $\vec v = \vec k/k$, and similarly for $\bar n(k)$. The subscript $\xi$ indicates dependence on a set of anisotropy parameters that can be used to construct a distribution that is deformed relative to the isotropic one. The factor $C_\xi$ is a normalization and can be defined in different ways depending on the calculation being done. Our choice is explained at the end of this section.
We can construct a completely general expression for the function $H_\xi(\vec v)$ as a sum of terms that are products of anisotropy parameters and dot products of the vector $\vec v$ with two perpendicular unit vectors, which we will write $\hat n_3$ and $\hat n_1$.
We are primarily interested in distributions of partons produced in heavy-ion collisions. In this context, we will take $\hat n_3$ to define the beam axis and $\hat n_1$ to give the direction of transverse anisotropy.
We restrict to functions that satisfy the condition $H_\xi(\vec v) = H_\xi(-\vec v)$ and use an expression of the form
\[
\begin{aligned}
H^2_\xi(\vec v) ={}& \xi_0 + \xi_2(\hat n_1\cdot\vec v)^2 + \xi_9(\hat n_3\cdot\vec v)^2 + \xi_6(\hat n_1\cdot\vec v)(\hat n_3\cdot\vec v) + \xi_4(\hat n_1\cdot\vec v)^4 \\
&+ \xi_8(\hat n_1\cdot\vec v)^3(\hat n_3\cdot\vec v) + \xi_{11}(\hat n_1\cdot\vec v)^2(\hat n_3\cdot\vec v)^2 + \xi_{13}(\hat n_1\cdot\vec v)(\hat n_3\cdot\vec v)^3 + \xi_{14}(\hat n_3\cdot\vec v)^4\,.
\end{aligned}
\]
When $\xi_0=1$ and $\xi_{i\ne 0}=0$, we have $H^2_\xi(\vec v)=1$, and the distribution is isotropic.
For an arbitrary choice of the anisotropy parameters $\xi_i$, the isotropic distribution is expanded in the direction of $\vec v$ if $H_\xi(\vec v)<1$, and contracted if $H_\xi(\vec v)>1$.
The values of the anisotropy parameters must be chosen so that $H^2_\xi(\vec v)$ is positive for all orientations of the vector $\vec v$,
which is equivalent to the requirement that $H_\xi(\vec v)$, and therefore the argument of the distribution function, is real and positive.
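A small sketch of this parametrization (our own variable names; the anisotropy parameters are passed as a dictionary), which also performs a crude positivity check of $H^2_\xi$ by sampling directions on the unit sphere:

import numpy as np

def H2(v, xi, n1=np.array([1.0, 0, 0]), n3=np.array([0, 0, 1.0])):
    # H_xi^2(v); xi is a dictionary of anisotropy parameters, missing entries default to zero
    a, b = np.dot(n1, v), np.dot(n3, v)
    return (xi.get('xi0', 1.0) + xi.get('xi2', 0.0) * a**2 + xi.get('xi9', 0.0) * b**2
            + xi.get('xi6', 0.0) * a * b + xi.get('xi4', 0.0) * a**4
            + xi.get('xi8', 0.0) * a**3 * b + xi.get('xi11', 0.0) * a**2 * b**2
            + xi.get('xi13', 0.0) * a * b**3 + xi.get('xi14', 0.0) * b**4)

def is_positive(xi, n_samples=20000, seed=0):
    # crude positivity check of H^2 by sampling random directions on the unit sphere
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return all(H2(vi, xi) > 0 for vi in v)

xi6 = {'xi0': 1, 'xi4': 20, 'xi9': 10, 'xi11': 20, 'xi14': 60}   # distribution 6 of table <ref>
print(is_positive(xi6))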
In section <ref> we will show that our expression (<ref>) is equivalent to including spherical harmonics up to second order.
Equation (<ref>) includes the parametrizations used in several previous calculations.
In the original work of Ref. [1] the authors used $\xi_0=1$ and $\xi_9\in(-1,\infty)$ with all other parameters set to zero.
The chosen value of $\xi_9$ allows one to consider either a slightly prolate momentum distribution ($-1<\xi_9<0$), which is elongated along the beam axis, or an arbitrarily oblate distribution ($\xi_9>0$), which is squeezed in the direction of the beam axis.
In the calculations of Ref. [11, 12], where ellipsoidal asymmetry was included for the first time,
the authors used $\xi_0=1$ and various values of $\xi_2$ and $\xi_9$ with all other parameters zero.
In Ref. [9] the authors considered a one parameter deformation using $\xi_0=1+\sigma$
and $\xi_9=-\sigma$ with $\sigma\in(-1,\infty)$. The resulting distribution is slightly oblate for $-1<\sigma<0$ and prolate to any degree for $\sigma>0$.
Our definition (<ref>) can be extended by including terms with higher order products of the scalar products $(\hat n_1\cdot\vec v)$ and $(\hat n_3\cdot\vec v)$.
The only restriction we impose, besides positivity, is the condition $H_\xi(\vec v) = H_\xi(-\vec v)$. This means there are no terms where the sum of the exponents is odd, like for example $(\hat n_3\cdot\vec v)(\hat n_1\cdot\vec v)^2$.
The reason we require that $H_\xi(\vec v)$ is even under the transformation $\vec v \to -\vec v$ is as follows.
We said above that when general distributions are considered the polarization tensor is obtained from equations (<ref>, <ref>) by making the replacement $n(k) \to n(\vec k) = C_\xi \, n\big(k H_\xi(\vec v)\big)$. In fact, there is an extra contribution to the anisotropic version of equation (<ref>) of the form
\[
\Pi^{ij}_{\rm extra}(p_0,\vec p) \sim g^2\int\frac{d^3k}{(2\pi)^3}\,\big(n_R(\vec k)+\bar n_R(\vec k)\big)\left(\frac{v^i v^j}{p_0 - \vec p\cdot\vec v + i\epsilon} - \frac{v^i v^j}{p_0 + \vec p\cdot\vec v + i\epsilon}\right)
\]
that integrates to zero for any distribution that is even under the transformation $\vec k \to -\vec k$.
If the condition $H_\xi(\vec v) = H_\xi(-\vec v)$ is not satisfied, equation (<ref>) would produce a non-zero contribution to the polarization tensor that dominates over the HL terms, and is not present in the result obtained from semiclassical kinetic theory (which is equivalent to the HL expression).
We will calculate the anisotropic polarization tensor using equations (<ref>, <ref>) with the replacement $n(k) \to C_\xi \,n\left(k H_\xi(\vec v)\right)$ and similarly for $\bar n(k)$.
We define
$\tilde k = k H_\xi(\vec v)$
and perform a straightforward change of variable so that the integral over $\vec k$ is written as the product of an integral over the new variable $\tilde k$ and a factored integral involving the angular variables.
Equations (<ref>, <ref>) become
\[
\begin{aligned}
\Pi^{ij}_{R\,\rm even}(p_0,\vec p) &= (m_D^2)_R\, C_\xi\int\frac{d\Omega}{4\pi}\,\frac{1}{H_\xi^4(\vec v)}\, v^i M^l(\Omega)\left(\delta^{jl} + \frac{p^l v^j}{P\cdot V + i\epsilon}\right),\\
\Pi^{ij}_{R\,\rm odd}(p_0,\vec p) &= \frac{i g^2\mu_R}{2\pi^2}\,\epsilon^{ijm}\, p\, C_\xi\int\frac{d\Omega}{4\pi}\,\frac{1}{H_\xi^3(\vec v)}\, M^l(\Omega)\left(p_0\,\delta^{lm} + \frac{(p_0 v^m - p^m)p^l}{P\cdot V + i\epsilon}\right),
\end{aligned}
\]
where we have defined
\[
M^l(\Omega) = \frac{1}{2k}\,\frac{\partial\big(k^2 H_\xi^2(\vec v)\big)}{\partial k^l}
\]
and used $(m^2_D)_R = g^2(T^2/3+\mu_R^2/\pi^2)$.
The full expression for $\Pi^{ij}$ is obtained using equation (<ref>), as in the isotropic case.
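The angular function $M^l(\Omega)$ can be evaluated numerically, for instance by finite differences: since $k^2H_\xi^2(\vec v)$ is homogeneous of degree two in $\vec k$, the result depends only on the direction and we may set $|\vec k|=1$. A sketch (assuming the H2 helper from the sketch above; the step size h is an arbitrary choice, and an analytic gradient would of course be faster):

import numpy as np

def M_vector(v, xi, h=1e-6):
    # M^l(Omega) = (1/2k) d(k^2 H^2(k/|k|))/dk^l by central differences at |k| = 1
    def g(kvec):
        kk = np.linalg.norm(kvec)
        return kk**2 * H2(kvec / kk, xi)
    k = np.asarray(v, dtype=float)
    out = np.zeros(3)
    for l in range(3):
        dk = np.zeros(3)
        dk[l] = h
        out[l] = (g(k + dk) - g(k - dk)) / (2 * h)
    return 0.5 * out

print(M_vector([0.0, 0.0, 1.0], {'xi0': 1, 'xi9': 10}))   # spheroidal case: expect (0, 0, 11)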
We close this section by explaining how we define the function of the anisotropy parameters which we call $C_\xi$, which can be thought of as a normalization of the anisotropic distribution function.
This factor is determined by enforcing the condition
\[
\int\frac{d^3k}{(2\pi)^3}\,\frac{n_{\mu=0}(k)}{k} \equiv C_\xi\int\frac{d^3k}{(2\pi)^3}\,\frac{1}{k}\, n_{\mu=0}\big(k H_\xi(\vec v)\big)\,,
\]
from which we obtain
\[
C_\xi^{-1} = \int\frac{d\Omega}{4\pi}\,\frac{1}{H^2_\xi(\vec v)}\,.
\]
This choice of normalization ensures that, when the chemical potentials are zero, the mass parameter in equation (<ref>) is independent of the anisotropy parameters.
At zero chemical potential the entire spectrum of collective modes depends on this one scale (since we have assumed massless partons - see under equation (<ref>)), and it is therefore natural to adopt a normalization that leaves the Debye mass invariant under a change of the anisotropy parameters.
We also note that using equations (<ref>, <ref>, <ref>) it is easy to see that the even part of the polarization tensor is unchanged when the complete set of anisotropy parameters is scaled uniformly: $\xi_i \to \Lambda \xi_i$ where $\Lambda$ is any positive constant. This means that when we compare the spectrum of collective modes obtained from different choices of anisotropy parameters, we are comparing the effects of a specific deformation, and not the influence of an overall change of scale.
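A sketch of the normalization integral above (our own discretization: Gauss-Legendre in $x'=\cos\theta'$ and a uniform grid in $\phi'$; it assumes the H2 helper defined in the earlier sketch):

import numpy as np

def C_xi(xi, nx=64, nphi=64):
    # C_xi^{-1} = \int dOmega/(4 pi) 1/H^2
    xs, ws = np.polynomial.legendre.leggauss(nx)
    phis = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
    inv = 0.0
    for x, w in zip(xs, ws):
        s = np.sqrt(1.0 - x**2)
        for ph in phis:
            v = np.array([s * np.cos(ph), s * np.sin(ph), x])
            inv += w * (2 * np.pi / nphi) / H2(v, xi)
    return 4 * np.pi / inv

print(C_xi({'xi0': 1.0}))   # isotropic case: returns 1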
§.§ Projection operators
To decompose our general expression for the $3\times 3$ polarization tensor we need a complete set
of nine projection operators. This can be done most conveniently using three normalized unit vectors that are related to the plasmon momentum $\vec p$ and the two anisotropy vectors $\hat n_1$ and $\hat n_3$.
From these three-vectors we construct three ortho-normal vectors which we call $(\hat p, ~n_f,~m_F)$. We define
\[
\begin{aligned}
\hat p &= \vec p/p\,,\\
n_f &= \tilde n_f/\sqrt{\tilde n_f\cdot\tilde n_f} \quad\text{with}\quad \tilde n_f = \hat n_3 - (\hat n_3\cdot\hat p)\,\hat p\,,\\
m_F &= \tilde m_F/\sqrt{\tilde m_F\cdot\tilde m_F} \quad\text{with}\quad \tilde m_F = \tilde m_f - (n_f\cdot\tilde m_f)\, n_f \quad\text{and}\quad \tilde m_f = \hat n_1 - (\hat n_1\cdot\hat p)\,\hat p\,.
\end{aligned}
\]
Using these unit vectors the projection operators are defined as
\[
\begin{aligned}
P_1^{ij} &= m_F^i m_F^j\,, & P_2^{ij} &= \hat p^i\hat p^j\,, & P_3^{ij} &= n_f^i n_f^j\,,\\
P_4^{ij} &= \hat p^i n_f^j + n_f^i\hat p^j\,, & P_5^{ij} &= \hat p^i m_F^j + m_F^i\hat p^j\,, & P_6^{ij} &= n_f^i m_F^j + m_F^i n_f^j\,,\\
P_7^{ij} &= n_f^i\hat p^j - \hat p^i n_f^j\,, & P_8^{ij} &= n_f^i m_F^j - m_F^i n_f^j\,, & P_9^{ij} &= \hat p^i m_F^j - m_F^i\hat p^j\,.
\end{aligned}
\]
We note that $\hat p^i \hat p^j + n_f^i n_f^j + m_F^i m_F^j = \delta^{ij}$ and therefore we can write
\[
\begin{aligned}
\Pi^{ij} &= \pi_1(\delta^{ij}-\hat p^i\hat p^j) + \pi_2 P_2^{ij} + (\pi_3-\pi_1) P_3^{ij} + \pi_4 P_4^{ij} + \pi_5 P_5^{ij} + \pi_6 P_6^{ij}\\
&\equiv \alpha(\delta^{ij}-\hat p^i\hat p^j) + \beta P_2^{ij} + \gamma P_3^{ij} + \delta P_4^{ij} + \lambda P_5^{ij} + \tau P_6^{ij}\,.
\end{aligned}
\]
For future reference we write down the operators that project out each component of the polarization tensor:
\[
P_\alpha^{ij}=P_1^{ij}\,,\quad P_\beta^{ij}=P_2^{ij}\,,\quad P_\delta^{ij}=\tfrac{1}{2}P_4^{ij}\,,\quad P_\lambda^{ij}=\tfrac{1}{2}P_5^{ij}\,,\quad P_\tau^{ij}=\tfrac{1}{2}P_6^{ij}\,.
\]
They satisfy conditions of the form $P^{ij}_m P^{ji}_n = {\cal T}_{mn}$ where the 9$\times$9 tensor ${\cal T}$ is
\[
{\cal T} = \begin{pmatrix}
2P_1 & 0 & 0 & 0 & P_5 & P_6 & 0 & P_8 & P_9 \\
0 & 2P_2 & 0 & P_4 & P_5 & 0 & P_7 & 0 & P_9 \\
0 & 0 & 2P_3 & P_4 & 0 & P_6 & P_7 & P_8 & 0 \\
0 & P_4 & P_4 & 2(P_2{+}P_3) & P_6 & P_5 & 0 & P_9 & P_8 \\
P_5 & P_5 & 0 & P_6 & 2(P_1{+}P_2) & P_4 & P_8 & P_7 & 0 \\
P_6 & 0 & P_6 & P_5 & P_4 & 2(P_1{+}P_3) & -P_9 & 0 & -P_7 \\
0 & P_7 & P_7 & 0 & P_8 & -P_9 & -2(P_2{+}P_3) & -P_5 & P_6 \\
P_8 & 0 & P_8 & P_9 & P_7 & 0 & -P_5 & -2(P_1{+}P_3) & -P_4 \\
P_9 & P_9 & 0 & P_8 & 0 & -P_7 & P_6 & -P_4 & -2(P_1{+}P_2)
\end{pmatrix}.
\]
The projectors in equation (<ref>) form a complete basis, and therefore the polarization tensor can be decomposed as
\[
\Pi^{ij} = \pi_1 P_1^{ij}+\pi_2 P_2^{ij}+\pi_3 P_3^{ij}+\pi_4 P_4^{ij}+\pi_5 P_5^{ij}+\pi_6 P_6^{ij}+i\pi_7 P_7^{ij}+i\pi_8 P_8^{ij}+i\pi_9 P_9^{ij}
\]
where we have omitted the functional arguments to shorten the notation.
A decomposition that is related to the first six of our projection operators was defined in Ref. [50].
The last three projection operators are anti-symmetric in their indices, and the corresponding dressing functions can only be non-zero if the chiral chemical potential is non-zero.
We invert the Dyson equation (<ref>) using (<ref>) and obtain the propagator
\[
\begin{aligned}
{\cal D}\, D^{ij} ={}& \big[(p_0^2-\pi_2)(P^2-\pi_3)-\pi_4^2-\pi_7^2\big] P_1^{ij}
+ \big[(P^2-\pi_1)(P^2-\pi_3)-\pi_6^2-\pi_8^2\big] P_2^{ij} \\
&+ \big[(p_0^2-\pi_2)(P^2-\pi_1)-\pi_5^2-\pi_9^2\big] P_3^{ij}
+ \big[(P^2-\pi_1)\pi_4 + \pi_5\pi_6 + \pi_8\pi_9\big] P_4^{ij} \\
&+ \big[(P^2-\pi_3)\pi_5 + \pi_4\pi_6 + \pi_7\pi_8\big] P_5^{ij}
+ \big[(p_0^2-\pi_2)\pi_6 + \pi_4\pi_5 - \pi_7\pi_9\big] P_6^{ij} \\
&+ i\big[(P^2-\pi_1)\pi_7 + \pi_5\pi_8 - \pi_6\pi_9\big] P_7^{ij}
+ i\big[(p_0^2-\pi_2)\pi_8 + \pi_5\pi_7 + \pi_4\pi_9\big] P_8^{ij} \\
&+ i\big[(P^2-\pi_3)\pi_9 - \pi_6\pi_7 + \pi_4\pi_8\big] P_9^{ij}
\end{aligned}
\]
where we have defined
\[
\begin{aligned}
{\cal D} ={}& (p_0^2-\pi_2)(P^2-\pi_1)(P^2-\pi_3)
- (P^2-\pi_1)(\pi_4^2+\pi_7^2)
- (p_0^2-\pi_2)(\pi_6^2+\pi_8^2) \\
&- (P^2-\pi_3)(\pi_5^2+\pi_9^2)
- 2\big(\pi_4\pi_5\pi_6 - \pi_7\pi_9\pi_6 + \pi_5\pi_7\pi_8 + \pi_4\pi_8\pi_9\big)\,.
\end{aligned}
\]
The anisotropic dispersion relations are the solutions of the dispersion equation, ${\cal D}=0$.
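As a consistency check of this expression (our own numerical sketch, with randomly chosen dressing functions; it is not the production code used for the figures), one can verify that the determinant of the inverse propagator $P^2\delta^{ij}+p^2\hat p^i\hat p^j-\Pi^{ij}$ reproduces the closed form ${\cal D}$ above.

import numpy as np

def basis(phat, n1, n3):
    # ortho-normal triad (phat, n_f, m_F) defined above
    nf = n3 - np.dot(n3, phat) * phat
    nf = nf / np.linalg.norm(nf)
    mf = n1 - np.dot(n1, phat) * phat
    mF = mf - np.dot(nf, mf) * nf
    mF = mF / np.linalg.norm(mF)
    return phat, nf, mF

def projectors(phat, nf, mF):
    o = np.outer
    return [o(mF, mF), o(phat, phat), o(nf, nf),
            o(phat, nf) + o(nf, phat), o(phat, mF) + o(mF, phat), o(nf, mF) + o(mF, nf),
            o(nf, phat) - o(phat, nf), o(nf, mF) - o(mF, nf), o(phat, mF) - o(mF, phat)]

def calD(p0, p, pi):
    # closed-form determinant of the dispersion equation
    P2 = p0**2 - p**2
    a, b, c = P2 - pi[0], p0**2 - pi[1], P2 - pi[2]
    return (a * b * c - a * (pi[3]**2 + pi[6]**2) - b * (pi[5]**2 + pi[7]**2)
            - c * (pi[4]**2 + pi[8]**2)
            - 2 * (pi[3]*pi[4]*pi[5] - pi[6]*pi[8]*pi[5] + pi[4]*pi[6]*pi[7] + pi[3]*pi[7]*pi[8]))

rng = np.random.default_rng(1)
p0, p = 1.3, 0.7
phat = np.array([0.3, 0.5, 0.8]); phat = phat / np.linalg.norm(phat)
phat, nf, mF = basis(phat, np.array([1.0, 0, 0]), np.array([0, 0, 1.0]))
Ps = projectors(phat, nf, mF)
pi = rng.normal(size=9)
coeffs = list(pi[:6]) + [1j * pi[6], 1j * pi[7], 1j * pi[8]]
Pi = sum(c * P for c, P in zip(coeffs, Ps))
Delta_inv = (p0**2 - p**2) * np.eye(3) + p**2 * np.outer(phat, phat) - Pi
print(np.linalg.det(Delta_inv))   # imaginary part should vanish up to rounding
print(calD(p0, p, pi))            # should agree with the determinant above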
§.§ Coordinate system
To make further progress we must define a coordinate system. The vector along the beam axis can be chosen as $\hat n_3' = (0,0,1)$ and the vector that defines the direction of asymmetry in the transverse plane can then be chosen as
$\hat n_1' = (1,0,0)$. The external momentum vector then has the general form
$\vec p^{\;\prime} = p(\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$.
We note that using these coordinates it is easy to see that the parametrization in equation (<ref>) is equivalent to an expansion in spherical harmonics $Y_{lm}(\theta,\phi)$ with $l\in(0,4)$ and $m\in(-l,l)$.
We can define coordinates that are more convenient for calculational purposes by performing a rotation [12].
We rotate counter-clockwise about the $z$-axis with angle $\theta_z = \pi/2-\phi$ and then counter-clockwise about the $x$-axis with $\theta_x=\theta$.
The result is that the basis vectors become
\[
\begin{aligned}
\hat n_1 &= \big(\sin\phi,\; \cos\theta\cos\phi,\; \sin\theta\cos\phi\big)\,,\\
\hat n_3 &= \big(0,\; -\sin\theta,\; \cos\theta\big)\,,\\
\hat p &= \vec p/p = (0,0,1)\,.
\end{aligned}
\]
Any scalar or pseudo-scalar quantity will give the same result when calculated with either the original or the rotated coordinate system.
This can easily be seen from the fact that the scalar products $(\hat p\cdot\hat n_1)$, $(\hat p\cdot\hat n_3)$ and $(\hat n_1\cdot\hat n_3)$, and the pseudo-scalar $\epsilon^{ijk}\hat n_1^i \hat n_3^j \hat p^k$, are invariant. Likewise the dispersion relations are the same in either coordinate system.
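A quick numerical illustration of the rotation (our own sketch; the angles are arbitrary test values): the rotated momentum becomes $(0,0,1)$ while the scalar and pseudo-scalar invariants are unchanged.

import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

theta, phi = 0.7, 1.1
n1p = np.array([1.0, 0, 0])                 # original basis vectors
n3p = np.array([0, 0, 1.0])
pp = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])

R = Rx(theta) @ Rz(np.pi / 2 - phi)         # counter-clockwise about z, then about x
n1, n3, phat = R @ n1p, R @ n3p, R @ pp

print(phat)                                  # -> (0, 0, 1)
print(np.dot(n1p, pp), np.dot(n1, phat))     # scalar product unchanged
print(np.dot(np.cross(n1p, n3p), pp), np.dot(np.cross(n1, n3), phat))   # pseudo-scalar unchanged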
Another useful thing about the coordinates in equation (<ref>) is that the isotropic limit has a simple form.
It is easy to show using equation (<ref>) that
$\delta^{ij}-\hat p^i\hat p^j = P_1^{ij}+P_3^{ij}$, $\hat p^i\hat p^j = P_2^{ij}$ and $\epsilon^{ijm}p^m = P^{ij}_8$.
In the isotropic limit there are only three different non-zero components in equation (<ref>), which are $\pi_1=\pi_3=\Pi_T$, $\pi_2=\Pi_L$ and $\pi_8 = \Pi_A$.
Using these results one can see that equations (<ref>, <ref>) reduce to (<ref>).
The variables in the angular integrations in equations (<ref>, <ref>) are independent variables, and will be written in component form as
\[
\vec v = \big(\sin\theta'\cos\phi',\; \sin\theta'\sin\phi',\; \cos\theta'\big)\,.
\]
We will use the notation $x=\cos(\theta)$ and $x'=\cos(\theta')$.
§.§ Anisotropy parameters
The anisotropy parameters $\xi_9$ and $\xi_2$ produce uniform stretching or squeezing of the isotropic distribution along the beam axis (in the case of $\xi_9$), and along the vector $\hat n_1$ in the transverse plane (for $\xi_2$).
We have introduced additional anisotropy parameters that can be used to produce distribution functions that are deformed relative to the isotropic distribution in more general ways, and produce dispersion relations with more structure.
As discussed in section <ref>, we consider only distributions that satisfy the condition $f(\vec k) = f(-\vec k)$.
This restricts the form of the terms in equation (<ref>).
Each term is chosen so that $e_1+e_3$ is even, where the exponent of the factor $(\vec n_1\cdot\vec v)$ is denoted $e_1$ and the exponent of the factor $(\vec n_3\cdot\vec v)$ is called $e_3$.
There are two different possibilities: $e_1$ and $e_3$ individually even, or $e_1$ and $e_3$ individually odd.
The elliptically anisotropic distribution considered in [12] depends only on the two parameters $\xi_9$ and $\xi_2$, both of which have $e_1$ and $e_3$ individually even.
Any distribution for which $e_1$ and $e_3$ are individually even for all terms in (<ref>) can be mapped onto an elliptically anisotropic distribution with angularly dependent parameters.
We consider for example
the distribution constructed with the set of anisotropy parameters $\xi_0=1$, $\xi_4=20$, $\xi_9=10$, $\xi_{11}=20$, $\xi_{14}=60$
(distribution 6 in table <ref>).
This distribution is equivalent to an elliptically anisotropic distribution with effective coefficients $\xi_2(\Omega)$ and $\xi_9(\Omega)$
as shown in figure <ref>.
The new parameters $\xi_4$, $\xi_{11}$ and $\xi_{14}$ effectively modify the constant value of $\xi_9$ and generate a non-zero $\xi_2$.
Spherical plots of the coefficients $\xi_2(\Omega)$ (left panel) and $\xi_9(\Omega)$ (right panel) that correspond to the mapping of distribution 6 in table <ref> onto an elliptically anisotropic distribution. The values in the figure are obtained with $\beta=1$ and $\mu=0$.
The parameters $\xi_6$, $\xi_8$ and $\xi_{13}$ are coefficients of terms in equation (<ref>) with $e_1$ and $e_3$ individually odd.
These terms produce a distortion of the isotropic distribution that is qualitatively different.
Some examples of the dispersion relations obtained from distributions with non-zero values of these coefficients are presented in section <ref>.
To illustrate more directly the roles of the different anisotropy parameters we look at plots of $C_\xi \, n\big(k H_\xi(\vec v)\big)$ with $\beta=1$ and $\mu=0$, in the $(p_x,p_y)$ and $(p_x,p_z)$ planes.
In figure <ref> we show the results for several choices of anisotropy parameters.
The first panel shows, for reference, the isotropic distribution in the $(p_x,p_y)$ plane, the figure for the $(p_x,p_z)$ plane is identical. In the second panel we show a prolate distribution in the $(p_x,p_z)$ plane, and one sees clearly that the distribution is stretched along the beam axis.
The third panel shows the squeezing of an oblate distribution along the beam axis.
In the fourth and fifth panels we show an elliptically oblate distribution, which is asymmetric in both planes.
In the sixth panel we show an example of the kind of structure that can be obtained with a specific choice of the extra anisotropy parameters we have introduced.
Contour plots of the distribution for different choices of the anisotropy parameters. In each case the parameters that are not specified are set to zero.
§ DISPERSION RELATIONS
§.§ Analytic structure
The isotropic and anisotropic dressing functions have the same analytic structure, which can be determined from equations (<ref>, <ref>), using that equations (<ref>, <ref>) give $P\cdot V = p(\hat p_0-x')$ where $\hat p_0 = p_0/p$.
When $\hat p_0$ is pure real the epsilon prescription is needed to define the integrands at $\hat p_0=x'$.
Shifting the integration variable $\vec v\to -\vec v$ one shows that $\pi_i^*(\hat p_0) = \pi_i(-\hat p_0)$, where the subscript $i$ indicates any component of the polarization tensor. This means that when $\hat p_0$ is real, the real part of each dressing function is even in $\hat p_0$ and the imaginary part is odd. We also note that all dressing functions are pure real for $\hat p_0>1$, since the imaginary part comes from the discontinuity at $\hat p_0=x'$ and $|x'|<1$.
When $\hat p_0$ is not pure real the epsilon prescription is not needed, and we have $\pi_i^*(\hat p_0) = \pi_i(\hat p^*_0)$. When $p_0$ is pure imaginary $\pi_i(\hat p^*_0) = \pi_i(\hat p_0)$, which can be shown by shifting the integration variable, and therefore the dressing functions are pure real and even in $\hat p_0$.
The information above can be summarized as
\[
\begin{aligned}
&{\rm Re}(\hat p_0)>1 \text{ and } {\rm Im}(\hat p_0)=0 &&\;\to\; \pi_i(\hat p_0) \text{ real and even}\,,\\
&\hat p_0 \text{ imaginary} &&\;\to\; \pi_i(\hat p_0) \text{ real and even}\,,\\
&{\rm Re}(\hat p_0)<1 \text{ and } {\rm Im}(\hat p_0)=0 &&\;\to\; {\rm Re}\big(\pi_i(\hat p_0)\big) \text{ even},\;\; {\rm Im}\big(\pi_i(\hat p_0)\big) \text{ odd}\,.
\end{aligned}
\]
It is easy to see that all components of the polarization tensor satisfy
\[
\begin{aligned}
&\Pi(p_0^*) = \Pi^*(p_0) &&\text{for } p_0 \text{ complex or pure imaginary}\,,\\
&{\rm Re}[\Pi(p_0)] = {\rm Re}[\Pi(-p_0)] \text{ and } {\rm Im}[\Pi(p_0)] = -{\rm Im}[\Pi(-p_0)] &&\text{for } p_0 \text{ pure real}\,.
\end{aligned}
\]
§.§ Method
The integrand for the polarization tensor is given by equations (<ref>, <ref>, <ref>).
The nine scalar components $\pi_1$-$\pi_9$ are obtained by contracting with the appropriate projection operators, which are easily obtained from equations (<ref>, <ref>). These calculations are straightforward but tedious, and they are therefore done with Mathematica.
The resulting integrals are calculated numerically using Gauss-Legendre quadrature.
For real valued $\hat p_0$, the real part of the dressing functions has a single integrable singularity at $x'=\hat p_0$ that is easily handled with a numerical principal part prescription, and the imaginary part can be obtained from the residue of the pole. For imaginary valued $\hat p_0$ there are no difficulties. After the dressing functions are obtained, the dispersion equation is solved with a standard binary search algorithm.
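A one-dimensional sketch of the principal-value prescription (generic; g is a hypothetical smooth numerator, not one of the actual dressing-function integrands): the pole at $x'=\hat p_0$ is subtracted, the regular remainder is integrated with Gauss-Legendre, and the subtracted piece is added back analytically.

import numpy as np

def pv_integral(g, p0hat, n=200):
    # principal value of \int_{-1}^{1} dx' g(x')/(p0hat - x') for real |p0hat| < 1
    xs, ws = np.polynomial.legendre.leggauss(n)
    regular = np.sum(ws * (g(xs) - g(p0hat)) / (p0hat - xs))
    return regular + g(p0hat) * np.log(abs((p0hat + 1) / (p0hat - 1)))

# example with a smooth test numerator g(x') = x'^2 at p0hat = 0.3
print(pv_integral(lambda x: x**2, 0.3))   # exact value: -2(0.3) + 0.09 ln(1.3/0.7) ~ -0.544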
§.§ Real solutions
For any choice of the anisotropy parameters and the chiral chemical potential $\mu_5$ there are three real solutions.
We show some examples of the dispersion relations in figure <ref>.
Real solutions for distributions 1, 3 and 8 (as defined in table <ref>). In the bottom right panel the lighter lines are the solutions with $\hat\mu_5=0$ and the dark lines are $\hat\mu_5=1$. In all graphs the light gray line shows the light cone.
§.§ Imaginary solutions
We are primarily interested in imaginary solutions, because of their potential to influence plasma dynamics.
We will compare the imaginary modes produced by different choices of the anisotropy parameters, and explore their dependence on these parameters as a function of wave vector.
We have previously used the notation $\gamma(\vec p)$ to indicate the dependence of the dispersion relation on the wave vector, but in this section we will use the more explicit notation $\gamma(p,x,\phi)$ where $x=\cos(\theta)$.
For different sets of anisotropy parameters
we compare the maximum value of the imaginary solution,
the position of this maximum,
and the integral of the solution over its domain, which we write $\int\gamma\equiv \int_0^\infty dp \int_{-1}^1 dx\int_0^{2\pi} d\phi \,\gamma(p,x,\phi)$.
We define three variables that characterise the degree of oblateness, the transverse eccentricity, and the azimuthal asymmetry of a given distribution:
\[
\begin{aligned}
\chi &= \frac{\langle k_x^2\rangle+\langle k_y^2\rangle}{2\langle k_z^2\rangle}-1\,,\\
\varepsilon &= \frac{\langle k_y^2\rangle-\langle k_x^2\rangle}{\langle k_y^2\rangle+\langle k_x^2\rangle}\,,\\
v_2 &= \frac{\int dk\,k^2\int d\phi\,\cos(2\phi)\, n\big(k H_\xi(\vec v)\big)\big|_{x=0}}{\int dk\,k^2\int d\phi\, n\big(k H_\xi(\vec v)\big)\big|_{x=0}}\,.
\end{aligned}
\]
The angle brackets denote averaging over the momentum phase space, for example $\langle k_x^2\rangle = \int d^3 k k_x^2 C_\xi n(k H_\xi(\vec v))$.
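Because the radial integrals factor out after the substitution $\tilde k = kH_\xi$, the ratios $\chi$ and $\varepsilon$ only require the angular integrals $\int d\Omega\, v_i^2/H_\xi^5$, and $v_2$ only the $\phi$-integral of $1/H_\xi^3$ at $x=0$; the common radial integrals and the factor $C_\xi$ cancel. A sketch (assuming the H2 helper from the earlier sketch; grid sizes are our own choices):

import numpy as np

def second_moments(xi, nx=200, nphi=200):
    # <k_i^2> up to a common constant: proportional to \int dOmega v_i^2 / H^5
    xs, wx = np.polynomial.legendre.leggauss(nx)
    phis = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
    dphi = 2 * np.pi / nphi
    m = np.zeros(3)
    for x, w in zip(xs, wx):
        s = np.sqrt(1 - x**2)
        for ph in phis:
            v = np.array([s * np.cos(ph), s * np.sin(ph), x])
            m += w * dphi * v**2 / H2(v, xi)**2.5
    return m

def chi_eps_v2(xi, nphi=400):
    kx2, ky2, kz2 = second_moments(xi)
    chi = (kx2 + ky2) / (2 * kz2) - 1
    eps = (ky2 - kx2) / (ky2 + kx2)
    phis = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
    num = den = 0.0
    for ph in phis:                       # v_2: phi-integral at x = cos(theta) = 0
        v = np.array([np.cos(ph), np.sin(ph), 0.0])
        w = 1.0 / H2(v, xi)**1.5
        num += np.cos(2 * ph) * w
        den += w
    return chi, eps, num / den

print(chi_eps_v2({'xi0': 1, 'xi9': 10}))  # distribution 3: expect chi close to 10, eps = v_2 = 0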
In table <ref> we show some of the main features of the largest imaginary solution of the dispersion relation for several different distributions.
In all cases the anisotropy parameters that are not specified are zero.
The first distribution is the isotropic one. The second and third have spheroidal anisotropy, with parameters that make them prolate and oblate, respectively. The fourth is a distribution with spheroidal anisotropy that is strongly oblate.
The fifth is an elliptically anisotropic distribution.
Distributions 6-9 are examples constructed using the additional anisotropy parameters that we have introduced.
Distributions 6 and 7 involve only terms for which $e_1$ and $e_3$ are even (see the discussion at the beginning of section <ref>).
The second to last column shows that the maximum of the largest mode can be much greater than the result obtained from a distribution with spheroidal anisotropy with a comparable value of the oblateness parameter $\chi$.
In addition, the last column shows that the integral of the solution over its full domain is greatly enhanced, relative to the result obtained from a comparable distribution with spheroidal oblateness.
For example, distribution 7 has approximately the same value of $\chi$ as distribution 3, but $\gamma_{\rm max}$ is about twice as big, and the integral of the solution is almost 9 times larger.
Distributions 8 and 9 are examples for which $e_1$ and $e_3$ are individually odd for some of the terms in (<ref>).
The magnitudes of the modes produced by these distributions are similar to what is obtained from distributions without these odd coefficients, but the angular dependence of the solutions is much more complex. At the end of this section we show several plots to demonstrate this.
Finally, the table shows that the effect of a non-zero chiral chemical potential is always to increase the magnitude and domain of the imaginary solution. The solutions produced with even moderate anisotropy are much larger than those obtained with the same chemical potential in an isotropic system.
| # | $\xi_i$ | $\hat\mu_5$ | $\chi$ | $\varepsilon$ | $v_2$ | $\gamma_{\rm max}(p,x,\phi)$ | $\int\gamma$ |
| 1 | $\xi_0=1$ | 0 | 0 | 0 | 0 | - | - |
|   |           | 0.3 |  |  |  | $4.7\times10^{-3}$ (0.19, -, -) | 0.017 |
| 2 | $(\xi_0,\xi_9)=(11,-10)$ | 0 | -0.91 | 0 | 0 | 0.16 (0.45, 0, -) | 0.59 |
|   |  | 0.3 |  |  |  | 0.23 (0.68, 0, -) | 0.86 |
| 3 | $(\xi_0,\xi_9)=(1,10)$ | 0 | 10 | 0 | 0 | 0.19 (0.63, 1, -) | 0.08 |
|   |  | 0.3 |  |  |  | 0.34 (1.07, 1, -) | 0.24 |
| 4 | $(\xi_0,\xi_9)=(1,500)$ | 0 | 500 | 0 | 0 | 0.39 (1.57, 1, -) | 0.32 |
|   |  | 0.3 |  |  |  | 1.60 (21.0, 1, -) | 6.90 |
| 5 | $(\xi_0,\xi_2,\xi_9)=(1,5,10)$ | 0 | 5.42 | 0.71 | 0.59 | 0.20 (0.58, 1, 0) | 0.17 |
|   |  | 0.3 |  |  |  | 0.29 (0.88, 1, 0) | 0.34 |
| 6 | $\xi_{0,4,9,11,14}=(1,20,10,20,60)$ | 0 | 17.9 | 0.85 | 0.7 | 0.33 (0.78, 1, 1.49) | 0.56 |
|   |  | 0.3 |  |  |  | 0.50 (1.23, 1, 0.06) | 1.15 |
| 7 | $\xi_{0,4,9,11,14}=(1,40,-2,40,55)$ | 0 | 10.1 | 0.91 | 0.85 | 0.32 (0.68, 0.79, 0) | 0.70 |
|   |  | 0.3 |  |  |  | 0.43 (1.06, 0.80, 0) | 1.27 |
| 8 | $\xi_{0,2,4,6,8,9,11,13,14}=(1,8,37,-21,21,8,15,-28,40)$ | 0 | 8.15 | 0.90 | 0.97 | 0.36 (0.88, 0.67, $\pi$) | 0.61 |
|   |  | 0.3 |  |  |  | 0.56 (1.65, 0.63, $\pi$) | 1.27 |
| 9 | $\xi_{0,9,13,14}=(1,1,20,20)$ | 0 | 1.80 | -0.35 | 0 | 0.23 (0.70, 0.85, 0) | 0.16 |
|   |  | 0.3 |  |  |  | 0.31 (0.94, 0.86, 0) | 0.27 |
Different sets of anisotropy parameters, the corresponding values of the parameters $\chi$, $\varepsilon$ and $v_2$ defined in equation (<ref>), the size and location of the largest imaginary solution, and the integral of the solution over its domain. In each case the anisotropy parameters that are not specified are set to zero.
For a given distribution, the number of imaginary solutions is 0 or 1 or 2, depending on the magnitude and direction of the wave vector.
For each mode there is a critical vector $p_{\rm crit}(x,\phi,\xi_i)$ such that the imaginary mode appears for $p$ less than this critical value.
Analytic results for the critical wave vectors can be obtained in the limit of weak anisotropy using a Nyquist analysis. The method is explained in detail in Appendix <ref>, where we show how to identify the angular variables that produce the largest critical wave vectors for a given set of anisotropy parameters.
To understand the structure of the solutions obtained from a given distribution in more detail, we show several figures below.
In figure <ref> we compare the imaginary solutions obtained from the distributions 3 and 6 for a specific choice of angles. The figure illustrates the concept of the critical wave vector.
For example, in the right panel of figure <ref>, the critical wave vectors for modes 1 and 2
are, respectively, 2.33 and 0.62.
It is true in general that when the domain of the solution increases, the maximum value of the solution also increases, as seen in figure <ref>.
The imaginary solutions of the dispersion equation ${\cal D}=0$ obtained from distribution 3 (left panel) with $\hat\mu_5=0$ and $x=1$, and the two imaginary solutions obtained from distribution 6 (right panel) with $\hat\mu_5=0$, $x=1$ and $\phi=\pi/2$.
In figure <ref> we show the largest imaginary mode obtained from distribution 6 for different choices of angles, and two different values of the chiral chemical potential.
The figure shows that increasing the chiral chemical potential increases the domain and size of the solution, but not uniformly as a function of the angular variables.
The largest imaginary solution of the dispersion equation ${\cal D}=0$ using the distribution 6, two different chiral chemical potentials, and several different values of the angular variables.
The dependence of the imaginary modes on the angular variables can be seen most clearly using contour plots.
In the top left panel of figure <ref> we show a contour plot of the magnitude of the largest imaginary solution for distribution 3 with $\hat\mu_5=0$, as a function of $p$ and $x$ (there is no $\phi$ dependence). The solution is non-zero in a small region of the phase space where $x$ is close to 1 and $p$ is small.
In the top right panel of figure <ref> we show the largest imaginary solution for distribution 2 with $\hat\mu_5=0$,
which is also $\phi$ independent. The magnitude of the solution is smaller, but the domain is larger.
The bottom two panels show the $\phi$ dependent solutions obtained from distribution 5.
Contour plots of the largest imaginary solution for several distributions with $\hat\mu_5=0$. The top left and top right panels show the $\phi$ independent solution obtained from distributions 3 and 2, respectively. The bottom panels are solutions obtained from distribution 5.
The distributions 6-9 involve anisotropy parameters that correspond to higher spherical harmonics, and have a much richer structure.
In figures <ref> and <ref> we show some contour plots of the largest imaginary solution obtained from these distributions with $\hat\mu_5=0$, for different choices of $x=\cos(\theta)$ and $\phi$.
A non-zero value of the chirality parameter $\hat\mu_5$ produces solutions that are larger and spread over a larger domain (as shown in table <ref>). To show more clearly the effect of $\hat\mu_5$ we plot in figures <ref> and <ref> the difference of the solutions obtained with $\hat\mu_5=0.3$ and $\hat\mu_5=0$ for distributions 6 and 8. The dispersion relation with $\hat\mu_5=0.3$ is shifted by an amount $\Delta p = 1.25-0.79$ for distribution 6 and $\Delta p = 1.65-0.88$ for distribution 8, which corresponds to the difference in the location of the maximum values (see table <ref>), so that the maxima occur at the same values of the coordinates $(p,x,\phi)$.
Contour plots of the largest imaginary solution from the distributions 6 (left two panels) and 7 (right two panels) with $\hat\mu_5=0$.
Contour plots of the largest imaginary solution from the distributions 9 (left two panels) and 8 (right two panels) with $\hat\mu_5=0$.
The effect of the anisotropy parameters which correspond to terms with $e_1$ and $e_3$ individually odd (see the discussion at the beginning of section <ref>) can be seen by calculating $s_2\equiv \int_0^\infty dp \int^1_0 dx \int^\pi_0 d\phi \, \sin(2\phi) \gamma(p,x,\phi)$.
Any distribution for which $\xi_6=\xi_8=\xi_{13}=0$ will give $s_2=0$, which corresponds physically to symmetry across the reaction plane.
If any of these anisotropy parameters are non-zero, the value of $s_2$ will also be non-zero.
Physically this effect could be produced by a collision at non-zero impact parameter of two ions of different size.
Distributions 6 and 7, for which we should find $s_2=0$, give $s_2 \approx 9\times 10^{-10}$ and $s_2 \approx 7\times 10^{-8}$, which can be taken as a measure of the numerical error in our computation.
Distributions 8 and 9 give, respectively, $s_2=-0.059$ and $s_2=0.068$.
Finally, to show the effect of $\hat\mu_5\ne 0$, we plot in figure <ref> the solutions obtained with $\hat\mu_5$=0.3 using the distributions 6 and 7. Comparison with figure <ref> shows that the chiral chemical potential increases the size and domain of the imaginary solution, but not uniformly.
Contour plots of the largest imaginary solution for the distributions 6 (left two panels) and 7 (right two panels) with $\hat\mu_5=0.3$.
§ CONCLUSIONS
We have studied plasmons in anisotropic plasmas using a hard loop approach within the quasi-particle regime.
We have used more general distribution functions than have been previously considered, with a technique that could be easily generalized
to account for the anisotropies that are relevant in any given physical situation.
The distribution function we have used depends on 9 anisotropy parameters and is normalized so that at zero chemical potential the Debye mass parameter is the same for all distributions.
In an ultra-relativistic plasma at zero chemical potential this parameter is the only dimensionful scale. It is therefore meaningful to compare the solutions obtained from the dispersion equations for different choices of anisotropy parameters.
Imaginary solutions to the dispersion equation are important because they are associated with plasma instabilities, which have the potential to strongly influence plasma dynamics.
The important effect of imaginary collective modes on physical quantities in anisotropic plasmas, like transport coefficients, production rates, and bound state properties, is well documented in the literature.
We have shown that the more general distribution functions we have introduced can significantly increase imaginary solutions of the dispersion equation in terms of both the magnitude of the mode, and the domain of wave vectors over which it exists.
The distribution that is most commonly used in the literature depends on only one anisotropy parameter, which describes the oblateness of a distribution that is squeezed in one direction.
An example is our distribution 3.
In comparison, our distribution 7, which has approximately the same value of the parameter $\chi$ that characterises its oblateness (see table <ref>), produces a largest imaginary mode that is almost twice as big, and the integral of the solution over the momentum phase space is approximately 10 times larger.
We have also shown that the effect of the chiral chemical potential is greatly enhanced by anisotropy. This is seen clearly from the results in table <ref> which show that, relative to the isotropic result, the imaginary modes are much greater when
even very moderate anisotropy is introduced.
This work was supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant program.
§ RETARDED POLARIZATION TENSOR
In this Appendix we calculate the one-loop retarded polarization tensor at finite temperature and chemical potential with the real-time formulation of finite temperature field theory, using the Keldysh representation [51, 52, 53].
The electron propagator is a 2$\times$2 matrix of the form
\[
G = \begin{pmatrix} G_{rr} & G_{ra} \\ G_{ar} & G_{aa} \end{pmatrix}
  = \begin{pmatrix} G_{\rm sym} & G_{\rm ret} \\ G_{\rm adv} & 0 \end{pmatrix}
\]
where the retarded, advanced and symmetric propagators are given by
\[
\begin{aligned}
G_{\rm ret}(p_0,p) &= (\slashed P + m)\, r(p_0,p)\,,\\
G_{\rm adv}(p_0,p) &= (\slashed P + m)\, a(p_0,p)\,,\\
G_{\rm sym}(p_0,p) &= (\slashed P + m)\, f(p_0,p)\,,\\
r(p_0,p) &\equiv \frac{1}{P^2-m^2+i0^+\,{\rm sgn}(p_0)}\,,\\
a(p_0,p) &\equiv \frac{1}{P^2-m^2-i0^+\,{\rm sgn}(p_0)}\,,\\
f(p_0,p) &\equiv -2\pi i\Big[\big(1-2n^+(p_0)\big)\Theta(p_0) + \big(1-2n^-(p_0)\big)\Theta(-p_0)\Big]\,\delta(P^2-m^2)\,,\\
n^+(p_0) &= \frac{1}{e^{\beta(p_0-\mu)}+1} \quad\text{and}\quad n^-(p_0) = \frac{1}{e^{-\beta(p_0-\mu)}+1}\,.
\end{aligned}
\]
The self-energy has the form
\[
\Pi = \begin{pmatrix} \Pi_{rr} & \Pi_{ra} \\ \Pi_{ar} & \Pi_{aa} \end{pmatrix}
    = \begin{pmatrix} 0 & \Pi_{\rm adv} \\ \Pi_{\rm ret} & \Pi_{\rm sym} \end{pmatrix}
\]
and the vertex function is a 2$\times$2$\times$2 tensor which can be written
\[
\Gamma^\mu = -ie\,\gamma^\mu
\begin{pmatrix} \{\Gamma_{rrr},\Gamma_{rra}\} & \{\Gamma_{rar},\Gamma_{raa}\} \\ \{\Gamma_{arr},\Gamma_{ara}\} & \{\Gamma_{aar},\Gamma_{aaa}\} \end{pmatrix}
= -ie\,\gamma^\mu
\begin{pmatrix} \{0,1\} & \{1,0\} \\ \{1,0\} & \{0,1\} \end{pmatrix}.
\]
The contribution to the retarded self-energy from the one-loop diagram is
\[
i\,\Pi_{\rm ret}^{\mu\nu}(p_0,p) = i\,\Pi_{ar}^{\mu\nu}(p_0,p) = \frac{(-ie)^2}{2}\sum_{ii'jj'}\int\frac{d^4k}{(2\pi)^4}\;\gamma^\mu\,\Gamma_{aij}\, G_{jj'}(k)\, G_{i'i}(k-p)\,\gamma^\nu\,\Gamma_{ri'j'}\,.
\]
The sum over Keldysh indices $\{i,i',j,j'\}\in\{r,a\}$ is easily done because $G_{aa}=0$ and a vertex function with an odd number of $a$ indices vanishes. The result is
\[
\Pi_{\rm ret}^{\mu\nu}(p_0,p) = \frac{ie^2}{2}\int\frac{d^4k}{(2\pi)^4}\;
{\rm Tr}\big[\gamma^\mu(\slashed K + m)\,\gamma^\nu(\slashed K - \slashed P + m)\big]
\big[r(K)\, f(K-P) + f(K)\, a(K-P)\big]
\]
where we have used the shorthand notation $r(K)$ to represent $r(k_0,k)$, and similarly for other propagator components. From this point on we will suppress the subscript that indicates the retarded component of the polarization tensor. In addition we consider only the relativistic limit $T\gg m$ and therefore we drop the electron mass in the integrand in equation (<ref>). We also introduce the shorthand notation $dK\equiv d^4k/(2\pi)^4$.
We will consider a chirally asymmetric plasma which is characterised by a difference in the chemical potentials of the right and left handed fermions, which are denoted $\mu_R$ and $\mu_L$. We define
\[
\begin{aligned}
\Pi_R^{\mu\nu}(p_0,p) &= \frac{ie^2}{2}\int dK\;{\rm Tr}\big[(1+\gamma^5)\gamma^\mu \slashed K\,\gamma^\nu(\slashed K-\slashed P)\big]\big[r(K)\, f(K-P) + f(K)\, a(K-P)\big]\Big|_{\mu=\mu_R}\,,\\
\Pi_L^{\mu\nu}(p_0,p) &= \frac{ie^2}{2}\int dK\;{\rm Tr}\big[(1-\gamma^5)\gamma^\mu \slashed K\,\gamma^\nu(\slashed K-\slashed P)\big]\big[r(K)\, f(K-P) + f(K)\, a(K-P)\big]\Big|_{\mu=\mu_L}\,,\\
\Pi^{\mu\nu}(p_0,p) &= \tfrac{1}{2}\big(\Pi_R^{\mu\nu}(p_0,p) + \Pi_L^{\mu\nu}(p_0,p)\big)\,.
\end{aligned}
\]
The integrals in equations (<ref>) can be rewritten by performing a shift of the 4-vector $K \to K+P$ on the term in square brackets that does not have a factor $f(K)$. All terms in the resulting expression have a factor $f(K)$ and, using equation (<ref>), this factor is proportional to $\delta(K^2)$ in the relativistic limit. The integral over $k_0$ can be done using this delta function.
We work in time-like axial gauge and calculate only the spatial components of the polarization tensor. After doing the $k_0$ integral we obtain the results in equations (<ref>, <ref>).
§ NYQUIST ANALYSIS
The dispersion equation that we have solved numerically is ${\cal D}=0$ where ${\cal D}$ is given in equation (<ref>).
When numerical methods are used to find the solutions of an equation, one must input the range over which the search will be performed, and solutions outside this range will be missed.
A Nyquist analysis can be used to determine the number of solutions that exist so that the search range can be extended as needed.
To explain the idea we discuss a generic equation of the form
\[
f(p_0) = 0
\]
and define the function
\[
F(p_0) \equiv \frac{f'(p_0)}{f(p_0)} = \frac{d}{dp_0}\ln f(p_0)\,.
\]
We consider the contour integral
\[
\oint_C \frac{dp_0}{2\pi i}\, F(p_0)
\]
where the contour is a positively (counter-clockwise) oriented closed loop, which is chosen so that $F(p_0)$ is analytic inside the loop except at isolated points. The integral is equal to the sum of the residues. The residue of $F(p_0)$ at a zero of $f(p_0)$ of order $l$ is $l$, and the residue of $F(p_0)$ at a pole of $f(p_0)$ of order $l$ is $-l$. This gives
\[
\oint_C \frac{dp_0}{2\pi i}\, F(p_0) = n_Z - n_P
\]
where $n_Z$ and $n_P$ are the numbers of zeros and poles of $f(p_0)$ inside the contour $C$, taking into account the fact that each zero and pole of order $l$ is counted $l$ times. The purpose of the Nyquist analysis is to determine $n_Z$.
The anisotropic dressing functions are obtained numerically from the integrals in equations (<ref>, <ref>).
Using $P\cdot V = p(\hat p_0 - x')$ (see equations (<ref>, <ref>)) one finds that the dressing functions have a cut along the real axis for $|\hat p_0|<1$, and the dispersion equation therefore has a cut for $|p_0|<p$.
This means that the contour $C$ can be chosen as in Fig. <ref>.
The contour in the plane of complex $\hat p_0$ which is used to compute the number of solutions of the dispersion equations.
The integrals along the lines connecting the circular contour $C_\infty$ to $C_{\rm cut}$ always compensate each other and therefore the contour integral (<ref>) equals
\[
\oint_{C_\infty}\frac{dp_0}{2\pi i}\,F(p_0) + \oint_{C_{\rm cut}}\frac{dp_0}{2\pi i}\,F(p_0) = n_Z - n_P\,.
\]
The contribution from the big circle is calculated by writing $p_0 = |p_0|e^{i \phi}$ and taking $|p_0|\to\infty$ which gives
\[
\oint_{C_\infty}\frac{dp_0}{2\pi i}\,F(p_0) = \lim_{|p_0|\to\infty}\, p_0\, F(p_0) \equiv n_\infty\,.
\]
The integral along the cut can be calculated using that $F(p_0)$ is the logarithmic derivative of $f(p_0)$ which gives
\[
\oint_{C_{\rm cut}}\frac{dp_0}{2\pi i}\,F(p_0) = \frac{1}{2\pi i}\oint_{C_{\rm cut}} dp_0\,\frac{d}{dp_0}\ln f(p_0) = \frac{1}{2\pi i}\Big(\ln f(p_{0\,{\rm end}}) - \ln f(p_{0\,{\rm start}})\Big) \equiv n_W
\]
where $p_{0{\rm start}}$ is the (arbitrarily chosen) starting point of the contour which encloses the cut, and $p_{0{\rm end}}$ is the end point.
The quantity $n_W$ is the number of times that the curve in the plane of complex $f$
travels counter-clockwise around the point $f=0$, and is called a winding number.
Combining the results (<ref>, <ref>), we have that equation (<ref>) gives
\[
n_Z = n_P + n_\infty + n_W\,.
\]
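The winding number can also be obtained numerically by following the phase of $f(p_0)$ along the contour. A generic sketch (f and the discretized closed contour are supplied by the user; the example uses $f(z)=z^2$, which has a double zero inside the unit circle):

import numpy as np

def winding_number(f, contour):
    # n_W = (1/2 pi) x (total change of arg f along a closed, positively oriented contour)
    vals = np.array([f(z) for z in contour])
    phases = np.unwrap(np.angle(vals))     # make the phase continuous along the path
    return (phases[-1] - phases[0]) / (2 * np.pi)

ts = np.linspace(0.0, 2 * np.pi, 2001)
print(round(winding_number(lambda z: z**2, np.exp(1j * ts))))   # -> 2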
A Nyquist analysis of the equation ${\cal D}=0$ can be done analytically in the weakly anisotropic limit.
When $\mu_5=0$ the dispersion equation in the weakly anisotropic limit factors into three separate equations
\[
\begin{aligned}
f^{(1)}(p_0,\vec p) &= P^2-\pi_1(p_0,\vec p)=0\,,\\
f^{(2)}(p_0,\vec p) &= \frac{1}{p_0^2}\big(p_0^2-\pi_2(p_0,\vec p)\big) = 1-\frac{\pi_2(p_0,\vec p)}{p_0^2} = 0\,,\\
f^{(3)}(p_0,\vec p) &= P^2-\pi_3(p_0,\vec p)=0\,.
\end{aligned}
\]
When $\mu_5\ne 0$ equations (<ref>, <ref>) are replaced by the weakly anisotropic versions of equations (<ref>, <ref>) which are
\[
\begin{aligned}
f^{(4)}(p_0,\vec p) &= P^2-\Big(\tfrac{1}{2}\big(\pi_1(p_0,\vec p)+\pi_3(p_0,\vec p)\big) + \pi_8(p_0,\vec p)\Big)=0\,,\\
f^{(5)}(p_0,\vec p) &= P^2-\Big(\tfrac{1}{2}\big(\pi_1(p_0,\vec p)+\pi_3(p_0,\vec p)\big) - \pi_8(p_0,\vec p)\Big)=0\,.
\end{aligned}
\]
In all cases it is easy to see that $n_P=0$.
In equation (<ref>) we have divided out a zero mode, so that $n^{(2)}_\infty = 0$, and for all of the other dispersion equations it is straightforward to see that $n_\infty = 2$.
The non-trivial part of the calculation is the determination of the winding number, which is found by mapping the closed contour $C_{\rm cut}$ in the plane of complex $p_0$ onto a path in the plane of complex $f(p_0)$.
The points $p_{0{\rm start}}$ and $p_{0{\rm end}}$ in equation (<ref>) have the same modulus, but their phases differ by $2\pi$.
The winding number equals the number of times that the curve in the plane of complex $f$
travels counter-clockwise around the point $f=0$ (some detailed examples of these mappings can be found in Ref. [9]).
There are two possibilities, which are shown in figure <ref>.
Mappings of the contour $C$ in the plane of complex $p_0$. In the right panel the corresponding winding number is two, and in the left panel it is zero.
In the right panel, the mapping circles the origin twice, and the winding number is 2, while in the left panel the winding number is zero.
To distinguish between the two possibilities for each dispersion equation, we need to determine the sign of Re[$f(0,p)$].
We discuss first the dispersion equation (<ref>), which is different from the other four.
In the weakly anisotropic limit we have
\[
f^{(2)}(0,\vec p) = 2+\tilde f(\xi_i,x,\phi)
\]
where $\tilde f(\xi_i,x,\phi)$ is a dimensionless function of the angular variables that depends linearly on the anisotropy parameters.
Since we have assumed weak anisotropy we have $f^{(2)}(0,\vec p)$ always positive, which means $n_W=2$, and therefore equation (<ref>) gives $n_Z=2$.
These two solutions are always present, and numerically we find they are always real. They are anisotropic modifications of the HTL longitudinal modes (denoted $\pm\omega_L(p)$).
For all of the remaining dispersion equations the number of solutions is
n_Z=2+n_W .
For equations (<ref>) and (<ref>) the winding number is determined from the sign of the functions
\[
\begin{aligned}
f^{(1)}(0,\vec p) &= -\big(p^2 - p^2_{1{\rm crit}}(\xi_i,x,\phi)\big)\,,\\
f^{(3)}(0,\vec p) &= -\big(p^2 - p^2_{3{\rm crit}}(\xi_i,x,\phi)\big)\,.
\end{aligned}
\]
When the chirality parameter $\mu_5$ is non-zero we consider equations (<ref>, <ref>) instead of equations (<ref>, <ref>).
For weak anisotropy the winding number is determined from the sign of the functions
\[
\begin{aligned}
f^{(4)}(0,\vec p) &= -\big(p + p_{5{\rm crit}}(\xi_i,x,\phi)\big)\,,\\
f^{(5)}(0,\vec p) &= -\big(p - p_{5{\rm crit}}(\xi_i,x,\phi)\big)\,.
\end{aligned}
\]
The results for $p^2_{1{\rm crit}}$, $p^2_{3{\rm crit}}$ and $p_{5{\rm crit}}$ are given in equations (<ref>, <ref>, <ref>).
\[
\begin{aligned}
15\, p_{1{\rm crit}}^2 ={}& \xi_8\, x\sqrt{1-x^2}\,(4x^2-1)\cos^3\phi \\
&+ \big(\xi_{11}(1-2x^2)^2 - 5\xi_2(x^2-1)\big)\cos^2\phi \\
&+ \tfrac{1}{2}\, x\sqrt{1-x^2}\,\cos\phi\,\big({-9}\xi_8\cos(2\phi) + 10\xi_6 + 9\xi_8 + 6\xi_{13} - 8\xi_{13}x^2\big) \\
&- \sin^2\phi\,\big(5\xi_2 - 3\xi_4 + \xi_{11} + 9\xi_4(x^2-1)\cos(2\phi) + 9\xi_4 x^2 - 3\xi_{11}x^2\big) \\
&+ x^2\big(5\xi_9 + 6\xi_{14} - 4\xi_{14}x^2\big) + 2\xi_4\big({-2}x^4 + x^2 + 1\big)\cos^4\phi
\end{aligned}
\]
\[
\begin{aligned}
15\, p_{3{\rm crit}}^2 ={}& -3\xi_4 x^2\sin^2(2\phi) + \xi_{11}(2x^2-1)\sin^2\phi \\
&+ 4\xi_8\, x\sqrt{1-x^2}\,(4x^2-1)\cos^3\phi \\
&+ x\sqrt{1-x^2}\,\cos\phi\,\big({-3}\xi_8\cos(2\phi) + 10\xi_6 + 3\xi_8 + 4\xi_{13}(3-4x^2)\big) \\
&+ 5\xi_9(2x^2-1) \\
&+ \cos^2\phi\,\big(6\xi_4\sin^2\phi + 5\xi_2(1-2x^2) + 2\xi_{11}(8x^4-8x^2+1)\big) \\
&+ 2\xi_4\big({-8}x^4+4x^2+1\big)\cos^4\phi - 2\xi_{14}\big(8x^4-12x^2+3\big)
\end{aligned}
\]
\[
\begin{aligned}
\frac{30}{\hat\mu_5}\, p_{5{\rm crit}} ={}& 30 + 10\xi_2 + 6\xi_4 - 5\xi_9 - 9\xi_{14}
+ 10\sqrt{1-x^2}\, x\,\big(3(\xi_6+\xi_8 x^2) + \xi_{13}(3-4x^2)\big)\cos\phi \\
&- 5(x^2-1)\Big(\xi_4(x^2-1)\cos(4\phi) - 2\xi_8\, x\sqrt{1-x^2}\cos(3\phi) \\
&\qquad\qquad + \big(3\xi_2 + 2\xi_4 + \xi_{11} + 4\xi_4 x^2 - 4\xi_{11}x^2\big)\cos(2\phi)\Big) \\
&- 5x^2\big(3(\xi_2 - 2\xi_9 + \xi_4 x^2) + 4\xi_{14}(2x^2-3)\big) + \xi_{11}\big(20x^4 - 15x^2 + 2\big)\,.
\end{aligned}
\]
In the isotropic case $p^2_{1{\rm crit}}(\xi_i,x,\phi) = p^2_{3{\rm crit}}(\xi_i,x,\phi) = 0$, the right side of equations (<ref>, <ref>) cannot be positive, and therefore the winding number is zero and both dispersion equations have two solutions. Remembering that in the isotropic limit $\pi_1=\pi_3=\Pi_T$, we see that these are the HTL transverse solutions ($\pm\omega_T(p)$).
When non-zero anisotropy parameters are introduced, and when $p$ is small enough, there can be choices of the angles and anisotropy parameters for which $p^2_{1{\rm crit}}(\xi_i,x,\phi)>p^2>0$ and/or $p^2_{3{\rm crit}}(\xi_i,x,\phi)>p^2>0$.
In this case, the winding number is two, which means that extra solutions to equations (<ref>, <ref>) exist.
From equation (<ref>) we have that in the isotropic limit $p_{5{\rm crit}}(\xi_i,x,\phi)=\hat \mu_5$, which means that $n_W^{(4)}=0$ and $n_W^{(5)}=2$ if $p<\hat\mu_5$ and zero otherwise. Thus we have that for $\mu_5\ne 0$ an additional pair of solutions, which turn out to be imaginary, exist even in the isotropic limit (see figure <ref>). The domain of these modes can be larger for certain orientations of the wave vector if the system is anisotropic.
The expressions in equations (<ref>, <ref>, <ref>) are only valid in the weakly anisotropic limit, however, we can use them to obtain information about the orientations of the wave vector $\vec p$ for which large imaginary solutions could be found with specific sets of anisotropy parameters. To obtain more insight, we can plot the coefficient of each anisotropy parameter in equations (<ref> - <ref>). We show the results for three of the coefficients in figure <ref>.
The first panel shows the dependence of the critical wave vectors on $\xi_9$, which is the coefficient of $(n_3\cdot v)^2$ in equation (<ref>). Using our coordinate system this dot product is independent of the azimuthal angle $\phi$,
and therefore we have that the terms in the critical wave vectors that are proportional to $\xi_9$ are also independent of $\phi$.
The second panel shows the dependence of the critical wave vectors on $\xi_2$, which is the coefficient of $(n_1\cdot v)^2$. Both the $\xi_9$ and $\xi_2$ terms have been included in previous calculations. In the third panel we show the dependence of the critical wave vectors on $\xi_6$, which is the coefficient of $(n_1\cdot v)(n_3\cdot v)$, and in the last panel we show the dependence on
$\xi_8$, the coefficient of $(n_1\cdot v)^3(n_3\cdot v)$.
Contour plots of the coefficients of the anisotropy parameters for $p_{1{\rm crit}}^2$ (left), $p_{3{\rm crit}}^2$ (center) and $p_{5{\rm crit}}$ (right). The axes show $x=\cos(\theta)\in(0,1)$ and $\phi\in(0,\pi/2)$.
# Machine Learning-based Analysis of Electronic Properties as Predictors of Anticholinesterase Activity in Chalcone Derivatives

Thiago Buzelli, Bruno Ipaves, Wanda Pereira Almeida, Douglas Soares Galvao, and Pedro Alves da Silva Autreto
###### Abstract
In this study, we investigated the correlation between the electronic
properties of anticholinesterase compounds and their biological activity.
While the methodology of such correlation is well-established and has been
effectively utilized in previous studies, we employed a more sophisticated
approach: machine learning. Initially, we focused on a set of $22$ molecules
sharing a common chalcone skeleton and categorized them into two groups based
on their IC50 indices: active and inactive. Utilizing the ORCA quantum chemistry package, we conducted calculations to determine the geometries and electronic
structures of these molecules. Over a hundred parameters were collected from
these calculations, serving as the foundation for the features used in machine
learning. These parameters included the Mulliken and Lowdin electronic
populations of each atom within the skeleton, molecular orbital energies, and
Mayer’s free valences. Through our analysis, we developed numerous models and
identified several successful candidates for effectively distinguishing
between the two groups. Notably, the most informative descriptor for this
separation relied solely on electronic populations and orbital energies. By
understanding which computationally calculated properties are most relevant to
specific biological activities, we can significantly enhance the efficiency of
drug development processes, saving both time and resources.
Affiliations: Center for Natural and Human Sciences (CCNH), Federal University of ABC (UFABC); Institute of Chemistry, State University of Campinas (UNICAMP); Applied Physics Department and Center for Computational Engineering and Sciences, State University of Campinas (UNICAMP).
## 1 Introduction
Developing novel pharmaceutical compounds is a complex, resource-intensive,
and time-consuming process. While drugs promise to improve life expectancy and
deliver effective disease treatment, they also pose the inherent risk of
adverse reactions and misuse. The development pipeline for new
pharmacological products generally involves multiple sequential stages, such as:
1. identification and discovery of compounds exhibiting therapeutic activity;
2. rigorous in vitro testing to assess their biological properties; 3.
comprehensive in vivo studies to investigate their dynamics in animal models;
4. human clinical trials 1.
Optimizing the selection of new active compounds through computer simulations
offers significant appeal for cost reduction, reagent optimization, and
accelerated product development. Computational approaches, such as docking,
allow the analyses of molecule-ligand interactions to identify suitable
candidates that fit (energetically and geometrically) into protein pocket
binding sites 2. In principle, selecting structures with a good potential for
a desired pharmaceutical activity through interaction energy values is
effectively possible. Other approaches aim to correlate geometric features,
chemical composition, and electronic structure data with biological activity.
The use of computer-generated models has already achieved remarkable success
in correlating biological activity with molecular electronic features obtained
from electronic structure calculations. For instance, the electronic structure
of acids can be directly linked to the biological activity of their
derivatives 3. Additionally, there is a correlation between spectroscopy data
and the biological activity of steroids, highlighting the significance of the
molecules’ skeletal composition 4. Other studies established a correlation
between polycyclic aromatic hydrocarbons (PAHs) electronic structure and their
carcinogenic potential through the so-called universal quantum-molecular
descriptors 5. Recently, Autreto and co-authors successfully established a
correlation between the electronic indices and biological properties of novel
febrifugine derivatives 6. Finally, the use of the electronic theory of cancer
(ETC) revealed significant correlations between the PAHs’ carcinogenicity and
their energy difference values between the highest occupied molecular orbital
(HOMO) and the second-highest occupied molecular orbital (HOMO-1), as well as
with localized charge density within some molecular regions 7, 8.
Recent advances in machine learning (ML) techniques have been successfully
used for complex problems characterized by large volumes of dynamic,
unstructured, and error-containing data. With the exponential growth of
generating data and rapid technological advancements in software and computer
hardware, we entered the era of big data science 9. Data science combines
mathematical principles, computer science, statistical analyses, and
programming techniques to address multifaceted problems. By exploiting the
computational power of modern computers and advanced complex algorithms, ML
can create a large number of models from a single data set. This capability
allows us to compare and combine models, thereby enhancing the quality of the
analyses in extracting and identifying correlations even with sparse data 9.
It is worth emphasizing that the use of ML extends beyond problems involving
large data sets. In fact, it can be instrumental in elucidating relationships
or even unveiling underlying laws governing complex phenomena where scientific
knowledge remains undiscovered or is too intricate to comprehend through
traditional approaches 10.
In this study, we carried out an investigation on a set of $22$ molecules
categorized as anticholinesterases. These compounds exhibit the ability to
inhibit different forms of cholinesterases and have found applications in
diverse domains such as pesticide development, glaucoma treatment, and
myasthenia management 11. Notably, anticholinesterases have received
considerable attention in the context of Alzheimer’s therapy, with three out
of the four drugs currently used for its treatment belonging to the class of
anticholinesterase agents (namely, galantamine, donepezil, and rivastigmine)
11.
In the present work, we have developed a new approach to identifying active
and inactive molecules by integrating ML techniques with electronic structure
parameters. We have obtained significantly enhanced efficiency in determining
the prediction limits of our models through an extensive analysis of hundreds
of thousands of combinations simultaneously. Based on that, we developed an
optimized computational strategy to identify active and/or inactive molecules
with specific biological activity.
## 2 Methodology
### 2.1 Target molecules
The molecules under investigation exhibit a shared structural framework, as
can be seen from Figures 1 and 2. This characteristic is important as it makes
it easier to obtain structural correlations across the group set. The
biological activity of these molecules is presented based on the IC50 index of
each molecule in Table 1.
To determine the IC50 values, the compounds were diluted in phosphate buffer
(pH 7.4) and DMSO at six different concentrations: 200, 150, 100, 50, 25, and
10 micromolar. These diluted solutions were then tested against
acetylcholinesterase. The acetylcholinesterase enzyme used in the study was
obtained from Sigma-Aldrich Co and was sourced from Electrophorus electricus
(electric eel), Type VI S. The assay was conducted using Ellman’s colorimetric
method. It should be noted that the maximum concentration of DMSO used in the
experiments was 1% 12, 11.
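As a rough illustration of how an IC50 value can be extracted from measurements at the six tested concentrations, the sketch below fits a four-parameter Hill (logistic) dose-response curve to percentage-inhibition data. This is not the published analysis protocol: the Hill model, the helper names, and the example inhibition values are illustrative assumptions.

```python
# Hedged sketch: fit a Hill dose-response curve to (concentration, % inhibition)
# data and read off the IC50. Only the concentration grid follows the text;
# the inhibition values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ic50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** n)

conc_uM = np.array([200.0, 150.0, 100.0, 50.0, 25.0, 10.0])   # tested concentrations
inhibition = np.array([92.0, 88.0, 80.0, 65.0, 45.0, 25.0])   # illustrative measurements

popt, _ = curve_fit(hill, conc_uM, inhibition, p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {popt[2]:.1f} uM")
```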
Figure 1: Skeletal structure (chalcone group) of our anticholinesterase group set. Each atom within the skeleton represents a point where multiple electronic parameters are attributed.

Table 1: Categorization of molecules into two distinct groups (active and inactive) according to their IC50 values.

ID | IC50 | Group
---|---|---
1 | 0.039 | Active
2 | 1.471 | Inactive
3 | 1.480 | Inactive
4 | 0.261 | Active
5 | 2.567 | Inactive
6 | 4.813 | Inactive
7 | 0.172 | Inactive
8 | 0.230 | Inactive
9 | 0.170 | Inactive
10 | 0.050 | Active
11 | 0.020 | Active
12 | 0.031 | Active
13 | 0.018 | Active
14 | 0.528 | Inactive
15 | 0.028 | Active
16 | 0.739 | Inactive
17 | 1.023 | Inactive
18 | 2.528 | Inactive
19 | 0.008 | Active
20 | 0.026 | Active
21 | 0.035 | Active
22 | 0.027 | Active
Based on the IC50 values, the group of molecules was divided into two subgroups: active and inactive, as indicated in Figure 2. The IC50 index is the inhibitor concentration required to inhibit 50% of a biological or biochemical process; a low IC50 value therefore indicates a more efficient and powerful inhibitory activity of the molecule. We considered molecules with an IC50 value below 0.1 µM ($1\times 10^{-4}$ mol/m$^{3}$) as belonging to the active subgroup, while those with IC50 values above this threshold belong to the inactive subgroup. These are typical classification thresholds found in the literature.
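A minimal sketch of this threshold rule, using the IC50 values from Table 1, is shown below. Note that molecule 4 (IC50 = 0.261) is listed as Active in Table 1 even though it lies above the 0.1 µM threshold, so the published group labels, not this rule alone, define the final classification.

```python
# Hedged sketch: label molecules by the 0.1 uM IC50 threshold described above.
# IC50 values (in uM) are copied from Table 1.
IC50_UM = {1: 0.039, 2: 1.471, 3: 1.480, 4: 0.261, 5: 2.567, 6: 4.813,
           7: 0.172, 8: 0.230, 9: 0.170, 10: 0.050, 11: 0.020, 12: 0.031,
           13: 0.018, 14: 0.528, 15: 0.028, 16: 0.739, 17: 1.023, 18: 2.528,
           19: 0.008, 20: 0.026, 21: 0.035, 22: 0.027}

THRESHOLD_UM = 0.1
labels = {mol: ("Active" if ic50 < THRESHOLD_UM else "Inactive")
          for mol, ic50 in IC50_UM.items()}
```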
Figure 2: Molecules labeled as active are represented by a red circle on their
right side, while molecules labeled as inactive are represented by a blue one
on their right side.
### 2.2 Calculation of electronic indices
After classifying the molecules into these two distinct subgroups, the
subsequent procedure involved the computation of electronic indices associated
with each element of the molecular skeleton. These indices will serve as our
(classification active/inactive) feature set and form the foundation for
creating correlation models using machine learning algorithms.
The electronic structure of the investigated molecules was obtained using the
Orca computational code 13. We used a semi-empirical approach based on the
Hartree-Fock method 14, 15, 16. The molecular geometries were optimized using
the Quasi-Newton BFGS algorithm, the PM3 semi-empirical method, and a def2-SVP
basis set 17. The SCF convergence criterion in ORCA was set to TightSCF (energy change of $1.0\times 10^{-8}$ au), and the geometry-optimization convergence tolerances were $\text{TolE}=5\times 10^{-6}$, $\text{TolRMSG}=1\times 10^{-4}$, $\text{TolMaxG}=3\times 10^{-4}$, $\text{TolRMSD}=2\times 10^{-3}$, and $\text{TolMaxD}=4\times 10^{-3}$ 13.
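As a sketch of how these settings translate into an input deck, the snippet below assembles an ORCA input string for a PM3 geometry optimization with TightSCF and the quoted geometry tolerances. The keyword spellings follow the text and common ORCA usage but are assumptions that should be checked against the manual of the ORCA version actually used; the def2-SVP keyword quoted above is omitted here because PM3 carries its own parameterized basis.

```python
# Hedged sketch: build an ORCA input for a PM3 geometry optimization with the
# convergence settings quoted in the text. Keyword names are assumptions.
def orca_input(xyz_block: str, charge: int = 0, multiplicity: int = 1) -> str:
    return f"""! PM3 Opt TightSCF
%geom
  TolE    5e-6
  TolRMSG 1e-4
  TolMaxG 3e-4
  TolRMSD 2e-3
  TolMaxD 4e-3
end
* xyz {charge} {multiplicity}
{xyz_block}*
"""
```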
### 2.3 Electronic indices - features
Using the optimized geometries of the anticholinesterase molecules (Figure 2),
we calculated the following PM3 electronic indices for each skeleton atom:
* •
Mulliken and Lowdin orbital charges (s and p).
* •
Mulliken and Lowdin atomic charges.
* •
Orbital Energies values: HOMO-3, HOMO-2, HOMO-1, HOMO, GAP, LUMO, LUMO+1,
LUMO+2, LUMO+3.
* •
Mayer valency.
HOMO, LUMO, and GAP refer to the Highest Occupied Molecular Orbital, the Lowest Unoccupied Molecular Orbital, and the LUMO-HOMO energy difference, respectively.
To identify the best descriptors, we used an algorithm called SISSO (Sure Independence Screening and Sparsifying Operator), which combines two techniques: SIS (Sure Independence Screening) and SO (Sparsifying Operator). The SIS step reduces the dimensionality of the feature space, while the SO step is the sparsifying operator used in the compressed sensing approach 18.
The SISSO input file consists of a worksheet that contains a molecule (labeled
example) on each line, collectively forming the data set. Each column
associated with a labeled example represents a feature, and the collection of
these columns composes the feature set. The SISSO output provides a ranking of
maps indicating the most effective separation between the active and inactive
data sets. This separation metric is determined by a pair of features known as
descriptors. These descriptors assign a point to each labeled example,
resulting in the formation of a two-dimensional graph that depends on the
chosen descriptors.
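To make the SIS + SO idea concrete, the toy sketch below (which is not the SISSO code and ignores its operator-construction machinery) screens features by their correlation with the class label and then searches screened pairs for a two-dimensional map in which the two classes do not overlap along either axis. All names are illustrative.

```python
# Toy sketch of SIS-like screening followed by a 2D descriptor search.
import itertools
import numpy as np

def screen_and_pair(X: np.ndarray, y: np.ndarray, names: list, keep: int = 20):
    """X: (n_samples, n_features) feature matrix; y: +/-1 class labels."""
    # SIS-like screening: rank features by |correlation| with the labels.
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    top = np.argsort(scores)[::-1][:keep]
    # Crude stand-in for the sparsifying step: look for a screened pair whose
    # 2D map separates the classes along at least one axis.
    for i, j in itertools.combinations(top, 2):
        for axis in (i, j):
            pos, neg = X[y > 0, axis], X[y < 0, axis]
            if pos.max() < neg.min() or neg.max() < pos.min():
                return names[i], names[j]
    return None
```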
The set of input data, known as the training set, serves as the basis for the
learning algorithm. Prior to implementing machine learning methods, a subset
of the data set, molecules in our case, is typically reserved for testing the
model. This subset is referred to as the test set. In our study, we have
separated molecules $8$ and $9$ from the inactive group and molecules $10$ and
$11$ from the active group, which constitute our test set. The remaining
molecules form our training set, which is used in the machine learning
algorithm to establish a correlation model between electronic indices and
biological activity.
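A minimal sketch of this split, under the assumption that molecules are simply indexed 1-22 as in Table 1:

```python
# Hedged sketch: hold out molecules 8 and 9 (inactive) and 10 and 11 (active)
# as the test set; the remaining 18 molecules form the training set.
all_ids = list(range(1, 23))            # molecules 1..22, as in Table 1
test_ids = [8, 9, 10, 11]               # held-out test set
train_ids = [m for m in all_ids if m not in test_ids]

assert len(train_ids) == 18 and len(test_ids) == 4
```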
## 3 Results and discussions
We have performed statistical analyses based on the Pearson correlation
coefficient to select the most relevant features (Figure 7 in supplementary
info). We executed a filtering process by excluding parameters with an
absolute value of the Pearson coefficient ($|\rho|$) greater than or equal to
$0.8$. This process resulted in a refined feature set consisting of $35$
attributes. We refer to this curated collection as Pearson’s feature set. Here
are the specific parameters that compose this set:
* •
Orbital energies: HOMO-3, HOMO-1, HOMO, LUMO+1
* •
Mulliken population analysis: (M0, M1, M2, M3, M4, M5, M6, M8, M10, M11, M12, M13, M15), where MX indicates the Mulliken charge on atom X; (MS7, MS9, MS10), where MYX indicates the Mulliken charge in the Y orbital (which can be S or P) of atom X.
* •
Lowdin population analysis: (L2, L4, L6, L7, L8, L10), where LX indicates the Lowdin charge on atom X; (LS0, LS1, LS5, LS7, LS11, LS13, LS14, LS15), where LYX indicates the Lowdin charge on the Y orbital of atom X.
* •
Mayer’s free valence analysis: All attributes pertaining to this analysis were
removed.
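The Pearson filtering step described at the start of this section can be sketched as follows; the implementation details (which feature of a correlated pair is kept, the pandas-based workflow) are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of the Pearson filtering step: drop one feature from every pair
# whose absolute Pearson correlation is >= 0.8.
import pandas as pd

def pearson_filter(features: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    """Remove one feature from each highly correlated pair (|rho| >= threshold)."""
    corr = features.corr(method="pearson").abs()
    to_drop = set()
    cols = corr.columns
    for i, ci in enumerate(cols):
        for cj in cols[i + 1:]:
            if ci not in to_drop and cj not in to_drop and corr.loc[ci, cj] >= threshold:
                to_drop.add(cj)          # keep the first feature, drop the second
    return features.drop(columns=sorted(to_drop))
```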
For each feature set, we generated a large number of models. However, on average, only about 20 of them classified the biological activity of the molecules with 100% accuracy based on the electronic indices.
This correlation allowed us to effectively classify the molecules into two
distinct groups: active and inactive. Consequently, we successfully mitigated
underfitting problems, although the limited data size still posed a risk of
overfitting.
To assess the performance of these models, we carried out a validation
procedure using an independent test set. As a result, we narrowed down the
selection to four models for the Pearson feature set. These models, except for
orbital charges, are consistent with the discussions previously presented in
the literature using other approaches 19. The following parameter models
achieved a perfect 100% accuracy in separating active and inactive molecules:
1. 1.
$\left[\frac{\textnormal{M}10/\textnormal{M}4}{\textnormal{M}1-\textnormal{M}3}\right],\left[\frac{\textnormal{M}10/\textnormal{L}4}{\textnormal{M}1-\textnormal{M}3}\right]$
2. 2.
$\left[\frac{\textnormal{HOMO}_{-1}-\textnormal{HOMO}}{\textnormal{M}1-\textnormal{M}3}\right],\left[\frac{\textnormal{L}10/\textnormal{L}2}{\textnormal{M}1-\textnormal{M}3}\right]$
3. 3.
$\left[\frac{\textnormal{HOMO}_{-1}-\textnormal{HOMO}}{\textnormal{M}1-\textnormal{M}3}\right],\left[\frac{\textnormal{M}10/\textnormal{M}4}{\textnormal{M}1-\textnormal{M}3}\right]$
4. 4.
$\left[\frac{\textnormal{HOMO}_{-1}-\textnormal{HOMO}}{\textnormal{M}1-\textnormal{M}3}\right],\left[\frac{\textnormal{M}10/\textnormal{L}4}{\textnormal{M}1-\textnormal{M}3}\right]$
It is worth noting that among the four models, there is a recurring presence
of factors such as (M1-M3) and ($\Delta H$ = HOMO-1 - HOMO), indicating their
significance in the separation of the molecular groups. It is important to
emphasize that the term $\Delta H$ appears in numerous studies related to
structure-activity correlations.
Among these models, models $1$, $2$, and $4$ incorporate a combination of
Mulliken and Lowdin charges. Despite their shared purpose of describing the
same physical quantities, these two charge schemes have distinct mathematical
foundations. Consequently, we consider only model 3 as an appropriate approach
for correlating biological activity with electronic indices. Taking into
account the aforementioned observations, we can conclude that descriptor 3
yields the most favorable outcome among all the descriptors used in the
machine learning methodology.
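For concreteness, the sketch below evaluates the two components of descriptor 3 for a single molecule from the PM3 quantities named above; the assignment of (HOMO-1 - HOMO)/(M1 - M3) to the horizontal axis follows the later discussion of Figures 4 and 5 and is otherwise an assumption.

```python
# Hedged sketch: evaluate descriptor 3 for one molecule. Inputs are the
# Mulliken charges on skeleton atoms 1, 3, 4 and 10 and the HOMO / HOMO-1
# energies; variable names are illustrative.
def descriptor_3(m1, m3, m4, m10, homo, homo_minus_1):
    """Return the (horizontal, vertical) coordinates in the descriptor-3 map."""
    denom = m1 - m3
    x = (homo_minus_1 - homo) / denom   # (HOMO-1 - HOMO) / (M1 - M3)
    y = (m10 / m4) / denom              # (M10 / M4) / (M1 - M3)
    return x, y
```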
Figure 3 illustrates a two-dimensional map of the descriptors, which rely on
Mulliken’s charge description and the differences in orbital energy values
($\Delta H$). This model is specifically tailored for the training set, with
four molecules chosen for the test set. Notably, there is a well-defined
separation between the active and inactive groups, with no observed
intersections.
Figure 3: (a) A two-dimensional map of machine learning results using
descriptor 3 and the training set, and (b) a selected zoomed part of the map.
The final descriptors are plotted on the axes, and it is evident that the
groups do not overlap in the border region. The system metric indicates a
negative overlap (separation) of -3.20.
Figure 4 presents the ML model, based on the training set, expanded with
externally calculated descriptor values for the test set molecules. These
additional test set data points slightly alter the boundary definitions as
they exhibit a closer alignment with their respective groups. Remarkably, it
is worth emphasizing that the horizontal descriptor alone is capable of
achieving 100% accuracy in classifying the groups into active and inactive
categories.
Figure 4: (a) A two-dimensional map of machine learning results, including an
externally calculated test set, and (b) a selected zoomed region of the map.
The figure demonstrates that the model effectively describes the groups, even
though the molecules only slightly redefine the boundaries.
In Figure 5, we present the map of descriptors obtained through our
reimplementation of machine learning using the combined dataset, consisting of
both the training set and the test set. Notably, the fundamental
characteristics of the model remain unchanged, but we observe a slight change
in the boundary definitions. More specifically, when considering the model
with all entries, the intersection of the groups shifts from -3.20 to -3.05.
This shift indicates that there is no significant overfitting, suggesting that
our model is not excessively reliant on specific data points. Additionally, we
find that the descriptor (HOMO-1-HOMO)/(M1-M3) continues to effectively distinguish between the two distinct groups of molecules: active and
inactive. This is evident from the horizontal division observed in the inset
of Figure 5.
Figure 5: (a) A two-dimensional map of machine learning descriptors, including
the test set molecules (8, 9, 10, and 11), calculated separately, and (b) a
selected zoomed region of the map. The dashed line clearly indicates a group
separation, primarily observed with the horizontal axis descriptor. The
molecules previously discarded are color-highlighted.
The physical interpretation of the factor $\Delta H$ in the context of structure-activity relationships remains somewhat elusive, despite its frequent use in studies involving the IC50 index and other biological quantifiers 19, 20, 21, 22, 5. The HOMO energy is commonly associated with the ionization potential and with nucleophilic regions indicative of enhanced reactivity, which suggests why it plays an important role here. This is further supported by the presence of the charge difference (M1-M3) within the descriptor, which is located precisely in highly reactive regions generally associated with reactive bond sites.
Notably, the ML horizontal descriptor alone can classify the active/inactive molecules with 100% accuracy. This indicates that a simple rule is able to capture the relationship between molecular structure and biological activity. To further validate this observation, we built a Boolean decision tree (Figure 6). By applying the following two rules, we achieved 100% accuracy in the separation of inactive and active groups:
* •
Condition 1: The charge difference between atoms 1 and 3 (M1 and M3) is less
than -0.002.
* •
Condition 2: The energy value of the penultimate occupied molecular orbital
(HOMO-1) is higher than -9.32.
According to condition 1, we were able to successfully differentiate all
molecules except for numbers 10, 19, and 22, which were mistakenly categorized
as inactive. However, applying the second rule, we rectified this
misclassification by reassigning these molecules to the appropriate active
group.
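A sketch of this two-rule tree is given below. The text does not spell out which branch of condition 1 maps to the active group, so the orientation used here (condition 1 flags active, condition 2 rescues the remaining active molecules 10, 19, and 22) is an assumption consistent with the description above.

```python
# Hedged sketch of the Boolean decision tree in Figure 6. Branch orientation
# is an assumption; thresholds are those quoted in the text.
def classify(m1: float, m3: float, homo_minus_1_energy: float) -> str:
    if (m1 - m3) < -0.002:              # condition 1: charge difference M1 - M3
        return "Active"
    if homo_minus_1_energy > -9.32:     # condition 2: HOMO-1 energy
        return "Active"
    return "Inactive"
```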
Figure 6: Simplified Boolean tree diagram. The diagram illustrates that two
conditions are adequate to entirely segregate the two classes of molecules.
## 4 Conclusions
In summary, the difference in charges (M1-M3) observed in both descriptors
indicates the relevance of the reactivity in the aromatic region between atoms
1 and 3 in influencing the biological behavior of these molecules. Moreover,
the horizontal descriptor proves adequate in determining group behavior, while
the difference between (HOMO-1-HOMO) energy values was also identified as a
crucial parameter for our model. Although we cannot assert with absolute
certainty that our model is definitive, owing to limitations in sample size,
the results remain consistent with other studies in the field, further
underscoring the pivotal role of $\Delta H$ in structure-activity
correlations. Our study presents a streamlined methodology for optimizing the
search process for molecules with specific biological activity, exemplified
here for the chalcone skeleton. By using common skeletal structures to design
new molecules (for instance, incorporating different chemical groups), we can,
in principle, predict the activity/inactivity (based on the IC50 index) of
these novel molecules through the parameter features identified by the ML
models. Notably, these predictions rely solely on electronic structure
factors, utilizing the [(HOMO-1 - HOMO)/(M1 - M3)] descriptor calculated via the PM3 method. The methodology proposed here is completely general and can be used for other classes of compounds. Studies along these lines are in
progress.
The authors would like to thank CNPq (Grants 308428/2022-6 and 150595/2023-9).
The authors would also like to thank CCM-UFABC for the computational
resources.
Figure 7 shows the statistical analysis. In this figure, each pixel corresponds
to the Pearson coefficient (CP), which represents the correlation between two
features.
Figure 7: Pearson coefficient (CP). Each pixel represents the correlation
between two features.
## References
* Lombardino and Lowe III 2004 Lombardino, J. G.; Lowe III, J. A. The role of the medicinal chemist in drug discovery—then and now. _Nature Reviews Drug Discovery_ 2004, _3_ , 853–862
* Azam and Abbasi 2013 Azam, S. S.; Abbasi, S. W. Molecular docking studies for the identification of novel melatoninergic inhibitors for acetylserotonin-O-methyltransferase using different docking routines. _Theoretical Biology and Medical Modelling_ 2013, _10_ , 1–16
* Novak and Kovač 2010 Novak, I.; Kovač, B. Electronic structure and biological activity: Barbiturates vs. thiobarbiturates. _Chemical Physics Letters_ 2010, _493_ , 242–244
* Novak and Kovač 1999 Novak, I.; Kovač, B. Electronic structure and biological activity of steroids. _Biophysical Chemistry_ 1999, _78_ , 233–240
* Braga and Galvão 2003 Braga, S. F.; Galvão, D. S. A Structure- Activity Study of Taxol, Taxotere, and Derivatives Using the Electronic Indices Methodology (EIM). _Journal of chemical information and computer sciences_ 2003, _43_ , 699–706
* Autreto and Lavarda 2008 Autreto, P. A. d. S.; Lavarda, F. C. Febrifugine derivative antimalarial activity: quantum mechanical predictors. _Revista do Instituto de Medicina Tropical de São Paulo_ 2008, _50_ , 21–24
* Barone et al. 1996 Barone, P.; Camilo Jr, A.; Galvão, D. Theoretical approach to identify carcinogenic activity of polycyclic aromatic hydrocarbons. _Physical review letters_ 1996, _77_ , 1186
* Braga et al. 1999 Braga, R.; Barone, P.; Galvao, D. Identifying carcinogenic activity of methylated polycyclic aromatic hydrocarbons (PAHs). _Journal of Molecular Structure: THEOCHEM_ 1999, _464_ , 257–266
* Schleder et al. 2019 Schleder, G. R.; Padilha, A. C. M.; Acosta, C. M.; Costa, M.; Fazzio, A. From DFT to machine learning: recent approaches to materials science–a review. _Journal of Physics: Materials_ 2019, _2_ , 032001
* Butler et al. 2018 Butler, K. T.; Davies, D. W.; Cartwright, H.; Isayev, O.; Walsh, A. Machine learning for molecular and materials science. _Nature_ 2018, _559_ , 547–555
* Sakata et al. 2017 Sakata, R. P.; Figueiro, M.; Kawano, D. F.; Almeida, W. P. Effect on acetylcholinesterase and anti-oxidant activity of synthetic chalcones having a good predicted pharmacokinetic profile. _Medicinal Chemistry_ 2017, _13_ , 654–663
* Ellman et al. 1961 Ellman, G. L.; Courtney, K.; Andres, V.; Featherstone, R. M. A new and rapid colorimetric determination of acetylcholinesterase activity. _Biochemical Pharmacology_ 1961, _7_ , 88–95
* Neese 2012 Neese, F. The ORCA program system. _Wiley Interdisciplinary Reviews: Computational Molecular Science_ 2012, _2_ , 73–78
* Fischer 1977 Fischer, C. F. Hartree–Fock method for atoms. A numerical approach. 1977,
* Szabo and Ostlund 2012 Szabo, A.; Ostlund, N. S. _Modern quantum chemistry: introduction to advanced electronic structure theory_ ; Courier Corporation, 2012
* Ramachandran et al. 2008 Ramachandran, K.; Deepa, G.; Namboori, K. _Computational chemistry and molecular modeling: principles and applications_ ; Springer Science & Business Media, 2008
* Schäfer et al. 1992 Schäfer, A.; Horn, H.; Ahlrichs, R. Fully optimized contracted Gaussian basis sets for atoms Li to Kr. _The Journal of Chemical Physics_ 1992, _97_ , 2571–2577
* Ouyang et al. 2018 Ouyang, R.; Curtarolo, S.; Ahmetcik, E.; Scheffler, M.; Ghiringhelli, L. M. SISSO: A compressed-sensing method for identifying the best low-dimensional descriptor in an immensity of offered candidates. _Physical Review Materials_ 2018, _2_ , 083802
* Barone et al. 2000 Barone, P.; Braga, R.; Camilo Jr, A.; Galvão, D. Electronic indices from semi-empirical calculations to identify carcinogenic activity of polycyclic aromatic hydrocarbons. _Journal of Molecular Structure: THEOCHEM_ 2000, _505_ , 55–66
* Braga and Galvão 2004 Braga, S. F.; Galvão, D. S. Benzo [c] quinolizin-3-ones theoretical investigation: SAR analysis and application to nontested compounds. _Journal of chemical information and computer sciences_ 2004, _44_ , 1987–1997
* Coluci et al. 2002 Coluci, V. R.; Vendrame, R.; Braga, R.; Galvao, D. Identifying relevant molecular descriptors related to carcinogenic activity of polycyclic aromatic hydrocarbons (PAHs) using pattern recognition methods. _Journal of chemical information and computer sciences_ 2002, _42_ , 1479–1489
* Troche et al. 2005 Troche, K. S.; Braga, S. F.; Coluci, V. R.; Galvão, D. S. Carcinogenic classification of polycyclic aromatic hydrocarbons through theoretical descriptors. _International journal of quantum chemistry_ 2005, _103_ , 718–730